Dataset schema:

| Column | Type | Range / values |
| --- | --- | --- |
| `pipeline_tag` | stringclasses | 48 values |
| `library_name` | stringclasses | 198 values |
| `text` | stringlengths | 1–900k |
| `metadata` | stringlengths | 2–438k |
| `id` | stringlengths | 5–122 |
| `last_modified` | null | |
| `tags` | listlengths | 1–1.84k |
| `sha` | null | |
| `created_at` | stringlengths | 25–25 |
| `arxiv` | listlengths | 0–201 |
| `languages` | listlengths | 0–1.83k |
| `tags_str` | stringlengths | 17–9.34k |
| `text_str` | stringlengths | 0–389k |
| `text_lists` | listlengths | 0–722 |
| `processed_texts` | listlengths | 1–723 |
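The schema above describes one row per model card. A minimal sketch of how such a dataset could be loaded and inspected with the `datasets` library follows; the dataset id `user/spacy-model-cards` is a hypothetical placeholder, not the actual repository path.

```python
from datasets import load_dataset

# Hypothetical dataset id -- substitute the real repository path.
ds = load_dataset("user/spacy-model-cards", split="train")

# Each row mirrors the schema above: pipeline_tag, library_name, text,
# metadata, id, tags, created_at, languages, tags_str, text_str, ...
row = ds[0]
print(row["id"], row["pipeline_tag"], row["library_name"])

# Example: keep only the spaCy token-classification cards.
spacy_rows = ds.filter(
    lambda r: r["library_name"] == "spacy"
    and r["pipeline_tag"] == "token-classification"
)
print(len(spacy_rows))
```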
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Indonesian-GSD | Feature | Description | | --- | --- | | **Name** | `id_udv25_indonesiangsd_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (1325 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `APP`, `ASP`, `ASP+PS2`, `ASP+PS3`, `ASP+T--`, `ASS`, `ASS+PS3`, `B--`, `B--+PS3`, `B--+T--`, `CC-`, `CC-+PS3`, `CC-+T--`, `CD-`, `CD-+PS3`, `CO-`, `CO-+PS3`, `D--`, `D--+PS2`, `D--+PS3`, `D--+T--`, `F--`, `F--+PS1`, `F--+PS2`, `F--+PS3`, `F--+T--`, `G--`, `G--+PS3`, `G--+T--`, `H--`, `H--+T--`, `I--`, `M--`, `M--+PS3`, `M--+T--`, `NOUN`, `NPD`, `NPD+PS2`, `NPD+PS3`, `NSD`, `NSD+PS1`, `NSD+PS2`, `NSD+PS3`, `NSD+T--`, `NSF`, `NSM`, `NSM+PS3`, `NUM`, `O--`, `PP1`, `PP1+T--`, `PP2`, `PP3`, `PP3+T--`, `PROPN`, `PS1`, `PS1+VSA`, `PS1+VSA+T--`, `PS2`, `PS2+VSA`, `PS3`, `PUNCT`, `R--`, `R--+PS1`, `R--+PS2`, `R--+PS3`, `S--`, `S--+PS3`, `T--`, `VERB`, `VPA`, `VSA`, `VSA+PS1`, `VSA+PS2`, `VSA+PS3`, `VSA+T--`, `VSP`, `VSP+PS3`, `VSP+T--`, `W--`, `W--+T--`, `X`, `X--`, `Z--` | | **`morphologizer`** | `POS=PROPN`, `POS=AUX`, `POS=DET\|PronType=Ind`, `Number=Sing\|POS=NOUN`, `POS=PRON\|PronType=Rel`, `Number=Sing\|POS=VERB\|Voice=Pass`, `POS=ADP`, `POS=PUNCT`, `Number=Sing\|POS=PROPN`, `POS=NOUN`, `POS=ADV`, `POS=CCONJ`, `Number=Sing\|POS=VERB\|Voice=Act`, `POS=VERB`, `POS=DET\|PronType=Tot`, `Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=SCONJ`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=DET\|PronType=Dem`, `NumType=Card\|POS=NUM`, `Degree=Pos\|Number=Sing\|POS=NOUN`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `NumType=Card\|POS=DET\|PronType=Ind`, `Degree=Pos\|Number=Sing\|POS=ADP`, `Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Number=Sing\|POS=VERB`, `POS=PRON\|PronType=Int`, `Number=Sing\|POS=ADV\|Voice=Act`, `Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Voice=Act`, `Number=Sing\|POS=ADP\|Voice=Act`, `POS=ADJ`, `Number[psor]=Sing\|POS=ADP\|Person[psor]=3`, `Degree=Pos\|Number=Sing\|POS=DET`, `Degree=Pos\|Number=Sing\|POS=VERB`, `POS=PRON\|PronType=Dem`, `POS=PART\|Polarity=Neg`, `Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Number=Sing\|POS=PRON\|Person=1\|Polite=Form\|PronType=Prs`, `Number=Sing\|POS=ADJ`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=SYM`, `POS=ADV\|PronType=Int`, `Clusivity=In\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Sing\|POS=ADJ\|Voice=Act`, `Degree=Pos\|Number=Sing\|POS=PROPN`, `Degree=Pos\|Number=Sing\|POS=ADV`, `Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3\|Voice=Pass`, `Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3\|Voice=Act`, `Number=Sing\|POS=PROPN\|Voice=Act`, `Number=Sing\|POS=NOUN\|Voice=Act`, `POS=DET`, `Number=Sing\|POS=DET\|Voice=Act`, 
`NumType=Card\|POS=PRON\|PronType=Ind`, `Number=Sing\|Number[psor]=Sing\|POS=ADV\|Person[psor]=3`, `Number=Sing\|POS=DET`, `Number=Sing\|POS=ADJ\|Voice=Pass`, `POS=CCONJ\|PronType=Dem`, `Number=Sing\|POS=ADP`, `Number=Sing\|POS=ADV`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Infm\|PronType=Prs`, `Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Number=Sing\|POS=PRON`, `POS=PRON`, `NumType=Card\|POS=ADV\|PronType=Ind`, `NumType=Card\|Number[psor]=Sing\|POS=NUM\|Person[psor]=3`, `Number=Sing\|POS=PRON\|Person=3\|Polite=Form\|PronType=Prs`, `POS=DET\|PronType=Int`, `Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Degree=Pos\|Number=Sing\|POS=SCONJ`, `POS=PRON\|PronType=Ind`, `Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3\|Voice=Pass`, `POS=VERB\|PronType=Ind`, `Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Number=Sing\|POS=SCONJ`, `Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person[psor]=3`, `Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Number=Plur\|POS=NOUN`, `POS=ADV\|PronType=Dem`, `Number=Sing\|POS=VERB\|Person=1\|Voice=Act`, `Degree=Sup\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=ADP\|Voice=Pass`, `Number[psor]=Sing\|POS=PART\|Person[psor]=3`, `Number=Sing\|POS=NOUN\|Voice=Pass`, `Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=CCONJ\|Person[psor]=3`, `POS=PART`, `Number=Sing\|Number[psor]=Sing\|POS=PART\|Person[psor]=3\|Voice=Pass`, `Degree=Sup\|Number=Sing\|POS=ADV`, `Number=Sing\|POS=PRON\|Voice=Act`, `Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3\|Voice=Act`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Number[psor]=Sing\|POS=PRON\|Person[psor]=3\|PronType=Tot`, `Degree=Pos\|Number=Sing\|POS=X`, `POS=PRON\|PronType=Tot`, `Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADV\|Person[psor]=3`, `Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3`, `Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person[psor]=3`, `Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `POS=SCONJ\|PronType=Int`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Voice=Act`, `Number[psor]=Sing\|POS=DET\|Person[psor]=3`, `Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=3`, `Clusivity=Ex\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=VERB\|Voice=Act`, `Number=Sing\|Number[psor]=Sing\|POS=ADV\|Person[psor]=3\|Voice=Act`, `Degree=Pos\|Number=Sing\|POS=NOUN\|Polarity=Neg`, `POS=X`, `Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, `Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=3`, `Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=1\|Polite=Infm\|PronType=Prs`, `Number=Sing\|POS=PROPN\|Voice=Pass`, `POS=ADV\|Polarity=Neg`, `NumType=Card\|Number=Sing\|POS=NUM`, `Number[psor]=Sing\|POS=ADV\|Person[psor]=2`, `Number[psor]=Sing\|POS=ADV\|Person[psor]=3`, `Degree=Sup\|Number=Sing\|POS=PROPN`, `POS=PROPN\|Polarity=Neg`, `Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|Voice=Act`, `Number=Sing\|POS=PROPN\|Person=1\|Voice=Act`, `POS=SCONJ\|PronType=Dem`, `Number=Sing\|Number[psor]=Sing\|POS=ADV\|Person[psor]=2\|Voice=Act`, `Number=Sing\|POS=CCONJ`, `Degree=Sup\|Number=Sing\|POS=VERB`, `Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3`, 
`Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3\|Voice=Act`, `Degree=Pos\|Number=Sing\|POS=PRON`, `Number=Sing\|POS=ADV\|Voice=Pass`, `Number[psor]=Sing\|POS=ADP\|Person[psor]=2`, `Number=Sing\|POS=SYM`, `POS=ADJ\|Polarity=Neg`, `Degree=Pos\|NumType=Card\|Number=Sing\|POS=NUM`, `Number=Sing\|Number[psor]=Sing\|POS=SCONJ\|Person[psor]=3`, `Degree=Pos\|Number=Sing\|POS=CCONJ`, `Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=CCONJ\|Voice=Act`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person[psor]=3\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=VERB\|PronType=Dem`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `Number=Sing\|POS=PART\|Voice=Act`, `Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=3`, `POS=ADP\|PronType=Int`, `Number[psor]=Sing\|POS=VERB\|Person[psor]=3`, `Number[psor]=Sing\|POS=PRON\|Person[psor]=3\|PronType=Rel`, `Degree=Pos\|Number=Sing\|POS=AUX`, `Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=SCONJ\|Voice=Pass`, `Degree=Sup\|Number=Sing\|POS=ADP`, `Number=Sing\|POS=SCONJ\|Voice=Act`, `NumType=Card\|POS=DET\|PronType=Int`, `Degree=Pos\|Number=Sing\|POS=PART\|Polarity=Neg`, `Degree=Sup\|Number=Sing\|POS=SCONJ`, `Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1\|Voice=Act`, `Number=Plur\|POS=ADJ`, `POS=VERB\|PronType=Int`, `Number=Sing\|POS=VERB\|Person=2\|Voice=Act`, `Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=2`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Number[psor]=Sing\|POS=ADV\|Person[psor]=3\|PronType=Tot`, `POS=DET\|PronType=Rel`, `Number=Sing\|POS=NOUN\|Polarity=Neg`, `Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=2`, `NumType=Card\|Number=Sing\|POS=NUM\|Voice=Act`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Number[psor]=Sing\|POS=DET\|Person[psor]=3\|PronType=Tot`, `Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Sing\|POS=VERB\|Person=1`, `Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=3`, `NumType=Card\|Number[psor]=Sing\|POS=DET\|Person[psor]=3\|PronType=Ind`, `POS=ADV\|PronType=Tot`, `Degree=Pos\|Number=Plur\|POS=ADV`, `Number=Plur\|POS=ADV\|Voice=Act`, `POS=CCONJ\|PronType=Int`, `Degree=Pos\|Number=Sing\|POS=PART`, `Number[psor]=Sing\|POS=PRON\|Person[psor]=2`, `Number=Plur\|POS=VERB`, `Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=3\|Voice=Pass`, `Degree=Pos\|Number=Sing\|POS=PUNCT`, `Number[psor]=Sing\|POS=ADP\|Person[psor]=1`, `Degree=Sup\|Number=Sing\|POS=NOUN`, `Number[psor]=Sing\|POS=PART\|Person[psor]=3\|Polarity=Neg`, `Number=Sing\|Number[psor]=Sing\|POS=ADP\|Person[psor]=3\|Voice=Act`, `POS=NOUN\|Polarity=Neg`, `Number[psor]=Sing\|POS=PROPN\|Person[psor]=2`, `Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2\|Voice=Act` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `case`, `cc`, `ccomp`, `compound`, `compound:plur`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `6`, `8`, `10`, `12`, `15`, `19`, `22`, `24`, `26`, `27`, `29`, `30`, `31`, `34`, `36`, `37`, `40`, `42`, `44`, `46`, `48`, `50`, `51`, `53`, `54`, `56`, `58`, `47`, `59`, `62`, `64`, `66`, `68`, `70`, `71`, `3`, `72`, `74`, `75`, `77`, `78`, `79`, `81`, `84`, `86`, `87`, `88`, `89`, 
`92`, `11`, `93`, `95`, `96`, `97`, `98`, `99`, `101`, `103`, `105`, `106`, `107`, `108`, `110`, `111`, `113`, `115`, `116`, `118`, `120`, `122`, `123`, `124`, `125`, `126`, `127`, `128`, `130`, `131`, `132`, `134`, `135`, `137`, `140`, `142`, `143`, `144`, `146`, `147`, `148`, `149`, `150`, `151`, `152`, `153`, `43`, `155`, `157`, `158`, `160`, `161`, `162`, `163`, `164`, `165`, `166`, `167`, `168`, `170`, `171`, `172`, `174`, `175`, `177`, `178`, `179`, `180`, `181`, `182`, `183`, `25`, `184`, `185`, `186`, `187`, `188`, `190`, `192`, `193`, `194`, `196`, `57`, `197`, `198`, `199`, `201`, `203`, `204`, `206`, `207`, `208`, `209`, `210`, `211`, `212`, `213`, `214`, `215`, `217`, `218`, `219`, `220`, `221`, `223`, `225`, `227`, `228`, `230`, `232`, `234`, `236`, `237`, `238`, `240`, `242`, `243`, `244`, `246`, `247`, `248`, `249`, `250`, `251`, `252`, `253`, `254`, `256`, `257`, `258`, `260`, `261`, `262`, `263`, `264`, `266`, `267`, `268`, `269`, `270`, `272`, `41`, `273`, `274`, `275`, `276`, `277`, `278`, `280`, `281`, `282`, `283`, `284`, `285`, `286`, `287`, `288`, `289`, `290`, `291`, `292`, `293`, `294`, `295`, `297`, `298`, `299`, `300`, `301`, `302`, `303`, `304`, `306`, `307`, `308`, `309`, `310`, `312`, `313`, `314`, `317`, `315`, `318`, `320`, `321`, `322`, `323`, `324`, `9`, `325`, `326`, `327`, `329`, `330`, `331`, `332`, `333`, `334`, `336`, `337`, `339`, `341`, `342`, `343`, `345`, `346`, `347`, `348`, `80`, `241`, `349`, `350`, `351`, `353`, `354`, `355`, `356`, `357`, `358`, `359`, `360`, `361`, `363`, `49`, `364`, `365`, `366`, `23`, `367`, `368`, `369`, `370`, `371`, `372`, `373`, `374`, `375`, `376`, `378`, `379`, `380`, `381`, `382`, `383`, `385`, `386`, `387`, `388`, `389`, `390`, `391`, `393`, `394`, `45`, `35`, `395`, `396`, `63`, `397`, `398`, `399`, `400`, `401`, `402`, `403`, `404`, `405`, `406`, `407`, `408`, `409`, `410`, `412`, `413`, `415`, `416`, `417`, `419`, `421`, `422`, `173`, `28`, `424`, `425`, `426`, `427`, `428`, `429`, `430`, `431`, `432`, `434`, `435`, `437`, `439`, `440`, `441`, `442`, `443`, `444`, `445`, `446`, `447`, `448`, `450`, `451`, `453`, `454`, `455`, `457`, `459`, `461`, `463`, `464`, `465`, `466`, `467`, `469`, `470`, `0`, `471`, `472`, `473`, `474`, `475`, `477`, `478`, `479`, `480`, `481`, `482`, `483`, `484`, `485`, `486`, `487`, `489`, `490`, `491`, `493`, `495`, `496`, `497`, `498`, `499`, `500`, `501`, `502`, `503`, `504`, `52`, `506`, `507`, `508`, `509`, `510`, `511`, `512`, `514`, `515`, `516`, `519`, `520`, `67`, `522`, `523`, `525`, `526`, `527`, `528`, `529`, `530`, `531`, `533`, `534`, `535`, `536`, `537`, `538`, `539`, `540`, `541`, `542`, `543`, `544`, `545`, `546`, `548`, `549`, `551`, `553`, `554`, `555`, `556`, `557`, `559`, `560`, `561`, `562`, `563`, `564`, `565`, `566`, `568`, `569`, `570`, `571`, `572`, `573`, `575`, `576`, `577`, `578`, `579`, `513`, `580`, `582`, `583`, `584`, `586`, `587`, `588`, `589`, `591`, `592`, `593`, `594`, `595`, `597`, `599`, `600`, `602`, `607`, `608`, `609`, `610`, `611`, `612`, `613`, `614`, `615`, `616`, `617`, `618`, `619`, `620`, `621`, `623`, `625`, `626`, `627`, `628`, `629`, `630`, `631`, `632`, `633`, `634`, `635`, `636`, `637`, `638`, `639`, `640`, `641`, `642`, `644`, `645`, `646`, `647`, `648`, `649`, `651`, `652`, `653`, `655`, `656`, `657`, `658`, `659`, `660`, `661`, `662`, `664`, `665`, `666`, `667`, `668`, `669`, `670`, `672`, `674`, `675`, `676`, `677`, `169`, `678`, `679`, `680`, `681`, `682`, `683`, `684`, `685`, `686`, `687`, `688`, `689`, `690`, `7`, `691`, 
`692`, `693`, `694`, `695`, `696`, `697`, `698`, `699`, `701`, `702`, `703`, `704`, `705`, `706`, `708`, `709`, `710`, `711`, `712`, `713`, `715`, `717`, `719`, `720`, `721`, `722`, `723`, `724`, `725`, `726`, `727`, `728`, `729`, `730`, `731`, `732`, `733`, `735`, `736`, `737`, `738`, `740`, `741`, `742`, `743`, `744`, `745`, `746`, `747`, `748`, `749`, `750`, `752`, `753`, `754`, `755`, `756`, `757`, `758`, `760`, `761`, `763`, `764`, `765`, `766`, `767`, `768`, `769`, `770`, `771`, `772`, `773`, `774`, `775`, `776`, `65`, `777`, `778`, `779`, `780`, `781`, `782`, `783`, `784`, `785`, `786`, `788`, `790`, `791`, `792`, `793`, `794`, `795`, `796`, `797`, `798`, `799`, `145`, `800`, `801`, `802`, `803`, `804`, `805`, `806`, `807`, `808`, `809`, `810`, `811`, `812`, `813`, `815`, `817`, `818`, `819`, `820`, `821`, `822`, `823`, `824`, `826`, `829`, `830`, `831`, `832`, `833`, `834`, `835`, `836`, `837`, `838`, `839`, `840`, `841`, `843`, `845`, `847`, `849`, `850`, `851`, `852`, `853`, `854`, `855`, `856`, `857`, `858`, `5`, `859`, `860`, `861`, `862`, `863`, `864`, `865`, `866`, `867`, `868`, `869`, `871`, `872`, `873`, `874`, `875`, `876`, `877`, `878`, `879`, `880`, `881`, `882`, `884`, `885`, `887`, `888`, `889`, `891`, `892`, `893`, `894`, `896`, `897`, `898`, `899`, `900`, `901`, `902`, `903`, `904`, `905`, `906`, `907`, `908`, `69`, `909`, `910`, `912`, `913`, `914`, `915`, `916`, `917`, `919`, `920`, `921`, `922`, `923`, `924`, `925`, `926`, `927`, `929`, `229`, `930`, `931`, `932`, `933`, `934`, `935`, `936`, `937`, `938`, `939`, `940`, `941`, `942`, `944`, `945`, `946`, `947`, `948`, `949`, `950`, `951`, `953`, `954`, `955`, `956`, `957`, `958`, `959`, `960`, `962`, `963`, `964`, `965`, `967`, `968`, `969`, `970`, `971`, `972`, `973`, `974`, `976`, `977`, `978`, `979`, `980`, `981`, `982`, `983`, `984`, `986`, `987`, `988`, `990`, `993`, `994`, `995`, `996`, `997`, `998`, `999`, `1000`, `1001`, `1002`, `1003`, `1004`, `1005`, `1006`, `1007`, `1008`, `1009`, `1012`, `1014`, `1015`, `1016`, `1019`, `1020`, `1021`, `1022`, `1023`, `1024`, `1025`, `1026`, `1027`, `1028`, `1029`, `1030`, `1031`, `1032`, `1033`, `1034`, `1035`, `1036`, `1037`, `1038`, `1039`, `1040`, `1041`, `1042`, `1043`, `1044`, `1045`, `1046`, `1047`, `1048`, `1049`, `1051`, `1052`, `1053`, `1054`, `1055`, `1056`, `1057`, `1058`, `1059`, `1060`, `1062`, `1063`, `1064`, `1065`, `1066`, `1068`, `1069`, `1070`, `1072`, `1074`, `1075`, `1076`, `1077`, `1078`, `1079`, `1080`, `1082`, `1083`, `1085`, `1086`, `1087`, `1088`, `1090`, `1091`, `1092`, `1093`, `1094`, `1095`, `1096`, `1097`, `1098`, `1099`, `1100`, `1101`, `673`, `1102`, `1103`, `1104`, `1106`, `1108`, `1109`, `1110`, `1111`, `1115`, `1116`, `1119`, `1120`, `1089`, `418`, `1121`, `1122`, `1123`, `1124`, `1125`, `1126`, `1127`, `1128`, `1129`, `1130`, `1131`, `1132`, `1134`, `1136`, `1137`, `1138`, `1139`, `1140`, `1141`, `1133`, `1142`, `1143`, `1144`, `1145`, `1146`, `1147`, `1148`, `1149`, `1150`, `1151`, `1153`, `1154`, `1156`, `1157`, `1158`, `1159`, `1160`, `1162`, `1164`, `1165`, `377`, `1166`, `1167`, `1168`, `1169`, `1170`, `1171`, `1172`, `1173`, `1174`, `1175`, `1176`, `1177`, `1179`, `1180`, `1181`, `1182`, `191`, `1183`, `1184`, `1185`, `1186`, `1187`, `1188`, `1190`, `1191`, `1192`, `1194`, `1195`, `1196`, `1197`, `1198`, `1199`, `1200`, `1201` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.99 | | `TOKEN_P` | 99.98 | | `TOKEN_R` | 99.99 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 92.98 | | `SENTS_P` | 92.40 | | `SENTS_R` | 
93.56 | | `TAG_ACC` | 94.79 | | `POS_ACC` | 93.17 | | `MORPH_ACC` | 95.90 | | `DEP_UAS` | 86.16 | | `DEP_LAS` | 78.38 | | `LEMMA_ACC` | 98.05 |
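As a usage illustration (not part of the original card): the feature table implies the pipeline loads like any packaged spaCy model once it has been installed from the explosion/id_udv25_indonesiangsd_trf repository, with `spacy-experimental` providing the `experimental_*` components. A minimal sketch, with an example sentence chosen here purely for illustration:

```python
import spacy

# Minimal sketch: assumes the packaged pipeline (and spacy-experimental,
# which supplies the experimental_* components) is already installed
# from the explosion/id_udv25_indonesiangsd_trf repository.
nlp = spacy.load("id_udv25_indonesiangsd_trf")

doc = nlp("Presiden mengunjungi Jakarta pada hari Senin.")
for token in doc:
    # tagger -> tag_, morphologizer -> morph, parser -> dep_/head,
    # experimental_edit_tree_lemmatizer -> lemma_
    print(token.text, token.tag_, token.pos_, token.morph, token.dep_, token.lemma_)
```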
{"language": ["id"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/id_udv25_indonesiangsd_trf
null
[ "spacy", "token-classification", "id", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "id" ]
TAGS #spacy #token-classification #id #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Indonesian-GSD ### Label Scheme View label scheme (1325 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (1325 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #id #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (1325 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Korean-GSD | Feature | Description | | --- | --- | | **Name** | `ko_udv25_koreangsd_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (2415 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `+SW`, `??`, `EC`, `EC+JX`, `ETM`, `IC`, `IC+VCP+ETM`, `IC+VV+EC`, `JC`, `JC+SN`, `JKB`, `JKB+JKG`, `JKB+JX`, `JKC`, `JKG`, `JKO`, `JKQ`, `JKQ+JX`, `JKS`, `JX`, `JX+JKO`, `JX+JX`, `JX+SN+NNB`, `MAG`, `MAG+JKB`, `MAG+JKB+JX`, `MAG+JKS`, `MAG+JX`, `MAG+JX+JX`, `MAG+MM`, `MAG+NNG`, `MAG+NNG+JKB`, `MAG+VA+EF`, `MAG+VCP+EC`, `MAG+VCP+EF`, `MAG+VCP+EP+EC`, `MAG+VCP+ETM`, `MAG+VCP+ETN`, `MAG+VV+EC`, `MAG+VV+EC+NNP+NNP`, `MAG+VV+EC+VX+EP+EC`, `MAG+VV+EF`, `MAG+VV+EP+EC`, `MAG+VV+EP+EF`, `MAG+VV+EP+ETM`, `MAG+VV+ETM`, `MAG+VV+ETN`, `MAG+XSA+EC`, `MAG+XSA+ETM`, `MAG+XSN`, `MAG+XSV+EC`, `MAG+XSV+EC+JKS`, `MAG+XSV+EC+JX`, `MAG+XSV+EC+VX+EC`, `MAG+XSV+EC+VX+EP+EC`, `MAG+XSV+EC+VX+EP+EF`, `MAG+XSV+EC+VX+EP+EP+EC`, `MAG+XSV+EF`, `MAG+XSV+EP+EC`, `MAG+XSV+EP+EF`, `MAG+XSV+EP+ETM`, `MAG+XSV+ETM`, `MAJ`, `MM`, `MM+NNB`, `MM+NNB+JKB`, `MM+NNB+JKG`, `MM+NNB+JX`, `MM+NNB+NNB+JKG`, `MM+NNB+VCP+EC`, `MM+NNB+VCP+ETM`, `MM+NNB+XSN`, `MM+NNB+XSN+JKB`, `MM+NNG`, `MM+NNG+JC`, `MM+NNG+JKB`, `MM+NNG+JKG`, `MM+NNG+JKO`, `MM+NNG+JKS`, `MM+NNG+JX`, `MM+NNG+NNG+JKG`, `MM+NNG+VCP+ETM`, `MM+NNG+XSN+VCP+ETM`, `MM+NNP`, `MM+NNP+JKS`, `MM+NNP+JX+JX`, `MM+SN+NR+NNB+XSN`, `NA`, `NNB`, `NNB+JC`, `NNB+JKB`, `NNB+JKB+JX`, `NNB+JKC`, `NNB+JKG`, `NNB+JKO`, `NNB+JKS`, `NNB+JX`, `NNB+JX+JKB`, `NNB+JX+JKO`, `NNB+JX+JKS`, `NNB+JX+VV+EF`, `NNB+NNB`, `NNB+NNB+JKG`, `NNB+NNB+JX`, `NNB+NNB+NNG+JKG`, `NNB+NNG`, `NNB+NNG+JKB`, `NNB+NNG+JKO`, `NNB+NNG+JX`, `NNB+NNG+XSN`, `NNB+NNP+JKB+JX`, `NNB+NNP+JKB+VCP+EF`, `NNB+NNP+JKG`, `NNB+VCP+EC`, `NNB+VCP+EC+JX`, `NNB+VCP+EF`, `NNB+VCP+EP+EF`, `NNB+VCP+EP+ETM`, `NNB+VCP+EP+ETN`, `NNB+VCP+ETM`, `NNB+VCP+ETM+NNG`, `NNB+VCP+ETM+NNG+JKB`, `NNB+VCP+ETN`, `NNB+XSA+EC`, `NNB+XSA+EP+EC`, `NNB+XSA+EP+EF`, `NNB+XSA+ETM`, `NNB+XSA+ETN`, `NNB+XSN`, `NNB+XSN+JC`, `NNB+XSN+JKB`, `NNB+XSN+JKG`, `NNB+XSN+JKS`, `NNB+XSN+JX`, `NNB+XSN+VCP+EF`, `NNG`, `NNG+EC`, `NNG+EC+EF`, `NNG+EF`, `NNG+JC`, `NNG+JKB`, `NNG+JKB+JC`, `NNG+JKB+JKB`, `NNG+JKB+JKG`, `NNG+JKB+JX`, `NNG+JKB+NNG+NNG+NNG+XSN+SL`, `NNG+JKB+VCP+EC`, `NNG+JKB+VCP+ETM`, `NNG+JKC`, `NNG+JKG`, `NNG+JKO`, `NNG+JKO+VV+EC`, `NNG+JKS`, `NNG+JKS+JX`, `NNG+JKS+VA+EC`, `NNG+JX`, `NNG+JX+JKB`, `NNG+JX+JKG`, `NNG+JX+JKO`, `NNG+JX+JKS`, `NNG+JX+JX`, `NNG+JX+VCP+EC`, `NNG+JX+VCP+EP+EF`, `NNG+JX+VV+EC`, `NNG+JX+VV+ETM`, `NNG+MAG`, `NNG+NA`, `NNG+NNB`, `NNG+NNB+JKB`, `NNG+NNB+JKB+JX`, `NNG+NNB+JKG`, `NNG+NNB+JKO`, `NNG+NNB+JKS`, `NNG+NNB+JX`, `NNG+NNB+NNG`, `NNG+NNB+NNP+JKB`, `NNG+NNB+VCP+EC`, `NNG+NNB+VCP+EF`, `NNG+NNB+VCP+ETM`, `NNG+NNB+VCP+ETM+NNG+JKG`, `NNG+NNG`, `NNG+NNG+ETN+JKB+JX`, 
`NNG+NNG+JC`, `NNG+NNG+JKB`, `NNG+NNG+JKB+JKG`, `NNG+NNG+JKB+JX`, `NNG+NNG+JKC`, `NNG+NNG+JKG`, `NNG+NNG+JKO`, `NNG+NNG+JKS`, `NNG+NNG+JX`, `NNG+NNG+JX+JKS`, `NNG+NNG+JX+JX`, `NNG+NNG+JX+JX+VV+EC`, `NNG+NNG+JX+NNG`, `NNG+NNG+MAG`, `NNG+NNG+NNB`, `NNG+NNG+NNB+JKB`, `NNG+NNG+NNB+JKO`, `NNG+NNG+NNB+JKS`, `NNG+NNG+NNB+JX`, `NNG+NNG+NNB+VCP+EC`, `NNG+NNG+NNB+VCP+ETM`, `NNG+NNG+NNG`, `NNG+NNG+NNG+JC`, `NNG+NNG+NNG+JKB`, `NNG+NNG+NNG+JKB+JKG`, `NNG+NNG+NNG+JKB+JX`, `NNG+NNG+NNG+JKG`, `NNG+NNG+NNG+JKO`, `NNG+NNG+NNG+JKS`, `NNG+NNG+NNG+JX`, `NNG+NNG+NNG+NNG`, `NNG+NNG+NNG+NNG+JC`, `NNG+NNG+NNG+NNG+JKB`, `NNG+NNG+NNG+NNG+JKO`, `NNG+NNG+NNG+NNG+JKS`, `NNG+NNG+NNG+NNG+JX`, `NNG+NNG+NNG+NNG+NNG`, `NNG+NNG+NNG+NNG+NNG+JKO`, `NNG+NNG+NNG+NNG+NNG+JKS`, `NNG+NNG+NNG+NNG+NNG+JX`, `NNG+NNG+NNG+NNG+VCP+EF`, `NNG+NNG+NNG+NNG+VCP+ETM`, `NNG+NNG+NNG+VCP+EC`, `NNG+NNG+NNG+VCP+EF`, `NNG+NNG+NNG+VCP+ETM`, `NNG+NNG+NNG+XSN`, `NNG+NNG+NNG+XSN+JKB`, `NNG+NNG+NNG+XSN+NNG+VCP+EC`, `NNG+NNG+NNP`, `NNG+NNG+SN+NNB+JX`, `NNG+NNG+SN+NNG`, `NNG+NNG+VCP+EC`, `NNG+NNG+VCP+EC+JX`, `NNG+NNG+VCP+EF`, `NNG+NNG+VCP+EP+EF`, `NNG+NNG+VCP+EP+ETM`, `NNG+NNG+VCP+ETM`, `NNG+NNG+VCP+ETN`, `NNG+NNG+VCP+ETN+JKB`, `NNG+NNG+VCP+ETN+JX`, `NNG+NNG+VV+ETN`, `NNG+NNG+XSN`, `NNG+NNG+XSN+JC`, `NNG+NNG+XSN+JKB`, `NNG+NNG+XSN+JKB+JX`, `NNG+NNG+XSN+JKG`, `NNG+NNG+XSN+JKO`, `NNG+NNG+XSN+JKS`, `NNG+NNG+XSN+JX`, `NNG+NNG+XSN+NNG+JX`, `NNG+NNG+XSN+VCP+EF`, `NNG+NNG+XSV+EC`, `NNG+NNG+XSV+EF`, `NNG+NNG+XSV+EP+EC`, `NNG+NNG+XSV+EP+EF`, `NNG+NNG+XSV+ETM`, `NNG+NNG+XSV+ETN`, `NNG+NNG+XSV+ETN+JKO`, `NNG+NNP`, `NNG+NNP+JKB`, `NNG+NNP+JKS`, `NNG+NNP+JX`, `NNG+NNP+JX+JKG`, `NNG+NNP+NNG`, `NNG+NNP+NNG+JKB`, `NNG+NNP+NNG+NNG`, `NNG+NNP+NNG+NNG+NNG+JKB+JX`, `NNG+NNP+NNP`, `NNG+SL`, `NNG+SL+JKS`, `NNG+SL+JX`, `NNG+SN`, `NNG+SN+JKB+JX`, `NNG+SN+JKG`, `NNG+SN+JKO`, `NNG+SN+NNB`, `NNG+SN+NNB+JKB`, `NNG+SN+NNG`, `NNG+SN+NNG+JX`, `NNG+SN+NNG+NNG+JKG`, `NNG+SN+SL+JX`, `NNG+VA+EC`, `NNG+VA+EF`, `NNG+VA+ETM`, `NNG+VA+ETN`, `NNG+VCN+EP+EC`, `NNG+VCP+EC`, `NNG+VCP+EC+JKO`, `NNG+VCP+EC+JKS`, `NNG+VCP+EC+JX`, `NNG+VCP+EF`, `NNG+VCP+EP+EC`, `NNG+VCP+EP+EC+JX`, `NNG+VCP+EP+EF`, `NNG+VCP+EP+ETM`, `NNG+VCP+EP+ETN`, `NNG+VCP+ETM`, `NNG+VCP+ETM+NNB`, `NNG+VCP+ETN`, `NNG+VCP+ETN+JKB`, `NNG+VCP+ETN+JKB+JX`, `NNG+VCP+ETN+JKO`, `NNG+VCP+ETN+JKS`, `NNG+VCP+ETN+JX`, `NNG+VV`, `NNG+VV+EC`, `NNG+VV+EC+VCP+EC`, `NNG+VV+EC+VX+EC`, `NNG+VV+EC+VX+ETM`, `NNG+VV+EF`, `NNG+VV+EP+EC`, `NNG+VV+EP+EF`, `NNG+VV+EP+ETM`, `NNG+VV+ETM`, `NNG+VV+ETN`, `NNG+VV+ETN+JKS`, `NNG+VV+ETN+NNG`, `NNG+XPN+NNG`, `NNG+XPN+NNG+JKO`, `NNG+XPN+NNP+JKG`, `NNG+XSA+EC`, `NNG+XSA+EC+VX+EC`, `NNG+XSA+EF`, `NNG+XSA+EP+EF`, `NNG+XSA+ETM`, `NNG+XSA+ETN`, `NNG+XSA+ETN+JC`, `NNG+XSA+ETN+JKO`, `NNG+XSA+ETN+JKS`, `NNG+XSN`, `NNG+XSN+JC`, `NNG+XSN+JKB`, `NNG+XSN+JKB+JKB`, `NNG+XSN+JKB+JKG`, `NNG+XSN+JKB+JX`, `NNG+XSN+JKG`, `NNG+XSN+JKO`, `NNG+XSN+JKS`, `NNG+XSN+JKS+JX`, `NNG+XSN+JX`, `NNG+XSN+JX+JKO`, `NNG+XSN+MAG`, `NNG+XSN+NNG`, `NNG+XSN+NNG+JKG`, `NNG+XSN+NNG+JKO`, `NNG+XSN+NNG+JX`, `NNG+XSN+NNG+NNG+JC`, `NNG+XSN+VCP+EC`, `NNG+XSN+VCP+EF`, `NNG+XSN+VCP+EP+EC`, `NNG+XSN+VCP+EP+ETM`, `NNG+XSN+VCP+EP+ETN`, `NNG+XSN+VCP+ETM`, `NNG+XSN+XSN`, `NNG+XSN+XSN+JC`, `NNG+XSN+XSN+JKB`, `NNG+XSN+XSN+JKG`, `NNG+XSN+XSN+JKO`, `NNG+XSN+XSN+JKS`, `NNG+XSN+XSN+JX`, `NNG+XSN+XSN+VCP+EC`, `NNG+XSN+XSV+EC`, `NNG+XSN+XSV+EF`, `NNG+XSN+XSV+EP+EC`, `NNG+XSN+XSV+EP+EF`, `NNG+XSN+XSV+ETM`, `NNG+XSN+XSV+ETN`, `NNG+XSV+EC`, `NNG+XSV+EC+JKO`, `NNG+XSV+EC+JX`, `NNG+XSV+EC+NP+JKB`, `NNG+XSV+EC+VX+EC`, `NNG+XSV+EC+VX+EF`, `NNG+XSV+EC+VX+EP+EC`, 
`NNG+XSV+EC+VX+EP+EF`, `NNG+XSV+EC+VX+EP+ETM`, `NNG+XSV+EC+VX+ETM`, `NNG+XSV+EC+VX+ETN`, `NNG+XSV+EC+VX+ETN+JKO`, `NNG+XSV+EF`, `NNG+XSV+EP+EC`, `NNG+XSV+EP+EC+JKB`, `NNG+XSV+EP+EC+JX`, `NNG+XSV+EP+EF`, `NNG+XSV+EP+EP+EC`, `NNG+XSV+EP+EP+ETM`, `NNG+XSV+EP+ETM`, `NNG+XSV+EP+ETN`, `NNG+XSV+EP+ETN+JKO`, `NNG+XSV+ETM`, `NNG+XSV+ETM+NNB`, `NNG+XSV+ETM+NNB+XSA+ETM`, `NNG+XSV+ETM+NNG`, `NNG+XSV+ETM+NNG+JX`, `NNG+XSV+ETN`, `NNG+XSV+ETN+JC`, `NNG+XSV+ETN+JKB`, `NNG+XSV+ETN+JKB+JX`, `NNG+XSV+ETN+JKO`, `NNG+XSV+ETN+JKS`, `NNG+XSV+ETN+JX`, `NNP`, `NNP+JC`, `NNP+JKB`, `NNP+JKB+JKG`, `NNP+JKB+JKO`, `NNP+JKB+JX`, `NNP+JKC`, `NNP+JKG`, `NNP+JKG+NNG`, `NNP+JKO`, `NNP+JKS`, `NNP+JX`, `NNP+JX+JKG`, `NNP+NNB`, `NNP+NNB+JC`, `NNP+NNB+JKB`, `NNP+NNB+JKB+JX`, `NNP+NNB+JKG`, `NNP+NNB+JKO`, `NNP+NNB+JKS`, `NNP+NNB+JX`, `NNP+NNB+NNG+NNG+JKB+JX`, `NNP+NNB+XSN`, `NNP+NNB+XSN+JKO`, `NNP+NNG`, `NNP+NNG+JC`, `NNP+NNG+JKB`, `NNP+NNG+JKB+JKG`, `NNP+NNG+JKB+JX`, `NNP+NNG+JKG`, `NNP+NNG+JKO`, `NNP+NNG+JKS`, `NNP+NNG+JX`, `NNP+NNG+JX+JKB`, `NNP+NNG+JX+JX`, `NNP+NNG+NNB`, `NNP+NNG+NNB+JKS`, `NNP+NNG+NNB+NNP+NNG+JKB`, `NNP+NNG+NNG`, `NNP+NNG+NNG+JC`, `NNP+NNG+NNG+JKB`, `NNP+NNG+NNG+JKB+JKG`, `NNP+NNG+NNG+JKB+JX`, `NNP+NNG+NNG+JKG`, `NNP+NNG+NNG+JKO`, `NNP+NNG+NNG+JKS`, `NNP+NNG+NNG+JX`, `NNP+NNG+NNG+MM`, `NNP+NNG+NNG+NNG`, `NNP+NNG+NNG+NNG+JC`, `NNP+NNG+NNG+NNG+JKG`, `NNP+NNG+NNG+NNG+JKO`, `NNP+NNG+NNG+NNG+JKS`, `NNP+NNG+NNG+NNG+NNG`, `NNP+NNG+NNG+NNP`, `NNP+NNG+VCP+EF`, `NNP+NNG+VCP+ETM`, `NNP+NNG+VV+ETN`, `NNP+NNG+XSN`, `NNP+NNG+XSN+JKB`, `NNP+NNG+XSN+JKG`, `NNP+NNG+XSN+JKO`, `NNP+NNG+XSN+JKS`, `NNP+NNG+XSV+ETN+JKS`, `NNP+NNP`, `NNP+NNP+JKB`, `NNP+NNP+JKG`, `NNP+NNP+JKS`, `NNP+NNP+NNB`, `NNP+NNP+NNG`, `NNP+NNP+NNG+NNG+NNG`, `NNP+NNP+NNG+NNP+NNB+JKO`, `NNP+NP`, `NNP+NP+JC`, `NNP+NP+NNB+JKS`, `NNP+SL`, `NNP+SL+JKB`, `NNP+SL+JKO`, `NNP+SL+JKS`, `NNP+SL+JX`, `NNP+SL+NNG+JKB`, `NNP+SN`, `NNP+SN+NNG`, `NNP+VA+ETM`, `NNP+VCP+EC`, `NNP+VCP+EF`, `NNP+VCP+EP+EC`, `NNP+VCP+ETM`, `NNP+VV+ETM`, `NNP+VV+NNP+NNG+NNG+JKG`, `NNP+XSN`, `NNP+XSN+JKB`, `NNP+XSN+JKB+JX`, `NNP+XSN+JKG`, `NNP+XSN+JKO`, `NNP+XSN+JKS`, `NNP+XSN+VCP+EC`, `NNP+XSN+VCP+EF`, `NNP+XSN+XSN+JX`, `NP`, `NP+EF`, `NP+JKB`, `NP+JKB+JX`, `NP+JKB+VCP+EC`, `NP+JKG`, `NP+JKO`, `NP+JKS`, `NP+JX`, `NP+JX+JKG`, `NP+JX+JX`, `NP+JX+VV+ETM`, `NP+NNB`, `NP+NNB+JKG`, `NP+NNB+JKO`, `NP+NNG`, `NP+NNG+JKB`, `NP+NNG+JKG`, `NP+NNG+XSN+JKG`, `NP+NP`, `NP+VA+EC+JX`, `NP+VA+ETM`, `NP+VCP+EC`, `NP+VCP+EC+JKB`, `NP+VCP+EF`, `NP+VCP+EP+EC`, `NP+VCP+ETN`, `NP+VV+EC`, `NP+XSN`, `NP+XSN+JKB`, `NP+XSN+JKC`, `NP+XSN+JKG`, `NP+XSN+JKO`, `NP+XSN+JKS`, `NP+XSN+JX`, `NP+XSN+XSN`, `NP+XSV+EC`, `NR`, `NR+JC`, `NR+JKB`, `NR+JKG`, `NR+JKO`, `NR+JKS`, `NR+JX`, `NR+JX+JKO`, `NR+NNB`, `NR+NNB+JKB`, `NR+NNB+JKB+JX`, `NR+NNB+JKG`, `NR+NNB+JKO`, `NR+NNB+JKS`, `NR+NNB+JX`, `NR+NNB+VCP+EP+EC`, `NR+NNG`, `NR+NNG+JKB`, `NR+NNG+JKB+JX`, `NR+NNG+JKG`, `NR+NR+JC`, `NR+NR+NNG+JKO`, `NR+SN+NNB`, `NR+SN+NNB+VCP+EP+EF`, `NR+VCP+EC`, `NR+VCP+EF`, `NR+VCP+EP+EC`, `NR+VCP+EP+EF`, `NR+VCP+ETM`, `NR+XSN`, `NR+XSN+JX`, `SE`, `SF`, `SH`, `SH+SL+SH`, `SL`, `SL+JC`, `SL+JKB`, `SL+JKB+JKG`, `SL+JKB+JX`, `SL+JKG`, `SL+JKO`, `SL+JKS`, `SL+JKS+JX`, `SL+JX`, `SL+MM+NNB`, `SL+NNB`, `SL+NNB+JKB`, `SL+NNB+JKG`, `SL+NNB+JKS`, `SL+NNB+JX`, `SL+NNG`, `SL+NNG+JC`, `SL+NNG+JKB`, `SL+NNG+JKG`, `SL+NNG+JKO`, `SL+NNG+JKS`, `SL+NNG+JX`, `SL+NNG+NNB+JKB`, `SL+NNG+NNG`, `SL+NNG+NNG+JKB`, `SL+NNG+VCP+EF`, `SL+NNG+XSN+JKS`, `SL+NNP`, `SL+NNP+NNG`, `SL+NNP+NNG+JKG`, `SL+NNP+NNP+JKB`, `SL+SF+SL+JKB`, `SL+SL`, `SL+SL+JC`, `SL+SL+JKG`, `SL+SL+SL`, 
`SL+SL+SL+JKG`, `SL+SL+VCP+ETM`, `SL+SN`, `SL+SN+JC`, `SL+SN+JKB`, `SL+SN+JKO`, `SL+SN+JX`, `SL+SN+NNG`, `SL+SN+SL`, `SL+SN+SN+SL`, `SL+SN+VCP+EC`, `SL+VCP+ETM`, `SL+VV+ETM`, `SL+XSA+EC`, `SL+XSN`, `SL+XSN+JKG`, `SN`, `SN+JC`, `SN+JKB`, `SN+JKG`, `SN+JKO`, `SN+JKS`, `SN+JX`, `SN+NNB`, `SN+NNB+JC`, `SN+NNB+JKB`, `SN+NNB+JKB+JX`, `SN+NNB+JKG`, `SN+NNB+JKO`, `SN+NNB+JKS`, `SN+NNB+JX`, `SN+NNB+JX+JKB`, `SN+NNB+JX+JKO`, `SN+NNB+JX+JX`, `SN+NNB+NNB`, `SN+NNB+NNB+JKB`, `SN+NNB+NNB+JKG`, `SN+NNB+NNG`, `SN+NNB+NNG+JC`, `SN+NNB+NNG+JKB`, `SN+NNB+NNG+JKO`, `SN+NNB+NNG+JKS`, `SN+NNB+NNG+JX`, `SN+NNB+NNG+VCP+EF`, `SN+NNB+SN+JKB`, `SN+NNB+SN+NNB`, `SN+NNB+SN+NNB+JKB`, `SN+NNB+SN+NNB+JKO`, `SN+NNB+SN+NNB+JX`, `SN+NNB+SN+NNB+SN+NNB+SN+NNB`, `SN+NNB+SN+NNB+VCP+EF`, `SN+NNB+SN+NNG+SN+NNG+JKG`, `SN+NNB+SN+NR`, `SN+NNB+VCP+EC`, `SN+NNB+VCP+EF`, `SN+NNB+VCP+EP+EC`, `SN+NNB+VCP+EP+EF`, `SN+NNB+VCP+EP+ETM`, `SN+NNB+VCP+ETM`, `SN+NNB+VCP+ETN+JKB`, `SN+NNB+XSN`, `SN+NNB+XSN+JKB`, `SN+NNB+XSN+JKO`, `SN+NNB+XSN+JKS`, `SN+NNB+XSN+JX+JX`, `SN+NNB+XSN+VCP+EF`, `SN+NNG`, `SN+NNG+JC`, `SN+NNG+JKB`, `SN+NNG+JKB+JX`, `SN+NNG+JKG`, `SN+NNG+JKO`, `SN+NNG+JKS`, `SN+NNG+JX`, `SN+NNG+NNG`, `SN+NNG+NNG+JKB`, `SN+NNG+NNG+JKO`, `SN+NNG+NNG+VCP+ETM`, `SN+NNG+SN+NNG`, `SN+NNG+SN+NNG+JKB`, `SN+NNG+SN+NNG+JKG`, `SN+NNG+VCP+EC`, `SN+NNG+VCP+EF`, `SN+NNG+VCP+ETM`, `SN+NNG+XSN`, `SN+NNG+XSN+JKB`, `SN+NNP+NNB+SN+NNB`, `SN+NR`, `SN+NR+JKB`, `SN+NR+JKS`, `SN+NR+JX`, `SN+NR+NNB`, `SN+NR+NNB+JKB`, `SN+NR+NNB+JKG`, `SN+NR+NNB+JKO`, `SN+NR+NNB+JKS`, `SN+NR+NNB+JX`, `SN+NR+NNB+SN+NR+NNB`, `SN+NR+NNB+VCP+EC`, `SN+NR+NNB+VCP+EF`, `SN+NR+NNB+XSN`, `SN+NR+NNG+JKG`, `SN+NR+NNG+SN+NNB`, `SN+NR+NNG+SN+NNG+VCP+ETM`, `SN+NR+NNG+SN+NR+SN+NNB`, `SN+NR+SN`, `SN+NR+SN+NNB`, `SN+NR+SN+NNB+JKB`, `SN+NR+SN+NNB+JKS`, `SN+NR+SN+NNB+XSA+EC+VV`, `SN+NR+SN+NNG`, `SN+NR+SN+NNG+JKG`, `SN+NR+SN+NNP`, `SN+NR+SN+NR`, `SN+NR+SN+NR+NNB`, `SN+NR+SN+NR+NNB+JKO`, `SN+NR+SN+NR+NNB+JX`, `SN+NR+SN+NR+NNB+VCP+EF`, `SN+NR+SN+NR+SN+NNB+JKB`, `SN+NR+SN+NR+SN+NNB+JKG`, `SN+NR+SN+NR+SN+NNB+JKO`, `SN+NR+SN+NR+SN+NR`, `SN+NR+SN+NR+SN+NR+NNB`, `SN+NR+SN+NR+SN+NR+SN+NNB+JKB`, `SN+NR+SN+NR+SN+NR+SN+NR`, `SN+NR+SN+NR+VCP+EF`, `SN+NR+SN+NR+XSN`, `SN+NR+SN+SL+NNG`, `SN+NR+XSN`, `SN+NR+XSN+NNB+JKO`, `SN+NR+XSN+NNB+JKS`, `SN+SL`, `SN+SL+JKB`, `SN+SL+JKG`, `SN+SL+JKO`, `SN+SL+JKS`, `SN+SL+NNG`, `SN+SL+NNG+JKO`, `SN+SL+NNG+JX`, `SN+SL+SN+JKS`, `SN+SL+VCP+EC`, `SN+SN`, `SN+SN+JKB`, `SN+SN+NNB`, `SN+SN+NNB+JKG`, `SN+SN+NNG`, `SN+SN+NNG+JX`, `SN+SN+SL`, `SN+SN+SL+JKB`, `SN+SN+SN`, `SN+SN+SN+SN`, `SN+VCP+ETM`, `SN+XSN`, `SN+XSN+JKB+JX`, `SN+XSN+JKG`, `SN+XSN+JKO`, `SN+XSN+NNB`, `SN+XSN+NNB+JKB`, `SN+XSN+NNB+JKG`, `SN+XSN+NNB+JKS`, `SN+XSN+NNB+NNB`, `SN+XSN+SL+JKG`, `SN+XSN+XSN+JX`, `SO`, `SP`, `SS`, `SW`, `VA`, `VA+EC`, `VA+EC+EC`, `VA+EC+JKO`, `VA+EC+JKS`, `VA+EC+JX`, `VA+EC+JX+JX`, `VA+EC+VCP+EC`, `VA+EC+VCP+EF`, `VA+EC+VV+ETM`, `VA+EC+VX+EC`, `VA+EC+VX+EF`, `VA+EC+VX+EP+EC`, `VA+EC+VX+EP+EF`, `VA+EC+VX+EP+ETM`, `VA+EC+VX+ETM`, `VA+EC+VX+ETN`, `VA+EF`, `VA+EF+ETM+NNG`, `VA+EP+EC`, `VA+EP+EF`, `VA+EP+ETM`, `VA+EP+ETN`, `VA+ETM`, `VA+ETM+EC`, `VA+ETM+NNB`, `VA+ETM+NNB+JKG`, `VA+ETM+NNB+XSA+ETM`, `VA+ETM+NNG`, `VA+ETM+NNG+JKG`, `VA+ETN`, `VA+ETN+JKB`, `VA+ETN+JKB+JX`, `VA+ETN+JKG`, `VA+ETN+JX`, `VCN+EC`, `VCN+EC+JX`, `VCN+EF`, `VCN+EP+EC`, `VCN+EP+ETM`, `VCN+ETM`, `VCN+ETN`, `VCP+EC`, `VCP+EC+SN`, `VCP+EC+VX+EC`, `VCP+EF`, `VCP+EP+EC`, `VCP+EP+EF`, `VCP+EP+ETM`, `VCP+ETM`, `VV`, `VV+EC`, `VV+EC+EC`, `VV+EC+EP+EC`, `VV+EC+EP+EF`, `VV+EC+ETN`, `VV+EC+JKB`, `VV+EC+JKG`, `VV+EC+JKG+NNG+JKO`, `VV+EC+JKO`, 
`VV+EC+JKS`, `VV+EC+JX`, `VV+EC+JX+JKG`, `VV+EC+JX+MM`, `VV+EC+JX+NNB+JKB`, `VV+EC+SH+JKB`, `VV+EC+VCP+EC`, `VV+EC+VCP+EF`, `VV+EC+VCP+EP`, `VV+EC+VCP+EP+EF`, `VV+EC+VCP+ETM`, `VV+EC+VV+EC`, `VV+EC+VV+EF`, `VV+EC+VV+EP+EC`, `VV+EC+VV+EP+EF`, `VV+EC+VV+ETM`, `VV+EC+VV+ETN`, `VV+EC+VV+ETN+NNB+JKB`, `VV+EC+VX+EC`, `VV+EC+VX+EC+JKG`, `VV+EC+VX+EC+VX+EF`, `VV+EC+VX+EF`, `VV+EC+VX+EP+EC`, `VV+EC+VX+EP+EF`, `VV+EC+VX+EP+EP+EC`, `VV+EC+VX+EP+EP+EF`, `VV+EC+VX+EP+ETM`, `VV+EC+VX+EP+ETN`, `VV+EC+VX+ETM`, `VV+EC+VX+ETM+NNB`, `VV+EC+VX+ETM+NNB+XSA+EC`, `VV+EC+VX+ETN`, `VV+EC+VX+ETN+EC`, `VV+EC+VX+ETN+JKB`, `VV+EC+VX+ETN+JKG`, `VV+EC+VX+ETN+JX`, `VV+EC+XSN`, `VV+EC+XSN+JKS`, `VV+EC+XSN+XSN+JKB`, `VV+EF`, `VV+EP+EC`, `VV+EP+EC+JX`, `VV+EP+EC+VCP+EC`, `VV+EP+EC+VX+EC`, `VV+EP+EC+VX+EP+EF`, `VV+EP+EF`, `VV+EP+EF+EC`, `VV+EP+EP+EC`, `VV+EP+EP+EF`, `VV+EP+EP+EP+EC`, `VV+EP+EP+ETN`, `VV+EP+ETM`, `VV+EP+ETN`, `VV+EP+ETN+JKO`, `VV+ETM`, `VV+ETM+NNB`, `VV+ETM+NNB+JKB`, `VV+ETM+NNB+JKG`, `VV+ETM+NNB+JKS`, `VV+ETM+NNB+JX`, `VV+ETM+NNB+NNG+JKB`, `VV+ETM+NNB+VCP+EC`, `VV+ETM+NNB+XSA+EC`, `VV+ETM+NNB+XSA+EP+EC`, `VV+ETM+NNB+XSA+ETM`, `VV+ETM+NNB+XSN+JKG`, `VV+ETM+NNG`, `VV+ETM+NNG+NNG+NNG`, `VV+ETM+NNG+NNG+NNG+NNG+NNG+NNG`, `VV+ETM+NNP`, `VV+ETM+NNP+JX+JKG`, `VV+ETM+VV+EC`, `VV+ETN`, `VV+ETN+JKB`, `VV+ETN+JKB+JX`, `VV+ETN+JKO`, `VV+ETN+JKS`, `VV+ETN+JX`, `VV+ETN+JX+JX`, `VV+ETN+MAG`, `VV+ETN+VA+ETM`, `VV+ETN+VCP+EF`, `VV+ETN+VCP+EP+EF`, `VV+NNG+JKG`, `VV+NNG+JKO`, `VV+NNP`, `VV+VV+EP+EC`, `VX+EC`, `VX+EC+JKB`, `VX+EC+JKO`, `VX+EC+JX`, `VX+EC+VX+EC`, `VX+EC+VX+EF`, `VX+EC+VX+EP+EC`, `VX+EC+VX+ETM`, `VX+EF`, `VX+EP+EC`, `VX+EP+EF`, `VX+EP+EF+ETM+NNG`, `VX+EP+EP+EC`, `VX+EP+ETM`, `VX+EP+ETN`, `VX+EP+ETN+JKO`, `VX+EP+ETN+NNB+JKB`, `VX+ETM`, `VX+ETN`, `VX+ETN+JKO`, `VX+ETN+JKS`, `VX+ETN+JX`, `XPN`, `XPN+NNC`, `XPN+NNG`, `XPN+NNG+JC`, `XPN+NNG+JKB`, `XPN+NNG+JKB+JX`, `XPN+NNG+JKG`, `XPN+NNG+JKO`, `XPN+NNG+JKS`, `XPN+NNG+JX`, `XPN+NNG+NNB+JX`, `XPN+NNG+NNG`, `XPN+NNG+NNG+JKB`, `XPN+NNG+NNG+JKG`, `XPN+NNG+NNG+JKO`, `XPN+NNG+NNG+JKS`, `XPN+NNG+NNG+JX`, `XPN+NNG+NNG+NNG`, `XPN+NNG+NNG+NNG+JX`, `XPN+NNG+VCP+EC`, `XPN+NNG+VCP+EF`, `XPN+NNG+VCP+EP+EF`, `XPN+NNG+VCP+ETM`, `XPN+NNG+XSA+EC`, `XPN+NNG+XSA+ETM`, `XPN+NNG+XSN`, `XPN+NNG+XSN+JKB`, `XPN+NNG+XSN+JKB+JX`, `XPN+NNG+XSN+JKG`, `XPN+NNG+XSN+JKO`, `XPN+NNG+XSN+VCP+ETM`, `XPN+NNG+XSN+XSN+VCP+ETM`, `XPN+NNG+XSV+EC`, `XPN+NNG+XSV+EC+JX`, `XPN+NNG+XSV+EF`, `XPN+NNG+XSV+EP+EC`, `XPN+NNG+XSV+EP+EF`, `XPN+NNG+XSV+ETM`, `XPN+NNG+XSV+ETN+JX+JKG`, `XPN+NNP`, `XPN+NNP+JC`, `XPN+NNP+JKG`, `XPN+NNP+JX`, `XPN+NNP+NNG+JKS`, `XPN+NNP+VCP+EC`, `XPN+NNP+XSN`, `XPN+NNP+XSN+JKB`, `XPN+SN`, `XPN+SN+JKG`, `XPN+SN+NNB`, `XPN+SN+NNB+JKO`, `XPN+SN+NNB+VCP+ETM`, `XPN+SN+NNG`, `XPN+SN+NNG+JC`, `XPN+SN+NNG+JKB`, `XPN+SN+NNG+JKB+JKG`, `XPN+SN+NNG+JKG`, `XPN+SN+NNG+JKO`, `XPN+SN+NNG+JKS`, `XPN+SN+NNG+JX`, `XPN+SN+NNG+NNG`, `XPN+SN+NNG+NNG+JKB+JKG`, `XPN+SN+NNG+NNG+JKO`, `XPN+SN+NNP+JX`, `XPN+VV+EP+EP+EC`, `XPN+XR+JX`, `XPN+XR+XSA+EC`, `XPN+XR+XSA+EF`, `XPN+XR+XSA+ETM`, `XPN+XR+XSN+JKO`, `XR`, `XR+JKB`, `XR+JKB+JKB`, `XR+JKB+JKO`, `XR+NNG`, `XR+NNG+JKB`, `XR+NNG+JKS`, `XR+NNG+JX`, `XR+NNG+NNG`, `XR+NNG+NNG+JX`, `XR+NNG+NNG+NNG`, `XR+NNG+NNG+NNG+JX`, `XR+NNG+VCP+ETM`, `XR+XSA+EC`, `XR+XSA+EC+JX`, `XR+XSA+EC+NNB+JX`, `XR+XSA+EC+VX+EC`, `XR+XSA+EC+VX+EF`, `XR+XSA+EC+VX+EP+EF`, `XR+XSA+EC+VX+ETM`, `XR+XSA+EF`, `XR+XSA+EP+EC`, `XR+XSA+EP+EC+JX`, `XR+XSA+EP+EF`, `XR+XSA+EP+ETM`, `XR+XSA+ETM`, `XR+XSA+ETN`, `XR+XSA+ETN+JC`, `XR+XSA+ETN+JKB`, `XR+XSA+ETN+JKO`, `XR+XSA+ETN+JKS`, `XR+XSA+ETN+JX`, `XR+XSN`, 
`XR+XSN+JC`, `XR+XSN+JKB`, `XR+XSN+JKO`, `XR+XSN+JKS`, `XR+XSN+JX`, `XR+XSN+VCP+EC`, `XR+XSN+VCP+ETM`, `XR+XSV+EC`, `XR+XSV+ETM`, `XSA+ETM`, `XSN`, `XSN+JKB`, `XSN+JKS`, `XSN+JX`, `XSN+NNB+JKS`, `XSV+EC`, `XSV+ETM` | | **`morphologizer`** | `POS=NOUN`, `POS=ADV`, `POS=VERB`, `POS=PUNCT`, `POS=ADP`, `NumType=Card\|POS=NUM`, `POS=PRON`, `POS=DET`, `POS=ADJ`, `POS=NUM`, `POS=PROPN`, `POS=CCONJ`, `POS=X`, `POS=SYM`, `POS=AUX`, `POS=PART`, `POS=INTJ`, `NumType=Card\|POS=PUNCT`, `NumType=Card\|POS=DET` | | **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `conj`, `cop`, `csubj`, `dep`, `det`, `det:poss`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `punct` | | **`experimental_edit_tree_lemmatizer`** | `2`, `3`, `5`, `6`, `8`, `11`, `15`, `17`, `19`, `20`, `23`, `24`, `27`, `29`, `31`, `33`, `34`, `36`, `39`, `42`, `45`, `47`, `49`, `50`, `52`, `53`, `55`, `57`, `59`, `61`, `64`, `66`, `69`, `71`, `74`, `77`, `79`, `80`, `81`, `83`, `85`, `88`, `90`, `91`, `94`, `96`, `98`, `100`, `104`, `105`, `108`, `111`, `113`, `116`, `118`, `119`, `120`, `122`, `124`, `127`, `131`, `132`, `134`, `136`, `138`, `139`, `142`, `145`, `147`, `149`, `151`, `155`, `157`, `163`, `166`, `169`, `171`, `174`, `177`, `180`, `183`, `68`, `184`, `185`, `187`, `188`, `190`, `193`, `195`, `199`, `201`, `202`, `204`, `206`, `209`, `212`, `215`, `217`, `67`, `220`, `223`, `225`, `228`, `230`, `232`, `234`, `214`, `235`, `237`, `238`, `241`, `243`, `244`, `247`, `250`, `253`, `256`, `258`, `259`, `261`, `263`, `266`, `269`, `271`, `78`, `274`, `277`, `280`, `224`, `282`, `283`, `285`, `286`, `289`, `291`, `295`, `296`, `299`, `302`, `303`, `306`, `308`, `311`, `314`, `317`, `288`, `318`, `320`, `323`, `326`, `328`, `329`, `222`, `331`, `332`, `333`, `335`, `337`, `338`, `339`, `341`, `343`, `344`, `346`, `347`, `350`, `353`, `349`, `357`, `359`, `361`, `363`, `364`, `365`, `367`, `369`, `370`, `373`, `375`, `377`, `380`, `381`, `384`, `386`, `389`, `390`, `393`, `394`, `117`, `397`, `398`, `399`, `402`, `404`, `405`, `406`, `410`, `412`, `415`, `417`, `419`, `420`, `422`, `425`, `428`, `431`, `432`, `433`, `436`, `438`, `440`, `442`, `445`, `447`, `448`, `449`, `452`, `454`, `457`, `461`, `463`, `466`, `469`, `471`, `474`, `476`, `479`, `482`, `485`, `486`, `487`, `490`, `491`, `493`, `156`, `496`, `497`, `499`, `501`, `503`, `504`, `507`, `509`, `512`, `515`, `516`, `519`, `522`, `524`, `525`, `529`, `531`, `533`, `535`, `536`, `538`, `541`, `543`, `545`, `546`, `548`, `549`, `551`, `553`, `556`, `557`, `559`, `560`, `563`, `566`, `570`, `572`, `574`, `575`, `578`, `581`, `582`, `585`, `587`, `588`, `590`, `593`, `594`, `596`, `597`, `600`, `602`, `605`, `606`, `608`, `611`, `613`, `418`, `615`, `617`, `620`, `622`, `625`, `628`, `631`, `632`, `634`, `635`, `638`, `640`, `643`, `87`, `645`, `648`, `649`, `651`, `652`, `654`, `656`, `233`, `658`, `660`, `663`, `666`, `668`, `669`, `670`, `671`, `673`, `675`, `677`, `679`, `681`, `684`, `685`, `688`, `689`, `692`, `695`, `698`, `701`, `705`, `707`, `708`, `709`, `712`, `715`, `716`, `720`, `722`, `724`, `727`, `729`, `732`, `733`, `734`, `735`, `737`, `738`, `437`, `742`, `744`, `745`, `746`, `748`, `750`, `752`, `754`, `755`, `756`, `758`, `514`, `759`, `760`, `762`, `764`, `270`, `766`, `767`, `768`, `770`, `772`, `773`, `776`, `778`, `780`, `782`, `785`, `787`, `791`, `793`, `796`, `797`, `798`, `800`, `801`, `804`, `806`, `807`, `810`, `812`, `814`, `595`, `815`, `817`, `819`, 
`820`, `823`, `825`, `828`, `830`, `832`, `835`, `837`, `839`, `841`, `843`, `845`, `847`, `849`, `852`, `855`, `857`, `858`, `862`, `865`, `866`, `868`, `870`, `872`, `874`, `877`, `879`, `882`, `884`, `886`, `888`, `890`, `891`, `893`, `896`, `899`, `901`, `902`, `903`, `905`, `908`, `911`, `599`, `913`, `915`, `917`, `918`, `921`, `923`, `924`, `925`, `926`, `929`, `931`, `933`, `934`, `935`, `936`, `939`, `942`, `943`, `945`, `946`, `949`, `951`, `953`, `955`, `958`, `960`, `963`, `964`, `967`, `968`, `969`, `971`, `973`, `974`, `977`, `979`, `981`, `983`, `985`, `987`, `989`, `991`, `994`, `995`, `996`, `998`, `1001`, `1003`, `1005`, `1007`, `1009`, `1012`, `1014`, `1015`, `1016`, `1018`, `1021`, `1022`, `1024`, `1025`, `1026`, `1030`, `1033`, `1034`, `1035`, `1038`, `1040`, `1043`, `1044`, `1047`, `219`, `990`, `1048`, `1050`, `1052`, `1053`, `1056`, `1058`, `1060`, `1061`, `1063`, `1066`, `1067`, `1070`, `1071`, `1074`, `1075`, `1076`, `1078`, `1081`, `1083`, `1085`, `1088`, `1089`, `1090`, `1093`, `1095`, `1097`, `1098`, `1101`, `1103`, `1105`, `1108`, `1109`, `260`, `1112`, `1115`, `1116`, `1118`, `1119`, `1120`, `1121`, `1123`, `1125`, `1126`, `1128`, `1131`, `1133`, `1135`, `1138`, `1140`, `1142`, `1143`, `1145`, `1148`, `1151`, `1153`, `1154`, `1157`, `1159`, `1160`, `1162`, `1163`, `1166`, `1168`, `1169`, `1170`, `1171`, `1172`, `1173`, `1174`, `639`, `1178`, `35`, `1180`, `1183`, `1185`, `1187`, `1190`, `1192`, `1195`, `1197`, `1198`, `1201`, `1202`, `1203`, `1205`, `1207`, `1208`, `1210`, `1213`, `1214`, `1215`, `1218`, `1220`, `1223`, `1227`, `1228`, `1232`, `1235`, `1236`, `1237`, `1238`, `1240`, `1243`, `1245`, `1249`, `1252`, `1256`, `1257`, `1260`, `1261`, `1264`, `1265`, `1267`, `1270`, `1274`, `1276`, `1280`, `1283`, `1285`, `1288`, `1291`, `1292`, `1293`, `1296`, `1298`, `1300`, `1303`, `1305`, `1307`, `1309`, `1311`, `1313`, `1314`, `1317`, `1321`, `1324`, `1327`, `1329`, `1330`, `1332`, `1334`, `1336`, `1340`, `1342`, `1346`, `1347`, `1349`, `1351`, `1353`, `1354`, `1357`, `1359`, `1361`, `1363`, `1365`, `1367`, `1368`, `1370`, `1372`, `1376`, `1378`, `1380`, `1381`, `1382`, `1384`, `1387`, `1390`, `1393`, `1396`, `1400`, `1402`, `1404`, `993`, `1407`, `1408`, `1410`, `1411`, `1413`, `1414`, `1415`, `1417`, `1418`, `1419`, `1422`, `1425`, `1426`, `1429`, `1431`, `1433`, `325`, `1435`, `1437`, `1439`, `1442`, `1444`, `1446`, `1448`, `1452`, `1455`, `1457`, `1460`, `1462`, `1465`, `1468`, `1469`, `133`, `1471`, `1472`, `1475`, `1477`, `1478`, `1480`, `1481`, `1482`, `1485`, `1487`, `1488`, `1489`, `1492`, `1479`, `1494`, `1497`, `1498`, `1500`, `1502`, `179`, `1504`, `249`, `1507`, `1510`, `1511`, `1512`, `1514`, `1515`, `1517`, `1520`, `753`, `1523`, `1524`, `1525`, `1526`, `1528`, `1530`, `1531`, `1534`, `1535`, `1536`, `1539`, `818`, `1542`, `1544`, `1545`, `1546`, `1547`, `1548`, `1550`, `1551`, `1552`, `1555`, `1557`, `1558`, `1560`, `1561`, `1563`, `1565`, `1566`, `1568`, `1569`, `1571`, `1573`, `1574`, `1579`, `1581`, `1584`, `1587`, `1588`, `1356`, `1589`, `1590`, `1591`, `1592`, `1594`, `1595`, `1598`, `1600`, `1602`, `1604`, `1607`, `1610`, `1613`, `1616`, `1617`, `1619`, `1622`, `1624`, `1627`, `1630`, `1632`, `1634`, `1635`, `1637`, `1640`, `1643`, `1645`, `1648`, `1650`, `1652`, `1653`, `1656`, `1657`, `1659`, `1660`, `1662`, `1664`, `1667`, `1669`, `1672`, `1674`, `1675`, `1677`, `1678`, `1681`, `1683`, `1686`, `1687`, `1689`, `1691`, `1693`, `1696`, `1698`, `1699`, `1701`, `1004`, `1702`, `1704`, `1706`, `1709`, `1710`, `1711`, `1712`, `1713`, `1715`, 
`1717`, `1720`, `1724`, `1725`, `1727`, `1729`, `1730`, `1733`, `1167`, `1734`, `1738`, `1739`, `1741`, `1743`, `1744`, `1746`, `1748`, `1749`, `1752`, `1754`, `1755`, `1756`, `1758`, `1759`, `1762`, `1716`, `1765`, `148`, `1767`, `1770`, `1771`, `1772`, `1774`, `1777`, `1778`, `1779`, `1780`, `1782`, `1783`, `1787`, `1788`, `1304`, `1789`, `1791`, `1794`, `1796`, `1799`, `1801`, `1803`, `1805`, `1808`, `1810`, `1812`, `1814`, `1817`, `1818`, `1819`, `1822`, `1825`, `1826`, `1829`, `1832`, `1834`, `1836`, `1840`, `1844`, `1847`, `1848`, `1850`, `1851`, `1853`, `1855`, `1857`, `1287`, `1859`, `1860`, `1861`, `1863`, `1865`, `1658`, `1867`, `1869`, `1870`, `1871`, `1872`, `1874`, `1876`, `1878`, `1879`, `1881`, `1883`, `1884`, `1885`, `1887`, `1890`, `1891`, `1893`, `1896`, `1897`, `1899`, `1901`, `1902`, `1903`, `1904`, `1905`, `1906`, `1908`, `1909`, `1910`, `1913`, `293`, `1914`, `1915`, `1916`, `1917`, `1919`, `1921`, `1922`, `1924`, `1926`, `1927`, `1928`, `1930`, `1932`, `1933`, `1935`, `1936`, `1938`, `1939`, `1942`, `1943`, `1945`, `1947`, `1949`, `1951`, `1954`, `1956`, `1957`, `1959`, `1961`, `1962`, `1964`, `1965`, `1968`, `1970`, `1971`, `1972`, `601`, `1973`, `1974`, `1977`, `1979`, `1981`, `1984`, `1985`, `1987`, `1989`, `1990`, `1991`, `1993`, `1994`, `1996`, `1998`, `1999`, `2002`, `2003`, `2006`, `2007`, `2008`, `2010`, `2012`, `2014`, `2016`, `2020`, `2021`, `2024`, `2027`, `2029`, `2030`, `2031`, `2034`, `2036`, `2038`, `2040`, `2041`, `2042`, `2044`, `2046`, `2048`, `2051`, `2053`, `1513`, `2056`, `2057`, `2060`, `2063`, `2065`, `2067`, `2068`, `2070`, `2071`, `2073`, `2074`, `2076`, `2080`, `2081`, `2083`, `2085`, `1335`, `2086`, `2088`, `2091`, `2093`, `2095`, `2097`, `2099`, `2101`, `2103`, `2104`, `2106`, `1633`, `2108`, `2110`, `2114`, `2116`, `2118`, `2120`, `644`, `2121`, `475`, `2122`, `2123`, `2125`, `2126`, `2127`, `2128`, `2130`, `2132`, `2133`, `2134`, `2136`, `2138`, `2140`, `2141`, `2144`, `2145`, `2147`, `2148`, `2150`, `2153`, `2154`, `2156`, `2159`, `2161`, `2163`, `2164`, `2166`, `2169`, `2171`, `2174`, `2179`, `2180`, `2182`, `2184`, `2185`, `2186`, `2188`, `2189`, `2190`, `2191`, `2193`, `2196`, `2199`, `2200`, `2202`, `2205`, `2207`, `2209`, `2210`, `2212`, `2214`, `2216`, `2218`, `2220`, `2223`, `2226`, `2228`, `2229`, `2230`, `2232`, `2235`, `2237`, `2239`, `2241`, `2243`, `2246`, `2249`, `2250`, `2253`, `2254`, `2255`, `2256`, `2259`, `786`, `2262`, `2264`, `2266`, `2268`, `2272`, `2273`, `2275`, `2277`, `2279`, `2281`, `2282`, `2283`, `2285`, `2286`, `2287`, `2290`, `2292`, `2293`, `2294`, `2295`, `2297`, `2299`, `2300`, `2302`, `115`, `2303`, `2305`, `680`, `2306`, `2311`, `2312`, `2314`, `2135`, `2316`, `2318`, `2320`, `2322`, `2323`, `2326`, `2329`, `2332`, `2333`, `2335`, `2338`, `2340`, `2342`, `2344`, `2346`, `2348`, `2349`, `2351`, `2353`, `2355`, `2357`, `683`, `2358`, `2359`, `2361`, `2364`, `2366`, `2370`, `2372`, `2373`, `2374`, `2375`, `2376`, `1809`, `2377`, `2380`, `2385`, `2387`, `2388`, `2390`, `2391`, `2392`, `2394`, `2395`, `2396`, `2162`, `97`, `2398`, `2400`, `2403`, `2407`, `2412`, `2414`, `2415`, `2416`, `2418`, `2419`, `803`, `2421`, `2423`, `2426`, `2428`, `2430`, `2432`, `2433`, `2436`, `2439`, `345`, `2441`, `2442`, `2443`, `2446`, `2448`, `2452`, `2453`, `2456`, `2459`, `2462`, `2463`, `2464`, `2466`, `2467`, `2469`, `2471`, `2473`, `2474`, `2477`, `2478`, `2479`, `2481`, `2485`, `2117`, `2486`, `2488`, `2491`, `2494`, `2495`, `2496`, `2498`, `2499`, `2500`, `2502`, `2504`, `446` | </details> ### Accuracy | Type | Score 
| | --- | --- | | `TOKEN_F` | 99.92 | | `TOKEN_P` | 99.90 | | `TOKEN_R` | 99.94 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 95.08 | | `SENTS_P` | 95.54 | | `SENTS_R` | 94.63 | | `TAG_ACC` | 90.73 | | `POS_ACC` | 96.49 | | `MORPH_ACC` | 99.83 | | `DEP_UAS` | 85.05 | | `DEP_LAS` | 80.96 | | `LEMMA_ACC` | 92.78 |
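As with the Indonesian card, a short illustrative sketch (assuming the package from explosion/ko_udv25_koreangsd_trf plus `spacy-experimental` is installed) shows how the component list and the label scheme summarized above can be recovered from the loaded pipeline:

```python
import spacy

# Minimal sketch: assumes ko_udv25_koreangsd_trf is installed as a package
# from the explosion/ko_udv25_koreangsd_trf repository (spacy-experimental
# supplies the experimental_* components).
nlp = spacy.load("ko_udv25_koreangsd_trf")

# Default pipeline (senter is packaged but disabled by default).
print(nlp.pipe_names)

# The per-component label sets behind the "Label Scheme" section above.
for name in ("tagger", "morphologizer", "parser"):
    labels = nlp.get_pipe(name).labels
    print(f"{name}: {len(labels)} labels, e.g. {labels[:3]}")
```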
{"language": ["ko"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/ko_udv25_koreangsd_trf
null
[ "spacy", "token-classification", "ko", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #spacy #token-classification #ko #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Korean-GSD ### Label Scheme View label scheme (2415 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (2415 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #ko #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (2415 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Korean-Kaist | Feature | Description | | --- | --- | | **Name** | `ko_udv25_koreankaist_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (5329 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ecs`, `etm`, `f`, `f+f+jcj`, `f+f+jcs`, `f+f+jct`, `f+f+jxt`, `f+jca`, `f+jca+jp+ecc`, `f+jca+jp+ep+ef`, `f+jca+jxc`, `f+jca+jxc+jcm`, `f+jca+jxt`, `f+jcj`, `f+jcm`, `f+jco`, `f+jcs`, `f+jct`, `f+jct+jcm`, `f+jp+ef`, `f+jp+ep+ef`, `f+jp+etm`, `f+jxc`, `f+jxt`, `f+ncn`, `f+ncn+jcm`, `f+ncn+jcs`, `f+ncn+jp+ecc`, `f+ncn+jxt`, `f+ncpa+jcm`, `f+npp+jcs`, `f+nq`, `f+xsn`, `f+xsn+jco`, `f+xsn+jxt`, `ii`, `jca`, `jca+jcm`, `jca+jxc`, `jca+jxt`, `jcc`, `jcj`, `jcm`, `jco`, `jcr`, `jcr+jxc`, `jcs`, `jct`, `jct+jcm`, `jct+jxt`, `jp+ecc`, `jp+ecs`, `jp+ef`, `jp+ef+jcr`, `jp+ef+jcr+jxc`, `jp+ep+ecs`, `jp+ep+ef`, `jp+ep+etm`, `jp+ep+etn`, `jp+etm`, `jp+etn`, `jp+etn+jco`, `jp+etn+jxc`, `jxc`, `jxc+jca`, `jxc+jco`, `jxc+jcs`, `jxt`, `mad`, `mad+jxc`, `mad+jxt`, `mag`, `mag+jca`, `mag+jcm`, `mag+jcs`, `mag+jp+ef+jcr`, `mag+jxc`, `mag+jxc+jxc`, `mag+jxt`, `mag+xsn`, `maj`, `maj+jxc`, `maj+jxt`, `mma`, `mmd`, `nbn`, `nbn+jca`, `nbn+jca+jcj`, `nbn+jca+jcm`, `nbn+jca+jp+ef`, `nbn+jca+jxc`, `nbn+jca+jxt`, `nbn+jcc`, `nbn+jcj`, `nbn+jcm`, `nbn+jco`, `nbn+jcr`, `nbn+jcs`, `nbn+jct`, `nbn+jct+jcm`, `nbn+jct+jxt`, `nbn+jp+ecc`, `nbn+jp+ecs`, `nbn+jp+ecs+jca`, `nbn+jp+ecs+jcm`, `nbn+jp+ecs+jco`, `nbn+jp+ecs+jxc`, `nbn+jp+ecs+jxt`, `nbn+jp+ecx`, `nbn+jp+ef`, `nbn+jp+ef+jca`, `nbn+jp+ef+jco`, `nbn+jp+ef+jcr`, `nbn+jp+ef+jcr+jxc`, `nbn+jp+ef+jcr+jxt`, `nbn+jp+ef+jcs`, `nbn+jp+ef+jxc`, `nbn+jp+ef+jxc+jco`, `nbn+jp+ef+jxf`, `nbn+jp+ef+jxt`, `nbn+jp+ep+ecc`, `nbn+jp+ep+ecs`, `nbn+jp+ep+ecs+jxc`, `nbn+jp+ep+ef`, `nbn+jp+ep+ef+jcr`, `nbn+jp+ep+etm`, `nbn+jp+ep+etn`, `nbn+jp+ep+etn+jco`, `nbn+jp+ep+etn+jcs`, `nbn+jp+etm`, `nbn+jp+etn`, `nbn+jp+etn+jca`, `nbn+jp+etn+jca+jxt`, `nbn+jp+etn+jco`, `nbn+jp+etn+jcs`, `nbn+jp+etn+jxc`, `nbn+jp+etn+jxt`, `nbn+jxc`, `nbn+jxc+jca`, `nbn+jxc+jca+jxc`, `nbn+jxc+jca+jxt`, `nbn+jxc+jcc`, `nbn+jxc+jcm`, `nbn+jxc+jco`, `nbn+jxc+jcs`, `nbn+jxc+jp+ef`, `nbn+jxc+jxc`, `nbn+jxc+jxt`, `nbn+jxt`, `nbn+nbn`, `nbn+nbn+jp+ef`, `nbn+xsm+ecs`, `nbn+xsm+ef`, `nbn+xsm+ep+ef`, `nbn+xsm+ep+ef+jcr`, `nbn+xsm+etm`, `nbn+xsn`, `nbn+xsn+jca`, `nbn+xsn+jca+jp+ef+jcr`, `nbn+xsn+jca+jxc`, `nbn+xsn+jca+jxt`, `nbn+xsn+jcm`, `nbn+xsn+jco`, `nbn+xsn+jcs`, `nbn+xsn+jct`, `nbn+xsn+jp+ecc`, `nbn+xsn+jp+ecs`, `nbn+xsn+jp+ef`, `nbn+xsn+jp+ef+jcr`, `nbn+xsn+jp+ep+ef`, `nbn+xsn+jxc`, `nbn+xsn+jxt`, `nbn+xsv+etm`, `nbu`, `nbu+jca`, `nbu+jca+jxc`, `nbu+jca+jxt`, `nbu+jcc`, `nbu+jcc+jxc`, `nbu+jcj`, `nbu+jcm`, `nbu+jco`, `nbu+jcs`, `nbu+jct`, `nbu+jct+jxc`, `nbu+jp+ecc`, `nbu+jp+ecs`, `nbu+jp+ef`, `nbu+jp+ef+jcr`, `nbu+jp+ef+jxc`, 
`nbu+jp+ep+ecc`, `nbu+jp+ep+ecs`, `nbu+jp+ep+ef`, `nbu+jp+ep+ef+jcr`, `nbu+jp+ep+etm`, `nbu+jp+ep+etn+jco`, `nbu+jp+etm`, `nbu+jxc`, `nbu+jxc+jca`, `nbu+jxc+jcs`, `nbu+jxc+jp+ef`, `nbu+jxc+jp+ep+ef`, `nbu+jxc+jxt`, `nbu+jxt`, `nbu+ncn`, `nbu+ncn+jca`, `nbu+ncn+jcm`, `nbu+xsn`, `nbu+xsn+jca`, `nbu+xsn+jca+jxc`, `nbu+xsn+jca+jxt`, `nbu+xsn+jcm`, `nbu+xsn+jco`, `nbu+xsn+jcs`, `nbu+xsn+jp+ecs`, `nbu+xsn+jp+ep+ef`, `nbu+xsn+jxc`, `nbu+xsn+jxc+jxt`, `nbu+xsn+jxt`, `nbu+xsv+ecc`, `nbu+xsv+etm`, `ncn`, `ncn+f+ncpa+jco`, `ncn+jca`, `ncn+jca+jca`, `ncn+jca+jcc`, `ncn+jca+jcj`, `ncn+jca+jcm`, `ncn+jca+jcs`, `ncn+jca+jct`, `ncn+jca+jp+ecc`, `ncn+jca+jp+ecs`, `ncn+jca+jp+ef`, `ncn+jca+jp+ep+ef`, `ncn+jca+jp+etm`, `ncn+jca+jp+etn+jxt`, `ncn+jca+jxc`, `ncn+jca+jxc+jcc`, `ncn+jca+jxc+jcm`, `ncn+jca+jxc+jxc`, `ncn+jca+jxc+jxt`, `ncn+jca+jxt`, `ncn+jcc`, `ncn+jcc+jxc`, `ncn+jcj`, `ncn+jcj+jxt`, `ncn+jcm`, `ncn+jco`, `ncn+jcr`, `ncn+jcr+jxc`, `ncn+jcs`, `ncn+jcs+jxt`, `ncn+jct`, `ncn+jct+jcm`, `ncn+jct+jxc`, `ncn+jct+jxt`, `ncn+jcv`, `ncn+jp+ecc`, `ncn+jp+ecc+jct`, `ncn+jp+ecc+jxc`, `ncn+jp+ecs`, `ncn+jp+ecs+jcm`, `ncn+jp+ecs+jco`, `ncn+jp+ecs+jxc`, `ncn+jp+ecs+jxt`, `ncn+jp+ecx`, `ncn+jp+ef`, `ncn+jp+ef+jca`, `ncn+jp+ef+jcm`, `ncn+jp+ef+jco`, `ncn+jp+ef+jcr`, `ncn+jp+ef+jcr+jxc`, `ncn+jp+ef+jcr+jxt`, `ncn+jp+ef+jp+etm`, `ncn+jp+ef+jxc`, `ncn+jp+ef+jxf`, `ncn+jp+ef+jxt`, `ncn+jp+ep+ecc`, `ncn+jp+ep+ecs`, `ncn+jp+ep+ecs+jxc`, `ncn+jp+ep+ecx`, `ncn+jp+ep+ef`, `ncn+jp+ep+ef+jcr`, `ncn+jp+ep+ef+jcr+jxc`, `ncn+jp+ep+ef+jxc`, `ncn+jp+ep+ef+jxf`, `ncn+jp+ep+ef+jxt`, `ncn+jp+ep+ep+etm`, `ncn+jp+ep+etm`, `ncn+jp+ep+etn`, `ncn+jp+ep+etn+jca`, `ncn+jp+ep+etn+jca+jxc`, `ncn+jp+ep+etn+jco`, `ncn+jp+ep+etn+jcs`, `ncn+jp+ep+etn+jxt`, `ncn+jp+etm`, `ncn+jp+etn`, `ncn+jp+etn+jca`, `ncn+jp+etn+jca+jxc`, `ncn+jp+etn+jca+jxt`, `ncn+jp+etn+jco`, `ncn+jp+etn+jcs`, `ncn+jp+etn+jct`, `ncn+jp+etn+jxc`, `ncn+jp+etn+jxt`, `ncn+jxc`, `ncn+jxc+jca`, `ncn+jxc+jca+jxc`, `ncn+jxc+jca+jxt`, `ncn+jxc+jcc`, `ncn+jxc+jcm`, `ncn+jxc+jco`, `ncn+jxc+jcs`, `ncn+jxc+jct+jxt`, `ncn+jxc+jp+ef`, `ncn+jxc+jp+ef+jcr`, `ncn+jxc+jp+ep+ecs`, `ncn+jxc+jp+ep+ef`, `ncn+jxc+jp+etm`, `ncn+jxc+jxc`, `ncn+jxc+jxt`, `ncn+jxt`, `ncn+jxt+jcm`, `ncn+jxt+jxc`, `ncn+nbn`, `ncn+nbn+jca`, `ncn+nbn+jcm`, `ncn+nbn+jcs`, `ncn+nbn+jp+ecc`, `ncn+nbn+jp+ep+ef`, `ncn+nbn+jxc`, `ncn+nbn+jxt`, `ncn+nbu`, `ncn+nbu+jca`, `ncn+nbu+jcm`, `ncn+nbu+jco`, `ncn+nbu+jp+ef`, `ncn+nbu+jxc`, `ncn+nbu+ncn`, `ncn+ncn`, `ncn+ncn+jca`, `ncn+ncn+jca+jcc`, `ncn+ncn+jca+jcm`, `ncn+ncn+jca+jxc`, `ncn+ncn+jca+jxc+jcm`, `ncn+ncn+jca+jxc+jxc`, `ncn+ncn+jca+jxt`, `ncn+ncn+jcc`, `ncn+ncn+jcj`, `ncn+ncn+jcm`, `ncn+ncn+jco`, `ncn+ncn+jcr`, `ncn+ncn+jcs`, `ncn+ncn+jct`, `ncn+ncn+jct+jcm`, `ncn+ncn+jct+jxc`, `ncn+ncn+jct+jxt`, `ncn+ncn+jp+ecc`, `ncn+ncn+jp+ecs`, `ncn+ncn+jp+ef`, `ncn+ncn+jp+ef+jcm`, `ncn+ncn+jp+ef+jcr`, `ncn+ncn+jp+ef+jcs`, `ncn+ncn+jp+ep+ecc`, `ncn+ncn+jp+ep+ecs`, `ncn+ncn+jp+ep+ef`, `ncn+ncn+jp+ep+ef+jcr`, `ncn+ncn+jp+ep+ep+etm`, `ncn+ncn+jp+ep+etm`, `ncn+ncn+jp+ep+etn`, `ncn+ncn+jp+etm`, `ncn+ncn+jp+etn`, `ncn+ncn+jp+etn+jca`, `ncn+ncn+jp+etn+jco`, `ncn+ncn+jp+etn+jxc`, `ncn+ncn+jxc`, `ncn+ncn+jxc+jca`, `ncn+ncn+jxc+jcc`, `ncn+ncn+jxc+jcm`, `ncn+ncn+jxc+jco`, `ncn+ncn+jxc+jcs`, `ncn+ncn+jxc+jxc`, `ncn+ncn+jxt`, `ncn+ncn+nbn`, `ncn+ncn+ncn`, `ncn+ncn+ncn+jca`, `ncn+ncn+ncn+jca+jcm`, `ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+jcj`, `ncn+ncn+ncn+jcm`, `ncn+ncn+ncn+jco`, `ncn+ncn+ncn+jcs`, `ncn+ncn+ncn+jct+jxt`, `ncn+ncn+ncn+jp+etn+jxc`, `ncn+ncn+ncn+jxt`, `ncn+ncn+ncn+ncn+jca`, 
`ncn+ncn+ncn+ncn+jca+jxt`, `ncn+ncn+ncn+ncn+jco`, `ncn+ncn+ncn+xsn+jp+etm`, `ncn+ncn+ncpa`, `ncn+ncn+ncpa+jca`, `ncn+ncn+ncpa+jcm`, `ncn+ncn+ncpa+jco`, `ncn+ncn+ncpa+jcs`, `ncn+ncn+ncpa+jxc`, `ncn+ncn+ncpa+jxt`, `ncn+ncn+ncpa+ncn`, `ncn+ncn+ncpa+ncn+jca`, `ncn+ncn+ncpa+ncn+jcj`, `ncn+ncn+ncpa+ncn+jcm`, `ncn+ncn+ncpa+ncn+jxt`, `ncn+ncn+xsn`, `ncn+ncn+xsn+jca`, `ncn+ncn+xsn+jca+jxt`, `ncn+ncn+xsn+jcj`, `ncn+ncn+xsn+jcm`, `ncn+ncn+xsn+jco`, `ncn+ncn+xsn+jcs`, `ncn+ncn+xsn+jct`, `ncn+ncn+xsn+jp+ecs`, `ncn+ncn+xsn+jp+ep+ef`, `ncn+ncn+xsn+jp+etm`, `ncn+ncn+xsn+jxc`, `ncn+ncn+xsn+jxc+jcs`, `ncn+ncn+xsn+jxt`, `ncn+ncn+xsv+ecc`, `ncn+ncn+xsv+etm`, `ncn+ncpa`, `ncn+ncpa+jca`, `ncn+ncpa+jca+jcm`, `ncn+ncpa+jca+jxc`, `ncn+ncpa+jca+jxt`, `ncn+ncpa+jcc`, `ncn+ncpa+jcj`, `ncn+ncpa+jcm`, `ncn+ncpa+jco`, `ncn+ncpa+jcr`, `ncn+ncpa+jcs`, `ncn+ncpa+jct`, `ncn+ncpa+jct+jcm`, `ncn+ncpa+jct+jxt`, `ncn+ncpa+jp+ecc`, `ncn+ncpa+jp+ecc+jxc`, `ncn+ncpa+jp+ecs`, `ncn+ncpa+jp+ecs+jxc`, `ncn+ncpa+jp+ef`, `ncn+ncpa+jp+ef+jcr`, `ncn+ncpa+jp+ef+jcr+jxc`, `ncn+ncpa+jp+ep+ef`, `ncn+ncpa+jp+ep+etm`, `ncn+ncpa+jp+ep+etn`, `ncn+ncpa+jp+etm`, `ncn+ncpa+jxc`, `ncn+ncpa+jxc+jca+jxc`, `ncn+ncpa+jxc+jco`, `ncn+ncpa+jxc+jcs`, `ncn+ncpa+jxt`, `ncn+ncpa+nbn+jcs`, `ncn+ncpa+ncn`, `ncn+ncpa+ncn+jca`, `ncn+ncpa+ncn+jca+jcm`, `ncn+ncpa+ncn+jca+jxc`, `ncn+ncpa+ncn+jca+jxt`, `ncn+ncpa+ncn+jcj`, `ncn+ncpa+ncn+jcm`, `ncn+ncpa+ncn+jco`, `ncn+ncpa+ncn+jcs`, `ncn+ncpa+ncn+jct`, `ncn+ncpa+ncn+jct+jcm`, `ncn+ncpa+ncn+jp+ef+jcr`, `ncn+ncpa+ncn+jp+ep+etm`, `ncn+ncpa+ncn+jxc`, `ncn+ncpa+ncn+jxt`, `ncn+ncpa+ncn+xsn+jcm`, `ncn+ncpa+ncn+xsn+jxt`, `ncn+ncpa+ncpa`, `ncn+ncpa+ncpa+jca`, `ncn+ncpa+ncpa+jcj`, `ncn+ncpa+ncpa+jcm`, `ncn+ncpa+ncpa+jco`, `ncn+ncpa+ncpa+jcs`, `ncn+ncpa+ncpa+jp+ep+ef`, `ncn+ncpa+ncpa+jxt`, `ncn+ncpa+ncpa+ncn`, `ncn+ncpa+xsn`, `ncn+ncpa+xsn+jcm`, `ncn+ncpa+xsn+jco`, `ncn+ncpa+xsn+jcs`, `ncn+ncpa+xsn+jp+ecc`, `ncn+ncpa+xsn+jp+etm`, `ncn+ncpa+xsn+jxt`, `ncn+ncpa+xsv+ecc`, `ncn+ncpa+xsv+ecs`, `ncn+ncpa+xsv+ecx`, `ncn+ncpa+xsv+ecx+px+etm`, `ncn+ncpa+xsv+ef`, `ncn+ncpa+xsv+ef+jcm`, `ncn+ncpa+xsv+ef+jcr`, `ncn+ncpa+xsv+etm`, `ncn+ncpa+xsv+etn`, `ncn+ncpa+xsv+etn+jco`, `ncn+ncps`, `ncn+ncps+jca`, `ncn+ncps+jcm`, `ncn+ncps+jco`, `ncn+ncps+jcs`, `ncn+ncps+jp+ecs`, `ncn+ncps+jxt`, `ncn+ncps+ncn+jcs`, `ncn+ncps+ncpa+ncn`, `ncn+ncps+xsm+ef`, `ncn+ncps+xsm+etm`, `ncn+nnc`, `ncn+nnc+jcs`, `ncn+nnc+nnc`, `ncn+nno`, `ncn+nq`, `ncn+nq+jca`, `ncn+nq+jca+jxc`, `ncn+nq+jca+jxt`, `ncn+nq+jcm`, `ncn+nq+jcs`, `ncn+nq+jxt`, `ncn+nq+ncn+jcm`, `ncn+nq+ncn+xsn+jcs`, `ncn+nq+xsn+jxt`, `ncn+xsa`, `ncn+xsm+ecc`, `ncn+xsm+ecs`, `ncn+xsm+ecs+jxc`, `ncn+xsm+ecx`, `ncn+xsm+ecx+jcs`, `ncn+xsm+ecx+px+ep+etm`, `ncn+xsm+ef`, `ncn+xsm+ef+jcr`, `ncn+xsm+etm`, `ncn+xsm+etn+jcm`, `ncn+xsm+etn+jp+ef+jcr`, `ncn+xsn`, `ncn+xsn+jca`, `ncn+xsn+jca+jcj`, `ncn+xsn+jca+jxc`, `ncn+xsn+jca+jxc+jxc`, `ncn+xsn+jca+jxt`, `ncn+xsn+jcc`, `ncn+xsn+jcj`, `ncn+xsn+jcm`, `ncn+xsn+jco`, `ncn+xsn+jcs`, `ncn+xsn+jcs+jxt`, `ncn+xsn+jct`, `ncn+xsn+jct+jcm`, `ncn+xsn+jct+jxc`, `ncn+xsn+jct+jxt`, `ncn+xsn+jcv`, `ncn+xsn+jp+ecc`, `ncn+xsn+jp+ecc+jxc`, `ncn+xsn+jp+ecs`, `ncn+xsn+jp+ecs+jxc`, `ncn+xsn+jp+ecx`, `ncn+xsn+jp+ecx+jxt`, `ncn+xsn+jp+ef`, `ncn+xsn+jp+ef+jca`, `ncn+xsn+jp+ef+jcr`, `ncn+xsn+jp+ep+ecc`, `ncn+xsn+jp+ep+ecs`, `ncn+xsn+jp+ep+ef`, `ncn+xsn+jp+ep+ef+jcr`, `ncn+xsn+jp+ep+etm`, `ncn+xsn+jp+ep+etn`, `ncn+xsn+jp+etm`, `ncn+xsn+jp+etn`, `ncn+xsn+jp+etn+jca`, `ncn+xsn+jp+etn+jca+jxt`, `ncn+xsn+jp+etn+jxc`, `ncn+xsn+jp+etn+jxt`, `ncn+xsn+jxc`, `ncn+xsn+jxc+jcm`, `ncn+xsn+jxc+jco`, 
`ncn+xsn+jxc+jcs`, `ncn+xsn+jxc+jxc`, `ncn+xsn+jxt`, `ncn+xsn+ncn+jca`, `ncn+xsn+ncn+jca+jxt`, `ncn+xsn+ncn+jcs`, `ncn+xsn+ncpa+jca`, `ncn+xsn+xsn`, `ncn+xsn+xsn+jca`, `ncn+xsn+xsn+jcm`, `ncn+xsn+xsn+jp+ecs`, `ncn+xsn+xsn+jxc`, `ncn+xsn+xsn+jxc+jcc`, `ncn+xsn+xsn+jxc+jcs`, `ncn+xsn+xsv+ecc`, `ncn+xsn+xsv+etm`, `ncn+xsn+xsv+etn`, `ncn+xsv+ecc`, `ncn+xsv+ecs`, `ncn+xsv+ecx`, `ncn+xsv+ef`, `ncn+xsv+ep+ecs`, `ncn+xsv+ep+ef`, `ncn+xsv+ep+etm`, `ncn+xsv+etm`, `ncn+xsv+etn+jca`, `ncpa`, `ncpa+jca`, `ncpa+jca+jcm`, `ncpa+jca+jct`, `ncpa+jca+jp+ecs`, `ncpa+jca+jp+ef`, `ncpa+jca+jp+ep+ef`, `ncpa+jca+jxc`, `ncpa+jca+jxc+jcm`, `ncpa+jca+jxc+jxc`, `ncpa+jca+jxc+jxt`, `ncpa+jca+jxt`, `ncpa+jcc`, `ncpa+jcj`, `ncpa+jcm`, `ncpa+jco`, `ncpa+jcr`, `ncpa+jcs`, `ncpa+jct`, `ncpa+jct+jcm`, `ncpa+jct+jxc`, `ncpa+jct+jxt`, `ncpa+jp+ecc`, `ncpa+jp+ecs`, `ncpa+jp+ecs+jxc`, `ncpa+jp+ecx`, `ncpa+jp+ecx+jxc`, `ncpa+jp+ef`, `ncpa+jp+ef+jca`, `ncpa+jp+ef+jco`, `ncpa+jp+ef+jcr`, `ncpa+jp+ef+jxc`, `ncpa+jp+ef+jxf`, `ncpa+jp+ep+ecc`, `ncpa+jp+ep+ecs`, `ncpa+jp+ep+ef`, `ncpa+jp+ep+ef+jca`, `ncpa+jp+ep+ef+jcr`, `ncpa+jp+ep+ef+jxt`, `ncpa+jp+ep+etm`, `ncpa+jp+ep+etn+jca`, `ncpa+jp+ep+etn+jca+jxc`, `ncpa+jp+ep+etn+jcs`, `ncpa+jp+etm`, `ncpa+jp+etn`, `ncpa+jp+etn+jca`, `ncpa+jp+etn+jca+jxt`, `ncpa+jp+etn+jco`, `ncpa+jp+etn+jcs`, `ncpa+jp+etn+jxc`, `ncpa+jp+etn+jxt`, `ncpa+jxc`, `ncpa+jxc+jca`, `ncpa+jxc+jca+jxc`, `ncpa+jxc+jca+jxt`, `ncpa+jxc+jcc`, `ncpa+jxc+jcm`, `ncpa+jxc+jco`, `ncpa+jxc+jcs`, `ncpa+jxc+jxc`, `ncpa+jxt`, `ncpa+jxt+jxc`, `ncpa+jxt+jxt`, `ncpa+nbn+jca`, `ncpa+nbn+jct`, `ncpa+nbn+jp+ef`, `ncpa+nbn+jp+ep+ef`, `ncpa+nbn+jp+etm`, `ncpa+nbn+jxc+jcc`, `ncpa+nbu+jca`, `ncpa+ncn`, `ncpa+ncn+jca`, `ncpa+ncn+jca+jcm`, `ncpa+ncn+jca+jxc`, `ncpa+ncn+jca+jxc+jcm`, `ncpa+ncn+jca+jxt`, `ncpa+ncn+jcc`, `ncpa+ncn+jcj`, `ncpa+ncn+jcm`, `ncpa+ncn+jco`, `ncpa+ncn+jcr`, `ncpa+ncn+jcs`, `ncpa+ncn+jct`, `ncpa+ncn+jct+jcm`, `ncpa+ncn+jct+jxc`, `ncpa+ncn+jp+ecc`, `ncpa+ncn+jp+ecs`, `ncpa+ncn+jp+ef`, `ncpa+ncn+jp+ef+jcr`, `ncpa+ncn+jp+ef+jcr+jxc`, `ncpa+ncn+jp+ep+ef`, `ncpa+ncn+jp+ep+etm`, `ncpa+ncn+jp+etm`, `ncpa+ncn+jp+etn+jca+jxt`, `ncpa+ncn+jp+etn+jco`, `ncpa+ncn+jp+etn+jxc`, `ncpa+ncn+jxc`, `ncpa+ncn+jxc+jcc`, `ncpa+ncn+jxc+jco`, `ncpa+ncn+jxt`, `ncpa+ncn+nbn`, `ncpa+ncn+ncn`, `ncpa+ncn+ncn+jca`, `ncpa+ncn+ncn+jca+jxt`, `ncpa+ncn+ncn+jcm`, `ncpa+ncn+ncn+jco`, `ncpa+ncn+ncn+jcs`, `ncpa+ncn+ncn+jp+ep+ef`, `ncpa+ncn+ncn+jp+etm`, `ncpa+ncn+ncn+jxt`, `ncpa+ncn+ncn+ncn`, `ncpa+ncn+ncn+xsn+jxt`, `ncpa+ncn+ncpa`, `ncpa+ncn+ncpa+jca`, `ncpa+ncn+ncpa+jcj`, `ncpa+ncn+ncpa+jco`, `ncpa+ncn+ncpa+ncn`, `ncpa+ncn+ncpa+ncn+jco`, `ncpa+ncn+xsn`, `ncpa+ncn+xsn+jca`, `ncpa+ncn+xsn+jca+jxc`, `ncpa+ncn+xsn+jcj`, `ncpa+ncn+xsn+jcm`, `ncpa+ncn+xsn+jco`, `ncpa+ncn+xsn+jcs`, `ncpa+ncn+xsn+jct`, `ncpa+ncn+xsn+jp+ep+ef`, `ncpa+ncn+xsn+jp+etm`, `ncpa+ncn+xsn+jxt`, `ncpa+ncpa`, `ncpa+ncpa+jca`, `ncpa+ncpa+jca+jcm`, `ncpa+ncpa+jca+jxc`, `ncpa+ncpa+jca+jxt`, `ncpa+ncpa+jcj`, `ncpa+ncpa+jcm`, `ncpa+ncpa+jco`, `ncpa+ncpa+jcs`, `ncpa+ncpa+jct`, `ncpa+ncpa+jct+jxc`, `ncpa+ncpa+jct+jxt`, `ncpa+ncpa+jp+ecc`, `ncpa+ncpa+jp+ecs`, `ncpa+ncpa+jp+ecx`, `ncpa+ncpa+jp+ef`, `ncpa+ncpa+jp+ef+jca`, `ncpa+ncpa+jp+ef+jcr`, `ncpa+ncpa+jp+ef+jcr+jxc`, `ncpa+ncpa+jp+ep+ecs`, `ncpa+ncpa+jp+etm`, `ncpa+ncpa+jxc`, `ncpa+ncpa+jxt`, `ncpa+ncpa+ncn`, `ncpa+ncpa+ncn+jca`, `ncpa+ncpa+ncn+jcj`, `ncpa+ncpa+ncn+jcm`, `ncpa+ncpa+ncn+jco`, `ncpa+ncpa+ncn+jcs`, `ncpa+ncpa+ncn+jxt`, `ncpa+ncpa+ncpa+jcm`, `ncpa+ncpa+ncpa+jcs`, `ncpa+ncpa+ncpa+ncpa+jco`, `ncpa+ncpa+xsn`, `ncpa+ncpa+xsn+jca`, 
`ncpa+ncpa+xsn+jcj`, `ncpa+ncpa+xsn+jco`, `ncpa+ncpa+xsn+jcs`, `ncpa+ncpa+xsn+jxc`, `ncpa+ncpa+xsn+jxt`, `ncpa+ncpa+xsv+ecc`, `ncpa+ncpa+xsv+ecs`, `ncpa+ncpa+xsv+ef`, `ncpa+ncpa+xsv+ep+ef`, `ncpa+ncpa+xsv+ep+etm`, `ncpa+ncpa+xsv+etm`, `ncpa+ncpa+xsv+etn+jca`, `ncpa+ncps`, `ncpa+ncps+jca`, `ncpa+ncps+jcm`, `ncpa+ncps+jco`, `ncpa+ncps+jcs`, `ncpa+ncps+jxt`, `ncpa+ncps+xsm+etm`, `ncpa+nq+jca`, `ncpa+xsa`, `ncpa+xsn`, `ncpa+xsn+jca`, `ncpa+xsn+jca+jxc`, `ncpa+xsn+jca+jxt`, `ncpa+xsn+jcc`, `ncpa+xsn+jcj`, `ncpa+xsn+jcm`, `ncpa+xsn+jco`, `ncpa+xsn+jcs`, `ncpa+xsn+jct`, `ncpa+xsn+jp+ecc`, `ncpa+xsn+jp+ecs`, `ncpa+xsn+jp+ecs+jxc`, `ncpa+xsn+jp+ecx`, `ncpa+xsn+jp+ecx+jxt`, `ncpa+xsn+jp+ef`, `ncpa+xsn+jp+ef+jcr`, `ncpa+xsn+jp+ef+jxf`, `ncpa+xsn+jp+ep+ecc`, `ncpa+xsn+jp+ep+ef`, `ncpa+xsn+jp+ep+ef+jco`, `ncpa+xsn+jp+ep+ef+jcr`, `ncpa+xsn+jp+etm`, `ncpa+xsn+jp+etn`, `ncpa+xsn+jp+etn+jco`, `ncpa+xsn+jp+etn+jxc`, `ncpa+xsn+jxc`, `ncpa+xsn+jxt`, `ncpa+xsv+ecc`, `ncpa+xsv+ecc+jcm`, `ncpa+xsv+ecc+jxc`, `ncpa+xsv+ecc+jxt`, `ncpa+xsv+ecs`, `ncpa+xsv+ecs+jca`, `ncpa+xsv+ecs+jco`, `ncpa+xsv+ecs+jp+ef`, `ncpa+xsv+ecs+jxc`, `ncpa+xsv+ecs+jxc+jxt`, `ncpa+xsv+ecs+jxt`, `ncpa+xsv+ecx`, `ncpa+xsv+ecx+jco`, `ncpa+xsv+ecx+jxc`, `ncpa+xsv+ecx+jxt`, `ncpa+xsv+ecx+px+ecc`, `ncpa+xsv+ecx+px+ecs`, `ncpa+xsv+ecx+px+ecx`, `ncpa+xsv+ecx+px+ecx+jxc`, `ncpa+xsv+ecx+px+ecx+px+ecs`, `ncpa+xsv+ecx+px+ef`, `ncpa+xsv+ecx+px+ef+jcr`, `ncpa+xsv+ecx+px+ep+ecc`, `ncpa+xsv+ecx+px+ep+ecs`, `ncpa+xsv+ecx+px+ep+ef`, `ncpa+xsv+ecx+px+ep+ef+jcr`, `ncpa+xsv+ecx+px+ep+etm`, `ncpa+xsv+ecx+px+ep+etn+jca`, `ncpa+xsv+ecx+px+ep+etn+jco`, `ncpa+xsv+ecx+px+ep+etn+jxc`, `ncpa+xsv+ecx+px+ep+etn+jxt`, `ncpa+xsv+ecx+px+etm`, `ncpa+xsv+ecx+px+etn`, `ncpa+xsv+ecx+px+etn+jca`, `ncpa+xsv+ecx+px+etn+jco`, `ncpa+xsv+ef`, `ncpa+xsv+ef+jca`, `ncpa+xsv+ef+jcj`, `ncpa+xsv+ef+jcm`, `ncpa+xsv+ef+jco`, `ncpa+xsv+ef+jcr`, `ncpa+xsv+ef+jcr+jxt`, `ncpa+xsv+ef+jcs`, `ncpa+xsv+ef+jxc`, `ncpa+xsv+ef+jxf`, `ncpa+xsv+ef+jxt`, `ncpa+xsv+ep+ecc`, `ncpa+xsv+ep+ecs`, `ncpa+xsv+ep+ecs+jco`, `ncpa+xsv+ep+ecs+jxc`, `ncpa+xsv+ep+ecs+jxt`, `ncpa+xsv+ep+ecx`, `ncpa+xsv+ep+ecx+jxc`, `ncpa+xsv+ep+ef`, `ncpa+xsv+ep+ef+jca`, `ncpa+xsv+ep+ef+jca+jxt`, `ncpa+xsv+ep+ef+jco`, `ncpa+xsv+ep+ef+jcr`, `ncpa+xsv+ep+ef+jcr+jxc`, `ncpa+xsv+ep+ef+jcr+jxc+jxt`, `ncpa+xsv+ep+ef+jxc`, `ncpa+xsv+ep+ef+jxf`, `ncpa+xsv+ep+ef+jxt`, `ncpa+xsv+ep+ep+ecs`, `ncpa+xsv+ep+ep+ef`, `ncpa+xsv+ep+etm`, `ncpa+xsv+ep+etn`, `ncpa+xsv+ep+etn+jca`, `ncpa+xsv+ep+etn+jca+jxc`, `ncpa+xsv+ep+etn+jcj`, `ncpa+xsv+ep+etn+jco`, `ncpa+xsv+ep+etn+jcs`, `ncpa+xsv+ep+etn+jxt`, `ncpa+xsv+etm`, `ncpa+xsv+etn`, `ncpa+xsv+etn+jca`, `ncpa+xsv+etn+jca+jxc`, `ncpa+xsv+etn+jca+jxt`, `ncpa+xsv+etn+jco`, `ncpa+xsv+etn+jcs`, `ncpa+xsv+etn+jct`, `ncpa+xsv+etn+jxc`, `ncpa+xsv+etn+jxc+jcm`, `ncpa+xsv+etn+jxc+jcs`, `ncpa+xsv+etn+jxc+jxc`, `ncpa+xsv+etn+jxc+jxt`, `ncpa+xsv+etn+jxt`, `ncps`, `ncps+jca`, `ncps+jca+jcm`, `ncps+jca+jxc`, `ncps+jca+jxc+jcm`, `ncps+jcc`, `ncps+jcj`, `ncps+jcm`, `ncps+jco`, `ncps+jcs`, `ncps+jct`, `ncps+jct+jcm`, `ncps+jct+jxt`, `ncps+jp+ecc`, `ncps+jp+ecs`, `ncps+jp+ecs+jxt`, `ncps+jp+ef`, `ncps+jp+ef+jcr`, `ncps+jp+ep+ef`, `ncps+jp+ep+etn`, `ncps+jp+etm`, `ncps+jp+etn+jcs`, `ncps+jp+etn+jxt`, `ncps+jxc`, `ncps+jxc+jxc`, `ncps+jxt`, `ncps+nbn+jp+etm`, `ncps+nbn+jxc`, `ncps+ncn`, `ncps+ncn+jca`, `ncps+ncn+jca+jcm`, `ncps+ncn+jcm`, `ncps+ncn+jco`, `ncps+ncn+jcs`, `ncps+ncn+jct+jxt`, `ncps+ncn+jp+ef`, `ncps+ncn+jp+ef+jcr`, `ncps+ncn+jp+etm`, `ncps+ncn+jxc+jco`, `ncps+ncn+jxt`, `ncps+ncn+ncn`, `ncps+ncn+ncn+jca+jxc`, 
`ncps+ncn+ncn+jcm`, `ncps+ncn+ncn+jco`, `ncps+ncn+ncn+jxt`, `ncps+ncn+xsn`, `ncps+ncn+xsn+jca`, `ncps+ncn+xsn+jcj`, `ncps+ncn+xsn+jco`, `ncps+ncn+xsn+jp+ecc`, `ncps+ncn+xsn+jp+etm`, `ncps+ncpa`, `ncps+ncpa+jca`, `ncps+ncpa+jcc`, `ncps+ncpa+jcj`, `ncps+ncpa+jcm`, `ncps+ncpa+jco`, `ncps+ncpa+jcs`, `ncps+ncpa+jp+etm`, `ncps+ncpa+jxt`, `ncps+ncpa+xsv+etm`, `ncps+ncps+jca`, `ncps+ncps+jcm`, `ncps+ncps+xsm+ecc`, `ncps+ncps+xsm+ecs`, `ncps+ncps+xsm+etm`, `ncps+xsa`, `ncps+xsa+jxc`, `ncps+xsm+ecc`, `ncps+xsm+ecc+jxc`, `ncps+xsm+ecc+jxt`, `ncps+xsm+ecs`, `ncps+xsm+ecs+jxc`, `ncps+xsm+ecs+jxt`, `ncps+xsm+ecx`, `ncps+xsm+ecx+jcs`, `ncps+xsm+ecx+jxc`, `ncps+xsm+ecx+jxt`, `ncps+xsm+ecx+px+ecc`, `ncps+xsm+ecx+px+ecs`, `ncps+xsm+ecx+px+ecx`, `ncps+xsm+ecx+px+ecx+jxt`, `ncps+xsm+ecx+px+ef`, `ncps+xsm+ecx+px+ep+ecs`, `ncps+xsm+ecx+px+ep+ef`, `ncps+xsm+ecx+px+ep+etm`, `ncps+xsm+ecx+px+ep+etn+jco`, `ncps+xsm+ecx+px+etm`, `ncps+xsm+ecx+px+etn`, `ncps+xsm+ecx+px+etn+jca`, `ncps+xsm+ecx+px+etn+jcj`, `ncps+xsm+ecx+px+etn+jco`, `ncps+xsm+ef`, `ncps+xsm+ef+jco`, `ncps+xsm+ef+jcr`, `ncps+xsm+ef+jcr+jxc`, `ncps+xsm+ef+jcr+jxt`, `ncps+xsm+ef+jxf`, `ncps+xsm+ef+jxt`, `ncps+xsm+ep+ecc`, `ncps+xsm+ep+ecs`, `ncps+xsm+ep+ecs+etm`, `ncps+xsm+ep+ef`, `ncps+xsm+ep+ef+jco`, `ncps+xsm+ep+ef+jcr`, `ncps+xsm+ep+ef+jxf`, `ncps+xsm+ep+ep+ef`, `ncps+xsm+ep+etm`, `ncps+xsm+ep+etn`, `ncps+xsm+ep+etn+jxt`, `ncps+xsm+etm`, `ncps+xsm+etn`, `ncps+xsm+etn+jca`, `ncps+xsm+etn+jca+jxt`, `ncps+xsm+etn+jcj`, `ncps+xsm+etn+jcm`, `ncps+xsm+etn+jco`, `ncps+xsm+etn+jcs`, `ncps+xsm+etn+jct`, `ncps+xsm+etn+jct+jcm`, `ncps+xsm+etn+jp+ef+jcr`, `ncps+xsm+etn+jp+etm`, `ncps+xsm+etn+jxc`, `ncps+xsm+etn+jxc+jxt`, `ncps+xsm+etn+jxt`, `ncps+xsn`, `ncps+xsn+jca`, `ncps+xsn+jca+jxt`, `ncps+xsn+jcm`, `ncps+xsn+jco`, `ncps+xsn+jcs`, `ncps+xsn+jp+ecc`, `ncps+xsn+jp+ep+ecs`, `ncps+xsn+jp+etm`, `ncps+xsn+jxc`, `ncps+xsn+jxt`, `ncps+xsv+etm`, `nnc`, `nnc+f`, `nnc+f+jca`, `nnc+f+jp+ef`, `nnc+jca`, `nnc+jca+jxc`, `nnc+jca+jxt`, `nnc+jcc`, `nnc+jcj`, `nnc+jcm`, `nnc+jco`, `nnc+jcs`, `nnc+jp+ecc`, `nnc+jp+ecs`, `nnc+jp+ef`, `nnc+jp+ef+jcr`, `nnc+jp+ep+ef`, `nnc+jp+ep+etm`, `nnc+jp+etm`, `nnc+jp+etn+jca`, `nnc+jxc`, `nnc+jxt`, `nnc+nbn`, `nnc+nbn+jcm`, `nnc+nbn+jco`, `nnc+nbn+nbu+jcc`, `nnc+nbn+nbu+jcs`, `nnc+nbn+xsn`, `nnc+nbu`, `nnc+nbu+jca`, `nnc+nbu+jca+jxc`, `nnc+nbu+jcc`, `nnc+nbu+jcj`, `nnc+nbu+jcm`, `nnc+nbu+jco`, `nnc+nbu+jcs`, `nnc+nbu+jp+ef`, `nnc+nbu+jp+ef+jcr`, `nnc+nbu+jp+ep+ecs`, `nnc+nbu+jp+ep+ef`, `nnc+nbu+jp+etm`, `nnc+nbu+jxc`, `nnc+nbu+jxc+jcs`, `nnc+nbu+jxc+jxt`, `nnc+nbu+jxt`, `nnc+nbu+nbu`, `nnc+nbu+nbu+jcm`, `nnc+nbu+nbu+jp+ef+jcr`, `nnc+nbu+ncn`, `nnc+nbu+ncn+jca`, `nnc+nbu+ncn+jcj`, `nnc+nbu+ncn+jcm`, `nnc+nbu+ncn+jxc`, `nnc+nbu+xsn`, `nnc+nbu+xsn+jca`, `nnc+nbu+xsn+jcm`, `nnc+nbu+xsn+jco`, `nnc+nbu+xsn+jcs`, `nnc+nbu+xsn+jp+ecc`, `nnc+nbu+xsn+jp+ef`, `nnc+nbu+xsn+jxc`, `nnc+nbu+xsn+jxc+jcm`, `nnc+nbu+xsn+jxt`, `nnc+nbu+xsv+etm`, `nnc+ncn`, `nnc+ncn+jca`, `nnc+ncn+jca+jxt`, `nnc+ncn+jcj`, `nnc+ncn+jcm`, `nnc+ncn+jco`, `nnc+ncn+jcs`, `nnc+ncn+jct`, `nnc+ncn+jp+ef`, `nnc+ncn+jp+etm`, `nnc+ncn+jxc`, `nnc+ncn+jxt`, `nnc+ncn+nbu`, `nnc+ncn+nbu+xsn+jca`, `nnc+ncn+ncn+jca+jxt`, `nnc+ncn+ncn+xsn`, `nnc+ncn+nnc+nnc`, `nnc+ncn+xsn`, `nnc+ncn+xsn+jp+etm`, `nnc+ncn+xsn+jxt`, `nnc+ncpa`, `nnc+ncpa+jcs`, `nnc+nnc`, `nnc+nnc+jca`, `nnc+nnc+jca+jxt`, `nnc+nnc+jcm`, `nnc+nnc+jco`, `nnc+nnc+jp+ef`, `nnc+nnc+nbu`, `nnc+nnc+nbu+jca`, `nnc+nnc+nbu+jcc`, `nnc+nnc+nbu+jcm`, `nnc+nnc+nbu+jco`, `nnc+nnc+nbu+jcs`, `nnc+nnc+nbu+jp+ep+ef`, `nnc+nnc+nbu+jp+etm`, 
`nnc+nnc+nbu+jxc`, `nnc+nnc+nbu+xsn`, `nnc+nnc+nbu+xsn+jcm`, `nnc+nnc+nbu+xsn+jxc`, `nnc+nnc+ncn+jco`, `nnc+nnc+nnc`, `nnc+nnc+nnc+nnc`, `nnc+nnc+su+jp+ef`, `nnc+nnc+xsn`, `nnc+nnc+xsn+jcm`, `nnc+nnc+xsn+nbu+jca`, `nnc+nnc+xsn+nbu+jcm`, `nnc+nnc+xsn+nbu+jco`, `nnc+nnc+xsn+nbu+jcs`, `nnc+nno+nbu`, `nnc+nno+nbu+jcc`, `nnc+su`, `nnc+su+jca`, `nnc+su+jcm`, `nnc+su+jco`, `nnc+su+jcs`, `nnc+su+jxc`, `nnc+su+xsn`, `nnc+xsn`, `nnc+xsn+jca`, `nnc+xsn+jca+jxt`, `nnc+xsn+jcm`, `nnc+xsn+jco`, `nnc+xsn+jcs`, `nnc+xsn+jp+ef`, `nnc+xsn+jxc`, `nnc+xsn+nbn+jca`, `nnc+xsn+nbu`, `nnc+xsn+nbu+jca`, `nnc+xsn+nbu+jcm`, `nnc+xsn+nbu+jco`, `nnc+xsn+nbu+jcs`, `nnc+xsn+nnc+nbu`, `nnc+xsn+nnc+nbu+jcm`, `nno`, `nno+jca`, `nno+jca+jxt`, `nno+jcj`, `nno+jcm`, `nno+jco`, `nno+jcs`, `nno+jxt`, `nno+nbn`, `nno+nbn+jcm`, `nno+nbn+xsn`, `nno+nbu`, `nno+nbu+jca`, `nno+nbu+jca+jxc`, `nno+nbu+jca+jxt`, `nno+nbu+jcc`, `nno+nbu+jcj`, `nno+nbu+jcm`, `nno+nbu+jco`, `nno+nbu+jcs`, `nno+nbu+jct`, `nno+nbu+jp+ecc`, `nno+nbu+jp+ecs`, `nno+nbu+jp+ef`, `nno+nbu+jp+ep+ecc`, `nno+nbu+jp+ep+ecs`, `nno+nbu+jp+ep+ef`, `nno+nbu+jp+etm`, `nno+nbu+jxc`, `nno+nbu+jxc+jca`, `nno+nbu+jxc+jcm`, `nno+nbu+jxc+jp+ef`, `nno+nbu+jxc+jp+etm`, `nno+nbu+jxc+jxc`, `nno+nbu+jxc+jxt`, `nno+nbu+jxt`, `nno+nbu+nbu`, `nno+nbu+ncn`, `nno+nbu+ncn+jp+ep+ef`, `nno+nbu+ncn+ncn`, `nno+nbu+xsn`, `nno+nbu+xsn+jca`, `nno+nbu+xsn+jcc`, `nno+nbu+xsn+jcm`, `nno+nbu+xsn+jxc`, `nno+nbu+xsn+jxt`, `nno+ncn`, `nno+ncn+jca`, `nno+ncn+jca+jxc`, `nno+ncn+jca+jxt`, `nno+ncn+jcm`, `nno+ncn+jco`, `nno+ncn+jcs`, `nno+ncn+jct`, `nno+ncn+jp+ef`, `nno+ncn+jp+etm`, `nno+ncn+jxc`, `nno+ncn+jxc+jxt`, `nno+ncn+ncn+jp+etm`, `nno+ncn+xsn`, `nno+ncn+xsn+jca`, `nno+ncn+xsn+jp+ep+ef`, `nno+ncn+xsn+jp+etm`, `nno+ncpa+jp+ep+etn+jca+jxc`, `nno+nnc`, `nno+xsn`, `nno+xsn+jca`, `nno+xsn+jca+jxc`, `nno+xsn+jxc`, `nno+xsn+jxc+jcs`, `nno+xsn+nbu`, `nno+xsn+nbu+jcm`, `npd`, `npd+jca`, `npd+jca+jcm`, `npd+jca+jp+ef`, `npd+jca+jp+ef+jca`, `npd+jca+jxc`, `npd+jca+jxc+jcm`, `npd+jca+jxt`, `npd+jcc`, `npd+jcj`, `npd+jcm`, `npd+jco`, `npd+jcs`, `npd+jct`, `npd+jct+jcm`, `npd+jct+jxt`, `npd+jp+ecc`, `npd+jp+ecs`, `npd+jp+ecs+jco`, `npd+jp+ecs+jxt`, `npd+jp+ef`, `npd+jp+ef+jca`, `npd+jp+ef+jcm`, `npd+jp+ef+jco`, `npd+jp+ef+jcr`, `npd+jp+ef+jcs`, `npd+jp+ef+jp+ef`, `npd+jp+ef+jp+etm`, `npd+jp+ef+jxc`, `npd+jp+ef+jxt`, `npd+jp+ep+ef`, `npd+jp+etm`, `npd+jxc`, `npd+jxc+jca`, `npd+jxc+jca+jxc`, `npd+jxc+jcc`, `npd+jxc+jcr`, `npd+jxc+jp+ef`, `npd+jxc+jxc`, `npd+jxc+jxt`, `npd+jxt`, `npd+nbn`, `npd+nbn+jca`, `npd+nbn+jcs`, `npd+nbn+jxc`, `npd+nbn+jxc+jxt`, `npd+ncn`, `npd+ncn+jca`, `npd+ncn+jca+jxc`, `npd+ncn+jcm`, `npd+ncn+jco`, `npd+ncn+jcs`, `npd+ncn+jxt`, `npd+npd`, `npd+xsn`, `npd+xsn+jca`, `npd+xsn+jca+jxc`, `npd+xsn+jca+jxt`, `npd+xsn+jcm`, `npd+xsn+jco`, `npd+xsn+jcs`, `npd+xsn+jct`, `npd+xsn+jp+ef`, `npd+xsn+jxc`, `npd+xsn+jxt`, `npp`, `npp+jca`, `npp+jca+jcm`, `npp+jca+jxc`, `npp+jca+jxc+jcm`, `npp+jca+jxt`, `npp+jcc`, `npp+jcj`, `npp+jcm`, `npp+jco`, `npp+jcs`, `npp+jcs+jxt`, `npp+jct`, `npp+jct+jcm`, `npp+jct+jxc`, `npp+jct+jxt`, `npp+jp+ecs`, `npp+jp+ecs+jco`, `npp+jp+ef`, `npp+jp+ef+jcs`, `npp+jp+ef+jxc+jcs`, `npp+jp+ef+jxt`, `npp+jp+ep+ecc`, `npp+jp+ep+ef`, `npp+jp+ep+etm`, `npp+jp+etm`, `npp+jxc`, `npp+jxc+jcc`, `npp+jxc+jcm`, `npp+jxc+jco`, `npp+jxt`, `npp+nbn+jca`, `npp+nbn+jcs`, `npp+ncn`, `npp+ncn+jca`, `npp+ncn+jca+jxc`, `npp+ncn+jca+jxt`, `npp+ncn+jcm`, `npp+ncn+jco`, `npp+ncn+jcs`, `npp+ncn+jct`, `npp+ncn+jct+jxt`, `npp+ncn+jp+ecs`, `npp+ncn+jxc`, `npp+ncn+jxt`, `npp+ncn+xsn`, `npp+ncpa`, 
`npp+ncpa+jca`, `npp+ncpa+jca+jxc`, `npp+ncpa+jcj`, `npp+ncpa+jcm`, `npp+ncpa+jco`, `npp+ncpa+jcs`, `npp+ncpa+jxt`, `npp+ncpa+ncpa+jca`, `npp+ncpa+xsn+jp+ecc`, `npp+ncpa+xsn+jp+etm`, `npp+npp+jco`, `npp+xsn`, `npp+xsn+jca`, `npp+xsn+jca+jxc`, `npp+xsn+jca+jxc+jxc`, `npp+xsn+jca+jxt`, `npp+xsn+jcj`, `npp+xsn+jcm`, `npp+xsn+jco`, `npp+xsn+jcs`, `npp+xsn+jcs+jxt`, `npp+xsn+jct`, `npp+xsn+jct+jcm`, `npp+xsn+jct+jxt`, `npp+xsn+jp+ecs`, `npp+xsn+jp+ef`, `npp+xsn+jp+etm`, `npp+xsn+jxc`, `npp+xsn+jxc+jcs`, `npp+xsn+jxc+jxt`, `npp+xsn+jxt`, `npp+xsn+ncn`, `npp+xsn+xsn`, `npp+xsn+xsn+jca`, `npp+xsn+xsn+jca+jxt`, `nq`, `nq+jca`, `nq+jca+jca`, `nq+jca+jca+jxc`, `nq+jca+jcm`, `nq+jca+jxc`, `nq+jca+jxc+jcm`, `nq+jca+jxc+jxc`, `nq+jca+jxt`, `nq+jcc`, `nq+jcj`, `nq+jcm`, `nq+jco`, `nq+jcr`, `nq+jcs`, `nq+jcs+jca+jxc`, `nq+jcs+jxt`, `nq+jct`, `nq+jct+jcm`, `nq+jct+jxt`, `nq+jp+ecc`, `nq+jp+ecs`, `nq+jp+ef`, `nq+jp+ef+jcr`, `nq+jp+ef+jcr+jxc`, `nq+jp+ep+ecc`, `nq+jp+ep+ecs`, `nq+jp+ep+ef`, `nq+jp+ep+etm`, `nq+jp+ep+etn`, `nq+jp+etm`, `nq+jp+etn+jco`, `nq+jxc`, `nq+jxc+jca+jxt`, `nq+jxc+jcm`, `nq+jxc+jcs`, `nq+jxc+jp+ef`, `nq+jxc+jp+ef+jcr`, `nq+jxc+jxc`, `nq+jxc+jxc+jxt`, `nq+jxc+jxt`, `nq+jxt`, `nq+nbn`, `nq+nbn+jca`, `nq+nbn+jcm`, `nq+nbn+jp+ep+ef`, `nq+ncn`, `nq+ncn+jca`, `nq+ncn+jca+jcm`, `nq+ncn+jca+jxc`, `nq+ncn+jca+jxt`, `nq+ncn+jcc`, `nq+ncn+jcj`, `nq+ncn+jcm`, `nq+ncn+jco`, `nq+ncn+jcs`, `nq+ncn+jct`, `nq+ncn+jct+jcm`, `nq+ncn+jct+jxc`, `nq+ncn+jct+jxt`, `nq+ncn+jp+ef`, `nq+ncn+jp+ep+ef`, `nq+ncn+jp+ep+etm`, `nq+ncn+jp+etm`, `nq+ncn+jxc`, `nq+ncn+jxc+jxt`, `nq+ncn+jxt`, `nq+ncn+ncn`, `nq+ncn+ncn+jca`, `nq+ncn+ncn+jca+jxt`, `nq+ncn+ncn+jcm`, `nq+ncn+ncn+jco`, `nq+ncn+ncn+jp+etm`, `nq+ncn+ncn+jxc`, `nq+ncn+ncn+ncn`, `nq+ncn+ncn+ncn+jca`, `nq+ncn+ncn+ncn+jcs`, `nq+ncn+ncn+xsn+jxt`, `nq+ncn+ncpa+jca`, `nq+ncn+ncpa+jcs`, `nq+ncn+ncpa+jxt`, `nq+ncn+ncpa+ncn`, `nq+ncn+ncpa+ncn+jcm`, `nq+ncn+xsn`, `nq+ncn+xsn+jca`, `nq+ncn+xsn+jca+jxt`, `nq+ncn+xsn+jcm`, `nq+ncn+xsn+jco`, `nq+ncn+xsn+jcs`, `nq+ncn+xsn+jct`, `nq+ncn+xsn+jp+etm`, `nq+ncn+xsn+jxt`, `nq+ncpa`, `nq+ncpa+jca`, `nq+ncpa+jcm`, `nq+ncpa+jco`, `nq+ncpa+jxt`, `nq+ncpa+ncn+jcm`, `nq+ncpa+ncn+jp+ef`, `nq+ncpa+ncn+jp+etm`, `nq+nq`, `nq+nq+jca`, `nq+nq+jcj`, `nq+nq+jcm`, `nq+nq+jcs`, `nq+nq+jct`, `nq+nq+jxc+jcs`, `nq+nq+jxt`, `nq+nq+ncn`, `nq+nq+ncn+jca`, `nq+nq+nq+jxt`, `nq+nq+nq+nq+jcm`, `nq+xsm+ecs`, `nq+xsm+etm`, `nq+xsn`, `nq+xsn+jca`, `nq+xsn+jca+jxc`, `nq+xsn+jca+jxt`, `nq+xsn+jcj`, `nq+xsn+jcm`, `nq+xsn+jco`, `nq+xsn+jcs`, `nq+xsn+jcs+jxt`, `nq+xsn+jct`, `nq+xsn+jct+jcm`, `nq+xsn+jp+ef`, `nq+xsn+jp+ef+jcr`, `nq+xsn+jp+ep+ef`, `nq+xsn+jp+etm`, `nq+xsn+jp+etn+jco`, `nq+xsn+jxc`, `nq+xsn+jxt`, `nq+xsn+xsn`, `nq+xsn+xsn+jcj`, `nq+xsn+xsn+jcs`, `nq+xsn+xsv+ep+etm`, `nq+xsv+ecs`, `paa+ecc`, `paa+ecc+jxc`, `paa+ecc+jxt`, `paa+ecs`, `paa+ecs+etm`, `paa+ecs+jca`, `paa+ecs+jcm`, `paa+ecs+jco`, `paa+ecs+jct`, `paa+ecs+jp+ecc`, `paa+ecs+jp+ep+ef`, `paa+ecs+jxc`, `paa+ecs+jxc+jxt`, `paa+ecs+jxt`, `paa+ecx`, `paa+ecx+jco`, `paa+ecx+jcs`, `paa+ecx+jxc`, `paa+ecx+jxt`, `paa+ecx+px+ecc`, `paa+ecx+px+ecs`, `paa+ecx+px+ecx`, `paa+ecx+px+ecx+jxc`, `paa+ecx+px+ecx+px+ecc`, `paa+ecx+px+ecx+px+ecx`, `paa+ecx+px+ecx+px+ef`, `paa+ecx+px+ecx+px+ep+ef`, `paa+ecx+px+ecx+px+etm`, `paa+ecx+px+ef`, `paa+ecx+px+ef+jcr`, `paa+ecx+px+ep+ecc`, `paa+ecx+px+ep+ecs`, `paa+ecx+px+ep+ef`, `paa+ecx+px+ep+ef+jcr`, `paa+ecx+px+ep+etm`, `paa+ecx+px+ep+etn`, `paa+ecx+px+ep+etn+jco`, `paa+ecx+px+etm`, `paa+ecx+px+etn`, `paa+ecx+px+etn+jca`, `paa+ecx+px+etn+jco`, `paa+ecx+px+etn+jcs`, 
`paa+ecx+px+etn+jxc`, `paa+ecx+px+etn+jxt`, `paa+ef`, `paa+ef+ecc`, `paa+ef+ecs`, `paa+ef+ecs+jxc`, `paa+ef+jca`, `paa+ef+jcm`, `paa+ef+jco`, `paa+ef+jcr`, `paa+ef+jcr+jxc`, `paa+ef+jcr+jxt`, `paa+ef+jxf`, `paa+ep+ecc`, `paa+ep+ecs`, `paa+ep+ecs+jxc`, `paa+ep+ef`, `paa+ep+ef+jcr`, `paa+ep+ef+jxc`, `paa+ep+ef+jxf`, `paa+ep+ef+jxt`, `paa+ep+ep+ecs`, `paa+ep+ep+ef`, `paa+ep+ep+etm`, `paa+ep+etm`, `paa+ep+etn`, `paa+ep+etn+jca`, `paa+ep+etn+jca+jxc`, `paa+ep+etn+jco`, `paa+ep+etn+jcs`, `paa+ep+etn+jxt`, `paa+etm`, `paa+etn`, `paa+etn+jca`, `paa+etn+jca+jxc`, `paa+etn+jca+jxt`, `paa+etn+jcc`, `paa+etn+jcj`, `paa+etn+jcm`, `paa+etn+jco`, `paa+etn+jcs`, `paa+etn+jct`, `paa+etn+jp+ecc`, `paa+etn+jp+ef`, `paa+etn+jp+ep+ecs`, `paa+etn+jp+ep+ef`, `paa+etn+jxc`, `paa+etn+jxt`, `paa+jxt`, `pad+ecc`, `pad+ecc+jxt`, `pad+ecs`, `pad+ecs+jxc`, `pad+ecs+jxt`, `pad+ecx`, `pad+ecx+jcs`, `pad+ecx+jxc`, `pad+ecx+jxt`, `pad+ecx+px+ecs`, `pad+ecx+px+ecx+px+ecc+jxt`, `pad+ef`, `pad+ef+jcr`, `pad+ef+jcr+jxt`, `pad+ef+jxf`, `pad+ef+jxt`, `pad+ep+ecc`, `pad+ep+ecs`, `pad+ep+ef`, `pad+ep+ef+jco`, `pad+ep+etm`, `pad+etm`, `pad+etn`, `pad+etn+jxt`, `pvd+ecc+jxc`, `pvd+ecs`, `pvd+ecs+jp+ecs`, `pvd+ecs+jxc`, `pvd+ecs+jxt`, `pvd+ecx`, `pvd+ep+ef`, `pvd+ep+etm`, `pvd+etm`, `pvd+etn`, `pvd+etn+jca`, `pvd+etn+jca+jxc`, `pvg+ecc`, `pvg+ecc+jxc`, `pvg+ecc+jxt`, `pvg+ecs`, `pvg+ecs+ecs`, `pvg+ecs+jca`, `pvg+ecs+jca+jxt`, `pvg+ecs+jcc`, `pvg+ecs+jcm`, `pvg+ecs+jco`, `pvg+ecs+jcs`, `pvg+ecs+jct`, `pvg+ecs+jp+ecs`, `pvg+ecs+jp+ef`, `pvg+ecs+jp+ep+ecs`, `pvg+ecs+jp+ep+ef`, `pvg+ecs+jp+ep+ef+jcr`, `pvg+ecs+jxc`, `pvg+ecs+jxc+jcc`, `pvg+ecs+jxc+jp+ef`, `pvg+ecs+jxc+jp+ep+ef`, `pvg+ecs+jxt`, `pvg+ecx`, `pvg+ecx+jco`, `pvg+ecx+jxc`, `pvg+ecx+jxt`, `pvg+ecx+jxt+px+ep+ef`, `pvg+ecx+px+ecc`, `pvg+ecx+px+ecc+jxc`, `pvg+ecx+px+ecc+jxt`, `pvg+ecx+px+ecs`, `pvg+ecx+px+ecs+jxc`, `pvg+ecx+px+ecs+jxt`, `pvg+ecx+px+ecx`, `pvg+ecx+px+ecx+jco`, `pvg+ecx+px+ecx+jxc`, `pvg+ecx+px+ecx+jxt`, `pvg+ecx+px+ecx+px+ecc`, `pvg+ecx+px+ecx+px+ecs`, `pvg+ecx+px+ecx+px+ecs+jxt`, `pvg+ecx+px+ecx+px+ecx`, `pvg+ecx+px+ecx+px+ecx+px+ecc`, `pvg+ecx+px+ecx+px+ef`, `pvg+ecx+px+ecx+px+ep+ecc`, `pvg+ecx+px+ecx+px+ep+ef`, `pvg+ecx+px+ecx+px+ep+etm`, `pvg+ecx+px+ecx+px+ep+etn+jco`, `pvg+ecx+px+ecx+px+etm`, `pvg+ecx+px+ecx+px+etn`, `pvg+ecx+px+ecx+px+etn+jca`, `pvg+ecx+px+ef`, `pvg+ecx+px+ef+jca`, `pvg+ecx+px+ef+jcm`, `pvg+ecx+px+ef+jcr`, `pvg+ecx+px+ep+ecc`, `pvg+ecx+px+ep+ecs`, `pvg+ecx+px+ep+ecs+jxc`, `pvg+ecx+px+ep+ef`, `pvg+ecx+px+ep+ef+jcm`, `pvg+ecx+px+ep+ef+jcr`, `pvg+ecx+px+ep+ef+jxf`, `pvg+ecx+px+ep+ep+ecs`, `pvg+ecx+px+ep+etm`, `pvg+ecx+px+ep+etn`, `pvg+ecx+px+ep+etn+jca`, `pvg+ecx+px+ep+etn+jca+jxc`, `pvg+ecx+px+ep+etn+jco`, `pvg+ecx+px+etm`, `pvg+ecx+px+etn`, `pvg+ecx+px+etn+jca`, `pvg+ecx+px+etn+jca+jxc`, `pvg+ecx+px+etn+jca+jxt`, `pvg+ecx+px+etn+jco`, `pvg+ecx+px+etn+jcs`, `pvg+ecx+px+etn+jct`, `pvg+ecx+px+etn+jxc`, `pvg+ecx+px+etn+jxc+jxt`, `pvg+ecx+px+etn+jxt`, `pvg+ef`, `pvg+ef+jca`, `pvg+ef+jcm`, `pvg+ef+jco`, `pvg+ef+jcr`, `pvg+ef+jcr+jxc`, `pvg+ef+jcr+jxt`, `pvg+ef+jcs`, `pvg+ef+jp+ef+jcr`, `pvg+ef+jp+etm`, `pvg+ef+jxc`, `pvg+ef+jxf`, `pvg+ef+jxt`, `pvg+ep+ecc`, `pvg+ep+ecc+jxt`, `pvg+ep+ecs`, `pvg+ep+ecs+jca+jxt`, `pvg+ep+ecs+jco`, `pvg+ep+ecs+jxc`, `pvg+ep+ecs+jxt`, `pvg+ep+ecx`, `pvg+ep+ecx+px+ef`, `pvg+ep+ef`, `pvg+ep+ef+jca`, `pvg+ep+ef+jcm`, `pvg+ep+ef+jco`, `pvg+ep+ef+jcr`, `pvg+ep+ef+jcr+jxc`, `pvg+ep+ef+jcr+jxt`, `pvg+ep+ef+jct`, `pvg+ep+ef+jxc`, `pvg+ep+ef+jxf`, `pvg+ep+ef+jxt`, `pvg+ep+ep+ef`, `pvg+ep+ep+ef+jco`, `pvg+ep+ep+ef+jxf`, 
`pvg+ep+etm`, `pvg+ep+etn`, `pvg+ep+etn+jca`, `pvg+ep+etn+jca+jxc`, `pvg+ep+etn+jca+jxt`, `pvg+ep+etn+jco`, `pvg+ep+etn+jcs`, `pvg+ep+etn+jxt`, `pvg+etm`, `pvg+etn`, `pvg+etn+jca`, `pvg+etn+jca+jxc`, `pvg+etn+jca+jxt`, `pvg+etn+jcc`, `pvg+etn+jcj`, `pvg+etn+jcm`, `pvg+etn+jco`, `pvg+etn+jcr`, `pvg+etn+jcs`, `pvg+etn+jct`, `pvg+etn+jct+jxt`, `pvg+etn+jp+ecc`, `pvg+etn+jp+ecs`, `pvg+etn+jp+ef`, `pvg+etn+jp+ef+jcr`, `pvg+etn+jp+ef+jcs`, `pvg+etn+jp+ep+ef`, `pvg+etn+jp+ep+ef+jcr`, `pvg+etn+jp+etm`, `pvg+etn+jxc`, `pvg+etn+jxc+jca+jxt`, `pvg+etn+jxc+jcm`, `pvg+etn+jxc+jco`, `pvg+etn+jxc+jcs`, `pvg+etn+jxc+jxt`, `pvg+etn+jxt`, `pvg+etn+xsm+ecs`, `pvg+etn+xsn+jcm`, `px+ecc`, `px+ecc+jxc`, `px+ecc+jxc+jp+ef`, `px+ecc+jxt`, `px+ecs`, `px+ecs+jca`, `px+ecs+jcc`, `px+ecs+jcj`, `px+ecs+jcm`, `px+ecs+jco`, `px+ecs+jp+ep+ef`, `px+ecs+jxc`, `px+ecs+jxt`, `px+ecx`, `px+ecx+jxc`, `px+ecx+jxt`, `px+ecx+px+ecs`, `px+ecx+px+ecx`, `px+ecx+px+ef`, `px+ecx+px+ef+jcr`, `px+ecx+px+ep+ef`, `px+ecx+px+etm`, `px+ecx+px+etn+jca`, `px+ef`, `px+ef+etm`, `px+ef+jca`, `px+ef+jca+jxc`, `px+ef+jcj`, `px+ef+jcm`, `px+ef+jco`, `px+ef+jcr`, `px+ef+jcr+jxc`, `px+ef+jcs`, `px+ef+jp+etm`, `px+ef+jxc`, `px+ef+jxf`, `px+ef+jxt`, `px+ep+ecc`, `px+ep+ecs`, `px+ep+ecs+jxc`, `px+ep+ecs+jxt`, `px+ep+ecx`, `px+ep+ef`, `px+ep+ef+jca`, `px+ep+ef+jco`, `px+ep+ef+jcr`, `px+ep+ef+jcr+jxc`, `px+ep+ef+jxf`, `px+ep+ep+ef`, `px+ep+ep+ef+jxf`, `px+ep+etm`, `px+ep+etn`, `px+ep+etn+jca`, `px+ep+etn+jca+jxc`, `px+ep+etn+jco`, `px+ep+etn+jcs`, `px+ep+etn+jxc`, `px+ep+etn+jxt`, `px+etm`, `px+etn`, `px+etn+jca`, `px+etn+jca+jxc`, `px+etn+jca+jxt`, `px+etn+jco`, `px+etn+jcs`, `px+etn+jct`, `px+etn+jxc`, `px+etn+jxc+jxt`, `px+etn+jxt`, `sf`, `sl`, `sp`, `sr`, `su`, `su+jca`, `su+jcm`, `xp+nbn`, `xp+nbu`, `xp+ncn`, `xp+ncn+jca`, `xp+ncn+jcm`, `xp+ncn+jco`, `xp+ncn+jcs`, `xp+ncn+jp+ef`, `xp+ncn+jp+ep+ef`, `xp+ncn+jxt`, `xp+ncn+ncn+jca`, `xp+ncn+ncn+jcm`, `xp+ncn+ncn+jco`, `xp+ncn+ncpa+jco`, `xp+ncn+xsn`, `xp+ncn+xsn+jca`, `xp+ncn+xsn+jcm`, `xp+ncn+xsn+jp+ef`, `xp+ncn+xsn+jp+etm`, `xp+ncpa`, `xp+ncpa+jca`, `xp+ncpa+jcm`, `xp+ncpa+jco`, `xp+ncpa+ncn+jcm`, `xp+ncpa+ncn+jco`, `xp+ncpa+ncpa+jco`, `xp+ncpa+xsn`, `xp+ncpa+xsn+jp+etm`, `xp+ncpa+xsv+ecc`, `xp+ncpa+xsv+ecs`, `xp+ncpa+xsv+ecx`, `xp+ncpa+xsv+ef`, `xp+ncpa+xsv+ef+jcr`, `xp+ncpa+xsv+ep+ef`, `xp+ncpa+xsv+etm`, `xp+ncpa+xsv+etn+jca`, `xp+ncps`, `xp+ncps+xsm+ecs`, `xp+ncps+xsm+ecx`, `xp+ncps+xsm+ef`, `xp+ncps+xsm+ep+ef`, `xp+ncps+xsm+etm`, `xp+ncps+xsn`, `xp+nnc`, `xp+nnc+jcm`, `xp+nnc+nbn`, `xp+nnc+nbu`, `xp+nnc+nbu+jcs`, `xp+nnc+ncn`, `xp+nnc+ncn+jca`, `xp+nnc+ncn+jcm`, `xp+nnc+ncn+jcs`, `xp+nnc+ncn+jp+ef+jcr`, `xp+nno`, `xp+nno+jcm`, `xp+nno+nbn+jca`, `xp+nno+nbu`, `xp+nno+nbu+jcs`, `xp+nno+ncn`, `xp+nno+ncn+jca`, `xp+nno+ncn+jcs`, `xp+nno+ncn+jxt`, `xp+nq`, `xp+nq+ncn+jca`, `xp+nq+ncpa`, `xp+nq+ncpa+jco`, `xp+nq+ncpa+jp+etm`, `xsm+etm`, `xsn`, `xsn+jca`, `xsn+jca+jxt`, `xsn+jco`, `xsn+jcs`, `xsn+jp+ef`, `xsn+jp+ep+ef`, `xsn+jxc+jca+jxt`, `xsn+jxc+jcs`, `xsn+jxt`, `xsv+ecc`, `xsv+ecs`, `xsv+ecx+px+ep+ef`, `xsv+ep+ecx`, `xsv+etm` | | **`morphologizer`** | `POS=CCONJ`, `POS=ADV`, `POS=SCONJ`, `POS=DET`, `POS=NOUN`, `POS=VERB`, `POS=ADJ`, `POS=PUNCT`, `POS=AUX`, `POS=PRON`, `POS=PROPN`, `POS=NUM`, `POS=INTJ`, `POS=PART`, `POS=X`, `POS=ADP`, `POS=SYM` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `dislocated`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `punct`, 
`vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `0`, `3`, `5`, `7`, `9`, `11`, `12`, `16`, `18`, `20`, `22`, `25`, `28`, `31`, `34`, `35`, `36`, `39`, `40`, `43`, `45`, `47`, `48`, `51`, `54`, `56`, `58`, `60`, `61`, `63`, `65`, `67`, `69`, `71`, `73`, `75`, `76`, `78`, `79`, `82`, `85`, `87`, `89`, `92`, `95`, `97`, `99`, `101`, `104`, `106`, `109`, `112`, `114`, `116`, `119`, `121`, `122`, `124`, `126`, `127`, `128`, `130`, `133`, `135`, `137`, `140`, `142`, `145`, `147`, `148`, `150`, `151`, `152`, `155`, `156`, `158`, `161`, `162`, `164`, `167`, `169`, `172`, `174`, `176`, `177`, `179`, `182`, `184`, `186`, `188`, `191`, `192`, `194`, `196`, `199`, `202`, `203`, `173`, `115`, `205`, `207`, `210`, `213`, `216`, `218`, `221`, `146`, `223`, `225`, `227`, `229`, `230`, `231`, `232`, `234`, `236`, `238`, `239`, `242`, `244`, `246`, `248`, `251`, `253`, `255`, `256`, `259`, `261`, `264`, `265`, `268`, `270`, `272`, `274`, `276`, `278`, `279`, `282`, `285`, `287`, `289`, `293`, `295`, `297`, `300`, `302`, `304`, `307`, `309`, `310`, `313`, `315`, `226`, `318`, `319`, `321`, `323`, `325`, `327`, `329`, `332`, `334`, `335`, `337`, `149`, `339`, `340`, `342`, `344`, `346`, `348`, `349`, `350`, `352`, `354`, `356`, `358`, `360`, `361`, `363`, `365`, `369`, `370`, `372`, `374`, `376`, `21`, `15`, `377`, `379`, `382`, `385`, `387`, `388`, `254`, `390`, `393`, `395`, `397`, `399`, `401`, `403`, `404`, `405`, `407`, `408`, `411`, `414`, `417`, `418`, `421`, `422`, `424`, `427`, `429`, `431`, `435`, `437`, `439`, `440`, `442`, `443`, `444`, `447`, `449`, `451`, `389`, `454`, `455`, `457`, `460`, `461`, `463`, `466`, `468`, `471`, `473`, `476`, `477`, `479`, `482`, `296`, `485`, `487`, `490`, `492`, `493`, `495`, `497`, `500`, `502`, `504`, `505`, `507`, `510`, `511`, `514`, `267`, `516`, `520`, `472`, `523`, `525`, `526`, `527`, `530`, `532`, `462`, `533`, `534`, `535`, `537`, `540`, `541`, `465`, `543`, `545`, `546`, `547`, `550`, `551`, `552`, `553`, `555`, `556`, `72`, `558`, `560`, `562`, `563`, `564`, `567`, `568`, `571`, `574`, `577`, `579`, `581`, `582`, `584`, `587`, `589`, `591`, `594`, `595`, `597`, `600`, `603`, `606`, `608`, `610`, `611`, `613`, `614`, `616`, `617`, `620`, `10`, `623`, `626`, `629`, `632`, `633`, `635`, `637`, `638`, `640`, `642`, `644`, `645`, `647`, `648`, `651`, `652`, `653`, `655`, `657`, `659`, `660`, `664`, `666`, `667`, `669`, `672`, `674`, `675`, `676`, `678`, `679`, `680`, `683`, `684`, `687`, `689`, `690`, `692`, `694`, `697`, `699`, `702`, `703`, `706`, `707`, `710`, `713`, `715`, `717`, `719`, `721`, `723`, `725`, `728`, `730`, `733`, `735`, `738`, `740`, `743`, `744`, `649`, `747`, `749`, `753`, `756`, `757`, `759`, `761`, `764`, `767`, `769`, `772`, `774`, `777`, `780`, `783`, `785`, `787`, `789`, `792`, `794`, `797`, `799`, `800`, `802`, `805`, `806`, `808`, `809`, `811`, `812`, `813`, `815`, `817`, `819`, `820`, `59`, `822`, `824`, `827`, `829`, `831`, `618`, `832`, `834`, `836`, `838`, `724`, `841`, `55`, `842`, `844`, `846`, `847`, `850`, `852`, `855`, `857`, `859`, `861`, `863`, `865`, `868`, `869`, `871`, `873`, `874`, `877`, `880`, `884`, `887`, `890`, `891`, `892`, `893`, `896`, `898`, `901`, `351`, `904`, `906`, `908`, `911`, `913`, `915`, `650`, `918`, `920`, `830`, `921`, `923`, `924`, `926`, `927`, `930`, `931`, `934`, `937`, `938`, `940`, `941`, `942`, `945`, `947`, `949`, `952`, `954`, `957`, `960`, `963`, `965`, `967`, `970`, `972`, `974`, `977`, `980`, `981`, `983`, `985`, `986`, `988`, `991`, `994`, `997`, `999`, `1000`, 
`1002`, `1005`, `1006`, `1007`, `1010`, `125`, `1013`, `1016`, `1017`, `1019`, `1020`, `1024`, `1026`, `1028`, `1030`, `1032`, `1034`, `1036`, `1038`, `1040`, `1041`, `1044`, `1045`, `1048`, `415`, `1051`, `1053`, `1055`, `1056`, `1058`, `1061`, `1063`, `1065`, `1067`, `1068`, `1069`, `1070`, `1074`, `946`, `1077`, `1079`, `1081`, `1083`, `1086`, `1088`, `1089`, `1092`, `936`, `1096`, `1098`, `1101`, `1104`, `1106`, `1108`, `1110`, `1112`, `1114`, `1116`, `1118`, `1119`, `1120`, `1085`, `1123`, `1125`, `1127`, `1031`, `1128`, `1131`, `1124`, `1134`, `1135`, `1137`, `1139`, `1142`, `1144`, `1145`, `1147`, `1150`, `1152`, `1156`, `1158`, `1159`, `1162`, `1164`, `1166`, `1167`, `1170`, `1172`, `1174`, `1176`, `1178`, `1180`, `1181`, `1183`, `1186`, `1187`, `1189`, `1192`, `1195`, `1198`, `1200`, `1201`, `1204`, `1206`, `1208`, `1209`, `763`, `1211`, `1212`, `1214`, `1215`, `1218`, `1220`, `1222`, `1225`, `1226`, `1227`, `1228`, `1230`, `1232`, `1234`, `1236`, `1237`, `1239`, `1241`, `181`, `1244`, `1245`, `1247`, `1249`, `1251`, `1253`, `1256`, `1257`, `1260`, `1261`, `1262`, `1264`, `1267`, `1268`, `1269`, `1272`, `1274`, `1277`, `1280`, `1283`, `1285`, `1287`, `1289`, `1290`, `1294`, `1296`, `1279`, `1298`, `1300`, `1303`, `1304`, `1306`, `1309`, `1311`, `1313`, `1314`, `1317`, `1319`, `1320`, `1324`, `1327`, `1329`, `1332`, `1334`, `1336`, `1338`, `1340`, `1342`, `1344`, `1345`, `303`, `1346`, `1349`, `1350`, `1352`, `1354`, `1356`, `1359`, `362`, `1360`, `1363`, `1365`, `1366`, `1367`, `1369`, `1370`, `1372`, `1374`, `1375`, `1378`, `1380`, `1384`, `1385`, `1389`, `1390`, `1393`, `1395`, `1398`, `1403`, `1404`, `1405`, `1407`, `1410`, `1413`, `1415`, `1418`, `1420`, `1422`, `1423`, `1425`, `1426`, `1428`, `1429`, `1431`, `1433`, `1435`, `1436`, `1438`, `1440`, `1442`, `1444`, `1447`, `1448`, `1449`, `1451`, `1452`, `105`, `1454`, `1456`, `1457`, `1459`, `1462`, `1463`, `1464`, `1466`, `1468`, `1470`, `1471`, `1475`, `810`, `1476`, `1478`, `1480`, `1483`, `1485`, `1487`, `1490`, `1493`, `450`, `1496`, `1498`, `1501`, `1504`, `1506`, `1508`, `1510`, `1513`, `1515`, `1517`, `1520`, `1523`, `1526`, `1529`, `1531`, `1535`, `1536`, `1538`, `1540`, `1542`, `1545`, `1548`, `1550`, `1554`, `1555`, `1558`, `1559`, `1561`, `1563`, `1565`, `1566`, `1568`, `1569`, `1572`, `1574`, `1576`, `1578`, `1580`, `1581`, `1582`, `1585`, `1586`, `1589`, `1591`, `1593`, `1596`, `1597`, `416`, `615`, `1599`, `1601`, `1603`, `1608`, `1611`, `840`, `1613`, `1614`, `1616`, `1618`, `1622`, `1624`, `1627`, `1630`, `1633`, `1636`, `1638`, `1642`, `1645`, `1647`, `1650`, `1653`, `1656`, `1659`, `1661`, `1664`, `1665`, `1668`, `1670`, `1671`, `1674`, `1676`, `1679`, `1680`, `1683`, `1685`, `1687`, `1689`, `1694`, `1697`, `1698`, `1699`, `1700`, `1702`, `1705`, `1706`, `1709`, `1711`, `1712`, `1714`, `1718`, `1720`, `1721`, `1723`, `1725`, `1726`, `1728`, `987`, `506`, `1730`, `1733`, `1735`, `1736`, `1738`, `1740`, `1741`, `1743`, `1745`, `1747`, `1748`, `166`, `1750`, `1752`, `1753`, `1755`, `1758`, `1761`, `1763`, `224`, `1764`, `1767`, `1768`, `1771`, `1773`, `1777`, `1779`, `1783`, `1786`, `1787`, `1791`, `1794`, `1797`, `1798`, `1799`, `1801`, `1804`, `1806`, `1807`, `1809`, `228`, `1810`, `1813`, `1814`, `1817`, `1819`, `1821`, `1824`, `1826`, `1829`, `1831`, `1833`, `1834`, `1835`, `1837`, `1839`, `1637`, `1840`, `1844`, `1846`, `905`, `1850`, `1851`, `1853`, `1855`, `1858`, `1859`, `1861`, `1862`, `1863`, `1866`, `1867`, `1869`, `1873`, `1875`, `1878`, `1879`, `1883`, `1884`, `1887`, `1890`, `1892`, `1895`, 
`1896`, `1899`, `1901`, `1903`, `1905`, `1907`, `1908`, `1909`, `1910`, `1912`, `1914`, `1917`, `1920`, `1922`, `1924`, `1926`, `1928`, `1929`, `1932`, `1933`, `1935`, `1936`, `1937`, `1940`, `1942`, `1944`, `1946`, `1947`, `1949`, `1952`, `1953`, `1956`, `1959`, `1960`, `1962`, `1964`, `1965`, `1966`, `1967`, `1970`, `1971`, `1972`, `1974`, `1975`, `1976`, `1977`, `1978`, `1979`, `1981`, `1982`, `1983`, `1985`, `1987`, `1991`, `673`, `1992`, `1994`, `1995`, `1997`, `1999`, `2002`, `2003`, `2005`, `2008`, `2010`, `2012`, `2013`, `2015`, `2017`, `2019`, `2020`, `2023`, `2026`, `2027`, `2030`, `2032`, `2034`, `2036`, `2038`, `2040`, `2041`, `2042`, `2045`, `2046`, `2048`, `2049`, `2051`, `2052`, `2053`, `1295`, `2054`, `536`, `2057`, `2059`, `2062`, `2064`, `2066`, `2067`, `2068`, `2072`, `2075`, `2076`, `2078`, `2081`, `2083`, `2085`, `2086`, `2088`, `2090`, `2091`, `2093`, `2096`, `2098`, `2099`, `2102`, `2104`, `2105`, `2107`, `2110`, `2111`, `17`, `2113`, `2116`, `2118`, `2121`, `2123`, `2124`, `2125`, `2127`, `2128`, `2129`, `2131`, `2133`, `2135`, `2137`, `2140`, `2141`, `2143`, `2145`, `2146`, `2147`, `2149`, `2151`, `2154`, `2155`, `2156`, `2159`, `2160`, `2161`, `2162`, `2163`, `2165`, `2168`, `1477`, `2170`, `2171`, `2173`, `2174`, `2175`, `2177`, `2180`, `2181`, `2183`, `2185`, `2187`, `2188`, `2190`, `2193`, `2195`, `2199`, `2202`, `2204`, `2205`, `2207`, `2210`, `2212`, `2213`, `2216`, `338`, `2218`, `2220`, `2222`, `2224`, `2226`, `2229`, `2231`, `2233`, `2236`, `2238`, `2240`, `2243`, `2245`, `2247`, `2248`, `593`, `2250`, `2251`, `2256`, `2258`, `2261`, `2263`, `2264`, `2266`, `2268`, `2271`, `2274`, `2277`, `2278`, `2281`, `2282`, `2284`, `2287`, `2289`, `2292`, `345`, `2294`, `2297`, `2299`, `2301`, `2304`, `2306`, `2308`, `2310`, `2312`, `2315`, `2317`, `2318`, `2321`, `2322`, `2323`, `1663`, `2324`, `2328`, `2331`, `2332`, `2335`, `2337`, `2339`, `2341`, `2344`, `2346`, `2348`, `2350`, `2354`, `2355`, `2359`, `2361`, `2363`, `2366`, `2368`, `2369`, `2372`, `2375`, `2376`, `2380`, `2384`, `2167`, `2385`, `2386`, `2388`, `2391`, `2393`, `2395`, `2397`, `2398`, `2400`, `2403`, `2404`, `2406`, `2410`, `2412`, `2414`, `2416`, `2418`, `1111`, `2420`, `2421`, `2422`, `2425`, `2428`, `2431`, `2433`, `2435`, `2437`, `2438`, `2439`, `2442`, `2445`, `2447`, `2448`, `2450`, `2453`, `2456`, `2459`, `2461`, `2462`, `2463`, `2466`, `2467`, `2470`, `2471`, `2473`, `2476`, `2478`, `2479`, `2482`, `2485`, `2486`, `2488`, `2489`, `2491`, `2494`, `2496`, `2498`, `2501`, `2503`, `2506`, `2507`, `2508`, `2510`, `2512`, `2513`, `2515`, `2517`, `2518`, `2520`, `2522`, `2526`, `2529`, `2531`, `1219`, `2534`, `2536`, `2538`, `2540`, `2542`, `2544`, `2546`, `2547`, `2549`, `2550`, `2552`, `2553`, `2556`, `2559`, `2561`, `2563`, `2565`, `2567`, `2569`, `2571`, `2573`, `2575`, `2577`, `2578`, `2579`, `2580`, `2583`, `2585`, `2587`, `2590`, `2594`, `2596`, `2598`, `2602`, `2605`, `2607`, `2609`, `2613`, `2614`, `2615`, `2616`, `2620`, `2621`, `2625`, `2626`, `2629`, `2631`, `2632`, `2634`, `2636`, `2639`, `2640`, `2642`, `2643`, `2644`, `2647`, `2648`, `2650`, `2653`, `2656`, `2658`, `864`, `2661`, `1052`, `2662`, `2664`, `2665`, `2666`, `2669`, `2672`, `2674`, `2676`, `2679`, `2680`, `2682`, `2684`, `2687`, `2688`, `2693`, `2695`, `2697`, `2699`, `2700`, `2703`, `2705`, `2686`, `2706`, `2709`, `2711`, `2714`, `2717`, `2719`, `2721`, `2725`, `2728`, `2730`, `2192`, `2731`, `2734`, `2735`, `2738`, `2739`, `2741`, `2744`, `2745`, `2747`, `2750`, `2753`, `2755`, `2758`, `2759`, `2761`, `2763`, `2766`, 
`2768`, `2769`, `2771`, `2773`, `2775`, `2776`, `2779`, `2782`, `2785`, `2786`, `2788`, `1406`, `2790`, `2791`, `2792`, `2793`, `2794`, `2796`, `2799`, `2801`, `2804`, `2807`, `2810`, `2813`, `2814`, `2816`, `2818`, `2820`, `2822`, `2824`, `2827`, `2828`, `2830`, `2833`, `2803`, `2835`, `2837`, `2839`, `2841`, `2844`, `2845`, `2846`, `2847`, `2849`, `2850`, `2852`, `2853`, `2854`, `2857`, `2859`, `2861`, `2863`, `2865`, `2867`, `2869`, `2871`, `2872`, `2874`, `2876`, `2878`, `2880`, `2882`, `2884`, `2886`, `2887`, `2891`, `2894`, `2895`, `2896`, `2897`, `2900`, `2903`, `2904`, `386`, `2906`, `2909`, `2912`, `2913`, `2915`, `2917`, `2919`, `2920`, `2923`, `2924`, `2925`, `2926`, `2928`, `2930`, `2932`, `2935`, `2938`, `2939`, `2940`, `2944`, `2946`, `2947`, `2951`, `2952`, `2955`, `2957`, `2961`, `2963`, `2965`, `2968`, `2971`, `275`, `2973`, `2975`, `2977`, `2980`, `2982`, `2984`, `2988`, `573`, `2990`, `2991`, `2993`, `2994`, `2995`, `2998`, `3001`, `3004`, `3007`, `3009`, `378`, `3012`, `3013`, `3014`, `3015`, `3018`, `3020`, `3022`, `3024`, `3026`, `3028`, `3031`, `3033`, `3036`, `3037`, `3039`, `3041`, `3042`, `3043`, `3044`, `3046`, `3048`, `3049`, `3050`, `3053`, `3055`, `3056`, `3058`, `3060`, `3062`, `3064`, `3066`, `3068`, `3071`, `3072`, `3073`, `3076`, `3078`, `3079`, `3081`, `3084`, `3085`, `3087`, `445`, `3089`, `3091`, `3093`, `3094`, `3097`, `3098`, `3100`, `456`, `3104`, `3106`, `3107`, `3109`, `3111`, `3113`, `3115`, `3117`, `3118`, `3121`, `3122`, `3124`, `3126`, `3128`, `3130`, `3132`, `3135`, `3136`, `3137`, `3138`, `3141`, `3142`, `3144`, `3146`, `1080`, `3151`, `3153`, `3155`, `3156`, `3160`, `98`, `3162`, `3163`, `3165`, `3166`, `3169`, `3171`, `3173`, `3175`, `3176`, `3179`, `3182`, `3185`, `3186`, `3189`, `3192`, `3195`, `3198`, `3199`, `3201`, `3203`, `3204`, `3205`, `3208`, `3209`, `2597`, `3210`, `3213`, `3216`, `3217`, `3218`, `1592`, `3221`, `3222`, `3224`, `3227`, `3229`, `3230`, `3231`, `3233`, `3237`, `3240`, `3243`, `3246`, `3248`, `3251`, `3252`, `3253`, `347`, `3255`, `3258`, `3260`, `3263`, `3266`, `3267`, `3271`, `3272`, `3275`, `3276`, `3279`, `3281`, `3283`, `3286`, `3289`, `3290`, `3293`, `3294`, `3295`, `3297`, `3299`, `3300`, `3301`, `3304`, `3307`, `3311`, `136`, `3313`, `3314`, `3316`, `3318`, `3320`, `3324`, `3326`, `3330`, `3333`, `3335`, `3337`, `3341`, `3343`, `3346`, `3350`, `3352`, `3353`, `3355`, `3356`, `3358`, `3360`, `3362`, `3364`, `3366`, `3369`, `3370`, `3372`, `3373`, `3376`, `3378`, `3380`, `2106`, `3382`, `3386`, `3387`, `3390`, `3392`, `3393`, `3395`, `3398`, `3400`, `3403`, `3404`, `3407`, `3409`, `3410`, `1762`, `3412`, `3414`, `3416`, `3418`, `3420`, `3421`, `3424`, `3427`, `3430`, `3432`, `3433`, `3435`, `3437`, `3438`, `3440`, `3442`, `1205`, `3445`, `3447`, `3448`, `3449`, `3453`, `3455`, `3456`, `3457`, `1626`, `3461`, `3464`, `3465`, `3468`, `3471`, `3472`, `3475`, `3478`, `3131`, `3480`, `3482`, `3483`, `3486`, `3489`, `3492`, `3494`, `3496`, `3497`, `3500`, `3502`, `3505`, `3506`, `3509`, `3511`, `3514`, `3516`, `3519`, `3522`, `3523`, `3525`, `3531`, `3534`, `3536`, `3538`, `3540`, `3541`, `3543`, `3546`, `3549`, `3551`, `3554`, `3555`, `3558`, `3560`, `3562`, `3565`, `3567`, `3569`, `3573`, `3574`, `3577`, `3579`, `3581`, `3584`, `3587`, `3590`, `3592`, `3595`, `3597`, `3599`, `3601`, `3604`, `3607`, `3610`, `3612`, `3615`, `3617`, `3620`, `3623`, `3627`, `3629`, `3632`, `3634`, `3635`, `3637`, `3639`, `3642`, `3645`, `3648`, `3649`, `3652`, `3654`, `3655`, `3657`, `3658`, `3660`, `3665`, `2016`, `3669`, `3670`, 
`3672`, `3674`, `3675`, `3676`, `3678`, `3680`, `3683`, `3686`, `3689`, `3690`, `3693`, `3696`, `3698`, `3700`, `3702`, `3704`, `3706`, `3708`, `3710`, `3712`, `3714`, `3716`, `3719`, `3720`, `3721`, `3723`, `3724`, `3726`, `3730`, `3732`, `3735`, `3736`, `3737`, `3738`, `3741`, `3743`, `3746`, `3748`, `3750`, `3752`, `3755`, `3756`, `3757`, `3759`, `3761`, `3764`, `3765`, `3767`, `3771`, `3772`, `3774`, `3776`, `3778`, `3781`, `3783`, `3784`, `3786`, `3789`, `3790`, `3793`, `3796`, `3799`, `3802`, `3805`, `3806`, `3807`, `3809`, `3811`, `3815`, `3817`, `3818`, `3823`, `3825`, `3828`, `3831`, `3832`, `3834`, `3836`, `3838`, `3841`, `3843`, `3845`, `3847`, `3848`, `3850`, `1800`, `3852`, `3854`, `3856`, `3858`, `3861`, `3865`, `3866`, `3868`, `3869`, `3873`, `3875`, `3878`, `3879`, `3881`, `3884`, `3886`, `3888`, `3891`, `3893`, `3895`, `3897`, `3898`, `3900`, `3901`, `3904`, `3907`, `3908`, `3910`, `3912`, `3913`, `3914`, `3916`, `3917`, `3919`, `3920`, `3923`, `3924`, `3926`, `3928`, `3930`, `3931`, `3934`, `3939`, `3941`, `3942`, `3944`, `3948`, `3950`, `3951`, `3953`, `3956`, `3957`, `3958`, `3960`, `3963`, `3966`, `3969`, `3971`, `3975`, `3977`, `3979`, `3980`, `3983`, `3985`, `3987`, `3990`, `3991`, `3992`, `3994`, `3997`, `4000`, `4002`, `4005`, `4006`, `4008`, `4010`, `4013`, `4015`, `4019`, `4021`, `4024`, `4026`, `4028`, `4030`, `3795`, `4031`, `4033`, `4035`, `4037`, `4039`, `4042`, `4044`, `4047`, `4049`, `4051`, `4054`, `2235`, `4056`, `4059`, `4061`, `4062`, `4063`, `4065`, `4067`, `4069`, `4071`, `4072`, `4075`, `4077`, `4080`, `4083`, `4086`, `4088`, `4090`, `4092`, `4094`, `4095`, `4097`, `4098`, `979`, `4099`, `4100`, `4102`, `4104`, `4107`, `4109`, `4111`, `4112`, `4113`, `4117`, `4118`, `4120`, `4122`, `4124`, `4125`, `4126`, `4128`, `4129`, `4131`, `4134`, `4135`, `4136`, `4138`, `4141`, `4143`, `4146`, `4148`, `4150`, `4152`, `4154`, `4157`, `4161`, `4163`, `4164`, `4167`, `4168`, `4170`, `4173`, `4175`, `4177`, `4178`, `4180`, `4183`, `4185`, `4188`, `4189`, `4190`, `4192`, `4193`, `4195`, `4197`, `4199`, `4201`, `4203`, `4204`, `4206`, `4208`, `4209`, `4211`, `4214`, `4216`, `4218`, `4220`, `4221`, `4224`, `4226`, `4228`, `4230`, `4232`, `4235`, `4238`, `4240`, `4242`, `4244`, `4247`, `4248`, `4250`, `4252`, `123`, `4254`, `4255`, `4256`, `4258`, `4260`, `4261`, `4262`, `4264`, `4266`, `4267`, `4269`, `4271`, `4273`, `4275`, `4278`, `4279`, `4281`, `4282`, `4283`, `4285`, `4286`, `4289`, `4292`, `4294`, `4297`, `4299`, `4302`, `4303`, `4305`, `4307`, `4308`, `4312`, `4314`, `4316`, `4318`, `4321`, `4323`, `4325`, `4327`, `4329`, `4332`, `4335`, `4336`, `4338`, `4341`, `4342`, `4343`, `4344`, `4347`, `4348`, `4351`, `4354`, `4357`, `4358`, `2303`, `4360`, `4363`, `4366`, `4368`, `4370`, `4371`, `4374`, `3317`, `4375`, `4378`, `4381`, `4384`, `4387`, `4390`, `4392`, `4394`, `4397`, `4399`, `3754`, `4401`, `4402`, `4405`, `4407`, `4410`, `4411`, `4414`, `4415`, `4417`, `4420`, `4422`, `4423`, `4426`, `4429`, `4430`, `4433`, `4435`, `4436`, `4438`, `4440`, `4441`, `4442`, `4444`, `4447`, `4450`, `4451`, `4453`, `4454`, `4455`, `4456`, `4458`, `4460`, `4462`, `4465`, `4467`, `4468`, `4469`, `4471`, `4475`, `4477`, `4480`, `4481`, `2509`, `4484`, `4486`, `4487`, `4490`, `4492`, `4493`, `4495`, `4496`, `4498`, `4500`, `4503`, `4506`, `4508`, `4511`, `4512`, `4514`, `4516`, `4518`, `4520`, `4522`, `4524`, `4527`, `4528`, `4529`, `4532`, `2176`, `4536`, `4539`, `4541`, `4542`, `4543`, `4544`, `4545`, `4548`, `4549`, `4551`, `4553`, `4555`, `4559`, `4562`, `4564`, `4566`, 
`4569`, `4571`, `4574`, `4577`, `4578`, `4581`, `4584`, `4586`, `4589`, `4592`, `4593`, `4596`, `4598`, `4602`, `4605`, `4607`, `4609`, `4611`, `4613`, `4616`, `4618`, `4620`, `4622`, `4624`, `4627`, `4628`, `4629`, `4630`, `4631`, `4632`, `4633`, `4635`, `4637`, `4638`, `4640`, `4642`, `4645`, `4647`, `4649`, `4651`, `4653`, `4655`, `4657`, `4659`, `4660`, `4661`, `4662`, `4664`, `4665`, `4666`, `4669`, `4672`, `4673`, `4675`, `4678`, `4679`, `4681`, `4682`, `4685`, `4687`, `4688`, `4689`, `4692`, `4695`, `4698`, `4700`, `4702`, `4705`, `4707`, `4710`, `4712`, `4713`, `4715`, `4718`, `4719`, `4722`, `4724`, `4726`, `4729`, `4730`, `4732`, `4733`, `4734`, `4736`, `4737`, `4739`, `4741`, `4743`, `4745`, `4748`, `4750`, `4753`, `4755`, `4757`, `4759`, `4762`, `4763`, `4765`, `4767`, `4768`, `4770`, `4772`, `4775`, `4777`, `4779`, `4781`, `4784`, `4786`, `4787`, `4790`, `3108`, `4793`, `4794`, `4797`, `4798`, `4801`, `4803`, `4805`, `4806`, `4808`, `4014`, `4809`, `4811`, `4813`, `4815`, `4818`, `4821`, `4824`, `4826`, `4827`, `4830`, `4833`, `4835`, `4650`, `4838`, `4841`, `4843`, `4844`, `4846`, `4847`, `4848`, `4849`, `4851`, `4853`, `4855`, `4857`, `4860`, `4861`, `4862`, `4865`, `4867`, `4868`, `4870`, `4874`, `4875`, `4877`, `4882`, `4883`, `4885`, `4888`, `4890`, `4891`, `4892`, `4894`, `4896`, `4899`, `4901`, `4904`, `4907`, `4908`, `4615`, `4911`, `4914`, `4916`, `4918`, `4920`, `4921`, `4924`, `4926`, `4929`, `4930`, `4931`, `4934`, `4936`, `4937`, `4939`, `4942`, `4945`, `4948`, `2484`, `4949`, `4950`, `4952`, `4953`, `4956`, `4957`, `914`, `4958`, `4959`, `4961`, `4963`, `4964`, `4967`, `4969`, `4970`, `4973`, `1259`, `4974`, `4977`, `4978`, `4979`, `4982`, `4984`, `4985`, `4988`, `4991`, `4994`, `4995`, `3747`, `4997`, `4999`, `5001`, `5002`, `5004`, `5006`, `5009`, `665`, `2784`, `1854`, `5011`, `5012`, `5014`, `5016`, `5018`, `5021`, `5022`, `5025`, `5028`, `5031`, `5033`, `5036`, `5037`, `5039`, `3846`, `5040`, `5042`, `5044`, `5046`, `5048`, `5050`, `5017`, `5053`, `5054`, `5055`, `5057`, `5059`, `5061`, `5064`, `5067`, `5068`, `5071`, `5074`, `5076`, `5078`, `5080`, `5083`, `5086`, `5088`, `5091`, `5093`, `5097`, `5099`, `5101`, `5102`, `5103`, `5104`, `5108`, `5110`, `5112`, `5085`, `5116`, `5118`, `5120`, `5122`, `5125`, `5126`, `5128`, `5130`, `5132`, `5134`, `5136`, `5139`, `5141`, `5142`, `5143`, `5146`, `5149`, `5151`, `5154`, `5156`, `5159`, `5162`, `5165`, `5167`, `5171`, `5173`, `5176`, `5178`, `5182`, `5185`, `5186`, `5187`, `5190`, `5192`, `5196`, `5197`, `5198`, `5199`, `5201`, `4873`, `5203`, `5207`, `5209`, `5212`, `5215`, `5217`, `5219`, `5222`, `5223`, `5225`, `5227`, `5229`, `5231`, `5232`, `5234`, `5236`, `5237`, `5240`, `5242`, `5244`, `5246`, `5248`, `5250`, `5251`, `5255`, `5257`, `5258`, `5259`, `5261`, `5264`, `5268`, `5271`, `2858`, `5272`, `5274`, `5275`, `5277`, `5278`, `5281`, `5282`, `5285`, `5287`, `5289`, `5291`, `5292`, `5295`, `5297`, `5298`, `5299`, `5300`, `5302`, `5303`, `5305`, `5307`, `446`, `5309`, `5310`, `5312`, `5315`, `5317`, `5318`, `5320`, `3824`, `5323`, `5324`, `5326`, `5329`, `5331`, `5333`, `5336`, `5338`, `5339`, `5340`, `5342`, `5345`, `5347`, `5349`, `5351`, `5353`, `5355`, `5357`, `3381`, `5358`, `5359`, `5360`, `796`, `5362`, `5365`, `5368`, `5369`, `5372`, `5374`, `5377`, `5379`, `5381`, `5382`, `5383`, `5384`, `5387`, `5389`, `5391`, `5392`, `5395`, `5396`, `5397`, `5400`, `5403`, `5406`, `5408`, `5411`, `5413`, `5415`, `5418`, `5422`, `5424`, `5425`, `5428`, `5431`, `5434`, `5435`, `5437`, `5438`, `5441`, `5442`, 
`5444`, `5446`, `5449`, `5452`, `5456`, `5458`, `5461`, `5466`, `5468`, `5471`, `5474`, `5476`, `5478`, `5481`, `5483`, `5486`, `5487`, `5489`, `5492`, `5493`, `5496`, `5498`, `5499`, `5501`, `5503`, `5504`, `5507`, `5510`, `5514`, `5516`, `5427`, `1805`, `5519`, `5521`, `5522`, `5523`, `5526`, `5527`, `5529`, `5531`, `5532`, `5534`, `5535`, `5536`, `5538`, `5539`, `5542`, `5545`, `5547`, `5548`, `5549`, `5550`, `5552`, `5554`, `5555`, `5556`, `5558`, `5560`, `5561`, `5564`, `5565`, `5567`, `5569`, `5572`, `5573`, `5575`, `5578`, `5581`, `5584`, `5588`, `5591`, `5593`, `5594`, `5596`, `5598`, `5599`, `5603`, `5605`, `5607`, `5609`, `5611`, `5612`, `5613`, `5615`, `5616`, `5619`, `5621`, `5622`, `5625`, `5627`, `5630`, `5633`, `5635`, `5639`, `5642`, `5645`, `5647`, `5650`, `5651`, `5652`, `5653`, `5654`, `5655`, `5656`, `5658`, `5659`, `5660`, `5663`, `5664`, `5667`, `5668`, `5669`, `5670`, `5671`, `5672`, `5676`, `5680`, `5682`, `5683`, `5684`, `5685`, `5687`, `5690`, `5692`, `5695`, `5696`, `5697`, `5699`, `5701`, `5703`, `5705`, `5706`, `5709`, `5710`, `5712`, `5714`, `5716`, `5718`, `5720`, `5723`, `5726`, `5727`, `5729`, `5732`, `5734`, `5736`, `5738`, `5741`, `5743`, `5744`, `5747`, `5749`, `5751`, `5752`, `5754`, `5756`, `5759`, `5760`, `5764`, `5766`, `3947`, `5769`, `5770`, `5774`, `5775`, `5777`, `5779`, `5782`, `5784`, `5786`, `5789`, `5791`, `5794`, `5796`, `5798`, `5799`, `5802`, `5804`, `5806`, `5810`, `5811`, `5813`, `5815`, `5817`, `5819`, `5821`, `5824`, `5825`, `5828`, `5830`, `5833`, `5834`, `5836`, `5837`, `5838`, `5840`, `5842`, `5844`, `5846`, `5848`, `5849`, `5850`, `5853`, `5855`, `5857`, `5858`, `483`, `5860`, `5863`, `5864`, `5866`, `5870`, `5872`, `5874`, `5876`, `5879`, `5881`, `5882`, `5885`, `5886`, `5887`, `5889`, `5891`, `5893`, `5895`, `5896`, `5898`, `5900`, `5902`, `5904`, `5906`, `5909`, `5911`, `5912`, `5914`, `5916`, `5921`, `5923`, `5924`, `5926`, `5928`, `5931`, `5933`, `5936`, `5938`, `5940`, `5941`, `5945`, `5947`, `5949`, `5951`, `5953`, `5956`, `5958`, `5960`, `5963`, `5965`, `5966`, `5968`, `5971`, `5973`, `5975`, `5978`, `5980`, `5983`, `5984`, `5986`, `5987`, `5990`, `5991`, `5994`, `5996`, `5997`, `5999`, `6001`, `6002`, `6005`, `6007`, `6009`, `6011`, `6014`, `6016`, `6020`, `6022`, `6024`, `6025`, `6028`, `6029`, `6030`, `6033`, `6036`, `6038`, `6039`, `6040`, `6042`, `6044`, `6045`, `6046`, `6048`, `6050`, `6052`, `6053`, `6054`, `6055`, `6058`, `6060`, `6062`, `6063`, `6065`, `3788`, `6068`, `6071`, `6073`, `6074`, `6077`, `6078`, `6080`, `6081`, `6084`, `1254`, `6087`, `6089`, `6091`, `6094`, `6095`, `6098`, `6099`, `266`, `6100`, `6102`, `6103`, `6104`, `6106`, `6107`, `6109`, `6110`, `4817`, `6112`, `6115`, `6117`, `6118`, `5491`, `3359`, `6119`, `6121`, `6123`, `6126`, `6128`, `6130`, `6132`, `6136`, `6137`, `6139`, `6141`, `6142`, `6145`, `6147`, `6149`, `6151`, `6154`, `6156`, `6157`, `6158`, `6160`, `6163`, `6165`, `6167`, `6168`, `6170`, `6174`, `6178`, `835`, `4523`, `6180`, `4485`, `6181`, `6184`, `6187`, `6190`, `6193`, `6197`, `6199`, `6200`, `6202`, `6204`, `6205`, `6207`, `6209`, `6212`, `6215`, `6218`, `6219`, `6221`, `6223`, `6226`, `6228`, `6230`, `6233`, `6234`, `6236`, `6238`, `6241`, `6243`, `6246`, `6248`, `6250`, `6251`, `6253`, `6254`, `6256`, `6257`, `6259`, `6261`, `6264`, `6266`, `6267`, `6268`, `6270`, `6272`, `6275`, `6277`, `6279`, `2497`, `6282`, `6284`, `6287`, `6289`, `6290`, `6291`, `6293`, `6296`, `6297`, `6300`, `6302`, `6303`, `6307`, `6309`, `6311`, `2972`, `6314`, `6317`, `6319`, `6322`, `6324`, 
`6326`, `6328`, `6331`, `6332`, `6334`, `6336`, `6338`, `6339`, `6341`, `6344`, `6345`, `6346`, `6349`, `6352`, `6353`, `6355`, `6356`, `6359`, `5488`, `6361`, `6362`, `6365`, `6366`, `6368`, `6371`, `6373`, `6375`, `6377`, `6378`, `6380`, `6383`, `6386`, `6388`, `6390`, `6391`, `1351`, `6393`, `6395`, `6396`, `6397`, `6399`, `6401`, `6402`, `6403`, `6406`, `6408`, `6409`, `6411`, `6414`, `6417`, `6420`, `6423`, `6425`, `6429`, `6430`, `6431`, `910`, `6433`, `6434`, `6435`, `6437`, `2487`, `6439`, `6441`, `6445`, `6448`, `6450`, `6454`, `6456`, `6458`, `6460`, `6463`, `6464`, `6467`, `6468`, `6470`, `6472`, `6474`, `6477`, `6478`, `6479`, `6480`, `6482`, `6485`, `6486`, `6489`, `6491`, `6494`, `6497`, `6499`, `6502`, `6504`, `6505`, `6507`, `6509`, `6510`, `6512`, `6515`, `6516`, `6517`, `6519`, `6522`, `6524`, `6525`, `6527`, `6529`, `6532`, `1640`, `6533`, `6534`, `6536`, `6539`, `6542`, `4355`, `6545`, `6546`, `6548`, `6550`, `6551`, `6552`, `6555`, `6556`, `6558`, `6560`, `6563`, `6564`, `6566`, `6568`, `6569`, `6570` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 100.00 | | `TOKEN_P` | 100.00 | | `TOKEN_R` | 100.00 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 100.00 | | `SENTS_P` | 100.00 | | `SENTS_R` | 100.00 | | `TAG_ACC` | 88.93 | | `POS_ACC` | 96.52 | | `MORPH_ACC` | 100.00 | | `MORPH_PER_FEAT` | 0.00 | | `DEP_UAS` | 89.48 | | `DEP_LAS` | 87.18 | | `LEMMA_ACC` | 94.51 |
{"language": ["ko"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/ko_udv25_koreankaist_trf
null
[ "spacy", "token-classification", "ko", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ko" ]
TAGS #spacy #token-classification #ko #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Korean-Kaist ### Label Scheme View label scheme (5329 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (5329 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #ko #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (5329 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Lithuanian-ALKSNIS | Feature | Description | | --- | --- | | **Name** | `lt_udv25_lithuanianalksnis_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (3674 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `.`, `akr.`, `bdv.aukšt.mot.dgs.K.`, `bdv.aukšt.mot.dgs.V.`, `bdv.aukšt.mot.dgs.Vt.`, `bdv.aukšt.mot.dgs.Įn.`, `bdv.aukšt.mot.vns.G.`, `bdv.aukšt.mot.vns.K.`, `bdv.aukšt.mot.vns.V.`, `bdv.aukšt.vyr.dgs.G.`, `bdv.aukšt.vyr.dgs.K.`, `bdv.aukšt.vyr.dgs.N.`, `bdv.aukšt.vyr.dgs.V.`, `bdv.aukšt.vyr.dgs.Vt.`, `bdv.aukšt.vyr.dgs.Įn.`, `bdv.aukšt.vyr.vns.G.`, `bdv.aukšt.vyr.vns.K.`, `bdv.aukšt.vyr.vns.N.`, `bdv.aukšt.vyr.vns.V.`, `bdv.aukšt.vyr.vns.Vt.`, `bdv.aukšt.vyr.vns.Įn.`, `bdv.aukšč.bev.`, `bdv.aukšč.mot.dgs.G.`, `bdv.aukšč.mot.dgs.K.`, `bdv.aukšč.mot.dgs.V.`, `bdv.aukšč.mot.dgs.Įn.`, `bdv.aukšč.mot.vns.K.`, `bdv.aukšč.mot.vns.V.`, `bdv.aukšč.mot.vns.Vt.`, `bdv.aukšč.mot.vns.Įn.`, `bdv.aukšč.vyr.dgs.G.`, `bdv.aukšč.vyr.dgs.K.`, `bdv.aukšč.vyr.dgs.V.`, `bdv.aukšč.vyr.dgs.Vt.`, `bdv.aukšč.vyr.dgs.Įn.`, `bdv.aukšč.vyr.vns.G.`, `bdv.aukšč.vyr.vns.K.`, `bdv.aukšč.vyr.vns.V.`, `bdv.aukšč.vyr.vns.Įn.`, `bdv.aukšč.įvardž.mot.vns.K.`, `bdv.nelygin.`, `bdv.nelygin..vyr.vns.K.`, `bdv.nelygin.bev.`, `bdv.nelygin.mot.dgs.G.`, `bdv.nelygin.mot.dgs.K.`, `bdv.nelygin.mot.dgs.N.`, `bdv.nelygin.mot.dgs.V`, `bdv.nelygin.mot.dgs.V.`, `bdv.nelygin.mot.dgs.Vt.`, `bdv.nelygin.mot.dgs.Įn.`, `bdv.nelygin.mot.vns.G.`, `bdv.nelygin.mot.vns.K.`, `bdv.nelygin.mot.vns.N.`, `bdv.nelygin.mot.vns.V.`, `bdv.nelygin.mot.vns.Vt.`, `bdv.nelygin.mot.vns.Įn.`, `bdv.nelygin.vyr.dgs.G.`, `bdv.nelygin.vyr.dgs.K.`, `bdv.nelygin.vyr.dgs.N.`, `bdv.nelygin.vyr.dgs.V.`, `bdv.nelygin.vyr.dgs.Vt.`, `bdv.nelygin.vyr.dgs.Įn.`, `bdv.nelygin.vyr.vns.G.`, `bdv.nelygin.vyr.vns.K.`, `bdv.nelygin.vyr.vns.N.`, `bdv.nelygin.vyr.vns.V.`, `bdv.nelygin.vyr.vns.Vt.`, `bdv.nelygin.vyr.vns.Įn.`, `bdv.nelygin.įvardž.mot.dgs.G.`, `bdv.nelygin.įvardž.mot.dgs.K.`, `bdv.nelygin.įvardž.mot.dgs.N.`, `bdv.nelygin.įvardž.mot.dgs.V.`, `bdv.nelygin.įvardž.mot.dgs.Įn.`, `bdv.nelygin.įvardž.mot.vns.G.`, `bdv.nelygin.įvardž.mot.vns.K.`, `bdv.nelygin.įvardž.mot.vns.N.`, `bdv.nelygin.įvardž.mot.vns.V.`, `bdv.nelygin.įvardž.mot.vns.Vt.`, `bdv.nelygin.įvardž.mot.vns.Įn.`, `bdv.nelygin.įvardž.vyr.dgs.G.`, `bdv.nelygin.įvardž.vyr.dgs.K.`, `bdv.nelygin.įvardž.vyr.dgs.V.`, `bdv.nelygin.įvardž.vyr.dgs.Vt.`, `bdv.nelygin.įvardž.vyr.dgs.Įn.`, `bdv.nelygin.įvardž.vyr.vns.G.`, `bdv.nelygin.įvardž.vyr.vns.K.`, `bdv.nelygin.įvardž.vyr.vns.N.`, `bdv.nelygin.įvardž.vyr.vns.V.`, `bdv.nelygin.įvardž.vyr.vns.Vt.`, `bdv.nelygin.įvardž.vyr.vns.Įn.`, `būdv.nelygin.įvardž.vyr.dgs.K.`, `dkt.`, `dkt.bendr.dgs.V.`, `dkt.bendr.vns.K.`, `dkt.bendr.vns.N.`, `dkt.bendr.vns.V.`, `dkt.mot.`, 
`dkt.mot.dgs.G.`, `dkt.mot.dgs.K.`, `dkt.mot.dgs.N.`, `dkt.mot.dgs.V.`, `dkt.mot.dgs.Vt.`, `dkt.mot.dgs.Įn.`, `dkt.mot.vns.G.`, `dkt.mot.vns.Il.`, `dkt.mot.vns.K`, `dkt.mot.vns.K.`, `dkt.mot.vns.N.`, `dkt.mot.vns.V.`, `dkt.mot.vns.Vt.`, `dkt.mot.vns.Įn.`, `dkt.mot.vns.Įv.`, `dkt.mot.vns.Š.`, `dkt.sngr.vyr.dgs.G.`, `dkt.sngr.vyr.dgs.K.`, `dkt.sngr.vyr.dgs.V.`, `dkt.sngr.vyr.dgs.Įn.`, `dkt.sngr.vyr.vns.G.`, `dkt.sngr.vyr.vns.K.`, `dkt.sngr.vyr.vns.N.`, `dkt.sngr.vyr.vns.V.`, `dkt.sngr.vyr.vns.Įn.`, `dkt.tikr.`, `dkt.tikr.mot.`, `dkt.tikr.mot.dgs.K.`, `dkt.tikr.mot.vns.G.`, `dkt.tikr.mot.vns.K.`, `dkt.tikr.mot.vns.N.`, `dkt.tikr.mot.vns.V.`, `dkt.tikr.mot.vns.Vt.`, `dkt.tikr.mot.vns.Įn.`, `dkt.tikr.vyr.dgs.K.`, `dkt.tikr.vyr.vns.G.`, `dkt.tikr.vyr.vns.K.`, `dkt.tikr.vyr.vns.N.`, `dkt.tikr.vyr.vns.V.`, `dkt.tikr.vyr.vns.Vt.`, `dkt.tikr.vyr.vns.Įn.`, `dkt.vyr.`, `dkt.vyr.dgs.G.`, `dkt.vyr.dgs.K.`, `dkt.vyr.dgs.N.`, `dkt.vyr.dgs.V.`, `dkt.vyr.dgs.Vt.`, `dkt.vyr.dgs.v.`, `dkt.vyr.dgs.Įn.`, `dkt.vyr.vns,K.`, `dkt.vyr.vns.G.`, `dkt.vyr.vns.Il.`, `dkt.vyr.vns.K.`, `dkt.vyr.vns.N.`, `dkt.vyr.vns.V.`, `dkt.vyr.vns.Vt.`, `dkt.vyr.vns.vt.`, `dkt.vyr.vns.Įn.`, `dkt.vyr.vns.Š.`, `dktv.mot.vns.K.`, `dll`, `dll.`, `dlv.neveik.es.mot.vns.V.`, `jng.`, `jst.`, `kita`, `kita.`, `prl.G.`, `prl.K.`, `prl.Įn.`, `prv.aukšt.`, `prv.aukšč.`, `prv.nelygin.`, `prv.neygin.`, `prv.sampl.nelygin.`, `samp.įv.mot.dgs.N.`, `sampl.dll.`, `sampl.jng.`, `sampl.jst.`, `sampl.prv.`, `sampl.prv.nelyg.`, `sampl.prv.nelygin.`, `sampl.sktv.`, `sampl.sktv.raid.kiek.`, `sampl.sutr.`, `sampl.užs.`, `sampl.vksm.pad.es.`, `sampl.įv.`, `sampl.įv.G.`, `sampl.įv.K.`, `sampl.įv.V.`, `sampl.įv.bev.`, `sampl.įv.mot.dgs.G.`, `sampl.įv.mot.dgs.K.`, `sampl.įv.mot.dgs.V.`, `sampl.įv.mot.dgs.Vt.`, `sampl.įv.mot.dgs.Įn.`, `sampl.įv.mot.vns.G.`, `sampl.įv.mot.vns.K.`, `sampl.įv.mot.vns.N.`, `sampl.įv.mot.vns.V.`, `sampl.įv.mot.vns.Vt.`, `sampl.įv.mot.vns.Įn.`, `sampl.įv.vyr.dgs.G.`, `sampl.įv.vyr.dgs.K.`, `sampl.įv.vyr.dgs.N.`, `sampl.įv.vyr.dgs.V.`, `sampl.įv.vyr.dgs.Vt.`, `sampl.įv.vyr.dgs.Įn.`, `sampl.įv.vyr.vns.G.`, `sampl.įv.vyr.vns.K.`, `sampl.įv.vyr.vns.V.`, `sampl.įv.vyr.vns.Vt.`, `sampl.įv.vyr.vns.Įn.`, `sampl.įv.Įn.`, `sktv.`, `sktv.arab`, `sktv.arab.`, `sktv.kelint.mot.vns.Vt.`, `sktv.kelint.įvardž.mot.vns.V.`, `sktv.kelint.įvardž.vyr.vns.G.`, `sktv.kiek.mot.V.`, `sktv.kiek.vyr.dgs.G.`, `sktv.mišr.`, `sktv.mišr.kelint.įvardž.mot.vns.G.`, `sktv.mišr.kelint.įvardž.mot.vns.K.`, `sktv.mišr.kelint.įvardž.mot.vns.V.`, `sktv.mišr.kelint.įvardž.vyr.vns.G.`, `sktv.mišr.kelint.įvardž.vyr.vns.K.`, `sktv.mišr.kelint.įvardž.vyr.vns.Vt.`, `sktv.raid.daugin.vyr.G.`, `sktv.raid.daugin.vyr.K.`, `sktv.raid.kelint.bev.`, `sktv.raid.kelint.mot.vns.K.`, `sktv.raid.kelint.mot.vns.V.`, `sktv.raid.kelint.mot.vns.Vt.`, `sktv.raid.kelint.vyr.dgs.K.`, `sktv.raid.kelint.vyr.dgs.V.`, `sktv.raid.kelint.vyr.dgs.Vt.`, `sktv.raid.kelint.vyr.dgs.Įn.`, `sktv.raid.kelint.vyr.vns.G.`, `sktv.raid.kelint.vyr.vns.K.`, `sktv.raid.kelint.vyr.vns.V.`, `sktv.raid.kelint.vyr.vns.Vt.`, `sktv.raid.kelint.įvardž.mot.vns.G.`, `sktv.raid.kelint.įvardž.mot.vns.K.`, `sktv.raid.kelint.įvardž.mot.vns.N.`, `sktv.raid.kelint.įvardž.mot.vns.V.`, `sktv.raid.kelint.įvardž.mot.vns.Vt.`, `sktv.raid.kelint.įvardž.vyr.dgs.K.`, `sktv.raid.kelint.įvardž.vyr.dgs.N.`, `sktv.raid.kelint.įvardž.vyr.dgs.V.`, `sktv.raid.kelint.įvardž.vyr.dgs.Įn.`, `sktv.raid.kelint.įvardž.vyr.vns.G.`, `sktv.raid.kelint.įvardž.vyr.vns.K.`, `sktv.raid.kelint.įvardž.vyr.vns.V.`, `sktv.raid.kiek.`, `sktv.raid.kiek.K.`, 
`sktv.raid.kiek.mot.G.`, `sktv.raid.kiek.mot.K.`, `sktv.raid.kiek.mot.N.`, `sktv.raid.kiek.mot.V.`, `sktv.raid.kiek.mot.Vt.`, `sktv.raid.kiek.mot.dgs.V.`, `sktv.raid.kiek.mot.vns.G.`, `sktv.raid.kiek.mot.vns.K.`, `sktv.raid.kiek.mot.vns.Įn.`, `sktv.raid.kiek.mot.Įn.`, `sktv.raid.kiek.vyr.G.`, `sktv.raid.kiek.vyr.K.`, `sktv.raid.kiek.vyr.N.`, `sktv.raid.kiek.vyr.V.`, `sktv.raid.kiek.vyr.Vt.`, `sktv.raid.kiek.vyr.dgs.K.`, `sktv.raid.kiek.vyr.dgs.V.`, `sktv.raid.kiek.vyr.vns.G.`, `sktv.raid.kiek.vyr.vns.K.`, `sktv.raid.kiek.vyr.vns.V.`, `sktv.raid.kiek.vyr.Įn.`, `sktv.raid.kiekin.mot.vns.G.`, `sktv.raid.kiekin.mot.vns.V.`, `sktv.raid.kuopin.G.`, `sktv.rom.`, `skyr.`, `sutr.`, `tęs`, `tęs.`, `tęs.sktv.raid.kelint.vyr.vns.G.`, `tęs.įv.vyr.dgs.G.`, `tęs.įv.vyr.dgs.N.`, `tęs.įv.vyr.vns.G.`, `tęs.įv.vyr.vns.N.`, `tęs.įv.vyr.vns.V.`, `tęs.įv.vyr.vns.Įn.`, `užs.`, `vksm.asm.liep.dgs.1.`, `vksm.asm.liep.dgs.2.`, `vksm.asm.liep.vns.2.`, `vksm.asm.liep.vns.3.`, `vksm.asm.neig.liep.dgs.2.`, `vksm.asm.neig.liep.vns.2.`, `vksm.asm.neig.sngr.liep.dgs.2.`, `vksm.asm.neig.sngr.tar.3.`, `vksm.asm.neig.sngr.tar.dgs.1.`, `vksm.asm.neig.sngr.tar.vns.1.`, `vksm.asm.neig.sngr.tar.vns.3.`, `vksm.asm.neig.sngr.tiesiog.būs.vns.2.`, `vksm.asm.neig.sngr.tiesiog.būs.vns.3.`, `vksm.asm.neig.sngr.tiesiog.būt-k.3.`, `vksm.asm.neig.sngr.tiesiog.būt-k.dgs.3.`, `vksm.asm.neig.sngr.tiesiog.būt-k.vns.1.`, `vksm.asm.neig.sngr.tiesiog.būt-k.vns.3.`, `vksm.asm.neig.sngr.tiesiog.es.3.`, `vksm.asm.neig.sngr.tiesiog.es.dgs.3.`, `vksm.asm.neig.sngr.tiesiog.es.vns.1.`, `vksm.asm.neig.sngr.tiesiog.es.vns.3.`, `vksm.asm.neig.tar.3.`, `vksm.asm.neig.tar.dgs.1.`, `vksm.asm.neig.tar.dgs.3.`, `vksm.asm.neig.tar.vns.1.`, `vksm.asm.neig.tar.vns.2.`, `vksm.asm.neig.tar.vns.3.`, `vksm.asm.neig.tiesiog.būs.3.`, `vksm.asm.neig.tiesiog.būs.dgs.1.`, `vksm.asm.neig.tiesiog.būs.dgs.3.`, `vksm.asm.neig.tiesiog.būs.vns.1.`, `vksm.asm.neig.tiesiog.būs.vns.2.`, `vksm.asm.neig.tiesiog.būs.vns.3.`, `vksm.asm.neig.tiesiog.būt-d.vns.1.`, `vksm.asm.neig.tiesiog.būt-d.vns.3.`, `vksm.asm.neig.tiesiog.būt-k.3.`, `vksm.asm.neig.tiesiog.būt-k.dgs.1.`, `vksm.asm.neig.tiesiog.būt-k.dgs.3.`, `vksm.asm.neig.tiesiog.būt-k.vns.1.`, `vksm.asm.neig.tiesiog.būt-k.vns.2.`, `vksm.asm.neig.tiesiog.būt-k.vns.3.`, `vksm.asm.neig.tiesiog.es.3.`, `vksm.asm.neig.tiesiog.es.dgs.1.`, `vksm.asm.neig.tiesiog.es.dgs.2.`, `vksm.asm.neig.tiesiog.es.dgs.3.`, `vksm.asm.neig.tiesiog.es.vns.1.`, `vksm.asm.neig.tiesiog.es.vns.2.`, `vksm.asm.neig.tiesiog.es.vns.3.`, `vksm.asm.sngr.liep.dgs.1.`, `vksm.asm.sngr.liep.dgs.2.`, `vksm.asm.sngr.liep.vns.2.`, `vksm.asm.sngr.tar.3.`, `vksm.asm.sngr.tar.dgs.3.`, `vksm.asm.sngr.tar.vns.1.`, `vksm.asm.sngr.tar.vns.3.`, `vksm.asm.sngr.tiesiog.būs.dgs.1.`, `vksm.asm.sngr.tiesiog.būs.dgs.2.`, `vksm.asm.sngr.tiesiog.būs.dgs.3.`, `vksm.asm.sngr.tiesiog.būs.vns.2.`, `vksm.asm.sngr.tiesiog.būs.vns.3.`, `vksm.asm.sngr.tiesiog.būt-d.dgs.3.`, `vksm.asm.sngr.tiesiog.būt-d.vns.1.`, `vksm.asm.sngr.tiesiog.būt-d.vns.3.`, `vksm.asm.sngr.tiesiog.būt-k.3.`, `vksm.asm.sngr.tiesiog.būt-k.dgs.1.`, `vksm.asm.sngr.tiesiog.būt-k.dgs.3.`, `vksm.asm.sngr.tiesiog.būt-k.vns.1.`, `vksm.asm.sngr.tiesiog.būt-k.vns.3.`, `vksm.asm.sngr.tiesiog.es.3.`, `vksm.asm.sngr.tiesiog.es.dgs.1.`, `vksm.asm.sngr.tiesiog.es.dgs.3.`, `vksm.asm.sngr.tiesiog.es.vns.1.`, `vksm.asm.sngr.tiesiog.es.vns.2.`, `vksm.asm.sngr.tiesiog.es.vns.3.`, `vksm.asm.tar.3.`, `vksm.asm.tar.dgs.1.`, `vksm.asm.tar.dgs.2.`, `vksm.asm.tar.dgs.3.`, `vksm.asm.tar.vns.1.`, `vksm.asm.tar.vns.2.`, `vksm.asm.tar.vns.3.`, 
`vksm.asm.tiesiog.būs.3.`, `vksm.asm.tiesiog.būs.dgs.1.`, `vksm.asm.tiesiog.būs.dgs.2.`, `vksm.asm.tiesiog.būs.dgs.3.`, `vksm.asm.tiesiog.būs.vns.1.`, `vksm.asm.tiesiog.būs.vns.2.`, `vksm.asm.tiesiog.būs.vns.3.`, `vksm.asm.tiesiog.būt-d.3.`, `vksm.asm.tiesiog.būt-d.dgs.3.`, `vksm.asm.tiesiog.būt-d.vns.1.`, `vksm.asm.tiesiog.būt-d.vns.2.`, `vksm.asm.tiesiog.būt-d.vns.3.`, `vksm.asm.tiesiog.būt-k.`, `vksm.asm.tiesiog.būt-k.3.`, `vksm.asm.tiesiog.būt-k.dgs.1.`, `vksm.asm.tiesiog.būt-k.dgs.2.`, `vksm.asm.tiesiog.būt-k.dgs.3.`, `vksm.asm.tiesiog.būt-k.vns.1.`, `vksm.asm.tiesiog.būt-k.vns.2.`, `vksm.asm.tiesiog.būt-k.vns.3.`, `vksm.asm.tiesiog.es.3.`, `vksm.asm.tiesiog.es.dgs.1.`, `vksm.asm.tiesiog.es.dgs.2.`, `vksm.asm.tiesiog.es.dgs.3.`, `vksm.asm.tiesiog.es.vns.1.`, `vksm.asm.tiesiog.es.vns.2.`, `vksm.asm.tiesiog.es.vns.3.`, `vksm.bndr.`, `vksm.bndr.neig.`, `vksm.bndr.neig.sngr.`, `vksm.bndr.sngr.`, `vksm.dlv.neig.neveik.būt.bev.`, `vksm.dlv.neig.neveik.būt.mot.dgs.G.`, `vksm.dlv.neig.neveik.būt.mot.dgs.K.`, `vksm.dlv.neig.neveik.būt.mot.dgs.V.`, `vksm.dlv.neig.neveik.būt.mot.vns.K.`, `vksm.dlv.neig.neveik.būt.mot.vns.V.`, `vksm.dlv.neig.neveik.būt.vyr.dgs.N.`, `vksm.dlv.neig.neveik.būt.vyr.dgs.V.`, `vksm.dlv.neig.neveik.būt.vyr.vns.G.`, `vksm.dlv.neig.neveik.būt.vyr.vns.N.`, `vksm.dlv.neig.neveik.būt.vyr.vns.V.`, `vksm.dlv.neig.neveik.es.bev.`, `vksm.dlv.neig.neveik.es.mot.dgs.K.`, `vksm.dlv.neig.neveik.es.mot.dgs.V.`, `vksm.dlv.neig.neveik.es.mot.vns.G.`, `vksm.dlv.neig.neveik.es.mot.vns.K.`, `vksm.dlv.neig.neveik.es.mot.vns.V.`, `vksm.dlv.neig.neveik.es.mot.vns.Įn.`, `vksm.dlv.neig.neveik.es.vyr.dgs.G.`, `vksm.dlv.neig.neveik.es.vyr.dgs.K.`, `vksm.dlv.neig.neveik.es.vyr.dgs.V.`, `vksm.dlv.neig.neveik.es.vyr.vns.V.`, `vksm.dlv.neig.neveik.es.įvardž.mot.dgs.V.`, `vksm.dlv.neig.reik.bev.`, `vksm.dlv.neig.reik.mot.dgs.K.`, `vksm.dlv.neig.reik.mot.vns.V.`, `vksm.dlv.neig.reik.vyr.vns.V.`, `vksm.dlv.neig.sngr.neveik.būt.bev.`, `vksm.dlv.neig.sngr.neveik.es.bev.`, `vksm.dlv.neig.sngr.veik.būt-k.vyr.dgs.V.`, `vksm.dlv.neig.sngr.veik.es.vyr.vns.V.`, `vksm.dlv.neig.veik.būt-k.bev.`, `vksm.dlv.neig.veik.būt-k.vyr.dgs.V.`, `vksm.dlv.neig.veik.būt-k.vyr.dgs.Įn.`, `vksm.dlv.neig.veik.būt-k.vyr.vns.G.`, `vksm.dlv.neig.veik.būt-k.vyr.vns.V.`, `vksm.dlv.neig.veik.es.mot.dgs.K.`, `vksm.dlv.neig.veik.es.mot.vns.N.`, `vksm.dlv.neig.veik.es.mot.vns.V.`, `vksm.dlv.neig.veik.es.mot.vns.Įn.`, `vksm.dlv.neig.veik.es.vyr.dgs.G.`, `vksm.dlv.neig.veik.es.vyr.dgs.N.`, `vksm.dlv.neig.veik.es.vyr.dgs.V.`, `vksm.dlv.neig.veik.es.vyr.dgs.Įn.`, `vksm.dlv.neig.veik.es.vyr.vns.K.`, `vksm.dlv.neig.veik.es.vyr.vns.N.`, `vksm.dlv.neig.veik.es.vyr.vns.V.`, `vksm.dlv.neig.veik.es.įvardž.vyr.dgs.V.`, `vksm.dlv.neig.veik.es.įvardž.vyr.dgs.Įn.`, `vksm.dlv.neveik.būs.vyr.vns.G.`, `vksm.dlv.neveik.būs.vyr.vns.N.`, `vksm.dlv.neveik.būt-k.vyr.dgs.V.`, `vksm.dlv.neveik.būt-k.vyr.vns.V.`, `vksm.dlv.neveik.būt.bev.`, `vksm.dlv.neveik.būt.mot.V.`, `vksm.dlv.neveik.būt.mot.dgs.G.`, `vksm.dlv.neveik.būt.mot.dgs.K`, `vksm.dlv.neveik.būt.mot.dgs.K.`, `vksm.dlv.neveik.būt.mot.dgs.N.`, `vksm.dlv.neveik.būt.mot.dgs.V.`, `vksm.dlv.neveik.būt.mot.dgs.Įn.`, `vksm.dlv.neveik.būt.mot.vns.G.`, `vksm.dlv.neveik.būt.mot.vns.K.`, `vksm.dlv.neveik.būt.mot.vns.N.`, `vksm.dlv.neveik.būt.mot.vns.V`, `vksm.dlv.neveik.būt.mot.vns.V.`, `vksm.dlv.neveik.būt.mot.vns.Vt.`, `vksm.dlv.neveik.būt.mot.vns.Įn.`, `vksm.dlv.neveik.būt.vyr.dgs.G.`, `vksm.dlv.neveik.būt.vyr.dgs.K.`, `vksm.dlv.neveik.būt.vyr.dgs.N.`, `vksm.dlv.neveik.būt.vyr.dgs.V`, 
`vksm.dlv.neveik.būt.vyr.dgs.V.`, `vksm.dlv.neveik.būt.vyr.dgs.Vt.`, `vksm.dlv.neveik.būt.vyr.dgs.Įn.`, `vksm.dlv.neveik.būt.vyr.vns.G.`, `vksm.dlv.neveik.būt.vyr.vns.K.`, `vksm.dlv.neveik.būt.vyr.vns.N.`, `vksm.dlv.neveik.būt.vyr.vns.V`, `vksm.dlv.neveik.būt.vyr.vns.V.`, `vksm.dlv.neveik.būt.vyr.vns.Vt.`, `vksm.dlv.neveik.būt.vyr.vns.Įn.`, `vksm.dlv.neveik.būt.įvardž.mot.dgs.G.`, `vksm.dlv.neveik.būt.įvardž.mot.dgs.K.`, `vksm.dlv.neveik.būt.įvardž.vyr.dgs.G.`, `vksm.dlv.neveik.būt.įvardž.vyr.dgs.K.`, `vksm.dlv.neveik.būt.įvardž.vyr.dgs.V.`, `vksm.dlv.neveik.būt.įvardž.vyr.vns.K.`, `vksm.dlv.neveik.būt.įvardž.vyr.vns.V.`, `vksm.dlv.neveik.būts.vyr.dgs.V.`, `vksm.dlv.neveik.es.bev.`, `vksm.dlv.neveik.es.mot.V.`, `vksm.dlv.neveik.es.mot.dgs.G.`, `vksm.dlv.neveik.es.mot.dgs.K.`, `vksm.dlv.neveik.es.mot.dgs.N.`, `vksm.dlv.neveik.es.mot.dgs.V.`, `vksm.dlv.neveik.es.mot.dgs.Vt.`, `vksm.dlv.neveik.es.mot.dgs.Įn.`, `vksm.dlv.neveik.es.mot.vns.G.`, `vksm.dlv.neveik.es.mot.vns.K.`, `vksm.dlv.neveik.es.mot.vns.N.`, `vksm.dlv.neveik.es.mot.vns.V`, `vksm.dlv.neveik.es.mot.vns.V.`, `vksm.dlv.neveik.es.mot.vns.Vt.`, `vksm.dlv.neveik.es.mot.vns.Įn.`, `vksm.dlv.neveik.es.vyr.dgs.G.`, `vksm.dlv.neveik.es.vyr.dgs.K.`, `vksm.dlv.neveik.es.vyr.dgs.N.`, `vksm.dlv.neveik.es.vyr.dgs.V.`, `vksm.dlv.neveik.es.vyr.dgs.Įn.`, `vksm.dlv.neveik.es.vyr.vns.G.`, `vksm.dlv.neveik.es.vyr.vns.K.`, `vksm.dlv.neveik.es.vyr.vns.N.`, `vksm.dlv.neveik.es.vyr.vns.V.`, `vksm.dlv.neveik.es.vyr.vns.Įn.`, `vksm.dlv.neveik.es.įvardž.mot.dgs.K.`, `vksm.dlv.neveik.es.įvardž.mot.dgs.V.`, `vksm.dlv.neveik.es.įvardž.mot.dgs.Įn.`, `vksm.dlv.neveik.es.įvardž.mot.vns.G.`, `vksm.dlv.neveik.es.įvardž.mot.vns.K.`, `vksm.dlv.neveik.es.įvardž.mot.vns.N.`, `vksm.dlv.neveik.es.įvardž.mot.vns.V.`, `vksm.dlv.neveik.es.įvardž.vyr.dgs.G.`, `vksm.dlv.neveik.es.įvardž.vyr.dgs.K.`, `vksm.dlv.neveik.es.įvardž.vyr.dgs.N.`, `vksm.dlv.neveik.es.įvardž.vyr.dgs.V.`, `vksm.dlv.neveik.es.įvardž.vyr.vns.G.`, `vksm.dlv.neveik.es.įvardž.vyr.vns.K.`, `vksm.dlv.neveik.es.įvardž.vyr.vns.N.`, `vksm.dlv.neveik.es.įvardž.vyr.vns.V.`, `vksm.dlv.neveik.es.įvardž.vyr.vns.Įn.`, `vksm.dlv.neveik.mot.vns.V.`, `vksm.dlv.neveik.vyr.dgs.K.`, `vksm.dlv.neveik.įvardž.es.mot.vns.Vt.`, `vksm.dlv.neveik.įvardž.es.vyr.dgs.K.`, `vksm.dlv.neveik.įvardž.es.vyr.vns.K.`, `vksm.dlv.reik.bev.`, `vksm.dlv.reik.mot.vns.V.`, `vksm.dlv.reik.vyr.dgs.K.`, `vksm.dlv.reik.vyr.dgs.V.`, `vksm.dlv.reik.vyr.vns.V.`, `vksm.dlv.sngr.neveik.būt.bev.`, `vksm.dlv.sngr.neveik.būt.mot.dgs.G.`, `vksm.dlv.sngr.neveik.būt.mot.dgs.V.`, `vksm.dlv.sngr.neveik.būt.mot.vns.V.`, `vksm.dlv.sngr.neveik.būt.mot.vns.Vt.`, `vksm.dlv.sngr.neveik.būt.vyr.dgs.G.`, `vksm.dlv.sngr.neveik.būt.vyr.dgs.V.`, `vksm.dlv.sngr.neveik.būt.vyr.dgs.Vt.`, `vksm.dlv.sngr.neveik.būt.vyr.dgs.Įn.`, `vksm.dlv.sngr.neveik.būt.vyr.vns.G.`, `vksm.dlv.sngr.neveik.būt.vyr.vns.K.`, `vksm.dlv.sngr.neveik.būt.vyr.vns.V.`, `vksm.dlv.sngr.neveik.es.bev.`, `vksm.dlv.sngr.neveik.es.mot.dgs.V.`, `vksm.dlv.sngr.neveik.es.mot.vns.V.`, `vksm.dlv.sngr.neveik.es.vyr.dgs.Įn.`, `vksm.dlv.sngr.neveik.es.vyr.vns.V.`, `vksm.dlv.sngr.veik.būt-k.bev.`, `vksm.dlv.sngr.veik.būt-k.mot.dgs.G.`, `vksm.dlv.sngr.veik.būt-k.mot.dgs.K.`, `vksm.dlv.sngr.veik.būt-k.mot.dgs.V.`, `vksm.dlv.sngr.veik.būt-k.mot.dgs.Įn.`, `vksm.dlv.sngr.veik.būt-k.mot.vns.G.`, `vksm.dlv.sngr.veik.būt-k.mot.vns.K.`, `vksm.dlv.sngr.veik.būt-k.mot.vns.V.`, `vksm.dlv.sngr.veik.būt-k.mot.vns.Įn.`, `vksm.dlv.sngr.veik.būt-k.vyr.dgs.G.`, `vksm.dlv.sngr.veik.būt-k.vyr.dgs.K.`, 
`vksm.dlv.sngr.veik.būt-k.vyr.dgs.V.`, `vksm.dlv.sngr.veik.būt-k.vyr.dgs.Įn.`, `vksm.dlv.sngr.veik.būt-k.vyr.vns.G.`, `vksm.dlv.sngr.veik.būt-k.vyr.vns.K.`, `vksm.dlv.sngr.veik.būt-k.vyr.vns.V.`, `vksm.dlv.sngr.veik.es.mot.dgs.K.`, `vksm.dlv.sngr.veik.es.mot.dgs.V.`, `vksm.dlv.sngr.veik.es.mot.dgs.Įn.`, `vksm.dlv.sngr.veik.es.mot.vns.K.`, `vksm.dlv.sngr.veik.es.vyr.dgs.G.`, `vksm.dlv.sngr.veik.es.vyr.dgs.K.`, `vksm.dlv.sngr.veik.es.vyr.dgs.N.`, `vksm.dlv.sngr.veik.es.vyr.dgs.V.`, `vksm.dlv.sngr.veik.es.vyr.vns.G.`, `vksm.dlv.sngr.veik.es.vyr.vns.K.`, `vksm.dlv.sngr.veik.es.vyr.vns.N.`, `vksm.dlv.sngr.veik.es.vyr.vns.V.`, `vksm.dlv.sngr.veik.es.įvardž.mot.vns.K.`, `vksm.dlv.veik.būs.vyr.vns.V.`, `vksm.dlv.veik.būt-k.bev.`, `vksm.dlv.veik.būt-k.mot.dgs.G.`, `vksm.dlv.veik.būt-k.mot.dgs.K.`, `vksm.dlv.veik.būt-k.mot.dgs.N.`, `vksm.dlv.veik.būt-k.mot.dgs.V.`, `vksm.dlv.veik.būt-k.mot.dgs.Vt.`, `vksm.dlv.veik.būt-k.mot.vns.G.`, `vksm.dlv.veik.būt-k.mot.vns.K.`, `vksm.dlv.veik.būt-k.mot.vns.N.`, `vksm.dlv.veik.būt-k.mot.vns.V.`, `vksm.dlv.veik.būt-k.mot.vns.Įn.`, `vksm.dlv.veik.būt-k.vyr.dgs.G.`, `vksm.dlv.veik.būt-k.vyr.dgs.K.`, `vksm.dlv.veik.būt-k.vyr.dgs.N.`, `vksm.dlv.veik.būt-k.vyr.dgs.V.`, `vksm.dlv.veik.būt-k.vyr.dgs.Įn.`, `vksm.dlv.veik.būt-k.vyr.vns.G.`, `vksm.dlv.veik.būt-k.vyr.vns.K.`, `vksm.dlv.veik.būt-k.vyr.vns.N.`, `vksm.dlv.veik.būt-k.vyr.vns.V.`, `vksm.dlv.veik.būt-k.vyr.vns.Vt.`, `vksm.dlv.veik.būt-k.vyr.vns.Įn.`, `vksm.dlv.veik.būt-k.įvardž.vyr.dgs.K.`, `vksm.dlv.veik.būt-k.įvardž.vyr.dgs.V.`, `vksm.dlv.veik.būt-k.įvardž.vyr.vns.K.`, `vksm.dlv.veik.būt-k.įvardž.vyr.vns.V.`, `vksm.dlv.veik.būt-k.įvardž.vyr.vns.Įn.`, `vksm.dlv.veik.būt.k.vyr.dgs.V.`, `vksm.dlv.veik.es.mot.dgs.G.`, `vksm.dlv.veik.es.mot.dgs.K.`, `vksm.dlv.veik.es.mot.dgs.N.`, `vksm.dlv.veik.es.mot.dgs.V.`, `vksm.dlv.veik.es.mot.dgs.Vt.`, `vksm.dlv.veik.es.mot.dgs.Įn.`, `vksm.dlv.veik.es.mot.vns.G.`, `vksm.dlv.veik.es.mot.vns.K.`, `vksm.dlv.veik.es.mot.vns.N.`, `vksm.dlv.veik.es.mot.vns.V`, `vksm.dlv.veik.es.mot.vns.V.`, `vksm.dlv.veik.es.mot.vns.Vt.`, `vksm.dlv.veik.es.mot.vns.Įn.`, `vksm.dlv.veik.es.vyr.dgs.G.`, `vksm.dlv.veik.es.vyr.dgs.K.`, `vksm.dlv.veik.es.vyr.dgs.N.`, `vksm.dlv.veik.es.vyr.dgs.V.`, `vksm.dlv.veik.es.vyr.dgs.Vt.`, `vksm.dlv.veik.es.vyr.dgs.Įn.`, `vksm.dlv.veik.es.vyr.vns.G.`, `vksm.dlv.veik.es.vyr.vns.K.`, `vksm.dlv.veik.es.vyr.vns.N.`, `vksm.dlv.veik.es.vyr.vns.V.`, `vksm.dlv.veik.es.vyr.vns.Vt.`, `vksm.dlv.veik.es.vyr.vns.Įn.`, `vksm.dlv.veik.es.įvardž.mot.vns.K.`, `vksm.dlv.veik.es.įvardž.mot.vns.V.`, `vksm.dlv.veik.es.įvardž.vyr.dgs.K.`, `vksm.dlv.veik.es.įvardž.vyr.vns.K.`, `vksm.dlv.veik.es.įvardž.vyr.vns.N.`, `vksm.neig.dlv.neveik.es.mot.vns.V.`, `vksm.neveik.būt.vyr.dgs.V.`, `vksm.pad.būt-k.`, `vksm.pad.es.`, `vksm.pad.es.sngr.`, `vksm.pad.neig.būt-k.`, `vksm.pad.neig.es.`, `vksm.pad.neig.sngr.būt-k.`, `vksm.pad.neig.sngr.es.`, `vksm.pad.sngr.būt-k.`, `vksm.pad.sngr.es.`, `vksm.padlv.sngr.es.`, `vksm.pusd.mot.dgs.`, `vksm.pusd.mot.vns.`, `vksm.pusd.neig.mot.vns.`, `vksm.pusd.neig.vyr.dgs.`, `vksm.pusd.neig.vyr.vns.`, `vksm.pusd.sngr.mot.dgs.`, `vksm.pusd.sngr.mot.vns.`, `vksm.pusd.sngr.vyr.dgs.`, `vksm.pusd.sngr.vyr.vns.`, `vksm.pusd.vyr.dgs.`, `vksm.pusd.vyr.vns.`, `vksm.sngr.pad.es.`, `įv.G.`, `įv.K.`, `įv.N.`, `įv.V.`, `įv.bev.`, `įv.dgs.G.`, `įv.dgs.K.`, `įv.dgs.N.`, `įv.dgs.V.`, `įv.dgs.Vt.`, `įv.dgs.Įn.`, `įv.dvisk.V.`, `įv.mot.G.`, `įv.mot.K.`, `įv.mot.V.`, `įv.mot.dgs.G.`, `įv.mot.dgs.K.`, `įv.mot.dgs.N.`, `įv.mot.dgs.V.`, `įv.mot.dgs.Vt.`, `įv.mot.dgs.Įn.`, 
`įv.mot.dvisk.N.`, `įv.mot.dvisk.V.`, `įv.mot.vns.G.`, `įv.mot.vns.K.`, `įv.mot.vns.N.`, `įv.mot.vns.V.`, `įv.mot.vns.Vt.`, `įv.mot.vns.Įn.`, `įv.vns.G.`, `įv.vns.K.`, `įv.vns.N.`, `įv.vns.V.`, `įv.vns.Vt.`, `įv.vns.Įn.`, `įv.vyr.G.`, `įv.vyr.K.`, `įv.vyr.N.`, `įv.vyr.V.`, `įv.vyr.dgs.G.`, `įv.vyr.dgs.K.`, `įv.vyr.dgs.N.`, `įv.vyr.dgs.V.`, `įv.vyr.dgs.Vt.`, `įv.vyr.dgs.Įn.`, `įv.vyr.dvisk.G.`, `įv.vyr.dvisk.K.`, `įv.vyr.dvisk.V.`, `įv.vyr.vns.G.`, `įv.vyr.vns.K.`, `įv.vyr.vns.N.`, `įv.vyr.vns.V.`, `įv.vyr.vns.Vt.`, `įv.vyr.vns.Įn.`, `įv.vyr.Įn,`, `įv.Įn.`, `įv.įvardž.bev.`, `įv.įvardž.mot.vns.K.`, `įv.įvardž.mot.vns.V.` | | **`morphologizer`** | `Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=VERB\|Polarity=Pos\|VerbForm=Inf`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Ger`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `POS=PUNCT`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=X`, `AdpType=Prep\|Case=Gen\|POS=ADP`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Degree=Pos\|POS=ADV`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Pos\|Hyph=Yes\|POS=ADV`, `Hyph=Yes\|POS=X`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=SCONJ`, `Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|POS=PRON\|PronType=Ind`, `POS=PART`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Masc\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Ins\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|POS=DET\|PronType=Dem`, `Mood=Ind\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Ger`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Inf`, `Degree=Cmp\|POS=ADV`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|NumForm=Digit\|POS=NUM`, `Case=Gen\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Hyph=Yes\|POS=PART`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `AdpType=Prep\|Case=Acc\|POS=ADP`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Gender=Fem\|NumForm=Combi\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, 
`Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|NumForm=Roman\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Mood=Nec\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Degree=Sup\|POS=ADV`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Mood=Nec\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Gender=Masc\|NumForm=Combi\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `AdpType=Prep\|Case=Ins\|POS=ADP`, `Case=Gen\|Definite=Ind\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Reflex=Yes`, `Case=Ins\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=INTJ`, `Definite=Ind\|Gender=Neut\|NumForm=Word\|NumType=Ord\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, 
`Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|POS=PRON\|PronType=Neg`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Definite=Ind\|Gender=Neut\|Hyph=Yes\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|POS=PRON\|PronType=Int`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Hyph=Yes\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Ger`, `Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Hab\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|VerbForm=Fin`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|POS=NOUN`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Hyph=Yes\|POS=SCONJ`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Case=Acc\|Definite=Def\|Gender=Masc\|NumForm=Combi\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Mood=Nec\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Definite=Def\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|POS=PRON\|PronType=Int`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Gender=Masc\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Ger`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Ind\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, 
`Aspect=Perf\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Ger`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Definite=Ind\|Gender=Fem\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Hyph=Yes\|POS=PRON\|PronType=Int`, `Mood=Cnd\|POS=AUX\|Person=3\|Polarity=Pos\|VerbForm=Fin`, `POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Ger`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Hyph=Yes\|POS=ADV`, `Case=Gen\|Gender=Masc\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Ins\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, 
`Aspect=Perf\|Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN\|Reflex=Yes`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Hyph=Yes\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|POS=PRON\|PronType=Int`, `Case=Nom\|Definite=Def\|Gender=Fem\|NumForm=Combi\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Gen\|NumForm=Word\|NumType=Card\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Acc\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Foreign=Yes\|POS=X`, `Case=Acc\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Acc\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `POS=PROPN`, `Aspect=Perf\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Ger`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, 
`Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Ger`, `Case=Nom\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Gender=Fem\|NumForm=Combi\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Definite=Ind\|Hyph=Yes\|POS=NUM`, `POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Ger`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Definite=Ind\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Acc\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|NumForm=Word\|NumType=Card\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, 
`Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Case=Acc\|Definite=Ind\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=AUX\|Polarity=Pos\|VerbForm=Inf`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Hyph=Yes\|POS=CCONJ`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Mood=Nec\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Acc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Ger`, `Aspect=Perf\|Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, 
`Case=Gen\|Definite=Ind\|Gender=Fem\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|POS=PRON\|PronType=Int`, `Case=Ins\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Dual\|POS=PRON\|PronType=Ind`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Definite=Ind\|Degree=Pos\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Definite=Ind\|Gender=Neut\|Mood=Nec\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Fem\|NumForm=Word\|NumType=Card\|POS=NUM`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, 
`Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Definite=Def\|Gender=Neut\|POS=DET\|PronType=Dem`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Dual\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Hyph=Yes\|POS=PRON\|PronType=Int`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Foreign=Yes\|Hyph=Yes\|POS=X`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, `Case=Ins\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Definite=Ind\|Degree=Sup\|Gender=Neut\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=ADV`, `Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `POS=SYM`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Mult\|POS=NUM`, `Case=Nom\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|VerbForm=Fin`, 
`Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Nom\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|VerbForm=Fin`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Definite=Ind\|NumForm=Combi\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Acc\|Definite=Ind\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Fem\|NumForm=Word\|NumType=Card\|POS=NUM`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|POS=PRON\|PronType=Neg`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Definite=Ind\|Gender=Masc\|POS=PRON\|PronType=Int`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Loc\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Case=Dat\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Case=Gen\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ins\|Gender=Fem\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Acc\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, 
`Aspect=Perf\|Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Cnd\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|VerbForm=Fin`, `Aspect=Hab\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Perf\|Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=NOUN`, `Case=Loc\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|VerbForm=Fin`, 
`Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|POS=NUM`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Case=Gen\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Hyph=Yes\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `POS=VERB\|Polarity=Neg\|VerbForm=Inf`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Ger`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|POS=PRON\|PronType=Ind`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Reflex=Yes\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, 
`Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Reflex=Yes`, `Aspect=Perf\|Case=Ins\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|VerbForm=Fin`, `POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres`, `Definite=Ind\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Loc\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Cnd\|POS=VERB\|Person=3\|Polarity=Neg\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|VerbForm=Fin`, `Aspect=Perf\|Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, `POS=VERB\|Polarity=Neg\|Reflex=Yes\|VerbForm=Inf`, `Case=Dat\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, 
`Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, `Case=Ins\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Definite=Ind\|Gender=Neut\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|VerbForm=Fin`, `Case=Dat\|Definite=Ind\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Gender=Fem\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ill\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Abbr=Yes\|Hyph=Yes\|POS=X`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|VerbForm=Fin`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Peri`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, 
`Case=Loc\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Gender=Masc\|NumForm=Combi\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|Reflex=Yes`, `Gender=Fem\|POS=PROPN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN\|Reflex=Yes`, `Case=Gen\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Ins\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN\|Reflex=Yes`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, 
`Case=Ins\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Masc\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Dat\|Gender=Masc\|NumForm=Word\|NumType=Card\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin`, `Definite=Ind\|Gender=Neut\|Mood=Nec\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Case=Dat\|Definite=Ind\|Number=Sing\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Fem\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Definite=Ind\|Gender=Neut\|Hyph=Yes\|POS=PRON\|PronType=Int`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, 
`Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Aspect=Perf\|Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|Hyph=Yes\|POS=PRON\|PronType=Int`, `Case=Nom\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Ins\|Definite=Ind\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Mood=Nec\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=X`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN\|Reflex=Yes`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Ind\|POS=PRON\|PronType=Neg`, `Aspect=Hab\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Hyph=Yes\|POS=PRON\|PronType=Int`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Int`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|VerbForm=Fin`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|VerbForm=Fin`, `Hyph=Yes\|POS=INTJ`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, 
`Case=Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Aspect=Hab\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Com\|Number=Sing\|POS=NOUN`, `Aspect=Hab\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Definite=Ind\|Gender=Neut\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|NumForm=Word\|NumType=Sets\|POS=NUM`, `Case=Gen\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Mult\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|Definite=Ind\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Hab\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Dual\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=NOUN`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|VerbForm=Fin`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Mood=Nec\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Dual\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Dual\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Number=Dual\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Case=Acc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN\|Reflex=Yes`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Mood=Nec\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Ind\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Hyph=Yes\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Ind\|POS=PRON\|PronType=Ind`, `Aspect=Perf\|Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|NumType=Card\|POS=NUM`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Definite=Ind\|Hyph=Yes\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Def\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Loc\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Polarity=Pos\|VerbForm=Conv`, `Case=Loc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Fem\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Fem\|Hyph=Yes\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|Number=Sing\|POS=NUM`, 
`Case=Loc\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Gender=Fem\|NumForm=Word\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Loc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Ins\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Ins\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Fem\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:emph`, `amod`, `appos`, `case`, `cc`, `ccomp`, `conj`, `cop`, `csubj`, `dep`, `det`, `flat`, `flat:foreign`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `nummod:gov`, `obj`, `obl`, `obl:arg`, `orphan`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `2`, `3`, `5`, `7`, `9`, `12`, `16`, `18`, `19`, `21`, `24`, `26`, `30`, `32`, `34`, `37`, `39`, `41`, `43`, `44`, `46`, `48`, `50`, `52`, `55`, `59`, `62`, `64`, `66`, `68`, `70`, `72`, `74`, `75`, `77`, `79`, `81`, `84`, `86`, `88`, `90`, `92`, `94`, `96`, `98`, `101`, `103`, `105`, `107`, `109`, `110`, `111`, `113`, `115`, `117`, `119`, `121`, `123`, `125`, `127`, `129`, `131`, `133`, `135`, `137`, `139`, `142`, `146`, `148`, `151`, `153`, `155`, `158`, `162`, `165`, `167`, `168`, `170`, `173`, `175`, `177`, `180`, `182`, `184`, `185`, `187`, `189`, `190`, `194`, `195`, `196`, `197`, `200`, `202`, `204`, `205`, `206`, `207`, `208`, `209`, `211`, `213`, `216`, `217`, `219`, `220`, `222`, `224`, `225`, `227`, `231`, `234`, `238`, `242`, `246`, `249`, `251`, `252`, `255`, `258`, `261`, `263`, `265`, `267`, `269`, `272`, `274`, `276`, `278`, `281`, `284`, `285`, `287`, `289`, `292`, `294`, `295`, `297`, `299`, `301`, `303`, `306`, `308`, `310`, `313`, `314`, `317`, `319`, `323`, `325`, `328`, `331`, `333`, `336`, `339`, `341`, `344`, `346`, `350`, `353`, `356`, `359`, `360`, `363`, `366`, `368`, `371`, `374`, `376`, `378`, `380`, 
`382`, `384`, `385`, `387`, `389`, `390`, `391`, `393`, `395`, `397`, `402`, `403`, `404`, `406`, `408`, `409`, `413`, `415`, `417`, `419`, `420`, `423`, `424`, `426`, `429`, `432`, `434`, `436`, `439`, `442`, `445`, `447`, `448`, `450`, `452`, `455`, `456`, `458`, `460`, `463`, `465`, `468`, `472`, `475`, `477`, `480`, `482`, `483`, `485`, `487`, `488`, `489`, `491`, `492`, `494`, `496`, `497`, `500`, `501`, `502`, `504`, `505`, `506`, `508`, `509`, `513`, `515`, `518`, `519`, `521`, `522`, `523`, `525`, `527`, `529`, `533`, `535`, `538`, `541`, `542`, `545`, `547`, `550`, `552`, `554`, `555`, `557`, `560`, `561`, `563`, `566`, `569`, `572`, `574`, `577`, `580`, `582`, `584`, `589`, `594`, `596`, `599`, `600`, `602`, `604`, `607`, `609`, `611`, `613`, `615`, `616`, `619`, `623`, `625`, `628`, `629`, `631`, `633`, `635`, `638`, `640`, `642`, `645`, `647`, `649`, `653`, `655`, `658`, `660`, `661`, `663`, `665`, `666`, `668`, `670`, `671`, `672`, `673`, `675`, `678`, `679`, `681`, `683`, `685`, `688`, `691`, `693`, `697`, `699`, `700`, `702`, `703`, `704`, `705`, `706`, `707`, `709`, `714`, `715`, `717`, `719`, `721`, `722`, `725`, `726`, `728`, `730`, `732`, `735`, `738`, `739`, `741`, `742`, `743`, `746`, `748`, `750`, `754`, `755`, `757`, `759`, `761`, `762`, `765`, `768`, `770`, `773`, `774`, `777`, `781`, `784`, `785`, `788`, `791`, `793`, `795`, `796`, `799`, `801`, `803`, `805`, `807`, `808`, `811`, `813`, `814`, `816`, `817`, `818`, `822`, `825`, `827`, `829`, `831`, `835`, `836`, `838`, `839`, `841`, `843`, `844`, `846`, `849`, `850`, `851`, `854`, `855`, `856`, `857`, `858`, `859`, `860`, `861`, `367`, `862`, `865`, `867`, `868`, `869`, `870`, `873`, `874`, `875`, `878`, `879`, `882`, `886`, `888`, `890`, `893`, `895`, `898`, `900`, `901`, `902`, `903`, `905`, `907`, `908`, `910`, `912`, `914`, `915`, `917`, `919`, `921`, `922`, `924`, `928`, `929`, `930`, `931`, `932`, `935`, `936`, `938`, `940`, `942`, `944`, `945`, `947`, `951`, `953`, `956`, `958`, `959`, `961`, `963`, `965`, `967`, `969`, `970`, `972`, `975`, `976`, `977`, `979`, `980`, `981`, `983`, `987`, `990`, `992`, `993`, `995`, `996`, `998`, `1000`, `1002`, `1004`, `1006`, `1008`, `1009`, `1012`, `1014`, `1015`, `1016`, `1018`, `1019`, `1022`, `1024`, `1026`, `1028`, `1029`, `1033`, `1036`, `1038`, `1040`, `1042`, `1047`, `1049`, `1051`, `1053`, `1055`, `1057`, `1060`, `1063`, `1065`, `1067`, `1069`, `1070`, `1071`, `1073`, `1075`, `1078`, `1080`, `1082`, `1084`, `1086`, `1089`, `1092`, `1093`, `1094`, `1095`, `1096`, `1098`, `1100`, `1102`, `1104`, `1105`, `1107`, `1108`, `1110`, `1113`, `1115`, `1118`, `1121`, `1123`, `1124`, `1126`, `1127`, `1129`, `1131`, `1134`, `1137`, `1141`, `1142`, `1144`, `1146`, `1148`, `1150`, `1151`, `1153`, `1154`, `1156`, `1158`, `1159`, `1162`, `1164`, `1166`, `1169`, `1173`, `1175`, `1178`, `1180`, `1183`, `1184`, `1186`, `1187`, `1189`, `1191`, `1194`, `1196`, `1197`, `1198`, `1200`, `1201`, `1203`, `1205`, `1207`, `1210`, `1212`, `1215`, `1216`, `1218`, `1220`, `1223`, `1224`, `1227`, `1229`, `1232`, `1234`, `1235`, `1238`, `1241`, `1242`, `1243`, `1246`, `1247`, `1249`, `1251`, `1252`, `1253`, `1256`, `1259`, `1262`, `1264`, `1267`, `1269`, `1271`, `1272`, `1275`, `1277`, `1278`, `1280`, `1282`, `1284`, `1285`, `1288`, `1291`, `1293`, `1296`, `1298`, `1300`, `1301`, `1302`, `1303`, `1305`, `1307`, `1309`, `1312`, `1315`, `1316`, `1319`, `1320`, `1321`, `1322`, `1323`, `1324`, `1327`, `1330`, `1333`, `1334`, `1335`, `1336`, `1339`, `1341`, `1344`, `1345`, `1347`, `1349`, `1350`, 
`1351`, `1352`, `1354`, `1357`, `1358`, `1359`, `1360`, `1362`, `1365`, `1368`, `1369`, `1370`, `1372`, `1374`, `1376`, `1377`, `1379`, `1382`, `1385`, `1386`, `1390`, `1393`, `1394`, `1396`, `1398`, `1400`, `1403`, `1405`, `1408`, `1410`, `1413`, `1415`, `1418`, `1420`, `1421`, `1423`, `1424`, `1426`, `1428`, `1429`, `1432`, `1434`, `1436`, `1438`, `1441`, `1443`, `1444`, `1445`, `1447`, `1449`, `1450`, `1451`, `1453`, `1455`, `1457`, `1458`, `1460`, `1461`, `1463`, `1465`, `1467`, `1470`, `1472`, `1474`, `1476`, `1477`, `1479`, `1481`, `1482`, `1483`, `1484`, `1486`, `1489`, `1492`, `1494`, `1495`, `1497`, `1498`, `1501`, `1503`, `1505`, `1506`, `1507`, `1508`, `1510`, `1511`, `1514`, `1515`, `1518`, `1521`, `1524`, `1526`, `1529`, `1532`, `1533`, `1534`, `1537`, `1539`, `1540`, `1542`, `1544`, `1545`, `1547`, `1549`, `1550`, `1551`, `1552`, `1553`, `1555`, `1557`, `1559`, `1562`, `1565`, `1568`, `1570`, `1571`, `1574`, `1576`, `1579`, `1580`, `1582`, `1583`, `1585`, `1586`, `1588`, `1590`, `1591`, `1592`, `1594`, `1595`, `1597`, `1598`, `1600`, `1602`, `1605`, `1607`, `1608`, `1609`, `1611`, `1613`, `1615`, `1616`, `1617`, `1620`, `1621`, `1623`, `1624`, `1625`, `1628`, `1630`, `1632`, `1634`, `1635`, `1636`, `1638`, `1639`, `1641`, `1643`, `1644`, `1647`, `1649`, `1650`, `1651`, `1652`, `1654`, `1656`, `1657`, `1658`, `1659`, `1660`, `1661`, `1663`, `1664`, `1665`, `1666`, `1669`, `1672`, `1673`, `1674`, `1675`, `1678`, `1679`, `1682`, `1685`, `1686`, `1689`, `1690`, `1691`, `1693`, `1694`, `1695`, `1697`, `1699`, `1701`, `1702`, `1704`, `1706`, `1707`, `1709`, `1711`, `1713`, `1715`, `1716`, `1720`, `1722`, `1724`, `1726`, `1727`, `1728`, `1729`, `1732`, `1733`, `1736`, `1737`, `1740`, `1741`, `1742`, `1744`, `1747`, `1749`, `1751`, `1755`, `1756`, `1757`, `1759`, `1761`, `1763`, `1764`, `1766`, `1769`, `1771`, `1772`, `1774`, `1776`, `1777`, `1780`, `1781`, `1782`, `1784`, `1785`, `1787`, `1789`, `1790`, `1791`, `1794`, `1796`, `1798`, `1801`, `1802`, `1805`, `1806`, `1807`, `1808`, `1811`, `1812`, `1815`, `1818`, `1821`, `1823`, `1825`, `1828`, `1830`, `1832`, `1835`, `1836`, `1839`, `1841`, `1844`, `1847`, `1850`, `1852`, `1853`, `1854`, `1855`, `1856`, `1857`, `1860`, `1862`, `1863`, `1864`, `1866`, `1867`, `1869`, `1870`, `1871`, `1874`, `1876`, `1878`, `1879`, `1882`, `1885`, `1887`, `1890`, `1893`, `1896`, `1898`, `1900`, `1902`, `1903`, `1904`, `1905`, `1906`, `1909`, `1912`, `1913`, `1917`, `1919`, `1921`, `1924`, `1925`, `1926`, `1928`, `1929`, `1931`, `1933`, `1935`, `1936`, `1937`, `1939`, `1941`, `1944`, `1946`, `1947`, `1950`, `1951`, `1954`, `1955`, `1957`, `1958`, `1960`, `1961`, `1964`, `1966`, `1968`, `1970`, `1971`, `1972`, `1975`, `1977`, `1980`, `1982`, `1983`, `1984`, `1985`, `1986`, `1987`, `1988`, `1991`, `1993`, `1995`, `1996`, `1997`, `1999`, `2000`, `2001`, `2003`, `2005`, `2008`, `2011`, `2012`, `2014`, `2017`, `2018`, `2019`, `2020`, `2022`, `2024`, `2025`, `2027`, `2029`, `2031`, `2032`, `2035`, `2036`, `2039`, `2040`, `2041`, `2044`, `2045`, `40`, `2046`, `2048`, `2049`, `2052`, `2055`, `2056`, `2058`, `2059`, `2061`, `2063`, `2066`, `2068`, `2069`, `2071`, `2072`, `2074`, `2076`, `2077`, `2078`, `2079`, `2080`, `2082`, `2084`, `2086`, `2087`, `2088`, `2090`, `2091`, `2094`, `2097`, `2098`, `2100`, `2102`, `2103`, `2104`, `2106`, `2107`, `2108`, `2111`, `2113`, `2114`, `2116`, `2118`, `2121`, `2124`, `2126`, `2128`, `2130`, `2134`, `2137`, `2139`, `2141`, `2143`, `2145`, `2146`, `2148`, `2150`, `2152`, `2155`, `2157`, `2160`, `2161`, `2163`, `2164`, 
`2165`, `2166`, `2167`, `2169`, `2170`, `2171`, `2174`, `2177`, `2178`, `2179`, `2180`, `2182`, `2185`, `2186`, `2187`, `2189`, `2190`, `2191`, `2192`, `2194`, `2195`, `2196`, `2199`, `2200`, `2202`, `2204`, `2206`, `2207`, `2208`, `2211`, `2213`, `2214`, `2215`, `2216`, `2217`, `2219`, `2220`, `2221`, `2222`, `2223`, `2225`, `2226`, `2227`, `2228`, `2230`, `2232`, `2234`, `2237`, `2239`, `2240`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2247`, `2249`, `2251`, `2254`, `2256`, `2257`, `2258`, `2260`, `2261`, `2263`, `2266`, `2268`, `2269`, `2270`, `2271`, `2272`, `2273`, `2274`, `2275`, `2276`, `2279`, `2281`, `2283`, `2284`, `2285`, `2286`, `2287`, `2289`, `2291`, `2294`, `2295`, `2297`, `2298`, `2301`, `2302`, `2303`, `2304`, `2305`, `2306`, `2308`, `2310`, `2311`, `2312`, `2313`, `2314`, `2315`, `2316`, `2317`, `2318`, `2319`, `2322`, `2324`, `2326`, `2327`, `2330`, `2331`, `2332`, `2334`, `2335`, `2336`, `2337`, `2338`, `2339`, `2340`, `2341`, `2342`, `2343`, `2344`, `2345`, `2346`, `2347`, `2348`, `2349`, `2351`, `2353`, `2354`, `2356`, `2357`, `2358`, `2359`, `2360`, `2361`, `2362`, `2363`, `2364`, `2365`, `2366`, `2367`, `2368`, `2369`, `2370`, `2371`, `2372`, `2375`, `2376`, `2377`, `2378`, `2379`, `2380`, `2381`, `2382`, `2383`, `2384`, `2385`, `2386`, `2387`, `2388`, `2389`, `2390`, `2391`, `2392`, `2393`, `2394`, `2395`, `2396`, `2398`, `2399`, `2401`, `2403`, `2404`, `2405`, `2406`, `2407`, `2410`, `2411`, `2413`, `2414`, `2415`, `2416`, `2417`, `2418`, `2419`, `2420`, `2421`, `2422`, `2423`, `2424`, `2425`, `2426`, `2429`, `2430`, `2432`, `2433`, `2435`, `2437`, `2440`, `2443`, `2444`, `2445`, `2446`, `2447`, `2448`, `2450`, `2451`, `2452`, `2453`, `2454`, `2455`, `2456`, `2457`, `2458`, `2459`, `2460`, `2461`, `2462`, `2463`, `2466`, `2468`, `2469`, `2470`, `2471`, `2472`, `2473`, `2474`, `2475`, `2476`, `2477`, `2478`, `2480`, `2482`, `2483`, `2484`, `2485`, `2486`, `2487`, `2488`, `2489`, `2491`, `2493`, `2495`, `2496`, `2498`, `2500`, `2501`, `2504`, `2505`, `2506`, `2508`, `2509`, `2511`, `2513`, `2515`, `2516`, `2518`, `2519`, `2520`, `2522`, `2525`, `2526`, `2528`, `2530`, `2532`, `2533`, `2534`, `2535`, `2537`, `2539`, `2540`, `2541`, `2542`, `2543`, `2544`, `2546`, `2548`, `2550`, `2552`, `2553`, `2555`, `2557`, `2558`, `2560`, `2561`, `2564`, `2565`, `2566`, `2567`, `2568`, `2570`, `2572`, `2574`, `2578`, `2579`, `2580`, `2581`, `2583`, `2584`, `2585`, `2586`, `2588`, `2589`, `2590`, `2591`, `2594`, `2596`, `2597`, `2599`, `2600`, `2601`, `2602`, `2603`, `2606`, `2609`, `2612`, `2613`, `2617`, `2618`, `2621`, `2622`, `2625`, `2629`, `2631`, `2633`, `2634`, `2636`, `2637`, `2638`, `2639`, `2640`, `2641`, `2643`, `2645`, `2647`, `2649`, `2650`, `2651`, `2653`, `2654`, `2657`, `2658`, `2659`, `2660`, `2662`, `2663`, `2665`, `2669`, `2671`, `2673`, `2676`, `2677`, `2678`, `2680`, `2682`, `2684`, `2687`, `2690`, `2692`, `2694`, `2696`, `2697`, `2698`, `2699`, `2700`, `2701`, `2702`, `2703`, `2704`, `2705`, `2706`, `2708`, `2710`, `2711`, `2712`, `2713`, `2716`, `2718`, `2721`, `2722`, `2725`, `2726`, `2727`, `2730`, `2731`, `2732`, `2733`, `2737`, `2740`, `2741`, `2742`, `2744`, `2747`, `2750`, `2752`, `2754`, `2756`, `2757`, `2760`, `2762`, `2765`, `2768`, `2769`, `2772`, `2775`, `2778`, `2779`, `2780`, `2782`, `2784`, `2786`, `2787`, `2789`, `2790`, `2791`, `2794`, `2795`, `2797`, `2799`, `2800`, `2801`, `2803`, `2804`, `2805`, `2808`, `2810`, `2811`, `2812`, `2815`, `2817`, `2818`, `2819`, `2821`, `2822`, `2823`, `2824`, `2826`, `2828`, `2829`, `2830`, `2833`, 
`2834`, `2836`, `2838`, `2839`, `2842`, `2845`, `2847`, `2848`, `2849`, `2851`, `2854`, `2856`, `2859`, `2861`, `2863`, `2864`, `2865`, `2866`, `2867`, `2870`, `2873`, `2874`, `2876`, `2880`, `2882`, `2884`, `2887`, `2889`, `2890`, `2893`, `2895`, `2897`, `2899`, `2900`, `2901`, `2904`, `2905`, `2907`, `2909`, `2911`, `2912`, `2914`, `2916`, `2917`, `2918`, `2919`, `2922`, `2925`, `2926`, `2855`, `2928`, `2930`, `2932`, `2933`, `2936`, `2937`, `2939`, `2940`, `2941`, `2942`, `2943`, `2945`, `2946`, `2948`, `2952`, `2954`, `2957`, `2958`, `2961`, `2962`, `2963`, `2965`, `2967`, `2969`, `2971`, `2974`, `2976`, `2979`, `2980`, `2981`, `2982`, `2984`, `2985`, `2987`, `2989`, `2991`, `2993`, `2995`, `2997`, `2998`, `2999`, `3000`, `3001`, `3002`, `3004`, `3005`, `3007`, `3008`, `3009`, `3010`, `3011`, `3012`, `3013`, `3015`, `3017`, `3019`, `3021`, `3022`, `3023`, `3024`, `3025`, `3027`, `3028`, `3030`, `3031`, `3033`, `3036`, `3039`, `3040`, `3042`, `3045`, `3047`, `3049`, `3050`, `3053`, `3054`, `3055`, `3056`, `3057`, `3058`, `3059`, `3061`, `3062`, `3063`, `3064`, `3066`, `3067`, `3068`, `3069`, `3071`, `3073`, `3074`, `3077`, `3078`, `3080`, `3081`, `3083`, `3084`, `3085`, `3086`, `3087`, `3088`, `3089`, `3090`, `3091`, `3093`, `3095`, `3096`, `3098`, `3100`, `3101`, `3102`, `3103`, `3104`, `3107`, `3109`, `3113`, `3114`, `3115`, `3116`, `3117`, `3119`, `3120`, `3123`, `3124`, `3125`, `3128`, `3130`, `3133`, `3134`, `3136`, `3137`, `3139`, `3140`, `3141`, `3142`, `3143`, `3145`, `3147`, `3148`, `3150`, `3151`, `3153`, `3154`, `3157`, `3158`, `3159`, `3161`, `3163`, `3164`, `3165`, `3166`, `3167`, `3169`, `3170`, `3173`, `3174`, `3177`, `3178`, `3179`, `3182`, `3185`, `3187`, `3190`, `3191`, `3192`, `3193`, `3194`, `3195`, `3196`, `3197`, `3198`, `3199`, `3200`, `3201`, `3203`, `3205`, `3206`, `3209`, `3212`, `3213`, `3215`, `3216`, `3217`, `3218`, `3219`, `3220`, `3222`, `3225`, `3226`, `3229`, `3232`, `3234`, `3236`, `3237`, `3240`, `3241`, `3242`, `3243`, `3244`, `3245`, `3246`, `3247`, `3248`, `3249`, `3250`, `3252`, `3255`, `3257`, `3258`, `3259`, `3260`, `3261`, `3262`, `3263`, `3264`, `3266`, `3267`, `3268`, `3269`, `3272`, `3273`, `3276`, `3279`, `3282`, `3283`, `3285`, `3286`, `3287`, `3289`, `3292`, `3294`, `3297`, `3299`, `3301`, `3303`, `3305`, `3307`, `3309`, `3310`, `3311`, `3312`, `3313`, `3314`, `3315`, `3317`, `3318`, `3319`, `3322`, `3324`, `3325`, `3328`, `3330`, `3331`, `3333`, `3334`, `3337`, `3341`, `3342`, `3344`, `3345`, `3347`, `3349`, `3350`, `3352`, `3354`, `3355`, `3356`, `3357`, `3358`, `3359`, `3360`, `3361`, `3362`, `3364`, `3365`, `3366`, `3367`, `3368`, `3370`, `3373`, `3374`, `3375`, `3376`, `3377`, `3378`, `3379`, `3380`, `3381`, `3382`, `3385`, `3386`, `3388`, `3390`, `3391`, `3392`, `3394`, `3395`, `3396`, `3397`, `3398`, `3399`, `3400`, `3401`, `3402`, `3404`, `3406`, `3407`, `3408`, `3409`, `3410`, `3411`, `3412`, `3413`, `3414`, `3415`, `3416`, `3418`, `3420`, `3423`, `3426`, `3429`, `3430`, `3431`, `3432`, `3433`, `3434`, `3435`, `3436`, `3437`, `3440`, `3441`, `3442`, `3445`, `3446`, `3448`, `3449`, `3451`, `3453`, `3455`, `3457`, `3458`, `3459`, `3460`, `3461`, `3462`, `3463`, `3464`, `3465`, `3466`, `3468`, `3469`, `3473`, `3474`, `3475`, `3477`, `3478`, `3479`, `3481`, `3482`, `3483`, `3485`, `3488`, `3489`, `3490`, `3491`, `3492`, `3493`, `3495`, `3496`, `3497`, `3498`, `3499`, `3500`, `3501`, `3502`, `3503`, `3504`, `3505`, `3506`, `3509`, `3510`, `3511`, `3512`, `3514`, `3516`, `3517`, `3518`, `3519`, `3520`, `3521`, `3522`, `3523`, `3524`, 
`3525`, `3526`, `3527`, `3528`, `3529`, `3530`, `3531`, `3532`, `3535`, `3536`, `3537`, `3538`, `3539`, `3540`, `3542`, `3543`, `3546`, `3547`, `3548`, `3549`, `3550`, `3551`, `3552`, `3553` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.96 | | `TOKEN_P` | 99.94 | | `TOKEN_R` | 99.98 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 95.65 | | `SENTS_P` | 96.84 | | `SENTS_R` | 94.49 | | `TAG_ACC` | 95.43 | | `POS_ACC` | 98.07 | | `MORPH_ACC` | 95.50 | | `DEP_UAS` | 88.11 | | `DEP_LAS` | 83.62 | | `LEMMA_ACC` | 90.46 |
{"language": ["lt"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/lt_udv25_lithuanianalksnis_trf
null
[ "spacy", "token-classification", "lt", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "lt" ]
TAGS #spacy #token-classification #lt #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Lithuanian-ALKSNIS ### Label Scheme View label scheme (3674 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (3674 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #lt #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (3674 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Latvian-LVTB | Feature | Description | | --- | --- | | **Name** | `lv_udv25_latvianlvtb_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (6012 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `X`, `affpanc`, `affpanp`, `affpayc`, `affpayp`, `affpays`, `affpdnc`, `affpdnp`, `affpdyc`, `affpdyp`, `affpdys`, `affpgnp`, `affpgyc`, `affpgyp`, `affplnc`, `affplnp`, `affplyc`, `affplyp`, `affpnnc`, `affpnnp`, `affpnyc`, `affpnyp`, `affpnys`, `affsanc`, `affsanp`, `affsayc`, `affsayp`, `affsays`, `affsdnc`, `affsdnp`, `affsdyc`, `affsdyp`, `affsgnc`, `affsgnp`, `affsgyc`, `affsgyp`, `affsgys`, `affslnc`, `affslnp`, `affslyc`, `affslyp`, `affslys`, `affsnnc`, `affsnnp`, `affsnyc`, `affsnyp`, `affsnys`, `affsvyp`, `afmpanc`, `afmpanp`, `afmpayc`, `afmpayp`, `afmpays`, `afmpdnc`, `afmpdnp`, `afmpdyc`, `afmpdyp`, `afmpdys`, `afmpgnc`, `afmpgnp`, `afmpgyc`, `afmpgyp`, `afmpgys`, `afmplnc`, `afmplnp`, `afmplyc`, `afmplyp`, `afmplys`, `afmpnnc`, `afmpnnp`, `afmpnyc`, `afmpnyp`, `afmpnys`, `afmpvyp`, `afmsanc`, `afmsanp`, `afmsayc`, `afmsayp`, `afmsays`, `afmsdnc`, `afmsdnp`, `afmsdyc`, `afmsdyp`, `afmsdys`, `afmsgnc`, `afmsgnp`, `afmsgyc`, `afmsgyp`, `afmsgys`, `afmslnc`, `afmslnp`, `afmslyc`, `afmslyp`, `afmslys`, `afmsnnc`, `afmsnnp`, `afmsnyc`, `afmsnyp`, `afmsnys`, `arfpanp`, `arfpayp`, `arfpdnp`, `arfpdyc`, `arfpdyp`, `arfpgnp`, `arfpgyp`, `arfplnc`, `arfplnp`, `arfplyc`, `arfplyp`, `arfpnnc`, `arfpnnp`, `arfpnyp`, `arfpnys`, `arfsanp`, `arfsayp`, `arfsdnp`, `arfsdyp`, `arfsgnc`, `arfsgnp`, `arfsgyp`, `arfslnp`, `arfslyp`, `arfsnnc`, `arfsnnp`, `arfsnyc`, `arfsnyp`, `arfsvyp`, `armpanp`, `armpayc`, `armpayp`, `armpdnp`, `armpdyc`, `armpdyp`, `armpdys`, `armpgnp`, `armpgyp`, `armplnp`, `armplyc`, `armplyp`, `armpnnc`, `armpnnp`, `armpnyc`, `armpnyp`, `armsanp`, `armsayc`, `armsayp`, `armsdnp`, `armsdyp`, `armsgnp`, `armsgyp`, `armslnp`, `armslyp`, `armsnnp`, `armsnyp`, `armsnys`, `cc`, `cs`, `i`, `mcc0p0`, `mccfpa`, `mccmpn`, `mccmsa`, `mcs0p0`, `mcsfp0`, `mcsfpa`, `mcsfpd`, `mcsfpg`, `mcsfpl`, `mcsfpn`, `mcsfsa`, `mcsfsd`, `mcsfsg`, `mcsfsl`, `mcsfsn`, `mcsmpa`, `mcsmpd`, `mcsmpg`, `mcsmpl`, `mcsmpn`, `mcsmsa`, `mcsmsd`, `mcsmsg`, `mcsmsl`, `mcsmsn`, `mfcfsa`, `mfcfsg`, `mfcfsn`, `mfsmsg`, `mocfsg`, `mocmsg`, `mosfpa`, `mosfpd`, `mosfpg`, `mosfpl`, `mosfpn`, `mosfsa`, `mosfsd`, `mosfsg`, `mosfsl`, `mosfsn`, `mosmpa`, `mosmpd`, `mosmpg`, `mosmpl`, `mosmpn`, `mosmsa`, `mosmsd`, `mosmsg`, `mosmsl`, `mosmsn`, `n0msa1`, `nc0000`, `nc000g`, `nc00g1`, `nc00gg`, `ncfda4`, `ncfda5`, `ncfda6`, `ncfdd4`, `ncfdd5`, `ncfdd6`, `ncfdg4`, `ncfdg5`, `ncfdg6`, `ncfdgg`, `ncfdl4`, `ncfdl5`, `ncfdl6`, `ncfdn4`, `ncfdn5`, `ncfdn6`, `ncfpa4`, `ncfpa5`, `ncfpa6`, `ncfpar`, `ncfpd4`, `ncfpd5`, `ncfpd6`, `ncfpdr`, `ncfpg1`, 
`ncfpg2`, `ncfpg4`, `ncfpg5`, `ncfpg6`, `ncfpgg`, `ncfpl4`, `ncfpl5`, `ncfpl6`, `ncfpn1`, `ncfpn4`, `ncfpn5`, `ncfpn6`, `ncfpnr`, `ncfsa1`, `ncfsa2`, `ncfsa4`, `ncfsa5`, `ncfsa6`, `ncfsar`, `ncfsd4`, `ncfsd5`, `ncfsd6`, `ncfsg1`, `ncfsg4`, `ncfsg5`, `ncfsg6`, `ncfsgg`, `ncfsgr`, `ncfsl1`, `ncfsl4`, `ncfsl5`, `ncfsl6`, `ncfslr`, `ncfsn4`, `ncfsn5`, `ncfsn6`, `ncfsnr`, `ncfsv4`, `ncfsv5`, `ncfva4`, `ncfva5`, `ncfvd5`, `ncfvg4`, `ncfvg5`, `ncfvl4`, `ncfvl5`, `ncfvn5`, `ncm000`, `ncmda1`, `ncmda2`, `ncmda6`, `ncmdd1`, `ncmdd2`, `ncmdd3`, `ncmdd6`, `ncmdg1`, `ncmdg2`, `ncmdg3`, `ncmdg6`, `ncmdgg`, `ncmdl1`, `ncmdl2`, `ncmdn1`, `ncmdn2`, `ncmdn6`, `ncmpa1`, `ncmpa2`, `ncmpa3`, `ncmpa4`, `ncmpd1`, `ncmpd2`, `ncmpd3`, `ncmpd5`, `ncmpg1`, `ncmpg2`, `ncmpg3`, `ncmpg4`, `ncmpg5`, `ncmpg6`, `ncmpgg`, `ncmpl1`, `ncmpl2`, `ncmpl3`, `ncmpl4`, `ncmpn0`, `ncmpn1`, `ncmpn2`, `ncmpn3`, `ncmpn4`, `ncmpn5`, `ncmpv1`, `ncmpv2`, `ncmsa1`, `ncmsa2`, `ncmsa3`, `ncmsa4`, `ncmsa5`, `ncmsd1`, `ncmsd2`, `ncmsd3`, `ncmsd4`, `ncmsg0`, `ncmsg1`, `ncmsg2`, `ncmsg3`, `ncmsg4`, `ncmsgg`, `ncmsgr`, `ncmsl1`, `ncmsl2`, `ncmsl3`, `ncmsl4`, `ncmsl5`, `ncmsn1`, `ncmsn2`, `ncmsn3`, `ncmsn4`, `ncmsnr`, `ncmsv1`, `ncmsv2`, `ncmva1`, `ncmva3`, `ncmvd1`, `ncmvd3`, `ncmvg1`, `ncmvg3`, `ncmvl1`, `ncmvl3`, `ncmvn1`, `ncmvn3`, `np0000`, `npfda4`, `npfdd4`, `npfdd6`, `npfdg1`, `npfdg4`, `npfdg6`, `npfdl4`, `npfdl6`, `npfdn4`, `npfdn5`, `npfdn6`, `npfpa5`, `npfpd5`, `npfpg2`, `npfpg4`, `npfpn4`, `npfpn5`, `npfsa4`, `npfsa5`, `npfsa6`, `npfsd4`, `npfsd5`, `npfsg1`, `npfsg3`, `npfsg4`, `npfsg5`, `npfsg6`, `npfsl4`, `npfsl5`, `npfsl6`, `npfsn3`, `npfsn4`, `npfsn5`, `npfsn6`, `npfsv4`, `npfsv5`, `npmda1`, `npmda2`, `npmdd1`, `npmdd2`, `npmdg1`, `npmdg2`, `npmdl1`, `npmdl2`, `npmdn1`, `npmdn2`, `npmpa1`, `npmpd1`, `npmpd2`, `npmpg1`, `npmpg2`, `npmpgg`, `npmpl1`, `npmpl2`, `npmpn1`, `npmpn2`, `npmsa1`, `npmsa2`, `npmsa3`, `npmsa4`, `npmsa5`, `npmsd1`, `npmsd2`, `npmsd3`, `npmsd4`, `npmsd5`, `npmsg0`, `npmsg1`, `npmsg2`, `npmsg3`, `npmsg4`, `npmsg5`, `npmsl1`, `npmsl2`, `npmsn1`, `npmsn2`, `npmsn3`, `npmsn4`, `npmsn5`, `npmsv1`, `npmsv2`, `pd0fpan`, `pd0fpdn`, `pd0fpgn`, `pd0fpln`, `pd0fpnn`, `pd0fsan`, `pd0fsdn`, `pd0fsgn`, `pd0fsln`, `pd0fsnn`, `pd0mpan`, `pd0mpdn`, `pd0mpgn`, `pd0mpln`, `pd0mply`, `pd0mpnn`, `pd0msan`, `pd0msdn`, `pd0msgn`, `pd0msln`, `pd0msnn`, `pd3fpan`, `pd3fpdn`, `pd3fpgn`, `pd3fpln`, `pd3fpnn`, `pd3fsan`, `pd3fsdn`, `pd3fsgn`, `pd3fsln`, `pd3fsnn`, `pd3mpan`, `pd3mpdn`, `pd3mpgn`, `pd3mpln`, `pd3mpnn`, `pd3msan`, `pd3msdn`, `pd3msgn`, `pd3msln`, `pd3msnn`, `pg0fpan`, `pg0fpdn`, `pg0fpgn`, `pg0fpln`, `pg0fpnn`, `pg0fsan`, `pg0fsdn`, `pg0fsgn`, `pg0fsln`, `pg0fsnn`, `pg0mpan`, `pg0mpdn`, `pg0mpgn`, `pg0mpln`, `pg0mpnn`, `pg0msan`, `pg0msdn`, `pg0msgn`, `pg0msln`, `pg0msnn`, `pi000an`, `pi000ay`, `pi000dn`, `pi000dy`, `pi000gn`, `pi000gy`, `pi000nn`, `pi000ny`, `pi0fpan`, `pi0fpay`, `pi0fpdn`, `pi0fpgn`, `pi0fpgy`, `pi0fpln`, `pi0fply`, `pi0fpnn`, `pi0fpny`, `pi0fsan`, `pi0fsay`, `pi0fsdn`, `pi0fsgn`, `pi0fsgy`, `pi0fsln`, `pi0fsnn`, `pi0fsny`, `pi0mpan`, `pi0mpay`, `pi0mpdn`, `pi0mpgn`, `pi0mpgy`, `pi0mpln`, `pi0mpnn`, `pi0mpny`, `pi0msan`, `pi0msay`, `pi0msdn`, `pi0msdy`, `pi0msgn`, `pi0msgy`, `pi0msln`, `pi0msly`, `pi0msnn`, `pi0msny`, `pi3msnn`, `pp10pan`, `pp10pdn`, `pp10pgn`, `pp10pln`, `pp10pnn`, `pp10san`, `pp10sdn`, `pp10sgn`, `pp10sln`, `pp10snn`, `pp1mpgn`, `pp20pan`, `pp20pdn`, `pp20pgn`, `pp20pnn`, `pp20san`, `pp20sdn`, `pp20sgn`, `pp20sln`, `pp20snn`, `pp2fsln`, `pp3fpan`, `pp3fpdn`, `pp3fpgn`, `pp3fpnn`, 
`pp3fsan`, `pp3fsdn`, `pp3fsgn`, `pp3fsln`, `pp3fsnn`, `pp3mpan`, `pp3mpdn`, `pp3mpgn`, `pp3mpln`, `pp3mpnn`, `pp3msan`, `pp3msdn`, `pp3msgn`, `pp3msln`, `pp3msnn`, `pq000an`, `pq000dn`, `pq000gn`, `pq000nn`, `pq0fpan`, `pq0fpnn`, `pq0fsnn`, `pq0mpnn`, `pq0msan`, `pq0msdn`, `pq0msln`, `pq0msnn`, `pr000an`, `pr000dn`, `pr000gn`, `pr000nn`, `pr00pgn`, `pr0fpan`, `pr0fpdn`, `pr0fpgn`, `pr0fpln`, `pr0fpnn`, `pr0fsan`, `pr0fsdn`, `pr0fsgn`, `pr0fsln`, `pr0fsnn`, `pr0mpan`, `pr0mpdn`, `pr0mpgn`, `pr0mpln`, `pr0mpnn`, `pr0msan`, `pr0msdn`, `pr0msgn`, `pr0msln`, `pr0msnn`, `ps0fpan`, `ps0fpdn`, `ps0fpgn`, `ps0fpln`, `ps0fpnn`, `ps0fsan`, `ps0fsdn`, `ps0fsgn`, `ps0fsln`, `ps0fsnn`, `ps0mpan`, `ps0mpdn`, `ps0mpgn`, `ps0mpln`, `ps0mpnn`, `ps0msan`, `ps0msdn`, `ps0msgn`, `ps0msln`, `ps0msnn`, `ps10sgn`, `ps1mpnn`, `ps1msgn`, `ps1msnn`, `ps2fsnn`, `px000an`, `px000dn`, `px000gn`, `px000ln`, `q`, `r0c`, `r0m`, `r0p`, `r0q`, `r0t`, `rcc`, `rcm`, `rcp`, `rcq`, `rct`, `rpc`, `rpm`, `rpp`, `rpq`, `rpt`, `rrm`, `rrp`, `rrt`, `rsm`, `rsp`, `rsq`, `rst`, `sp00`, `sppd`, `sppg`, `spsa`, `spsd`, `spsg`, `stpg`, `stsg`, `vcnc0ii00an`, `vcnc0ii00ay`, `vcnd0ii00an`, `vcnifi130an`, `vcnifii1pan`, `vcnifii1pay`, `vcnifii1san`, `vcnifii1say`, `vcnifii2pan`, `vcnifii2pay`, `vcnifii2san`, `vcnifii2say`, `vcnifii30an`, `vcnifii30ay`, `vcnipii1pan`, `vcnipii1pay`, `vcnipii1san`, `vcnipii1say`, `vcnipii2pan`, `vcnipii2pay`, `vcnipii2san`, `vcnipii2say`, `vcnipii30an`, `vcnipii30ay`, `vcnisii1pan`, `vcnisii1pay`, `vcnisii1san`, `vcnisii1say`, `vcnisii2pay`, `vcnisii30an`, `vcnisii30ay`, `vcnist330an`, `vcnm0ii2pan`, `vcnm0ii2san`, `vcnn0ii000n`, `vcnn0ii000y`, `vcnn0ii00an`, `vcnn0t3000n`, `vcnpdfpnasnpn`, `vcnpdfsaasypn`, `vcnpdfsgapypn`, `vcnpdfsnapnpn`, `vcnpdfsnasnpn`, `vcnpdmplasypn`, `vcnpdmpnasnpn`, `vcnpdmsaasnpy`, `vcnpdmsaasypn`, `vcnpdmsnasn0n`, `vcnpdmsnasnpn`, `vcnppfsn0000n`, `vcnppmpn0000n`, `vcnppmsn0000n`, `vcnpu0000000n`, `vcnrfii00an`, `vcnrpii00an`, `vcnrpii00ay`, `venipi130an`, `venipi130ay`, `venisi130an`, `veyifii30an`, `veyipi130an`, `veyipi130ay`, `veyipi330an`, `veyipii30an`, `veyipii30ay`, `veyisi130an`, `veyisi330an`, `veyisii30an`, `veyisii30ay`, `veypdmpnasnpn`, `veypdmsnasnpn`, `vgnpdmsgapypn`, `vmnc0i100an`, `vmnc0i100ay`, `vmnc0i10say`, `vmnc0i200an`, `vmnc0i300an`, `vmnc0i300ay`, `vmnc0ii000n`, `vmnc0ii00an`, `vmnc0ii00ay`, `vmnc0t100an`, `vmnc0t100ay`, `vmnc0t200an`, `vmnc0t200ay`, `vmnc0t300an`, `vmnc0t300ay`, `vmnc0ti00an`, `vmnd0i100an`, `vmnd0i200an`, `vmnd0i300an`, `vmnd0ii00an`, `vmnd0t100an`, `vmnd0t130an`, `vmnd0t200an`, `vmnd0t300an`, `vmnd0ti00an`, `vmnd0ti00pn`, `vmnifi11pan`, `vmnifi11pay`, `vmnifi11san`, `vmnifi11say`, `vmnifi12pan`, `vmnifi12san`, `vmnifi130an`, `vmnifi130ay`, `vmnifi13san`, `vmnifi21pan`, `vmnifi21san`, `vmnifi21say`, `vmnifi22san`, `vmnifi230an`, `vmnifi230ay`, `vmnifi31pan`, `vmnifi32san`, `vmnifi32say`, `vmnifi330an`, `vmnifi330ay`, `vmnifii1san`, `vmnifii2san`, `vmnifii30an`, `vmnifii30ay`, `vmnift11pan`, `vmnift11pay`, `vmnift11san`, `vmnift11say`, `vmnift12pan`, `vmnift12san`, `vmnift12say`, `vmnift130an`, `vmnift130ay`, `vmnift21pan`, `vmnift21pay`, `vmnift21san`, `vmnift21say`, `vmnift22pan`, `vmnift22pay`, `vmnift22san`, `vmnift22say`, `vmnift230an`, `vmnift230ay`, `vmnift31pan`, `vmnift31pay`, `vmnift31san`, `vmnift31say`, `vmnift32pan`, `vmnift32san`, `vmnift32say`, `vmnift330an`, `vmnift330ay`, `vmnifti1san`, `vmnifti2san`, `vmnifti30an`, `vmnim0230an`, `vmnipi11pan`, `vmnipi11pay`, `vmnipi11san`, `vmnipi12pan`, `vmnipi12san`, `vmnipi130an`, 
`vmnipi130ay`, `vmnipi21pan`, `vmnipi21san`, `vmnipi22pan`, `vmnipi22pay`, `vmnipi22san`, `vmnipi230an`, `vmnipi230ay`, `vmnipi23san`, `vmnipi31pan`, `vmnipi31san`, `vmnipi31say`, `vmnipi32pan`, `vmnipi32san`, `vmnipi330an`, `vmnipi330ay`, `vmnipii1pan`, `vmnipii1san`, `vmnipii2pan`, `vmnipii2pay`, `vmnipii2san`, `vmnipii30an`, `vmnipii30ay`, `vmnipt110an`, `vmnipt11pan`, `vmnipt11pay`, `vmnipt11san`, `vmnipt11say`, `vmnipt12pan`, `vmnipt12san`, `vmnipt12say`, `vmnipt130an`, `vmnipt130ay`, `vmnipt21pan`, `vmnipt21pay`, `vmnipt21san`, `vmnipt21say`, `vmnipt22pan`, `vmnipt22san`, `vmnipt22say`, `vmnipt230an`, `vmnipt230ay`, `vmnipt23san`, `vmnipt31pan`, `vmnipt31pay`, `vmnipt31san`, `vmnipt31say`, `vmnipt32pan`, `vmnipt32san`, `vmnipt32say`, `vmnipt330an`, `vmnipt330ay`, `vmnipti1pan`, `vmnipti1san`, `vmnipti2pan`, `vmnipti30an`, `vmnipti30ay`, `vmnipti3san`, `vmnisi11pan`, `vmnisi11san`, `vmnisi11say`, `vmnisi12san`, `vmnisi130an`, `vmnisi130ay`, `vmnisi21pan`, `vmnisi21san`, `vmnisi22pan`, `vmnisi230an`, `vmnisi230ay`, `vmnisi31pan`, `vmnisi31san`, `vmnisi31say`, `vmnisi330an`, `vmnisi330ay`, `vmnisii1pan`, `vmnisii1pay`, `vmnisii1san`, `vmnisii2san`, `vmnisii30an`, `vmnisii30ay`, `vmnist11pan`, `vmnist11pay`, `vmnist11san`, `vmnist11say`, `vmnist12pan`, `vmnist12san`, `vmnist130an`, `vmnist130ay`, `vmnist21pan`, `vmnist21pay`, `vmnist21san`, `vmnist21say`, `vmnist230an`, `vmnist230ay`, `vmnist31pan`, `vmnist31pay`, `vmnist31san`, `vmnist31say`, `vmnist32pan`, `vmnist32san`, `vmnist32say`, `vmnist330an`, `vmnist330ay`, `vmnisti1san`, `vmnisti30an`, `vmnisti30ay`, `vmnm0i12pan`, `vmnm0i12pay`, `vmnm0i12san`, `vmnm0i12say`, `vmnm0i21san`, `vmnm0i22pan`, `vmnm0i22san`, `vmnm0i32pan`, `vmnm0i32san`, `vmnm0i32say`, `vmnm0ii1pan`, `vmnm0ii2pan`, `vmnm0ii2san`, `vmnm0t11san`, `vmnm0t12pan`, `vmnm0t12pay`, `vmnm0t12san`, `vmnm0t12say`, `vmnm0t130an`, `vmnm0t21san`, `vmnm0t21say`, `vmnm0t22pan`, `vmnm0t22san`, `vmnm0t22say`, `vmnm0t230an`, `vmnm0t31san`, `vmnm0t32pan`, `vmnm0t32pay`, `vmnm0t32san`, `vmnm0t32say`, `vmnm0ti2pan`, `vmnm0ti2san`, `vmnmpi130ay`, `vmnmpi32san`, `vmnmpii2pan`, `vmnmpt12pan`, `vmnmpt12say`, `vmnmpt130ay`, `vmnmpt22san`, `vmnmpt32pan`, `vmnmpt32san`, `vmnn0i1000n`, `vmnn0i1000y`, `vmnn0i100an`, `vmnn0i130an`, `vmnn0i2000n`, `vmnn0i2000y`, `vmnn0i200an`, `vmnn0i3000n`, `vmnn0i3000y`, `vmnn0i300an`, `vmnn0ii000n`, `vmnn0ii000y`, `vmnn0t1000n`, `vmnn0t1000y`, `vmnn0t100an`, `vmnn0t2000n`, `vmnn0t2000y`, `vmnn0t200an`, `vmnn0t3000n`, `vmnn0t3000y`, `vmnn0t300an`, `vmnn0ti000n`, `vmnn0ti00an`, `vmnpdfpaapnpn`, `vmnpdfpaapypn`, `vmnpdfpaasnpn`, `vmnpdfpaasypn`, `vmnpdfpappnpn`, `vmnpdfpappnpy`, `vmnpdfpappypn`, `vmnpdfpapsnpn`, `vmnpdfpapsnpy`, `vmnpdfpapsypn`, `vmnpdfpdapnpn`, `vmnpdfpdapnpy`, `vmnpdfpdapypn`, `vmnpdfpdapysn`, `vmnpdfpdasnpn`, `vmnpdfpdasypn`, `vmnpdfpdppnpn`, `vmnpdfpdppnpy`, `vmnpdfpdppypn`, `vmnpdfpdpsnpn`, `vmnpdfpdpsypn`, `vmnpdfpdpsypy`, `vmnpdfpgapncn`, `vmnpdfpgapypn`, `vmnpdfpgppnpn`, `vmnpdfpgppnpy`, `vmnpdfpgppypn`, `vmnpdfpgpsnpn`, `vmnpdfpgpsypn`, `vmnpdfplapnpn`, `vmnpdfplapypn`, `vmnpdfplasnpn`, `vmnpdfplasypn`, `vmnpdfplppnpy`, `vmnpdfplpsnpn`, `vmnpdfplpsypn`, `vmnpdfpnapn0n`, `vmnpdfpnapnpn`, `vmnpdfpnapnpy`, `vmnpdfpnapypn`, `vmnpdfpnasnpn`, `vmnpdfpnasypn`, `vmnpdfpnasypy`, `vmnpdfpnppnpn`, `vmnpdfpnppnpy`, `vmnpdfpnppypn`, `vmnpdfpnpsnpn`, `vmnpdfpnpsnpy`, `vmnpdfpnpsypn`, `vmnpdfpnpsypy`, `vmnpdfsaapn0n`, `vmnpdfsaapncn`, `vmnpdfsaapnpn`, `vmnpdfsaapnpy`, `vmnpdfsaapypn`, `vmnpdfsaasnpn`, `vmnpdfsaasypn`, `vmnpdfsappnpn`, `vmnpdfsappnpy`, 
`vmnpdfsappypn`, `vmnpdfsappypy`, `vmnpdfsapsncn`, `vmnpdfsapsnpn`, `vmnpdfsapsnpy`, `vmnpdfsapsypn`, `vmnpdfsdapnpn`, `vmnpdfsdapypn`, `vmnpdfsdasnpn`, `vmnpdfsdasypn`, `vmnpdfsdppnpn`, `vmnpdfsdppypn`, `vmnpdfsdpsnpn`, `vmnpdfsdpsnpy`, `vmnpdfsdpsypn`, `vmnpdfsgapnpn`, `vmnpdfsgapypn`, `vmnpdfsgasnpn`, `vmnpdfsgasypn`, `vmnpdfsgppnpn`, `vmnpdfsgppnpy`, `vmnpdfsgppypn`, `vmnpdfsgpsnpn`, `vmnpdfsgpsypn`, `vmnpdfsgpsypy`, `vmnpdfslapnpn`, `vmnpdfslapypn`, `vmnpdfslasnpn`, `vmnpdfslasypn`, `vmnpdfslppnpn`, `vmnpdfslppypn`, `vmnpdfslpsnpn`, `vmnpdfslpsypn`, `vmnpdfslpsypy`, `vmnpdfsnapnpn`, `vmnpdfsnapnpy`, `vmnpdfsnapypn`, `vmnpdfsnapysn`, `vmnpdfsnasn0n`, `vmnpdfsnasnpn`, `vmnpdfsnasnpy`, `vmnpdfsnasypn`, `vmnpdfsnppncn`, `vmnpdfsnppnpn`, `vmnpdfsnppnpy`, `vmnpdfsnppypn`, `vmnpdfsnppypy`, `vmnpdfsnpsncn`, `vmnpdfsnpsnpn`, `vmnpdfsnpsnpy`, `vmnpdfsnpsypn`, `vmnpdfsnpsypy`, `vmnpdmpaapnpn`, `vmnpdmpaapycn`, `vmnpdmpaapypn`, `vmnpdmpaasnpn`, `vmnpdmpaasypn`, `vmnpdmpappnpn`, `vmnpdmpappypn`, `vmnpdmpapsnpn`, `vmnpdmpapsnpy`, `vmnpdmpapsypn`, `vmnpdmpapsypy`, `vmnpdmpdapnpn`, `vmnpdmpdapypn`, `vmnpdmpdasnpn`, `vmnpdmpdasypn`, `vmnpdmpdppnpn`, `vmnpdmpdppycn`, `vmnpdmpdppypn`, `vmnpdmpdpsnpn`, `vmnpdmpdpsnpy`, `vmnpdmpdpsycn`, `vmnpdmpdpsypn`, `vmnpdmpdpsypy`, `vmnpdmpgapnpn`, `vmnpdmpgapypn`, `vmnpdmpgasnpn`, `vmnpdmpgasypn`, `vmnpdmpgppypn`, `vmnpdmpgpsnpn`, `vmnpdmpgpsypn`, `vmnpdmpgpsypy`, `vmnpdmplapnpn`, `vmnpdmplapypn`, `vmnpdmplpsnpn`, `vmnpdmplpsypn`, `vmnpdmpnapnpn`, `vmnpdmpnapypn`, `vmnpdmpnasnpn`, `vmnpdmpnasypn`, `vmnpdmpnppn0n`, `vmnpdmpnppnpn`, `vmnpdmpnppnpy`, `vmnpdmpnppypn`, `vmnpdmpnpsnpn`, `vmnpdmpnpsnpy`, `vmnpdmpnpsypn`, `vmnpdmpnpsypy`, `vmnpdmpvppypn`, `vmnpdmsaapnpn`, `vmnpdmsaapypn`, `vmnpdmsaasnpn`, `vmnpdmsaasypn`, `vmnpdmsappnpn`, `vmnpdmsappnpy`, `vmnpdmsappypn`, `vmnpdmsappypy`, `vmnpdmsapsnpn`, `vmnpdmsapsnpy`, `vmnpdmsapsypn`, `vmnpdmsapsypy`, `vmnpdmsdapnpn`, `vmnpdmsdapypn`, `vmnpdmsdasnpn`, `vmnpdmsdppnpn`, `vmnpdmsdppypn`, `vmnpdmsdppypy`, `vmnpdmsdpsnpn`, `vmnpdmsdpsypn`, `vmnpdmsdpsypy`, `vmnpdmsgapnpn`, `vmnpdmsgapypn`, `vmnpdmsgasnpn`, `vmnpdmsgasypn`, `vmnpdmsgppnpn`, `vmnpdmsgppy0n`, `vmnpdmsgppypn`, `vmnpdmsgppypy`, `vmnpdmsgpsnpn`, `vmnpdmsgpsycn`, `vmnpdmsgpsypn`, `vmnpdmsgpsypy`, `vmnpdmslapnpn`, `vmnpdmslapypn`, `vmnpdmslasnpn`, `vmnpdmslasypn`, `vmnpdmslppnpn`, `vmnpdmslppy0n`, `vmnpdmslppypn`, `vmnpdmslpsnpn`, `vmnpdmslpsypn`, `vmnpdmsnapnpn`, `vmnpdmsnapnpy`, `vmnpdmsnapypn`, `vmnpdmsnasn0n`, `vmnpdmsnasnpn`, `vmnpdmsnasnpy`, `vmnpdmsnasypn`, `vmnpdmsnppnpn`, `vmnpdmsnppnpy`, `vmnpdmsnppypn`, `vmnpdmsnppypy`, `vmnpdmsnpsnpn`, `vmnpdmsnpsnpy`, `vmnpdmsnpsycn`, `vmnpdmsnpsypn`, `vmnpdmsnpsypy`, `vmnppfpn0000y`, `vmnppfsn0000n`, `vmnppmpn0000n`, `vmnppmpnap00n`, `vmnppmpnap0pn`, `vmnppmpnap0py`, `vmnppmsn0000n`, `vmnpu0000000n`, `vmnpu0000000y`, `vmnpu000000pn`, `vmnpu00000n0n`, `vmnpu000apnpn`, `vmnpumpgpsnpn`, `vmnr0t100an`, `vmnr0t3000n`, `vmnrfi100an`, `vmnrft100an`, `vmnrft200an`, `vmnrft200ay`, `vmnrft300an`, `vmnrpi1000y`, `vmnrpi100an`, `vmnrpi2000n`, `vmnrpi200an`, `vmnrpi300an`, `vmnrpii00an`, `vmnrpii00ay`, `vmnrpt100an`, `vmnrpt100ay`, `vmnrpt200an`, `vmnrpt200ay`, `vmnrpt300an`, `vmnrpt300ay`, `vmyc0i100an`, `vmyc0i100ay`, `vmyc0i200an`, `vmyc0i200ay`, `vmyc0i300an`, `vmyc0i300ay`, `vmyc0t100an`, `vmyc0t200an`, `vmyc0t300an`, `vmyc0ti00an`, `vmyd0i100an`, `vmyd0i200an`, `vmyd0i300an`, `vmyd0ii00an`, `vmyd0t100an`, `vmyd0t200an`, `vmyd0t300an`, `vmyd0ti00an`, `vmyifi11pan`, `vmyifi11san`, `vmyifi11say`, `vmyifi12pan`, `vmyifi12san`, 
`vmyifi130an`, `vmyifi130ay`, `vmyifi21san`, `vmyifi230an`, `vmyifi230ay`, `vmyifi31pan`, `vmyifi31san`, `vmyifi31say`, `vmyifi32san`, `vmyifi330an`, `vmyifi330ay`, `vmyift11pan`, `vmyift130an`, `vmyift21san`, `vmyift31pan`, `vmyift32san`, `vmyift330an`, `vmyifti1san`, `vmyifti30an`, `vmyipi110ay`, `vmyipi11pan`, `vmyipi11san`, `vmyipi12pan`, `vmyipi12san`, `vmyipi12say`, `vmyipi130an`, `vmyipi130ay`, `vmyipi21pan`, `vmyipi21san`, `vmyipi21say`, `vmyipi22pan`, `vmyipi22san`, `vmyipi230an`, `vmyipi230ay`, `vmyipi31pan`, `vmyipi31san`, `vmyipi31say`, `vmyipi32pan`, `vmyipi32san`, `vmyipi330an`, `vmyipi330ay`, `vmyipii1pan`, `vmyipt11pan`, `vmyipt11san`, `vmyipt12san`, `vmyipt130an`, `vmyipt130ay`, `vmyipt21san`, `vmyipt22san`, `vmyipt230an`, `vmyipt31pan`, `vmyipt31san`, `vmyipt31say`, `vmyipt32pan`, `vmyipt32san`, `vmyipt32say`, `vmyipt330an`, `vmyipt330ay`, `vmyipti1pan`, `vmyipti1san`, `vmyipti2pan`, `vmyipti30an`, `vmyipti30ay`, `vmyisi11pan`, `vmyisi11san`, `vmyisi12san`, `vmyisi130an`, `vmyisi130ay`, `vmyisi13pan`, `vmyisi21pan`, `vmyisi21san`, `vmyisi22san`, `vmyisi230an`, `vmyisi230ay`, `vmyisi31pan`, `vmyisi31san`, `vmyisi31say`, `vmyisi32san`, `vmyisi330an`, `vmyisi330ay`, `vmyisii1san`, `vmyisii30an`, `vmyist11pan`, `vmyist11san`, `vmyist130an`, `vmyist21pan`, `vmyist230an`, `vmyist230ay`, `vmyist31pan`, `vmyist31san`, `vmyist32pan`, `vmyist330an`, `vmyist330ay`, `vmyisti1pan`, `vmyisti1san`, `vmyisti30an`, `vmyisti30ay`, `vmym0i11san`, `vmym0i12pan`, `vmym0i12san`, `vmym0i12say`, `vmym0i22pan`, `vmym0i22san`, `vmym0i22say`, `vmym0i32pan`, `vmym0i32pay`, `vmym0i32san`, `vmym0t22pan`, `vmym0t22san`, `vmym0t32pan`, `vmym0t32san`, `vmympi32san`, `vmympt32san`, `vmyn0i1000n`, `vmyn0i1000y`, `vmyn0i2000n`, `vmyn0i3000n`, `vmyn0i3000y`, `vmyn0ii000n`, `vmyn0ii00an`, `vmyn0t1000n`, `vmyn0t1000y`, `vmyn0t100an`, `vmyn0t2000n`, `vmyn0t3000n`, `vmyn0t3000y`, `vmyn0ti000n`, `vmypdfpaasnpn`, `vmypdfpnasnpn`, `vmypdfpnasnpy`, `vmypdfpnasypn`, `vmypdfpnppypn`, `vmypdfsaasnpn`, `vmypdfsaasnpy`, `vmypdfsnasn0n`, `vmypdfsnasnpn`, `vmypdmpaapnpn`, `vmypdmpaasypn`, `vmypdmpnasn0n`, `vmypdmpnasnpn`, `vmypdmsaapnpn`, `vmypdmsaasnpn`, `vmypdmsnasn0n`, `vmypdmsnasnpn`, `vmypdmsnasnpy`, `vmypdmsnpsnpn`, `vmyppf0n0000n`, `vmyppfsn0000n`, `vmyppfsn0000y`, `vmyppm0n0000n`, `vmyppmpn0000n`, `vmyppms00000n`, `vmyppmsn0000n`, `vmypu0000000n`, `vmypu0000000y`, `vmypu000000pn`, `vmypumsnasnpn`, `vmyrfi100an`, `vmyrpi200an`, `vmyrpi300an`, `vmyrpt100an`, `vmyrpt300an`, `vmyrpt300ay`, `vonc0i100an`, `vonc0i100ay`, `vonc0i300an`, `vonc0i300ay`, `vonc0t300ay`, `vond0i100an`, `vond0t300an`, `vondpi300an`, `vonifi11pay`, `vonifi12pay`, `vonifi130an`, `vonifi130ay`, `vonifi230an`, `vonifi31pan`, `vonifi31san`, `vonifi31say`, `vonifi32san`, `vonifi32say`, `vonifi330an`, `vonifi330ay`, `vonift31say`, `vonift32san`, `vonift330an`, `vonift330ay`, `vonipi11pan`, `vonipi11pay`, `vonipi11san`, `vonipi11say`, `vonipi12pan`, `vonipi130an`, `vonipi130ay`, `vonipi21pan`, `vonipi230an`, `vonipi230ay`, `vonipi300ay`, `vonipi31pan`, `vonipi31pay`, `vonipi31san`, `vonipi31say`, `vonipi32pan`, `vonipi32pay`, `vonipi32san`, `vonipi32say`, `vonipi330an`, `vonipi330ay`, `vonipii30an`, `vonipt130an`, `vonipt230an`, `vonipt31pan`, `vonipt31pay`, `vonipt31san`, `vonipt31say`, `vonipt32pan`, `vonipt32san`, `vonipt330an`, `vonipt330ay`, `vonisi11san`, `vonisi11say`, `vonisi130an`, `vonisi130ay`, `vonisi230an`, `vonisi31pan`, `vonisi31pay`, `vonisi31san`, `vonisi31say`, `vonisi32pan`, `vonisi330an`, `vonisi330ay`, `vonist130an`, `vonist330an`, 
`vonist330ay`, `vonm0i32san`, `vonmpi32san`, `vonn0i3000n`, `vonn0t3000n`, `vonpdfpn00npy`, `vonpdfpnasnpn`, `vonpdfsnasnpn`, `vonpdfsnasnpy`, `vonpdmpnasnpn`, `vonpdmsnasnpn`, `vonpdmsnpsnpn`, `vonpdmsnpsypn`, `vonppfsn0000n`, `vonppmsn0000n`, `vonppmsn0000y`, `vonpu0000000n`, `vonpu0000000y`, `vonrft300an`, `vonrpi100ay`, `vonrpi300an`, `vonrpi300ay`, `vonrpt300an`, `vonrpt300ay`, `voyc0i100an`, `voyc0i100ay`, `voyc0i300an`, `voyc0i300ay`, `voyc0t300an`, `voyd0i100an`, `voyifi12san`, `voyifi130an`, `voyifi330an`, `voyifi330ay`, `voyifii30an`, `voyipi11pan`, `voyipi11san`, `voyipi11say`, `voyipi130an`, `voyipi130ay`, `voyipi230ay`, `voyipi300ay`, `voyipi31pan`, `voyipi31san`, `voyipi31say`, `voyipi32pan`, `voyipi330an`, `voyipi330ay`, `voyipii30an`, `voyipt11pan`, `voyipt130an`, `voyipt31san`, `voyipt32san`, `voyipt330an`, `voyipt330ay`, `voyisi11pan`, `voyisi11san`, `voyisi11say`, `voyisi130an`, `voyisi230an`, `voyisi31san`, `voyisi31say`, `voyisi330an`, `voyisi330ay`, `voyist11san`, `voyist330an`, `voym0i12pay`, `voyn0i1000n`, `voyn0i3000n`, `voyn0t1000n`, `voyp0msnap00n`, `voypdfsnasnpn`, `voypdmpnasnpn`, `voypdmsnasnpn`, `voypdmsnasnpy`, `voypu0000000n`, `voyrfi100an`, `voyrpi100an`, `voyrpi300ay`, `vpnc0i100an`, `vpnc0i300an`, `vpnd0i100an`, `vpnd0t100an`, `vpnifi12san`, `vpnifi130an`, `vpnifi31pan`, `vpnifi330an`, `vpnift130an`, `vpnift31pan`, `vpnipi11pan`, `vpnipi11pay`, `vpnipi11san`, `vpnipi130an`, `vpnipi130ay`, `vpnipi330an`, `vpnipt11pan`, `vpnipt11san`, `vpnipt130an`, `vpnipt31pan`, `vpnisi11pan`, `vpnisi11san`, `vpnisi11say`, `vpnisi130an`, `vpnisi130ay`, `vpnisi230an`, `vpnisi31san`, `vpnisi330an`, `vpnist11san`, `vpnist130an`, `vpnist330an`, `vpnisti30an`, `vpnm0i12san`, `vpnm0i32san`, `vpnm0t32san`, `vpnn0i1000n`, `vpnn0i3000n`, `vpnn0t1000n`, `vpnn0t3000n`, `vpnpdfpnasnpn`, `vpnpdfsgasypn`, `vpnpdfsnasnpn`, `vpnpdmpnasnpn`, `vpnpdmsnasnpn`, `vpnpdmsnpsnpn`, `vpnppmsn0000n`, `vpnpu0000000n`, `vpyifi130an`, `vpyipi130an`, `vpyisi130an`, `vtnc0i100an`, `vtnc0i100ay`, `vtnc0t200an`, `vtnd0i100an`, `vtnifi11pay`, `vtnifi11san`, `vtnifi130an`, `vtnifi130ay`, `vtnift130an`, `vtnipi11pan`, `vtnipi11san`, `vtnipi130an`, `vtnipi130ay`, `vtnipi230an`, `vtnipii30an`, `vtnipt230an`, `vtnipt330an`, `vtnisi11san`, `vtnisi12san`, `vtnisi130an`, `vtnisi130ay`, `vtnist330an`, `vtnn0i1000n`, `vtnn0i100an`, `vtnn0t1000n`, `vtnpdfpnasnpn`, `vtnpdfsnasnpn`, `vtnpdmpnasnpn`, `vtnpdmsnasnpn`, `vtnppmsn0000n`, `vtnpu0000000n`, `vtnrpi100an`, `vtyc0i300ay`, `vtyifi330an`, `vtyipi11san`, `vtyipi130an`, `vtyipi130ay`, `vtyipi330an`, `vtyipi330ay`, `vtyipt11pay`, `vtyipt11say`, `vtyipt130an`, `vtyipt330an`, `vtyisi11san`, `vtyisi130an`, `vtyisi330an`, `vtyist11pan`, `vtyist11san`, `vtyist130an`, `vtyist330an`, `vtyn0i1000n`, `vtyn0i3000n`, `vtyn0t1000n`, `vtyn0t3000n`, `vtypdfsnasnpn`, `vtypdmsnasnpn`, `xf`, `xn`, `xo`, `xu`, `xx`, `ya`, `yd`, `yn`, `yp`, `yr`, `yv`, `z_`, `zb`, `zc`, `zd`, `zo`, `zq`, `zs`, `zx` | | **`morphologizer`** | `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=PUNCT`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PART`, `POS=CCONJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, 
`Evident=Fh\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Pos\|POS=ADV`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADV`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `NumType=Card\|POS=NUM`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Coll\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADV\|PronType=Dem`, `POS=ADV\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Fem\|Number=Ptan\|POS=PROPN`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Ptan\|POS=PROPN`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|Typo=Yes`, `POS=SCONJ`, `Mood=Cnd\|POS=VERB\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|POS=PRON\|PronType=Rel`, `POS=AUX\|Polarity=Pos\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `POS=VERB\|Polarity=Pos\|VerbForm=Inf`, 
`Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Ptan\|POS=NOUN`, `POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|POS=PRON\|PronType=Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=CCONJ\|Polarity=Neg`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Coll\|POS=NOUN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Cmp\|POS=ADV`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|POS=PROPN`, 
`Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Ptan\|POS=NOUN`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Reflex=Yes\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=ADV\|PronType=Neg`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Ptan\|POS=NOUN`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Inf`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|POS=PRON\|PronType=Ind,Neg`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|POS=PRON\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Cnd\|POS=VERB\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=ADV\|PronType=Ind`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Ptan\|POS=NOUN`, `Aspect=Imp\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Ptan\|POS=NOUN`, `Case=Loc\|Gender=Masc\|Number=Coll\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Evident=Nfh\|Mood=Qot\|POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Evident=Nfh\|Mood=Qot\|POS=AUX\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Ptan\|POS=NOUN`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Mood=Cnd\|POS=AUX\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|POS=AUX\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Mood=Nec\|POS=VERB\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Loc\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PART\|Polarity=Neg`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|POS=PRON\|PronType=Int`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `POS=VERB\|Polarity=Neg\|Reflex=Yes\|VerbForm=Conv`, `Evident=Fh\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind,Neg`, `Mood=Nec\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind,Neg`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|POS=PRON\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Imp\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Ptan\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Ptan\|POS=NOUN`, `Case=Acc\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|Polarity=Neg\|VerbForm=Inf`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Fem\|NumType=Frac\|Number=Sing\|POS=NUM`, `Mood=Cnd\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Imp\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, 
`Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Ptan\|POS=NOUN`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Ptan\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel\|Typo=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `NumType=Ord\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=PROPN`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Gender=Masc\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, 
`Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Mood=Nec\|POS=AUX\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Aspect=Imp\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Coll\|POS=NOUN`, `Abbr=Yes\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Coll\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Coll\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `POS=SYM`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Foreign=Yes\|POS=X\|Typo=Yes`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `POS=CCONJ\|Typo=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, 
`Case=Dat\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Typo=Yes`, `POS=X\|Typo=Yes`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|POS=SYM`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Mood=Cnd\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Foreign=Yes\|POS=X`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel\|Typo=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|POS=NOUN`, `Aspect=Imp\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem\|Typo=Yes`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, 
`Aspect=Perf\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem\|Typo=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Ptan\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Gen\|POS=DET\|PronType=Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `POS=PART\|Typo=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `POS=ADV\|PronType=Int,Neg`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|POS=PRON\|PronType=Ind,Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|POS=DET\|PronType=Ind,Neg`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, 
`Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind,Neg`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Typo=Yes`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Nom\|POS=PRON\|PronType=Int`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind,Neg`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|POS=PRON\|PronType=Int`, `Case=Gen\|POS=PRON\|PronType=Int`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=ADV\|PronType=Tot`, `Aspect=Imp\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind,Neg`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind,Neg`, 
`Aspect=Imp\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Sup\|POS=ADV`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|POS=DET\|PronType=Ind`, `Case=Acc\|POS=PRON\|PronType=Ind,Neg`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Case=Dat\|Gender=Masc\|Number=Ptan\|POS=PROPN`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|VerbForm=Conv`, `POS=INTJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind,Neg`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind,Neg`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Loc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=PART\|Polarity=Pos`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Coll\|POS=NOUN`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, 
`Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `POS=ADV\|Typo=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Nec\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Reflex=Yes\|VerbForm=Fin\|Voice=Act`, `NumType=Mult\|POS=ADV`, `Aspect=Imp\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Masc\|Number=Ptan\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind,Neg`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Evident=Nfh\|Mood=Qot\|POS=AUX\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Aspect=Imp\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|POS=PRON\|PronType=Ind`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|Typo=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADP\|Typo=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv`, `Evident=Fh\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN\|PronType=Int`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind,Neg`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, 
`Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Imp\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind,Neg`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind,Neg`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind,Neg`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Typo=Yes`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|POS=DET\|PronType=Rel`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|Typo=Yes`, `Case=Nom\|POS=DET\|PronType=Ind,Neg`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind,Neg`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=AUX\|Polarity=Pos\|VerbForm=Conv`, `Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|POS=DET\|PronType=Int`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `POS=VERB\|Polarity=Neg\|Reflex=Yes\|VerbForm=Inf`, `Aspect=Imp\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Ptan\|POS=PROPN`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind,Neg`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Inf\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN\|PronType=Dem`, `Aspect=Imp\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Dat\|POS=DET\|PronType=Rel`, `POS=VERB\|Polarity=Pos\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=AUX\|Polarity=Pos\|VerbForm=Conv`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|POS=ADV`, `Aspect=Imp\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Aspect=Imp\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Definite=Ind\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Act`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Coll\|POS=NOUN\|PronType=Int`, `POS=VERB\|Polarity=Pos\|Typo=Yes\|VerbForm=Inf`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Mood=Nec\|POS=AUX\|Polarity=Pos\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, 
`Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Ptan\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Loc\|Gender=Fem\|Number=Coll\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind,Neg`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem\|Typo=Yes`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `POS=AUX\|Polarity=Pos\|VerbForm=Inf\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Mood=Cnd\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel\|Typo=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `POS=PUNCT\|Typo=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|VerbForm=Conv`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, 
`Case=Dat\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|POS=PRON\|PronType=Rel`, `Mood=Imp\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `POS=SCONJ\|Typo=Yes`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind,Neg`, `Aspect=Imp\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Gender=Fem\|Number=Ptan\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|POS=ADV\|Typo=Yes`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Typo=Yes`, `Abbr=Yes\|POS=SYM\|Typo=Yes`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=X`, `POS=ADV\|PronType=Neg\|Typo=Yes`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, 
`Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind,Neg`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `POS=AUX\|Polarity=Pos\|VerbForm=Conv`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Conv\|Voice=Act`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Mood=Nec\|POS=VERB\|Polarity=Pos\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Aspect=Imp\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Nec\|POS=VERB\|Polarity=Pos\|VerbForm=Fin\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Reflex=Yes\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|Polarity=Pos\|Typo=Yes\|VerbForm=Inf\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, 
`Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind,Neg`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `POS=ADV\|PronType=Ind,Neg`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Pos\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=VERB\|Polarity=Pos\|Reflex=Yes\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN\|PronType=Dem`, `Aspect=Imp\|Case=Acc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Evident=Nfh\|Mood=Qot\|POS=VERB\|Polarity=Pos\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Aspect=Imp\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, 
`Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Ptan\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Reflex=Yes\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Degree=Cmp\|POS=ADV\|Typo=Yes`, `POS=NOUN\|Typo=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem\|Typo=Yes`, `Case=Acc\|Gender=Masc\|Number=Ptan\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Ptan\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem\|Typo=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Typo=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN\|Typo=Yes`, `POS=AUX\|Polarity=Neg\|VerbForm=Inf`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Dat\|Gender=Masc\|Number=Coll\|POS=NOUN`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Mood=Cnd\|POS=VERB\|Polarity=Pos\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Ptan\|POS=NOUN\|Typo=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Mood=Nec\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Gender=Fem\|NumType=Frac\|Number=Sing\|POS=NUM`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|NumType=Frac\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Case=Acc\|Gender=Fem\|Number=Ptan\|POS=PROPN`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Aspect=Imp\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Conv\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Voice=Act`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Typo=Yes`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem,Neg`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|POS=DET\|PronType=Ind,Neg`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, 
`Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Conv\|Voice=Act`, `Aspect=Imp\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=ADV\|PronType=Int\|Typo=Yes`, `Case=Dat\|POS=PRON\|PronType=Ind,Neg`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Evident=Fh\|Mood=Ind\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|Typo=Yes`, `Evident=Fh\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Typo=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind,Neg`, `Case=Nom\|Gender=Fem\|Number=Coll\|POS=NOUN`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Typo=Yes`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Evident=Fh\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Reflex=Yes\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|POS=VERB\|Polarity=Pos\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, 
`Abbr=Yes\|POS=VERB`, `Aspect=Imp\|Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `NumType=Ord\|POS=ADJ\|Typo=Yes`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN\|PronType=Neg`, `Aspect=Perf\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Reflex=Yes\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind,Neg`, `Aspect=Perf\|Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part\|Voice=Act` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `dislocated`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `0`, `2`, `4`, `6`, `8`, `10`, `11`, `13`, `15`, `18`, `20`, `22`, `26`, `28`, `31`, `34`, `37`, `39`, `41`, `43`, `45`, `47`, `49`, `52`, `54`, `56`, `58`, `60`, `61`, `64`, `66`, `67`, `69`, `71`, `73`, `74`, `76`, `78`, `80`, `83`, `85`, `86`, `87`, `89`, `91`, `93`, `95`, `97`, `98`, `101`, `104`, `107`, `109`, `110`, `112`, `113`, `116`, `119`, `122`, `124`, `126`, `128`, `131`, `134`, `138`, `140`, `142`, `145`, `147`, `149`, `152`, `153`, `155`, `157`, `160`, `163`, `164`, `166`, `168`, `172`, `175`, `177`, `179`, `181`, `183`, `184`, `187`, `190`, `193`, `195`, `196`, `198`, `200`, `201`, `203`, `205`, `208`, `209`, `210`, `212`, `214`, `218`, `220`, `222`, `224`, `227`, `230`, `233`, `235`, `237`, `239`, `243`, `245`, `246`, `248`, `250`, `251`, `253`, `255`, `256`, `259`, `260`, `262`, `265`, `269`, `272`, `274`, `275`, `276`, `278`, `281`, `283`, `287`, `291`, `293`, `295`, `298`, `300`, `303`, `305`, `306`, `308`, `311`, `313`, `314`, `316`, `319`, `322`, `324`, `326`, `328`, `329`, `332`, `333`, `335`, `337`, `339`, `341`, `343`, `345`, `348`, `349`, `350`, `352`, `354`, `355`, `358`, `359`, `361`, `362`, `365`, `368`, `370`, `372`, `374`, `376`, `377`, `379`, `381`, `382`, `384`, `387`, `389`, `390`, `392`, `396`, `398`, `400`, `401`, `405`, `408`, `409`, `410`, `412`, `415`, `417`, `419`, `420`, `422`, `425`, `426`, `428`, `430`, `432`, `434`, `436`, `438`, `439`, `440`, `443`, `445`, `447`, `448`, `450`, `452`, `454`, `455`, `458`, `461`, `462`, `464`, `466`, `468`, `469`, `471`, `473`, `474`, `476`, `477`, `480`, `483`, `484`, `485`, `487`, `488`, `491`, `494`, `495`, `497`, `498`, `499`, `500`, `501`, `502`, `504`, `506`, `508`, `510`, `511`, `512`, `513`, `515`, `517`, `518`, `519`, `520`, `524`, `525`, `527`, `529`, `532`, `535`, `536`, `537`, `539`, `540`, `541`, `543`, `546`, `548`, `549`, `550`, `551`, `553`, `555`, `556`, `560`, `562`, `564`, `566`, `567`, `569`, `571`, `572`, `575`, `577`, `579`, `582`, `584`, `585`, `586`, `587`, `588`, `593`, `595`, `596`, `599`, `601`, `603`, `605`, `607`, `610`, `613`, `616`, `619`, `622`, `624`, `625`, `627`, `629`, `631`, `633`, `636`, 
`638`, `640`, `642`, `644`, `645`, `646`, `650`, `651`, `653`, `655`, `657`, `660`, `662`, `665`, `667`, `668`, `670`, `673`, `676`, `678`, `680`, `681`, `682`, `684`, `687`, `688`, `690`, `691`, `692`, `693`, `694`, `697`, `699`, `700`, `701`, `702`, `705`, `706`, `708`, `709`, `712`, `714`, `715`, `718`, `721`, `723`, `725`, `726`, `728`, `729`, `731`, `732`, `733`, `735`, `736`, `737`, `738`, `739`, `741`, `743`, `745`, `746`, `748`, `749`, `750`, `751`, `753`, `754`, `755`, `756`, `758`, `759`, `760`, `761`, `762`, `764`, `766`, `767`, `768`, `771`, `773`, `774`, `775`, `776`, `777`, `778`, `779`, `781`, `783`, `785`, `786`, `787`, `790`, `791`, `792`, `793`, `795`, `796`, `798`, `799`, `800`, `801`, `804`, `805`, `806`, `807`, `808`, `809`, `810`, `811`, `812`, `813`, `814`, `816`, `823`, `825`, `826`, `828`, `830`, `831`, `832`, `835`, `838`, `839`, `840`, `842`, `843`, `844`, `846`, `847`, `849`, `851`, `853`, `855`, `856`, `858`, `859`, `861`, `862`, `864`, `865`, `866`, `869`, `870`, `871`, `873`, `876`, `878`, `879`, `880`, `881`, `882`, `883`, `886`, `888`, `889`, `891`, `894`, `897`, `898`, `899`, `900`, `901`, `904`, `907`, `908`, `909`, `912`, `914`, `916`, `917`, `919`, `921`, `922`, `923`, `925`, `928`, `930`, `932`, `933`, `936`, `937`, `938`, `940`, `942`, `943`, `944`, `946`, `947`, `949`, `951`, `953`, `955`, `956`, `957`, `961`, `963`, `966`, `967`, `968`, `969`, `972`, `974`, `976`, `977`, `979`, `981`, `982`, `983`, `986`, `988`, `989`, `990`, `991`, `995`, `998`, `999`, `1002`, `1004`, `1007`, `1008`, `1009`, `1012`, `1015`, `1016`, `1018`, `1019`, `1021`, `1024`, `1027`, `1028`, `1031`, `1034`, `1035`, `1037`, `1039`, `1041`, `1043`, `1044`, `1046`, `1049`, `1051`, `1053`, `1054`, `1056`, `1058`, `1060`, `1061`, `1062`, `1064`, `1065`, `1067`, `1069`, `1070`, `1071`, `1073`, `1074`, `1075`, `1076`, `1078`, `1079`, `1082`, `1083`, `1085`, `1088`, `1089`, `1092`, `1095`, `1097`, `1099`, `1100`, `1102`, `1104`, `1105`, `1108`, `1110`, `1114`, `1116`, `1117`, `1119`, `1121`, `1123`, `1127`, `1128`, `1129`, `1130`, `1131`, `1133`, `1135`, `1137`, `1139`, `1140`, `1142`, `1143`, `1145`, `1147`, `1149`, `1150`, `1153`, `1158`, `1160`, `1162`, `1167`, `1168`, `1169`, `1171`, `1172`, `1174`, `1176`, `1178`, `1180`, `1181`, `1182`, `1183`, `1185`, `1188`, `1191`, `1193`, `1195`, `1196`, `1197`, `1200`, `1201`, `1204`, `1205`, `1206`, `1208`, `1209`, `1211`, `1213`, `1216`, `1218`, `1220`, `1221`, `1222`, `1223`, `1225`, `1226`, `1227`, `1229`, `1230`, `1232`, `1233`, `1235`, `1236`, `1237`, `1238`, `1240`, `1241`, `1242`, `1243`, `1245`, `1247`, `1248`, `1250`, `1251`, `1252`, `1253`, `1255`, `1256`, `1257`, `1258`, `1259`, `1260`, `1261`, `1262`, `1263`, `1264`, `1267`, `1269`, `1270`, `1272`, `1274`, `523`, `1276`, `1279`, `1280`, `1281`, `1282`, `1284`, `1285`, `1287`, `1289`, `1292`, `1293`, `1294`, `1297`, `1298`, `1300`, `1301`, `1305`, `1307`, `1309`, `1310`, `1313`, `1314`, `1317`, `1318`, `1319`, `1321`, `1323`, `1324`, `1325`, `1326`, `1327`, `1329`, `1330`, `1333`, `1335`, `1337`, `1338`, `1340`, `1342`, `1344`, `1346`, `1347`, `1350`, `1351`, `1353`, `1356`, `1357`, `1358`, `1360`, `1362`, `1364`, `1367`, `1368`, `1369`, `1370`, `1371`, `1373`, `1375`, `1377`, `1378`, `1381`, `1383`, `1384`, `1386`, `1388`, `1390`, `1391`, `1392`, `1393`, `1395`, `1396`, `1398`, `1399`, `1401`, `1402`, `1403`, `1405`, `1406`, `1407`, `1408`, `1410`, `1411`, `1412`, `1413`, `1416`, `1418`, `1419`, `1422`, `1423`, `1425`, `1427`, `1428`, `1431`, `1432`, `1433`, `1434`, `1437`, 
`1438`, `1439`, `1441`, `1442`, `1443`, `1444`, `1445`, `1446`, `1448`, `1450`, `1452`, `1454`, `1455`, `1456`, `1457`, `1458`, `1460`, `1462`, `1466`, `1467`, `1469`, `1470`, `1474`, `1476`, `1477`, `1479`, `1481`, `1482`, `1483`, `1484`, `1485`, `1487`, `1489`, `1492`, `1493`, `1495`, `1496`, `1498`, `1499`, `1501`, `1502`, `1503`, `1506`, `1507`, `1508`, `1509`, `1511`, `1513`, `1514`, `1517`, `1518`, `1520`, `1523`, `1525`, `1527`, `1528`, `1530`, `1532`, `1534`, `1535`, `1536`, `1537`, `1539`, `1540`, `1542`, `1543`, `1545`, `1546`, `1547`, `1549`, `1551`, `1552`, `1553`, `1554`, `1557`, `1558`, `1560`, `1562`, `1564`, `1567`, `1569`, `1571`, `1572`, `1573`, `1574`, `1576`, `1577`, `1579`, `1581`, `1583`, `1584`, `1531`, `1585`, `1587`, `1588`, `1589`, `1591`, `1592`, `1595`, `1596`, `1598`, `1600`, `1601`, `1604`, `1605`, `1607`, `1608`, `1610`, `1612`, `1613`, `1616`, `1618`, `1619`, `1621`, `1623`, `1625`, `1626`, `1629`, `1630`, `1631`, `1633`, `1637`, `1639`, `1640`, `1642`, `1643`, `1645`, `1647`, `1648`, `1651`, `1652`, `1654`, `1655`, `1656`, `1657`, `1659`, `1661`, `1664`, `1665`, `1668`, `1670`, `1672`, `1673`, `1674`, `1675`, `1678`, `1679`, `1681`, `1682`, `1685`, `1688`, `1690`, `1692`, `1694`, `1695`, `1697`, `1699`, `1701`, `1705`, `1708`, `1709`, `1710`, `1711`, `1714`, `1715`, `1718`, `1721`, `1723`, `1725`, `1727`, `1729`, `1731`, `1734`, `1736`, `1739`, `1741`, `1743`, `1745`, `1746`, `1748`, `1749`, `1752`, `1754`, `1756`, `1757`, `1758`, `1759`, `1760`, `1761`, `1766`, `1768`, `1769`, `1770`, `1771`, `1773`, `1775`, `1776`, `1777`, `1779`, `1781`, `1784`, `1785`, `1786`, `1788`, `1789`, `1790`, `1792`, `1794`, `1796`, `1798`, `1800`, `1802`, `1805`, `1807`, `1809`, `1810`, `1811`, `1813`, `1815`, `1816`, `1817`, `1818`, `1821`, `1823`, `1824`, `1825`, `1826`, `1828`, `1830`, `1832`, `1833`, `1834`, `1835`, `1837`, `1840`, `1842`, `1846`, `1848`, `1852`, `1853`, `1854`, `1856`, `1857`, `1858`, `1859`, `1860`, `1862`, `1863`, `1866`, `1868`, `1869`, `1871`, `1873`, `1304`, `1874`, `1875`, `1876`, `1878`, `1879`, `1880`, `1881`, `1883`, `1885`, `1886`, `1887`, `1890`, `1892`, `1893`, `1894`, `1897`, `1898`, `1900`, `1488`, `1903`, `1904`, `1905`, `1906`, `1907`, `1908`, `1910`, `1912`, `1913`, `1914`, `1915`, `1916`, `1918`, `1919`, `1920`, `1922`, `1925`, `1927`, `1929`, `1931`, `1933`, `1934`, `1936`, `1938`, `1939`, `1940`, `1943`, `1944`, `1945`, `1946`, `1947`, `1948`, `1950`, `1951`, `1953`, `1955`, `1956`, `1957`, `1960`, `1962`, `1963`, `1964`, `1965`, `1966`, `1969`, `1971`, `1973`, `1975`, `1976`, `1979`, `1980`, `1981`, `1982`, `1985`, `1986`, `1987`, `1988`, `1989`, `1991`, `1992`, `1993`, `1994`, `1995`, `1996`, `1999`, `2002`, `2003`, `2004`, `2006`, `2007`, `2008`, `2010`, `2011`, `2013`, `2015`, `2016`, `2017`, `2018`, `2020`, `2021`, `2023`, `2024`, `2028`, `2030`, `2031`, `2032`, `2033`, `2034`, `2037`, `2038`, `2040`, `2041`, `2043`, `2044`, `2047`, `2048`, `2049`, `2050`, `2051`, `2052`, `2053`, `2056`, `2058`, `2060`, `2062`, `2063`, `2065`, `2066`, `2067`, `2068`, `2069`, `2070`, `2071`, `2072`, `2075`, `2076`, `1806`, `2079`, `2081`, `2083`, `2086`, `2089`, `2090`, `2091`, `2092`, `2093`, `2095`, `2096`, `2097`, `2098`, `2099`, `2102`, `2103`, `2106`, `2107`, `2108`, `2110`, `2111`, `2112`, `2113`, `2114`, `2115`, `2116`, `2117`, `2118`, `2119`, `2120`, `2121`, `2122`, `2123`, `2124`, `2125`, `2127`, `2129`, `2132`, `2135`, `2136`, `2137`, `2138`, `2139`, `2141`, `2142`, `2143`, `2144`, `2147`, `2149`, `2151`, `2152`, `2153`, `2154`, `2157`, 
`2158`, `2159`, `2161`, `2162`, `2163`, `2166`, `2168`, `2169`, `2171`, `2174`, `2175`, `2176`, `2179`, `2181`, `2182`, `2184`, `2185`, `2186`, `2187`, `2189`, `2192`, `2193`, `2195`, `2196`, `2197`, `2199`, `2200`, `2202`, `2203`, `2206`, `2207`, `2208`, `2209`, `2212`, `2213`, `2216`, `2219`, `2220`, `2222`, `2225`, `2227`, `2228`, `2230`, `2231`, `2233`, `2235`, `2236`, `2237`, `2239`, `2241`, `2243`, `2245`, `2246`, `2248`, `2249`, `2250`, `2251`, `2252`, `2253`, `2254`, `2255`, `2256`, `2259`, `2264`, `2265`, `2269`, `2270`, `2272`, `2273`, `2274`, `2276`, `2277`, `2278`, `2279`, `2283`, `2285`, `2286`, `2288`, `2289`, `2290`, `2292`, `2293`, `2294`, `2295`, `2297`, `2299`, `2300`, `2303`, `2305`, `2306`, `2309`, `2310`, `2312`, `2314`, `2316`, `2318`, `2319`, `2320`, `2321`, `2322`, `2323`, `2325`, `2327`, `1961`, `2328`, `2329`, `2330`, `2332`, `2333`, `2334`, `2336`, `2338`, `2340`, `2342`, `2343`, `2345`, `2347`, `2349`, `2351`, `2353`, `2354`, `2357`, `2358`, `2359`, `2360`, `2362`, `2363`, `2364`, `2365`, `2366`, `2367`, `2368`, `2369`, `2372`, `2375`, `2376`, `2377`, `2379`, `2381`, `2382`, `2383`, `2384`, `2385`, `2386`, `2387`, `2388`, `2389`, `2390`, `2392`, `2393`, `2394`, `2396`, `2398`, `2399`, `2400`, `2404`, `2405`, `2406`, `2407`, `2408`, `2409`, `2410`, `2411`, `2412`, `2413`, `2414`, `2415`, `2416`, `2417`, `2418`, `2419`, `2420`, `2422`, `2423`, `2425`, `2426`, `2427`, `2428`, `2430`, `2431`, `2432`, `2435`, `2436`, `2438`, `2439`, `2441`, `2442`, `2443`, `2444`, `2446`, `2448`, `2449`, `2452`, `2454`, `2455`, `2456`, `2457`, `2458`, `2461`, `2463`, `2464`, `2467`, `2468`, `2470`, `2472`, `2475`, `2477`, `2478`, `2479`, `2481`, `2483`, `2485`, `2486`, `2488`, `2489`, `2490`, `2491`, `2492`, `2493`, `2494`, `2495`, `2496`, `2497`, `2498`, `2499`, `2500`, `2501`, `2503`, `2504`, `2505`, `2506`, `2507`, `2508`, `2510`, `2512`, `2513`, `2514`, `2515`, `2516`, `2517`, `2518`, `2519`, `2520`, `2521`, `2522`, `2523`, `2524`, `2526`, `2527`, `2528`, `2529`, `2530`, `2531`, `2534`, `2537`, `2540`, `2542`, `2543`, `2544`, `2546`, `2548`, `2549`, `2550`, `2551`, `2552`, `2553`, `2554`, `2555`, `2558`, `2560`, `2561`, `2562`, `2563`, `2565`, `2567`, `2570`, `2572`, `2573`, `2575`, `2576`, `2577`, `2578`, `2579`, `2581`, `2583`, `2584`, `2586`, `2588`, `2589`, `2592`, `2593`, `2595`, `2597`, `2598`, `2601`, `2603`, `2604`, `2605`, `2606`, `2607`, `2608`, `2610`, `2611`, `2612`, `2613`, `2614`, `2615`, `2617`, `2618`, `2619`, `2620`, `2623`, `2624`, `2626`, `2628`, `2629`, `2630`, `2631`, `2632`, `2633`, `2634`, `2636`, `2637`, `2639`, `2640`, `2642`, `2643`, `2644`, `2645`, `2646`, `2649`, `2650`, `2652`, `2653`, `2654`, `2655`, `2656`, `2657`, `2659`, `2660`, `2663`, `2664`, `2665`, `2666`, `2668`, `2669`, `2671`, `2672`, `2673`, `2675`, `2677`, `2678`, `2679`, `2680`, `2681`, `2682`, `2685`, `2686`, `2687`, `2688`, `2689`, `2690`, `2691`, `2693`, `2694`, `2695`, `2697`, `2699`, `2700`, `2701`, `2703`, `2704`, `2706`, `2707`, `2708`, `2709`, `2711`, `2712`, `2713`, `2716`, `2718`, `2720`, `2721`, `2722`, `2724`, `2725`, `2726`, `2728`, `2731`, `2732`, `2735`, `2736`, `2737`, `2740`, `2741`, `2742`, `2744`, `2746`, `2749`, `2750`, `2753`, `2756`, `2757`, `2760`, `2763`, `2764`, `2765`, `2766`, `2769`, `2771`, `2772`, `2773`, `2775`, `2778`, `2779`, `2780`, `2781`, `2437`, `2782`, `2784`, `2786`, `2787`, `2788`, `2789`, `2790`, `2792`, `2793`, `2795`, `2796`, `2797`, `2799`, `2800`, `2804`, `2805`, `2806`, `2807`, `2029`, `2808`, `2809`, `2812`, `2814`, `2816`, `2819`, `2820`, 
`2822`, `2823`, `2824`, `2825`, `2827`, `2829`, `2831`, `2832`, `2833`, `2835`, `2836`, `2838`, `2839`, `2395`, `2841`, `2843`, `2844`, `2846`, `2847`, `2848`, `2850`, `2852`, `2854`, `2855`, `2856`, `2859`, `2860`, `2862`, `2863`, `2864`, `2865`, `2867`, `2869`, `2870`, `2871`, `2873`, `2874`, `2875`, `2876`, `2877`, `2878`, `2880`, `2882`, `2883`, `2884`, `2885`, `2887`, `2889`, `2890`, `2891`, `2892`, `2894`, `2895`, `2896`, `2897`, `2898`, `2899`, `2900`, `2901`, `2902`, `2903`, `2904`, `490`, `2906`, `2907`, `2909`, `2910`, `2911`, `2913`, `2914`, `2915`, `2917`, `2919`, `2920`, `2921`, `2924`, `2925`, `2926`, `2928`, `2929`, `2930`, `2931`, `2933`, `2934`, `2937`, `2938`, `2939`, `2940`, `2941`, `2942`, `2943`, `2946`, `2947`, `2948`, `2949`, `2950`, `2951`, `2952`, `2954`, `2955`, `2956`, `2957`, `2958`, `2960`, `2962`, `2964`, `2965`, `2966`, `2967`, `2969`, `2970`, `1491`, `2971`, `2972`, `1599`, `2973`, `2974`, `2975`, `2977`, `2979`, `2980`, `2981`, `2982`, `2983`, `2985`, `2986`, `2987`, `2988`, `2989`, `2990`, `2991`, `2992`, `2994`, `2995`, `2998`, `3001`, `3002`, `3003`, `3004`, `3008`, `3010`, `3011`, `3012`, `3013`, `3014`, `3016`, `3018`, `3020`, `3021`, `3023`, `3024`, `3025`, `3026`, `3027`, `3029`, `3032`, `3033`, `3036`, `3037`, `3038`, `3040`, `3041`, `3042`, `3043`, `3044`, `3046`, `3047`, `3048`, `3050`, `3051`, `3054`, `3055`, `3056`, `3057`, `3058`, `3060`, `3061`, `3062`, `3063`, `3064`, `3065`, `3068`, `3071`, `3072`, `3073`, `3075`, `3077`, `3078`, `3080`, `3081`, `3082`, `3084`, `3085`, `3087`, `3088`, `3089`, `3090`, `3091`, `3092`, `3094`, `3096`, `3097`, `3098`, `3099`, `3102`, `3103`, `3105`, `3107`, `3108`, `3109`, `3111`, `3113`, `3114`, `3116`, `3117`, `3118`, `3120`, `3122`, `3124`, `3125`, `3126`, `3127`, `3128`, `3130`, `3131`, `3132`, `3133`, `3136`, `3137`, `3139`, `3140`, `3143`, `3144`, `3146`, `3147`, `3148`, `3151`, `3152`, `3154`, `3156`, `3158`, `3159`, `3161`, `3164`, `3165`, `3166`, `3168`, `3170`, `3171`, `3172`, `3173`, `3174`, `3176`, `3177`, `3178`, `3179`, `3180`, `3181`, `3183`, `3184`, `3187`, `3188`, `3189`, `3190`, `3192`, `3193`, `3197`, `3198`, `3200`, `3201`, `3202`, `3205`, `3206`, `3207`, `3209`, `3210`, `3211`, `3212`, `3214`, `3215`, `3216`, `3217`, `3218`, `3220`, `3221`, `3222`, `3223`, `3224`, `3226`, `3227`, `3228`, `3230`, `3232`, `3235`, `3237`, `3238`, `3239`, `3241`, `3243`, `3246`, `3248`, `3249`, `3250`, `3251`, `3252`, `3253`, `3254`, `3255`, `3256`, `3257`, `3258`, `3259`, `3260`, `3263`, `3264`, `3265`, `3269`, `3270`, `3272`, `3273`, `3275`, `3276`, `3278`, `3279`, `3280`, `3282`, `3283`, `3284`, `3285`, `3287`, `3288`, `3291`, `3292`, `3294`, `3295`, `3296`, `3297`, `3299`, `3301`, `3306`, `3308`, `3310`, `3311`, `3312`, `3313`, `3315`, `3317`, `3318`, `3319`, `3320`, `3321`, `3322`, `3325`, `3326`, `3327`, `3330`, `3331`, `3332`, `3333`, `3334`, `3335`, `3337`, `3338`, `3340`, `3342`, `3343`, `3344`, `3345`, `3346`, `3347`, `3348`, `3349`, `3350`, `3352`, `3353`, `3355`, `3356`, `3357`, `3359`, `3360`, `3361`, `3363`, `3364`, `3366`, `3368`, `3369`, `3370`, `3371`, `3372`, `3373`, `3375`, `3376`, `3377`, `3378`, `3380`, `3381`, `3382`, `3383`, `3384`, `3385`, `3386`, `3387`, `3388`, `3390`, `3391`, `3392`, `3393`, `3394`, `3395`, `3397`, `3399`, `3400`, `3401`, `3402`, `3406`, `3407`, `3408`, `3409`, `3411`, `3413`, `3414`, `3415`, `3416`, `3417`, `3418`, `3419`, `3421`, `3422`, `3424`, `3425`, `3428`, `3429`, `3431`, `3432`, `3433`, `3434`, `3436`, `3439`, `3441`, `3442`, `3444`, `3445`, `3446`, `3447`, 
`3448`, `3450`, `3451`, `3452`, `3453`, `3454`, `3455`, `3456`, `3457`, `3459`, `3460`, `3461`, `3462`, `3463`, `3464`, `3465`, `3468`, `3470`, `3473`, `3474`, `3475`, `3476`, `3477`, `3478`, `3480`, `3481`, `3483`, `3484`, `3485`, `3486`, `3487`, `3489`, `3490`, `3493`, `3495`, `3496`, `3498`, `3499`, `3500`, `3502`, `3503`, `3504`, `3505`, `3507`, `3508`, `3509`, `3510`, `3511`, `3512`, `3513`, `3515`, `3516`, `3517`, `3520`, `3521`, `3522`, `3524`, `3525`, `3526`, `3527`, `3529`, `3531`, `3534`, `3535`, `3536`, `3537`, `3538`, `3539`, `3542`, `3543`, `3544`, `3545`, `3546`, `3547`, `3548`, `3549`, `3550`, `3551`, `3553`, `3556`, `3557`, `3558`, `3559`, `3560`, `3561`, `3562`, `3563`, `3564`, `3565`, `3566`, `3568`, `3569`, `3570`, `3573`, `3574`, `3576`, `3577`, `3580`, `3581`, `3582`, `3585`, `3586`, `3587`, `3588`, `3589`, `3591`, `3592`, `3594`, `3596`, `3597`, `3598`, `3599`, `3600`, `3602`, `3603`, `3605`, `3606`, `3608`, `3609`, `3610`, `3611`, `3613`, `3614`, `3615`, `3616`, `3617`, `3618`, `3619`, `3620`, `3622`, `3623`, `3624`, `3625`, `3626`, `3627`, `3628`, `3630`, `3631`, `3632`, `3633`, `3634`, `3635`, `3637`, `3639`, `3640`, `3641`, `3643`, `3644`, `3645`, `3647`, `3648`, `3649`, `3650`, `3652`, `3654`, `3655`, `3657`, `3658`, `3659`, `3660`, `3661`, `3663`, `3666`, `3669`, `3670`, `3671`, `3672`, `3673`, `3675`, `3676`, `3677`, `3678`, `3679`, `3681`, `3682`, `3685`, `3686`, `3687`, `3689`, `3690`, `3692`, `3693`, `3695`, `3697`, `3698`, `3699`, `3700`, `3701`, `3703`, `3705`, `3708`, `3709`, `3710`, `3713`, `3714`, `3715`, `3716`, `3717`, `3718`, `3719`, `3720`, `3723`, `3725`, `3726`, `3728`, `3729`, `3730`, `3731`, `3732`, `3733`, `3734`, `3735`, `3736`, `3737`, `3738`, `3740`, `3742`, `3744`, `3746`, `3747`, `3748`, `3749`, `3751`, `3752`, `3753`, `3754`, `3755`, `3756`, `3757`, `3758`, `3760`, `3761`, `3762`, `3764`, `3765`, `3766`, `3767`, `3768`, `3769`, `3771`, `3773`, `3775`, `3777`, `3779`, `3780`, `3781`, `3782`, `3783`, `3784`, `3786`, `3787`, `3788`, `3789`, `3791`, `3792`, `3793`, `3795`, `3796`, `3797`, `3798`, `3799`, `3800`, `3801`, `3803`, `3805`, `3806`, `3808`, `3809`, `3812`, `3814`, `3817`, `3819`, `3822`, `3825`, `3827`, `3828`, `3829`, `3830`, `3831`, `3832`, `3833`, `3835`, `3836`, `3837`, `3838`, `3839`, `3840`, `3841`, `3842`, `3843`, `3844`, `3845`, `3847`, `3848`, `3849`, `3850`, `3851`, `3852`, `3853`, `3854`, `3855`, `3857`, `3858`, `3861`, `3863`, `3864`, `3865`, `3866`, `3867`, `3868`, `3869`, `3870`, `3872`, `3873`, `3874`, `3875`, `3876`, `3877`, `3879`, `3880`, `3881`, `3883`, `3884`, `3885`, `3886`, `3887`, `3888`, `3889`, `3891`, `3892`, `3893`, `3894`, `3895`, `3896`, `3898`, `3899`, `3900`, `3901`, `3902`, `3905`, `3906`, `3907`, `3908`, `3909`, `3910`, `3911`, `3912`, `3913`, `3914`, `3915`, `3916`, `3917`, `3918`, `3920`, `3923`, `3924`, `3925`, `3927`, `3928`, `3929`, `3930`, `3931`, `3932`, `3934`, `3935`, `3936`, `3938`, `3939`, `3940`, `3941`, `3942`, `3945`, `3946`, `3949`, `3950`, `3951`, `3952`, `3953`, `3955`, `3956`, `3958`, `3959`, `3960`, `3962`, `3963`, `3964`, `3965`, `3967`, `3969`, `3970`, `3973`, `3974`, `3976`, `3979`, `3980`, `3983`, `3984`, `3985`, `3986`, `3987`, `3989`, `3990`, `3991`, `3993`, `3994`, `3997`, `3998`, `3999`, `4001`, `4004`, `4005`, `4007`, `4010`, `4011`, `4013`, `4016`, `4019`, `4020`, `4023`, `4024`, `4025`, `4026`, `4027`, `4028`, `4029`, `4030`, `4031`, `4032`, `4034`, `4035`, `4037`, `4038`, `4040`, `4041`, `4043`, `4045`, `4046`, `4047`, `4049`, `4050`, `4052`, `4054`, `4056`, `4057`, 
`4058`, `4059`, `4060`, `4061`, `4062`, `4063`, `4065`, `4067`, `4068`, `4069`, `4070`, `4071`, `4072`, `4073`, `4074`, `4076`, `4079`, `4080`, `4081`, `4082`, `4083`, `4084`, `4086`, `4088`, `4089`, `4090`, `4091`, `4092`, `4093`, `4094`, `4095`, `4097`, `4098`, `4099`, `4102`, `4103`, `4104`, `4106`, `4107`, `4108`, `4110`, `4112`, `4113`, `4114`, `4115`, `4116`, `4117`, `4118`, `4120`, `4122`, `4125`, `4127`, `4129`, `4132`, `4134`, `4135`, `4136`, `4137`, `4138`, `4139`, `4140`, `4141`, `4142`, `4143`, `4144`, `4146`, `4147`, `4148`, `4149`, `4150`, `4151`, `4152`, `4153`, `4154`, `4155`, `4156`, `4158`, `4159`, `4160`, `4161`, `4162`, `4164`, `4165`, `4166`, `4168`, `4171`, `4172`, `4173`, `4174`, `4175`, `4176`, `4177`, `4179`, `4180`, `4181`, `4182`, `4184`, `4186`, `4187`, `4188`, `4189`, `4191`, `4195`, `4196`, `4197`, `4198`, `4199`, `4202`, `4203`, `4204`, `4205`, `4206`, `4209`, `4210`, `4212`, `4213`, `4216`, `4217`, `4218`, `4219`, `4221`, `4224`, `4225`, `4226`, `4227`, `4230`, `4232`, `4233`, `4234`, `4238`, `4239`, `4241`, `4242`, `4245`, `4247`, `4249`, `4252`, `4254`, `4257`, `4258`, `4261`, `4262`, `4263`, `4265`, `4266`, `4268`, `4269`, `4270`, `4271`, `4273`, `4275`, `4276`, `4278`, `4279`, `4280`, `4281`, `4282`, `4283`, `4284`, `4286`, `4287`, `4288`, `4289`, `4290`, `4291`, `4292`, `4294`, `4295`, `4296`, `4297`, `4298`, `4299`, `4300`, `4301`, `4302`, `4303`, `4306`, `4309`, `4310`, `4311`, `4312`, `4313`, `4314`, `4317`, `4318`, `4319`, `4320`, `4321`, `4322`, `4324`, `4325`, `4327`, `4329`, `4330`, `4331`, `4332`, `4333`, `4334`, `4336`, `4337`, `4339`, `4340`, `4341`, `4343`, `4344`, `4345`, `4346`, `4347`, `4348`, `4350`, `4351`, `4352`, `4353`, `4354`, `4355`, `4356`, `4358`, `4359`, `4360`, `4361`, `4363`, `4364`, `4365`, `4366`, `4368`, `4370`, `4373`, `4376`, `4377`, `4378`, `4379`, `4380`, `4381`, `4384`, `4385`, `4386`, `4387`, `4388`, `4389`, `4390`, `4391`, `4393`, `4394`, `4395`, `4396`, `4397`, `4398`, `4400`, `4401`, `3467`, `4402`, `4403`, `4405`, `4406`, `4407`, `4408`, `4409`, `4410`, `4411`, `4412`, `4413`, `4414`, `4416`, `4417`, `4418`, `4420`, `4421`, `4422`, `4424`, `4425`, `4426`, `4427`, `4430`, `4431`, `4432`, `4435`, `4436`, `4437`, `4438`, `4439`, `4440`, `4441`, `4443`, `4444`, `4446`, `4447`, `4449`, `4451`, `4452`, `4454`, `4455`, `4456`, `4457`, `4458`, `4459`, `4462`, `4463`, `4465`, `4466`, `4468`, `4469`, `4471`, `4472`, `4473`, `4474`, `4476`, `4477`, `4478`, `4479`, `4480`, `4482`, `4483`, `4484`, `4486`, `4488`, `4489`, `4490`, `4491`, `4494`, `4496`, `4497`, `4498`, `4499`, `4500`, `4501`, `4503`, `4505`, `4506`, `249`, `4507`, `4508`, `4509`, `4510`, `4513`, `4514`, `4515`, `4517`, `4518`, `4519`, `4520`, `4521`, `931`, `4523`, `4527`, `4528`, `4529`, `4530`, `4531`, `4532`, `4533`, `4535`, `4536`, `4537`, `4538`, `4540`, `4542`, `4545`, `4546`, `4547`, `4548`, `4549`, `4551`, `4552`, `4555`, `4556`, `4557`, `4559`, `4560`, `4562`, `4563`, `4564`, `4565`, `4566`, `4568`, `4570`, `4571`, `4572`, `4573`, `4574`, `4576`, `4577`, `4578`, `4579`, `4581`, `4583`, `4586`, `4587`, `4590`, `4591`, `4592`, `4594`, `4595`, `4598`, `4599`, `4600`, `4601`, `4602`, `4603`, `4605`, `4606`, `4607`, `4608`, `4609`, `4612`, `4613`, `4615`, `4616`, `4617`, `4618`, `4619`, `4620`, `4621`, `4623`, `4625`, `4626`, `4627`, `4629`, `4630`, `4631`, `4632`, `4633`, `4634`, `4635`, `4636`, `4637`, `4638`, `4639`, `4641`, `4643`, `4644`, `4645`, `4646`, `4647`, `4648`, `4649`, `4650`, `4651`, `4653`, `4655`, `4656`, `4658`, `4659`, `4660`, `4662`, 
`4663`, `4664`, `4665`, `4667`, `4668`, `4669`, `4670`, `4672`, `4673`, `4674`, `4676`, `4678`, `4679`, `4680`, `4682`, `4684`, `4685`, `4686`, `4688`, `4690`, `4691`, `4692`, `4693`, `4695`, `4696`, `4697`, `4698`, `4700`, `4701`, `4704`, `4705`, `4706`, `4708`, `4711`, `4712`, `4713`, `4714`, `4715`, `4716`, `4719`, `4722`, `4723`, `4724`, `4726`, `4727`, `4728`, `4730`, `4731`, `4733`, `4734`, `4735`, `4736`, `4738`, `4739`, `4740`, `4741`, `4742`, `4743`, `4744`, `4745`, `4746`, `4747`, `4748`, `4749`, `4750`, `70`, `84`, `4751`, `4752`, `4753`, `4754`, `4756`, `4758`, `4760`, `4761`, `4762`, `4764`, `4766`, `4769`, `4771`, `4772`, `4774`, `4775`, `4776`, `4778`, `4779`, `4781`, `4782` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.80 | | `TOKEN_P` | 99.79 | | `TOKEN_R` | 99.81 | | `TOKEN_ACC` | 99.97 | | `SENTS_F` | 97.77 | | `SENTS_P` | 98.24 | | `SENTS_R` | 97.30 | | `TAG_ACC` | 91.59 | | `POS_ACC` | 97.94 | | `MORPH_ACC` | 95.69 | | `DEP_UAS` | 91.30 | | `DEP_LAS` | 87.75 | | `LEMMA_ACC` | 95.39 |
{"language": ["lv"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/lv_udv25_latvianlvtb_trf
null
[ "spacy", "token-classification", "lv", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "lv" ]
TAGS #spacy #token-classification #lv #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Latvian-LVTB ### Label Scheme View label scheme (6012 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (6012 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #lv #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (6012 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Norwegian-Bokmaal | Feature | Description | | --- | --- | | **Name** | `nb_udv25_norwegianbokmaal_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (1240 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` | | **`morphologizer`** | `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `POS=ADP`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=PROPN`, `POS=X`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=ADJ\|VerbForm=Part`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=ADV`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `POS=VERB\|VerbForm=Part`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `NumType=Card\|Number=Plur\|POS=NUM`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PART`, `POS=VERB\|VerbForm=Inf`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|POS=PROPN`, `POS=NOUN`, `Gender=Masc\|POS=PROPN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=PROPN`, `POS=PART\|Polarity=Neg`, `Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Case=Gen\|Gender=Fem\|POS=PROPN`, `Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Degree=Sup\|POS=ADJ`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Neut\|POS=PROPN`, `Number=Plur\|POS=DET\|PronType=Int`, 
`Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Definite=Def\|POS=DET\|PronType=Dem`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Abbr=Yes\|Case=Gen\|POS=PROPN`, `Animacy=Hum\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|POS=ADJ`, `POS=ADJ\|VerbForm=Part`, `Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|POS=ADP`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=AUX\|VerbForm=Part`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Prs`, `Number=Plur\|POS=DET\|PronType=Ind`, `Degree=Pos\|POS=ADJ`, `Animacy=Hum\|Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Hum\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Hum\|Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=DET\|Polarity=Neg\|PronType=Neg`, `NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=DET\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=DET\|Polarity=Neg\|PronType=Neg`, `Definite=Def\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|VerbForm=Inf`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=DET\|PronType=Prs`, `POS=SYM`, `Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Hum\|Case=Nom\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=ADV`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Def\|POS=DET\|PronType=Prs`, `Animacy=Hum\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Neut\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Definite=Def\|NumType=Card\|POS=NUM`, `Mood=Imp\|POS=VERB\|VerbForm=Fin`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Hum\|Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|Polarity=Neg\|PronType=Neg,Prs`, `Number=Plur\|POS=PRON\|Person=3\|Polarity=Neg\|PronType=Neg,Prs`, `Definite=Def\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Hum\|Number=Sing\|POS=PRON\|PronType=Art,Prs`, `Mood=Imp\|POS=AUX\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs,Tot`, `Number=Plur\|POS=ADJ`, `Gender=Masc\|POS=NOUN`, `Abbr=Yes\|POS=NOUN`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Prs`, `POS=INTJ`, 
`Animacy=Hum\|Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Animacy=Hum\|Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=ADJ`, `Animacy=Hum\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Hum\|Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Sing\|POS=PRON\|Polarity=Neg\|PronType=Neg`, `Case=Gen\|POS=NOUN`, `Definite=Ind\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|POS=PROPN`, `Animacy=Hum\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Prs`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Hum\|Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Animacy=Hum\|Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Degree=Sup\|POS=ADJ`, `Animacy=Hum\|POS=PRON\|PronType=Int`, `POS=DET\|PronType=Ind`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs,Tot`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|Polarity=Neg\|PronType=Neg`, `Number=Plur\|POS=NOUN`, `POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Case=Gen\|Definite=Def\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem,Ind`, `Animacy=Hum\|POS=PRON\|Poss=Yes\|PronType=Int`, `Abbr=Yes\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Abbr=Yes\|Definite=Def,Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Rcp`, `Definite=Ind\|Degree=Pos\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Abbr=Yes\|Definite=Def,Ind\|Gender=Neut\|Number=Plur,Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Tot`, `Abbr=Yes\|Definite=Def,Ind\|Gender=Masc\|Number=Plur,Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Prs`, `Animacy=Hum\|Case=Gen,Nom\|Number=Sing\|POS=PRON\|PronType=Art,Prs`, `Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Hum\|Case=Gen\|Number=Sing\|POS=PRON\|PronType=Art,Prs`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Ind\|Gender=Masc\|POS=NOUN`, `Definite=Def\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADJ\|VerbForm=Part`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Abbr=Yes\|Gender=Masc\|POS=NOUN`, `Abbr=Yes\|Case=Gen\|POS=NOUN`, `Abbr=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|Degree=Pos\|POS=ADJ`, 
`Case=Gen\|Gender=Fem\|POS=NOUN`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=NOUN` | | **`parser`** | `ROOT`, `acl`, `acl:cleft`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `expl`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `reparandum`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `6`, `8`, `10`, `12`, `14`, `16`, `18`, `20`, `22`, `24`, `26`, `28`, `32`, `34`, `36`, `38`, `40`, `42`, `44`, `47`, `49`, `51`, `52`, `54`, `56`, `58`, `59`, `60`, `62`, `64`, `65`, `67`, `69`, `70`, `71`, `73`, `75`, `78`, `81`, `83`, `87`, `89`, `93`, `96`, `98`, `99`, `100`, `102`, `104`, `106`, `110`, `112`, `115`, `116`, `118`, `120`, `122`, `124`, `128`, `131`, `133`, `135`, `137`, `140`, `142`, `143`, `144`, `145`, `147`, `149`, `151`, `153`, `154`, `156`, `158`, `159`, `162`, `165`, `166`, `168`, `169`, `171`, `173`, `175`, `177`, `179`, `180`, `182`, `184`, `185`, `186`, `187`, `189`, `190`, `192`, `193`, `194`, `195`, `198`, `199`, `201`, `203`, `204`, `207`, `209`, `211`, `214`, `217`, `218`, `219`, `220`, `223`, `225`, `227`, `228`, `229`, `231`, `232`, `233`, `235`, `236`, `239`, `240`, `243`, `246`, `248`, `249`, `250`, `251`, `254`, `257`, `259`, `261`, `263`, `266`, `267`, `270`, `272`, `274`, `275`, `276`, `279`, `282`, `283`, `284`, `285`, `286`, `289`, `290`, `291`, `292`, `294`, `298`, `302`, `304`, `305`, `306`, `309`, `310`, `311`, `314`, `315`, `316`, `317`, `319`, `320`, `322`, `46`, `324`, `326`, `327`, `329`, `330`, `331`, `332`, `334`, `335`, `336`, `337`, `339`, `340`, `341`, `343`, `344`, `346`, `348`, `349`, `352`, `353`, `354`, `356`, `357`, `358`, `359`, `361`, `363`, `364`, `365`, `367`, `369`, `372`, `374`, `375`, `376`, `377`, `378`, `380`, `381`, `384`, `385`, `387`, `389`, `391`, `394`, `396`, `397`, `400`, `403`, `405`, `406`, `408`, `409`, `410`, `411`, `413`, `415`, `416`, `418`, `420`, `422`, `423`, `424`, `426`, `428`, `429`, `431`, `432`, `433`, `434`, `435`, `437`, `438`, `440`, `442`, `445`, `446`, `448`, `449`, `450`, `451`, `452`, `453`, `456`, `458`, `459`, `460`, `461`, `462`, `465`, `466`, `468`, `469`, `471`, `474`, `475`, `476`, `477`, `479`, `480`, `482`, `485`, `486`, `488`, `489`, `491`, `492`, `493`, `494`, `495`, `497`, `498`, `499`, `500`, `502`, `503`, `504`, `505`, `506`, `507`, `509`, `510`, `511`, `513`, `517`, `518`, `519`, `521`, `522`, `525`, `526`, `528`, `529`, `533`, `537`, `539`, `541`, `543`, `545`, `546`, `547`, `549`, `550`, `552`, `553`, `554`, `555`, `557`, `558`, `559`, `560`, `561`, `562`, `563`, `564`, `566`, `568`, `570`, `574`, `575`, `576`, `577`, `579`, `581`, `582`, `583`, `585`, `586`, `587`, `589`, `590`, `591`, `593`, `595`, `597`, `599`, `602`, `603`, `604`, `605`, `607`, `608`, `610`, `611`, `612`, `614`, `616`, `617`, `619`, `620`, `621`, `624`, `626`, `628`, `630`, `632`, `635`, `636`, `639`, `640`, `642`, `645`, `647`, `650`, `651`, `652`, `655`, `657`, `658`, `659`, `661`, `662`, `663`, `664`, `665`, `666`, `667`, `668`, `669`, `670`, `672`, `673`, `676`, `677`, `678`, `681`, `682`, `683`, `684`, `686`, `687`, `688`, `690`, `692`, `693`, `694`, `695`, `697`, `698`, `699`, `700`, `701`, `702`, `704`, `705`, `707`, `709`, `710`, `711`, `712`, `713`, `714`, `715`, `716`, `717`, `719`, `721`, `723`, `726`, 
`728`, `729`, `730`, `731`, `732`, `733`, `734`, `736`, `737`, `738`, `739`, `741`, `742`, `743`, `745`, `746`, `747`, `748`, `749`, `750`, `751`, `753`, `754`, `757`, `759`, `760`, `761`, `762`, `763`, `765`, `767`, `768`, `770`, `771`, `772`, `773`, `775`, `777`, `778`, `779`, `780`, `781`, `783`, `784`, `785`, `786`, `787`, `788`, `789`, `790`, `791`, `792`, `795`, `798`, `799`, `801`, `802`, `803`, `805`, `806`, `809`, `811`, `812`, `814`, `815`, `816`, `817`, `818`, `820`, `822`, `823`, `824`, `825`, `826`, `827`, `828`, `829`, `830`, `832`, `833`, `836`, `838`, `839`, `840`, `841`, `843`, `844`, `845`, `846`, `847`, `848`, `850`, `851`, `852`, `853`, `854`, `855`, `857`, `858`, `859`, `860`, `862`, `864`, `865`, `868`, `869`, `870`, `871`, `872`, `873`, `874`, `876`, `877`, `878`, `881`, `883`, `884`, `885`, `886`, `887`, `888`, `889`, `890`, `892`, `893`, `894`, `896`, `897`, `898`, `899`, `901`, `902`, `905`, `908`, `911`, `912`, `913`, `915`, `916`, `917`, `918`, `919`, `920`, `921`, `925`, `927`, `928`, `929`, `930`, `932`, `936`, `937`, `938`, `940`, `941`, `943`, `944`, `947`, `948`, `950`, `952`, `953`, `955`, `957`, `959`, `962`, `964`, `966`, `967`, `968`, `969`, `971`, `972`, `973`, `974`, `977`, `978`, `117`, `41`, `979`, `980`, `981`, `982`, `983`, `985`, `988`, `989`, `990`, `992`, `994`, `995`, `996`, `998`, `999`, `1000`, `1001`, `1002`, `1003`, `1004`, `1007`, `1009`, `1010`, `1011`, `1012`, `1013`, `1014`, `1015`, `1016`, `1017`, `1018`, `1019`, `1020`, `1021`, `1022`, `1023`, `1024`, `1025`, `1026`, `1029`, `1031`, `1035`, `1037`, `1039`, `1040`, `1041`, `1043`, `1044`, `1045`, `1048`, `1049`, `1050`, `1051`, `1053`, `1056`, `1058`, `1059`, `1060`, `1061`, `1064`, `1066`, `1068`, `1070`, `1071`, `1072`, `1075`, `1078`, `1079`, `1080`, `1081`, `1084`, `1085`, `1088`, `1090`, `1093`, `1095`, `1099`, `1102`, `1103`, `1105`, `1106`, `1107`, `1109`, `1110`, `1111`, `1113`, `1115`, `1116`, `1121`, `1123`, `1124`, `1126`, `1128`, `1129`, `1130`, `1131`, `1133`, `1134`, `1136`, `1137`, `1138`, `1139`, `1141`, `1143`, `1144`, `1145`, `1147`, `1148`, `1149`, `1150`, `1151`, `1153`, `1154`, `1156`, `1157`, `1158`, `1162`, `1163`, `1165`, `1166`, `1167`, `1168`, `1169`, `1170`, `1171`, `1172`, `1173`, `1174`, `1175`, `1176`, `1030`, `1179`, `1180`, `1182`, `1184`, `1185`, `1186`, `1187`, `1188`, `1189`, `1190`, `1191`, `1192`, `1193`, `1195`, `1198`, `1199`, `1201`, `1202`, `1204`, `1205`, `1206`, `1207`, `1211`, `1213`, `1214`, `1216`, `1219`, `1220`, `1221`, `1222`, `1223`, `1224`, `1225`, `1226`, `1228`, `1230`, `1232`, `1234`, `1235`, `1238`, `1239`, `1240`, `1241`, `1242`, `1244`, `1247`, `1248`, `1249`, `1250`, `1251`, `1253`, `1254`, `1255`, `1256`, `1257`, `1258`, `1259`, `1262`, `1263`, `1265`, `1267`, `515`, `1268`, `1269`, `1271`, `1273`, `1274`, `1275`, `1276`, `1277`, `1279`, `1280`, `1282`, `1283`, `1284`, `1285`, `1287`, `1289`, `1291`, `1292`, `1294`, `1297`, `1298`, `1299`, `1302`, `1303`, `1305`, `1307`, `1308`, `1309`, `1311`, `1312`, `1313`, `1314`, `1315`, `1316`, `1317`, `1318`, `1320`, `1321`, `1322`, `1324`, `1325`, `1326`, `1329`, `1331`, `1334`, `1336`, `1337`, `1340`, `1341`, `1342`, `1343`, `1346`, `1348`, `1349`, `1350`, `1352`, `1353`, `1355`, `1357`, `1358`, `1359`, `1361`, `965`, `1362`, `1363`, `1364`, `1366`, `1369`, `1370`, `1371`, `1372`, `1373`, `1375`, `1376`, `1377`, `1379`, `1381`, `1382`, `1383`, `1385`, `1387`, `1388`, `1390`, `1392`, `1393`, `1394`, `1395`, `1396`, `1397`, `1398`, `1399`, `1400`, `1402`, `1403`, `1405`, `1406`, `1407`, 
`1409`, `1411`, `1412`, `1413`, `1414`, `1418`, `1419`, `1420`, `1421`, `1423`, `1424`, `1425`, `1427`, `1428`, `1429`, `1430`, `1432`, `1433`, `1435`, `1437`, `1438`, `1441`, `1442`, `1444`, `1446`, `1447`, `1449`, `1453`, `1455`, `1457`, `1458`, `1460`, `1462`, `1463`, `1464`, `1466`, `1469`, `1470`, `1471`, `1473`, `1475`, `1476`, `1477`, `1478`, `1479`, `1482`, `1483`, `1484`, `1486`, `1487`, `1489`, `1491`, `1493`, `1494`, `1495`, `1496`, `1497`, `1498`, `1499`, `1500`, `1501`, `1502`, `1503`, `1504`, `1506`, `1507`, `1508`, `1510`, `1511`, `1512`, `1513`, `1516`, `1517`, `1518`, `1519`, `1520`, `1521`, `1522`, `1523`, `849` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 100.00 | | `TOKEN_P` | 100.00 | | `TOKEN_R` | 100.00 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 98.80 | | `SENTS_P` | 98.84 | | `SENTS_R` | 98.75 | | `TAG_ACC` | 99.16 | | `POS_ACC` | 99.13 | | `MORPH_ACC` | 98.42 | | `DEP_UAS` | 95.63 | | `DEP_LAS` | 93.91 | | `LEMMA_ACC` | 98.82 |
{"language": ["nb"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/nb_udv25_norwegianbokmaal_trf
null
[ "spacy", "token-classification", "nb", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "nb" ]
TAGS #spacy #token-classification #nb #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Norwegian-Bokmaal ### Label Scheme View label scheme (1240 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (1240 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #nb #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (1240 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Norwegian-Nynorsk | Feature | Description | | --- | --- | | **Name** | `nb_udv25_norwegiannynorsk_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (1400 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` | | **`morphologizer`** | `Number=Plur\|POS=DET\|PronType=Ind`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Gender=Masc\|POS=NOUN`, `POS=CCONJ`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Animacy=Hum\|Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=NOUN`, `POS=PROPN`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Hum\|Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `POS=ADV`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `POS=PART`, `POS=VERB\|VerbForm=Inf`, `POS=PRON\|PronType=Rel`, `POS=VERB\|VerbForm=Part`, `Animacy=Hum\|Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|POS=NOUN`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|POS=ADV`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|POS=ADJ`, `POS=ADJ\|VerbForm=Part`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=DET\|PronType=Int`, `POS=AUX\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|POS=ADJ`, `POS=PART\|Polarity=Neg`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=INTJ`, `Animacy=Hum\|Number=Sing\|POS=PRON\|PronType=Art,Prs`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, 
`Case=Gen\|POS=PROPN`, `Animacy=Hum\|Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Definite=Ind\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=DET\|Polarity=Neg\|PronType=Neg`, `Animacy=Hum\|Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Hum\|Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Animacy=Hum\|Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PRON\|PronType=Int`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `NumType=Card\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=DET\|PronType=Tot`, `Definite=Ind\|Degree=Sup\|POS=ADJ`, `NumType=Card\|Number=Plur\|POS=NUM`, `Definite=Def\|POS=DET\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Imp\|POS=VERB\|VerbForm=Fin`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Def\|POS=DET\|PronType=Dem`, `POS=X`, `Case=Gen\|Gender=Masc\|POS=NOUN`, `POS=AUX\|VerbForm=Part`, `Number=Plur\|POS=ADJ\|VerbForm=Part`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Hum\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `POS=DET\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Prs`, `Definite=Ind\|Degree=Pos\|POS=ADJ`, `Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `POS=SYM`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem,Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Abbr=Yes\|POS=PRON\|PronType=Prs`, `Abbr=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `POS=ADJ`, `Gender=Neut\|POS=NOUN`, `Animacy=Hum\|Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|POS=NOUN`, `Degree=Pos\|POS=ADJ`, `Definite=Def\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Hum\|Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Animacy=Hum\|Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Definite=Def\|NumType=Card\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs,Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|Polarity=Neg\|PronType=Neg`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Hum\|POS=PRON\|PronType=Int`, `Mood=Imp\|POS=AUX\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, 
`Number=Plur\|POS=DET\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Abbr=Yes\|POS=CCONJ`, `Number=Plur\|POS=PRON\|Person=3\|Polarity=Neg\|PronType=Neg,Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Prs`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=DET\|Polarity=Neg\|PronType=Neg`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Polarity=Neg\|PronType=Neg,Prs`, `Abbr=Yes\|Case=Gen\|POS=NOUN`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Definite=Def\|POS=ADV`, `Number=Sing\|POS=PRON\|Polarity=Neg\|PronType=Neg`, `Case=Gen\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Definite=Def,Ind\|Gender=Neut\|Number=Plur,Sing\|POS=NOUN`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|POS=ADP`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Animacy=Hum\|Case=Nom\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Art,Prs`, `Definite=Def\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Gender=Masc\|POS=NOUN`, `Case=Gen\|POS=NOUN`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=PRON\|PronType=Prs`, `POS=ADV\|VerbForm=Inf`, `Degree=Sup\|POS=ADJ`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|POS=ADJ`, `Definite=Ind\|Gender=Masc\|POS=NOUN`, `Animacy=Hum\|Case=Nom\|Gender=Masc\|POS=PRON\|Person=3\|PronType=Prs`, `Abbr=Yes\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Gender=Neut\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|POS=NOUN`, `Case=Gen\|Gender=Neut\|POS=NOUN`, `Definite=Def\|POS=ADJ\|VerbForm=Part`, `POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `NumType=Card\|POS=NUM\|PronType=Dem`, `Definite=Ind\|Number=Sing\|POS=VERB\|VerbForm=Part` | | **`parser`** | `ROOT`, `acl`, `acl:cleft`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `expl`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `reparandum`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `2`, `4`, `5`, `7`, `9`, `11`, `13`, `17`, `19`, `21`, `23`, `25`, `29`, `31`, `35`, `38`, `40`, `42`, `44`, `45`, `47`, `49`, `51`, `53`, `55`, `56`, `58`, `62`, `65`, `67`, `70`, `72`, `75`, `77`, `79`, `80`, `82`, `84`, `86`, `88`, `90`, `92`, `95`, `98`, `100`, `102`, `104`, `106`, `107`, `108`, `110`, `112`, `114`, `119`, `121`, `123`, `126`, `128`, `130`, `132`, `134`, `136`, `138`, `141`, `143`, 
`145`, `146`, `148`, `150`, `152`, `154`, `156`, `158`, `160`, `162`, `164`, `165`, `167`, `170`, `171`, `172`, `174`, `175`, `177`, `178`, `180`, `182`, `185`, `186`, `189`, `191`, `193`, `196`, `198`, `202`, `203`, `207`, `209`, `212`, `214`, `217`, `220`, `222`, `224`, `225`, `227`, `228`, `231`, `233`, `235`, `238`, `239`, `241`, `245`, `247`, `248`, `250`, `253`, `255`, `256`, `259`, `262`, `263`, `265`, `266`, `268`, `270`, `271`, `274`, `275`, `277`, `279`, `280`, `282`, `285`, `288`, `290`, `292`, `294`, `296`, `298`, `299`, `302`, `306`, `309`, `310`, `313`, `316`, `318`, `320`, `321`, `322`, `323`, `324`, `327`, `329`, `330`, `332`, `334`, `335`, `337`, `340`, `341`, `342`, `343`, `345`, `346`, `347`, `349`, `350`, `352`, `354`, `355`, `356`, `357`, `358`, `360`, `362`, `363`, `365`, `366`, `368`, `370`, `371`, `373`, `375`, `378`, `379`, `380`, `383`, `384`, `385`, `386`, `388`, `389`, `392`, `393`, `394`, `395`, `397`, `398`, `399`, `400`, `401`, `403`, `405`, `407`, `408`, `410`, `412`, `413`, `415`, `417`, `419`, `420`, `423`, `424`, `425`, `426`, `427`, `429`, `431`, `432`, `433`, `435`, `438`, `440`, `442`, `444`, `446`, `447`, `448`, `449`, `451`, `452`, `454`, `456`, `458`, `459`, `462`, `465`, `466`, `468`, `469`, `470`, `471`, `473`, `475`, `476`, `478`, `479`, `481`, `482`, `484`, `485`, `488`, `489`, `490`, `492`, `495`, `497`, `501`, `503`, `505`, `507`, `508`, `510`, `512`, `513`, `515`, `517`, `518`, `520`, `521`, `523`, `524`, `526`, `527`, `529`, `530`, `531`, `534`, `536`, `537`, `538`, `539`, `541`, `543`, `544`, `545`, `547`, `548`, `549`, `551`, `552`, `553`, `556`, `557`, `558`, `559`, `561`, `562`, `564`, `565`, `567`, `568`, `569`, `571`, `573`, `574`, `577`, `578`, `580`, `581`, `582`, `583`, `584`, `585`, `589`, `591`, `593`, `594`, `596`, `598`, `599`, `602`, `603`, `604`, `605`, `607`, `609`, `611`, `613`, `615`, `616`, `617`, `619`, `621`, `622`, `623`, `625`, `627`, `629`, `630`, `631`, `632`, `633`, `635`, `636`, `637`, `639`, `640`, `641`, `642`, `644`, `645`, `647`, `648`, `649`, `651`, `652`, `653`, `655`, `659`, `660`, `661`, `662`, `663`, `664`, `665`, `666`, `668`, `671`, `672`, `673`, `675`, `676`, `677`, `678`, `679`, `680`, `684`, `687`, `688`, `689`, `184`, `690`, `692`, `261`, `694`, `695`, `696`, `698`, `701`, `702`, `705`, `707`, `709`, `710`, `711`, `714`, `715`, `716`, `718`, `720`, `721`, `723`, `725`, `727`, `729`, `731`, `732`, `735`, `737`, `738`, `739`, `740`, `743`, `744`, `746`, `747`, `749`, `750`, `751`, `752`, `753`, `754`, `755`, `756`, `758`, `760`, `761`, `762`, `765`, `768`, `769`, `770`, `772`, `773`, `775`, `777`, `778`, `780`, `781`, `784`, `785`, `787`, `789`, `791`, `792`, `793`, `795`, `796`, `798`, `799`, `801`, `803`, `805`, `806`, `808`, `810`, `811`, `815`, `816`, `817`, `818`, `819`, `820`, `821`, `822`, `825`, `827`, `828`, `829`, `830`, `832`, `833`, `834`, `835`, `836`, `837`, `839`, `840`, `842`, `843`, `845`, `846`, `847`, `849`, `850`, `851`, `852`, `854`, `855`, `857`, `860`, `861`, `862`, `863`, `865`, `867`, `868`, `870`, `872`, `874`, `875`, `876`, `877`, `878`, `879`, `881`, `883`, `884`, `885`, `887`, `889`, `890`, `891`, `894`, `895`, `897`, `898`, `900`, `902`, `905`, `907`, `909`, `910`, `911`, `912`, `913`, `914`, `915`, `917`, `919`, `921`, `922`, `926`, `928`, `929`, `930`, `931`, `932`, `933`, `935`, `936`, `939`, `940`, `941`, `943`, `944`, `946`, `948`, `949`, `950`, `951`, `952`, `953`, `955`, `957`, `958`, `960`, `961`, `962`, `963`, `964`, `965`, `966`, `967`, `968`, `970`, `971`, 
`973`, `974`, `976`, `977`, `978`, `979`, `980`, `982`, `983`, `984`, `986`, `988`, `989`, `990`, `992`, `993`, `995`, `997`, `999`, `1000`, `1001`, `1003`, `1006`, `1007`, `1009`, `1010`, `1011`, `1012`, `1014`, `1015`, `1016`, `1019`, `1020`, `1021`, `1023`, `1024`, `1025`, `1027`, `1028`, `1030`, `1032`, `1033`, `1034`, `1036`, `1037`, `1040`, `1043`, `1044`, `1046`, `1048`, `1050`, `1051`, `1053`, `1055`, `1056`, `1057`, `1058`, `1059`, `1060`, `1061`, `1062`, `1064`, `1065`, `1067`, `1068`, `1069`, `1071`, `1072`, `1073`, `1077`, `1078`, `1079`, `1080`, `1082`, `1083`, `1084`, `1085`, `1087`, `1088`, `1090`, `1092`, `1096`, `1098`, `1100`, `1102`, `1104`, `1106`, `1107`, `1110`, `1112`, `1114`, `1116`, `1118`, `1120`, `1121`, `1124`, `1127`, `1128`, `1130`, `1131`, `1133`, `1134`, `1135`, `1138`, `1139`, `1141`, `1142`, `1143`, `1146`, `1147`, `1150`, `1151`, `1154`, `1156`, `1157`, `1158`, `1160`, `1161`, `1162`, `1163`, `1165`, `1166`, `1168`, `1170`, `1172`, `1173`, `1174`, `1175`, `1176`, `1177`, `1180`, `1183`, `1186`, `1187`, `1190`, `1192`, `1193`, `1194`, `1195`, `1198`, `1199`, `1200`, `1202`, `1205`, `1207`, `1208`, `1209`, `1210`, `1212`, `1213`, `1214`, `1215`, `1216`, `1217`, `1219`, `1220`, `1221`, `1222`, `1224`, `1225`, `1226`, `1227`, `1229`, `1231`, `1232`, `1235`, `1236`, `1239`, `1240`, `1241`, `1243`, `1244`, `1245`, `1248`, `1250`, `1251`, `1252`, `1253`, `1254`, `1255`, `1258`, `1259`, `1260`, `1261`, `1263`, `1265`, `1266`, `1267`, `1268`, `1269`, `1270`, `1271`, `1272`, `1273`, `1274`, `1275`, `1278`, `1279`, `1280`, `1281`, `1282`, `1283`, `1285`, `1286`, `1287`, `1289`, `1291`, `1293`, `1294`, `1295`, `1296`, `1298`, `1299`, `1300`, `1301`, `1302`, `1303`, `1304`, `1307`, `1308`, `1309`, `1311`, `1312`, `1315`, `1317`, `1319`, `1320`, `1322`, `1324`, `1325`, `1326`, `1327`, `1329`, `1330`, `1331`, `1332`, `1333`, `1334`, `1335`, `1337`, `1338`, `1340`, `1341`, `1342`, `1343`, `1345`, `1347`, `1348`, `1349`, `1350`, `1351`, `1353`, `1354`, `1356`, `1359`, `1360`, `1362`, `1363`, `1364`, `1365`, `1366`, `1367`, `1369`, `1370`, `1372`, `1374`, `1376`, `1377`, `1378`, `1379`, `1380`, `1382`, `1383`, `1385`, `1386`, `1390`, `1391`, `1257`, `1392`, `1393`, `1394`, `1395`, `1396`, `1397`, `1398`, `1399`, `1400`, `1402`, `1403`, `1404`, `1405`, `1408`, `1409`, `1411`, `576`, `1413`, `1414`, `1416`, `1417`, `1419`, `1420`, `1421`, `1422`, `1423`, `1426`, `1429`, `1430`, `1433`, `1435`, `1436`, `1437`, `1439`, `1441`, `1442`, `1443`, `1444`, `1445`, `1447`, `1448`, `1449`, `1451`, `1452`, `1453`, `1454`, `1456`, `1457`, `1460`, `1462`, `1463`, `1465`, `1467`, `1468`, `1469`, `1470`, `1471`, `1472`, `1474`, `1476`, `1477`, `1478`, `1479`, `1481`, `1484`, `1486`, `1489`, `1492`, `1494`, `1496`, `1498`, `1499`, `1501`, `1504`, `1505`, `1506`, `1508`, `1510`, `1511`, `1512`, `1513`, `1514`, `1517`, `1518`, `1520`, `1521`, `1522`, `1524`, `1525`, `1526`, `1528`, `1531`, `1533`, `1535`, `1536`, `1537`, `1538`, `1540`, `1543`, `1544`, `1546`, `1547`, `1549`, `1550`, `1552`, `1555`, `1558`, `1559`, `1560`, `1561`, `1563`, `1565`, `1566`, `1567`, `1570`, `1572`, `1574`, `1576`, `1578`, `1579`, `1580`, `1582`, `1585`, `1587`, `1589`, `1590`, `1591`, `1593`, `1595`, `1596`, `1597`, `1598`, `1599`, `1600`, `1601`, `1604`, `1608`, `1610`, `1611`, `1612`, `1613`, `1614`, `1616`, `1617`, `1618`, `1620`, `1621`, `1623`, `1624`, `1625`, `1626`, `1627`, `1628`, `1631`, `1632`, `1635`, `1636`, `1638`, `1639`, `1641`, `1642`, `1643`, `1644`, `1645`, `1647`, `1648`, `1651`, `1653`, 
`1654`, `1655`, `1657`, `1658`, `1659`, `1660`, `1661`, `1662`, `1664`, `1666`, `1669`, `1673`, `1676`, `1677`, `1679`, `1682`, `1684`, `1686`, `1689`, `1690`, `1691`, `1692`, `1694`, `1696`, `1697`, `1699`, `1701`, `1702`, `1703`, `1705`, `1706`, `1708`, `1709`, `1711`, `1712`, `1713`, `1715`, `1717`, `1718`, `1720`, `1722`, `1724`, `1725`, `1726`, `1728`, `1730`, `1732`, `1734`, `1736`, `1738`, `1741`, `1743`, `1745`, `1747`, `1748`, `1749`, `1750`, `1751`, `1753`, `1754`, `1755`, `1756`, `1757`, `1758`, `1761`, `1763`, `1765`, `1767`, `1771`, `1774`, `1777`, `1779`, `1781`, `1783`, `1785`, `1786`, `1789`, `1791`, `1792`, `1793`, `1794`, `1795`, `1797`, `1800`, `1803`, `1806`, `1808`, `1810`, `1811`, `1812`, `1813`, `1816`, `1818`, `1820`, `1821`, `1823`, `1827`, `1829`, `1831`, `1832`, `1833`, `1835`, `1837`, `1838`, `1840`, `1842`, `1843`, `1844`, `1845`, `1848`, `1849`, `1851`, `1852`, `1854`, `1855`, `1857`, `1858`, `1859`, `1861`, `1862`, `1863`, `1866`, `1868`, `1871`, `1873`, `1874`, `1875`, `1876`, `1877`, `1879`, `1882`, `1884`, `1885`, `1887`, `1889`, `1890`, `1891`, `1892`, `1894`, `1896` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.96 | | `TOKEN_P` | 99.96 | | `TOKEN_R` | 99.96 | | `TOKEN_ACC` | 99.99 | | `SENTS_F` | 99.10 | | `SENTS_P` | 99.15 | | `SENTS_R` | 99.05 | | `TAG_ACC` | 98.33 | | `POS_ACC` | 98.34 | | `MORPH_ACC` | 97.91 | | `DEP_UAS` | 94.11 | | `DEP_LAS` | 92.14 | | `LEMMA_ACC` | 98.28 |
{"language": ["nb"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/nb_udv25_norwegiannynorsk_trf
null
[ "spacy", "token-classification", "nb", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "nb" ]
TAGS #spacy #token-classification #nb #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Norwegian-Nynorsk ### Label Scheme View label scheme (1400 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (1400 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #nb #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (1400 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Dutch-Alpino | Feature | Description | | --- | --- | | **Name** | `nl_udv25_dutchalpino_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (1712 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ADJ\|nom\|basis\|met-e\|mv-n`, `ADJ\|nom\|basis\|met-e\|zonder-n\|stan`, `ADJ\|nom\|basis\|zonder\|zonder-n`, `ADJ\|nom\|comp\|met-e\|mv-n`, `ADJ\|nom\|comp\|met-e\|zonder-n\|stan`, `ADJ\|nom\|sup\|met-e\|mv-n`, `ADJ\|nom\|sup\|met-e\|zonder-n\|stan`, `ADJ\|nom\|sup\|zonder\|zonder-n`, `ADJ\|postnom\|basis\|met-s`, `ADJ\|postnom\|basis\|zonder`, `ADJ\|postnom\|comp\|met-s`, `ADJ\|prenom\|basis\|met-e\|stan`, `ADJ\|prenom\|basis\|zonder`, `ADJ\|prenom\|comp\|met-e\|stan`, `ADJ\|prenom\|comp\|zonder`, `ADJ\|prenom\|sup\|met-e\|stan`, `ADJ\|vrij\|basis\|zonder`, `ADJ\|vrij\|comp\|zonder`, `ADJ\|vrij\|dim\|zonder`, `ADJ\|vrij\|sup\|zonder`, `BW`, `LET`, `LID\|bep\|dat\|evmo`, `LID\|bep\|gen\|evmo`, `LID\|bep\|gen\|rest3`, `LID\|bep\|stan\|evon`, `LID\|bep\|stan\|rest`, `LID\|onbep\|stan\|agr`, `N\|eigen\|ev\|basis\|gen`, `N\|eigen\|ev\|basis\|genus\|stan`, `N\|eigen\|ev\|basis\|onz\|stan`, `N\|eigen\|ev\|basis\|zijd\|stan`, `N\|eigen\|ev\|dim\|onz\|stan`, `N\|eigen\|mv\|basis`, `N\|soort\|ev\|basis\|dat`, `N\|soort\|ev\|basis\|gen`, `N\|soort\|ev\|basis\|genus\|stan`, `N\|soort\|ev\|basis\|onz\|stan`, `N\|soort\|ev\|basis\|zijd\|stan`, `N\|soort\|ev\|dim\|onz\|stan`, `N\|soort\|mv\|basis`, `N\|soort\|mv\|dim`, `SPEC\|afgebr`, `SPEC\|afk`, `SPEC\|deeleigen`, `SPEC\|enof`, `SPEC\|meta`, `SPEC\|symb`, `SPEC\|vreemd`, `TSW`, `TW\|hoofd\|nom\|mv-n\|basis`, `TW\|hoofd\|nom\|mv-n\|dim`, `TW\|hoofd\|nom\|zonder-n\|basis`, `TW\|hoofd\|nom\|zonder-n\|dim`, `TW\|hoofd\|prenom\|stan`, `TW\|hoofd\|vrij`, `TW\|rang\|nom\|mv-n`, `TW\|rang\|nom\|zonder-n`, `TW\|rang\|prenom\|stan`, `VG\|neven`, `VG\|onder`, `VNW\|aanw\|adv-pron\|obl\|vol\|3o\|getal`, `VNW\|aanw\|adv-pron\|stan\|red\|3\|getal`, `VNW\|aanw\|det\|dat\|nom\|met-e\|zonder-n`, `VNW\|aanw\|det\|dat\|prenom\|met-e\|evmo`, `VNW\|aanw\|det\|gen\|prenom\|met-e\|rest3`, `VNW\|aanw\|det\|stan\|nom\|met-e\|mv-n`, `VNW\|aanw\|det\|stan\|nom\|met-e\|zonder-n`, `VNW\|aanw\|det\|stan\|prenom\|met-e\|rest`, `VNW\|aanw\|det\|stan\|prenom\|zonder\|agr`, `VNW\|aanw\|det\|stan\|prenom\|zonder\|evon`, `VNW\|aanw\|det\|stan\|prenom\|zonder\|rest`, `VNW\|aanw\|det\|stan\|vrij\|zonder`, `VNW\|aanw\|pron\|gen\|vol\|3m\|ev`, `VNW\|aanw\|pron\|stan\|vol\|3o\|ev`, `VNW\|aanw\|pron\|stan\|vol\|3\|getal`, `VNW\|betr\|det\|stan\|nom\|met-e\|zonder-n`, `VNW\|betr\|det\|stan\|nom\|zonder\|zonder-n`, `VNW\|betr\|pron\|stan\|vol\|3\|ev`, `VNW\|betr\|pron\|stan\|vol\|persoon\|getal`, `VNW\|bez\|det\|gen\|vol\|3\|ev\|prenom\|met-e\|rest3`, `VNW\|bez\|det\|stan\|nadr\|2v\|mv\|prenom\|zonder\|agr`, 
`VNW\|bez\|det\|stan\|red\|1\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|red\|2v\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|red\|3\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|1\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|1\|mv\|prenom\|met-e\|rest`, `VNW\|bez\|det\|stan\|vol\|1\|mv\|prenom\|zonder\|evon`, `VNW\|bez\|det\|stan\|vol\|2v\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|2\|getal\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|3m\|ev\|nom\|met-e\|zonder-n`, `VNW\|bez\|det\|stan\|vol\|3v\|ev\|nom\|met-e\|zonder-n`, `VNW\|bez\|det\|stan\|vol\|3\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|3\|mv\|prenom\|zonder\|agr`, `VNW\|onbep\|adv-pron\|gen\|red\|3\|getal`, `VNW\|onbep\|adv-pron\|obl\|vol\|3o\|getal`, `VNW\|onbep\|det\|stan\|nom\|met-e\|mv-n`, `VNW\|onbep\|det\|stan\|nom\|met-e\|zonder-n`, `VNW\|onbep\|det\|stan\|prenom\|met-e\|agr`, `VNW\|onbep\|det\|stan\|prenom\|met-e\|evz`, `VNW\|onbep\|det\|stan\|prenom\|met-e\|mv`, `VNW\|onbep\|det\|stan\|prenom\|met-e\|rest`, `VNW\|onbep\|det\|stan\|prenom\|zonder\|agr`, `VNW\|onbep\|det\|stan\|prenom\|zonder\|evon`, `VNW\|onbep\|det\|stan\|vrij\|zonder`, `VNW\|onbep\|grad\|stan\|nom\|met-e\|mv-n\|basis`, `VNW\|onbep\|grad\|stan\|nom\|met-e\|mv-n\|sup`, `VNW\|onbep\|grad\|stan\|nom\|met-e\|zonder-n\|basis`, `VNW\|onbep\|grad\|stan\|nom\|met-e\|zonder-n\|sup`, `VNW\|onbep\|grad\|stan\|prenom\|met-e\|agr\|basis`, `VNW\|onbep\|grad\|stan\|prenom\|met-e\|agr\|comp`, `VNW\|onbep\|grad\|stan\|prenom\|met-e\|agr\|sup`, `VNW\|onbep\|grad\|stan\|prenom\|met-e\|mv\|basis`, `VNW\|onbep\|grad\|stan\|prenom\|zonder\|agr\|basis`, `VNW\|onbep\|grad\|stan\|prenom\|zonder\|agr\|comp`, `VNW\|onbep\|grad\|stan\|vrij\|zonder\|basis`, `VNW\|onbep\|grad\|stan\|vrij\|zonder\|comp`, `VNW\|onbep\|grad\|stan\|vrij\|zonder\|sup`, `VNW\|onbep\|pron\|gen\|vol\|3p\|ev`, `VNW\|onbep\|pron\|stan\|vol\|3o\|ev`, `VNW\|onbep\|pron\|stan\|vol\|3p\|ev`, `VNW\|pers\|pron\|gen\|vol\|2\|getal`, `VNW\|pers\|pron\|nomin\|nadr\|3m\|ev\|masc`, `VNW\|pers\|pron\|nomin\|red\|1\|mv`, `VNW\|pers\|pron\|nomin\|red\|2v\|ev`, `VNW\|pers\|pron\|nomin\|red\|2\|getal`, `VNW\|pers\|pron\|nomin\|red\|3p\|ev\|masc`, `VNW\|pers\|pron\|nomin\|red\|3\|ev\|masc`, `VNW\|pers\|pron\|nomin\|vol\|1\|ev`, `VNW\|pers\|pron\|nomin\|vol\|1\|mv`, `VNW\|pers\|pron\|nomin\|vol\|2b\|getal`, `VNW\|pers\|pron\|nomin\|vol\|2v\|ev`, `VNW\|pers\|pron\|nomin\|vol\|2\|getal`, `VNW\|pers\|pron\|nomin\|vol\|3p\|mv`, `VNW\|pers\|pron\|nomin\|vol\|3v\|ev\|fem`, `VNW\|pers\|pron\|nomin\|vol\|3\|ev\|masc`, `VNW\|pers\|pron\|obl\|nadr\|3m\|ev\|masc`, `VNW\|pers\|pron\|obl\|red\|3\|ev\|masc`, `VNW\|pers\|pron\|obl\|vol\|2v\|ev`, `VNW\|pers\|pron\|obl\|vol\|3p\|mv`, `VNW\|pers\|pron\|obl\|vol\|3\|ev\|masc`, `VNW\|pers\|pron\|obl\|vol\|3\|getal\|fem`, `VNW\|pers\|pron\|stan\|nadr\|2v\|mv`, `VNW\|pers\|pron\|stan\|red\|3\|ev\|fem`, `VNW\|pers\|pron\|stan\|red\|3\|ev\|onz`, `VNW\|pers\|pron\|stan\|red\|3\|mv`, `VNW\|pr\|pron\|obl\|nadr\|1\|ev`, `VNW\|pr\|pron\|obl\|nadr\|2v\|getal`, `VNW\|pr\|pron\|obl\|nadr\|2\|getal`, `VNW\|pr\|pron\|obl\|red\|1\|ev`, `VNW\|pr\|pron\|obl\|red\|2v\|getal`, `VNW\|pr\|pron\|obl\|vol\|1\|ev`, `VNW\|pr\|pron\|obl\|vol\|1\|mv`, `VNW\|pr\|pron\|obl\|vol\|2\|getal`, `VNW\|recip\|pron\|gen\|vol\|persoon\|mv`, `VNW\|recip\|pron\|obl\|vol\|persoon\|mv`, `VNW\|refl\|pron\|obl\|nadr\|3\|getal`, `VNW\|refl\|pron\|obl\|red\|3\|getal`, `VNW\|vb\|adv-pron\|obl\|vol\|3o\|getal`, `VNW\|vb\|det\|stan\|nom\|met-e\|zonder-n`, 
`VNW\|vb\|det\|stan\|prenom\|met-e\|rest`, `VNW\|vb\|det\|stan\|prenom\|zonder\|evon`, `VNW\|vb\|pron\|gen\|vol\|3m\|ev`, `VNW\|vb\|pron\|gen\|vol\|3p\|mv`, `VNW\|vb\|pron\|gen\|vol\|3v\|ev`, `VNW\|vb\|pron\|stan\|vol\|3o\|ev`, `VNW\|vb\|pron\|stan\|vol\|3p\|getal`, `VZ\|fin`, `VZ\|init`, `VZ\|versm`, `WW\|inf\|nom\|zonder\|zonder-n`, `WW\|inf\|prenom\|met-e`, `WW\|inf\|vrij\|zonder`, `WW\|od\|nom\|met-e\|mv-n`, `WW\|od\|nom\|met-e\|zonder-n`, `WW\|od\|prenom\|met-e`, `WW\|od\|prenom\|zonder`, `WW\|od\|vrij\|zonder`, `WW\|pv\|conj\|ev`, `WW\|pv\|tgw\|ev`, `WW\|pv\|tgw\|met-t`, `WW\|pv\|tgw\|mv`, `WW\|pv\|verl\|ev`, `WW\|pv\|verl\|mv`, `WW\|vd\|nom\|met-e\|mv-n`, `WW\|vd\|nom\|met-e\|zonder-n`, `WW\|vd\|prenom\|met-e`, `WW\|vd\|prenom\|zonder`, `WW\|vd\|vrij\|zonder` | | **`morphologizer`** | `POS=PRON\|Person=3\|PronType=Dem`, `Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `POS=ADV`, `POS=VERB\|VerbForm=Part`, `POS=PUNCT`, `Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `POS=ADP`, `POS=NUM`, `Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf`, `POS=SCONJ`, `Definite=Def\|POS=DET`, `Gender=Com\|Number=Sing\|POS=NOUN`, `Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Degree=Pos\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=PROPN`, `Gender=Com\|Number=Sing\|POS=PROPN`, `POS=AUX\|VerbForm=Inf`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `POS=DET`, `Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PRON\|Person=3\|PronType=Prs`, `POS=CCONJ`, `Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|Person=3\|PronType=Ind`, `Degree=Cmp\|POS=ADJ`, `Case=Nom\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Ind\|POS=DET`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|PronType=Rel`, `Case=Acc\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Gender=Com,Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PROPN`, `POS=PRON\|PronType=Ind`, `POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|POS=PRON\|PronType=Rcp`, `Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=NOUN`, `POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `Abbr=Yes\|POS=X`, `Gender=Com,Neut\|Number=Sing\|POS=PROPN`, `Degree=Sup\|POS=ADJ`, `Foreign=Yes\|POS=X`, `POS=ADJ`, `Number=Sing\|POS=PROPN`, `POS=PRON\|PronType=Dem`, `POS=AUX\|VerbForm=Part`, `POS=PRON\|Person=3\|PronType=Rel`, `Number=Plur\|POS=PROPN`, `POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=PRON\|PronType=Dem`, `Case=Nom\|POS=PRON\|Person=2\|PronType=Prs`, `POS=X`, `POS=INTJ`, `Case=Gen\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=PRON\|PronType=Int`, `Case=Acc\|POS=PRON\|Person=2\|PronType=Prs`, `POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|POS=PRON\|Person=2\|PronType=Prs` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `4`, `5`, `10`, `12`, `14`, `16`, `20`, `24`, `25`, `28`, `30`, `32`, `34`, `38`, `40`, `42`, `45`, `47`, `48`, `51`, `52`, `54`, `55`, `57`, `59`, `62`, `64`, `66`, `68`, `70`, `72`, `76`, `78`, `81`, `83`, `84`, `86`, `89`, `91`, `92`, 
`96`, `99`, `101`, `104`, `106`, `109`, `114`, `115`, `117`, `118`, `120`, `121`, `123`, `126`, `129`, `131`, `133`, `137`, `139`, `141`, `143`, `145`, `146`, `148`, `151`, `154`, `158`, `160`, `163`, `165`, `98`, `168`, `169`, `171`, `174`, `177`, `181`, `183`, `185`, `187`, `191`, `194`, `196`, `199`, `202`, `206`, `209`, `211`, `212`, `214`, `217`, `61`, `219`, `221`, `224`, `226`, `227`, `229`, `231`, `235`, `236`, `238`, `240`, `242`, `245`, `247`, `251`, `253`, `257`, `260`, `262`, `264`, `263`, `266`, `267`, `271`, `273`, `274`, `275`, `278`, `280`, `281`, `282`, `284`, `286`, `291`, `293`, `296`, `298`, `299`, `301`, `303`, `307`, `308`, `310`, `312`, `314`, `316`, `318`, `320`, `322`, `324`, `325`, `328`, `330`, `332`, `333`, `336`, `337`, `339`, `342`, `344`, `345`, `349`, `352`, `353`, `354`, `355`, `357`, `360`, `362`, `363`, `365`, `368`, `372`, `373`, `375`, `377`, `379`, `383`, `385`, `387`, `389`, `390`, `392`, `394`, `396`, `398`, `402`, `404`, `407`, `409`, `9`, `411`, `412`, `414`, `417`, `418`, `420`, `422`, `423`, `425`, `429`, `431`, `432`, `435`, `437`, `438`, `440`, `442`, `444`, `448`, `450`, `451`, `454`, `456`, `457`, `459`, `461`, `463`, `464`, `466`, `468`, `469`, `472`, `473`, `476`, `477`, `478`, `480`, `484`, `487`, `489`, `491`, `493`, `496`, `497`, `500`, `502`, `505`, `506`, `508`, `510`, `511`, `512`, `515`, `518`, `523`, `525`, `528`, `531`, `532`, `534`, `306`, `535`, `537`, `539`, `542`, `544`, `548`, `552`, `555`, `556`, `557`, `558`, `559`, `560`, `564`, `566`, `538`, `567`, `569`, `570`, `572`, `573`, `575`, `577`, `579`, `580`, `582`, `583`, `584`, `587`, `588`, `591`, `593`, `595`, `597`, `599`, `601`, `602`, `605`, `607`, `609`, `611`, `614`, `616`, `617`, `618`, `620`, `621`, `622`, `623`, `625`, `626`, `629`, `632`, `634`, `636`, `638`, `641`, `642`, `644`, `647`, `648`, `650`, `651`, `654`, `655`, `657`, `659`, `660`, `663`, `664`, `665`, `666`, `668`, `671`, `673`, `675`, `676`, `677`, `678`, `33`, `681`, `683`, `686`, `688`, `691`, `692`, `694`, `697`, `698`, `699`, `700`, `701`, `702`, `703`, `706`, `709`, `712`, `713`, `714`, `717`, `720`, `721`, `682`, `723`, `725`, `728`, `730`, `733`, `735`, `738`, `740`, `741`, `743`, `744`, `745`, `748`, `750`, `751`, `753`, `756`, `759`, `760`, `762`, `763`, `764`, `767`, `771`, `773`, `774`, `776`, `234`, `777`, `779`, `364`, `781`, `382`, `783`, `784`, `785`, `786`, `788`, `791`, `793`, `794`, `796`, `799`, `693`, `801`, `804`, `805`, `807`, `808`, `811`, `813`, `814`, `815`, `816`, `818`, `820`, `821`, `824`, `825`, `826`, `827`, `828`, `829`, `830`, `833`, `834`, `836`, `839`, `841`, `845`, `847`, `848`, `849`, `850`, `851`, `856`, `858`, `859`, `860`, `861`, `862`, `864`, `866`, `869`, `871`, `873`, `875`, `876`, `877`, `878`, `881`, `882`, `883`, `884`, `885`, `887`, `889`, `890`, `670`, `891`, `894`, `896`, `899`, `900`, `902`, `904`, `908`, `910`, `913`, `915`, `916`, `918`, `921`, `923`, `924`, `926`, `927`, `931`, `934`, `936`, `938`, `940`, `942`, `943`, `946`, `949`, `950`, `951`, `952`, `953`, `954`, `955`, `958`, `959`, `961`, `962`, `963`, `69`, `964`, `967`, `969`, `972`, `973`, `975`, `977`, `978`, `980`, `982`, `983`, `984`, `986`, `988`, `989`, `991`, `992`, `993`, `995`, `996`, `290`, `998`, `999`, `1000`, `1001`, `1003`, `1005`, `1007`, `1008`, `1009`, `1011`, `1014`, `1015`, `1016`, `1017`, `1018`, `1019`, `1021`, `1022`, `1023`, `1024`, `1025`, `1027`, `1030`, `1031`, `1032`, `1033`, `1036`, `1038`, `1041`, `1045`, `1046`, `1048`, `1052`, `1053`, `1055`, `1056`, `1057`, 
`1059`, `1060`, `1062`, `1064`, `1068`, `1069`, `1070`, `1073`, `1075`, `1076`, `1077`, `1080`, `1083`, `1086`, `1087`, `1088`, `1091`, `1092`, `1095`, `1098`, `1099`, `1100`, `1101`, `1104`, `1108`, `1109`, `1111`, `1113`, `1114`, `1115`, `1116`, `1118`, `1120`, `1121`, `1122`, `1125`, `1126`, `1129`, `1132`, `1133`, `1136`, `1137`, `1138`, `1140`, `1141`, `1142`, `1143`, `1144`, `1146`, `1147`, `1148`, `1149`, `1150`, `71`, `1151`, `1154`, `1155`, `1156`, `1158`, `1160`, `1161`, `1162`, `1163`, `1164`, `1165`, `1166`, `1168`, `1171`, `1172`, `1174`, `1175`, `1176`, `1177`, `1178`, `1180`, `1183`, `1185`, `1189`, `1192`, `1194`, `1195`, `1196`, `1198`, `1199`, `1200`, `1201`, `1202`, `981`, `1203`, `1204`, `1208`, `1209`, `1210`, `1211`, `1212`, `1213`, `1215`, `1216`, `1218`, `1219`, `1221`, `1223`, `1224`, `1225`, `1227`, `1228`, `1230`, `1231`, `1232`, `1234`, `1235`, `1236`, `1237`, `1239`, `1241`, `1243`, `1245`, `1247`, `1248`, `1249`, `1250`, `1252`, `1253`, `1254`, `1255`, `1256`, `1257`, `1258`, `1259`, `1261`, `1263`, `1265`, `1266`, `1267`, `1270`, `1271`, `1272`, `1273`, `1275`, `1276`, `1277`, `1280`, `53`, `1281`, `1285`, `1286`, `1287`, `1288`, `1291`, `1292`, `1294`, `1296`, `1298`, `1300`, `1301`, `1303`, `1305`, `1306`, `1308`, `1309`, `1311`, `1312`, `1315`, `1318`, `1321`, `1322`, `1323`, `1326`, `1328`, `1330`, `1332`, `1334`, `1335`, `1337`, `1338`, `1340`, `1342`, `1343`, `1344`, `1346`, `1347`, `1348`, `1349`, `1350`, `1351`, `1353`, `1355`, `1356`, `1357`, `1359`, `1361`, `1362`, `1364`, `1365`, `1368`, `1369`, `1370`, `1371`, `1372`, `1376`, `1377`, `1380`, `1381`, `1382`, `1385`, `1386`, `1387`, `1388`, `1389`, `1390`, `1391`, `1392`, `1393`, `1394`, `1396`, `1397`, `1399`, `1398`, `1403`, `1405`, `1407`, `1411`, `1413`, `1415`, `1416`, `1417`, `1418`, `1421`, `1422`, `1424`, `1425`, `1426`, `1427`, `1428`, `1429`, `1431`, `1432`, `1434`, `803`, `1435`, `1436`, `1437`, `1439`, `1441`, `1445`, `1448`, `1449`, `1450`, `1451`, `1453`, `1454`, `1456`, `1459`, `1460`, `1461`, `1464`, `1466`, `1467`, `1470`, `1473`, `1477`, `1479`, `1481`, `1482`, `1485`, `1487`, `1488`, `1490`, `1495`, `1496`, `1497`, `1499`, `1500`, `1501`, `1503`, `1504`, `1505`, `1506`, `1508`, `1509`, `1512`, `1514`, `1515`, `1516`, `1517`, `1269`, `1518`, `1520`, `1521`, `1523`, `1524`, `1526`, `1528`, `1529`, `1531`, `1532`, `1534`, `1536`, `1537`, `1538`, `1539`, `1540`, `1541`, `294`, `1542`, `1544`, `1546`, `1548`, `1549`, `1551`, `1554`, `1555`, `1556`, `1557`, `1559`, `1560`, `1563`, `1565`, `1566`, `1567`, `1568`, `1569`, `1570`, `1571`, `1572`, `1575`, `1576`, `1577`, `1578`, `1580`, `1582`, `1583`, `1586`, `1589`, `1592`, `1593`, `1594`, `1595`, `1596`, `1597`, `1598`, `1600`, `1601`, `1602`, `1604`, `1605`, `1606`, `1607`, `1608`, `1609`, `1610`, `1611`, `1612`, `1614`, `1615`, `1617`, `1619`, `1620`, `1621`, `1622`, `1623`, `1626`, `1628`, `1629`, `1630`, `1631`, `1632`, `1634`, `1636`, `1638`, `1639`, `1641`, `1643`, `1644`, `1646`, `1647`, `1648`, `1649`, `1222`, `1650`, `1652`, `1653`, `1655`, `1656`, `1657`, `1659`, `1661`, `1662`, `1664`, `1667`, `1668`, `1670`, `1671`, `1673`, `1676`, `1677`, `1679`, `1680`, `1682`, `1685`, `1687`, `1689`, `1691`, `1692`, `1695`, `1696`, `1699`, `1701`, `1703`, `1705`, `1707`, `1708`, `1709`, `1710`, `1712`, `1714`, `1715`, `1718`, `1720`, `1721`, `1722`, `1724`, `1725`, `1726`, `1728`, `1729`, `1731`, `1732`, `1733`, `1734`, `1736`, `1739`, `1742`, `1743`, `1746`, `1748`, `1749`, `1751`, `1752`, `1753`, `1754`, `1395`, `1756`, `1759`, `1760`, 
`1761`, `1762`, `1764`, `1766`, `1768`, `1770`, `1772`, `1773`, `1774`, `1775`, `1776`, `1777`, `1779`, `1233`, `1781`, `1782`, `1783`, `1785`, `1786`, `1787`, `1789`, `1790`, `1791`, `1543`, `1792`, `1794`, `1795`, `1796`, `1798`, `1800`, `1801`, `1802`, `1804`, `1806`, `1807`, `1809`, `1812`, `1814`, `1817`, `1818`, `1738`, `1819`, `1822`, `1824`, `1825`, `1827`, `1828`, `0`, `1829`, `1830`, `1831`, `1833`, `1834`, `1835`, `1837`, `1839`, `1841`, `1844`, `1845`, `1846`, `1847`, `1848`, `1581`, `1849`, `1850`, `1852`, `1854`, `1855`, `1856`, `1857`, `1858`, `1859`, `1860`, `1862`, `1864`, `1866`, `1867`, `1868`, `1869`, `1788`, `1871`, `77`, `1872`, `1873`, `1875`, `1877`, `1878`, `1879`, `1883`, `674`, `1884`, `1886`, `1887`, `1888`, `1889`, `1891`, `1892`, `1894`, `1895`, `1898`, `1899`, `1901`, `1902`, `1903`, `1905`, `1908`, `1911`, `1913`, `1915`, `1916`, `1917`, `1920`, `1921`, `1922`, `1923`, `1924`, `1925`, `1926`, `1927`, `1929`, `1930`, `1931`, `1932`, `1934`, `1935`, `1938`, `1940`, `1941`, `1942`, `1944`, `1945`, `1946`, `1948`, `1949`, `1950`, `1952`, `1953`, `1954`, `1955`, `1956`, `1957`, `1958`, `1959`, `1960`, `1962`, `1963`, `1964`, `1966`, `1968`, `1970`, `1971`, `1972`, `1973`, `1976`, `1978`, `1979`, `1980`, `1981`, `1982`, `1984`, `1985`, `1986`, `1987`, `1988`, `1990`, `237`, `1992`, `1993`, `1994`, `1995`, `1996`, `1997`, `1998`, `1999`, `2000`, `2002`, `2005`, `2007`, `2009`, `2010`, `2011`, `2012`, `2013`, `2014`, `2015`, `2016`, `2017`, `2019`, `2020`, `2021`, `2023`, `2025`, `2026`, `2028`, `2029`, `2032`, `1511`, `2034`, `2036`, `2038`, `2040`, `2042`, `2043`, `2045`, `2046`, `2047`, `2048`, `2049`, `2051`, `2052`, `2053`, `2054`, `2055`, `2056`, `2057`, `2058`, `2059`, `2060`, `2062`, `2064`, `2065`, `2066`, `2067`, `2068`, `2069`, `2071`, `2072`, `2073`, `2074`, `2075`, `2077`, `2078`, `182`, `2081`, `2082`, `2083`, `2084`, `2087`, `2088`, `2089`, `2091`, `2094`, `2096`, `2098`, `1533`, `2099`, `2100`, `2101`, `2103`, `2105`, `2106`, `2107`, `2108`, `2109`, `2110`, `2111`, `2112`, `2113`, `2114`, `2115`, `2116`, `2117`, `2118`, `2120`, `2123`, `2124`, `2126`, `2128`, `2130`, `2132`, `2133`, `2136`, `2139`, `2140`, `39`, `2141`, `130`, `2142`, `2144`, `2145`, `2146`, `2149`, `2150`, `2152`, `2153`, `2154`, `2155`, `2157`, `2158`, `2159`, `2161`, `2162`, `2163`, `2164`, `2166`, `2169`, `2171`, `2173`, `2174`, `2175`, `2176`, `2178`, `2179`, `2180`, `2181`, `2182`, `2183`, `2184`, `2185`, `2186`, `2187`, `2188`, `2190`, `2191`, `2192`, `2193`, `2194`, `2196`, `2198`, `2199`, `2201`, `2204`, `2205`, `2207`, `2209`, `2212`, `2214`, `2216`, `2217`, `2218`, `2219`, `2220`, `2221`, `1730`, `2222`, `2223`, `501`, `2224`, `2225`, `2227`, `2229`, `2230`, `2232`, `2233`, `2234`, `2235`, `2237`, `2239`, `2241`, `2243`, `2244`, `2246`, `2247`, `2248`, `2249`, `2250`, `2251`, `2253`, `2254`, `2257`, `2259`, `2261`, `2264`, `2265`, `2266`, `2269`, `2270`, `2271`, `2273`, `2276`, `2278`, `2280`, `2281`, `2283`, `2285`, `2287`, `2288`, `2289`, `2290`, `2291`, `2292`, `2294`, `2297`, `2298`, `2300`, `2301`, `2302`, `2303`, `2304`, `2305`, `2307`, `2309`, `2312`, `1933`, `2313`, `2314`, `1423`, `2315`, `2316`, `2319`, `2321`, `2322`, `2323`, `2326`, `2328`, `2330`, `2331`, `2332`, `2334`, `63`, `2335`, `2336`, `2338`, `2339`, `2341`, `2343`, `2272`, `2344`, `2346`, `2347`, `2349`, `2350`, `2351`, `2353`, `2354`, `2355`, `2356`, `2357`, `2358`, `195` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 98.65 | | `TOKEN_P` | 98.49 | | `TOKEN_R` | 98.82 | | 
`TOKEN_ACC` | 99.87 | | `SENTS_F` | 90.84 | | `SENTS_P` | 92.62 | | `SENTS_R` | 89.14 | | `TAG_ACC` | 95.60 | | `POS_ACC` | 97.67 | | `MORPH_ACC` | 96.79 | | `DEP_UAS` | 94.66 | | `DEP_LAS` | 92.28 | | `LEMMA_ACC` | 96.46 |
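A minimal usage sketch for the pipeline described above. This is not taken from the model card: it assumes the `nl_udv25_dutchalpino_trf` package has been installed from the `explosion/nl_udv25_dutchalpino_trf` repo (e.g. from a released wheel) together with a spaCy version matching the `>=3.2.1,<3.3.0` pin and the `spacy-experimental` package that provides the `experimental_*` components; the example sentence is invented for illustration.

```python
# Sketch: load the packaged UD benchmarking pipeline and inspect its
# token-level annotations (tagger, morphologizer, parser, lemmatizer).
import spacy

nlp = spacy.load("nl_udv25_dutchalpino_trf")
doc = nlp("De kat zat op de mat.")  # hypothetical example sentence

for token in doc:
    # tag_ holds the fine-grained CGN-style tag, pos_ the UPOS label,
    # morph the UD features, dep_/head the parse, lemma_ the edit-tree lemma.
    print(token.text, token.tag_, token.pos_, str(token.morph),
          token.dep_, token.head.text, token.lemma_, sep="\t")

# Sentence boundaries come from the senter/parser components.
for sent in doc.sents:
    print(sent.text)
```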
{"language": ["nl"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/nl_udv25_dutchalpino_trf
null
[ "spacy", "token-classification", "nl", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "nl" ]
TAGS #spacy #token-classification #nl #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Dutch-Alpino ### Label Scheme View label scheme (1712 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (1712 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #nl #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (1712 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Dutch-LassySmall | Feature | Description | | --- | --- | | **Name** | `nl_udv25_dutchlassysmall_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (1070 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ADJ\|nom\|basis\|met-e\|mv-n`, `ADJ\|nom\|basis\|met-e\|zonder-n\|bijz`, `ADJ\|nom\|basis\|met-e\|zonder-n\|stan`, `ADJ\|nom\|basis\|zonder\|mv-n`, `ADJ\|nom\|basis\|zonder\|zonder-n`, `ADJ\|nom\|comp\|met-e\|mv-n`, `ADJ\|nom\|sup\|met-e\|mv-n`, `ADJ\|nom\|sup\|met-e\|zonder-n\|stan`, `ADJ\|nom\|sup\|zonder\|zonder-n`, `ADJ\|postnom\|basis\|zonder`, `ADJ\|prenom\|basis\|met-e\|bijz`, `ADJ\|prenom\|basis\|met-e\|stan`, `ADJ\|prenom\|basis\|zonder`, `ADJ\|prenom\|comp\|met-e\|stan`, `ADJ\|prenom\|comp\|zonder`, `ADJ\|prenom\|sup\|met-e\|stan`, `ADJ\|vrij\|basis\|zonder`, `ADJ\|vrij\|comp\|zonder`, `ADJ\|vrij\|sup\|zonder`, `BW`, `LET`, `LID\|bep\|gen\|evmo`, `LID\|bep\|gen\|rest3`, `LID\|bep\|stan\|evon`, `LID\|bep\|stan\|rest`, `LID\|onbep\|stan\|agr`, `N\|eigen\|ev\|basis\|gen`, `N\|eigen\|ev\|basis\|genus\|stan`, `N\|eigen\|ev\|basis\|onz\|stan`, `N\|eigen\|ev\|basis\|zijd\|stan`, `N\|eigen\|ev\|dim\|onz\|stan`, `N\|eigen\|mv\|basis`, `N\|soort\|ev\|basis\|dat`, `N\|soort\|ev\|basis\|gen`, `N\|soort\|ev\|basis\|genus\|stan`, `N\|soort\|ev\|basis\|onz\|stan`, `N\|soort\|ev\|basis\|zijd\|stan`, `N\|soort\|ev\|dim\|onz\|stan`, `N\|soort\|mv\|basis`, `N\|soort\|mv\|dim`, `SPEC\|afgebr`, `SPEC\|afk`, `SPEC\|deeleigen`, `SPEC\|enof`, `SPEC\|symb`, `SPEC\|vreemd`, `TSW`, `TW\|hoofd\|nom\|mv-n\|basis`, `TW\|hoofd\|nom\|zonder-n\|basis`, `TW\|hoofd\|nom\|zonder-n\|dim`, `TW\|hoofd\|prenom\|stan`, `TW\|hoofd\|vrij`, `TW\|rang\|nom\|zonder-n`, `TW\|rang\|prenom\|stan`, `VG\|neven`, `VG\|onder`, `VNW\|aanw\|adv-pron\|obl\|vol\|3o\|getal`, `VNW\|aanw\|adv-pron\|stan\|red\|3\|getal`, `VNW\|aanw\|det\|stan\|nom\|met-e\|mv-n`, `VNW\|aanw\|det\|stan\|nom\|met-e\|zonder-n`, `VNW\|aanw\|det\|stan\|prenom\|met-e\|rest`, `VNW\|aanw\|det\|stan\|prenom\|zonder\|agr`, `VNW\|aanw\|det\|stan\|prenom\|zonder\|evon`, `VNW\|aanw\|det\|stan\|prenom\|zonder\|rest`, `VNW\|aanw\|pron\|gen\|vol\|3m\|ev`, `VNW\|aanw\|pron\|stan\|vol\|3o\|ev`, `VNW\|aanw\|pron\|stan\|vol\|3\|getal`, `VNW\|betr\|det\|stan\|nom\|zonder\|zonder-n`, `VNW\|betr\|pron\|stan\|vol\|3\|ev`, `VNW\|betr\|pron\|stan\|vol\|persoon\|getal`, `VNW\|bez\|det\|stan\|red\|3\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|1\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|1\|mv\|prenom\|met-e\|rest`, `VNW\|bez\|det\|stan\|vol\|1\|mv\|prenom\|zonder\|evon`, `VNW\|bez\|det\|stan\|vol\|2\|getal\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|3m\|ev\|prenom\|met-e\|rest`, `VNW\|bez\|det\|stan\|vol\|3p\|mv\|prenom\|met-e\|rest`, 
`VNW\|bez\|det\|stan\|vol\|3v\|ev\|prenom\|met-e\|rest`, `VNW\|bez\|det\|stan\|vol\|3\|ev\|prenom\|zonder\|agr`, `VNW\|bez\|det\|stan\|vol\|3\|mv\|prenom\|zonder\|agr`, `VNW\|onbep\|adv-pron\|obl\|vol\|3o\|getal`, `VNW\|onbep\|det\|stan\|nom\|met-e\|mv-n`, `VNW\|onbep\|det\|stan\|nom\|met-e\|zonder-n`, `VNW\|onbep\|det\|stan\|prenom\|met-e\|agr`, `VNW\|onbep\|det\|stan\|prenom\|met-e\|evz`, `VNW\|onbep\|det\|stan\|prenom\|met-e\|mv`, `VNW\|onbep\|det\|stan\|prenom\|met-e\|rest`, `VNW\|onbep\|det\|stan\|prenom\|zonder\|agr`, `VNW\|onbep\|det\|stan\|prenom\|zonder\|evon`, `VNW\|onbep\|det\|stan\|vrij\|zonder`, `VNW\|onbep\|grad\|gen\|nom\|met-e\|mv-n\|basis`, `VNW\|onbep\|grad\|stan\|nom\|met-e\|mv-n\|basis`, `VNW\|onbep\|grad\|stan\|nom\|met-e\|zonder-n\|basis`, `VNW\|onbep\|grad\|stan\|nom\|met-e\|zonder-n\|sup`, `VNW\|onbep\|grad\|stan\|prenom\|met-e\|agr\|basis`, `VNW\|onbep\|grad\|stan\|prenom\|met-e\|agr\|sup`, `VNW\|onbep\|grad\|stan\|prenom\|met-e\|mv\|basis`, `VNW\|onbep\|grad\|stan\|prenom\|zonder\|agr\|basis`, `VNW\|onbep\|grad\|stan\|prenom\|zonder\|agr\|comp`, `VNW\|onbep\|grad\|stan\|vrij\|zonder\|basis`, `VNW\|onbep\|grad\|stan\|vrij\|zonder\|comp`, `VNW\|onbep\|grad\|stan\|vrij\|zonder\|sup`, `VNW\|onbep\|pron\|stan\|vol\|3o\|ev`, `VNW\|onbep\|pron\|stan\|vol\|3p\|ev`, `VNW\|pers\|pron\|nomin\|nadr\|3v\|ev\|fem`, `VNW\|pers\|pron\|nomin\|red\|1\|mv`, `VNW\|pers\|pron\|nomin\|red\|2v\|ev`, `VNW\|pers\|pron\|nomin\|red\|3p\|ev\|masc`, `VNW\|pers\|pron\|nomin\|vol\|1\|ev`, `VNW\|pers\|pron\|nomin\|vol\|1\|mv`, `VNW\|pers\|pron\|nomin\|vol\|2b\|getal`, `VNW\|pers\|pron\|nomin\|vol\|3p\|mv`, `VNW\|pers\|pron\|nomin\|vol\|3v\|ev\|fem`, `VNW\|pers\|pron\|nomin\|vol\|3\|ev\|masc`, `VNW\|pers\|pron\|obl\|nadr\|3m\|ev\|masc`, `VNW\|pers\|pron\|obl\|vol\|3p\|mv`, `VNW\|pers\|pron\|obl\|vol\|3\|ev\|masc`, `VNW\|pers\|pron\|obl\|vol\|3\|getal\|fem`, `VNW\|pers\|pron\|stan\|red\|3\|ev\|fem`, `VNW\|pers\|pron\|stan\|red\|3\|ev\|onz`, `VNW\|pers\|pron\|stan\|red\|3\|mv`, `VNW\|pr\|pron\|obl\|red\|1\|ev`, `VNW\|pr\|pron\|obl\|red\|2v\|getal`, `VNW\|pr\|pron\|obl\|vol\|1\|ev`, `VNW\|pr\|pron\|obl\|vol\|1\|mv`, `VNW\|recip\|pron\|obl\|vol\|persoon\|mv`, `VNW\|refl\|pron\|obl\|nadr\|3\|getal`, `VNW\|refl\|pron\|obl\|red\|3\|getal`, `VNW\|vb\|adv-pron\|obl\|vol\|3o\|getal`, `VNW\|vb\|pron\|stan\|vol\|3o\|ev`, `VNW\|vb\|pron\|stan\|vol\|3p\|getal`, `VZ\|fin`, `VZ\|init`, `VZ\|versm`, `WW\|inf\|nom\|zonder\|zonder-n`, `WW\|inf\|vrij\|zonder`, `WW\|od\|nom\|met-e\|mv-n`, `WW\|od\|nom\|met-e\|zonder-n`, `WW\|od\|prenom\|met-e`, `WW\|od\|prenom\|zonder`, `WW\|od\|vrij\|zonder`, `WW\|pv\|conj\|ev`, `WW\|pv\|tgw\|ev`, `WW\|pv\|tgw\|met-t`, `WW\|pv\|tgw\|mv`, `WW\|pv\|verl\|ev`, `WW\|pv\|verl\|mv`, `WW\|vd\|nom\|met-e\|mv-n`, `WW\|vd\|nom\|met-e\|zonder-n`, `WW\|vd\|prenom\|met-e`, `WW\|vd\|prenom\|zonder`, `WW\|vd\|vrij\|zonder` | | **`morphologizer`** | `Definite=Def\|POS=DET`, `Degree=Pos\|POS=ADJ`, `POS=CCONJ`, `Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `POS=DET`, `Degree=Sup\|POS=ADJ`, `Number=Sing\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Gender=Com\|Number=Sing\|POS=PROPN`, `POS=SYM`, `POS=NUM`, `POS=ADP`, `Definite=Ind\|POS=DET`, `Gender=Com\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=PROPN`, `POS=ADV`, `Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `POS=PROPN`, `Number=Plur\|POS=NOUN`, `POS=VERB\|VerbForm=Inf`, `POS=PRON\|PronType=Rel`, `Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `POS=PRON\|PronType=Ind`, `POS=VERB\|VerbForm=Part`, `POS=ADJ`, 
`POS=X`, `Gender=Com,Neut\|Number=Sing\|POS=PROPN`, `Foreign=Yes\|POS=X`, `POS=PRON\|Person=3\|PronType=Prs`, `POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PROPN`, `Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|Person=3\|PronType=Rel`, `POS=AUX\|VerbForm=Inf`, `POS=SCONJ`, `Number=Plur\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|POS=X`, `Case=Nom\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PRON\|Person=3\|PronType=Dem`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `POS=PRON\|PronType=Dem`, `POS=PRON\|Person=3\|PronType=Int`, `Gender=Com,Neut\|Number=Sing\|POS=NOUN`, `POS=PRON\|Person=3\|PronType=Ind`, `Case=Nom\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|POS=NOUN`, `Case=Acc\|POS=PRON\|PronType=Rcp`, `POS=AUX\|VerbForm=Part`, `Number=Sing\|POS=PROPN`, `Case=Nom\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=1\|PronType=Prs`, `POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `3`, `4`, `6`, `8`, `10`, `13`, `15`, `17`, `19`, `21`, `22`, `25`, `27`, `29`, `30`, `32`, `36`, `37`, `39`, `41`, `42`, `45`, `47`, `49`, `51`, `53`, `55`, `58`, `59`, `61`, `63`, `65`, `66`, `70`, `72`, `74`, `76`, `78`, `79`, `80`, `82`, `84`, `86`, `89`, `92`, `94`, `96`, `97`, `99`, `101`, `104`, `107`, `109`, `110`, `112`, `114`, `116`, `117`, `118`, `119`, `120`, `121`, `124`, `126`, `127`, `130`, `133`, `134`, `135`, `137`, `139`, `142`, `143`, `145`, `148`, `152`, `154`, `156`, `159`, `160`, `163`, `165`, `167`, `168`, `172`, `175`, `176`, `178`, `182`, `184`, `187`, `189`, `190`, `192`, `194`, `195`, `197`, `199`, `200`, `203`, `205`, `207`, `208`, `209`, `212`, `214`, `215`, `217`, `219`, `220`, `221`, `224`, `227`, `228`, `230`, `232`, `233`, `237`, `238`, `240`, `241`, `242`, `244`, `245`, `248`, `249`, `250`, `251`, `252`, `255`, `258`, `259`, `260`, `262`, `266`, `268`, `270`, `272`, `275`, `278`, `280`, `281`, `282`, `283`, `285`, `287`, `290`, `291`, `293`, `297`, `299`, `300`, `301`, `302`, `306`, `307`, `309`, `310`, `311`, `313`, `314`, `316`, `318`, `319`, `320`, `324`, `329`, `332`, `333`, `335`, `337`, `339`, `343`, `346`, `347`, `348`, `352`, `353`, `357`, `358`, `359`, `360`, `362`, `363`, `366`, `369`, `372`, `374`, `377`, `378`, `379`, `381`, `382`, `386`, `387`, `391`, `395`, `397`, `399`, `400`, `401`, `403`, `406`, `407`, `408`, `409`, `410`, `411`, `412`, `414`, `415`, `417`, `419`, `421`, `423`, `426`, `427`, `428`, `431`, `433`, `435`, `437`, `439`, `441`, `444`, `446`, `448`, `451`, `453`, `455`, `457`, `458`, `460`, `462`, `463`, `465`, `467`, `469`, `470`, `472`, `474`, `475`, `478`, `482`, `483`, `485`, `489`, `491`, `492`, `493`, `495`, `499`, `500`, `502`, `506`, `508`, `511`, `514`, `518`, `520`, `522`, `525`, `527`, `528`, `532`, `534`, `535`, `538`, `540`, `541`, `544`, `546`, `547`, `548`, `551`, `552`, `556`, `558`, 
`559`, `560`, `563`, `565`, `567`, `569`, `570`, `573`, `577`, `579`, `581`, `584`, `587`, `589`, `591`, `595`, `597`, `599`, `600`, `601`, `602`, `606`, `608`, `610`, `612`, `614`, `615`, `616`, `618`, `619`, `620`, `621`, `622`, `626`, `628`, `629`, `631`, `632`, `634`, `635`, `636`, `637`, `639`, `641`, `644`, `649`, `653`, `654`, `656`, `657`, `658`, `661`, `663`, `664`, `665`, `666`, `667`, `668`, `669`, `670`, `674`, `676`, `678`, `679`, `682`, `685`, `687`, `689`, `692`, `694`, `696`, `699`, `702`, `703`, `704`, `705`, `706`, `708`, `709`, `711`, `712`, `714`, `715`, `717`, `718`, `719`, `722`, `725`, `729`, `730`, `733`, `736`, `738`, `739`, `743`, `745`, `746`, `749`, `750`, `328`, `752`, `754`, `755`, `757`, `760`, `761`, `762`, `764`, `767`, `769`, `770`, `773`, `777`, `778`, `781`, `783`, `784`, `785`, `786`, `789`, `790`, `793`, `794`, `795`, `798`, `800`, `162`, `803`, `806`, `809`, `812`, `813`, `815`, `817`, `818`, `819`, `821`, `823`, `824`, `825`, `827`, `830`, `832`, `834`, `836`, `838`, `648`, `839`, `841`, `843`, `844`, `846`, `848`, `849`, `851`, `852`, `853`, `854`, `855`, `857`, `859`, `860`, `861`, `863`, `865`, `867`, `869`, `872`, `873`, `875`, `877`, `879`, `881`, `883`, `885`, `886`, `887`, `888`, `890`, `893`, `894`, `896`, `899`, `901`, `902`, `904`, `906`, `908`, `911`, `913`, `915`, `918`, `919`, `920`, `921`, `926`, `928`, `930`, `931`, `932`, `933`, `934`, `396`, `935`, `936`, `938`, `939`, `940`, `942`, `945`, `946`, `947`, `948`, `950`, `951`, `954`, `956`, `957`, `960`, `962`, `964`, `967`, `969`, `970`, `971`, `975`, `976`, `977`, `978`, `979`, `980`, `981`, `982`, `983`, `984`, `985`, `988`, `990`, `991`, `995`, `997`, `998`, `840`, `999`, `1000`, `1002`, `1003`, `1004`, `1006`, `1008`, `1009`, `1013`, `1017`, `862`, `1019`, `1020`, `1021`, `1024`, `1025`, `1027`, `1028`, `1029`, `1031`, `1033`, `1036`, `1039`, `1040`, `1041`, `1043`, `1044`, `1047`, `1048`, `1052`, `1055`, `1056`, `1057`, `1061`, `1062`, `1063`, `1066`, `1069`, `507`, `1071`, `1072`, `1074`, `1075`, `1076`, `1078`, `1079`, `1080`, `1081`, `1082`, `1085`, `1086`, `1087`, `1089`, `1090`, `1091`, `1093`, `1094`, `1097`, `1100`, `1102`, `1103`, `1104`, `1106`, `1107`, `1108`, `1109`, `1111`, `1113`, `1115`, `1116`, `1119`, `1121`, `1122`, `1123`, `1125`, `1126`, `1127`, `1128`, `1129`, `1131`, `1132`, `1135`, `1138`, `1140`, `1141`, `1143`, `1144`, `1145`, `1147`, `1150`, `1151`, `1152`, `1154`, `1155`, `1158`, `1159`, `1160`, `1161`, `1162`, `1164`, `1166`, `1167`, `1169`, `1170`, `1172`, `1175`, `1177`, `510`, `1178`, `1181`, `1182`, `1183`, `1185`, `1187`, `1189`, `1190`, `1191`, `1192`, `1194`, `1197`, `1201`, `1202`, `1203`, `1206`, `1208`, `1209`, `1210`, `1213`, `1217`, `1218`, `1220`, `1221`, `1223`, `1225`, `1227`, `1229`, `1231`, `1233`, `1236`, `1238`, `1240`, `1241`, `1244`, `1245`, `1247`, `1249`, `1250`, `1252`, `1253`, `1254`, `1255`, `1257`, `1259`, `1261`, `1262`, `1264`, `1266`, `1268`, `1271`, `1273`, `1274`, `1276`, `1278`, `1279`, `48`, `1280`, `1281`, `1283`, `1248`, `1284`, `1286`, `1287`, `1289`, `1290`, `1292`, `884`, `1293`, `1295`, `1296`, `1298`, `1299`, `1300`, `1302`, `1303`, `1304`, `1305`, `1306`, `1307`, `1309`, `1311`, `1313`, `1316`, `1317`, `1318`, `1319`, `1321`, `206`, `1322`, `1323`, `1328`, `1330`, `1331`, `1332`, `1334`, `1336`, `1338`, `1341`, `1342`, `1343`, `1344`, `1345`, `1346`, `1347`, `1348`, `1350`, `1352`, `1354`, `1356`, `1357`, `1358`, `1359`, `1360`, `1361`, `864`, `1363`, `1364`, `1366`, `1367`, `1368`, `1370`, `1371`, `1372`, 
`1374`, `1376`, `1377`, `1378`, `1379`, `1381`, `1382`, `1383`, `1384`, `1386`, `1387`, `1389`, `1390`, `1391`, `1393`, `1396`, `1397`, `1398`, `1399`, `1403`, `1404`, `1406`, `1407`, `1410`, `1412`, `1415`, `1416`, `1419`, `1421`, `1422`, `1423`, `1424`, `1425`, `1427`, `1429`, `1432`, `1433`, `1437`, `1440`, `1442`, `1447`, `1450`, `1452`, `1454`, `1457`, `1458`, `1459`, `1460`, `1462`, `1463`, `1464`, `1466`, `1468`, `1469`, `1471`, `1473`, `1475`, `1476`, `1478`, `1479`, `1480`, `1481`, `1482`, `1483`, `1484` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.91 | | `TOKEN_P` | 99.88 | | `TOKEN_R` | 99.94 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 91.84 | | `SENTS_P` | 90.52 | | `SENTS_R` | 93.20 | | `TAG_ACC` | 95.93 | | `POS_ACC` | 96.37 | | `MORPH_ACC` | 97.73 | | `DEP_UAS` | 90.23 | | `DEP_LAS` | 86.21 | | `LEMMA_ACC` | 96.71 |
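As with the Alpino card above, a short sketch can show how the label scheme listed in this card maps onto the installed pipeline. The same installation assumptions apply (package built from `explosion/nl_udv25_dutchlassysmall_trf`, matching spaCy and `spacy-experimental` versions); the component names are taken from the card, everything else is illustrative.

```python
# Sketch: inspect the components and per-component label sets of the
# packaged nl_udv25_dutchlassysmall_trf pipeline.
import spacy

nlp = spacy.load("nl_udv25_dutchlassysmall_trf")
print(nlp.pipe_names)  # component order as listed under "Components"

for name in ("tagger", "morphologizer", "parser"):
    labels = nlp.get_pipe(name).labels
    print(f"{name}: {len(labels)} labels, e.g. {labels[:3]}")
```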
{"language": ["nl"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/nl_udv25_dutchlassysmall_trf
null
[ "spacy", "token-classification", "nl", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "nl" ]
TAGS #spacy #token-classification #nl #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Dutch-LassySmall ### Label Scheme View label scheme (1070 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (1070 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #nl #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (1070 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Polish-LFG | Feature | Description | | --- | --- | | **Name** | `pl_udv25_polishlfg_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `GPL 3.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (4947 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `adj:pl:acc:f:com`, `adj:pl:acc:f:pos`, `adj:pl:acc:f:sup`, `adj:pl:acc:m1:com`, `adj:pl:acc:m1:pos`, `adj:pl:acc:m1:sup`, `adj:pl:acc:m2:pos`, `adj:pl:acc:m3:com`, `adj:pl:acc:m3:pos`, `adj:pl:acc:m3:sup`, `adj:pl:acc:n:com`, `adj:pl:acc:n:pos`, `adj:pl:acc:n:sup`, `adj:pl:dat:f:pos`, `adj:pl:dat:m1:com`, `adj:pl:dat:m1:pos`, `adj:pl:dat:m3:pos`, `adj:pl:dat:n:pos`, `adj:pl:gen:f:com`, `adj:pl:gen:f:pos`, `adj:pl:gen:f:sup`, `adj:pl:gen:m1:com`, `adj:pl:gen:m1:pos`, `adj:pl:gen:m1:sup`, `adj:pl:gen:m2:pos`, `adj:pl:gen:m2:sup`, `adj:pl:gen:m3:com`, `adj:pl:gen:m3:pos`, `adj:pl:gen:m3:sup`, `adj:pl:gen:n:com`, `adj:pl:gen:n:pos`, `adj:pl:inst:f:pos`, `adj:pl:inst:m1:pos`, `adj:pl:inst:m2:pos`, `adj:pl:inst:m3:pos`, `adj:pl:inst:n:pos`, `adj:pl:loc:f:pos`, `adj:pl:loc:f:sup`, `adj:pl:loc:m1:com`, `adj:pl:loc:m1:pos`, `adj:pl:loc:m3:pos`, `adj:pl:loc:m3:sup`, `adj:pl:loc:n:com`, `adj:pl:loc:n:pos`, `adj:pl:nom:f:com`, `adj:pl:nom:f:pos`, `adj:pl:nom:f:sup`, `adj:pl:nom:m1:com`, `adj:pl:nom:m1:pos`, `adj:pl:nom:m1:sup`, `adj:pl:nom:m2:pos`, `adj:pl:nom:m2:sup`, `adj:pl:nom:m3:com`, `adj:pl:nom:m3:pos`, `adj:pl:nom:m3:sup`, `adj:pl:nom:n:com`, `adj:pl:nom:n:pos`, `adj:sg:acc:f:com`, `adj:sg:acc:f:pos`, `adj:sg:acc:f:sup`, `adj:sg:acc:m1:com`, `adj:sg:acc:m1:pos`, `adj:sg:acc:m1:sup`, `adj:sg:acc:m2:pos`, `adj:sg:acc:m3:com`, `adj:sg:acc:m3:pos`, `adj:sg:acc:m3:sup`, `adj:sg:acc:n:com`, `adj:sg:acc:n:pos`, `adj:sg:acc:n:sup`, `adj:sg:dat:f:pos`, `adj:sg:dat:m1:com`, `adj:sg:dat:m1:pos`, `adj:sg:dat:m2:pos`, `adj:sg:dat:m3:pos`, `adj:sg:dat:n:com`, `adj:sg:dat:n:pos`, `adj:sg:gen:f:com`, `adj:sg:gen:f:pos`, `adj:sg:gen:f:sup`, `adj:sg:gen:m1:pos`, `adj:sg:gen:m2:pos`, `adj:sg:gen:m3:com`, `adj:sg:gen:m3:pos`, `adj:sg:gen:m3:sup`, `adj:sg:gen:n:com`, `adj:sg:gen:n:pos`, `adj:sg:inst:f:com`, `adj:sg:inst:f:pos`, `adj:sg:inst:f:sup`, `adj:sg:inst:m1:com`, `adj:sg:inst:m1:pos`, `adj:sg:inst:m1:sup`, `adj:sg:inst:m2:pos`, `adj:sg:inst:m2:sup`, `adj:sg:inst:m3:com`, `adj:sg:inst:m3:pos`, `adj:sg:inst:m3:sup`, `adj:sg:inst:n:com`, `adj:sg:inst:n:pos`, `adj:sg:loc:f:com`, `adj:sg:loc:f:pos`, `adj:sg:loc:f:sup`, `adj:sg:loc:m1:pos`, `adj:sg:loc:m2:pos`, `adj:sg:loc:m3:com`, `adj:sg:loc:m3:pos`, `adj:sg:loc:m3:sup`, `adj:sg:loc:n:com`, `adj:sg:loc:n:pos`, `adj:sg:loc:n:sup`, `adj:sg:nom:f:com`, `adj:sg:nom:f:pos`, `adj:sg:nom:f:sup`, `adj:sg:nom:m1:com`, `adj:sg:nom:m1:pos`, `adj:sg:nom:m1:sup`, `adj:sg:nom:m2:com`, `adj:sg:nom:m2:pos`, `adj:sg:nom:m3:com`, `adj:sg:nom:m3:pos`, `adj:sg:nom:m3:sup`, `adj:sg:nom:n:com`, 
`adj:sg:nom:n:pos`, `adj:sg:nom:n:sup`, `adj:sg:voc:f:sup`, `adj:sg:voc:m1:pos`, `adj:sg:voc:m3:pos`, `adja`, `adjc`, `adjp`, `adv`, `adv:com`, `adv:pos`, `adv:sup`, `aglt:pl:pri:imperf:nwok`, `aglt:pl:sec:imperf:nwok`, `aglt:sg:pri:imperf:nwok`, `aglt:sg:pri:imperf:wok`, `aglt:sg:sec:imperf:nwok`, `aglt:sg:sec:imperf:wok`, `bedzie:pl:pri:imperf`, `bedzie:pl:sec:imperf`, `bedzie:pl:ter:imperf`, `bedzie:sg:pri:imperf`, `bedzie:sg:sec:imperf`, `bedzie:sg:ter:imperf`, `comp`, `conj`, `depr:pl:nom:m2`, `depr:pl:voc:m2`, `fin:pl:pri:imperf`, `fin:pl:pri:perf`, `fin:pl:sec:imperf`, `fin:pl:sec:perf`, `fin:pl:ter:imperf`, `fin:pl:ter:perf`, `fin:sg:pri:imperf`, `fin:sg:pri:perf`, `fin:sg:sec:imperf`, `fin:sg:sec:perf`, `fin:sg:ter:imperf`, `fin:sg:ter:perf`, `ger:sg:acc:n:imperf:aff`, `ger:sg:acc:n:imperf:neg`, `ger:sg:acc:n:perf:aff`, `ger:sg:dat:n:imperf:aff`, `ger:sg:dat:n:perf:aff`, `ger:sg:gen:n:imperf:aff`, `ger:sg:gen:n:perf:aff`, `ger:sg:inst:n:imperf:aff`, `ger:sg:inst:n:perf:aff`, `ger:sg:loc:n:imperf:aff`, `ger:sg:loc:n:perf:aff`, `ger:sg:nom:n:imperf:aff`, `ger:sg:nom:n:perf:aff`, `imps:imperf`, `imps:perf`, `impt:pl:pri:imperf`, `impt:pl:pri:perf`, `impt:pl:sec:imperf`, `impt:pl:sec:perf`, `impt:sg:sec:imperf`, `impt:sg:sec:perf`, `inf:imperf`, `inf:perf`, `interj`, `interp`, `num:pl:acc:f:congr`, `num:pl:acc:f:rec`, `num:pl:acc:m1:rec`, `num:pl:acc:m2:congr`, `num:pl:acc:m2:rec`, `num:pl:acc:m3:congr`, `num:pl:acc:m3:rec`, `num:pl:acc:n:congr`, `num:pl:acc:n:rec`, `num:pl:dat:f:congr`, `num:pl:dat:m1:congr`, `num:pl:dat:n:congr`, `num:pl:gen:f:congr`, `num:pl:gen:m1:congr`, `num:pl:gen:m2:congr`, `num:pl:gen:m3:congr`, `num:pl:gen:m3:rec`, `num:pl:gen:n:congr`, `num:pl:inst:f:congr`, `num:pl:inst:m1:congr`, `num:pl:inst:m2:congr`, `num:pl:inst:m3:congr`, `num:pl:inst:n:congr`, `num:pl:loc:f:congr`, `num:pl:loc:m1:congr`, `num:pl:loc:m3:congr`, `num:pl:loc:n:congr`, `num:pl:nom:f:congr`, `num:pl:nom:m1:congr`, `num:pl:nom:m2:congr`, `num:pl:nom:m3:congr`, `num:pl:nom:n:congr`, `pact:pl:acc:f:imperf:aff`, `pact:pl:acc:m1:imperf:aff`, `pact:pl:acc:m2:imperf:aff`, `pact:pl:acc:m3:imperf:aff`, `pact:pl:acc:n:imperf:aff`, `pact:pl:dat:m1:imperf:aff`, `pact:pl:gen:f:imperf:aff`, `pact:pl:gen:m1:imperf:aff`, `pact:pl:gen:m1:imperf:neg`, `pact:pl:gen:m2:imperf:aff`, `pact:pl:gen:m3:imperf:aff`, `pact:pl:gen:n:imperf:aff`, `pact:pl:inst:m1:imperf:aff`, `pact:pl:inst:n:imperf:aff`, `pact:pl:loc:f:imperf:aff`, `pact:pl:loc:m3:imperf:aff`, `pact:pl:nom:f:imperf:aff`, `pact:pl:nom:m1:imperf:aff`, `pact:pl:nom:m2:imperf:aff`, `pact:pl:nom:m3:imperf:aff`, `pact:pl:nom:n:imperf:aff`, `pact:sg:acc:f:imperf:aff`, `pact:sg:acc:m1:imperf:aff`, `pact:sg:acc:m2:imperf:aff`, `pact:sg:acc:m3:imperf:aff`, `pact:sg:acc:n:imperf:aff`, `pact:sg:dat:f:imperf:aff`, `pact:sg:dat:m1:imperf:aff`, `pact:sg:gen:f:imperf:aff`, `pact:sg:gen:m1:imperf:aff`, `pact:sg:gen:m3:imperf:aff`, `pact:sg:gen:n:imperf:aff`, `pact:sg:inst:f:imperf:aff`, `pact:sg:inst:f:imperf:neg`, `pact:sg:inst:m1:imperf:aff`, `pact:sg:inst:m3:imperf:aff`, `pact:sg:inst:n:imperf:aff`, `pact:sg:loc:f:imperf:aff`, `pact:sg:loc:m1:imperf:aff`, `pact:sg:loc:m2:imperf:aff`, `pact:sg:loc:m3:imperf:aff`, `pact:sg:loc:n:imperf:aff`, `pact:sg:nom:f:imperf:aff`, `pact:sg:nom:m1:imperf:aff`, `pact:sg:nom:m2:imperf:aff`, `pact:sg:nom:m3:imperf:aff`, `pact:sg:nom:n:imperf:aff`, `pant:perf`, `pcon:imperf`, `ppas:pl:acc:f:imperf:aff`, `ppas:pl:acc:f:perf:aff`, `ppas:pl:acc:m1:perf:aff`, `ppas:pl:acc:m2:perf:aff`, `ppas:pl:acc:m3:imperf:aff`, 
`ppas:pl:acc:m3:perf:aff`, `ppas:pl:acc:m3:perf:neg`, `ppas:pl:acc:n:perf:aff`, `ppas:pl:dat:f:imperf:aff`, `ppas:pl:dat:f:perf:aff`, `ppas:pl:dat:m1:perf:aff`, `ppas:pl:dat:m3:perf:aff`, `ppas:pl:gen:f:imperf:aff`, `ppas:pl:gen:f:imperf:neg`, `ppas:pl:gen:f:perf:aff`, `ppas:pl:gen:m1:imperf:aff`, `ppas:pl:gen:m1:perf:aff`, `ppas:pl:gen:m2:perf:aff`, `ppas:pl:gen:m3:imperf:aff`, `ppas:pl:gen:m3:perf:aff`, `ppas:pl:gen:n:imperf:aff`, `ppas:pl:gen:n:perf:aff`, `ppas:pl:inst:f:perf:aff`, `ppas:pl:inst:m1:perf:aff`, `ppas:pl:inst:m3:perf:aff`, `ppas:pl:inst:n:perf:aff`, `ppas:pl:loc:f:imperf:neg`, `ppas:pl:loc:f:perf:aff`, `ppas:pl:loc:m1:imperf:aff`, `ppas:pl:loc:m3:imperf:aff`, `ppas:pl:loc:m3:perf:aff`, `ppas:pl:loc:n:imperf:aff`, `ppas:pl:loc:n:perf:aff`, `ppas:pl:nom:f:imperf:aff`, `ppas:pl:nom:f:imperf:neg`, `ppas:pl:nom:f:perf:aff`, `ppas:pl:nom:m1:imperf:aff`, `ppas:pl:nom:m1:perf:aff`, `ppas:pl:nom:m2:perf:aff`, `ppas:pl:nom:m3:imperf:aff`, `ppas:pl:nom:m3:perf:aff`, `ppas:pl:nom:n:imperf:aff`, `ppas:pl:nom:n:perf:aff`, `ppas:sg:acc:f:imperf:aff`, `ppas:sg:acc:f:perf:aff`, `ppas:sg:acc:m1:perf:aff`, `ppas:sg:acc:m2:perf:aff`, `ppas:sg:acc:m3:perf:aff`, `ppas:sg:acc:n:perf:aff`, `ppas:sg:dat:f:perf:aff`, `ppas:sg:dat:m1:perf:aff`, `ppas:sg:gen:f:imperf:aff`, `ppas:sg:gen:f:perf:aff`, `ppas:sg:gen:f:perf:neg`, `ppas:sg:gen:m1:imperf:aff`, `ppas:sg:gen:m1:perf:aff`, `ppas:sg:gen:m2:perf:aff`, `ppas:sg:gen:m3:imperf:aff`, `ppas:sg:gen:m3:perf:aff`, `ppas:sg:gen:n:imperf:aff`, `ppas:sg:gen:n:perf:aff`, `ppas:sg:inst:f:imperf:neg`, `ppas:sg:inst:f:perf:aff`, `ppas:sg:inst:m1:imperf:aff`, `ppas:sg:inst:m1:perf:aff`, `ppas:sg:inst:m3:perf:aff`, `ppas:sg:inst:n:perf:aff`, `ppas:sg:inst:n:perf:neg`, `ppas:sg:loc:f:imperf:aff`, `ppas:sg:loc:f:perf:aff`, `ppas:sg:loc:m3:imperf:aff`, `ppas:sg:loc:m3:perf:aff`, `ppas:sg:loc:n:imperf:aff`, `ppas:sg:loc:n:perf:aff`, `ppas:sg:nom:f:imperf:aff`, `ppas:sg:nom:f:imperf:neg`, `ppas:sg:nom:f:perf:aff`, `ppas:sg:nom:f:perf:neg`, `ppas:sg:nom:m1:imperf:aff`, `ppas:sg:nom:m1:perf:aff`, `ppas:sg:nom:m2:imperf:aff`, `ppas:sg:nom:m2:perf:aff`, `ppas:sg:nom:m3:imperf:aff`, `ppas:sg:nom:m3:perf:aff`, `ppas:sg:nom:m3:perf:neg`, `ppas:sg:nom:n:imperf:aff`, `ppas:sg:nom:n:perf:aff`, `ppron12:pl:acc:f:pri`, `ppron12:pl:acc:m1:pri`, `ppron12:pl:acc:m1:sec`, `ppron12:pl:acc:n:sec`, `ppron12:pl:dat:f:pri`, `ppron12:pl:dat:f:sec`, `ppron12:pl:dat:m1:pri`, `ppron12:pl:dat:m1:sec`, `ppron12:pl:gen:f:pri`, `ppron12:pl:gen:m1:pri`, `ppron12:pl:gen:m1:sec`, `ppron12:pl:inst:m1:pri`, `ppron12:pl:inst:m1:sec`, `ppron12:pl:loc:m1:pri`, `ppron12:pl:loc:m1:sec`, `ppron12:pl:nom:f:pri`, `ppron12:pl:nom:m1:pri`, `ppron12:pl:nom:m1:sec`, `ppron12:sg:acc:f:pri:akc`, `ppron12:sg:acc:f:sec:akc`, `ppron12:sg:acc:f:sec:nakc`, `ppron12:sg:acc:m1:pri:akc`, `ppron12:sg:acc:m1:pri:nakc`, `ppron12:sg:acc:m1:sec:akc`, `ppron12:sg:acc:m1:sec:nakc`, `ppron12:sg:acc:m2:pri:akc`, `ppron12:sg:acc:m2:sec:nakc`, `ppron12:sg:acc:m3:pri:akc`, `ppron12:sg:dat:f:pri:akc`, `ppron12:sg:dat:f:pri:nakc`, `ppron12:sg:dat:f:sec:akc`, `ppron12:sg:dat:f:sec:nakc`, `ppron12:sg:dat:m1:pri:akc`, `ppron12:sg:dat:m1:pri:nakc`, `ppron12:sg:dat:m1:sec:akc`, `ppron12:sg:dat:m1:sec:nakc`, `ppron12:sg:dat:m2:sec:nakc`, `ppron12:sg:dat:n:pri:nakc`, `ppron12:sg:gen:f:pri:akc`, `ppron12:sg:gen:f:sec:akc`, `ppron12:sg:gen:f:sec:nakc`, `ppron12:sg:gen:m1:pri:akc`, `ppron12:sg:gen:m1:sec:akc`, `ppron12:sg:gen:m1:sec:nakc`, `ppron12:sg:gen:m2:sec:akc`, `ppron12:sg:inst:f:pri`, `ppron12:sg:inst:f:sec`, `ppron12:sg:inst:m1:pri`, 
`ppron12:sg:inst:m1:sec`, `ppron12:sg:loc:f:pri`, `ppron12:sg:loc:f:sec`, `ppron12:sg:loc:m1:pri`, `ppron12:sg:loc:m1:sec`, `ppron12:sg:nom:f:pri`, `ppron12:sg:nom:f:sec`, `ppron12:sg:nom:m1:pri`, `ppron12:sg:nom:m1:sec`, `ppron12:sg:nom:m2:sec`, `ppron3:pl:acc:f:ter:akc:npraep`, `ppron3:pl:acc:f:ter:akc:praep`, `ppron3:pl:acc:m1:ter:akc:npraep`, `ppron3:pl:acc:m1:ter:akc:praep`, `ppron3:pl:acc:m2:ter:akc:npraep`, `ppron3:pl:acc:m3:ter:akc:npraep`, `ppron3:pl:acc:n:ter:akc:npraep`, `ppron3:pl:acc:n:ter:akc:praep`, `ppron3:pl:dat:f:ter:akc:npraep`, `ppron3:pl:dat:f:ter:akc:praep`, `ppron3:pl:dat:m1:ter:akc:npraep`, `ppron3:pl:dat:m3:ter:akc:praep`, `ppron3:pl:dat:n:ter:akc:npraep`, `ppron3:pl:gen:f:ter:akc:npraep`, `ppron3:pl:gen:f:ter:akc:praep`, `ppron3:pl:gen:m1:ter:akc:npraep`, `ppron3:pl:gen:m1:ter:akc:praep`, `ppron3:pl:gen:m2:ter:akc:npraep`, `ppron3:pl:gen:m3:ter:akc:npraep`, `ppron3:pl:gen:m3:ter:akc:praep`, `ppron3:pl:gen:n:ter:akc:npraep`, `ppron3:pl:gen:n:ter:akc:praep`, `ppron3:pl:inst:f:ter:akc:npraep`, `ppron3:pl:inst:f:ter:akc:praep`, `ppron3:pl:inst:m1:ter:akc:praep`, `ppron3:pl:inst:m2:ter:akc:praep`, `ppron3:pl:inst:n:ter:akc:praep`, `ppron3:pl:loc:f:ter:akc:praep`, `ppron3:pl:loc:m1:ter:akc:praep`, `ppron3:pl:loc:m3:ter:akc:praep`, `ppron3:pl:loc:n:ter:akc:praep`, `ppron3:pl:nom:f:ter:akc:npraep`, `ppron3:pl:nom:m1:ter:akc:npraep`, `ppron3:pl:nom:m3:ter:akc:npraep`, `ppron3:pl:nom:n:ter:akc:npraep`, `ppron3:sg:acc:f:ter:akc:npraep`, `ppron3:sg:acc:f:ter:akc:praep`, `ppron3:sg:acc:m1:ter:akc:npraep`, `ppron3:sg:acc:m1:ter:akc:praep`, `ppron3:sg:acc:m1:ter:nakc:npraep`, `ppron3:sg:acc:m1:ter:nakc:praep`, `ppron3:sg:acc:m2:ter:akc:praep`, `ppron3:sg:acc:m2:ter:nakc:npraep`, `ppron3:sg:acc:m3:ter:akc:praep`, `ppron3:sg:acc:m3:ter:nakc:npraep`, `ppron3:sg:acc:m3:ter:nakc:praep`, `ppron3:sg:acc:n:ter:akc:npraep`, `ppron3:sg:acc:n:ter:akc:praep`, `ppron3:sg:dat:f:ter:akc:npraep`, `ppron3:sg:dat:f:ter:akc:praep`, `ppron3:sg:dat:m1:ter:akc:npraep`, `ppron3:sg:dat:m1:ter:akc:praep`, `ppron3:sg:dat:m1:ter:nakc:npraep`, `ppron3:sg:dat:m2:ter:nakc:npraep`, `ppron3:sg:dat:m3:ter:nakc:npraep`, `ppron3:sg:dat:n:ter:nakc:npraep`, `ppron3:sg:gen:f:ter:akc:npraep`, `ppron3:sg:gen:f:ter:akc:praep`, `ppron3:sg:gen:m1:ter:akc:npraep`, `ppron3:sg:gen:m1:ter:akc:praep`, `ppron3:sg:gen:m1:ter:nakc:npraep`, `ppron3:sg:gen:m2:ter:akc:npraep`, `ppron3:sg:gen:m3:ter:akc:npraep`, `ppron3:sg:gen:m3:ter:akc:praep`, `ppron3:sg:gen:m3:ter:nakc:npraep`, `ppron3:sg:gen:n:ter:akc:npraep`, `ppron3:sg:gen:n:ter:akc:praep`, `ppron3:sg:gen:n:ter:nakc:npraep`, `ppron3:sg:inst:f:ter:akc:npraep`, `ppron3:sg:inst:f:ter:akc:praep`, `ppron3:sg:inst:m1:ter:akc:npraep`, `ppron3:sg:inst:m1:ter:akc:praep`, `ppron3:sg:inst:m2:ter:akc:npraep`, `ppron3:sg:inst:m2:ter:akc:praep`, `ppron3:sg:inst:m3:ter:akc:npraep`, `ppron3:sg:inst:m3:ter:akc:praep`, `ppron3:sg:inst:n:ter:akc:npraep`, `ppron3:sg:inst:n:ter:akc:praep`, `ppron3:sg:loc:f:ter:akc:praep`, `ppron3:sg:loc:m1:ter:akc:praep`, `ppron3:sg:loc:m2:ter:akc:praep`, `ppron3:sg:loc:m3:ter:akc:praep`, `ppron3:sg:loc:n:ter:akc:praep`, `ppron3:sg:nom:f:ter:akc:npraep`, `ppron3:sg:nom:m1:ter:akc:npraep`, `ppron3:sg:nom:m2:ter:akc:npraep`, `ppron3:sg:nom:m3:ter:akc:npraep`, `ppron3:sg:nom:n:ter:akc:npraep`, `praet:pl:f:imperf`, `praet:pl:f:perf`, `praet:pl:m1:imperf`, `praet:pl:m1:perf`, `praet:pl:m2:imperf`, `praet:pl:m2:perf`, `praet:pl:m3:imperf`, `praet:pl:m3:perf`, `praet:pl:n:imperf`, `praet:pl:n:perf`, `praet:sg:f:imperf`, `praet:sg:f:perf`, `praet:sg:m1:imperf`, 
`praet:sg:m1:imperf:agl`, `praet:sg:m1:imperf:nagl`, `praet:sg:m1:perf`, `praet:sg:m1:perf:agl`, `praet:sg:m1:perf:nagl`, `praet:sg:m2:imperf`, `praet:sg:m2:imperf:nagl`, `praet:sg:m2:perf`, `praet:sg:m2:perf:nagl`, `praet:sg:m3:imperf`, `praet:sg:m3:imperf:nagl`, `praet:sg:m3:perf`, `praet:sg:m3:perf:nagl`, `praet:sg:n:imperf`, `praet:sg:n:perf`, `pred`, `prep:acc`, `prep:acc:nwok`, `prep:acc:wok`, `prep:dat`, `prep:gen`, `prep:gen:nwok`, `prep:gen:wok`, `prep:inst`, `prep:inst:nwok`, `prep:inst:wok`, `prep:loc`, `prep:loc:nwok`, `prep:loc:wok`, `prep:nom`, `qub`, `qub:nwok`, `qub:wok`, `siebie:acc`, `siebie:dat`, `siebie:gen`, `siebie:inst`, `siebie:loc`, `subst:pl:acc:f`, `subst:pl:acc:m1`, `subst:pl:acc:m2`, `subst:pl:acc:m3`, `subst:pl:acc:n`, `subst:pl:dat:f`, `subst:pl:dat:m1`, `subst:pl:dat:m3`, `subst:pl:dat:n`, `subst:pl:gen:f`, `subst:pl:gen:m1`, `subst:pl:gen:m2`, `subst:pl:gen:m3`, `subst:pl:gen:n`, `subst:pl:inst:f`, `subst:pl:inst:m1`, `subst:pl:inst:m2`, `subst:pl:inst:m3`, `subst:pl:inst:n`, `subst:pl:loc:f`, `subst:pl:loc:m1`, `subst:pl:loc:m2`, `subst:pl:loc:m3`, `subst:pl:loc:n`, `subst:pl:nom:f`, `subst:pl:nom:m1`, `subst:pl:nom:m2`, `subst:pl:nom:m3`, `subst:pl:nom:n`, `subst:pl:voc:m1`, `subst:sg:acc:f`, `subst:sg:acc:m1`, `subst:sg:acc:m2`, `subst:sg:acc:m3`, `subst:sg:acc:n`, `subst:sg:dat:f`, `subst:sg:dat:m1`, `subst:sg:dat:m2`, `subst:sg:dat:m3`, `subst:sg:dat:n`, `subst:sg:gen:f`, `subst:sg:gen:m1`, `subst:sg:gen:m2`, `subst:sg:gen:m3`, `subst:sg:gen:n`, `subst:sg:inst:f`, `subst:sg:inst:m1`, `subst:sg:inst:m2`, `subst:sg:inst:m3`, `subst:sg:inst:n`, `subst:sg:loc:f`, `subst:sg:loc:m1`, `subst:sg:loc:m2`, `subst:sg:loc:m3`, `subst:sg:loc:n`, `subst:sg:nom:f`, `subst:sg:nom:m1`, `subst:sg:nom:m2`, `subst:sg:nom:m3`, `subst:sg:nom:n`, `subst:sg:voc:f`, `subst:sg:voc:m1`, `subst:sg:voc:m3`, `winien:pl:f:imperf`, `winien:pl:m1:imperf`, `winien:pl:m3:imperf`, `winien:sg:f:imperf`, `winien:sg:m1:imperf`, `winien:sg:m3:imperf`, `winien:sg:n:imperf` | | **`morphologizer`** | `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc3`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT\|PunctType=Peri`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc3`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctType=Comm`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc1`, `Agglutination=Nagl\|Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `AdpType=Prep\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `AdpType=Prep\|POS=ADP\|Variant=Short`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc3`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc1`, `POS=SCONJ`, 
`Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc1`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc3`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc3`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc3`, `AdpType=Post\|POS=ADP`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc1`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc1`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `POS=PUNCT`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PUNCT\|PunctType=Dash`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc3`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PUNCT\|PunctType=Excl`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc3`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, 
`Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `POS=CCONJ`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=2\|Variant=Short`, `Degree=Pos\|POS=ADV`, `POS=PUNCT\|PunctType=Qest`, `Mood=Cnd\|POS=AUX`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc3`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|VerbType=Quasi`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Number=Plur\|POS=AUX\|Person=1\|Variant=Short`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc3`, `Aspect=Imp\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc1`, `Degree=Sup\|POS=ADV`, `POS=ADV\|PronType=Dem`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|VerbType=Quasi`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `POS=PART`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `POS=ADV\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `POS=PART\|Polarity=Neg`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `POS=PART\|PartType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc1`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc3`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc2`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc1`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc1`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc3`, 
`Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Variant=Short`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel\|SubGender=Masc1`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=1\|Variant=Short`, `AdpType=Prep\|POS=ADP\|Variant=Long`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=1\|Variant=Long`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc1`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc2`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `POS=ADV\|PronType=Neg`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc1`, `POS=ADJ\|PrepCase=Pre`, 
`Degree=Pos\|POS=ADV\|PronType=Int`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc1`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int\|SubGender=Masc2`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Aspect=Imp\|Number=Sing\|POS=AUX\|Person=2\|Variant=Long`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int\|SubGender=Masc1`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Variant=Short`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Nom\|Emphatic=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `Hyph=Yes\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Variant=Long`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc1`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=ADV\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Short`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `POS=ADV\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind\|SubGender=Masc1`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg\|SubGender=Masc1`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Quot`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg\|SubGender=Masc1`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Mood=Imp\|POS=AUX`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc2`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc2`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc3`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc2`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc3`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|Polite=Depr\|SubGender=Masc2`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `POS=ADV\|PronType=Tot`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Variant=Long`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc1`, `Agglutination=Nagl\|Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc2`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc2`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, 
`Aspect=Imp\|Number=Plur\|POS=AUX\|Person=2\|Variant=Short`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc1`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc2`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc1`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Variant=Short`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc2`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc3`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind\|SubGender=Masc1`, `Case=Acc\|Emphatic=Yes\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc2`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind\|SubGender=Masc1`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc1`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Agglutination=Agl\|Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc3`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc3`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc2`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Agglutination=Nagl\|Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Short`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Agglutination=Nagl\|Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc1`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Variant=Long`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Variant=Long`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Quot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc3`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, 
`Case=Loc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc3`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc3`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc1`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc2`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc2`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc2`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `POS=SCONJ\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot\|SubGender=Masc1`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc2`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc2`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc2`, `Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc1`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc1`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc2`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc3`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc1`, 
`Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `POS=PUNCT\|PunctType=Semi`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Short`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg\|SubGender=Masc3`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Ins\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc2`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc2`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Loc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc2`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Ins\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=ADJ\|Variant=Short`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, 
`Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc1`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc1`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int\|SubGender=Masc3`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc3`, `Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int\|SubGender=Masc1`, `Aspect=Perf\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc2\|Variant=Short`, `Case=Gen\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `Case=Ins\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, 
`Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc3`, `Case=Voc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc2`, `Case=Ins\|Emphatic=Yes\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc2`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Variant=Long`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc3`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|SubGender=Masc1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc3`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Variant=Long`, `Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc1`, `Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc2`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc2`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Aspect=Perf\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, 
`Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc2`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int\|SubGender=Masc1`, `Emphatic=Yes\|POS=PART\|PartType=Int`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Nom\|Emphatic=Yes\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc1`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg\|SubGender=Masc1`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, 
`Aspect=Perf\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg\|SubGender=Masc1`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Voc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int\|SubGender=Masc1`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Nom\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc3`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc3`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc3`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN\|SubGender=Masc2`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, 
`Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc1`, `Aspect=Imp\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Pos\|VerbForm=Vnoun`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Perf\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel\|SubGender=Masc1`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc2`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Ind\|SubGender=Masc3`, `Emphatic=Yes\|POS=ADV\|PronType=Int`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Act`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc2`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc2`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|SubGender=Masc3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Loc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, 
`Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|NumType=Frac\|Number=Plur\|POS=NUM\|SubGender=Masc3`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int\|SubGender=Masc1`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot\|SubGender=Masc1`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg\|SubGender=Masc3`, `Agglutination=Nagl\|Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Nom\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc2\|Variant=Short`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int\|SubGender=Masc3`, 
`Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int\|SubGender=Masc3`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int\|SubGender=Masc3`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Emphatic=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc2`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc2`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc3`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Variant=Short`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind\|SubGender=Masc1`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `POS=PART\|Variant=Short`, `Case=Acc\|Gender=Fem\|NumType=Frac\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc1`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Agglutination=Agl\|Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, 
`Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Case=Ins\|Emphatic=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int\|SubGender=Masc1`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Emphatic=Yes\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN\|SubGender=Masc2`, `Case=Ins\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `Aspect=Imp\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc3`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel\|SubGender=Masc3`, `Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Neg\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, 
`Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Short`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Ins\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc2`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel\|SubGender=Masc3`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Voc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc3`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Neg`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc1`, `Case=Ins\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc1`, `Case=Ins\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc2`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc2`, `Case=Dat\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc1`, `Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Ind\|SubGender=Masc3`, `Case=Ins\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Short`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Short`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc3`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc1`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, 
`Case=Dat\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc2`, `Case=Ins\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int`, `Case=Acc\|Emphatic=Yes\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc1`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc3`, `Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Loc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc2`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc2`, `Case=Gen\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc3`, `Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc2`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg\|SubGender=Masc3`, `Agglutination=Nagl\|Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg\|SubGender=Masc1`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc1`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|SubGender=Masc1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=PART\|Variant=Long`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc2`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc2\|Variant=Short`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc2`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Voc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, 
`Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc2`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc3`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc3`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Case=Loc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc3\|Variant=Short`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc1`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc3`, `Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN\|Polite=Depr\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind\|SubGender=Masc1`, `Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|SubGender=Masc3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel\|SubGender=Masc3`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc1`, `Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, 
`Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|SubGender=Masc2\|Variant=Short`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc2`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|SubGender=Masc2\|Variant=Long`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Neg\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc2`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc1\|Variant=Long`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc1`, `Case=Loc\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Polarity=Neg\|VerbForm=Vnoun`, `Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Variant=Short`, `Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Emphatic=Yes\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int\|SubGender=Masc3`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Variant=Long`, `Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, 
`Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|SubGender=Masc1`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc1`, `Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs\|SubGender=Masc3`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|SubGender=Masc3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc2`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel\|SubGender=Masc1`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem\|SubGender=Masc2`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN\|SubGender=Masc2`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel\|SubGender=Masc1`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc2\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Neg\|SubGender=Masc3\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|SubGender=Masc3\|Variant=Long`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind\|SubGender=Masc1`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc3\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Variant=Long`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Emphatic=Yes\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot\|SubGender=Masc2`, `Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|SubGender=Masc1\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int\|SubGender=Masc2`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|SubGender=Masc1`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel\|SubGender=Masc1`, 
`Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:aglt`, `aux:mood`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `ccomp:obj`, `conj`, `cop`, `cop:locat`, `csubj`, `dep`, `det`, `discourse`, `expl:impers`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `punct`, `vocative`, `xcomp`, `xcomp:obj` | | **`experimental_edit_tree_lemmatizer`** | `1`, `3`, `5`, `6`, `7`, `9`, `11`, `13`, `15`, `17`, `19`, `21`, `23`, `25`, `26`, `30`, `32`, `34`, `38`, `39`, `41`, `43`, `46`, `48`, `51`, `53`, `55`, `57`, `60`, `62`, `63`, `66`, `68`, `70`, `72`, `75`, `77`, `79`, `81`, `82`, `84`, `86`, `88`, `91`, `93`, `94`, `96`, `98`, `99`, `101`, `104`, `107`, `109`, `111`, `113`, `117`, `119`, `122`, `124`, `126`, `127`, `128`, `130`, `131`, `133`, `134`, `136`, `137`, `139`, `141`, `143`, `144`, `146`, `148`, `150`, `152`, `154`, `156`, `158`, `160`, `162`, `165`, `167`, `169`, `171`, `172`, `173`, `174`, `175`, `177`, `179`, `180`, `183`, `185`, `187`, `189`, `191`, `193`, `195`, `197`, `198`, `200`, `204`, `206`, `207`, `209`, `210`, `211`, `213`, `215`, `217`, `219`, `221`, `223`, `226`, `228`, `230`, `232`, `235`, `236`, `238`, `240`, `242`, `243`, `245`, `246`, `248`, `250`, `252`, `254`, `256`, `259`, `261`, `263`, `264`, `266`, `268`, `270`, `272`, `273`, `275`, `277`, `279`, `280`, `282`, `284`, `286`, `288`, `289`, `291`, `293`, `295`, `297`, `300`, `302`, `303`, `305`, `306`, `307`, `309`, `311`, `313`, `315`, `318`, `320`, `321`, `323`, `325`, `327`, `328`, `331`, `333`, `335`, `340`, `342`, `344`, `345`, `347`, `350`, `352`, `353`, `355`, `356`, `358`, `361`, `363`, `365`, `290`, `367`, `369`, `371`, `373`, `375`, `377`, `379`, `381`, `383`, `385`, `387`, `389`, `392`, `394`, `396`, `398`, `400`, `403`, `405`, `407`, `410`, `411`, `413`, `415`, `417`, `418`, `422`, `424`, `426`, `428`, `430`, `432`, `434`, `436`, `438`, `440`, `442`, `444`, `447`, `449`, `451`, `453`, `455`, `457`, `459`, `463`, `465`, `467`, `471`, `473`, `475`, `477`, `479`, `481`, `482`, `484`, `486`, `488`, `490`, `492`, `493`, `495`, `497`, `498`, `499`, `501`, `502`, `504`, `507`, `509`, `511`, `513`, `515`, `517`, `519`, `521`, `522`, `524`, `526`, `527`, `529`, `531`, `534`, `536`, `539`, `541`, `542`, `543`, `545`, `547`, `549`, `551`, `552`, `555`, `557`, `558`, `560`, `561`, `563`, `565`, `567`, `568`, `570`, `572`, `574`, `576`, `578`, `580`, `582`, `584`, `586`, `589`, `591`, `592`, `594`, `596`, `597`, `599`, `601`, `602`, `603`, `605`, `607`, `609`, `610`, `612`, `614`, `616`, `620`, `622`, `624`, `626`, `628`, `631`, `633`, `635`, `637`, `639`, `641`, `643`, `648`, `650`, `652`, `654`, `656`, `658`, `660`, `662`, `663`, `665`, `667`, `669`, `671`, `673`, `677`, `679`, `681`, `683`, `685`, `688`, `690`, `692`, `694`, `696`, `698`, `700`, `704`, `706`, `708`, `712`, `714`, `716`, `718`, `719`, `721`, `722`, `724`, `726`, `729`, `731`, `733`, `734`, `736`, `738`, `740`, `742`, `744`, `746`, `748`, `750`, `752`, `753`, `755`, `757`, `759`, `761`, `763`, `766`, `768`, `770`, `772`, `774`, `776`, `778`, `780`, `783`, `785`, `787`, `790`, `792`, `794`, `796`, `798`, `801`, `803`, `805`, `807`, `809`, `811`, `813`, `815`, `817`, `818`, `820`, `822`, `824`, `826`, `828`, `829`, `830`, `834`, `837`, `839`, `841`, `843`, `844`, `845`, `849`, `851`, `853`, `855`, `857`, `859`, `861`, `863`, `866`, 
`867`, `869`, `870`, `872`, `874`, `875`, `879`, `881`, `883`, `884`, `887`, `889`, `891`, `892`, `893`, `894`, `895`, `897`, `898`, `900`, `901`, `903`, `904`, `906`, `907`, `908`, `909`, `913`, `915`, `917`, `919`, `920`, `922`, `924`, `926`, `928`, `930`, `931`, `933`, `934`, `935`, `937`, `938`, `939`, `940`, `942`, `944`, `948`, `952`, `953`, `954`, `955`, `957`, `961`, `963`, `966`, `968`, `969`, `970`, `973`, `974`, `976`, `977`, `979`, `981`, `982`, `984`, `986`, `988`, `989`, `991`, `993`, `995`, `997`, `999`, `1000`, `1002`, `1004`, `1006`, `1009`, `1010`, `1012`, `1016`, `1018`, `1020`, `1023`, `1026`, `1027`, `1028`, `1029`, `1032`, `1033`, `1034`, `1035`, `1036`, `1038`, `1040`, `1042`, `1043`, `1046`, `1048`, `1049`, `1051`, `1052`, `1054`, `1055`, `1056`, `1057`, `1059`, `1060`, `1061`, `1063`, `1064`, `1065`, `1067`, `1070`, `1072`, `1074`, `1075`, `1076`, `1077`, `1079`, `1080`, `1081`, `1083`, `1086`, `1088`, `1090`, `1092`, `1093`, `1095`, `1098`, `1103`, `1106`, `1108`, `1110`, `1112`, `1114`, `1116`, `1118`, `1121`, `1122`, `1124`, `1126`, `1128`, `1130`, `1132`, `1134`, `1137`, `1138`, `1140`, `1142`, `1144`, `1148`, `1150`, `1151`, `1152`, `1154`, `1156`, `1157`, `1159`, `1160`, `1161`, `1162`, `1164`, `1167`, `1169`, `1170`, `1173`, `1174`, `1176`, `1177`, `1178`, `1180`, `1183`, `1185`, `1186`, `1188`, `1190`, `1191`, `1193`, `1196`, `1198`, `1199`, `1200`, `1202`, `1203`, `1204`, `1206`, `1208`, `1211`, `1212`, `1215`, `1216`, `1219`, `1220`, `1221`, `1222`, `1223`, `1224`, `1225`, `1227`, `1229`, `1231`, `1233`, `1235`, `1237`, `1239`, `1240`, `1241`, `1242`, `1244`, `1246`, `1248`, `1249`, `1250`, `1251`, `1254`, `1255`, `1258`, `1259`, `1262`, `1263`, `1267`, `1268`, `1269`, `1270`, `1272`, `1273`, `1275`, `1279`, `1281`, `1282`, `1284`, `1285`, `1287`, `1289`, `1290`, `1292`, `1294`, `1295`, `1296`, `1297`, `1299`, `1300`, `1302`, `1304`, `1308`, `1312`, `1314`, `1316`, `1318`, `1319`, `1320`, `1321`, `1323`, `1325`, `1326`, `1328`, `1330`, `1331`, `1333`, `1334`, `1336`, `1337`, `1339`, `1340`, `1341`, `1343`, `1344`, `1346`, `1348`, `1350`, `1351`, `1354`, `1356`, `1359`, `1361`, `1363`, `1365`, `1367`, `1368`, `1369`, `1370`, `1372`, `1374`, `1031`, `1376`, `1378`, `1380`, `1383`, `1385`, `1387`, `1389`, `1391`, `1394`, `1395`, `1397`, `1399`, `1401`, `1402`, `1404`, `1405`, `1407`, `1408`, `1410`, `1411`, `1412`, `1413`, `1415`, `1418`, `1419`, `1420`, `1421`, `1422`, `1425`, `1427`, `1430`, `1432`, `1433`, `1434`, `1436`, `1437`, `1439`, `1443`, `1445`, `1447`, `1449`, `1451`, `1453`, `1455`, `1457`, `1459`, `1463`, `1465`, `1466`, `1468`, `1469`, `1471`, `1473`, `1475`, `1476`, `1478`, `1480`, `1483`, `1486`, `1488`, `1489`, `1491`, `1493`, `1495`, `1496`, `1498`, `1500`, `1501`, `1503`, `1504`, `1505`, `1506`, `1507`, `1509`, `1510`, `1511`, `1512`, `1513`, `1514`, `827`, `1516`, `1518`, `1520`, `1522`, `1524`, `1526`, `1527`, `1528`, `1529`, `1531`, `1532`, `1534`, `1535`, `1537`, `1538`, `1539`, `1541`, `1543`, `1544`, `1546`, `1549`, `1550`, `1552`, `1554`, `1556`, `1559`, `1561`, `1563`, `1564`, `1565`, `1566`, `1568`, `1572`, `1573`, `1574`, `1576`, `1577`, `1578`, `1579`, `1580`, `1581`, `1584`, `1585`, `1587`, `1590`, `1591`, `1593`, `1595`, `1596`, `1597`, `1598`, `1600`, `1603`, `1604`, `1605`, `1607`, `1608`, `1609`, `1610`, `1612`, `1614`, `1616`, `1618`, `1620`, `1622`, `1624`, `1626`, `1628`, `1630`, `1631`, `1632`, `1634`, `1636`, `1638`, `1640`, `1642`, `1644`, `1646`, `1648`, `1650`, `1652`, `1654`, `1655`, `1657`, `1659`, `1660`, 
`1662`, `1664`, `1666`, `1668`, `1671`, `1673`, `1675`, `1676`, `1680`, `1681`, `1683`, `1684`, `1686`, `1688`, `1689`, `1691`, `1692`, `1693`, `1695`, `1696`, `1697`, `1698`, `1699`, `1701`, `1703`, `1705`, `1707`, `1709`, `1711`, `1713`, `1714`, `1717`, `1718`, `1719`, `1722`, `1723`, `1724`, `1726`, `1728`, `1730`, `1731`, `1733`, `1735`, `1737`, `1738`, `1739`, `1740`, `1741`, `1742`, `1744`, `1745`, `1749`, `1750`, `1751`, `1753`, `1754`, `1755`, `1757`, `1758`, `1760`, `1762`, `1763`, `1765`, `1769`, `1770`, `1772`, `1773`, `1775`, `1777`, `1778`, `1779`, `1782`, `1784`, `1787`, `1789`, `1791`, `1792`, `1794`, `1796`, `1797`, `1799`, `1801`, `1803`, `1805`, `1806`, `1807`, `1810`, `1812`, `1813`, `1817`, `1818`, `1820`, `1822`, `1825`, `1826`, `1829`, `1830`, `1831`, `1832`, `1834`, `1835`, `1838`, `1839`, `1840`, `1842`, `1844`, `1845`, `1846`, `1848`, `1850`, `1852`, `1853`, `1856`, `1857`, `1860`, `1862`, `1863`, `1865`, `1867`, `1869`, `1871`, `1873`, `1874`, `1875`, `1877`, `1879`, `1880`, `1882`, `1883`, `1885`, `1886`, `1887`, `1888`, `1889`, `1890`, `1891`, `1892`, `1894`, `1896`, `1898`, `1899`, `1900`, `1902`, `1904`, `1905`, `1725`, `1906`, `1911`, `1913`, `1915`, `1916`, `1017`, `1918`, `1920`, `1921`, `1922`, `1923`, `1924`, `1926`, `1927`, `1928`, `1929`, `1930`, `1931`, `1932`, `1933`, `1935`, `1936`, `1937`, `1939`, `1940`, `1941`, `1942`, `1943`, `1944`, `1945`, `1947`, `1949`, `1950`, `1953`, `1954`, `1955`, `1957`, `1958`, `1960`, `1961`, `1963`, `1965`, `1967`, `1968`, `1969`, `1971`, `1972`, `1973`, `1975`, `1978`, `1979`, `1980`, `1981`, `1983`, `1984`, `1985`, `1987`, `1988`, `1989`, `1990`, `1991`, `1992`, `1994`, `1995`, `1996`, `1997`, `1999`, `2002`, `2005`, `2007`, `2008`, `2010`, `2012`, `2013`, `2014`, `2015`, `2016`, `2017`, `2019`, `2020`, `2021`, `2024`, `2025`, `2027`, `2028`, `2029`, `2031`, `2032`, `2034`, `2036`, `2038`, `2040`, `2041`, `2042`, `2044`, `2046`, `2047`, `2048`, `1450`, `2050`, `2052`, `2053`, `2054`, `2055`, `2057`, `2060`, `2062`, `2063`, `2065`, `2068`, `2070`, `2072`, `2073`, `2075`, `2076`, `2078`, `2079`, `2081`, `2082`, `2085`, `2087`, `2089`, `2090`, `2091`, `2093`, `2094`, `2095`, `2096`, `2097`, `2098`, `2100`, `2102`, `2103`, `2104`, `2105`, `2107`, `2108`, `2109`, `2111`, `2113`, `2114`, `2115`, `2116`, `2118`, `2120`, `2121`, `2122`, `2123`, `2124`, `2125`, `2126`, `2127`, `2130`, `2132`, `2134`, `2135`, `2136`, `2139`, `2140`, `2141`, `2143`, `2145`, `2147`, `2149`, `2151`, `2153`, `2155`, `2157`, `2159`, `2161`, `2163`, `2165`, `2167`, `2169`, `2170`, `2172`, `2174`, `2176`, `2177`, `2178`, `2179`, `2180`, `2181`, `2182`, `2184`, `2186`, `2188`, `2189`, `2190`, `2191`, `2193`, `2195`, `2198`, `2199`, `2200`, `2201`, `2202`, `2204`, `2206`, `2209`, `2211`, `2213`, `2214`, `2215`, `2217`, `2218`, `2219`, `2221`, `2222`, `2224`, `2227`, `2228`, `2229`, `2230`, `2231`, `2232`, `2233`, `2234`, `2235`, `2237`, `2239`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2248`, `2249`, `2251`, `2252`, `2253`, `2254`, `2255`, `2257`, `2259`, `2261`, `2262`, `2263`, `2265`, `2266`, `2267`, `2269`, `2271`, `2272`, `2273`, `2275`, `2276`, `2277`, `2279`, `2280`, `2283`, `2284`, `2285`, `2286`, `2287`, `2288`, `2291`, `2294`, `2296`, `2298`, `2300`, `2302`, `2303`, `2307`, `2309`, `2311`, `2312`, `2314`, `2316`, `2318`, `2320`, `2322`, `2324`, `2325`, `2328`, `2330`, `2331`, `2333`, `2335`, `2336`, `2337`, `2339`, `2341`, `2343`, `2345`, `2346`, `2348`, `2350`, `2351`, `2353`, `2354`, `2355`, `2356`, `2358`, `2360`, `2362`, `2363`, 
`2364`, `2366`, `2367`, `2368`, `2369`, `2372`, `2374`, `2376`, `2378`, `2379`, `2381`, `2384`, `2385`, `2386`, `2387`, `2388`, `2390`, `2391`, `2392`, `2394`, `2395`, `2396`, `2397`, `2399`, `2400`, `2402`, `2403`, `2404`, `2406`, `2407`, `2408`, `2409`, `2410`, `2412`, `2414`, `2415`, `2416`, `2417`, `2418`, `2419`, `2420`, `2421`, `2423`, `2424`, `2425`, `2427`, `2429`, `2430`, `2431`, `2433`, `2434`, `2435`, `2436`, `2437`, `2438`, `2439`, `2440`, `2441`, `2443`, `2444`, `2445`, `2446`, `2448`, `2450`, `2451`, `2453`, `2454`, `2455`, `2456`, `2457`, `2459`, `2460`, `2462`, `2464`, `2465`, `2466`, `2467`, `2468`, `2469`, `2471`, `2474`, `2475`, `2476`, `2479`, `2480`, `2482`, `2483`, `2484`, `2485`, `2486`, `2487`, `2488`, `2489`, `2490`, `2491`, `2492`, `2494`, `2495`, `2496`, `2497`, `2498`, `2499`, `2500`, `2502`, `2503`, `2505`, `2507`, `2508`, `2509`, `2510`, `2512`, `2513`, `2514`, `2515`, `2516`, `2518`, `2521`, `2522`, `2523`, `2524`, `2526`, `2527`, `2528`, `2529`, `2530`, `2531`, `2532`, `2533`, `2534`, `2535`, `2536`, `2537`, `2539`, `2541`, `2543`, `2545`, `2547`, `2549`, `2550`, `2552`, `2553`, `2555`, `2558`, `2559`, `2561`, `2563`, `2564`, `2566`, `2567`, `2568`, `2570`, `2571`, `2572`, `2573`, `2574`, `2576`, `2579`, `2581`, `2583`, `2584`, `2587`, `2589`, `2590`, `2591`, `2592`, `2594`, `2596`, `2597`, `2598`, `2599`, `2600`, `2602`, `2603`, `2604`, `2605`, `2607`, `2609`, `2610`, `2611`, `2612`, `2614`, `2615`, `2616`, `2618`, `2621`, `2622`, `2623`, `2625`, `2626`, `2628`, `2629`, `2631`, `2632`, `2633`, `2634`, `2635`, `2636`, `2638`, `2639`, `2640`, `1226`, `2641`, `2643`, `2644`, `2646`, `2647`, `2648`, `2649`, `2651`, `2653`, `2654`, `2655`, `2656`, `2657`, `2658`, `2659`, `2660`, `2661`, `2662`, `2663`, `2664`, `2665`, `2666`, `2667`, `2668`, `2669`, `2670`, `2671`, `2672`, `2673`, `2674`, `2675`, `2676`, `2677`, `2678`, `2679`, `2680`, `2681`, `2684`, `2686`, `2688`, `2689`, `2690`, `2693`, `2694`, `2695`, `2696`, `2698`, `2701`, `2703`, `2704`, `2706`, `2707`, `2708`, `2711`, `2712`, `2714`, `2715`, `2717`, `2719`, `2721`, `2724`, `2726`, `2728`, `2729`, `2730`, `2732`, `2735`, `2738`, `2740`, `2742`, `2743`, `2744`, `2746`, `2747`, `2748`, `2749`, `2751`, `2752`, `2753`, `2755`, `2757`, `2759`, `2760`, `2761`, `2762`, `2763`, `2764`, `2765`, `2766`, `2767`, `2768`, `2769`, `2770`, `2773`, `2774`, `2775`, `2776`, `2777`, `2779`, `2780`, `2782`, `2784`, `2787`, `2789`, `2790`, `2792`, `2794`, `2797`, `2798`, `2800`, `2802`, `2804`, `2808`, `2809`, `2810`, `2811`, `2812`, `2813`, `2815`, `2816`, `2817`, `2819`, `2821`, `2822`, `2823`, `2824`, `2825`, `2826`, `2827`, `2828`, `2831`, `2833`, `2834`, `2835`, `2836`, `2838`, `2839`, `2841`, `2842`, `2843`, `2844`, `2845`, `2846`, `2847`, `2848`, `2850`, `2852`, `2855`, `2856`, `2858`, `2861`, `2862`, `2863`, `2864`, `2866`, `2869`, `2872`, `2875`, `2876`, `2877`, `2878`, `2880`, `2881`, `2882`, `2883`, `2884`, `2885`, `2886`, `2887`, `2889`, `2890`, `2891`, `2893`, `2894`, `2895`, `2896`, `2898`, `2899`, `2900`, `2903`, `2904`, `2905`, `2906`, `2907`, `2908`, `2909`, `2910`, `2913`, `2915`, `2916`, `2917`, `2918`, `2920`, `2921`, `2922`, `2923`, `2924`, `2926`, `2928`, `2929`, `2930`, `2932`, `2934`, `2935`, `2936`, `2937`, `2938`, `2940`, `2942`, `2943`, `2944`, `2946`, `2948`, `2949`, `2950`, `2952`, `2953`, `2955`, `2956`, `2957`, `2958`, `2959`, `2961`, `2963`, `2965`, `2967`, `2969`, `2970`, `2972`, `2974`, `2976`, `2977`, `2979`, `2981`, `2982`, `2984`, `2985`, `2986`, `2988`, `2990`, `2992`, `2995`, `2996`, 
`2997`, `2999`, `3001`, `3002`, `3004`, `3005`, `3007`, `3011`, `3012`, `3013`, `3014`, `3017`, `3018`, `3019`, `3021`, `3022`, `3023`, `3024`, `3025`, `3027`, `3028`, `3030`, `3031`, `3032`, `3033`, `3035`, `3036`, `3037`, `3039`, `3040`, `3041`, `3042`, `3044`, `3046`, `3047`, `3048`, `3049`, `3051`, `3052`, `3053`, `3055`, `3057`, `3058`, `3060`, `3063`, `3064`, `3065`, `3067`, `3068`, `3070`, `3071`, `3073`, `3074`, `3075`, `3076`, `3078`, `3079`, `3080`, `3081`, `3082`, `3083`, `3084`, `3085`, `3086`, `3087`, `3088`, `3090`, `3091`, `3092`, `3093`, `3094`, `3095`, `3097`, `3099`, `3100`, `3101`, `3102`, `3103`, `3104`, `3105`, `3106`, `3107`, `3108`, `3109`, `3112`, `3113`, `3114`, `3116`, `3118`, `3120`, `3122`, `3123`, `3124`, `3126`, `3127`, `3129`, `3130`, `3131`, `3132`, `3133`, `3134`, `3135`, `3136`, `3137`, `3138`, `3139`, `3141`, `3142`, `3143`, `3144`, `3146`, `3147`, `3148`, `3149`, `3150`, `3153`, `3154`, `3155`, `3156`, `3158`, `3160`, `3161`, `3163`, `3164`, `3165`, `3167`, `3169`, `3171`, `3172`, `3174`, `3175`, `3177`, `3178`, `3180`, `3183`, `3185`, `3186`, `3187`, `3191`, `3192`, `3193`, `3194`, `3195`, `3196`, `3199`, `3201`, `3202`, `3205`, `3206`, `3207`, `3208`, `3210`, `3211`, `3212`, `3214`, `3215`, `3217`, `3218`, `3219`, `3220`, `3221`, `3222`, `3224`, `3225`, `3227`, `3228`, `3230`, `3232`, `3234`, `3235`, `3237`, `3239`, `3240`, `3241`, `3242`, `3244`, `3245`, `3246`, `3250`, `3253`, `3255`, `3257`, `3258`, `3259`, `3260`, `3261`, `3262`, `3263`, `3265`, `3267`, `3268`, `3269`, `3270`, `3271`, `3272`, `3273`, `3274`, `3275`, `3276`, `3277`, `3280`, `3281`, `3282`, `3283`, `3284`, `3285`, `3286`, `3287`, `3289`, `3290`, `3291`, `3292`, `3293`, `3294`, `3295`, `3296`, `3297`, `3298`, `3299`, `3300`, `3301`, `3302`, `3303`, `3306`, `3309`, `3310`, `3311`, `3312`, `3314`, `3315`, `3316`, `3317`, `3318`, `3319`, `3320`, `3321`, `3322`, `3324`, `3325`, `3328`, `3330`, `3333`, `3335`, `3337`, `3340`, `3343`, `3344`, `3346`, `3348`, `3350`, `3353`, `3357`, `3361`, `3362`, `3363`, `3365`, `3367`, `3370`, `3371`, `3372`, `3373`, `3374`, `3377`, `3378`, `3379`, `3380`, `3381`, `3382`, `3384`, `3386`, `3388`, `3390`, `3391`, `3393`, `3394`, `3395`, `3396`, `3397`, `3399`, `3400`, `3401`, `3402`, `3403`, `3404`, `3405`, `3406`, `3407`, `3408`, `3409`, `3411`, `3412`, `3413`, `3414`, `3418`, `3419`, `3421`, `3423`, `3424`, `3425`, `3427`, `3428`, `3431`, `3433`, `3434`, `3435`, `3437`, `3439`, `3440`, `3441`, `3442`, `3444`, `3445`, `3446`, `3447`, `3448`, `3449`, `3450`, `3451`, `3452`, `3454`, `3455`, `3456`, `3458`, `3459`, `3460`, `3462`, `3463`, `3464`, `3466`, `3467`, `3468`, `3470`, `3471`, `3472`, `3473`, `3475`, `3476`, `3477`, `3479`, `3481`, `3483`, `3485`, `3487`, `3488`, `3489`, `3490`, `3491`, `3492`, `3493`, `3494`, `3495`, `3496`, `3498`, `3499`, `3500`, `3501`, `3502`, `3503`, `3504`, `3505`, `3507`, `3508`, `3509`, `3510`, `3511`, `3513`, `3514`, `3515`, `3516`, `3517`, `3518`, `3519`, `3520`, `3521`, `3523`, `3524`, `3526`, `3529`, `3531`, `3532`, `3534`, `3536`, `3537`, `3539`, `3541`, `3542`, `3544`, `3546`, `3548`, `3550`, `3552`, `3553`, `3555`, `3556`, `3558`, `3559`, `3562`, `3564`, `3565`, `3567`, `3568`, `3569`, `3573`, `3575`, `3576`, `3577`, `3579`, `3580`, `3581`, `3582`, `3584`, `3585`, `3586`, `3588`, `3590`, `3591`, `3592`, `3594`, `3595`, `3597`, `3599`, `3601`, `3602`, `3603`, `3604`, `3605`, `3606`, `3607`, `3608`, `3609`, `3611`, `3613`, `3614`, `3616`, `3617`, `3618`, `3619`, `3621`, `3622`, `3624`, `3626`, `3628`, `3629`, `3630`, 
`3632`, `3633`, `3636`, `3637`, `3638`, `3639`, `3640`, `3642`, `3643`, `3645`, `3646`, `3647`, `3649`, `3651`, `3653`, `3655`, `3656`, `3657`, `3659`, `3661`, `3665`, `3666`, `3668`, `3672`, `3673`, `3675`, `3676`, `3678`, `3679`, `3681`, `3683`, `3685`, `3686`, `3688`, `3689`, `3690`, `3691`, `3692`, `3694`, `3696`, `3697`, `3698`, `3699`, `3700`, `3702`, `3704`, `3705`, `3706`, `3707`, `3708`, `3710`, `3711`, `3712`, `3713`, `3715`, `3716`, `3717`, `3719`, `3720`, `3721`, `3722`, `3723`, `3724`, `3725`, `3726`, `3727`, `3728`, `3729`, `3731`, `3732`, `3733`, `3735`, `3737`, `3741`, `3742`, `3743`, `3744`, `3745`, `3746`, `3748`, `3749`, `3750`, `3751`, `3752`, `3753`, `3754`, `3755`, `3756`, `3759`, `3760`, `3761`, `3762`, `3763`, `3764`, `3765`, `3766`, `3770`, `3771`, `3772`, `3773`, `3774`, `3776`, `3777`, `3778`, `3780`, `3781`, `3782`, `3784`, `3785`, `3786`, `3787`, `3788`, `3790`, `3791`, `3793`, `3794`, `3796`, `3797`, `3798`, `3799`, `3800`, `3801`, `3804`, `3805`, `3806`, `3807`, `3808`, `3809`, `3810`, `3811`, `3812`, `3813`, `3814`, `3815`, `3816`, `3817`, `3818`, `3820`, `3821`, `3822`, `3824`, `3825`, `3827`, `3828`, `3830`, `3831`, `3833`, `3834`, `3835`, `3836`, `3837`, `3839`, `3840`, `3841`, `3842`, `3845`, `3846`, `3847`, `3848`, `3849`, `3850`, `3851`, `3852`, `3855`, `3856`, `3859`, `3860`, `3861`, `3862`, `3863`, `3864`, `3867`, `3868`, `3870`, `3871`, `3872`, `3874`, `3876`, `3877`, `3880`, `3881`, `3882`, `3885`, `3887`, `3888`, `3889`, `3890`, `3891`, `3892`, `3895`, `3897`, `3898`, `3901`, `3903`, `3904`, `3905`, `3907`, `3909`, `3910`, `3912`, `3913`, `3914`, `3915`, `3916`, `3917`, `3919`, `3921`, `3922`, `3925`, `3926`, `3927`, `3929`, `3930`, `3932`, `3933`, `3939`, `3940`, `3941`, `3942`, `3943`, `3945`, `3946`, `3947`, `3948`, `3949`, `3951`, `3953`, `3954`, `3955`, `3957`, `3958`, `3959`, `3962`, `3964`, `3965`, `3968`, `3969`, `3971`, `3972`, `3974`, `3976`, `3978`, `3979`, `3980`, `3982`, `3983`, `3984`, `3985`, `3987`, `3989`, `3990`, `3991`, `3992`, `3993`, `3994`, `3995`, `3998`, `3999`, `4001`, `4003`, `4004`, `4007`, `4008`, `4009`, `4010`, `4012`, `4013`, `4014`, `4017`, `4019`, `4020`, `4021`, `4022`, `4023`, `4024`, `4025`, `4027`, `4028`, `4029`, `4030`, `4031`, `4033`, `4036`, `4037`, `4038`, `4039`, `4040`, `4041`, `4043`, `4045`, `4047`, `4048`, `4049`, `4051`, `4052`, `4054`, `4055`, `4056`, `4059`, `4060`, `4061`, `4063`, `4064`, `4067`, `4068`, `4069`, `4071`, `4074`, `4076`, `4077`, `4079`, `4081`, `4083`, `4084`, `4085`, `4086`, `4087`, `4088`, `4089`, `4090`, `4091`, `4092`, `4094`, `4095`, `4096`, `4097`, `4098`, `4101`, `4104`, `4105`, `4108`, `4109`, `4111`, `4112`, `4113`, `4114`, `4115`, `4116`, `4117`, `4118`, `4120`, `4121`, `4122`, `4123`, `4124`, `4125`, `4126`, `4127`, `4130`, `4131`, `4133`, `4134`, `4135`, `4136`, `4138`, `4139`, `4140`, `4141`, `4142`, `4143`, `4144`, `4146`, `4147`, `4148`, `4149`, `4151`, `4152`, `4153`, `4154`, `89`, `4155`, `4156`, `4157`, `4158`, `4159`, `4160`, `4161`, `4162`, `4163`, `4164`, `4165`, `4166`, `4167`, `4168`, `4170`, `4171`, `4172`, `4173`, `4174`, `4175`, `4176`, `4177`, `4178`, `4179`, `4180`, `4181`, `4182`, `4183`, `4184`, `4185`, `4186`, `4187`, `4188`, `4189`, `4190`, `4192`, `4193`, `4194`, `4195`, `4196`, `4197`, `4198`, `4199`, `4200`, `4201`, `4202`, `4204`, `4206`, `4207`, `4208`, `4210`, `4211`, `4212`, `4213`, `4214`, `4216`, `4217`, `4218`, `4219`, `4220`, `4221`, `4222`, `4223`, `4225`, `4226`, `4227`, `4228`, `4229`, `4230`, `4231`, `4232`, `4233`, `4236`, `4237`, 
`4238`, `4239`, `4240`, `4241`, `4243`, `4244`, `4247`, `4249`, `4250`, `4251`, `4252`, `4254`, `1454`, `4256`, `4258`, `4261`, `4262`, `4263`, `4265`, `4267`, `4269`, `4270`, `4271`, `4272`, `4274`, `4277`, `4279`, `4281`, `4282`, `4284`, `4287`, `4288`, `4290`, `4292`, `4293`, `4295`, `4296`, `4298`, `4299`, `4301`, `4303`, `4305`, `4307`, `4309`, `4310`, `4311`, `4312`, `4314`, `4316`, `4317`, `4318`, `4319`, `4321`, `4322`, `4324`, `4325`, `4326`, `4327`, `4328`, `4329`, `4330`, `4332`, `4333`, `4334`, `4335`, `4337`, `4339`, `4340`, `4342`, `4343`, `4344`, `4345`, `4347`, `4348`, `4349`, `4351`, `4353`, `4354`, `4356`, `4357`, `4358`, `4359`, `4360`, `4363`, `4364`, `4365`, `4366`, `4368`, `4369`, `4370`, `4371`, `4372`, `4373`, `4374`, `4375`, `4376`, `4377`, `4378`, `4380`, `4382`, `4384`, `4385`, `4387`, `4388`, `4389`, `4391`, `4392`, `4393`, `4394`, `4395`, `4396`, `4397`, `4398`, `4399`, `4400`, `4401`, `4402`, `4403`, `4404`, `4405`, `4406`, `4407`, `4408`, `4410`, `4411`, `4412`, `4413`, `4414`, `4415`, `4416`, `4418`, `4419`, `4420`, `4421`, `4422`, `4423`, `4424`, `4425`, `4427`, `4428`, `4429`, `4430`, `4433`, `4434`, `4436`, `4438`, `4441`, `4442`, `4444`, `4445`, `4447`, `4449`, `4450`, `4452`, `4453`, `4454`, `4457`, `4458`, `4460`, `4461`, `4463`, `4464`, `4465`, `4466`, `4467`, `4468`, `4469`, `4470`, `4471`, `4472`, `4473`, `4475`, `4476`, `4477`, `4478`, `4479`, `4480`, `4482`, `4484`, `4486`, `4487`, `4489`, `4490`, `4491`, `4493`, `4494`, `4495`, `4496`, `4497`, `4498`, `4500`, `4505`, `4506`, `4507`, `4508`, `4509`, `4510`, `4513`, `4514`, `4516`, `4517`, `4518`, `4519`, `4520`, `4521`, `4522`, `4523`, `4524`, `4525`, `4526`, `4527`, `4529`, `4530`, `4532`, `4533`, `4536`, `4538`, `4539`, `4540`, `4541`, `4542`, `4543`, `4544`, `4545`, `4546`, `4547`, `4548`, `4549`, `4550`, `4551`, `4553`, `4554`, `4555`, `4556`, `4557`, `4558`, `4559`, `4562`, `4563`, `4564`, `4565`, `4567`, `4569`, `4570`, `4571`, `4573`, `4574`, `4576`, `2037`, `4578`, `4579`, `4581`, `4584`, `4586`, `4588`, `4590`, `4591`, `4592`, `4593`, `4595`, `4596`, `4597`, `4599`, `4600`, `4601`, `4602`, `4603`, `4604`, `4605`, `4606`, `4607`, `4608`, `4609`, `4611`, `4612`, `4613`, `4614`, `4615`, `4616`, `4617`, `4618`, `4620`, `4622`, `4624`, `4625`, `4626`, `4627`, `4628`, `4629`, `4630`, `4631`, `4632`, `4633`, `4635`, `4636`, `4637`, `4638`, `4639`, `4640`, `4641`, `4644`, `4645`, `4646`, `4647`, `4648`, `4649`, `4650`, `4651`, `4652`, `4653`, `4654`, `4655`, `4656`, `4657`, `4659`, `4660`, `4663`, `4664`, `4665`, `4666`, `4668`, `4670`, `4671`, `4672`, `4674`, `4675`, `4676`, `4678`, `4679`, `4681`, `4682`, `4683`, `4684`, `4686`, `4687`, `4688`, `4689`, `4691`, `4692`, `4693`, `4694`, `4695`, `4696`, `4697`, `4698`, `4699`, `4700`, `4702`, `4703`, `4704`, `4705`, `4706`, `4707`, `4708`, `4711`, `4714`, `4716`, `4717`, `4718`, `4720`, `4722`, `4723`, `4724`, `4726`, `4727`, `4728`, `4729`, `4730`, `4733`, `4734`, `4735`, `4736`, `4737`, `4738`, `4739`, `4740`, `4741`, `4743`, `4744`, `4745`, `4748`, `4750`, `4751`, `4753`, `4754`, `4755`, `4756`, `4757`, `4759`, `4761`, `4762`, `4763`, `4764`, `4765`, `4766`, `4768`, `4769`, `4770`, `4771`, `4772`, `4774`, `4776`, `4777`, `4778`, `4779`, `4780`, `4781`, `4782`, `4783`, `4784`, `4785`, `4786`, `4787`, `4788`, `4789`, `4790`, `4791`, `4793`, `4795`, `4796`, `4798`, `4799`, `4801`, `4803`, `4804`, `4805`, `4806`, `4807`, `4808`, `4809`, `4811`, `4813`, `4815`, `4816`, `4817`, `4818`, `4819`, `4820`, `4821`, `4822`, `4823`, `4824`, `4825`, `4826`, 
`4827`, `4828` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.89 | | `TOKEN_P` | 99.89 | | `TOKEN_R` | 99.90 | | `TOKEN_ACC` | 99.98 | | `SENTS_F` | 99.89 | | `SENTS_P` | 99.89 | | `SENTS_R` | 99.89 | | `TAG_ACC` | 95.62 | | `POS_ACC` | 99.05 | | `MORPH_ACC` | 95.42 | | `DEP_UAS` | 97.39 | | `DEP_LAS` | 95.55 | | `LEMMA_ACC` | 95.92 |
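The card above only lists components, labels, and scores; as a minimal usage sketch (not part of the original card), the pipeline can be inspected like any packaged spaCy model. This assumes the packaged `pl_udv25_polishlfg_trf` model and `spacy-transformers` are installed in the environment; the Polish example sentence and the printed attributes are purely illustrative.

```python
# Minimal sketch: load the UD_Polish-LFG benchmarking pipeline and inspect its output.
# Assumes the packaged model `pl_udv25_polishlfg_trf` has been installed
# (e.g. from a wheel built for this repository) together with spacy-transformers.
import spacy

nlp = spacy.load("pl_udv25_polishlfg_trf")
print(nlp.pipe_names)  # pipeline components (cf. the Components row above)

# Illustrative Polish sentence: "Yesterday I bought a new book."
doc = nlp("Wczoraj kupiłem nową książkę.")
for token in doc:
    # token.morph carries the UD FEATS analysis; to_dict() splits it into
    # individual Feature=Value pairs like those enumerated in the label scheme.
    print(token.text, token.lemma_, token.pos_, token.tag_,
          token.dep_, token.head.text, token.morph.to_dict())
```

Each printed morphological dictionary corresponds to one of the morphologizer labels enumerated in the label scheme, and the `dep_` values come from the parser label set listed there.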
{"language": ["pl"], "license": "gpl-3.0", "tags": ["spacy", "token-classification"]}
explosion/pl_udv25_polishlfg_trf
null
[ "spacy", "token-classification", "pl", "license:gpl-3.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "pl" ]
TAGS #spacy #token-classification #pl #license-gpl-3.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Polish-LFG ### Label Scheme View label scheme (4947 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (4947 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #pl #license-gpl-3.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (4947 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Portuguese-Bosque | Feature | Description | | --- | --- | | **Name** | `pt_udv25_portuguesebosque_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (2079 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ADJ`, `ADP`, `ADP_ADV`, `ADP_DET`, `ADP_NUM`, `ADP_PRON`, `ADP_PROPN`, `ADV`, `ADV_PRON`, `ADV_PROPN`, `AUX`, `AUX_PRON`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PART_NOUN`, `PRON`, `PRON_PRON`, `PROPN`, `PROPN_DET`, `PROPN_PROPN`, `PUNCT`, `SCONJ`, `SCONJ_DET`, `SCONJ_PRON`, `SYM`, `VERB`, `VERB_PRON`, `X` | | **`morphologizer`** | `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Def\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=PUNCT`, `NumType=Card\|POS=NUM`, `POS=ADV`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `POS=ADP`, `POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, `POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=PROPN`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|Polarity=Neg`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `POS=X`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Unsp\|Number=Plur\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `POS=VERB\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Number=Plur\|POS=AUX\|Person=3\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Art`, `POS=VERB\|VerbForm=Part`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `NumType=Ord\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Unsp\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, 
`Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Gender=Masc\|NumType=Mult\|Number=Sing\|POS=NUM`, `Gender=Unsp\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Unsp\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=PROPN\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `POS=AUX\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Unsp\|Number=Sing\|POS=PRON\|PronType=Rel`, `Number=Sing\|POS=DET\|PronType=Art`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Definite=Def\|POS=SCONJ\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Unsp\|POS=PRON\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=AUX`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `POS=INTJ`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, 
`Gender=Unsp\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Acc\|Gender=Unsp\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Unsp\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Unsp\|POS=VERB\|PronType=Prs\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=AUX\|Person=3\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PART`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Unsp\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=ADV`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Number=Plur\|POS=VERB\|Person=1\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Gender=Masc\|POS=ADJ`, `POS=NOUN`, `POS=AUX\|VerbForm=Ger`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=NOUN`, `Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Unsp\|Number=Plur\|POS=PROPN`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Gender=Unsp\|POS=PRON\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Prs`, `Gender=Unsp\|Number=Unsp\|POS=PRON\|PronType=Rel`, `POS=VERB\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, 
`Case=Acc\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=PROPN\|PronType=Art`, `Case=Dat\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pqp\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=AUX\|Person=1\|Tense=Past`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Gender=Unsp\|Number=Unsp\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADV\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Gender=Unsp\|Number=Unsp\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=DET`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Unsp\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=X`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=SCONJ`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Gender=Fem\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Dat\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, 
`Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `POS=ADP\|PronType=Dem`, `Definite=Def\|Gender=Fem\|POS=ADP\|PronType=Art`, `POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=ADP\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `POS=DET`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, `Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Art`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=AUX\|Person=1\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Ind`, `Case=Dat\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Pass`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=VERB\|Person=2\|PronType=Prs\|VerbForm=Inf`, `Gender=Unsp\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem,Masc\|Number=Sing\|POS=PROPN`, `Gender=Unsp\|Number=Unsp\|POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=NUM`, `POS=PRON\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Dem`, `POS=SYM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=X`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|NumType=Sets\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Unsp\|POS=AUX\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Unsp\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Unsp\|Number=Plur\|POS=PRON\|PronType=Int`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=SCONJ\|PronType=Art`, 
`Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Prs`, `Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Number=Sing\|POS=VERB`, `Number=Sing\|POS=DET`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `NumType=Mult\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Unsp\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Unsp\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Masc\|Number=Sing\|POS=ADV\|Polarity=Neg`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Unsp\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Unsp\|POS=NOUN`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=NOUN`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=SCONJ\|PronType=Art`, `POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=ADV\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Dat\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Gender=Unsp\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Unsp\|Number=Sing\|POS=DET\|PronType=Rel`, `Gender=Fem\|Number=Plur\|POS=VERB`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, 
`Gender=Unsp\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `NumType=Range\|POS=NUM`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Gender=Unsp\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Dat\|Gender=Masc\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Number=Unsp\|POS=PRON\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Gender=Masc\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=SCONJ\|PronType=Dem`, `NumType=Frac\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Ind`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADV\|PronType=Rel`, `Case=Acc\|POS=VERB\|PronType=Prs\|VerbForm=Ger`, `Mood=Cnd\|POS=VERB\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf\|Voice=Pass`, `POS=VERB\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Unsp\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Number=Sing\|POS=X`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, 
`Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=ADV\|PronType=Int`, `Case=Dat\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `POS=VERB`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Dat\|Gender=Fem,Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Unsp\|Number=Unsp\|POS=ADV\|PronType=Int`, `Gender=Unsp\|Number=Sing\|POS=ADV\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `POS=PROPN`, `Case=Acc\|Gender=Masc\|POS=AUX\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Fem\|POS=AUX\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Fem\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Unsp\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=X`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Gender=Fem\|POS=DET\|PronType=Art`, `Gender=Unsp\|Number=Sing\|POS=ADV`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Unsp\|POS=PRON\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PROPN\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=VERB`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pqp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADV\|PronType=Ind`, `Mood=Sub\|Number=Sing\|POS=AUX\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `POS=DET\|PronType=Ind`, `POS=SCONJ\|VerbForm=Ger`, `Mood=Cnd\|Number=Sing\|POS=VERB\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=VERB`, `Mood=Sub\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|POS=PRON\|PronType=Prs`, 
`Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|POS=PROPN`, `Gender=Fem\|Number=Plur\|POS=DET`, `NumType=Ord\|POS=NUM`, `POS=DET\|PronType=Int`, `Case=Acc\|Gender=Unsp\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `POS=PART`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Inf`, `NumType=Card\|POS=ADP`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Unsp\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=SCONJ\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Art`, `Case=Dat\|Gender=Unsp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `dislocated`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `3`, `4`, `6`, `8`, `9`, `11`, `13`, `15`, `17`, `20`, `22`, `24`, `14`, `7`, `26`, `28`, `30`, `32`, `34`, `36`, `38`, `40`, `42`, `44`, `45`, `48`, `53`, `54`, `55`, `57`, `58`, `60`, `62`, `65`, `66`, `67`, `70`, `72`, `74`, `76`, `79`, `83`, `85`, `87`, `89`, `91`, `95`, `99`, `101`, `102`, `104`, `106`, `108`, `110`, `113`, `115`, `117`, `119`, `120`, `122`, `124`, `125`, `126`, `128`, `130`, `132`, `134`, `136`, `138`, `141`, `142`, `144`, `147`, `150`, `152`, `154`, `155`, `159`, `162`, `163`, `165`, `166`, `169`, `171`, `172`, `174`, `175`, `178`, `180`, `181`, `184`, `186`, `189`, `191`, `193`, `195`, `198`, `200`, `111`, `202`, `204`, `207`, `209`, `212`, `214`, `216`, `218`, `220`, `221`, `223`, `224`, `226`, `228`, `230`, `232`, `234`, `236`, `239`, `242`, `244`, `245`, `246`, `247`, `249`, `251`, `252`, `253`, `256`, `257`, `259`, `261`, `263`, `267`, `269`, `270`, `271`, `273`, `277`, `278`, `281`, `282`, `283`, `285`, `286`, `288`, `289`, `290`, `292`, `293`, `295`, `297`, `298`, `300`, `302`, `303`, `305`, `307`, `309`, `310`, `311`, `313`, `314`, `316`, `319`, `168`, `322`, `323`, `326`, `327`, `329`, `331`, `333`, `335`, `336`, `338`, `341`, `343`, `345`, `347`, `348`, `350`, `351`, `354`, `356`, `359`, `361`, `363`, `364`, `365`, `366`, `367`, `369`, `373`, `376`, `378`, `379`, `380`, `381`, `383`, `384`, `386`, `389`, `392`, `394`, `395`, `396`, `398`, `400`, `403`, `405`, `407`, `409`, `410`, `412`, `415`, `416`, `417`, `418`, `419`, `420`, `422`, `424`, `429`, `431`, `432`, 
`438`, `439`, `441`, `442`, `445`, `448`, `449`, `450`, `452`, `454`, `457`, `458`, `461`, `463`, `465`, `468`, `469`, `470`, `473`, `475`, `477`, `478`, `481`, `484`, `485`, `486`, `488`, `491`, `495`, `497`, `499`, `503`, `506`, `507`, `508`, `509`, `510`, `511`, `513`, `514`, `516`, `517`, `519`, `521`, `522`, `523`, `525`, `528`, `530`, `533`, `534`, `536`, `538`, `540`, `541`, `542`, `544`, `545`, `547`, `549`, `551`, `552`, `554`, `555`, `558`, `559`, `560`, `562`, `563`, `565`, `566`, `570`, `572`, `579`, `582`, `583`, `585`, `586`, `587`, `590`, `592`, `594`, `595`, `597`, `599`, `601`, `603`, `606`, `608`, `609`, `611`, `612`, `614`, `615`, `616`, `619`, `621`, `622`, `625`, `626`, `627`, `629`, `630`, `631`, `633`, `634`, `637`, `638`, `639`, `640`, `642`, `644`, `646`, `647`, `652`, `653`, `656`, `657`, `659`, `660`, `661`, `664`, `666`, `669`, `671`, `672`, `673`, `674`, `675`, `677`, `678`, `680`, `682`, `685`, `687`, `689`, `691`, `692`, `693`, `695`, `699`, `701`, `702`, `703`, `706`, `707`, `709`, `710`, `711`, `712`, `714`, `716`, `718`, `719`, `720`, `721`, `724`, `725`, `729`, `730`, `732`, `735`, `738`, `740`, `742`, `744`, `746`, `749`, `750`, `751`, `754`, `756`, `760`, `762`, `767`, `769`, `771`, `774`, `776`, `778`, `780`, `781`, `784`, `785`, `787`, `788`, `789`, `791`, `793`, `794`, `795`, `798`, `800`, `801`, `803`, `804`, `806`, `808`, `810`, `811`, `812`, `814`, `816`, `819`, `820`, `823`, `824`, `825`, `828`, `829`, `832`, `833`, `835`, `836`, `839`, `840`, `844`, `845`, `847`, `850`, `851`, `853`, `854`, `855`, `858`, `861`, `862`, `863`, `865`, `868`, `871`, `873`, `875`, `877`, `879`, `880`, `881`, `882`, `883`, `884`, `885`, `887`, `889`, `892`, `894`, `895`, `537`, `896`, `898`, `899`, `902`, `904`, `905`, `908`, `909`, `912`, `914`, `916`, `917`, `920`, `921`, `922`, `924`, `925`, `928`, `929`, `930`, `931`, `933`, `936`, `939`, `940`, `942`, `943`, `945`, `948`, `949`, `951`, `953`, `956`, `957`, `960`, `961`, `963`, `964`, `965`, `966`, `969`, `970`, `971`, `973`, `976`, `977`, `979`, `981`, `983`, `985`, `987`, `988`, `990`, `991`, `993`, `994`, `995`, `996`, `997`, `998`, `1000`, `1001`, `1004`, `1006`, `1007`, `1009`, `1011`, `1013`, `1014`, `1015`, `1019`, `1021`, `1023`, `1025`, `1026`, `1029`, `1030`, `1033`, `1034`, `1036`, `1037`, `1039`, `1041`, `1042`, `1044`, `1046`, `1048`, `1050`, `1051`, `1054`, `1056`, `1057`, `1059`, `1061`, `1062`, `1064`, `1066`, `1067`, `1068`, `1069`, `1071`, `1072`, `1073`, `1074`, `1075`, `1077`, `1078`, `1079`, `1081`, `1083`, `1084`, `1085`, `1086`, `1088`, `1089`, `1092`, `1093`, `1097`, `1100`, `1101`, `1103`, `1104`, `1106`, `1108`, `1110`, `1114`, `1115`, `1117`, `1118`, `1119`, `1121`, `1123`, `1124`, `1126`, `1127`, `1128`, `1130`, `1133`, `1135`, `1136`, `1140`, `1143`, `1146`, `1148`, `1149`, `1151`, `1152`, `1155`, `1157`, `1158`, `1160`, `1163`, `1164`, `1165`, `1167`, `1168`, `1170`, `1172`, `1176`, `1177`, `1178`, `1180`, `1182`, `1184`, `1186`, `1187`, `1189`, `1190`, `1193`, `1196`, `1198`, `1202`, `1203`, `1204`, `1205`, `1206`, `1207`, `1208`, `1209`, `1210`, `1211`, `1214`, `1215`, `1216`, `1218`, `1219`, `1220`, `1221`, `1223`, `1225`, `1226`, `1228`, `1229`, `1230`, `1233`, `1234`, `1236`, `1237`, `1238`, `1239`, `1240`, `1242`, `1244`, `1247`, `1248`, `1249`, `1250`, `1251`, `1254`, `1256`, `1257`, `1258`, `1260`, `1262`, `1263`, `1266`, `1271`, `1272`, `1273`, `1274`, `1275`, `1277`, `1278`, `1279`, `1280`, `1283`, `1285`, `1287`, `1288`, `1290`, `1293`, `1294`, `1296`, `1299`, `1301`, 
`1302`, `1304`, `1307`, `1308`, `1309`, `1311`, `1312`, `1314`, `1315`, `1317`, `1320`, `1322`, `1324`, `1325`, `1326`, `1329`, `1330`, `1332`, `1333`, `1334`, `1336`, `1338`, `1339`, `1340`, `1341`, `1344`, `1345`, `1346`, `1348`, `1350`, `1351`, `1352`, `1354`, `1356`, `1358`, `1359`, `1360`, `1361`, `1362`, `1363`, `1367`, `1370`, `1371`, `1373`, `1375`, `1377`, `1378`, `1379`, `1381`, `1382`, `1383`, `1385`, `1386`, `1388`, `1389`, `1393`, `1395`, `1399`, `1401`, `1402`, `1403`, `1404`, `1405`, `1407`, `1408`, `1411`, `1413`, `1417`, `1418`, `1419`, `1420`, `1421`, `1423`, `1424`, `1425`, `1429`, `1430`, `1431`, `1433`, `1434`, `1436`, `1437`, `1438`, `1439`, `1442`, `1444`, `1446`, `1447`, `1449`, `1451`, `1453`, `1454`, `1455`, `1458`, `1461`, `1463`, `1464`, `1465`, `1467`, `1468`, `1469`, `1470`, `1471`, `1473`, `1476`, `1477`, `1478`, `1479`, `1482`, `1483`, `1484`, `1489`, `1491`, `1492`, `1494`, `1496`, `1497`, `1499`, `1502`, `1504`, `1505`, `1506`, `1507`, `1508`, `1509`, `1511`, `1514`, `1515`, `1517`, `1520`, `1521`, `1524`, `1525`, `1528`, `1529`, `1530`, `1532`, `1533`, `1534`, `1536`, `1538`, `1539`, `1541`, `1543`, `1544`, `1545`, `1546`, `1547`, `1548`, `1552`, `1556`, `1558`, `1560`, `1562`, `1563`, `1566`, `1567`, `1569`, `1570`, `1572`, `1574`, `1577`, `761`, `1579`, `1583`, `1585`, `1586`, `1587`, `1590`, `1592`, `1593`, `1595`, `1596`, `1597`, `1599`, `1603`, `1605`, `1607`, `1609`, `1610`, `1612`, `1614`, `1615`, `1617`, `1618`, `1620`, `1621`, `1622`, `1625`, `1627`, `1629`, `1630`, `1631`, `1633`, `1634`, `1636`, `1637`, `1638`, `1640`, `1641`, `1643`, `1644`, `1646`, `1647`, `1648`, `1651`, `1652`, `1657`, `1658`, `1659`, `1661`, `1662`, `1663`, `1664`, `1666`, `1669`, `1672`, `1673`, `1675`, `1676`, `1677`, `1679`, `1682`, `1684`, `1409`, `1685`, `1686`, `1687`, `1688`, `1690`, `1692`, `1693`, `1694`, `1695`, `1697`, `1699`, `1700`, `1704`, `1707`, `1708`, `1709`, `1711`, `1712`, `1715`, `1716`, `1717`, `1718`, `1719`, `1721`, `1722`, `1723`, `1725`, `1726`, `1729`, `1730`, `1732`, `1733`, `1734`, `1735`, `1737`, `1738`, `1741`, `1743`, `1744`, `1746`, `1747`, `1748`, `1750`, `1752`, `1754`, `1755`, `1756`, `1758`, `1759`, `1760`, `1762`, `1765`, `1766`, `1768`, `1769`, `1770`, `1773`, `1774`, `1775`, `1777`, `1778`, `1781`, `1782`, `1783`, `1785`, `1786`, `1787`, `219`, `1788`, `1789`, `1791`, `1792`, `1793`, `1795`, `1799`, `1800`, `1801`, `1802`, `1803`, `1805`, `1806`, `1808`, `1809`, `1811`, `1812`, `1814`, `1815`, `1816`, `1821`, `1823`, `1824`, `1825`, `1826`, `1829`, `1830`, `1831`, `1832`, `1833`, `1835`, `1838`, `1839`, `1840`, `1842`, `1843`, `1845`, `1846`, `1848`, `1849`, `1850`, `1851`, `1855`, `1856`, `1857`, `1859`, `1861`, `1862`, `1864`, `1866`, `1867`, `1869`, `421`, `1870`, `1872`, `1873`, `1874`, `1875`, `1878`, `1879`, `1880`, `1882`, `1883`, `1884`, `1885`, `1888`, `1891`, `1894`, `1895`, `1898`, `1901`, `1903`, `1904`, `1906`, `1907`, `1910`, `1912`, `1915`, `1917`, `1918`, `1920`, `1921`, `1922`, `1924`, `1926`, `1927`, `1930`, `1932`, `1933`, `1936`, `1938`, `1940`, `1941`, `1942`, `1943`, `1945`, `1947`, `1949`, `1951`, `1952`, `1953`, `1954`, `1956`, `1957`, `1958`, `1960`, `1961`, `1963`, `1964`, `1966`, `1968`, `1971`, `1973`, `1974`, `1975`, `1977`, `1979`, `1981`, `1983`, `1985`, `1986`, `1987`, `1988`, `792`, `1990`, `790`, `1992`, `1994`, `1996`, `1998`, `1999`, `2000`, `2001`, `2002`, `2003`, `2005`, `2006`, `2008`, `2010`, `2011`, `2012`, `2014`, `2016`, `2017`, `2018`, `2019`, `2021`, `2022`, `2023`, `2024`, `2025`, `2026`, 
`2028`, `2029`, `2031`, `2034`, `2036`, `2038`, `2041`, `2042`, `2044`, `2045`, `2046`, `2050`, `2051`, `2052`, `2055`, `2056`, `2057`, `2059`, `2060`, `2061`, `2062`, `2064`, `2066`, `2068`, `2069`, `2070`, `2072`, `2073`, `2075`, `2076`, `2078`, `2079`, `2081`, `2083`, `2084`, `2086`, `2088`, `2089`, `2091`, `2093`, `2095`, `2097`, `2098`, `2099`, `2101`, `2102`, `2103`, `2104`, `2106`, `2107`, `2108`, `2109`, `2110`, `2111`, `2112`, `2114`, `2116`, `2117`, `2118`, `2119`, `2120`, `2121`, `2122`, `2123`, `2124`, `2125`, `2126`, `2127`, `1584`, `2128`, `2130`, `2131`, `2132`, `2134`, `2137`, `2138`, `2139`, `2141`, `2144`, `2145`, `2146`, `2147`, `2150`, `2151`, `2153`, `2154`, `2155`, `2156`, `2157`, `2159`, `2160`, `2161`, `2163`, `2164`, `2165`, `2166`, `2167`, `2168`, `2169`, `2170`, `2173`, `2174`, `2175`, `2176`, `2177`, `2179`, `2182`, `2185`, `2187`, `2188`, `2189`, `2191`, `2193`, `2194`, `2195`, `2196`, `2197`, `2198`, `2200`, `2202`, `2203`, `2204`, `2205`, `2206`, `2207`, `2208`, `2209`, `2210`, `2211`, `2212`, `2213`, `2216`, `2217`, `2219`, `2221`, `2224`, `2227`, `2229`, `2230`, `2232`, `2233`, `2234`, `2235`, `2237`, `2239`, `2240`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2248`, `2249`, `2250`, `2251`, `2253`, `2254`, `2255`, `2257`, `2258`, `2260`, `2261`, `2262`, `2263`, `2264`, `2265`, `2266`, `2267`, `2268`, `2269`, `2270`, `2271`, `2272`, `2273`, `2274`, `2275`, `2277`, `2278`, `2281`, `2282`, `2283`, `2284`, `2285`, `2287`, `2288`, `2290`, `2291`, `2292`, `2293`, `2294`, `2297`, `2298`, `2299`, `2300`, `2302`, `2304`, `2305`, `2307`, `2308`, `2309`, `2310`, `2312`, `2313`, `2314`, `2315`, `2316`, `2317`, `2318`, `2319`, `2321`, `2322`, `2323`, `2327`, `2329`, `2331`, `2333`, `2335`, `2337`, `2338`, `2339`, `2341`, `2342`, `2343`, `2346`, `2348`, `2349`, `2350`, `2351`, `2352`, `2353`, `37`, `2354`, `2355`, `2357`, `2358`, `2359`, `2360`, `2361`, `2362`, `2364`, `2365`, `2367`, `2368`, `2369`, `2370`, `2372`, `2375`, `2376`, `2378`, `2379`, `2380`, `2381`, `2382`, `2383`, `2384`, `2385`, `2386`, `2389`, `2390`, `2392`, `2393`, `2394`, `2395`, `2398`, `2399`, `2400`, `2402`, `2403`, `2405`, `2406`, `2407`, `2408`, `2409`, `2410`, `2413`, `2415`, `2416`, `2417`, `2418`, `2419`, `2420`, `2422`, `2424`, `2427`, `2428`, `2429`, `2430`, `2431`, `2432`, `2433`, `2435`, `2437`, `1962`, `2438`, `2439`, `2440`, `2442`, `2443`, `2444`, `2445` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.92 | | `TOKEN_P` | 99.93 | | `TOKEN_R` | 99.91 | | `TOKEN_ACC` | 99.99 | | `SENTS_F` | 95.82 | | `SENTS_P` | 95.40 | | `SENTS_R` | 96.25 | | `TAG_ACC` | 98.09 | | `POS_ACC` | 98.14 | | `MORPH_ACC` | 97.34 | | `DEP_UAS` | 93.85 | | `DEP_LAS` | 91.19 | | `LEMMA_ACC` | 98.00 |
{"language": ["pt"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/pt_udv25_portuguesebosque_trf
null
[ "spacy", "token-classification", "pt", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "pt" ]
TAGS #spacy #token-classification #pt #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Portuguese-Bosque ### Label Scheme View label scheme (2079 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (2079 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #pt #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (2079 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Romanian-Nonstandard | Feature | Description | | --- | --- | | **Name** | `ro_udv25_romaniannonstandard_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (7445 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `AdpType=Prep\|Case=Acc`, `Afp`, `Afpf--n`, `Afpfp-n`, `Afpfpon`, `Afpfpoy`, `Afpfprn`, `Afpfpry`, `Afpfson`, `Afpfsoy`, `Afpfsrn`, `Afpfsry`, `Afpmp-n`, `Afpmpoy`, `Afpmprn`, `Afpmpry`, `Afpmpvy`, `Afpms-n`, `Afpmsoy`, `Afpmsrn`, `Afpmsry`, `Afpmsvn`, `Afpmsvy`, `COLON`, `COMMA`, `Cccsp`, `Cccsz`, `Ccssp`, `Ccssz`, `Cscsp`, `Csssp`, `DASH`, `DBLQ`, `Dd3-po---e`, `Dd3-po---o`, `Dd3fpo`, `Dd3fpr`, `Dd3fpr---e`, `Dd3fpr---o`, `Dd3fso`, `Dd3fso---e`, `Dd3fso---o`, `Dd3fsr`, `Dd3fsr---e`, `Dd3fsr---o`, `Dd3mpo`, `Dd3mpr`, `Dd3mpr---e`, `Dd3mpr---o`, `Dd3mso`, `Dd3mso---e`, `Dd3mso---o`, `Dd3msr`, `Dd3msr---e`, `Dd3msr---o`, `Dh1mp`, `Dh1ms`, `Dh2mp`, `Dh2ms`, `Dh3fp`, `Dh3mp`, `Dh3ms`, `Di3--r`, `Di3-po`, `Di3-sr`, `Di3fp`, `Di3fpo`, `Di3fpr`, `Di3fso`, `Di3fsr`, `Di3mpr`, `Di3mso`, `Di3msr`, `Ds1fp-p`, `Ds1fp-s`, `Ds1fsop`, `Ds1fsos`, `Ds1fsrp`, `Ds1fsrs`, `Ds1mp-p`, `Ds1mp-s`, `Ds1ms-p`, `Ds1ms-s`, `Ds2fp-p`, `Ds2fp-s`, `Ds2fsop`, `Ds2fsos`, `Ds2fsrp`, `Ds2fsrs`, `Ds2mp-p`, `Ds2mp-s`, `Ds2ms-p`, `Ds2ms-s`, `Ds3fp-s`, `Ds3fsos`, `Ds3fsrs`, `Ds3mp-s`, `Ds3ms-s`, `Dw3--r`, `Dw3-po`, `Dw3fpr`, `Dw3fso`, `Dw3fsr`, `Dw3mpr`, `Dw3mso`, `Dw3msr`, `Dz3fpr`, `Dz3fsr`, `Dz3msr`, `EXCL`, `EXCLHELLIP`, `HELLIP`, `I`, `LPAR`, `M`, `Mc-p-l`, `Mcfp-l`, `Mcfpol`, `Mcfprln`, `Mcfsoln`, `Mcfsoly`, `Mcfsrln`, `Mcfsrly`, `Mcmp-l`, `Mcms-ln`, `Mcmsoly`, `Mcmsrl`, `Mcmsrly`, `Mffsrln`, `Ml-po`, `Mlfpr`, `Mlmpr`, `Mmfpr-n`, `Mmmpr-n`, `Mmmsr-n`, `Mo---l`, `Mo---ln`, `Mo-s-r`, `Mofprln`, `Mofprly`, `Mofs-l`, `Mofs-ly`, `Mofsrln`, `Mofsrly`, `Momp-ln`, `Moms-l`, `Moms-ln`, `Momsoly`, `Momsrly`, `Ncfpoy`, `Ncfprn`, `Ncfpry`, `Ncfpvy`, `Ncfson`, `Ncfsoy`, `Ncfsrn`, `Ncfsry`, `Ncfsvn`, `Ncfsvy`, `Ncmpoy`, `Ncmprn`, `Ncmpry`, `Ncmpvy`, `Ncmson`, `Ncmsoy`, `Ncmsrn`, `Ncmsry`, `Ncmsvn`, `Ncmsvy`, `Ncnsrn`, `Np`, `Npfpoy`, `Npfprn`, `Npfpry`, `Npfsoy`, `Npfsrn`, `Npfsry`, `Npfsvn`, `Npmpoy`, `Npmprn`, `Npmpry`, `Npmsoy`, `Npmsrn`, `Npmsry`, `Npmsvn`, `Npmsvy`, `PERIOD`, `Pd3-po`, `Pd3-po---o`, `Pd3fpo`, `Pd3fpr`, `Pd3fso`, `Pd3fsr`, `Pd3mpo`, `Pd3mpr`, `Pd3mso`, `Pd3msr`, `Ph1mp`, `Ph1ms`, `Ph2mp`, `Ph2ms`, `Ph3--r`, `Ph3fp`, `Ph3fsr`, `Ph3mp`, `Ph3mpo`, `Ph3mpr`, `Ph3ms`, `Ph3mso`, `Pi3--r`, `Pi3-po`, `Pi3-so`, `Pi3-sr`, `Pi3fpo`, `Pi3fpr`, `Pi3fso`, `Pi3fsr`, `Pi3mpo`, `Pi3mpr`, `Pi3mpry`, `Pi3mso`, `Pi3msr`, `Pi3msry`, `Pp1-pa--------s`, `Pp1-pa--------w`, `Pp1-pd--------s`, `Pp1-pd--------w`, `Pp1-pr`, `Pp1-sa--------s`, `Pp1-sa--------w`, `Pp1-sd--------s`, `Pp1-sd--------w`, `Pp1-sr`, `Pp2-pa--------s`, 
`Pp2-pa--------w`, `Pp2-pd--------s`, `Pp2-pd--------w`, `Pp2-po`, `Pp2-pr`, `Pp2-sa--------s`, `Pp2-sa--------w`, `Pp2-sd--------s`, `Pp2-sd--------w`, `Pp2-so`, `Pp2-sr`, `Pp3-pd--------s`, `Pp3-pd--------w`, `Pp3-po`, `Pp3-pr`, `Pp3-sd--------w`, `Pp3-so`, `Pp3fpa--------s`, `Pp3fpa--------w`, `Pp3fpr`, `Pp3fsa--------s`, `Pp3fsa--------w`, `Pp3fsd--------s`, `Pp3fso`, `Pp3fsoy`, `Pp3fsr`, `Pp3mpa--------s`, `Pp3mpa--------w`, `Pp3mpo`, `Pp3mpr`, `Pp3msa--------s`, `Pp3msa--------w`, `Pp3msd--------s`, `Pp3mso`, `Pp3msr`, `Pp3msry`, `Ps1fp-p`, `Ps1fp-s`, `Ps1fsrp`, `Ps1fsrs`, `Ps1mp-p`, `Ps1ms-p`, `Ps1ms-s`, `Ps2fp-p`, `Ps2fp-s`, `Ps2fsrp`, `Ps2fsrs`, `Ps2mp-s`, `Ps2ms-p`, `Ps2ms-s`, `Ps3fp-s`, `Ps3fsrs`, `Ps3mp-s`, `Ps3ms-s`, `Pw3--r`, `Pw3-po`, `Pw3-pr`, `Pw3-pry`, `Pw3-so`, `Pw3fpr`, `Pw3fpry`, `Pw3fso`, `Pw3fsr`, `Pw3fsry`, `Pw3mpr`, `Pw3mpry`, `Pw3mso`, `Pw3msr`, `Pw3msry`, `Px3--a--------s`, `Px3--a--------w`, `Px3--d--------s`, `Px3--d--------w`, `Px3--d-------w`, `Pz3-so`, `Pz3-sr`, `Pz3fsr`, `Pz3mso`, `Pz3msr`, `QUEST`, `QUOT`, `Qn`, `Qs`, `Qz`, `RPAR`, `Rg`, `Ri`, `Rw`, `Rz`, `SCOLON`, `Sp`, `Spca`, `Spcg`, `Spsa`, `Spsd`, `Spsg`, `TILDA`, `Td-po`, `Tdfpr`, `Tdfso`, `Tdfsr`, `Tdmpr`, `Tdmso`, `Tdmsr`, `Tf-so`, `Tffsr`, `Tfmso`, `Tfmsr`, `Ti-po`, `Ti-pr`, `Tifso`, `Tifsr`, `Timso`, `Timsr`, `Tsfpr`, `Tsfso`, `Tsfsr`, `Tsmpr`, `Tsmsr`, `Vag-----p`, `Vag-----z`, `Vaii1p`, `Vaii1s`, `Vaii2p`, `Vaii2s`, `Vaii3p`, `Vaii3s`, `Vail3s`, `Vaip1p`, `Vaip1s`, `Vaip2p`, `Vaip2s`, `Vaip3`, `Vaip3p`, `Vaip3s`, `Vais1p`, `Vais1s`, `Vais2p`, `Vais2s`, `Vais3p`, `Vais3s`, `Vam-2p`, `Vam-2p---l`, `Vam-2s--p`, `Vam-2s--z`, `Vam-2s-p`, `Vam-2s-z`, `Vamip3p`, `Vamip3s`, `Vamn`, `Vamsp3`, `Van`, `Van------l`, `Vap`, `Vap--sm-p`, `Vasp1p`, `Vasp1s`, `Vasp2p`, `Vasp2s`, `Vasp3`, `Vasp3s`, `Vmg-----p`, `Vmg-----z`, `Vmii1p`, `Vmii1s`, `Vmii2p`, `Vmii2s`, `Vmii3p`, `Vmii3s`, `Vmil1s`, `Vmil2p`, `Vmil2s`, `Vmil3p`, `Vmil3s`, `Vmip1p`, `Vmip1s`, `Vmip2p`, `Vmip2s`, `Vmip3`, `Vmip3p`, `Vmip3s`, `Vmis1p`, `Vmis1s`, `Vmis2p`, `Vmis2s`, `Vmis3p`, `Vmis3s`, `Vmm-2p`, `Vmm-2p---l`, `Vmm-2s--p`, `Vmm-2s--z`, `Vmn`, `Vmn------l`, `Vmp`, `Vmp--pf-p`, `Vmp--pf-z`, `Vmp--pm-p`, `Vmp--pm-z`, `Vmp--sf-p--o`, `Vmp--sf-p--r`, `Vmp--sf-z--r`, `Vmp--sm-p`, `Vmp--sm-z`, `Vmsp1p`, `Vmsp1s`, `Vmsp2p`, `Vmsp2s`, `Vmsp3`, `Vmsp3s`, `X`, `Y` | | **`morphologizer`** | `AdpType=Prep\|Case=Acc\|POS=ADP`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `POS=ADV\|PronType=Int,Rel`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Int,Rel`, `POS=CCONJ\|Polarity=Pos`, `Compound=Yes\|POS=SCONJ\|Polarity=Pos`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, 
`Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PART\|PartType=Sub`, `Mood=Sub\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADV\|Polarity=Neg`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `POS=AUX\|Polarity=Pos\|VerbForm=Ger`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|Polarity=Pos\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=INTJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ\|Polarity=Pos`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres`, `Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `AdpType=Prep\|Case=Acc\|Compound=Yes\|POS=ADP`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat,Gen\|Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|POS=DET\|Person=3\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Voc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, 
`Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres`, `POS=AUX\|VerbForm=Part`, `POS=VERB\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=PART\|PartType=Inf`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Art`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Art`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Sub\|POS=AUX\|Person=3\|Tense=Pres`, `Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, 
`Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `NumForm=Digit\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `POS=PROPN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Compound=Yes\|POS=CCONJ\|Polarity=Neg`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `POS=AUX\|VerbForm=Inf`, `AdpType=Prep\|Case=Gen\|Compound=Yes\|POS=ADP`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=2\|PronType=Emp`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|Polarity=Neg\|VerbForm=Ger`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Emp`, `Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Variant=Long\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Definite=Def\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, 
`Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Voc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Compound=Yes\|POS=CCONJ\|Polarity=Pos`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Voc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Art`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NOUN`, `AdpType=Prep\|Case=Gen\|POS=ADP`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Emp`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `POS=VERB\|Variant=Long\|VerbForm=Inf`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `AdpType=Prep\|Case=Dat\|POS=ADP`, 
`Case=Dat,Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|PronType=Prs`, `Compound=Yes\|POS=ADV\|Polarity=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Art`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|NumForm=Word\|NumType=Ord\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=AUX\|Variant=Long\|VerbForm=Inf`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `POS=X`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Emp`, 
`Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Case=Acc,Nom\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=AUX\|Polarity=Neg\|VerbForm=Ger`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Case=Acc,Nom\|POS=DET\|Person=3\|PronType=Ind`, `Case=Voc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `POS=CCONJ\|Polarity=Neg`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Voc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Polite=Form\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past`, `Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Voc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `POS=PRON\|Polarity=Pos`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Emp`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, 
`Case=Acc,Nom\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=2\|PronType=Emp`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|Position=Postnom\|PronType=Dem`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Emp`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Emp`, `Case=Dat,Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Compound=Yes\|POS=ADP\|Polarity=Pos`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=ADJ`, `Case=Voc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `POS=ADV\|PronType=Ind`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `POS=AUX\|Polarity=Pos`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres`, `NumForm=Roman\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres`, 
`Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Variant=Long\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Variant=Long`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Imp`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Emp`, `NumForm=Word\|NumType=Ord\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=1\|PronType=Emp`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=1\|PronType=Emp`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Art`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Emp`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Past`, `Case=Dat,Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, 
`Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Person=3\|Tense=Pres`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Strong`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|POS=ADJ`, `POS=DET`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=ADP`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumType=Mult\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Emp`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Neg`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Pos\|POS=ADJ`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Part`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=ADV\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Polite=Form\|PronType=Prs` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advcl:tcl`, `advmod`, `advmod:tmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `ccomp:pmod`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl`, `expl:impers`, `expl:pass`, `expl:poss`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nmod:agent`, `nmod:pmod`, `nmod:tmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `3`, `4`, `6`, `8`, `12`, `14`, `16`, `19`, `23`, `29`, `30`, `32`, `35`, `37`, `39`, `40`, `45`, `46`, `47`, `51`, `53`, `54`, `57`, `61`, `63`, `65`, `66`, `69`, `33`, `71`, `73`, `76`, `79`, 
`80`, `84`, `86`, `87`, `88`, `89`, `92`, `95`, `97`, `100`, `103`, `105`, `107`, `110`, `112`, `113`, `115`, `117`, `120`, `121`, `123`, `125`, `126`, `128`, `130`, `132`, `133`, `136`, `140`, `143`, `145`, `147`, `58`, `148`, `151`, `154`, `157`, `159`, `163`, `165`, `167`, `171`, `174`, `176`, `178`, `180`, `182`, `184`, `185`, `187`, `188`, `190`, `192`, `196`, `197`, `199`, `200`, `202`, `206`, `208`, `210`, `211`, `213`, `215`, `216`, `219`, `221`, `223`, `225`, `226`, `228`, `230`, `232`, `236`, `238`, `241`, `242`, `244`, `246`, `248`, `251`, `253`, `255`, `258`, `260`, `264`, `265`, `267`, `272`, `275`, `278`, `280`, `281`, `284`, `286`, `287`, `290`, `291`, `292`, `295`, `296`, `298`, `300`, `301`, `302`, `305`, `306`, `307`, `309`, `310`, `312`, `314`, `315`, `317`, `319`, `321`, `323`, `324`, `327`, `330`, `332`, `334`, `335`, `337`, `339`, `340`, `343`, `344`, `345`, `346`, `350`, `351`, `353`, `355`, `357`, `360`, `362`, `366`, `368`, `369`, `370`, `371`, `224`, `374`, `376`, `378`, `379`, `381`, `384`, `385`, `386`, `388`, `389`, `391`, `392`, `393`, `396`, `398`, `399`, `403`, `406`, `408`, `411`, `413`, `415`, `418`, `422`, `423`, `426`, `427`, `431`, `433`, `436`, `438`, `440`, `442`, `445`, `448`, `449`, `450`, `451`, `452`, `454`, `455`, `457`, `459`, `460`, `462`, `464`, `466`, `468`, `471`, `472`, `473`, `474`, `475`, `478`, `481`, `482`, `485`, `486`, `488`, `490`, `492`, `494`, `495`, `497`, `498`, `499`, `501`, `503`, `504`, `506`, `508`, `510`, `513`, `514`, `515`, `516`, `518`, `519`, `521`, `523`, `524`, `526`, `527`, `528`, `530`, `533`, `96`, `537`, `538`, `539`, `542`, `544`, `545`, `547`, `548`, `553`, `555`, `556`, `558`, `559`, `561`, `562`, `563`, `565`, `566`, `570`, `572`, `573`, `575`, `577`, `578`, `579`, `581`, `583`, `584`, `586`, `588`, `589`, `592`, `594`, `595`, `596`, `598`, `599`, `600`, `601`, `604`, `606`, `607`, `608`, `612`, `613`, `616`, `619`, `621`, `623`, `625`, `628`, `629`, `630`, `632`, `635`, `636`, `173`, `639`, `641`, `643`, `647`, `649`, `651`, `654`, `656`, `658`, `659`, `661`, `662`, `663`, `666`, `668`, `669`, `670`, `672`, `673`, `676`, `677`, `679`, `681`, `683`, `685`, `687`, `689`, `690`, `691`, `693`, `694`, `695`, `696`, `698`, `699`, `701`, `702`, `703`, `704`, `705`, `706`, `708`, `712`, `713`, `716`, `718`, `720`, `722`, `724`, `725`, `729`, `732`, `734`, `735`, `736`, `739`, `742`, `745`, `747`, `750`, `753`, `755`, `758`, `759`, `761`, `763`, `764`, `766`, `768`, `769`, `771`, `772`, `774`, `777`, `778`, `781`, `784`, `785`, `787`, `790`, `794`, `797`, `800`, `801`, `802`, `804`, `807`, `809`, `814`, `817`, `820`, `821`, `822`, `824`, `827`, `828`, `829`, `832`, `834`, `836`, `837`, `839`, `840`, `841`, `843`, `844`, `846`, `847`, `848`, `850`, `851`, `852`, `855`, `116`, `856`, `860`, `861`, `863`, `866`, `868`, `869`, `871`, `874`, `875`, `877`, `879`, `881`, `884`, `886`, `888`, `890`, `891`, `892`, `894`, `897`, `898`, `900`, `901`, `902`, `904`, `905`, `908`, `913`, `914`, `916`, `917`, `918`, `921`, `922`, `924`, `927`, `929`, `932`, `934`, `935`, `937`, `939`, `941`, `943`, `946`, `948`, `949`, `951`, `952`, `954`, `955`, `956`, `958`, `960`, `963`, `965`, `968`, `971`, `972`, `974`, `978`, `981`, `983`, `984`, `986`, `988`, `989`, `991`, `992`, `994`, `997`, `998`, `1000`, `1001`, `1002`, `1004`, `1006`, `1007`, `1008`, `1010`, `1011`, `1013`, `1014`, `1015`, `1017`, `1019`, `1022`, `1024`, `1029`, `1030`, `1032`, `1034`, `767`, `1035`, `1036`, `1037`, `1038`, `1040`, `1041`, `1042`, `1044`, `1045`, `1046`, 
`1049`, `1050`, `1052`, `1053`, `1055`, `1058`, `1061`, `1065`, `1067`, `1068`, `1071`, `1072`, `1074`, `1076`, `1078`, `1080`, `1081`, `1083`, `1084`, `1086`, `1087`, `1090`, `1091`, `1093`, `1097`, `1098`, `1099`, `1100`, `1102`, `1105`, `1106`, `1107`, `1110`, `1111`, `1113`, `1116`, `1123`, `1126`, `1127`, `1128`, `1129`, `1131`, `1132`, `1133`, `1135`, `1137`, `1139`, `1141`, `1144`, `1145`, `1147`, `1149`, `1150`, `1152`, `1154`, `1155`, `1156`, `1157`, `1158`, `1115`, `1159`, `1160`, `1162`, `1163`, `1164`, `1165`, `1168`, `1170`, `1172`, `1173`, `1174`, `1175`, `1176`, `1177`, `1178`, `1179`, `1181`, `1183`, `1184`, `1186`, `1187`, `1191`, `1195`, `1197`, `1198`, `1200`, `1201`, `1203`, `1205`, `1207`, `1209`, `1211`, `1212`, `1214`, `1215`, `1217`, `1219`, `1220`, `1223`, `1225`, `1227`, `183`, `1228`, `1231`, `1232`, `1234`, `1237`, `1239`, `1240`, `1242`, `1245`, `1247`, `1248`, `1249`, `1251`, `1252`, `1254`, `1255`, `1257`, `1259`, `1261`, `1263`, `1264`, `1266`, `1268`, `1272`, `1273`, `1277`, `1278`, `1280`, `1281`, `1282`, `1285`, `1286`, `1290`, `1291`, `1294`, `1296`, `1298`, `1300`, `1301`, `1303`, `1305`, `1308`, `1309`, `1310`, `1311`, `1312`, `1314`, `1316`, `1318`, `1320`, `1322`, `1324`, `1325`, `1327`, `1329`, `1331`, `1333`, `1335`, `1337`, `1338`, `1339`, `1341`, `1342`, `1343`, `1344`, `1346`, `1347`, `1350`, `142`, `1354`, `1355`, `1357`, `1358`, `1360`, `1362`, `1365`, `1366`, `1367`, `1368`, `1369`, `744`, `1370`, `1372`, `1373`, `1374`, `1375`, `1376`, `1377`, `1378`, `1380`, `1381`, `1382`, `1383`, `1386`, `1388`, `1389`, `1390`, `1394`, `1396`, `1399`, `1402`, `1405`, `1407`, `1409`, `1411`, `1412`, `1413`, `1414`, `1418`, `1419`, `1421`, `1422`, `1423`, `1424`, `1426`, `1427`, `1430`, `1432`, `1433`, `1434`, `1436`, `1438`, `1439`, `1440`, `1441`, `1442`, `1443`, `1446`, `1447`, `1448`, `1449`, `1450`, `1454`, `1456`, `1458`, `1459`, `1460`, `1464`, `1465`, `1467`, `1468`, `1469`, `1470`, `1472`, `1473`, `1475`, `1478`, `1479`, `1481`, `1483`, `1484`, `1486`, `1003`, `1489`, `1491`, `1493`, `1496`, `1498`, `1499`, `1501`, `1503`, `1506`, `1508`, `1511`, `1514`, `1515`, `1517`, `1518`, `1521`, `1522`, `1523`, `1524`, `1525`, `1528`, `1530`, `1531`, `1532`, `1533`, `1537`, `1539`, `1541`, `1542`, `1543`, `1545`, `1546`, `1547`, `1549`, `1550`, `1551`, `1552`, `1553`, `1555`, `1558`, `1559`, `1561`, `1562`, `1564`, `1566`, `1568`, `1570`, `1572`, `1576`, `1577`, `1579`, `1580`, `1582`, `1584`, `1585`, `1588`, `1590`, `1592`, `1593`, `1594`, `1596`, `1597`, `1599`, `1600`, `1601`, `1603`, `1605`, `1607`, `1609`, `1613`, `1615`, `1617`, `1619`, `1622`, `1623`, `1624`, `1625`, `1626`, `1627`, `1628`, `1629`, `1630`, `1633`, `1636`, `1638`, `1639`, `1640`, `1641`, `1643`, `1645`, `1647`, `1649`, `1652`, `1655`, `1656`, `1658`, `1660`, `1662`, `1665`, `1667`, `1669`, `1670`, `1671`, `1673`, `1674`, `1677`, `1678`, `1679`, `1680`, `1683`, `1686`, `1688`, `1689`, `1691`, `1693`, `1694`, `1696`, `1698`, `1699`, `1703`, `1704`, `1707`, `1708`, `1710`, `1712`, `1714`, `1716`, `1718`, `1720`, `1722`, `1724`, `1725`, `1726`, `1727`, `1729`, `1730`, `1731`, `1733`, `1734`, `1736`, `1737`, `1740`, `1741`, `1743`, `1744`, `1746`, `1747`, `1749`, `1750`, `1751`, `1752`, `1754`, `1755`, `1757`, `1758`, `1760`, `1762`, `1764`, `1766`, `1767`, `1769`, `1771`, `1774`, `1777`, `1779`, `1780`, `1781`, `1783`, `1785`, `1786`, `1789`, `1790`, `1793`, `1796`, `1799`, `1800`, `1802`, `1804`, `1805`, `1807`, `1809`, `1810`, `1813`, `1815`, `1817`, `1819`, `1822`, `1823`, `1825`, 
`1826`, `1827`, `1829`, `1830`, `1833`, `1835`, `1837`, `1840`, `1843`, `1844`, `1846`, `1848`, `1850`, `1853`, `1854`, `1855`, `1857`, `1859`, `1863`, `1865`, `1867`, `1870`, `1872`, `1873`, `1874`, `1875`, `1876`, `1878`, `1879`, `1880`, `1882`, `1884`, `1885`, `1888`, `1889`, `1892`, `1893`, `1895`, `1896`, `1897`, `1898`, `1899`, `1901`, `1903`, `1905`, `1907`, `1909`, `1911`, `1913`, `1915`, `1916`, `1918`, `1919`, `1921`, `1923`, `1925`, `1928`, `1931`, `1933`, `1935`, `1936`, `1938`, `1940`, `1943`, `1945`, `1946`, `1948`, `1951`, `1954`, `1956`, `1957`, `1958`, `1960`, `1962`, `1963`, `1965`, `1967`, `1969`, `1971`, `1973`, `1976`, `1977`, `1979`, `1981`, `1984`, `1986`, `1988`, `1989`, `1991`, `1994`, `1996`, `1999`, `2000`, `2001`, `2003`, `2004`, `2006`, `2008`, `2010`, `2011`, `2016`, `2017`, `2019`, `2020`, `2022`, `2023`, `2024`, `2025`, `2026`, `2027`, `2029`, `2031`, `2033`, `2034`, `2035`, `2036`, `2038`, `2041`, `2042`, `2043`, `2045`, `2047`, `2048`, `2049`, `2051`, `2053`, `2055`, `2057`, `2060`, `2063`, `2064`, `2066`, `2067`, `2068`, `2070`, `2071`, `2072`, `2073`, `2074`, `2075`, `2076`, `2079`, `2080`, `2082`, `2083`, `2084`, `2085`, `2086`, `2087`, `2089`, `2092`, `2094`, `2095`, `2098`, `2100`, `2102`, `2104`, `2105`, `2107`, `2109`, `2110`, `2112`, `2115`, `2117`, `2119`, `2120`, `2121`, `2123`, `2124`, `1482`, `2125`, `2127`, `2129`, `2132`, `2134`, `2137`, `2139`, `2140`, `2143`, `2146`, `2147`, `2148`, `2149`, `2150`, `2152`, `2154`, `2156`, `2157`, `2158`, `2159`, `2160`, `2161`, `2162`, `2164`, `2166`, `2168`, `2169`, `2170`, `2171`, `2173`, `2174`, `2177`, `2178`, `2180`, `2182`, `2183`, `2186`, `2188`, `2189`, `2191`, `2192`, `2193`, `2194`, `2195`, `2197`, `2198`, `2199`, `2200`, `2202`, `2206`, `2208`, `2209`, `2211`, `2214`, `2216`, `2217`, `2220`, `2221`, `2222`, `2223`, `2224`, `2225`, `2226`, `2228`, `2229`, `2230`, `2232`, `2234`, `2236`, `2237`, `2239`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2248`, `2249`, `2251`, `2252`, `2172`, `2254`, `2256`, `2257`, `2258`, `2259`, `2261`, `2262`, `2263`, `2265`, `2267`, `2268`, `2270`, `2274`, `2277`, `2279`, `2280`, `2281`, `2282`, `2284`, `2286`, `2287`, `2291`, `2293`, `2294`, `2296`, `2297`, `2298`, `2300`, `2303`, `2305`, `2307`, `2308`, `2310`, `2312`, `2314`, `2316`, `2317`, `2319`, `2321`, `2323`, `2325`, `2326`, `2328`, `2329`, `2330`, `2331`, `2332`, `2333`, `2334`, `2336`, `2338`, `2341`, `2343`, `2345`, `2348`, `2349`, `2351`, `2352`, `2353`, `2355`, `2356`, `2358`, `2359`, `2361`, `2362`, `2364`, `2366`, `2368`, `2369`, `2371`, `2373`, `2375`, `2377`, `2378`, `2379`, `2381`, `2382`, `2383`, `2384`, `2385`, `2387`, `2389`, `2392`, `2395`, `2396`, `2398`, `2399`, `2400`, `2404`, `2405`, `2406`, `2410`, `2411`, `2412`, `2413`, `2415`, `2418`, `2420`, `2421`, `2424`, `2425`, `2426`, `2429`, `2432`, `2434`, `2436`, `2437`, `2439`, `2440`, `2441`, `2443`, `2444`, `2446`, `2447`, `2450`, `2452`, `2454`, `2456`, `2459`, `2461`, `2464`, `2465`, `2467`, `2469`, `2471`, `2473`, `2474`, `2476`, `2478`, `2480`, `2481`, `2482`, `2483`, `2484`, `2486`, `2488`, `2489`, `2490`, `2491`, `2493`, `2495`, `2497`, `2499`, `2500`, `2502`, `2503`, `2505`, `2506`, `2507`, `2509`, `2511`, `2513`, `2514`, `2516`, `2518`, `2519`, `2521`, `2522`, `2524`, `2527`, `2528`, `2529`, `2531`, `2533`, `2534`, `2536`, `2537`, `2538`, `2540`, `2542`, `2543`, `2545`, `2546`, `2547`, `2549`, `2550`, `2552`, `2553`, `2556`, `2558`, `2560`, `2561`, `2562`, `2563`, `2564`, `2566`, `2567`, `2568`, `2572`, `2573`, `2574`, `2576`, 
`2577`, `2579`, `2580`, `2581`, `2583`, `2584`, `2585`, `2586`, `2587`, `2588`, `2589`, `2590`, `2591`, `2592`, `2594`, `2595`, `2598`, `2599`, `2603`, `2604`, `2606`, `2607`, `2608`, `2609`, `2612`, `2616`, `2619`, `2620`, `2622`, `2624`, `2625`, `2626`, `2627`, `2628`, `2631`, `2633`, `2635`, `2637`, `2638`, `2640`, `2641`, `2642`, `2643`, `2645`, `2646`, `2647`, `2649`, `2651`, `2654`, `2655`, `2658`, `2660`, `2661`, `2662`, `2663`, `2665`, `2666`, `1717`, `2667`, `2668`, `2669`, `2670`, `2671`, `2673`, `2674`, `2675`, `2676`, `2678`, `2680`, `2681`, `2684`, `2685`, `2687`, `2688`, `2690`, `2691`, `2692`, `2694`, `2695`, `2696`, `2697`, `2699`, `2701`, `2702`, `2705`, `2708`, `2709`, `2711`, `2714`, `2715`, `2716`, `2718`, `2721`, `2723`, `2724`, `2727`, `2728`, `2729`, `2732`, `2734`, `2737`, `2739`, `2740`, `2742`, `2743`, `2745`, `2748`, `2751`, `2754`, `2755`, `2756`, `2757`, `2758`, `2760`, `2762`, `2764`, `2765`, `2766`, `2428`, `2767`, `2768`, `2769`, `2770`, `2771`, `2774`, `2777`, `2779`, `2782`, `2783`, `2784`, `2786`, `2788`, `2789`, `2790`, `2791`, `2792`, `2794`, `2795`, `2796`, `2797`, `2799`, `2800`, `2801`, `2803`, `2807`, `2808`, `2809`, `2812`, `2816`, `2819`, `2822`, `2823`, `2824`, `2826`, `2827`, `2828`, `2830`, `2831`, `2832`, `2833`, `2834`, `2835`, `2837`, `2839`, `2840`, `2842`, `2843`, `2845`, `2846`, `2847`, `2848`, `2849`, `2851`, `2853`, `2854`, `2855`, `2856`, `2857`, `2858`, `2859`, `2860`, `2861`, `2862`, `2864`, `2865`, `2866`, `2868`, `2872`, `2875`, `2876`, `2878`, `2880`, `2881`, `2882`, `2883`, `2885`, `2886`, `2888`, `2889`, `2890`, `2891`, `2893`, `2894`, `2895`, `2896`, `2897`, `2898`, `2899`, `2902`, `2904`, `2906`, `2907`, `2908`, `2909`, `2912`, `2913`, `2915`, `2916`, `2917`, `2918`, `2921`, `2922`, `2923`, `2924`, `2925`, `2926`, `2928`, `2930`, `2931`, `2935`, `2936`, `2937`, `2938`, `2940`, `2233`, `2942`, `2944`, `2945`, `2947`, `2948`, `2949`, `2951`, `923`, `2952`, `2953`, `2954`, `2955`, `2957`, `2959`, `2962`, `2964`, `2966`, `2967`, `2969`, `2972`, `2973`, `2974`, `2976`, `1715`, `2977`, `2979`, `2980`, `36`, `2981`, `2983`, `2985`, `2986`, `2990`, `2991`, `2993`, `2995`, `2997`, `2998`, `3001`, `3002`, `3003`, `3005`, `3006`, `3007`, `3009`, `3012`, `3014`, `3015`, `3016`, `3018`, `3020`, `3021`, `3022`, `3023`, `3026`, `3028`, `3029`, `3030`, `3032`, `3035`, `3037`, `3039`, `3040`, `3042`, `3044`, `3047`, `3050`, `3052`, `3053`, `3041`, `3054`, `3055`, `3056`, `3057`, `3058`, `3059`, `3061`, `3062`, `3064`, `3066`, `3067`, `3068`, `3070`, `3071`, `3072`, `3073`, `3075`, `3078`, `3082`, `3084`, `3086`, `3087`, `3088`, `3090`, `3091`, `3092`, `3095`, `3096`, `3097`, `3099`, `3100`, `3102`, `3107`, `3109`, `3111`, `3112`, `3114`, `3116`, `3118`, `3120`, `3121`, `3123`, `3124`, `3126`, `3127`, `3129`, `3130`, `3133`, `3134`, `3135`, `3136`, `3137`, `3138`, `3139`, `3140`, `3142`, `3144`, `3145`, `3146`, `3147`, `3148`, `3149`, `3150`, `3151`, `3153`, `3155`, `3157`, `3158`, `3159`, `3160`, `3161`, `3163`, `3165`, `3167`, `3168`, `3170`, `3171`, `3172`, `3174`, `3176`, `3178`, `3180`, `3181`, `3184`, `3185`, `3186`, `3188`, `3189`, `3190`, `3192`, `3194`, `3195`, `3196`, `3197`, `3200`, `3201`, `3202`, `3203`, `3204`, `3205`, `3206`, `3207`, `3210`, `3211`, `3213`, `3214`, `3217`, `3218`, `3220`, `3222`, `3224`, `3227`, `3229`, `3230`, `3231`, `3233`, `3234`, `3235`, `3236`, `3237`, `3240`, `3241`, `3243`, `3245`, `3247`, `3250`, `3252`, `3253`, `3254`, `3255`, `3257`, `3259`, `3260`, `3262`, `3264`, `3266`, `3268`, `3269`, `3271`, 
`3273`, `3275`, `3277`, `3278`, `3141`, `3279`, `3280`, `3281`, `3282`, `3284`, `3285`, `3287`, `3288`, `3290`, `3291`, `3293`, `3294`, `3296`, `3297`, `3299`, `3300`, `3302`, `3304`, `3305`, `3306`, `3308`, `3309`, `3311`, `3313`, `3314`, `3315`, `3316`, `3317`, `3319`, `3321`, `3323`, `3324`, `3325`, `3327`, `3329`, `3332`, `3333`, `3334`, `3336`, `3337`, `3338`, `3340`, `3341`, `3342`, `3344`, `3346`, `3348`, `3351`, `3353`, `3355`, `3357`, `3360`, `3361`, `3364`, `3367`, `3369`, `3370`, `3372`, `3373`, `3374`, `3377`, `3379`, `3380`, `3382`, `3384`, `3385`, `3387`, `3389`, `3391`, `3392`, `3393`, `3394`, `3395`, `3397`, `3399`, `3400`, `3402`, `3403`, `3404`, `3405`, `3406`, `3407`, `3408`, `3412`, `3414`, `3416`, `3418`, `3420`, `3422`, `3423`, `3424`, `3425`, `3426`, `3428`, `3429`, `3431`, `3432`, `3435`, `3436`, `3438`, `3439`, `3441`, `3443`, `3445`, `3447`, `3450`, `3451`, `3453`, `3455`, `3456`, `3457`, `3458`, `3459`, `3461`, `3462`, `3464`, `3465`, `3467`, `3469`, `3471`, `3473`, `3474`, `3475`, `3476`, `3478`, `3479`, `3481`, `3482`, `3484`, `3487`, `3488`, `3489`, `3491`, `3492`, `3493`, `3494`, `3497`, `3500`, `3501`, `3502`, `3504`, `3506`, `3507`, `3508`, `3511`, `3515`, `3516`, `3518`, `3521`, `3524`, `3526`, `3528`, `3529`, `3532`, `3535`, `3537`, `3538`, `3539`, `3540`, `3541`, `3543`, `3545`, `3546`, `3547`, `3548`, `3549`, `3550`, `3551`, `3553`, `3555`, `3556`, `3557`, `3559`, `3561`, `3563`, `3564`, `3565`, `3567`, `3570`, `3572`, `3574`, `3575`, `3577`, `3579`, `3581`, `3582`, `3584`, `3585`, `3587`, `3588`, `3590`, `3591`, `3592`, `3594`, `3596`, `3599`, `3600`, `3603`, `3605`, `3606`, `3607`, `3608`, `3610`, `3612`, `3615`, `3617`, `3618`, `3619`, `3620`, `3621`, `3623`, `3624`, `3625`, `3626`, `3628`, `3629`, `3630`, `3632`, `3633`, `3635`, `3637`, `3639`, `3642`, `3643`, `3645`, `3646`, `3649`, `3650`, `3652`, `3653`, `3655`, `3656`, `3657`, `3658`, `3659`, `3662`, `3664`, `3665`, `3666`, `3668`, `3671`, `3672`, `3674`, `3676`, `3678`, `3679`, `3680`, `3681`, `3683`, `3684`, `3685`, `3687`, `3688`, `3689`, `3690`, `3691`, `3693`, `3694`, `3695`, `3697`, `3698`, `3699`, `3700`, `3702`, `3703`, `3704`, `3706`, `3709`, `3712`, `3713`, `3714`, `3718`, `3719`, `3721`, `3722`, `3724`, `3725`, `3726`, `3727`, `3730`, `3731`, `3732`, `3734`, `3735`, `3737`, `3739`, `3742`, `3743`, `3744`, `3745`, `3746`, `3747`, `3748`, `3750`, `3752`, `3753`, `3755`, `3757`, `3759`, `3760`, `3762`, `3763`, `3764`, `3765`, `3766`, `3768`, `3770`, `3771`, `3774`, `3775`, `3776`, `3778`, `3779`, `3780`, `3782`, `3784`, `3785`, `3786`, `3789`, `3792`, `3794`, `3795`, `3796`, `3798`, `3799`, `3800`, `3802`, `3803`, `3805`, `3807`, `3808`, `3809`, `3812`, `3815`, `3817`, `3818`, `3819`, `3821`, `3823`, `3824`, `3826`, `3828`, `3829`, `3831`, `3833`, `3834`, `3836`, `3839`, `3840`, `3843`, `3846`, `3849`, `3851`, `3852`, `3853`, `3855`, `3856`, `3859`, `3860`, `3862`, `3864`, `3865`, `3866`, `3868`, `3870`, `3871`, `3872`, `3874`, `3875`, `3876`, `3878`, `3879`, `3880`, `3881`, `3882`, `3884`, `3886`, `3887`, `3890`, `3891`, `3892`, `3893`, `3894`, `3896`, `3897`, `3899`, `3900`, `3901`, `3903`, `3904`, `3905`, `3906`, `3907`, `3908`, `3909`, `3910`, `3911`, `3912`, `3913`, `3915`, `3916`, `3919`, `3921`, `3923`, `3924`, `3926`, `3927`, `3928`, `3930`, `3931`, `3932`, `3934`, `3936`, `3939`, `3941`, `3942`, `3943`, `3946`, `3948`, `3949`, `3950`, `3951`, `3952`, `3954`, `3956`, `3957`, `3958`, `3960`, `3961`, `3964`, `3967`, `3968`, `3971`, `3974`, `3975`, `3976`, `3979`, `3981`, `3983`, 
`3985`, `3986`, `3989`, `3990`, `3993`, `3994`, `3995`, `3996`, `3997`, `3998`, `3999`, `4001`, `4003`, `4004`, `4005`, `4007`, `4009`, `4010`, `4011`, `4013`, `4014`, `4015`, `4017`, `4019`, `4022`, `4023`, `4025`, `4026`, `4027`, `4028`, `4029`, `4030`, `4032`, `4035`, `4037`, `4040`, `4041`, `4042`, `4043`, `4045`, `4048`, `4051`, `4053`, `4055`, `4057`, `4058`, `4059`, `4060`, `4061`, `4062`, `4063`, `4065`, `4067`, `4068`, `4070`, `4072`, `4073`, `4074`, `4075`, `4077`, `4080`, `4081`, `4083`, `4085`, `4088`, `4089`, `4091`, `4093`, `4094`, `4095`, `4096`, `4098`, `4101`, `4102`, `4104`, `4105`, `4106`, `4108`, `4109`, `4111`, `4112`, `4113`, `4115`, `4117`, `4119`, `4122`, `4123`, `4124`, `4125`, `4126`, `4127`, `4128`, `4130`, `4131`, `4134`, `4135`, `4136`, `4137`, `4138`, `4139`, `4141`, `4143`, `4145`, `4147`, `4148`, `4150`, `4151`, `4154`, `4155`, `4157`, `4159`, `4160`, `4163`, `4164`, `4166`, `4169`, `4171`, `4172`, `4173`, `4175`, `4176`, `4177`, `4179`, `4180`, `4181`, `4183`, `4184`, `4185`, `4187`, `4188`, `4190`, `4191`, `4193`, `4194`, `4195`, `4198`, `4201`, `4204`, `4205`, `4206`, `4209`, `4210`, `4212`, `4215`, `4216`, `4218`, `4219`, `4224`, `4225`, `4227`, `4229`, `4230`, `4231`, `4232`, `4234`, `4236`, `4237`, `4238`, `4239`, `4242`, `4244`, `4246`, `4247`, `4250`, `4251`, `4253`, `4256`, `4260`, `4261`, `4263`, `4265`, `4267`, `4268`, `4269`, `4270`, `4272`, `4274`, `4277`, `4278`, `4279`, `4281`, `4282`, `4284`, `4286`, `4287`, `4288`, `4291`, `4293`, `4294`, `4295`, `4296`, `4298`, `4299`, `4301`, `4303`, `4305`, `4306`, `4307`, `4308`, `4309`, `4310`, `4313`, `4315`, `4317`, `4319`, `4320`, `4322`, `4324`, `4326`, `4328`, `4329`, `4331`, `4332`, `4333`, `4334`, `4335`, `4336`, `4338`, `4340`, `4343`, `4344`, `4346`, `4347`, `4348`, `4349`, `4351`, `4353`, `4355`, `4357`, `4358`, `4359`, `4360`, `4361`, `4362`, `4363`, `4365`, `4367`, `4369`, `4372`, `4373`, `4374`, `4375`, `4379`, `4381`, `4383`, `4385`, `4386`, `4388`, `4389`, `4391`, `4392`, `4393`, `4395`, `4396`, `4399`, `4400`, `4402`, `4404`, `4406`, `4407`, `4411`, `4412`, `4413`, `4414`, `4415`, `4418`, `4420`, `4422`, `4425`, `4426`, `4428`, `4429`, `4430`, `4432`, `4433`, `4435`, `4438`, `4440`, `4442`, `4444`, `4445`, `4446`, `4448`, `4450`, `4451`, `4452`, `4455`, `4457`, `4459`, `4461`, `4462`, `4464`, `4467`, `4468`, `4469`, `4470`, `4471`, `4473`, `4474`, `4475`, `4478`, `4480`, `4483`, `4485`, `4487`, `4488`, `4490`, `4491`, `4493`, `867`, `4494`, `4496`, `4497`, `4498`, `4499`, `4500`, `4501`, `4503`, `4505`, `4507`, `4508`, `4509`, `4510`, `4512`, `4515`, `4517`, `4518`, `4519`, `4521`, `1589`, `4522`, `4524`, `4525`, `4527`, `4529`, `4531`, `4533`, `4534`, `4535`, `4537`, `4538`, `4539`, `4540`, `4541`, `4542`, `4543`, `4544`, `4545`, `4546`, `4547`, `4549`, `4551`, `4552`, `4553`, `4554`, `4556`, `4557`, `4558`, `4559`, `4562`, `4563`, `4566`, `4567`, `4569`, `4570`, `4572`, `4574`, `4576`, `4577`, `4579`, `4580`, `4581`, `4583`, `4585`, `4586`, `4588`, `4591`, `4592`, `4594`, `4595`, `4596`, `4597`, `4598`, `4599`, `4600`, `4601`, `4603`, `4606`, `4608`, `4609`, `4610`, `4612`, `4614`, `4616`, `4617`, `4620`, `4621`, `4623`, `4624`, `4625`, `4626`, `4627`, `4629`, `4631`, `4633`, `4635`, `4636`, `4637`, `4638`, `4639`, `4640`, `4642`, `4644`, `4646`, `4647`, `4648`, `4649`, `4650`, `4651`, `4653`, `4655`, `4657`, `4658`, `4659`, `4661`, `4662`, `4663`, `4664`, `4665`, `4667`, `4668`, `4669`, `4671`, `4673`, `4675`, `4676`, `4680`, `4681`, `4683`, `4684`, `4686`, `4687`, `4690`, `4693`, 
`4695`, `4696`, `4699`, `4700`, `4702`, `4703`, `4704`, `4707`, `4708`, `4709`, `4710`, `4711`, `4713`, `4715`, `4716`, `4718`, `4719`, `4721`, `4726`, `4727`, `4729`, `4731`, `4735`, `4737`, `4738`, `4739`, `4741`, `4743`, `4744`, `4748`, `4749`, `4753`, `4755`, `4756`, `4757`, `4758`, `4759`, `4761`, `4763`, `4764`, `4766`, `4768`, `4769`, `4770`, `4772`, `4774`, `4775`, `4777`, `4779`, `4780`, `4782`, `4783`, `4785`, `4787`, `4788`, `4791`, `4792`, `4793`, `4795`, `4797`, `4801`, `4802`, `4804`, `4806`, `4808`, `4809`, `4810`, `4811`, `4813`, `4815`, `4817`, `4818`, `4820`, `4821`, `4823`, `4826`, `4827`, `4828`, `4830`, `4831`, `4833`, `4834`, `4838`, `4840`, `4843`, `4845`, `4847`, `4848`, `4849`, `4850`, `4851`, `4854`, `4855`, `4856`, `4858`, `4860`, `4862`, `4863`, `4864`, `4866`, `4867`, `4869`, `4871`, `4872`, `4874`, `4875`, `4876`, `4878`, `4880`, `4881`, `4883`, `4885`, `4886`, `4889`, `4890`, `4892`, `4893`, `4894`, `4896`, `4897`, `4899`, `4900`, `4902`, `4903`, `4904`, `4905`, `4907`, `4908`, `4909`, `4911`, `4913`, `4914`, `4918`, `4920`, `4922`, `4924`, `4925`, `4926`, `4927`, `4928`, `4929`, `4931`, `4932`, `4933`, `4934`, `4935`, `4937`, `813`, `4941`, `4943`, `4945`, `4946`, `4947`, `4948`, `4950`, `4952`, `4954`, `4955`, `4956`, `4959`, `4962`, `4963`, `4964`, `4967`, `4969`, `4970`, `4972`, `4973`, `4974`, `4976`, `4977`, `4978`, `4980`, `4982`, `4984`, `4986`, `4989`, `4990`, `4991`, `4992`, `4994`, `4995`, `4997`, `4999`, `5002`, `5003`, `5004`, `5005`, `5007`, `5009`, `5010`, `5013`, `5014`, `5016`, `5017`, `5018`, `5019`, `5020`, `5021`, `5022`, `5024`, `5025`, `5026`, `5027`, `5029`, `5030`, `5032`, `5034`, `5035`, `5036`, `5037`, `5039`, `5042`, `5043`, `5045`, `5046`, `5049`, `5051`, `5053`, `5054`, `5056`, `5057`, `5058`, `5061`, `5063`, `5066`, `5068`, `5069`, `5070`, `5071`, `5072`, `5075`, `5077`, `5078`, `5080`, `5082`, `5084`, `5085`, `5087`, `5089`, `5090`, `5092`, `5094`, `5095`, `5096`, `5099`, `5100`, `5101`, `5102`, `5104`, `5105`, `5107`, `5109`, `5110`, `5112`, `5116`, `5120`, `5121`, `5122`, `5124`, `5125`, `5127`, `5128`, `5129`, `5132`, `5133`, `5135`, `5138`, `5141`, `5142`, `5143`, `5144`, `5145`, `5146`, `5148`, `5150`, `5151`, `5154`, `5155`, `5156`, `5159`, `5162`, `5163`, `5164`, `5165`, `5166`, `5168`, `5169`, `5170`, `5172`, `5173`, `5174`, `5176`, `5177`, `5179`, `5181`, `5182`, `957`, `5183`, `5184`, `5185`, `5188`, `5189`, `5191`, `5192`, `5195`, `5196`, `5198`, `5200`, `5201`, `5203`, `5204`, `5205`, `5207`, `5208`, `5210`, `5211`, `5214`, `5215`, `5216`, `5217`, `5218`, `5219`, `5220`, `5221`, `5222`, `5224`, `5225`, `5226`, `5227`, `5229`, `5231`, `5232`, `5234`, `5235`, `5237`, `5238`, `5240`, `5241`, `5242`, `5245`, `5246`, `5251`, `5253`, `5256`, `5257`, `2677`, `5259`, `5261`, `5263`, `5264`, `5266`, `5267`, `5271`, `5274`, `5275`, `5279`, `5280`, `5281`, `5283`, `5285`, `5287`, `5289`, `5290`, `5291`, `5293`, `5296`, `5297`, `5299`, `5300`, `5301`, `5302`, `5305`, `5307`, `5309`, `5311`, `5314`, `5315`, `5316`, `5317`, `5319`, `5320`, `5321`, `5323`, `5324`, `5326`, `5327`, `5329`, `5331`, `5332`, `5333`, `5334`, `5336`, `5337`, `5339`, `5340`, `5341`, `5343`, `5346`, `5347`, `5348`, `5349`, `5351`, `5352`, `5353`, `5354`, `5356`, `5357`, `1020`, `5358`, `5359`, `5360`, `5361`, `5362`, `5363`, `5364`, `5365`, `5367`, `5369`, `5370`, `5371`, `5373`, `5374`, `5377`, `5379`, `5382`, `5383`, `5384`, `5386`, `5387`, `5389`, `5390`, `5393`, `5394`, `5396`, `5397`, `5399`, `5400`, `5402`, `5403`, `5404`, `4463`, `5406`, `5409`, 
`5410`, `5412`, `5413`, `5415`, `5416`, `5417`, `5419`, `5420`, `5421`, `5422`, `5423`, `5425`, `5428`, `5429`, `5431`, `5432`, `5434`, `5435`, `5437`, `5439`, `5441`, `5446`, `5447`, `5450`, `5452`, `5453`, `5456`, `5458`, `5462`, `5464`, `5465`, `5467`, `5468`, `5469`, `5470`, `5471`, `5473`, `5475`, `5476`, `5477`, `5479`, `5480`, `5482`, `5484`, `5485`, `5487`, `5489`, `3877`, `5490`, `5492`, `5493`, `5494`, `5497`, `5498`, `5499`, `5500`, `5503`, `5505`, `5506`, `5509`, `5510`, `5511`, `5513`, `5514`, `5517`, `5520`, `5521`, `5522`, `5524`, `5526`, `5529`, `5530`, `5531`, `5532`, `5533`, `5534`, `5535`, `5536`, `5537`, `5539`, `5540`, `5542`, `5543`, `5545`, `5546`, `5548`, `5549`, `5550`, `5552`, `5554`, `5556`, `5557`, `5559`, `5560`, `3089`, `5563`, `5564`, `5565`, `5567`, `5569`, `5570`, `5572`, `5575`, `5576`, `5578`, `5579`, `5580`, `5582`, `5583`, `5584`, `5585`, `5587`, `5589`, `5590`, `5591`, `5595`, `5597`, `5598`, `5599`, `5602`, `5603`, `5606`, `5608`, `5611`, `5613`, `4981`, `5614`, `5616`, `5617`, `5622`, `5623`, `5624`, `5625`, `5626`, `5627`, `5630`, `5631`, `5633`, `5634`, `5635`, `5637`, `3169`, `5639`, `5641`, `5643`, `5645`, `5646`, `5649`, `5651`, `5654`, `5655`, `5657`, `5659`, `5660`, `5662`, `5663`, `5664`, `5665`, `5667`, `5668`, `5669`, `5670`, `5671`, `5672`, `5673`, `5676`, `5681`, `5682`, `5683`, `5684`, `5685`, `5687`, `5689`, `5691`, `5693`, `5694`, `5698`, `5700`, `5702`, `5703`, `5704`, `5706`, `5708`, `5709`, `5710`, `5713`, `5715`, `5717`, `5718`, `5719`, `5723`, `5724`, `5725`, `5726`, `5728`, `5730`, `5731`, `5733`, `5734`, `5736`, `5738`, `5741`, `5743`, `5744`, `5747`, `5748`, `5749`, `5751`, `5752`, `5754`, `5756`, `5757`, `5759`, `5760`, `5761`, `5762`, `5763`, `5764`, `5766`, `5768`, `5770`, `5771`, `5773`, `5775`, `5776`, `5777`, `5778`, `5780`, `5782`, `5784`, `5786`, `5787`, `5788`, `5790`, `5791`, `5792`, `5795`, `5796`, `5798`, `5799`, `5800`, `5801`, `5802`, `5805`, `5806`, `5811`, `5813`, `5814`, `5815`, `5816`, `5817`, `5818`, `5820`, `5821`, `5822`, `5823`, `5824`, `5827`, `5830`, `5832`, `5833`, `5834`, `5836`, `5837`, `5839`, `5840`, `5841`, `5842`, `5845`, `5847`, `5849`, `5851`, `5853`, `5856`, `5859`, `5862`, `5863`, `5865`, `5867`, `5868`, `5870`, `5872`, `5873`, `5875`, `5876`, `5877`, `5878`, `5879`, `5881`, `5883`, `5886`, `5887`, `5888`, `5889`, `5891`, `5892`, `5895`, `5896`, `5898`, `5900`, `5903`, `5904`, `5905`, `5906`, `5908`, `5909`, `5912`, `5915`, `5916`, `5917`, `5918`, `5919`, `5920`, `5922`, `5923`, `5925`, `5927`, `5928`, `5929`, `5931`, `5932`, `5933`, `5935`, `5939`, `5940`, `5941`, `5943`, `5945`, `5947`, `5948`, `5950`, `5951`, `5952`, `5955`, `5956`, `5957`, `5958`, `5959`, `5961`, `5962`, `5963`, `5964`, `5965`, `5967`, `5968`, `5969`, `5970`, `5971`, `5972`, `5974`, `5976`, `5977`, `5978`, `5980`, `5982`, `5983`, `5984`, `5986`, `5987`, `5988`, `5990`, `5991`, `5993`, `5995`, `5996`, `5999`, `6000`, `6003`, `6004`, `6006`, `6009`, `6010`, `6011`, `6012`, `6013`, `6015`, `6016`, `6019`, `6020`, `6022`, `6024`, `6025`, `6028`, `6031`, `6032`, `6036`, `6037`, `6039`, `6040`, `6041`, `6042`, `6044`, `6046`, `6047`, `6048`, `6049`, `6050`, `6051`, `6052`, `6054`, `6056`, `6057`, `6058`, `6059`, `6061`, `6062`, `6063`, `6065`, `6066`, `6068`, `6069`, `6071`, `6072`, `6073`, `6074`, `6075`, `6076`, `6078`, `6079`, `6080`, `6082`, `6083`, `6085`, `6087`, `6088`, `6090`, `6091`, `6092`, `6094`, `6095`, `6096`, `6097`, `6099`, `6100`, `6102`, `6104`, `6106`, `6108`, `6109`, `6110`, `6111`, `6112`, `6115`, `6118`, 
`6121`, `6123`, `6124`, `6125`, `6127`, `6128`, `6129`, `6130`, `6131`, `6132`, `6133`, `6134`, `6135`, `6136`, `6137`, `6138`, `6139`, `6140`, `6141`, `6142`, `6143`, `6144`, `6145`, `6147`, `6149`, `6151`, `6153`, `6154`, `6155`, `6156`, `6157`, `6158`, `6160`, `6161`, `6162`, `6163`, `6165`, `6166`, `6167`, `6168`, `6169`, `6170`, `6172`, `6174`, `6176`, `6177`, `6178`, `6180`, `6183`, `6185`, `6188`, `6190`, `6194`, `6196`, `6197`, `6198`, `6199`, `6201`, `6202`, `6203`, `6206`, `6207`, `6210`, `6211`, `6212`, `6214`, `6215`, `6218`, `6219`, `6220`, `6222`, `6223`, `6224`, `6225`, `6226`, `6228`, `6229`, `6230`, `6232`, `6236`, `6238`, `6240`, `6242`, `6243`, `6245`, `6246`, `6247`, `6249`, `6250`, `6252`, `6253`, `6255`, `6257`, `6258`, `6261`, `6262`, `6263`, `6264`, `6266`, `6268`, `6269`, `6270`, `6273`, `6274`, `6275`, `6276`, `6277`, `6278`, `6280`, `6282`, `6283`, `6284`, `6287`, `6289`, `6290`, `6291`, `6292`, `6293`, `6295`, `1732`, `6296`, `6299`, `6300`, `6302`, `6303`, `6305`, `6306`, `6307`, `6308`, `6309`, `6310`, `6311`, `6312`, `6315`, `6317`, `6319`, `6320`, `6322`, `6323`, `6324`, `6325`, `6328`, `6330`, `6331`, `6332`, `6333`, `6334`, `6336`, `6338`, `6339`, `6341`, `6343`, `6345`, `6347`, `6348`, `6349`, `6351`, `6352`, `6354`, `6357`, `6358`, `6360`, `6361`, `6362`, `6364`, `6365`, `6367`, `6369`, `6370`, `6371`, `111`, `6372`, `6373`, `2065`, `6374`, `6375`, `6377`, `6378`, `6380`, `6381`, `6382`, `6384`, `6385`, `6386`, `6387`, `6388`, `6391`, `6392`, `6393`, `6394`, `6396`, `6397`, `6399`, `6400`, `6401`, `6402`, `6404`, `6407`, `6408`, `6409`, `6411`, `6414`, `6416`, `6418`, `6419`, `6421`, `6422`, `6423`, `6425`, `6426`, `6428`, `6429`, `6430`, `6431`, `6432`, `6434`, `6435`, `6436`, `6437`, `6438`, `6440`, `6441`, `6442`, `6443`, `6444`, `6445`, `6447`, `6449`, `6451`, `6452`, `6455`, `6456`, `6457`, `6458`, `6459`, `6460`, `6462`, `6463`, `6464`, `6465`, `6466`, `6469`, `6470`, `6471`, `6473`, `6474`, `6475`, `6476`, `6478`, `6480`, `6481`, `6482`, `6485`, `6486`, `6487`, `6488`, `6489`, `6490`, `6491`, `6493`, `6494`, `6495`, `6497`, `6498`, `6499`, `5134`, `6500`, `6501`, `6502`, `6503`, `6504`, `6506`, `6508`, `6509`, `6510`, `6511`, `6512`, `6514`, `6515`, `6516`, `6517`, `6518`, `6519`, `6520`, `6521`, `6523`, `6526`, `6527`, `6529`, `6531`, `6533`, `6535`, `6536`, `6537`, `6538`, `6539`, `6540`, `6543`, `6544`, `6545`, `6547`, `6550`, `6551`, `6552`, `6553`, `6554`, `6555`, `6557`, `6559`, `6560`, `6561`, `6562`, `6564`, `6565`, `6567`, `6568`, `6569`, `6570`, `6571`, `6574`, `6575`, `6578`, `6579`, `6580`, `6581`, `6583`, `6584`, `6586`, `6588`, `6589`, `6591`, `6593`, `6595`, `6597`, `6599`, `6600`, `6601`, `6602`, `6604`, `6605`, `6607`, `6609`, `6611`, `6614`, `6615`, `6616`, `6618`, `6619`, `6620`, `6622`, `6623`, `1924`, `6626`, `6628`, `6629`, `6631`, `6633`, `6635`, `6637`, `6638`, `6639`, `6641`, `6643`, `6644`, `6647`, `6649`, `6650`, `6651`, `6652`, `6654`, `6655`, `6656`, `6658`, `6659`, `6661`, `6662`, `6663`, `6664`, `6665`, `6666`, `6667`, `6669`, `6670`, `6672`, `6673`, `6674`, `6675`, `6676`, `6678`, `6680`, `6681`, `6682`, `6684`, `6685`, `6689`, `6690`, `6691`, `6694`, `6696`, `6697`, `6698`, `6699`, `6701`, `6702`, `6703`, `6704`, `6706`, `6707`, `6709`, `6710`, `6712`, `6714`, `6715`, `6717`, `6718`, `6719`, `6720`, `6721`, `6724`, `6725`, `6727`, `6730`, `6732`, `6733`, `6736`, `6739`, `6740`, `6743`, `6745`, `6746`, `6747`, `6748`, `6749`, `6751`, `6754`, `6755`, `6756`, `6757`, `6758`, `6759`, `6761`, `6763`, `6765`, `6768`, 
`6770`, `6773`, `6774`, `6775`, `6777`, `6778`, `6780`, `6783`, `6784`, `6785`, `6787`, `6789`, `6790`, `6792`, `6796`, `6799`, `6800`, `6801`, `6802`, `6803`, `6805`, `6807`, `6808`, `6810`, `6812`, `6814`, `6817`, `6819`, `6821`, `6822`, `6824`, `6826`, `6828`, `6829`, `6830`, `6832`, `6834`, `6835`, `6836`, `6839`, `6841`, `6844`, `6846`, `6848`, `6850`, `6851`, `6852`, `6853`, `6854`, `6855`, `6856`, `6858`, `6859`, `6860`, `6862`, `6863`, `6864`, `6866`, `6868`, `6869`, `6871`, `6873`, `6877`, `6880`, `6884`, `6885`, `6887`, `6888`, `6889`, `6892`, `6893`, `6894`, `6895`, `6898`, `6900`, `6901`, `6902`, `6904`, `6905`, `6906`, `6907`, `6909`, `6911`, `6914`, `6915`, `6916`, `6918`, `6919`, `6921`, `6922`, `6923`, `6924`, `6925`, `6926`, `6929`, `6930`, `6931`, `6934`, `6935`, `6937`, `6939`, `6940`, `6941`, `6944`, `6946`, `6947`, `6948`, `6950`, `6952`, `6954`, `6956`, `6957`, `6959`, `6960`, `6961`, `6963`, `6964`, `6965`, `6966`, `6968`, `6969`, `6970`, `6971`, `6972`, `6973`, `6974`, `6975`, `6977`, `1222`, `6979`, `6980`, `6981`, `6982`, `6983`, `6984`, `6985`, `6987`, `6988`, `6989`, `6990`, `6991`, `6992`, `6993`, `6994`, `6997`, `6998`, `7000`, `7001`, `7002`, `7003`, `7004`, `7007`, `7009`, `7010`, `7011`, `7013`, `7014`, `7016`, `7017`, `7019`, `7020`, `7021`, `7023`, `7024`, `7026`, `2231`, `7027`, `7028`, `7029`, `7031`, `7032`, `7033`, `7034`, `7035`, `7037`, `7038`, `7039`, `7040`, `7042`, `7043`, `7044`, `7045`, `7046`, `7048`, `7049`, `7051`, `7053`, `7055`, `7059`, `7060`, `7061`, `7062`, `7064`, `7065`, `7067`, `7068`, `7071`, `7072`, `7073`, `7074`, `7076`, `7077`, `7081`, `7084`, `7085`, `7088`, `7090`, `7092`, `7093`, `7095`, `7096`, `7097`, `7098`, `7100`, `7101`, `7102`, `7104`, `7107`, `7108`, `7112`, `7113`, `7115`, `7116`, `7117`, `7120`, `7121`, `7122`, `7123`, `7124`, `7125`, `7126`, `7128`, `7131`, `7132`, `7133`, `7134`, `7135`, `7138`, `7140`, `7141`, `7142`, `7143`, `7145`, `7146`, `7148`, `7149`, `7152`, `7156`, `7158`, `7159`, `7160`, `7161`, `7162`, `7163`, `7166`, `7169`, `7170`, `7173`, `7174`, `7177`, `7178`, `7179`, `7180`, `7181`, `7183`, `7184`, `7185`, `7186`, `7188`, `7189`, `7191`, `7192`, `7195`, `7198`, `7199`, `7201`, `7203`, `7204`, `7205`, `7206`, `7208`, `7213`, `7215`, `7216`, `7219`, `7221`, `7224`, `7225`, `7227`, `7229`, `7231`, `7232`, `7235`, `7236`, `7237`, `7239`, `7240`, `7242`, `7243`, `7245`, `7246`, `7247`, `7248`, `7252`, `7253`, `7254`, `7256`, `7258`, `7259`, `7260`, `7262`, `7263`, `7264`, `7266`, `7268`, `7270`, `7271`, `7272`, `7273`, `7274`, `7276`, `7277`, `7278`, `7281`, `7282`, `7283`, `7286`, `7288`, `7290`, `1256`, `7291`, `7292`, `7293`, `7295`, `7298`, `7299`, `7301`, `7302`, `7303`, `7304`, `7306`, `7307`, `7308`, `7310`, `7312`, `7313`, `7316`, `7317`, `7318`, `7319`, `7320`, `7323`, `7324`, `7326`, `7328`, `7331`, `7332`, `7334`, `7336`, `7337`, `7338`, `7340`, `7342`, `7343`, `7344`, `7345`, `7346`, `7347`, `7348`, `7350`, `7352`, `7353`, `5131`, `7354`, `7356`, `7358`, `7360`, `7362`, `7363`, `7366`, `7367`, `7368`, `7369`, `7373`, `7374`, `7375`, `7376`, `7377`, `7378`, `7379`, `7382`, `7383`, `7384`, `7385`, `7386`, `7387`, `7388`, `7389`, `7392`, `7395`, `7397`, `7398`, `7400`, `7402`, `7405`, `7406`, `7408`, `7410`, `7411`, `7412`, `7414`, `7416`, `7417`, `7419`, `7421`, `7423`, `7425`, `7427`, `7428`, `7429`, `7430`, `7432`, `7434`, `7435`, `7436`, `7437`, `7439`, `7440`, `7443`, `7444`, `7445`, `7447`, `7448`, `7449`, `7451`, `7453`, `7454`, `7456`, `7458`, `7459`, `7460`, `7462`, `7463`, `7464`, 
`7465`, `7466`, `7467`, `7468`, `7469`, `7470`, `7471`, `7472`, `7475`, `7477`, `7478`, `7479`, `7481`, `7482`, `7483`, `7484`, `7485`, `7486`, `7487`, `7488`, `7490`, `7492`, `7496`, `7497`, `7498`, `7500`, `7501`, `7503`, `7505`, `7506`, `7509`, `7511`, `7512`, `7514`, `7515`, `7516`, `7518`, `7522`, `7523`, `7524`, `7255`, `7526`, `7527`, `7530`, `7532`, `7533`, `7535`, `7536`, `7539`, `7541`, `7544`, `7547`, `7548`, `7550`, `7552`, `7553`, `7555`, `7556`, `7558`, `7559`, `7560`, `7561`, `7563`, `7564`, `7565`, `7566`, `7567`, `7569`, `7571`, `7575`, `7577`, `7578`, `7580`, `7581`, `7585`, `7586`, `7588`, `7590`, `7593`, `7595`, `7597`, `7599`, `7600`, `7601`, `7603`, `7605`, `7607`, `7608`, `7609`, `7610`, `7611`, `7612`, `7613`, `7614`, `7615`, `7616`, `7617`, `7619`, `7620`, `7621`, `7622`, `7623`, `7625`, `7628`, `7630`, `7631`, `7632`, `7634`, `7635`, `3191`, `7636`, `7637`, `7639`, `7641`, `7642`, `7643`, `7644`, `7645`, `7646`, `7647`, `7648`, `7649`, `7650`, `7652`, `7653`, `7654`, `7655`, `7657`, `7658`, `7659`, `7660`, `7661`, `7662`, `7664`, `7665`, `7667`, `7668`, `7670`, `7672`, `7673`, `7674`, `7675`, `7677`, `7678`, `7679`, `7680`, `7681`, `7682`, `7684`, `7686`, `7687`, `7688`, `7690`, `7692`, `7693`, `7695`, `7696`, `7698`, `7700`, `7701`, `7703`, `7704`, `7707`, `7710`, `7711`, `7713`, `7714`, `7715`, `7717`, `7718`, `7719`, `7721`, `7722`, `7723`, `7725`, `7726`, `7728`, `7729`, `7730`, `7731`, `7732`, `7733`, `7734`, `7735`, `7737`, `7739`, `7741`, `7743`, `7744`, `7745`, `7748`, `7750`, `7752`, `7753`, `7755`, `7756`, `7757`, `7758`, `7759`, `7760`, `7761`, `7762`, `7763`, `7764`, `7765`, `7766`, `7767`, `7768`, `7769`, `7771`, `7772`, `7774`, `7775`, `7776`, `7778`, `7779`, `7781`, `7782`, `7784`, `7785`, `7788`, `7789`, `7790`, `7791`, `7793`, `7794`, `7796`, `7798`, `7800`, `7801`, `7803`, `7804`, `7806`, `7808`, `7810`, `7811`, `7813`, `7816`, `7817`, `7819`, `7822`, `7824`, `7826`, `7828`, `7831`, `7833`, `7834`, `7836`, `7838`, `7840`, `7841`, `7842`, `7844`, `7846`, `7848`, `7850`, `7851`, `7852`, `7853`, `7854`, `7855`, `7856`, `7857`, `7859`, `7860`, `7861`, `7862`, `7863`, `7866`, `7868`, `7871`, `7873`, `7875`, `7876`, `7878`, `7880`, `7883`, `7884`, `7885`, `7886`, `7888`, `7889`, `7891`, `7894`, `7895`, `7896`, `7898`, `7899`, `7900`, `7901`, `7902`, `7903`, `7905`, `7907`, `7909`, `7910`, `7912`, `7914`, `7915`, `7916`, `7917`, `7919`, `5472`, `7920`, `7921`, `7922`, `7923`, `7924`, `7926`, `7928`, `7930`, `7931`, `7933`, `7934`, `7935`, `7937`, `7938`, `7939`, `7941`, `7942`, `7945`, `7946`, `7947`, `7948`, `7951`, `7952`, `7953`, `7955`, `7956`, `7959`, `7960`, `7961`, `7962`, `7963`, `7964`, `7965`, `7966`, `7967`, `7969`, `7970`, `7971`, `7972`, `7974`, `7975`, `7976`, `7977`, `7978`, `7979`, `7982`, `7984`, `7985`, `7987`, `7988`, `7989`, `7990`, `7992`, `7993`, `7994`, `7995`, `7997`, `7998`, `7999`, `8000`, `8001`, `8002`, `8007`, `8008`, `8009`, `8011`, `8012`, `8014`, `8016`, `8019`, `8021`, `8023`, `8025`, `8027`, `8028`, `8030`, `8031`, `8032`, `8033`, `8035`, `8037`, `3820`, `8038`, `8040`, `8042`, `8044`, `8046`, `8047`, `8048`, `8049`, `2686`, `8050`, `8051`, `8053`, `8054`, `8055`, `8056`, `8058`, `8061`, `8062`, `8064`, `8065`, `8066`, `8067`, `8068`, `8069`, `8071`, `8072`, `8073`, `8074`, `8075`, `8076`, `8077`, `8078`, `8079`, `8080`, `8081`, `8083`, `8084`, `8085`, `8086`, `8087`, `8088`, `8090`, `8091`, `8093`, `8094`, `8095`, `8097`, `8098`, `8099`, `8101`, `8103`, `8104`, `8106`, `8108`, `8109`, `8110`, `8111`, `8112`, `8113`, 
`8115`, `8117`, `8118`, `8119`, `8120`, `8121`, `8124`, `8125`, `8127`, `8128`, `8129`, `8130`, `8131`, `8132`, `8133`, `8134`, `8136`, `8137`, `8139`, `8141`, `8142`, `8144`, `8145`, `8147`, `8151`, `8154`, `8155`, `8157`, `8158`, `8160`, `8161`, `8162`, `8164`, `8166`, `8167`, `8168`, `8169`, `8170`, `8171`, `8173`, `8174`, `8176`, `8177`, `8178`, `8179`, `8181`, `8182`, `8183`, `8185`, `8186`, `8187`, `8188`, `8189`, `8190`, `8191`, `8192`, `8193`, `8194`, `8195`, `8197`, `8199`, `8201`, `8202`, `8203`, `7736`, `8204`, `8205`, `8206`, `8207`, `8209`, `8210`, `8211`, `8213`, `8215`, `8216`, `8218`, `8219`, `8220`, `8221`, `8222`, `8223`, `7839`, `8224`, `8225`, `8227`, `2984`, `8229`, `8230`, `8231`, `8232`, `8235`, `8237`, `8239`, `8240`, `8241`, `8245`, `8246`, `8248`, `8249`, `8250`, `8253`, `8254`, `8256`, `8257`, `8259`, `8260`, `8261`, `8263`, `8264`, `8265`, `8266`, `8267`, `8268`, `8269`, `8271`, `8272`, `8273`, `8274`, `8275`, `8280`, `8281`, `8282`, `8284`, `8285`, `8286`, `8287`, `8288`, `8290`, `8291`, `8292`, `8293`, `8294`, `8295`, `8297`, `8299`, `8300`, `8301`, `8302`, `8303`, `8306`, `8308`, `8309`, `8310`, `8312`, `8313`, `8314`, `8316`, `8317`, `8319`, `8321`, `8323`, `8325`, `8326`, `8327`, `8329`, `8330`, `8331`, `8332`, `8333`, `8336`, `8338`, `8339`, `8296`, `8340`, `8342`, `8343`, `8344`, `8345`, `8347`, `8349`, `8350`, `8352`, `8357`, `8359`, `8360`, `8361`, `8362`, `8363`, `8365`, `8366`, `8367`, `8369`, `8370`, `8372`, `8373`, `8375`, `8377`, `8378`, `8379`, `8381`, `8382`, `8383`, `8385`, `8388`, `8389`, `8391`, `8392`, `8394`, `8396`, `8398`, `8270`, `8399`, `8402`, `8404`, `8405`, `8407`, `8409`, `8411`, `8412`, `8414`, `8415`, `8417`, `8419`, `8420`, `8423`, `8426`, `8427`, `8428`, `8431`, `8432`, `8433`, `8434`, `8435`, `8437`, `8438`, `8441`, `8443`, `8444`, `8445`, `8446`, `8447`, `8449`, `8453`, `8455`, `8457`, `8459`, `8460`, `8462`, `8463`, `8464`, `8466`, `8467`, `8468`, `8469`, `8470`, `8472`, `8473`, `8474`, `8475`, `8476`, `8478`, `8479`, `8481`, `8484`, `8485`, `8486`, `8488`, `8489`, `8491`, `8494`, `8495`, `8496`, `8497`, `8498`, `8499`, `8500`, `8503`, `8505`, `8506`, `8508`, `8509`, `8510`, `8511`, `8512`, `8513`, `8514`, `8515`, `8516`, `8517`, `8519`, `8521`, `8522`, `8523`, `8524`, `8525`, `8526`, `8527`, `8529`, `8530`, `8532`, `8535`, `8537`, `8538`, `8539`, `8541`, `8542`, `8543`, `8544`, `8549`, `8550`, `8551`, `8552`, `8553`, `8554`, `8555`, `8557`, `8558`, `8559`, `8562`, `8563`, `8564`, `8566`, `8569`, `8570`, `8571`, `8573`, `8575`, `8577`, `8578`, `8579`, `8580`, `8581`, `8584`, `8585`, `8586`, `8587`, `8589`, `8590`, `8592`, `8593`, `8594`, `8595`, `8597`, `8598`, `8600`, `8601`, `8602`, `8604`, `8605`, `8608`, `8610`, `8611`, `8612`, `8613`, `8614`, `8615`, `8616`, `8618`, `8619`, `8620`, `8621`, `8622`, `8625`, `8627`, `8629`, `8630`, `8632`, `8634`, `8636`, `8637`, `8638`, `8640`, `8642`, `8643`, `8644`, `8646`, `8647`, `8649`, `8650`, `8651`, `8653`, `8655`, `8656`, `8657`, `8658`, `8659`, `8660`, `8662`, `8664`, `8665`, `8666`, `8667`, `8669`, `8670`, `8671`, `8673`, `8674`, `8675`, `8676`, `8677`, `8678`, `8679`, `8680`, `8681`, `8683`, `8685`, `8687`, `8689`, `8691`, `8692`, `8693`, `8694`, `8696`, `8697`, `8698`, `8700`, `8701`, `8702`, `8703`, `8704`, `8705`, `8706`, `8707`, `8708`, `8709`, `8710`, `8712`, `8713`, `8715`, `8717`, `8719`, `8722`, `8723`, `8725`, `8726`, `8727`, `8729`, `8730`, `8732`, `8734`, `8736`, `8738`, `8739`, `8740`, `8741`, `8743`, `8744`, `8745`, `8747`, `8748`, `8752`, `8753`, `8754`, `8755`, 
`8756`, `8757`, `8758`, `8760`, `8761`, `8762`, `8763`, `8765`, `8766`, `8767`, `8768`, `8770`, `8771`, `8773`, `8774`, `8775`, `8776`, `8778`, `8779`, `8780`, `8781`, `8782`, `8785`, `8786`, `8787`, `8789`, `8790`, `8791`, `8793`, `8795`, `8798`, `8800`, `8801`, `8802`, `8804`, `8805`, `8807`, `8808`, `8809`, `8810`, `8813`, `8815`, `8816`, `8817`, `8819`, `8820`, `8821`, `8822`, `8823`, `401`, `8824`, `8826`, `8827`, `8829`, `8830`, `8831`, `8833`, `8835`, `8837`, `8839`, `8840`, `8841`, `8842`, `8844`, `8845`, `8847`, `8849`, `8851`, `8852`, `8853`, `8855`, `8857`, `8858`, `8859`, `8864`, `8865`, `8866`, `8867`, `8869`, `8870`, `8871`, `8874`, `8877`, `8879`, `8880`, `8881`, `8883`, `8884`, `8886`, `8887`, `8890`, `8891`, `8892`, `8893`, `8895`, `8897`, `8899`, `8900`, `8901`, `8903`, `8906`, `8907`, `8909`, `8911`, `8914`, `8916`, `8917`, `8919`, `8920`, `8921`, `8922`, `8923`, `8927`, `8928`, `8930`, `8931`, `8933`, `8934`, `8937`, `8939`, `8940`, `8941`, `8942`, `8944`, `8945`, `8947`, `8948`, `8949`, `8950`, `8951`, `8953`, `8954`, `8955`, `8958`, `8960`, `8962`, `8965`, `8966`, `8967`, `8968`, `8969`, `8970`, `8971`, `8972`, `8974`, `8976`, `8977`, `8978`, `8979`, `8980`, `8981`, `8982`, `8983`, `8984`, `8985`, `8987`, `8991`, `8992`, `8993`, `8994`, `8995`, `8996`, `8998`, `8999`, `9000`, `9002`, `9003`, `9004`, `9005`, `9007`, `9009`, `9010`, `9011`, `9014`, `9015`, `9016`, `9018`, `9019`, `9020`, `9022`, `9024`, `9025`, `9026`, `9028`, `9030`, `9031`, `9032`, `9034`, `9035`, `9037`, `9038`, `9039`, `9042`, `9043`, `9044`, `9046`, `9048`, `9050`, `9051`, `9053`, `9054`, `9055`, `9057`, `9058`, `8932`, `9059`, `9060`, `9061`, `9062`, `9064`, `9068`, `1932`, `9069`, `9070`, `9071`, `9072`, `9073`, `9074`, `9076`, `9079`, `9080`, `9083`, `9084`, `9087`, `9088`, `9090`, `9091`, `9093`, `9095`, `9096`, `9097`, `9098`, `9100`, `9103`, `9104`, `9105`, `9106`, `9107`, `9108`, `9109`, `9110`, `9111`, `9112`, `9113`, `9114`, `9116`, `9119`, `9120`, `9121`, `9122`, `9123`, `9124`, `9127`, `9128`, `9129`, `9130`, `9131`, `9132`, `9133`, `9134`, `9135`, `9136`, `9138`, `9139`, `9141`, `9142`, `9144`, `9145`, `9146`, `9148`, `9149`, `9150`, `9152`, `9153`, `9156`, `9158`, `9160`, `9162`, `9165`, `7986`, `9168`, `9170`, `9171`, `9172`, `9173`, `9175`, `9176`, `9177`, `9179`, `9180`, `9182`, `9183`, `9185`, `9188`, `9190`, `9191`, `9192`, `9194`, `9198`, `9200`, `9201`, `9202`, `9204`, `9206`, `9207`, `5871`, `9210`, `9211`, `9213`, `9214`, `9215`, `9217`, `9218`, `9220`, `9221`, `9222`, `9226`, `9228`, `9230`, `9231`, `9233`, `9234`, `9235`, `9238`, `9239`, `9241`, `9242`, `9244`, `9246`, `9249`, `9251`, `9252`, `9255`, `9256`, `9259`, `9260`, `9262`, `9263`, `9265`, `9269`, `9270`, `9273`, `9274`, `9277`, `3858`, `9279`, `9281`, `9282`, `9284`, `9287`, `7598`, `9289`, `9292`, `9294`, `9295`, `9296`, `9297`, `9298`, `9299`, `9301`, `9302`, `9304`, `9306`, `9308`, `9311`, `9312`, `9313`, `9314`, `9318`, `9320`, `9322`, `9325`, `9326`, `9327`, `9329`, `9331`, `9333`, `9334`, `9336`, `9338`, `9339`, `9340`, `9341`, `9342`, `9343`, `9344`, `9346`, `9347`, `9349`, `9350`, `9352`, `9353`, `9355`, `9358`, `9359`, `9360`, `9363`, `9365`, `9368`, `9369`, `9371`, `9373`, `9374`, `9375`, `9376`, `9377`, `9379`, `9382`, `9383`, `9384`, `9387`, `9388`, `9389`, `9390`, `9391`, `9392`, `9393`, `9395`, `9396`, `9398`, `9400`, `9401`, `9404`, `9406`, `9409`, `9410`, `9412`, `9414`, `9416`, `9417`, `9418`, `9420`, `9421`, `9424`, `9426`, `9428`, `9429`, `9431`, `9432`, `9433`, `9434`, `9435`, `9436`, `9438`, 
`9441`, `9443`, `9445`, `9446`, `9447`, `9448`, `9449`, `9450`, `9451`, `9453`, `9454`, `9455`, `9457`, `9458`, `9459`, `9460`, `9461`, `9462`, `9463`, `9464`, `9465`, `9467`, `9469`, `9471`, `9474`, `9476`, `9477`, `9478`, `9479`, `9480`, `973`, `9482`, `9483`, `9485`, `9486`, `9488`, `9489`, `9490`, `9492`, `9493`, `9495`, `9496`, `9498`, `9499`, `9501`, `9502`, `9504`, `9506`, `9507`, `9508`, `9511`, `9512`, `9514`, `9515`, `9518`, `9519`, `9521`, `9523`, `9524`, `9526`, `9528`, `9531`, `9533`, `9534`, `9535`, `9537`, `9539`, `9540`, `9541`, `9543`, `9545`, `9546`, `9548`, `9549`, `9550`, `9551`, `9554`, `9555`, `9556`, `9557`, `9559`, `9561`, `9562`, `9565`, `9567`, `9570`, `9571`, `9573`, `7877`, `9575`, `9578`, `9580`, `9582`, `9583`, `9586`, `9587`, `9588`, `9589`, `9591`, `9592`, `9593`, `9594`, `9595`, `9597`, `9599`, `9601`, `9603`, `9604`, `9605`, `9607`, `9610`, `5979`, `9611`, `9612`, `9613`, `9614`, `9616`, `9617`, `9618`, `9620`, `9621`, `9622`, `9624`, `9627`, `9629`, `9630`, `9632`, `9633`, `9636`, `9637`, `9638`, `9640`, `9641`, `9642`, `9644`, `9646`, `9647`, `9649`, `9650`, `9653`, `9656`, `9657`, `9658`, `9659`, `9660`, `9662`, `9663`, `9664`, `9665`, `9666`, `9667`, `9670`, `9673`, `9675`, `9677`, `9679`, `9681`, `9682`, `9683`, `9684`, `9686`, `9688`, `9689`, `9690`, `9692`, `9693`, `9695`, `9696`, `9697`, `9699`, `9701`, `9703`, `9705`, `9707`, `9710`, `9713`, `9714`, `9715`, `9717`, `9718`, `9721`, `9722`, `9724`, `9725`, `9726`, `9727`, `9729`, `9730`, `9731`, `9732`, `9733`, `9735`, `9737`, `9739`, `9740`, `9741`, `9744`, `9747`, `9748`, `9750`, `9751`, `9753`, `9754`, `9755`, `9756`, `9758`, `9759`, `9760`, `9761`, `9762`, `9764`, `9768`, `9770`, `9772`, `9774`, `9776`, `9777`, `9779`, `9780`, `9782`, `9783`, `9784`, `9787`, `9789`, `9790`, `9791`, `9793`, `9794`, `9795`, `9796`, `9797`, `9798`, `9799`, `9800`, `9803`, `9805`, `9807`, `9809`, `9810`, `9811`, `9813`, `9816`, `9817`, `9819`, `9820`, `9822`, `9823`, `9824`, `9825`, `9827`, `9828`, `9830`, `9831`, `9832`, `9834`, `9836`, `9837`, `9839`, `9840`, `9841`, `9842`, `9844`, `9845`, `9846`, `9847`, `9848`, `9850`, `9851`, `9853`, `9854`, `9855`, `9856`, `9857`, `2337`, `8520`, `9858`, `9861`, `9862`, `9757`, `9864`, `9865`, `9867`, `9868`, `9870`, `9871`, `9872`, `9873`, `9874`, `9877`, `9878`, `9879`, `9880`, `9882`, `9884`, `9885`, `9887`, `9889`, `9890`, `9892`, `9894`, `9895`, `9897`, `9899`, `9901`, `9903`, `9906`, `9907`, `9909`, `9911`, `9914`, `9916`, `9918`, `9919`, `9920`, `9922`, `9924`, `9927`, `9929`, `9930`, `9932`, `9935`, `9936`, `9938`, `9939`, `9940`, `9941`, `9942`, `9943`, `9944`, `9945`, `9946`, `9947`, `9948`, `9949`, `9950`, `9951`, `9952`, `9953`, `9955`, `9956`, `9957`, `9958`, `9960`, `9962`, `9963`, `9964`, `9965`, `9967`, `9968`, `9970`, `9971`, `9974`, `9977`, `9978`, `9980`, `9981`, `6878`, `9982`, `9984`, `9985`, `9987`, `9988`, `9989`, `9992`, `9993`, `9994`, `9995`, `9999`, `10001`, `10002`, `10003`, `10004`, `10006`, `10007`, `1912`, `10008`, `10011`, `10013`, `10014`, `10016`, `10017`, `10019`, `10020`, `10023`, `10025`, `10028`, `10029`, `10030`, `10033`, `10034`, `10036`, `10038`, `10039`, `10040`, `10041`, `10042`, `10044`, `10046`, `10048`, `10050`, `10051`, `10053`, `10055`, `10057`, `10058`, `10060`, `10061`, `10062`, `10063`, `10065`, `10066`, `10069`, `10070`, `10071`, `10073`, `10076`, `10078`, `10079`, `10081`, `10085`, `10086`, `10091`, `10092`, `10093`, `10094`, `10096`, `10098`, `10099`, `10100`, `10101`, `10104`, `10105`, `10106`, `10107`, `10110`, `10111`, 
`10112`, `10114`, `10115`, `10116`, `10118`, `10119`, `10120`, `10123`, `10124`, `10125`, `10127`, `10128`, `10129`, `10130`, `10131`, `10133`, `10134`, `10136`, `10138`, `10139`, `10142`, `10143`, `10146`, `10148`, `10149`, `10150`, `10152`, `10154`, `10156`, `10159`, `10161`, `10163`, `10164`, `10165`, `10167`, `10168`, `10169`, `10170`, `10171`, `10172`, `10175`, `10176`, `10177`, `10180`, `10183`, `10185`, `10186`, `10187`, `10189`, `10191`, `10193`, `10195`, `10196`, `10197`, `10198`, `10199`, `10200`, `10202`, `10203`, `10204`, `10207`, `10208`, `10210`, `10211`, `10213`, `10214`, `10215`, `10217`, `10218`, `10220`, `10222`, `10224`, `10225`, `10227`, `10228`, `10230`, `10232`, `10234`, `10235`, `10237`, `10238`, `10239`, `10241`, `10242`, `10243`, `10245`, `10248`, `10249`, `10251`, `10252`, `10253`, `10255`, `10258`, `10259`, `10260`, `10261`, `10262`, `10263`, `10265`, `10267`, `10268`, `10269`, `10270`, `10272`, `10273`, `10275`, `10276`, `10277`, `10278`, `10279`, `10280`, `10281`, `10284`, `10285`, `10287`, `10288`, `10291`, `10292`, `10294`, `10296`, `10297`, `10298`, `10300`, `10302`, `10303`, `10304`, `10306`, `10307`, `10308`, `10309`, `10312`, `10313`, `10314`, `10315`, `10316`, `10317`, `10318`, `10319`, `10320`, `10321`, `10323`, `10324`, `10327`, `10328`, `10329`, `10330`, `10332`, `10333`, `10335`, `10336`, `10337`, `10340`, `10341`, `10343`, `10344`, `10345`, `10346`, `10347`, `10348`, `10349`, `10350`, `10351`, `10352`, `10353`, `10354`, `10356`, `10357`, `10359`, `10360`, `10363`, `10365`, `10366`, `10368`, `10370`, `10371`, `10372`, `10373`, `10374`, `10375`, `10376`, `10377` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.06 | | `TOKEN_P` | 99.06 | | `TOKEN_R` | 99.06 | | `TOKEN_ACC` | 99.77 | | `SENTS_F` | 97.00 | | `SENTS_P` | 97.32 | | `SENTS_R` | 96.67 | | `TAG_ACC` | 93.85 | | `POS_ACC` | 97.66 | | `MORPH_ACC` | 93.64 | | `DEP_UAS` | 92.56 | | `DEP_LAS` | 87.49 | | `LEMMA_ACC` | 93.99 |
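For orientation, here is a minimal usage sketch for a pipeline packaged in this series. It assumes the `ro_udv25_romaniannonstandard_trf` package (the model id recorded in the row below) has been installed as a Python package so that `spacy.load` can resolve it; the example sentence is invented.

```python
# Minimal sketch: load a UD v2.5 benchmarking pipeline and read the
# token-level annotations produced by its components.
# Assumption: the ro_udv25_romaniannonstandard_trf package is installed.
import spacy

nlp = spacy.load("ro_udv25_romaniannonstandard_trf")
doc = nlp("Aceasta este o propoziție de test.")  # invented example sentence

for sent in doc.sents:  # sentence boundaries come from the senter component
    for token in sent:
        print(
            token.text,
            token.tag_,        # fine-grained tag (tagger)
            token.pos_,        # universal POS (morphologizer)
            str(token.morph),  # morphological features (morphologizer)
            token.dep_,        # dependency relation (parser)
            token.lemma_,      # lemma (experimental_edit_tree_lemmatizer)
        )
```

The same pattern should apply to the other `*_udv25_*_trf` pipelines described in these cards, since they share the same component layout.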
{"language": ["ro"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/ro_udv25_romaniannonstandard_trf
null
[ "spacy", "token-classification", "ro", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ro" ]
TAGS #spacy #token-classification #ro #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Romanian-Nonstandard ### Label Scheme View label scheme (7445 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (7445 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #ro #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (7445 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Romanian-RRT | Feature | Description | | --- | --- | | **Name** | `ro_udv25_romanianrrt_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (3096 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ARROW`, `Af`, `Afcfp-n`, `Afcfson`, `Afcfsrn`, `Afcmpoy`, `Afcms-n`, `Afp`, `Afp-p-n`, `Afp-poy`, `Afpf--n`, `Afpfp-n`, `Afpfp-ny`, `Afpfpoy`, `Afpfpry`, `Afpfson`, `Afpfsoy`, `Afpfsrn`, `Afpfsry`, `Afpm--n`, `Afpmp-n`, `Afpmpoy`, `Afpmpry`, `Afpms-n`, `Afpmsoy`, `Afpmsry`, `Afsfp-n`, `Afsfsrn`, `BULLET`, `COLON`, `COMMA`, `Ccssp`, `Ccsspy`, `Crssp`, `Csssp`, `Cssspy`, `DASH`, `DBLQ`, `Dd3-po---e`, `Dd3-po---o`, `Dd3fpo`, `Dd3fpr`, `Dd3fpr---e`, `Dd3fpr---o`, `Dd3fpr--y`, `Dd3fso`, `Dd3fso---e`, `Dd3fsr`, `Dd3fsr---e`, `Dd3fsr---o`, `Dd3fsr--yo`, `Dd3mpo`, `Dd3mpr`, `Dd3mpr---e`, `Dd3mpr---o`, `Dd3mso---e`, `Dd3msr`, `Dd3msr---e`, `Dd3msr---o`, `Dh1ms`, `Dh3fp`, `Dh3fso`, `Dh3fsr`, `Dh3mp`, `Dh3ms`, `Di3`, `Di3-----y`, `Di3--r---e`, `Di3-po`, `Di3-po---e`, `Di3-sr`, `Di3-sr---e`, `Di3-sr--y`, `Di3fp`, `Di3fpr`, `Di3fpr---e`, `Di3fso`, `Di3fso---e`, `Di3fsr`, `Di3fsr---e`, `Di3mp`, `Di3mpr`, `Di3mpr---e`, `Di3ms`, `Di3ms----e`, `Di3mso---e`, `Di3msr`, `Di3msr---e`, `Ds1fp-p`, `Ds1fp-s`, `Ds1fsop`, `Ds1fsos`, `Ds1fsrp`, `Ds1fsrs`, `Ds1fsrs-y`, `Ds1mp-p`, `Ds1mp-s`, `Ds1ms-p`, `Ds1ms-s`, `Ds1msrs-y`, `Ds2---s`, `Ds2fp-p`, `Ds2fp-s`, `Ds2fsrp`, `Ds2fsrs`, `Ds2mp-p`, `Ds2mp-s`, `Ds2ms-p`, `Ds2ms-s`, `Ds3---p`, `Ds3---s`, `Ds3fp-s`, `Ds3fsos`, `Ds3fsrs`, `Ds3mp-s`, `Ds3ms-s`, `Dw3--r---e`, `Dw3-po---e`, `Dw3fpr`, `Dw3fso---e`, `Dw3fsr`, `Dw3mpr`, `Dw3mso---e`, `Dw3msr`, `Dz3fsr---e`, `Dz3mso---e`, `Dz3msr---e`, `EQUAL`, `EXCL`, `EXCLHELLIP`, `GE`, `GT`, `HELLIP`, `I`, `LCURL`, `LPAR`, `LSQR`, `LT`, `M`, `Mc`, `Mc-p-d`, `Mc-p-l`, `Mcfp-l`, `Mcfp-ln`, `Mcfprln`, `Mcfprly`, `Mcfsoln`, `Mcfsrln`, `Mcmp-l`, `Mcms-ln`, `Mcmsrl`, `Mcmsrly`, `Mffprln`, `Mffsrln`, `Mlfpo`, `Mlfpr`, `Mlmpr`, `Mo---l`, `Mo---ln`, `Mo-s-r`, `Mofp-ln`, `Mofpoly`, `Mofprly`, `Mofs-l`, `Mofsoln`, `Mofsoly`, `Mofsrln`, `Mofsrly`, `Mompoly`, `Momprly`, `Moms-l`, `Moms-ln`, `Momsoly`, `Momsrly`, `Nc`, `Nc---n`, `Ncf--n`, `Ncfp-n`, `Ncfpoy`, `Ncfpry`, `Ncfs-n`, `Ncfson`, `Ncfsoy`, `Ncfsrn`, `Ncfsry`, `Ncfsryy`, `Ncfsvy`, `Ncm--n`, `Ncmp-n`, `Ncmpoy`, `Ncmpry`, `Ncms-n`, `Ncms-ny`, `Ncms-y`, `Ncmsoy`, `Ncmsrn`, `Ncmsry`, `Ncmsryy`, `Ncmsvn`, `Ncmsvy`, `Np`, `Npfson`, `Npfsoy`, `Npfsrn`, `Npfsry`, `Npmpoy`, `Npmpry`, `Npms-n`, `Npmsoy`, `Npmsry`, `PERCENT`, `PERIOD`, `PLUS`, `PLUSMINUS`, `Pd3-po`, `Pd3fpr`, `Pd3fso`, `Pd3fsr`, `Pd3mpo`, `Pd3mpr`, `Pd3mpr--y`, `Pd3mso`, `Pd3msr`, `Pi3`, `Pi3--r`, `Pi3-po`, `Pi3-so`, `Pi3-sr`, `Pi3fpr`, `Pi3fso`, `Pi3fsr`, `Pi3mpr`, `Pi3mso`, `Pi3msr`, `Pi3msr--y`, `Pp1-pa--------w`, `Pp1-pa--y-----w`, 
`Pp1-pd--------s`, `Pp1-pd--------w`, `Pp1-pd--y-----w`, `Pp1-pr--------s`, `Pp1-sa--------s`, `Pp1-sa--------w`, `Pp1-sa--y-----w`, `Pp1-sd--------s`, `Pp1-sd--------w`, `Pp1-sd--y-----w`, `Pp1-sn--------s`, `Pp2-----------s`, `Pp2-pa--------w`, `Pp2-pa--y-----w`, `Pp2-pd--------w`, `Pp2-pd--y-----w`, `Pp2-pr--------s`, `Pp2-sa--------s`, `Pp2-sa--------w`, `Pp2-sa--y-----w`, `Pp2-sd--------s`, `Pp2-sd--------w`, `Pp2-sd--y-----w`, `Pp2-sn--------s`, `Pp2-so--------s`, `Pp2-sr--------s`, `Pp3-p---------s`, `Pp3-pd--------w`, `Pp3-pd--y-----w`, `Pp3-po--------s`, `Pp3-sd--------w`, `Pp3-sd--y-----w`, `Pp3fpa--------w`, `Pp3fpa--y-----w`, `Pp3fpr--------s`, `Pp3fs---------s`, `Pp3fsa--------w`, `Pp3fsa--y-----w`, `Pp3fso--------s`, `Pp3fsr--------s`, `Pp3fsr--y-----s`, `Pp3mpa--------w`, `Pp3mpa--y-----w`, `Pp3mpr--------s`, `Pp3ms---------s`, `Pp3msa--------w`, `Pp3msa--y-----w`, `Pp3mso--------s`, `Pp3msr--------s`, `Pp3msr--y-----s`, `Ps1fp-s`, `Ps1fsrp`, `Ps1fsrs`, `Ps1mp-p`, `Ps1ms-p`, `Ps2fp-s`, `Ps2fsrp`, `Ps2fsrs`, `Ps2ms-s`, `Ps3---p`, `Ps3---s`, `Ps3fp-s`, `Ps3fsrs`, `Ps3mp-s`, `Ps3ms-s`, `Pw3--r`, `Pw3-po`, `Pw3-so`, `Pw3fpr`, `Pw3fso`, `Pw3mpr`, `Pw3mso`, `Px3--a--------s`, `Px3--a--------w`, `Px3--a--y-----w`, `Px3--d--------w`, `Px3--d--y-----w`, `Pz3-sr`, `Pz3fsr`, `QUEST`, `QUOT`, `Qf`, `Qn`, `Qs`, `Qs-y`, `Qz`, `Qz-y`, `RCURL`, `RPAR`, `RSQR`, `Rc`, `Rgc`, `Rgp`, `Rgpy`, `Rgs`, `Rp`, `Rw`, `Rw-y`, `Rz`, `SCOLON`, `SLASH`, `STAR`, `Sp`, `Spsa`, `Spsay`, `Spsd`, `Spsg`, `Td-po`, `Tdfpr`, `Tdfso`, `Tdfsr`, `Tdmpr`, `Tdmso`, `Tdmsr`, `Tf-so`, `Tffpoy`, `Tffpry`, `Tffs-y`, `Tfmpoy`, `Tfms-y`, `Tfmsoy`, `Tfmsry`, `Ti-po`, `Tifp-y`, `Tifso`, `Tifsr`, `Timso`, `Timsr`, `Tsfp`, `Tsfs`, `Tsmp`, `Tsms`, `UNDERSC`, `Va--1`, `Va--1-----y`, `Va--1p`, `Va--1s`, `Va--1s----y`, `Va--2p`, `Va--2p----y`, `Va--2s`, `Va--2s----y`, `Va--3`, `Va--3-----y`, `Va--3p`, `Va--3p----y`, `Va--3s`, `Va--3s----y`, `Vag`, `Vaii1`, `Vaii2s`, `Vaii3p`, `Vaii3s`, `Vail3p`, `Vail3s`, `Vaip1p`, `Vaip1s`, `Vaip2p`, `Vaip2s`, `Vaip3p`, `Vaip3p----y`, `Vaip3s`, `Vaip3s----y`, `Vais3p`, `Vais3s`, `Vam-2s`, `Vanp`, `Vap--sm`, `Vasp1p`, `Vasp1s`, `Vasp2p`, `Vasp2s`, `Vasp3`, `Vmg`, `Vmg-------y`, `Vmii1`, `Vmii1-----y`, `Vmii2p`, `Vmii2s`, `Vmii3p`, `Vmii3p----y`, `Vmii3s`, `Vmii3s----y`, `Vmil1`, `Vmil1p`, `Vmil2s`, `Vmil3p`, `Vmil3p----y`, `Vmil3s`, `Vmil3s----y`, `Vmip1p`, `Vmip1p----y`, `Vmip1s`, `Vmip1s----y`, `Vmip2p`, `Vmip2s`, `Vmip2s----y`, `Vmip3`, `Vmip3-----y`, `Vmip3p`, `Vmip3s`, `Vmip3s----y`, `Vmis1p`, `Vmis1s`, `Vmis3p`, `Vmis3p----y`, `Vmis3s`, `Vmis3s----y`, `Vmm-2p`, `Vmm-2s`, `Vmnp`, `Vmnp------y`, `Vmp--pf`, `Vmp--pm`, `Vmp--sf`, `Vmp--sm`, `Vmp--sm---y`, `Vmsp1p`, `Vmsp1s`, `Vmsp2s`, `Vmsp3`, `Vmsp3-----y`, `X`, `Y`, `Ya`, `Yn`, `Ynfsoy`, `Ynfsry`, `Ynmsoy`, `Ynmsry`, `Yp`, `Yp-sr`, `Yr` | | **`morphologizer`** | `Case=Dat,Gen\|Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `AdpType=Prep\|Case=Acc\|POS=ADP`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADV\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `POS=PUNCT`, 
`Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=CCONJ\|Polarity=Pos`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Sub\|POS=PART\|Variant=Short`, `Mood=Sub\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak`, `POS=AUX\|Tense=Pres\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `POS=ADV`, `Degree=Pos\|POS=ADV`, `POS=PART\|Polarity=Neg`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|POS=PART`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `POS=SCONJ\|Polarity=Pos`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=AUX\|Person=3`, `POS=VERB\|Tense=Pres\|VerbForm=Inf`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=VERB\|VerbForm=Ger`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=PART\|PartType=Inf`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak\|Variant=Short`, 
`Case=Acc,Nom\|POS=DET\|Person=3\|Position=Prenom\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak\|Variant=Short`, `NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `POS=ADV\|PronType=Neg`, `AdpType=Prep\|Case=Acc\|POS=ADP\|Variant=Short`, `Case=Acc,Nom\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Number=Sing\|POS=AUX\|Person=2`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|POS=ADJ`, `Case=Dat,Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Emp`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Sub\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `NumForm=Word\|NumType=Ord\|POS=NUM`, `AdpType=Prep\|Case=Gen\|POS=ADP`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `AdpType=Prep\|POS=PUNCT`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, 
`Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `POS=VERB\|Variant=Short\|VerbForm=Ger`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Number=Sing\|POS=AUX\|Person=3`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `POS=AUX\|Person=1`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PART\|Polarity=Neg\|Variant=Short`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Mood=Ind\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak\|Variant=Short`, `Number=Plur\|POS=AUX\|Person=3`, `Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=SCONJ\|Polarity=Pos\|Variant=Short`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PART\|Tense=Fut`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `POS=DET\|Person=3\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, 
`Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Emp`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|NumForm=Word\|NumType=Ord\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Gender=Masc\|POS=NOUN`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `POS=INTJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, 
`Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Voc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind\|Variant=Short`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `POS=CCONJ\|Polarity=Pos\|Variant=Short`, `Number=Plur\|POS=AUX\|Person=2`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art\|Variant=Short`, `POS=AUX\|VerbForm=Ger`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Gender=Fem\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN\|Variant=Short`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Degree=Sup\|POS=ADV`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `POS=ADV\|PronType=Int,Rel\|Variant=Short`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN\|Variant=Short`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Degree=Pos\|POS=ADV\|Variant=Short`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|POS=NOUN`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Int,Rel`, `POS=NOUN`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `AdpType=Prep\|Case=Dat\|POS=ADP`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `AdpType=Prep\|POS=SYM`, `Case=Acc,Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak\|Variant=Short`, `POS=SYM`, `POS=X`, `Abbr=Yes\|POS=X`, 
`Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|POS=ADV`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Int,Rel`, `NumForm=Roman\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Voc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|Case=Acc,Nom\|Number=Sing\|POS=PRON`, `Foreign=Yes\|POS=PROPN`, `Definite=Ind\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Degree=Pos\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art\|Variant=Short`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|Degree=Pos\|Foreign=Yes\|Gender=Fem\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Acc,Nom\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Foreign=Yes\|POS=X`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Foreign=Yes\|POS=NOUN`, 
`Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Emp`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Neg`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Emp`, `Definite=Ind\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=1\|PronType=Emp`, `Abbr=Yes\|POS=PRON`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=AUX\|Person=1`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Emp`, `NumType=Card\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=AUX\|Person=3\|Variant=Short`, `Number=Plur\|POS=AUX\|Person=2\|Variant=Short`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Variant=Short\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN\|Variant=Short`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Variant=Short`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|Variant=Short\|VerbForm=Fin`, `Number=Plur\|POS=AUX\|Person=3\|Variant=Short`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, 
`Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Number=Plur\|POS=AUX\|Person=1`, `POS=VERB\|Tense=Pres\|Variant=Short\|VerbForm=Inf`, `Number=Sing\|POS=AUX\|Person=2\|Variant=Short`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem\|Variant=Short`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong\|Variant=Short`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Degree=Pos\|POS=ADV\|Polarity=Neg`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong\|Variant=Short`, `POS=AUX\|Person=3\|Variant=Short`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|Variant=Short\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|POS=VERB\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=AUX\|Person=1\|Variant=Short`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat,Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `POS=ADV\|Polarity=Neg`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=ADJ`, `AdpType=Prep\|Case=Acc\|Foreign=Yes\|POS=ADP`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `POS=AUX\|Person=1\|Variant=Short`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `AdpType=Prep\|POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Abbr=Yes\|Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=ADJ`, 
`Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Abbr=Yes\|Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=VERB\|Tense=Pres\|VerbForm=Inf`, `Foreign=Yes\|NumForm=Roman\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|Foreign=Yes\|Gender=Masc\|POS=NOUN`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=1\|Tense=Imp\|Variant=Short\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|Variant=Short\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|Variant=Short\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Definite=Def\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|Variant=Short\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|Variant=Short\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind\|Variant=Short`, `Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advcl:tcl`, `advmod`, `advmod:tmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `ccomp:pmod`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl`, `expl:impers`, `expl:pass`, `expl:poss`, `expl:pv`, `fixed`, `flat`, `goeswith`, `iobj`, `list`, `mark`, `nmod`, `nmod:agent`, `nmod:pmod`, `nmod:tmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `3`, `7`, `9`, `12`, `14`, `15`, `19`, `22`, `24`, `26`, `30`, `32`, `34`, `36`, `38`, `40`, `42`, `45`, `47`, `49`, `51`, `53`, `55`, `61`, `62`, `66`, `67`, `68`, `71`, `73`, `76`, `78`, `80`, `83`, `85`, `86`, `89`, `91`, `92`, `93`, `95`, `97`, `98`, `99`, `102`, `104`, `106`, `108`, `109`, `111`, `107`, `113`, `115`, `116`, `119`, `121`, `124`, `128`, `129`, `130`, `132`, `135`, `139`, `143`, `146`, `148`, `150`, `151`, `154`, `156`, `158`, `159`, `162`, `165`, `166`, `167`, `169`, `171`, `173`, `175`, `177`, `180`, `182`, `183`, `185`, `186`, `187`, `189`, `191`, `193`, `195`, `197`, `198`, `199`, `201`, `203`, `205`, `207`, `208`, `210`, `212`, `215`, `217`, `218`, `221`, `223`, `227`, `229`, `230`, `231`, `232`, `233`, `234`, `237`, `239`, `240`, `242`, `244`, `246`, `248`, `249`, `251`, `252`, `254`, `257`, `259`, `261`, `263`, `266`, `268`, `269`, `271`, `272`, `274`, `276`, `278`, `280`, `282`, `283`, `285`, `287`, 
`289`, `293`, `294`, `296`, `298`, `300`, `301`, `303`, `305`, `307`, `309`, `311`, `313`, `315`, `317`, `318`, `320`, `322`, `324`, `326`, `328`, `330`, `331`, `333`, `334`, `336`, `337`, `339`, `342`, `343`, `344`, `346`, `349`, `353`, `355`, `357`, `359`, `360`, `361`, `363`, `364`, `366`, `367`, `369`, `370`, `372`, `374`, `376`, `378`, `379`, `380`, `381`, `383`, `384`, `386`, `388`, `389`, `391`, `74`, `393`, `395`, `397`, `399`, `401`, `403`, `406`, `408`, `409`, `412`, `413`, `415`, `416`, `417`, `418`, `419`, `420`, `421`, `422`, `423`, `425`, `426`, `428`, `429`, `431`, `434`, `435`, `439`, `443`, `445`, `447`, `449`, `451`, `452`, `453`, `456`, `458`, `460`, `461`, `462`, `464`, `466`, `467`, `468`, `470`, `471`, `473`, `474`, `475`, `476`, `478`, `481`, `484`, `485`, `486`, `487`, `489`, `491`, `492`, `493`, `496`, `498`, `500`, `503`, `504`, `505`, `509`, `512`, `513`, `514`, `515`, `516`, `519`, `520`, `521`, `522`, `523`, `525`, `526`, `527`, `528`, `213`, `530`, `531`, `532`, `535`, `539`, `541`, `544`, `546`, `547`, `548`, `550`, `552`, `553`, `555`, `557`, `558`, `559`, `560`, `563`, `565`, `566`, `569`, `572`, `574`, `576`, `578`, `580`, `582`, `585`, `588`, `589`, `590`, `591`, `592`, `593`, `594`, `597`, `599`, `601`, `603`, `605`, `606`, `608`, `610`, `614`, `616`, `617`, `618`, `620`, `624`, `626`, `41`, `628`, `629`, `631`, `632`, `634`, `636`, `639`, `641`, `643`, `645`, `647`, `650`, `653`, `654`, `655`, `657`, `658`, `661`, `664`, `665`, `667`, `669`, `671`, `672`, `674`, `675`, `677`, `678`, `680`, `682`, `683`, `686`, `688`, `690`, `693`, `695`, `697`, `699`, `701`, `702`, `703`, `705`, `706`, `707`, `708`, `711`, `713`, `714`, `715`, `717`, `719`, `721`, `722`, `725`, `726`, `728`, `731`, `733`, `735`, `736`, `737`, `738`, `740`, `742`, `744`, `745`, `747`, `749`, `750`, `751`, `752`, `754`, `757`, `759`, `761`, `762`, `764`, `765`, `766`, `768`, `769`, `770`, `771`, `772`, `774`, `775`, `776`, `779`, `781`, `784`, `785`, `787`, `789`, `791`, `792`, `794`, `796`, `797`, `799`, `800`, `802`, `803`, `808`, `809`, `810`, `813`, `816`, `817`, `818`, `820`, `821`, `822`, `824`, `826`, `827`, `828`, `830`, `832`, `834`, `836`, `837`, `839`, `841`, `843`, `845`, `847`, `848`, `849`, `851`, `855`, `856`, `858`, `861`, `862`, `864`, `865`, `866`, `867`, `868`, `870`, `871`, `873`, `876`, `877`, `880`, `881`, `883`, `885`, `889`, `891`, `892`, `894`, `896`, `898`, `900`, `902`, `904`, `905`, `907`, `908`, `911`, `913`, `914`, `916`, `918`, `919`, `920`, `923`, `924`, `926`, `927`, `929`, `932`, `935`, `936`, `937`, `938`, `940`, `942`, `943`, `945`, `947`, `948`, `952`, `955`, `958`, `960`, `961`, `962`, `964`, `965`, `966`, `968`, `970`, `972`, `974`, `976`, `977`, `979`, `980`, `982`, `983`, `985`, `986`, `988`, `989`, `990`, `991`, `993`, `995`, `997`, `998`, `999`, `1001`, `1002`, `1003`, `1006`, `1007`, `1012`, `1013`, `1014`, `1015`, `1016`, `1019`, `1020`, `1021`, `1022`, `1023`, `1025`, `1027`, `1029`, `1031`, `1032`, `1033`, `1036`, `1038`, `1040`, `1043`, `1044`, `1045`, `1046`, `1048`, `1050`, `1052`, `1053`, `1055`, `1057`, `1058`, `1061`, `1062`, `1064`, `1067`, `1069`, `1071`, `1074`, `1076`, `1078`, `1080`, `1083`, `1085`, `1086`, `1089`, `1090`, `1091`, `1094`, `1097`, `1098`, `1099`, `1103`, `1104`, `1106`, `1107`, `1108`, `1109`, `1110`, `1112`, `1114`, `1117`, `1118`, `1120`, `1122`, `1124`, `1125`, `1127`, `1128`, `1129`, `1132`, `1133`, `1136`, `1138`, `1139`, `1141`, `1144`, `1145`, `1147`, `1150`, `1152`, `1154`, `1155`, `1156`, `1157`, `1159`, 
`1161`, `1162`, `1163`, `1165`, `1166`, `1167`, `1168`, `1169`, `1171`, `1174`, `1176`, `1178`, `1179`, `1180`, `1184`, `1186`, `1187`, `1189`, `1190`, `1192`, `1193`, `1195`, `1196`, `1198`, `1201`, `1203`, `1204`, `1207`, `1210`, `1212`, `1214`, `1215`, `1216`, `1217`, `1218`, `1219`, `1222`, `1223`, `1224`, `1226`, `1227`, `1230`, `1231`, `1232`, `1233`, `1234`, `1235`, `1236`, `1238`, `1239`, `1242`, `1243`, `1244`, `1245`, `1247`, `1249`, `1250`, `1252`, `1254`, `1255`, `1256`, `1258`, `1259`, `1261`, `1262`, `1268`, `1269`, `1270`, `1271`, `1272`, `1274`, `1275`, `1277`, `1278`, `1279`, `1281`, `1282`, `1285`, `1287`, `1288`, `1289`, `1290`, `1291`, `1292`, `1295`, `1297`, `1298`, `1299`, `1300`, `1301`, `1302`, `1303`, `1304`, `1305`, `1306`, `1307`, `1312`, `1313`, `1314`, `1316`, `1317`, `1318`, `1319`, `1320`, `1315`, `1321`, `1323`, `1324`, `1325`, `1326`, `1327`, `1329`, `1337`, `1338`, `1339`, `1343`, `1344`, `1346`, `1347`, `1350`, `1351`, `1353`, `1354`, `1355`, `1358`, `1360`, `1361`, `1362`, `1365`, `1366`, `1367`, `1368`, `1369`, `1370`, `1371`, `1372`, `1373`, `1374`, `1376`, `1377`, `1379`, `1380`, `1381`, `1382`, `1384`, `1385`, `1386`, `1387`, `1389`, `1390`, `1391`, `1392`, `1393`, `1394`, `1395`, `1396`, `1400`, `1401`, `1404`, `1405`, `1406`, `1409`, `1410`, `1411`, `1413`, `1414`, `1416`, `1417`, `1418`, `1419`, `1421`, `1424`, `1425`, `1426`, `1427`, `1428`, `1430`, `1431`, `1434`, `1435`, `1436`, `1438`, `1440`, `1442`, `1443`, `1444`, `1445`, `1448`, `1449`, `1450`, `1451`, `1453`, `1454`, `1455`, `1456`, `1458`, `1459`, `1460`, `1463`, `1464`, `1466`, `1467`, `1468`, `1469`, `1470`, `1471`, `1472`, `1473`, `1474`, `1475`, `1476`, `1479`, `1480`, `1482`, `1483`, `1485`, `1487`, `1488`, `1490`, `1491`, `1492`, `1493`, `1495`, `1501`, `1504`, `1506`, `1508`, `1510`, `1512`, `1513`, `1514`, `1515`, `1516`, `1517`, `1518`, `1519`, `1520`, `1523`, `1524`, `1527`, `1530`, `1532`, `1533`, `1534`, `1536`, `1537`, `1538`, `1539`, `1540`, `1542`, `1544`, `1545`, `1546`, `1547`, `1548`, `1549`, `1550`, `1552`, `1554`, `1555`, `1556`, `1557`, `1558`, `1560`, `1563`, `1564`, `1565`, `1566`, `1567`, `1568`, `1569`, `1570`, `1572`, `1573`, `1575`, `1577`, `1578`, `1579`, `1582`, `1584`, `1585`, `1586`, `1587`, `1588`, `1589`, `1590`, `1591`, `1593`, `1594`, `1595`, `1597`, `1598`, `1600`, `1601`, `1602`, `1604`, `1605`, `1606`, `1607`, `1608`, `1610`, `1611`, `1612`, `1616`, `1617`, `1618`, `1619`, `1620`, `1621`, `1622`, `1623`, `1627`, `1628`, `1629`, `1631`, `1639`, `1641`, `1642`, `1643`, `1649`, `1650`, `1652`, `1653`, `1654`, `1656`, `1657`, `1659`, `1660`, `1661`, `1663`, `1667`, `1668`, `1669`, `1670`, `1671`, `1673`, `1675`, `1676`, `1678`, `1679`, `1681`, `1682`, `1684`, `1685`, `1686`, `1687`, `1688`, `1689`, `1690`, `1691`, `1692`, `1694`, `1695`, `1696`, `1697`, `1698`, `1700`, `1702`, `1703`, `1705`, `1706`, `1707`, `1708`, `1709`, `1710`, `1712`, `1713`, `1717`, `1718`, `1719`, `1720`, `1721`, `1725`, `1726`, `1728`, `1729`, `1730`, `1731`, `1733`, `1734`, `1735`, `1738`, `1740`, `1741`, `1742`, `1743`, `1744`, `1747`, `1749`, `1751`, `1754`, `1756`, `1757`, `1758`, `1760`, `1761`, `1762`, `1765`, `1768`, `1771`, `1772`, `1774`, `1775`, `1776`, `1777`, `1778`, `1779`, `1780`, `1781`, `1782`, `1783`, `1785`, `1787`, `1788`, `1790`, `1793`, `1794`, `1795`, `1798`, `1800`, `1801`, `1802`, `1803`, `1805`, `1806`, `1807`, `1808`, `1809`, `1810`, `1816`, `1817`, `1818`, `1819`, `1820`, `1821`, `1822`, `1823`, `1825`, `1826`, `1828`, `1829`, `1830`, `1831`, `1832`, 
`1833`, `1835`, `1841`, `1842`, `1843`, `1844`, `1846`, `1847`, `1849`, `1850`, `1851`, `1852`, `1853`, `1854`, `1855`, `1856`, `1857`, `1858`, `1859`, `1860`, `1861`, `1863`, `1865`, `1866`, `1867`, `1870`, `1872`, `1873`, `1874`, `1875`, `1876`, `1879`, `1880`, `1881`, `1882`, `1883`, `1884`, `1886`, `1887`, `1888`, `1889`, `1890`, `1891`, `1892`, `1894`, `1896`, `1898`, `1900`, `1901`, `1902`, `1904`, `1905`, `1906`, `1907`, `1910`, `1911`, `1913`, `1914`, `1916`, `1917`, `1919`, `1921`, `1923`, `1924`, `1759`, `1173`, `1925`, `1927`, `1929`, `1930`, `1931`, `1932`, `1933`, `1934`, `1936`, `1938`, `1940`, `1941`, `1942`, `1944`, `1945`, `1946`, `1948`, `1949`, `1951`, `1952`, `1953`, `1954`, `1956`, `1957`, `1958`, `1959`, `1961`, `1962`, `1963`, `1964`, `1965`, `1966`, `1968`, `1969`, `1970`, `1971`, `1972`, `1973`, `1764`, `1974`, `1975`, `1976`, `1977`, `1979`, `1980`, `1981`, `1982`, `1983`, `1984`, `1985`, `1986`, `1987`, `1988`, `1989`, `1990`, `1993`, `1994`, `1995`, `1996`, `1997`, `1998`, `1999`, `2001`, `2002`, `2003`, `2004`, `2005`, `2007`, `2008`, `2009`, `2010`, `2011`, `2012`, `2013`, `2016`, `2018`, `2019`, `2020`, `2021`, `2022`, `2023`, `2024`, `2025`, `2026`, `2028`, `2029`, `2031`, `2033`, `2037`, `2038`, `2039`, `2042`, `2043`, `2045`, `2046`, `2047`, `2048`, `2049`, `2050`, `2051`, `2052`, `2053`, `2055`, `2056`, `2057`, `2059`, `2063`, `2064`, `2065`, `2066`, `2067`, `2068`, `2069`, `2070`, `2071`, `2072`, `602`, `2073`, `2074`, `2075`, `2078`, `2079`, `2080`, `2082`, `2083`, `2084`, `2085`, `2086`, `2087`, `2088`, `2089`, `2090`, `2091`, `2092`, `2093`, `2094`, `2096`, `2098`, `2099`, `2100`, `2101`, `2102`, `2103`, `2105`, `2106`, `2107`, `2108`, `2109`, `2110`, `2112`, `2113`, `2115`, `2116`, `2117`, `2118`, `2119`, `2123`, `2125`, `2126`, `2127`, `2128`, `2130`, `2131`, `2132`, `2133`, `2134`, `2135`, `2136`, `2139`, `2140`, `2141`, `2142`, `2143`, `2144`, `2146`, `2147`, `2148`, `2150`, `2151`, `2152`, `2154`, `2155`, `2156`, `2158`, `2159`, `2160`, `2162`, `2163`, `2164`, `2165`, `2167`, `2168`, `2169`, `2170`, `2171`, `2173`, `2174`, `2175`, `2177`, `2178`, `2179`, `2180`, `2181`, `2183`, `2184`, `2185`, `2187`, `2188`, `2189`, `2190`, `2191`, `2192`, `2193`, `2195`, `2197`, `2198`, `2199`, `2200`, `2201`, `2203`, `2204`, `2205`, `2206`, `2207`, `2209`, `2213`, `2214`, `2215`, `2216`, `2219`, `2220`, `2221`, `2223`, `2225`, `2227`, `2229`, `2230`, `2231`, `2232`, `2235`, `2238`, `2239`, `2241`, `2243`, `2245`, `2246`, `2247`, `2248`, `2249`, `2251`, `2253`, `2254`, `2255`, `2257`, `2260`, `2261`, `2263`, `2264`, `2265`, `2266`, `2267`, `2268`, `2269`, `2270`, `2271`, `2272`, `2273`, `2275`, `2276`, `2277`, `2280`, `2281`, `2282`, `2283`, `2284`, `2285`, `2287`, `2289`, `2291`, `2293`, `2294`, `2295`, `2296`, `2297`, `2298`, `2299`, `2300`, `2301`, `2303`, `2305`, `2307`, `2308`, `2309`, `2310`, `2311`, `2313`, `2314`, `2315`, `2316`, `2318`, `2319`, `2320`, `2321`, `2322`, `2324`, `2326`, `2327`, `2329`, `2330`, `2332`, `2334`, `2336`, `2338`, `2339`, `2340`, `2341`, `2342`, `2343`, `2345`, `2347`, `2349`, `2350`, `2351`, `2352`, `2353`, `2355`, `2356`, `2357`, `2359`, `2361`, `2364`, `2365`, `2366`, `2367`, `2368`, `2369`, `2370`, `2371`, `2372`, `2373`, `2374`, `2375`, `2378`, `2379`, `2380`, `2381`, `2382`, `2384`, `2385`, `2386`, `2387`, `2388`, `2390`, `2394`, `1763`, `2396`, `2398`, `2400`, `2402`, `2404`, `2405`, `2406`, `2407`, `2408`, `2409`, `2410`, `2411`, `2413`, `2414`, `2416`, `2417`, `2418`, `2420`, `2422`, `2423`, `188`, `2425`, `2426`, 
`2427`, `2428`, `2430`, `2431`, `2432`, `2434`, `2435`, `2436`, `2437`, `2439`, `2440`, `2443`, `2444`, `2446`, `2447`, `2448`, `2449`, `2451`, `2453`, `2455`, `2456`, `2457`, `2458`, `2459`, `2461`, `2463`, `2465`, `2466`, `2467`, `2468`, `2469`, `2470`, `2471`, `2472`, `2475`, `2477`, `2478`, `2479`, `2480`, `2482`, `2483`, `2484`, `2485`, `2486`, `2488`, `2490`, `2491`, `2493`, `2495`, `2496`, `2498`, `2499`, `2501`, `2503`, `2504`, `2506`, `2508`, `2509`, `2511`, `2512`, `2513`, `2514`, `2516`, `2517`, `2519`, `2521`, `2522`, `2523`, `2524`, `2525`, `2526`, `2528`, `2529`, `2530`, `2532`, `2533`, `2534`, `2535`, `2536`, `2537`, `2538`, `2539`, `2540`, `2542`, `2543`, `2544`, `2545`, `2546`, `2547`, `2548`, `2549`, `2550`, `2551`, `2552`, `2554`, `2555`, `2556`, `2558`, `2559`, `2560`, `2564`, `2565`, `2566`, `2567`, `2568`, `2569`, `2570`, `2571`, `2572`, `2573`, `2575`, `2576`, `2577`, `2578`, `2579`, `2580`, `2581`, `2582`, `2584`, `2585`, `2586`, `2587`, `2588`, `2589`, `2590`, `2591`, `2592`, `2593`, `2594`, `2595`, `2596`, `2597`, `2598`, `2599`, `2602`, `2603`, `2604`, `2606`, `2608`, `2609`, `2610`, `2611`, `2613`, `2614`, `2615`, `2617`, `2621`, `2622`, `2623`, `2624`, `2625`, `2626`, `2627`, `2628`, `2631`, `2633`, `2635`, `2637`, `2638`, `2639`, `2640`, `2642`, `2643`, `2644`, `2646`, `2647`, `2649`, `2650`, `2652`, `2653`, `2654`, `2656`, `2657`, `2658`, `2659`, `2660`, `2661`, `2662`, `2664`, `2666`, `2667`, `2668`, `2669`, `2671`, `2672`, `2673`, `2676`, `2677`, `2678`, `2679`, `2680`, `2681`, `2683`, `2684`, `2685`, `2686`, `2688`, `2690`, `2691`, `2692`, `2694`, `2696`, `2698`, `2699`, `2700`, `2702`, `2703`, `2704`, `2706`, `2707`, `2708`, `2710`, `2711`, `2713`, `2714`, `2715`, `2717`, `2719`, `2720`, `2721`, `2722`, `2724`, `2725`, `2726`, `2727`, `2728`, `2729`, `2731`, `2732`, `2734`, `2735`, `2736`, `2738`, `2740`, `2741`, `2742`, `2744`, `2745`, `2746`, `2747`, `2748`, `2750`, `2753`, `2754`, `2755`, `2756`, `2757`, `2758`, `2760`, `2761`, `2762`, `2764`, `2765`, `2766`, `2767`, `2768`, `2769`, `2770`, `2771`, `2772`, `2773`, `2774`, `2775`, `2778`, `2780`, `2784`, `2785`, `2787`, `2788`, `2790`, `2792`, `2793`, `2794`, `2795`, `2797`, `2799`, `2802`, `2803`, `2805`, `2806`, `2808`, `2809`, `2811`, `2813`, `2815`, `2816`, `2817`, `2819`, `2823`, `2826`, `2827`, `2829`, `2831`, `2832`, `2834`, `2835`, `2837`, `2838`, `2840`, `2841`, `2842`, `2844`, `2846`, `2847`, `2848`, `2849`, `2850`, `2851`, `2852`, `2853`, `2855`, `2856`, `2857`, `2858`, `2859`, `2860`, `2861`, `2862`, `2863`, `2865`, `2866`, `2867`, `2868`, `2869`, `2870`, `2871`, `2872`, `2873`, `2874`, `2875`, `2876`, `2877`, `2878`, `2879`, `2880`, `2881`, `2882`, `2883`, `2884`, `2885`, `2886`, `2889`, `2890`, `2891`, `2892`, `2893`, `2894`, `2895`, `2896`, `2898`, `2899`, `2900`, `2901`, `2902`, `2903`, `2904`, `2905`, `2906`, `2909`, `2910`, `2911`, `2912`, `2914`, `2915`, `2916`, `2917`, `2918`, `2919`, `2922`, `2924`, `2926`, `2928`, `2929`, `2931`, `2933`, `2934`, `2935`, `2937`, `2938`, `2939`, `2940`, `2941`, `2942`, `2943`, `2945`, `2946`, `2947`, `2950`, `2951`, `2952`, `2953`, `2956`, `2957`, `2958`, `2959`, `2960`, `2962`, `2963`, `2964`, `2965`, `2966`, `2967`, `2968`, `2969`, `23`, `2970`, `2971`, `2972`, `2973`, `2974`, `2975`, `2976`, `2977`, `2978`, `2980`, `2981`, `2983`, `2984`, `2985`, `2986`, `2987`, `2988`, `2991`, `2992`, `2994`, `2995`, `2996`, `2997`, `2998`, `3000`, `3001`, `3002`, `3003`, `3004`, `3006`, `3009`, `3010`, `3011`, `3012`, `3013`, `3015`, `3017`, `3018`, `3020`, 
`3021`, `3022`, `3025`, `3026`, `3027`, `3028`, `3029`, `3030`, `11`, `3033`, `3034`, `3035`, `3037`, `3038`, `3039`, `3040`, `3041`, `3042`, `3045`, `3047`, `3049`, `3050`, `3051`, `3052`, `3054`, `3056`, `3058`, `3060`, `3062`, `3063`, `3064`, `3065`, `3068`, `3070`, `3071`, `3072`, `3073`, `3074`, `3075`, `3077`, `3079`, `3080`, `3082`, `3085`, `3087`, `3088`, `3090`, `3093`, `3095`, `3096`, `3097`, `3099`, `3101`, `3102`, `3103`, `3106`, `3108`, `3109`, `3110`, `3113`, `3114`, `3117`, `3118`, `3119`, `3121`, `3122`, `3125`, `3126`, `3128`, `3129`, `3130`, `3132`, `3133`, `3135`, `3136`, `3137`, `3139`, `3141`, `3142`, `3143`, `3144`, `3146`, `3147`, `3148`, `3149`, `3151`, `3152`, `3153`, `3154`, `3155`, `3158`, `3159`, `3161`, `3162`, `3164`, `3165`, `3166`, `3167`, `3168`, `3170`, `3171`, `3172`, `3174`, `3175`, `3177`, `3178`, `3179`, `3180`, `3181`, `3182`, `3183`, `3184`, `3185`, `3186`, `3187`, `3188`, `3190`, `3191`, `3193`, `3194`, `3195`, `3196`, `3197`, `3198`, `3199`, `3200`, `3202`, `3204`, `3205`, `3206`, `3207`, `3208`, `3209`, `3210`, `3211`, `3214`, `3215`, `3216`, `3217`, `3218`, `3219`, `3220`, `3222`, `3225`, `3226`, `3227`, `3228`, `3229`, `3231`, `3232`, `3233`, `3235`, `3238`, `3239`, `3240`, `3241`, `3242` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.79 | | `TOKEN_P` | 99.78 | | `TOKEN_R` | 99.80 | | `TOKEN_ACC` | 99.96 | | `SENTS_F` | 92.35 | | `SENTS_P` | 94.94 | | `SENTS_R` | 89.89 | | `TAG_ACC` | 96.53 | | `POS_ACC` | 97.85 | | `MORPH_ACC` | 97.23 | | `DEP_UAS` | 92.52 | | `DEP_LAS` | 86.32 | | `LEMMA_ACC` | 97.00 |
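The tables above describe the components and accuracy of the UD v2.5 benchmarking pipeline for UD_Romanian-RRT (`ro_udv25_romanianrrt_trf`, per the row metadata below). As a minimal usage sketch (assuming the pipeline package has been installed so that `spacy.load` can resolve it by name, and using a made-up Romanian example sentence), the annotations produced by the tagger, morphologizer, parser and edit-tree lemmatizer can be inspected like this:

```python
import spacy

# Minimal sketch: assumes the ro_udv25_romanianrrt_trf package has already
# been installed, so spacy.load() can resolve the pipeline by name.
nlp = spacy.load("ro_udv25_romanianrrt_trf")

# Hypothetical example sentence ("This is a simple example.").
doc = nlp("Acesta este un exemplu simplu.")

for token in doc:
    # Inspect the per-token output of the tagger, morphologizer,
    # lemmatizer and dependency parser components listed above.
    print(token.text, token.pos_, token.tag_, str(token.morph),
          token.lemma_, token.dep_, token.head.text)
```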
{"language": ["ro"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/ro_udv25_romanianrrt_trf
null
[ "spacy", "token-classification", "ro", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ro" ]
TAGS #spacy #token-classification #ro #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Romanian-RRT ### Label Scheme View label scheme (3096 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (3096 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #ro #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (3096 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Russian-GSD | Feature | Description | | --- | --- | | **Name** | `ru_udv25_russiangsd_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (3014 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `!`, `&#39;&#39;`, `'`, `(`, `)`, `,`, `-`, `--`, `.`, `.,`, `/`, `:`, `AFX`, `APOSTROPHE`, `AWP`, `CC`, `CD`, `DT`, `FW`, `IN`, `JJ`, `JJH`, `JJL`, `JJR`, `JJRL`, `JJS`, `NEG`, `NFP`, `NN`, `NNP`, `ORD`, `PRED`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `UH`, `VB`, `VBC`, `VBG`, `VBNH`, `VBNL`, `WDT`, `WP`, `WRB`, `X`, ```` | | **`morphologizer`** | `POS=ADP`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=CCONJ`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, `POS=PART\|Polarity=Neg`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Variant=Short`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, 
`Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Nom\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Card\|POS=NUM`, `Case=Nom\|NumType=Card\|POS=NUM`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET`, `POS=PART`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=ADV`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Pos\|POS=ADV`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|NumType=Card\|POS=NUM`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|NumType=Card\|POS=NUM`, 
`Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=NUM`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Number=Plur\|POS=ADJ\|Variant=Short`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf\|Voice=Mid`, `Case=Loc\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Number=Plur\|POS=PRON`, `POS=SYM`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, 
`Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Nom\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Mid`, `Case=Gen\|Number=Plur\|POS=DET`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET`, `Foreign=Yes\|POS=X`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Anim\|Case=Nom\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Acc\|NumType=Card\|POS=NUM`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET`, 
`Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=DET`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Number=Plur\|POS=DET`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Variant=Short`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Anim\|Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET`, `Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Variant=Short`, `Degree=Cmp\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Case=Nom\|Number=Plur\|POS=PRON`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Degree=Pos\|POS=VERB`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Ins\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, 
`Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON`, `Case=Ins\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Variant=Short`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Number=Plur\|POS=PRON`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Dat\|NumType=Card\|POS=NUM`, `POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Variant=Short`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `POS=NOUN`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `POS=X`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Abbr=Yes\|POS=PROPN`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, 
`Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Degree=Sup\|POS=ADV`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2`, `Case=Dat\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, 
`Animacy=Inan\|Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Foreign=Yes\|POS=NOUN`, `POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|POS=AUX\|VerbForm=Inf`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Number=Plur\|POS=PRON`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON`, `Aspect=Imp\|POS=AUX\|VerbForm=Conv`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `POS=AUX`, `Case=Dat\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON`, 
`Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Case=Dat\|Number=Plur\|POS=PRON`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|NumType=Card\|POS=NUM`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=3`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Dat\|Number=Plur\|POS=PRON`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Dat\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|POS=VERB\|VerbForm=Conv`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|POS=ADV`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|NumType=Card\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PART`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON`, 
`Animacy=Anim\|Case=Acc\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|NumType=Card\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Loc\|NumType=Card\|POS=NUM`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|Variant=Short`, `Animacy=Anim\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Anim\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|NumType=Card\|POS=NUM`, `Case=Gen\|POS=PRON\|Reflex=Yes`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|Variant=Short\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, 
`Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `POS=VERB`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Number=Plur\|POS=PRON`, `Animacy=Inan\|Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Loc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, 
`Animacy=Anim\|Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Anim\|Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Case=Ins\|Number=Plur\|POS=PRON`, `Animacy=Anim\|Aspect=Perf\|Case=Ins\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Foreign=Yes\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, 
`Animacy=Anim\|Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Perf\|Case=Dat\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Degree=Cmp\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Aspect=Perf\|Case=Loc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Aspect=Imp\|Case=Gen\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON`, `Animacy=Inan\|Aspect=Imp\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1`, `Animacy=Inan\|Case=Par\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin\|Voice=Mid`, `Animacy=Inan\|Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|NumType=Card\|POS=SYM`, `Animacy=Anim\|Aspect=Imp\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|Variant=Short\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv\|Voice=Mid`, 
`Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|NumType=Card\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=2`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2`, `Animacy=Inan\|Aspect=Imp\|Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Animacy=Inan\|Aspect=Imp\|Case=Acc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|POS=DET` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `goeswith`, `iobj`, `list`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `nummod:entity`, `nummod:gov`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `6`, `8`, `10`, `12`, `14`, `16`, `19`, `21`, `23`, `27`, `29`, `31`, `35`, `37`, `39`, `42`, `45`, `49`, `50`, `53`, `55`, `59`, `61`, `62`, `64`, `66`, `68`, `70`, `72`, `75`, `77`, `78`, `81`, `83`, `85`, `87`, `89`, `91`, `94`, `97`, `99`, `101`, `105`, `106`, `107`, `109`, `110`, `112`, `114`, `116`, `118`, `119`, `121`, `123`, `126`, `128`, `130`, `132`, `133`, `135`, `137`, `139`, `0`, `141`, `145`, `147`, `148`, `150`, `152`, `154`, `156`, `158`, `160`, `162`, `166`, `168`, `169`, `171`, `173`, `175`, `177`, `179`, `181`, `182`, `184`, `186`, `188`, `189`, `192`, `193`, `194`, `195`, `197`, `198`, `199`, `202`, `204`, `205`, `206`, `207`, `208`, `210`, `211`, `213`, `216`, `217`, `219`, `221`, `223`, `224`, `226`, `228`, `229`, `231`, `233`, `234`, `237`, `239`, `241`, `242`, `244`, `245`, `247`, `249`, `251`, `253`, `256`, `257`, `260`, `262`, `264`, `266`, `268`, `270`, `272`, `275`, `277`, `279`, `283`, `287`, `289`, `290`, `293`, `294`, `296`, `298`, `300`, `302`, `305`, `307`, `310`, `313`, `315`, `317`, `319`, `322`, `324`, `326`, `328`, `330`, `332`, `335`, `337`, `339`, `340`, `341`, `345`, `346`, `348`, `350`, `353`, `355`, `357`, `360`, `362`, `364`, `366`, `368`, `370`, `372`, `374`, `376`, `378`, `380`, `381`, `384`, `386`, `388`, `391`, `393`, `395`, `397`, `398`, `400`, `401`, `402`, `404`, `408`, `409`, `410`, `412`, `413`, `415`, `416`, `418`, `420`, `421`, `423`, `424`, `426`, `428`, `430`, `432`, `434`, `436`, `438`, `439`, `441`, `443`, `446`, `449`, `453`, `455`, `457`, `248`, `459`, `460`, `462`, `464`, `465`, `467`, `470`, `472`, `474`, `477`, `479`, `480`, `482`, `484`, `485`, `486`, `489`, `491`, `493`, `496`, `498`, `500`, `502`, `504`, `505`, `506`, `508`, `509`, `512`, `513`, `515`, `517`, `520`, `522`, `524`, `525`, `527`, `529`, `531`, `532`, `533`, `535`, `536`, `540`, `542`, 
`544`, `546`, `548`, `549`, `551`, `552`, `555`, `276`, `556`, `557`, `559`, `560`, `562`, `564`, `565`, `567`, `569`, `570`, `571`, `572`, `574`, `575`, `577`, `578`, `580`, `582`, `584`, `586`, `589`, `591`, `593`, `595`, `597`, `599`, `601`, `602`, `172`, `604`, `605`, `606`, `608`, `610`, `611`, `612`, `614`, `615`, `76`, `617`, `618`, `619`, `621`, `117`, `623`, `624`, `626`, `628`, `629`, `631`, `635`, `637`, `638`, `639`, `641`, `642`, `644`, `645`, `647`, `648`, `650`, `652`, `654`, `656`, `658`, `659`, `661`, `663`, `665`, `666`, `668`, `669`, `671`, `675`, `677`, `678`, `679`, `681`, `682`, `683`, `686`, `687`, `689`, `691`, `693`, `695`, `697`, `699`, `701`, `22`, `703`, `705`, `707`, `710`, `714`, `716`, `718`, `720`, `723`, `725`, `727`, `729`, `731`, `732`, `734`, `737`, `739`, `740`, `743`, `745`, `747`, `748`, `751`, `753`, `754`, `757`, `758`, `760`, `762`, `764`, `766`, `768`, `770`, `772`, `773`, `775`, `776`, `778`, `779`, `780`, `781`, `782`, `783`, `785`, `787`, `789`, `791`, `793`, `794`, `796`, `797`, `800`, `801`, `802`, `803`, `804`, `806`, `807`, `808`, `809`, `810`, `812`, `816`, `818`, `819`, `821`, `823`, `825`, `826`, `827`, `829`, `833`, `834`, `835`, `836`, `838`, `842`, `843`, `844`, `846`, `848`, `849`, `850`, `852`, `854`, `856`, `858`, `860`, `862`, `864`, `866`, `867`, `868`, `870`, `871`, `873`, `874`, `875`, `878`, `880`, `881`, `883`, `887`, `889`, `890`, `891`, `894`, `895`, `896`, `898`, `900`, `902`, `903`, `904`, `907`, `909`, `910`, `911`, `912`, `914`, `916`, `917`, `918`, `919`, `920`, `924`, `925`, `927`, `928`, `931`, `933`, `934`, `936`, `937`, `935`, `938`, `939`, `942`, `944`, `946`, `948`, `949`, `950`, `951`, `953`, `954`, `956`, `958`, `959`, `960`, `962`, `964`, `966`, `968`, `970`, `972`, `974`, `976`, `978`, `980`, `981`, `982`, `984`, `985`, `987`, `988`, `989`, `990`, `991`, `992`, `993`, `995`, `996`, `997`, `998`, `1000`, `1001`, `1002`, `1004`, `1006`, `1008`, `1010`, `1012`, `1013`, `1016`, `1018`, `1019`, `1021`, `1023`, `1024`, `1025`, `1028`, `1030`, `1031`, `1033`, `1034`, `1036`, `1038`, `1039`, `1040`, `1041`, `1043`, `1045`, `1046`, `1048`, `1052`, `1054`, `1055`, `1056`, `1057`, `1062`, `1064`, `1065`, `1067`, `1069`, `1070`, `1072`, `1073`, `1074`, `1075`, `1076`, `1078`, `1080`, `1081`, `1083`, `1085`, `1087`, `1088`, `1089`, `1091`, `1092`, `1093`, `1094`, `1095`, `1096`, `1097`, `1098`, `1100`, `1102`, `1104`, `1106`, `1108`, `1109`, `1110`, `1111`, `1112`, `1113`, `1116`, `1117`, `1119`, `1121`, `1123`, `1124`, `1125`, `1127`, `1129`, `1132`, `1134`, `1135`, `1138`, `1139`, `1141`, `1143`, `1144`, `1145`, `1146`, `1147`, `1149`, `1152`, `1153`, `1155`, `1156`, `1157`, `1159`, `1161`, `1163`, `1165`, `1166`, `1168`, `1169`, `1172`, `1174`, `1176`, `1177`, `1179`, `1183`, `1184`, `1185`, `1186`, `1188`, `1190`, `1193`, `1195`, `1196`, `1200`, `1203`, `1204`, `1206`, `1207`, `1208`, `1209`, `1211`, `1212`, `1214`, `1216`, `1217`, `1218`, `1219`, `1221`, `1223`, `1224`, `1225`, `1227`, `1228`, `1230`, `1232`, `1234`, `1237`, `1238`, `1239`, `1241`, `1243`, `1244`, `1246`, `1248`, `1249`, `1251`, `1252`, `1255`, `1257`, `1259`, `1261`, `1262`, `1263`, `1265`, `1267`, `1268`, `1269`, `1273`, `1275`, `1277`, `1279`, `1281`, `1283`, `1285`, `1287`, `1289`, `1291`, `1293`, `1295`, `1297`, `1299`, `1302`, `1305`, `1306`, `1309`, `1311`, `1312`, `1313`, `1314`, `1315`, `1317`, `1319`, `1321`, `1322`, `1325`, `1326`, `1328`, `1330`, `1331`, `1333`, `325`, `1334`, `1336`, `1338`, `1339`, `1341`, `1343`, `1346`, `1347`, 
`1348`, `1349`, `1350`, `1352`, `1353`, `1354`, `1355`, `1357`, `1358`, `1359`, `1361`, `1363`, `1365`, `1368`, `1370`, `1371`, `1372`, `1374`, `1376`, `1377`, `1378`, `1380`, `1382`, `1384`, `1385`, `1386`, `1388`, `1389`, `1391`, `1393`, `1395`, `1396`, `1398`, `1399`, `1402`, `1404`, `1405`, `1120`, `1406`, `1408`, `1409`, `1410`, `1412`, `1413`, `1414`, `1415`, `1417`, `1419`, `1421`, `1423`, `1425`, `1426`, `1427`, `1429`, `1431`, `1433`, `1434`, `1436`, `1438`, `1439`, `1441`, `1443`, `1444`, `1445`, `1447`, `1448`, `1449`, `1450`, `1451`, `1452`, `1454`, `1457`, `1458`, `1459`, `1461`, `1463`, `1465`, `1467`, `1468`, `1469`, `1470`, `1472`, `1475`, `1477`, `1479`, `1480`, `1481`, `1483`, `1484`, `1487`, `1489`, `1491`, `1492`, `1493`, `1496`, `1497`, `1499`, `1501`, `1502`, `1504`, `1506`, `1507`, `1508`, `1509`, `1511`, `1513`, `1515`, `1516`, `1517`, `1518`, `1519`, `1521`, `1522`, `1523`, `1525`, `1527`, `1529`, `1531`, `1532`, `1534`, `1535`, `1536`, `1537`, `1539`, `1541`, `1543`, `1545`, `1546`, `1548`, `1549`, `1550`, `1551`, `1552`, `1553`, `1555`, `1557`, `1558`, `1559`, `1560`, `1562`, `1564`, `1566`, `1567`, `1569`, `1571`, `1573`, `1575`, `1576`, `1578`, `1580`, `1581`, `1582`, `1583`, `1584`, `1585`, `1586`, `1588`, `1590`, `1592`, `1593`, `1595`, `1599`, `1601`, `1602`, `1604`, `1606`, `1610`, `1611`, `1613`, `1614`, `1616`, `1617`, `1618`, `1619`, `1621`, `1623`, `1624`, `1626`, `1628`, `1629`, `1631`, `1632`, `1634`, `1635`, `1636`, `1637`, `1638`, `1640`, `1642`, `1644`, `1646`, `1647`, `1649`, `1651`, `1652`, `1654`, `1655`, `1659`, `1663`, `1665`, `1666`, `1667`, `1668`, `1671`, `1672`, `1674`, `1675`, `1677`, `1679`, `1681`, `1685`, `1687`, `1688`, `1689`, `1691`, `1692`, `1695`, `1696`, `1699`, `1701`, `1702`, `1703`, `1705`, `1706`, `1709`, `1710`, `1711`, `1712`, `1714`, `1715`, `1446`, `1718`, `1720`, `1721`, `1722`, `1723`, `1725`, `1727`, `1728`, `1730`, `1732`, `1733`, `1734`, `1736`, `1738`, `1739`, `1741`, `1743`, `1745`, `1746`, `1747`, `1748`, `1749`, `1750`, `1751`, `1753`, `1754`, `1757`, `1758`, `1760`, `1761`, `1763`, `1764`, `1766`, `1767`, `1768`, `1769`, `1770`, `1772`, `1774`, `1775`, `1776`, `1778`, `1780`, `1781`, `1783`, `1785`, `1788`, `1790`, `1792`, `1793`, `1794`, `1795`, `1797`, `1798`, `1800`, `1801`, `1802`, `1804`, `1806`, `1809`, `1810`, `1812`, `1815`, `1817`, `1818`, `1819`, `1821`, `1822`, `1823`, `1824`, `1825`, `1827`, `1828`, `1829`, `1833`, `1834`, `1835`, `1836`, `1837`, `1839`, `1842`, `1844`, `1845`, `1846`, `1848`, `1850`, `1851`, `1852`, `1853`, `1854`, `1855`, `1857`, `1859`, `1862`, `1863`, `1864`, `1865`, `1866`, `1867`, `1868`, `1871`, `1873`, `1874`, `1876`, `1877`, `1879`, `1880`, `1881`, `1885`, `1886`, `1888`, `1889`, `1891`, `1893`, `1894`, `1895`, `1896`, `1898`, `1900`, `1901`, `1902`, `1903`, `1904`, `1905`, `1906`, `1908`, `1911`, `1912`, `1914`, `1916`, `1918`, `1920`, `1921`, `1923`, `1925`, `1927`, `1928`, `1929`, `1931`, `1933`, `1934`, `1935`, `1936`, `1938`, `1940`, `1941`, `1943`, `1945`, `1947`, `1948`, `1950`, `1951`, `1952`, `1954`, `1956`, `1958`, `1960`, `1961`, `1963`, `1965`, `1969`, `1970`, `1971`, `1972`, `1973`, `1974`, `1975`, `1976`, `1977`, `1978`, `1980`, `1982`, `1983`, `1985`, `1987`, `1988`, `1989`, `1990`, `1992`, `1996`, `1997`, `1998`, `1999`, `2000`, `2001`, `717`, `2002`, `2004`, `2007`, `2008`, `2010`, `2011`, `2012`, `2013`, `2015`, `2016`, `2018`, `2020`, `2021`, `2022`, `2024`, `2025`, `2026`, `2029`, `2031`, `2032`, `2033`, `2034`, `2036`, `855`, `2038`, `2040`, `2041`, 
`2042`, `2044`, `2046`, `2047`, `2048`, `2050`, `2052`, `2054`, `2058`, `2062`, `2063`, `2066`, `2068`, `2070`, `2072`, `2074`, `2075`, `2076`, `2078`, `2079`, `2080`, `2081`, `2083`, `2084`, `2085`, `2088`, `2089`, `2090`, `2091`, `2092`, `2093`, `2094`, `2096`, `2097`, `2098`, `2099`, `2101`, `2104`, `2105`, `2106`, `2107`, `2109`, `2110`, `2115`, `2117`, `2118`, `2121`, `2122`, `2123`, `2124`, `2125`, `2126`, `2127`, `2128`, `2129`, `2130`, `2131`, `2134`, `2135`, `2137`, `2138`, `630`, `2140`, `2143`, `2145`, `2147`, `2148`, `2149`, `2151`, `2152`, `2153`, `2154`, `2155`, `2156`, `2157`, `2159`, `2162`, `2164`, `2165`, `2167`, `2169`, `2170`, `2171`, `2175`, `2176`, `2180`, `2181`, `2183`, `2185`, `2187`, `2189`, `2190`, `2191`, `2194`, `2195`, `2196`, `2198`, `2200`, `2201`, `2202`, `2203`, `2205`, `2206`, `2207`, `2209`, `2211`, `2212`, `2213`, `2215`, `2217`, `2218`, `2219`, `2220`, `2222`, `2223`, `2224`, `2226`, `2228`, `2230`, `2231`, `2233`, `2235`, `2237`, `2239`, `2240`, `2241`, `2242`, `2243`, `2246`, `2247`, `2249`, `2251`, `2252`, `2253`, `2255`, `2256`, `2260`, `2261`, `2263`, `2265`, `2266`, `2267`, `2268`, `2270`, `2271`, `2273`, `2274`, `2277`, `2278`, `2280`, `2282`, `2284`, `2285`, `2287`, `2288`, `2290`, `2291`, `2292`, `2293`, `2294`, `2295`, `2297`, `2299`, `2301`, `2302`, `2303`, `2305`, `2306`, `2308`, `2310`, `2311`, `2313`, `2314`, `2315`, `2316`, `2317`, `2318`, `2321`, `2322`, `2324`, `2325`, `2327`, `2328`, `2329`, `2331`, `2332`, `2333`, `2335`, `2326`, `2336`, `2337`, `2339`, `2340`, `2342`, `2345`, `180`, `2347`, `2348`, `2349`, `2351`, `2352`, `2353`, `2354`, `2356`, `2357`, `2358`, `2360`, `2362`, `2364`, `2366`, `2368`, `2370`, `2372`, `2376`, `2377`, `2378`, `2380`, `2382`, `2383`, `2384`, `2385`, `2386`, `2388`, `2389`, `2391`, `2392`, `2393`, `2395`, `2397`, `2399`, `2400`, `2401`, `2403`, `2405`, `2406`, `2407`, `2409`, `2411`, `2412`, `2413`, `2414`, `2416`, `2417`, `2418`, `2419`, `2420`, `2421`, `2423`, `2424`, `2426`, `2427`, `2428`, `2429`, `2431`, `2432`, `2433`, `2434`, `2435`, `2436`, `2438`, `2439`, `2441`, `2442`, `2444`, `2445`, `2447`, `2448`, `2450`, `2451`, `2452`, `2453`, `2455`, `2456`, `2458`, `2460`, `2462`, `2463`, `2465`, `2468`, `2469`, `2470`, `2471`, `2473`, `2474`, `2476`, `2477`, `2479`, `2480`, `2482`, `2484`, `2488`, `2489`, `2493`, `2496`, `2497`, `2498`, `2499`, `2501`, `2502`, `2503`, `2504`, `2505`, `2506`, `2507`, `2509`, `2510`, `2511`, `2514`, `2517`, `2518`, `2519`, `2520`, `2522`, `2525`, `2528`, `2530`, `2531`, `2532`, `2533`, `2535`, `2536`, `2537`, `2538`, `2540`, `2542`, `2543`, `2545`, `2546`, `2547`, `2551`, `2553`, `2555`, `2557`, `2558`, `2559`, `2561`, `2562`, `2563`, `2566`, `2569`, `2571`, `2572`, `2573`, `2577`, `2578`, `2582`, `2584`, `2586`, `2587`, `2589`, `2591`, `2593`, `2594`, `2598`, `2599`, `2600`, `2601`, `2602`, `2603`, `2604`, `2605`, `2606`, `2607`, `2610`, `2611`, `2612`, `2613`, `2614`, `2616`, `2617`, `2618`, `2619`, `2621`, `2622`, `2623`, `2625`, `2626`, `2628`, `2630`, `2632`, `2633`, `2635`, `2637`, `2639`, `2640`, `2642`, `2644`, `2646`, `2647`, `2648`, `2649`, `2650`, `2652`, `2653`, `2654`, `2655`, `2656`, `2657`, `2659`, `2660`, `2661`, `2662`, `2663`, `2665`, `2666`, `2667`, `2669`, `2671`, `2672`, `2674`, `2675`, `2677`, `2678`, `2679`, `2680`, `2681`, `2683`, `2686`, `2687`, `2689`, `2691`, `2695`, `2696`, `2699`, `2701`, `2703`, `2706`, `2707`, `2708`, `2710`, `2711`, `2712`, `2714`, `2716`, `2718`, `2719`, `2720`, `2721`, `2722`, `2723`, `2725`, `2727`, `2728`, `2729`, 
`2730`, `2731`, `2732`, `2733`, `96`, `2735`, `2737`, `2739`, `2740`, `2741`, `2742`, `2746`, `2748`, `2750`, `2752`, `2754`, `2755`, `2756`, `2757`, `2758`, `2759`, `2761`, `2762`, `2764`, `2765`, `2767`, `2768`, `2770`, `2771`, `2773`, `2774`, `2775`, `2776`, `528`, `2778`, `2779`, `2781`, `2783`, `2786`, `2788`, `2789`, `2790`, `2791`, `2792`, `2793`, `2795`, `2796`, `2798`, `2799`, `2800`, `2802`, `2803`, `2805`, `2806`, `2807`, `2808`, `2810`, `2812`, `2813`, `2814`, `2815`, `2816`, `2818`, `2820`, `2821`, `2822`, `2823`, `2824`, `2826`, `2827`, `2830`, `2831`, `2833`, `2834`, `2836`, `2837`, `2838`, `2840`, `2841`, `2843`, `2844`, `2846`, `2848`, `2849`, `2850`, `2851`, `2852`, `2853`, `2855`, `2856`, `2858`, `2860`, `2861`, `2408`, `1222`, `2864`, `2865`, `2866`, `2867`, `2869`, `2870`, `2872`, `2873`, `2874`, `2875`, `2876`, `2877`, `2879`, `2880`, `2883`, `2885`, `2886`, `2887`, `2888`, `2890`, `2891`, `2892`, `2893`, `2895`, `2896`, `2897`, `2900`, `2901`, `2902`, `2903`, `2904`, `2906`, `2907`, `2908`, `2909`, `2911`, `2913`, `2914`, `2917`, `2919`, `2920`, `2921`, `2922`, `2924`, `2926`, `2927`, `2929`, `2931`, `2932`, `2934`, `2936`, `2940`, `2942`, `2944`, `2945`, `2946`, `2948`, `2950`, `2951`, `2953`, `2955`, `2956`, `2957`, `2959`, `2960`, `2962`, `2963`, `2964`, `2965`, `2966`, `2968`, `2970`, `2971`, `2972`, `2973`, `2975`, `2976`, `2978`, `2979`, `2980`, `2981`, `2982`, `2984`, `2986`, `2987`, `2989`, `2990`, `2991`, `2992`, `2993`, `2994`, `2995`, `2997`, `2999`, `3000`, `3002`, `3003`, `3004`, `3006`, `3008`, `3009`, `3010`, `3013`, `3014`, `3017`, `3019`, `3022`, `3024`, `3026`, `3027`, `3028`, `3029`, `3030`, `3031`, `3034`, `3035`, `3038`, `3039`, `3041`, `3044`, `3046`, `3047`, `3049`, `3050`, `3052`, `3054`, `3056`, `3057`, `3058`, `3059`, `3060`, `3061`, `3062`, `3063`, `3065`, `3066`, `3068`, `3070`, `3071`, `3073`, `3074`, `3075`, `3077`, `3078`, `3079`, `3080`, `3083`, `3084`, `3087`, `3089`, `3092`, `706`, `1420`, `394`, `3093`, `3094`, `3097`, `3098`, `3099`, `3101`, `616`, `3102`, `3103`, `3104`, `3105`, `3106`, `3108`, `3110`, `3111`, `3112`, `3114`, `3115`, `3118`, `3121`, `3122`, `3124`, `3125`, `3129`, `3132`, `3134`, `3136`, `3140`, `3141`, `3142`, `3144`, `3147`, `3149`, `3150`, `3151`, `3153`, `3155`, `3156`, `3158`, `3159`, `3160`, `3161`, `3162`, `3165`, `3166`, `3169`, `3170`, `3171`, `3173`, `3174`, `3175`, `3177`, `3178`, `3179`, `3180`, `3183`, `3184`, `3186`, `3187`, `3188`, `3189`, `3192`, `3194`, `3195`, `3197`, `3198`, `3199`, `3200`, `3202`, `3205`, `3206`, `3209`, `3210`, `3211`, `3212`, `3213`, `3215`, `3216`, `3217`, `3218`, `3219`, `3222`, `3224`, `3225`, `3226`, `3227`, `3228`, `3229`, `3231`, `201`, `3232`, `3233`, `3234`, `3235`, `3237`, `3239`, `3241`, `3242`, `3243`, `3245`, `3246`, `3247`, `3248`, `3249`, `3250`, `3253`, `3255`, `3257`, `3258`, `3259`, `3260`, `3261`, `3264`, `3265`, `3267`, `3268`, `3269`, `3270`, `3272`, `3273`, `3274`, `3275`, `3277`, `3278`, `3279`, `3280`, `3282`, `3284`, `3285`, `3286`, `3289`, `3291`, `3293`, `3295`, `3296`, `3297`, `3298`, `3299`, `3301`, `3305`, `3306`, `3307`, `3308`, `3310`, `3312`, `3313`, `3314`, `3315`, `3316`, `3317`, `3318`, `3319`, `3321`, `3322`, `3324`, `3325`, `3327`, `3328`, `3329`, `3330`, `3331`, `3333`, `3334`, `2307`, `3336`, `3338`, `3340`, `3341`, `3342`, `3344`, `3345`, `3347`, `3349`, `3350`, `3351`, `3353`, `3354`, `3355`, `3356`, `3357`, `3358`, `3359`, `3361`, `3362`, `3364`, `3369`, `3370`, `3371`, `3373`, `3375`, `3376`, `3377`, `3379`, `3381`, `3382`, `3384`, 
`3386`, `3388`, `3389`, `3391`, `3392`, `3394`, `3395`, `3397`, `3398`, `3400`, `3403`, `3404`, `3405`, `3407`, `3409`, `3411`, `3412`, `3413`, `3415`, `3417`, `3419`, `3421`, `3422`, `3423`, `3424`, `3425`, `3426`, `3428`, `3429`, `3432`, `3433`, `3434`, `3435`, `3436`, `3437`, `3439`, `3441`, `3443`, `3446`, `3448`, `3449`, `3450`, `3451`, `3452`, `3453`, `3454`, `3455`, `3456`, `3457`, `3458`, `3459`, `3461`, `3463`, `1044`, `3464`, `3465`, `3466`, `3467`, `3468`, `3470`, `3472`, `3473`, `3474`, `3475`, `3476`, `3477`, `3479`, `3480`, `3482`, `3483`, `3485`, `3486`, `3487`, `3489`, `3490`, `3493`, `3494`, `3495`, `3496`, `3497`, `3498`, `3500`, `3502`, `3504`, `3505`, `3506`, `3508`, `3509`, `3511`, `3512`, `3514`, `3515`, `3517`, `3518`, `3519`, `3521`, `3522`, `3523`, `3524`, `3525`, `3526`, `3528`, `3529`, `3531`, `3532`, `3534`, `3536`, `3537`, `3538`, `3539`, `3540`, `3541`, `3542`, `3543`, `3544`, `3545`, `3547`, `3549`, `3550`, `3551`, `3552`, `3553`, `3556`, `3558`, `3561`, `3563`, `3564`, `3566`, `3567`, `3569`, `3571`, `3574`, `3576`, `3577`, `3579`, `3581`, `3583`, `3584`, `3587`, `3588`, `3589`, `3592`, `3593`, `3594`, `3595`, `3597`, `3600`, `3601`, `3602`, `3603`, `3604`, `3605`, `3606`, `3609`, `3610`, `3612`, `3613`, `3614`, `3615`, `3616`, `3617`, `3619`, `3624`, `3627`, `3628`, `3630`, `3632`, `3634`, `3636`, `3637`, `3640`, `3642`, `3643`, `3645`, `3646`, `3648`, `3650`, `3654`, `3655`, `3656`, `3658`, `3659`, `3661`, `3663`, `3665`, `3666`, `3667`, `3669`, `3670`, `3672`, `3673`, `3674`, `3675`, `3677`, `3679`, `3680`, `3681`, `3682`, `3683`, `3684`, `3685`, `3687`, `3688`, `3689`, `3690`, `3691`, `3693`, `3696`, `3697`, `3698`, `3699`, `3700` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.49 | | `TOKEN_P` | 99.48 | | `TOKEN_R` | 99.50 | | `TOKEN_ACC` | 99.94 | | `SENTS_F` | 96.05 | | `SENTS_P` | 95.56 | | `SENTS_R` | 96.55 | | `TAG_ACC` | 96.91 | | `POS_ACC` | 98.25 | | `MORPH_ACC` | 94.72 | | `DEP_UAS` | 92.10 | | `DEP_LAS` | 88.72 | | `LEMMA_ACC` | 94.45 |
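The F-scores in the accuracy table above are consistent with the usual harmonic mean of the corresponding precision and recall rows; the following minimal sketch (plain Python, no external dependencies, written here only to illustrate that relationship) reproduces `TOKEN_F` and `SENTS_F` from the reported P/R pairs:

```python
# Illustrative check: the reported F-scores match the harmonic mean
# of the corresponding precision (P) and recall (R) values.
def f_score(p: float, r: float) -> float:
    return 2 * p * r / (p + r)

print(round(f_score(99.48, 99.50), 2))  # TOKEN_F -> 99.49
print(round(f_score(95.56, 96.55), 2))  # SENTS_F -> 96.05
```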
{"language": ["ru"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/ru_udv25_russiangsd_trf
null
[ "spacy", "token-classification", "ru", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "ru" ]
TAGS #spacy #token-classification #ru #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Russian-GSD ### Label Scheme View label scheme (3014 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (3014 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #ru #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (3014 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Slovak-SNK | Feature | Description | | --- | --- | | **Name** | `sk_udv25_slovaksnk_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (4879 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `#`, `%`, `0`, `?`, `AAfp1x`, `AAfp1x:q`, `AAfp1x:r`, `AAfp1y`, `AAfp1z`, `AAfp2x`, `AAfp2x:r`, `AAfp2y`, `AAfp2z`, `AAfp3x`, `AAfp4x`, `AAfp4y`, `AAfp6x`, `AAfp6x:q`, `AAfp6y`, `AAfp7x`, `AAfs1:r`, `AAfs1x`, `AAfs1x:q`, `AAfs1x:r`, `AAfs1y`, `AAfs1z`, `AAfs2x`, `AAfs2x:q`, `AAfs2x:r`, `AAfs2z`, `AAfs2z:r`, `AAfs3x`, `AAfs3x:r`, `AAfs3y`, `AAfs4x`, `AAfs4x:r`, `AAfs4y`, `AAfs4z`, `AAfs6x`, `AAfs6x:r`, `AAfs6y`, `AAfs7x`, `AAfs7x:r`, `AAfs7z`, `AAip1x`, `AAip1y`, `AAip2x`, `AAip2x:r`, `AAip2y`, `AAip2z`, `AAip3x`, `AAip4x`, `AAip4y`, `AAip4z`, `AAip6x`, `AAip6x:r`, `AAip6z`, `AAip7x`, `AAis1x`, `AAis1x:r`, `AAis1y`, `AAis1z`, `AAis2`, `AAis2x`, `AAis2x:r`, `AAis2y`, `AAis2z`, `AAis3x`, `AAis3x:r`, `AAis3y`, `AAis3z`, `AAis4x`, `AAis4x:r`, `AAis4y`, `AAis4z`, `AAis6x`, `AAis6x:r`, `AAis6z`, `AAis7`, `AAis7x`, `AAis7y`, `AAis7z`, `AAmp1`, `AAmp1x`, `AAmp1y`, `AAmp1z`, `AAmp2x`, `AAmp2y`, `AAmp2z`, `AAmp3x`, `AAmp3z`, `AAmp4x`, `AAmp4y`, `AAmp6x`, `AAmp7x`, `AAmp7z`, `AAms1:r`, `AAms1x`, `AAms1x:q`, `AAms1x:r`, `AAms1y`, `AAms1z`, `AAms2x`, `AAms2x:r`, `AAms3x`, `AAms4x`, `AAms4y`, `AAms4z`, `AAms6x`, `AAms7x`, `AAms7x:r`, `AAms7z`, `AAnp1x`, `AAnp1x:r`, `AAnp1y`, `AAnp2x`, `AAnp2z`, `AAnp3x`, `AAnp4x`, `AAnp4z`, `AAnp6x`, `AAnp7x`, `AAns1x`, `AAns1x:r`, `AAns1y`, `AAns1z`, `AAns2:r`, `AAns2x`, `AAns2x:r`, `AAns2y`, `AAns3x`, `AAns3y`, `AAns4x`, `AAns4x:q`, `AAns4x:r`, `AAns4y`, `AAns4z`, `AAns6x`, `AAns6x:r`, `AAns6y`, `AAns7x`, `AAns7z`, `AAop2x`, `AFfp1x:r`, `AFfp2x`, `AFfp2x:r`, `AFfp3x`, `AFfp4:r`, `AFfp4x`, `AFfp4x:r`, `AFfs1:r`, `AFfs1x`, `AFfs1x:r`, `AFfs2x`, `AFfs2x:r`, `AFfs3x:r`, `AFfs4x`, `AFfs4x:r`, `AFfs6x`, `AFfs6x:r`, `AFfs7x`, `AFfs7x:r`, `AFip1x`, `AFip1x:r`, `AFip2x:r`, `AFip4x`, `AFip4x:r`, `AFip6x:r`, `AFip7x:r`, `AFis1x`, `AFis1x:r`, `AFis2x:r`, `AFis4x`, `AFis4x:r`, `AFis6x:r`, `AFis7x`, `AFis7x:r`, `AFmp1x:r`, `AFmp1y`, `AFmp2x`, `AFmp3x:r`, `AFmp4x:r`, `AFmp7x`, `AFms1x`, `AFms1x:r`, `AFms2x`, `AFms2x:r`, `AFms4x`, `AFms4x:r`, `AFms7x`, `AFms7x:r`, `AFnp1x`, `AFnp1x:r`, `AFnp2x`, `AFnp2x:r`, `AFnp4x:r`, `AFns1x`, `AFns1x:r`, `AFns2x`, `AFns2x:r`, `AFns3x:r`, `AFns4x`, `AFns4x:r`, `AFns6x`, `AFns6x:r`, `AUfp1x`, `AUfs1x`, `AUfs2x`, `AUmp1x`, `AUmp1y`, `AUmp1z`, `AUms1x`, `AUms1y`, `AUms1z`, `AUns1x`, `AUns1z`, `DX`, `Dx`, `Dx:q`, `Dy`, `Dz`, `Eu2`, `Eu2:q`, `Eu3`, `Eu4`, `Eu4:q`, `Eu6`, `Eu6:q`, `Eu7`, `Eu7:q`, `Ev2`, `Ev3`, `Ev4`, `Ev6`, `Ev7`, `Ev7:q`, `Gkfp1x`, `Gkfp2x`, `Gkfp3x`, `Gkfp4x`, `Gkfp6x`, `Gkfp7x`, `Gkfs1x`, `Gkfs1x:r`, `Gkfs1y`, `Gkfs2x`, `Gkfs2x:q`, `Gkfs4x`, `Gkfs6x`, 
`Gkfs7x`, `Gkip1x`, `Gkip2x`, `Gkip4x`, `Gkip6x`, `Gkis1x`, `Gkis2x`, `Gkis4x`, `Gkis7x`, `Gkmp1x`, `Gkmp2x`, `Gkmp4x`, `Gkmp7x`, `Gkms1x`, `Gkms2x`, `Gkms4x`, `Gkms7x`, `Gknp1x`, `Gknp2x`, `Gknp4x`, `Gknp6x`, `Gkns1x`, `Gkns2x`, `Gkns3x`, `Gkns4x`, `Gkns6x`, `Gkop2x`, `Gtfp1x`, `Gtfp2x`, `Gtfp2x:r`, `Gtfp3x`, `Gtfp4x`, `Gtfp6x`, `Gtfp7x`, `Gtfs1x`, `Gtfs2x`, `Gtfs4x`, `Gtfs6x`, `Gtfs7x`, `Gtip1x`, `Gtip2x`, `Gtip4x`, `Gtip6x`, `Gtip7x`, `Gtip7y`, `Gtis1x`, `Gtis2x`, `Gtis3x`, `Gtis4x`, `Gtis6x`, `Gtis7x`, `Gtmp1x`, `Gtmp2x`, `Gtmp3x`, `Gtmp4x`, `Gtmp7x`, `Gtms1x`, `Gtms1x:r`, `Gtms1z`, `Gtms3x`, `Gtms4x:r`, `Gtnp1x`, `Gtnp2x`, `Gtnp4x`, `Gtnp6x`, `Gtnp7x`, `Gtns1x`, `Gtns2x`, `Gtns3x`, `Gtns4x`, `Gtns6x`, `Gtns7x`, `Gtop1x`, `Gtop2x`, `J`, `NAfp1`, `NAfp2`, `NAfp4`, `NAfp6`, `NAfs1`, `NAfs2`, `NAfs4`, `NAfs6`, `NAfs7`, `NAip1`, `NAip2`, `NAip3`, `NAip4`, `NAip6`, `NAis1`, `NAis2`, `NAis4`, `NAis6`, `NAis7`, `NAmp1`, `NAmp3`, `NAmp4`, `NAmp6`, `NAms1`, `NAms2`, `NAms3`, `NAms7`, `NAnp1`, `NAnp4`, `NAns1`, `NAns2`, `NAns3`, `NAns4`, `NAns6`, `NAns7`, `ND`, `NFfs1`, `NFfs2`, `NFfs4`, `NFfs6`, `NFfs7`, `NFis1`, `NFis2`, `NFis4`, `NFis6`, `NFis7`, `NFms1`, `NFms3`, `NFms4`, `NFms6`, `NFms7`, `NFns1`, `NFns2`, `NFns4`, `NFns6`, `NFns7`, `NNfp1`, `NNfp2`, `NNfp3`, `NNfp4`, `NNfp6`, `NNfp7`, `NNfs1`, `NNip1`, `NNip1:r`, `NNip2`, `NNip4`, `NNip6`, `NNip7`, `NNmp1`, `NNmp2`, `NNmp3`, `NNmp4`, `NNmp7`, `NNnp1`, `NNnp2`, `NNnp4`, `NNnp6`, `NNnp7`, `NNns1`, `NNop1`, `NSfs4`, `NSip2`, `NSip4`, `NSis1`, `NSis4`, `NSnp1`, `NUfp1`, `NUfp2`, `NUfp3`, `NUfp6`, `NUfp7`, `NUfs6`, `NUip2`, `NUip4`, `NUip6`, `NUip7`, `NUmp1`, `NUmp2`, `NUmp3`, `NUnp1`, `NUnp2`, `NUnp4`, `NUns1`, `NUns4`, `NX`, `O`, `O:q`, `O:r`, `OY`, `PAfp1`, `PAfp1:q`, `PAfp2`, `PAfp3`, `PAfp4`, `PAfp6`, `PAfp7`, `PAfs1`, `PAfs2`, `PAfs3`, `PAfs4`, `PAfs6`, `PAfs7`, `PAip1`, `PAip1:q`, `PAip2`, `PAip4`, `PAip6`, `PAip7`, `PAis1`, `PAis2`, `PAis3`, `PAis4`, `PAis6`, `PAis7`, `PAmp1`, `PAmp2`, `PAmp3`, `PAmp4`, `PAmp7`, `PAms1`, `PAms1:q`, `PAms2`, `PAms3`, `PAms4`, `PAms6`, `PAms7`, `PAnp1`, `PAnp2`, `PAnp4`, `PAnp6`, `PAnp7`, `PAns1`, `PAns1:q`, `PAns2`, `PAns3`, `PAns4`, `PAns6`, `PAop1`, `PAop4`, `PD`, `PFfp1`, `PFfp2`, `PFfp3`, `PFfp4`, `PFfp6`, `PFfp7`, `PFfs1`, `PFfs2`, `PFfs2:r`, `PFfs3`, `PFfs4`, `PFfs4:q`, `PFfs6`, `PFfs7`, `PFfs7:r`, `PFip1`, `PFip2`, `PFip3`, `PFip4`, `PFip6`, `PFip7`, `PFis1`, `PFis2`, `PFis2g`, `PFis3`, `PFis4`, `PFis4g`, `PFis6`, `PFis7`, `PFis7:q`, `PFis7:r`, `PFmp1`, `PFmp2`, `PFmp3`, `PFmp3:q`, `PFmp4`, `PFmp6`, `PFmp7`, `PFms1`, `PFms1:r`, `PFms2`, `PFms2g`, `PFms3`, `PFms3:r`, `PFms4`, `PFms4:r`, `PFms4g`, `PFms6`, `PFms7`, `PFms7:q`, `PFms7:r`, `PFnp1`, `PFnp2`, `PFnp3`, `PFnp4`, `PFnp6`, `PFnp7`, `PFns1`, `PFns1:q`, `PFns2`, `PFns2g`, `PFns3`, `PFns4`, `PFns4g`, `PFns6`, `PFns7`, `PFop2`, `PFop4`, `PFop4:r`, `PPhp1`, `PPhp2`, `PPhp3`, `PPhp4`, `PPhp4:r`, `PPhp6`, `PPhp7`, `PPhs1`, `PPhs2`, `PPhs3`, `PPhs3:r`, `PPhs4`, `PPhs6`, `PPhs7`, `PUfp1`, `PUfp2`, `PUfp4`, `PUfp6`, `PUfp7`, `PUfs1`, `PUfs2`, `PUfs3`, `PUfs4`, `PUfs6`, `PUfs7`, `PUip1`, `PUip2`, `PUip4`, `PUip6`, `PUip7`, `PUis1`, `PUis2`, `PUis3`, `PUis4`, `PUis6`, `PUis7`, `PUmp1`, `PUmp2`, `PUmp4`, `PUms1`, `PUms3`, `PUms7`, `PUnp1`, `PUnp2`, `PUnp3`, `PUnp4`, `PUnp6`, `PUns1`, `PUns2`, `PUns3`, `PUns4`, `PUns6`, `PUns7`, `PUop4`, `Q`, `R`, `R:q`, `SAfp1`, `SAfp6`, `SAfs1`, `SAfs1:r`, `SAfs2`, `SAfs2:r`, `SAfs3:r`, `SAfs4`, `SAfs4:r`, `SAfs6`, `SAfs6:r`, `SAmp1`, `SAmp2`, `SAmp3`, `SAmp4`, `SAmp6`, `SAms1`, `SAms1:r`, `SAms3`, `SAms4`, `SAms4:r`, 
`SAms5`, `SAms6`, `SAms7`, `SAms7:r`, `SFfp1:q`, `SFfs1`, `SFfs1:r`, `SFfs3`, `SFfs4`, `SFfs4:r`, `SFfs7`, `SFfs7:r`, `SFmp1:r`, `SFms1`, `SFms1:r`, `SFms2:r`, `SFms3:r`, `SFms4:r`, `SFms5:r`, `SFms7:r`, `SSfp1`, `SSfp1:r`, `SSfp2`, `SSfp2:r`, `SSfp3`, `SSfp4`, `SSfp4:q`, `SSfp4:r`, `SSfp6`, `SSfp6:q`, `SSfp6:r`, `SSfp7`, `SSfp7:q`, `SSfs1`, `SSfs1:r`, `SSfs2`, `SSfs2:q`, `SSfs2:r`, `SSfs3`, `SSfs3:r`, `SSfs4`, `SSfs4:q`, `SSfs4:r`, `SSfs5`, `SSfs6`, `SSfs6:q`, `SSfs6:r`, `SSfs7`, `SSfs7:q`, `SSfs7:r`, `SSfs7:rq`, `SSip1`, `SSip1:r`, `SSip2`, `SSip3`, `SSip4`, `SSip6`, `SSip6:r`, `SSip7`, `SSis1`, `SSis1:q`, `SSis1:r`, `SSis2`, `SSis2:q`, `SSis2:r`, `SSis2:rq`, `SSis3`, `SSis3:r`, `SSis4`, `SSis4:q`, `SSis4:r`, `SSis6`, `SSis6:r`, `SSis7`, `SSis7:q`, `SSis7:r`, `SSmp1`, `SSmp1:q`, `SSmp1:r`, `SSmp2`, `SSmp2:r`, `SSmp3`, `SSmp3:r`, `SSmp4`, `SSmp4:r`, `SSmp5`, `SSmp6`, `SSmp7`, `SSmp7:q`, `SSmp7:r`, `SSms1`, `SSms1:q`, `SSms1:r`, `SSms1:r:q`, `SSms1:rq`, `SSms2`, `SSms2:r`, `SSms2:rq`, `SSms3`, `SSms3:r`, `SSms4`, `SSms4:r`, `SSms5`, `SSms5:r`, `SSms6`, `SSms6:r`, `SSms7`, `SSms7:r`, `SSnp1`, `SSnp1:r`, `SSnp2`, `SSnp3`, `SSnp4`, `SSnp6`, `SSnp7`, `SSns1`, `SSns1:r`, `SSns2`, `SSns2:r`, `SSns3`, `SSns4`, `SSns4:r`, `SSns6`, `SSns6:r`, `SSns7`, `SSns7:r`, `SUfs1`, `SUfs1:r`, `SUfs2`, `SUfs2:r`, `SUfs3:r`, `SUfs4`, `SUfs4:r`, `SUfs6:r`, `SUfs7:r`, `SUis1:r`, `SUms1:r`, `SUms2:r`, `SUms4:r`, `SUms7:r`, `SUnp1:r`, `SUnp4`, `SUns1`, `SUns1:r`, `SUns4`, `SUns6`, `SUns6:r`, `SUns7:r`, `Ssfs1:r`, `T`, `T:q`, `TY`, `VBdsb-`, `VBdsc+`, `VBepa+`, `VBepa-`, `VBepb+`, `VBepb-`, `VBepc+`, `VBepc-`, `VBesa+`, `VBesa-`, `VBesb+`, `VBesb-`, `VBesc+`, `VBesc-`, `VHd+`, `VHe+`, `VHe-`, `VId+`, `VId-`, `VIe+`, `VIe+:q`, `VIe-`, `VIj+`, `VKdpa+`, `VKdpa-`, `VKdpb+`, `VKdpb-`, `VKdpb-:q`, `VKdpc+`, `VKdpc+:q`, `VKdpc-`, `VKdsa+`, `VKdsa+:q`, `VKdsa-`, `VKdsb+`, `VKdsb-`, `VKdsc+`, `VKdsc-`, `VKepa+`, `VKepa-`, `VKepa-:q`, `VKepb+`, `VKepb-`, `VKepc+`, `VKepc+:q`, `VKepc-`, `VKesa+`, `VKesa-`, `VKesb+`, `VKesb-`, `VKesc+`, `VKesc-`, `VKjpa+`, `VKjpb+`, `VKjpb-`, `VKjpc+`, `VKjpc-`, `VKjsa+`, `VKjsa-`, `VKjsb+`, `VKjsc+`, `VKjsc-`, `VLdpah+`, `VLdpah-`, `VLdpbh+`, `VLdpbh-`, `VLdpbm+`, `VLdpbm-`, `VLdpc+`, `VLdpcf+`, `VLdpcf+:q`, `VLdpcf-`, `VLdpci+`, `VLdpci+:q`, `VLdpci-`, `VLdpcm+`, `VLdpcm+:q`, `VLdpcm-`, `VLdpcn+`, `VLdpcn-`, `VLdpco+`, `VLdpco+:q`, `VLdpco-`, `VLdsaf+`, `VLdsaf+:q`, `VLdsaf-`, `VLdsam+`, `VLdsam-`, `VLdsbf+`, `VLdsbf-`, `VLdsbm+`, `VLdsbm-`, `VLdsc+`, `VLdscf+`, `VLdscf-`, `VLdsci+`, `VLdsci-`, `VLdscm+`, `VLdscm-`, `VLdscn+`, `VLdscn+:q`, `VLdscn-`, `VLepah+`, `VLepah-`, `VLepam+`, `VLepam-`, `VLepbh+`, `VLepbh-`, `VLepbh-:q`, `VLepbm+`, `VLepcf+`, `VLepcf-`, `VLepci+`, `VLepci+:q`, `VLepci-`, `VLepcm+`, `VLepcm-`, `VLepcn+`, `VLepcn-`, `VLepco+`, `VLepco-`, `VLesaf+`, `VLesaf-`, `VLesam+`, `VLesam-`, `VLesbf+`, `VLesbf-`, `VLesbm+`, `VLesbm-`, `VLesc+`, `VLesc-`, `VLescf+`, `VLescf-`, `VLesci+`, `VLesci-`, `VLescm+`, `VLescm+:q`, `VLescm-`, `VLescm-:q`, `VLescn+`, `VLescn-`, `VLjpah+`, `VLjpbh+`, `VLjpcf+`, `VLjpci+`, `VLjpcm+`, `VLjpco+`, `VLjsaf+`, `VLjsaf-`, `VLjsam+`, `VLjsam-`, `VLjscf+`, `VLjscf-`, `VLjsci+`, `VLjsci-`, `VLjscm+`, `VLjscm-`, `VLjscn+`, `VLjscn-`, `VMdpa+`, `VMdpa-`, `VMdpb+`, `VMdpb+:r`, `VMdpb-`, `VMdsb+`, `VMdsb-`, `VMepa+`, `VMepa-`, `VMepb+`, `VMepb-`, `VMepc-`, `VMesb+`, `VMesb-`, `VMjpb+`, `VMjsb+`, `Vje+`, `W`, `Wms1`, `Y`, `Z`, `ZIP` | | **`morphologizer`** | `POS=ADV\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=PART`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Cnd\|POS=SCONJ`, `POS=PRON\|PronType=Prs\|Reflex=Yes`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Inf`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Animacy=Anim\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Anim\|Aspect=Perf\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `AdpType=Prep\|Case=Gen\|POS=ADP`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Aspect=Perf\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `AdpType=Prep\|Case=Loc\|POS=ADP`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Aspect=Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Foreign=Yes\|POS=X`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `AdpType=Prep\|Case=Acc\|POS=ADP`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `AdpType=Preppron\|Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `POS=CCONJ`, 
`Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `AdpType=Voc\|Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `POS=SCONJ`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `AdpType=Prep\|Case=Ins\|POS=ADP`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `AdpType=Voc\|Case=Gen\|POS=ADP`, `NumForm=Digit\|POS=NUM`, `Hyph=Yes\|POS=X`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Aspect=Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `AdpType=Prep\|Case=Dat\|POS=ADP`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Animacy=Inan\|Aspect=Imp,Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp,Perf\|POS=VERB\|Polarity=Pos\|VerbForm=Inf`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `AdpType=Voc\|Case=Dat\|POS=ADP`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Gender=Neut\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Gender=Neut\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Loc\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NUM`, `Aspect=Perf\|POS=VERB\|Polarity=Pos\|VerbForm=Inf`, `Aspect=Imp\|Gender=Neut\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Aspect=Perf\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Abbr=Yes\|POS=PROPN`, `Aspect=Imp\|POS=VERB\|Polarity=Neg\|VerbForm=Inf`, `Mood=Cnd\|POS=AUX`, `Aspect=Imp\|Gender=Neut\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, 
`Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Aspect=Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Gender=Fem\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Gender=Neut\|Number=Plur\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|POS=X`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, 
`Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Aspect=Perf\|POS=VERB\|Polarity=Neg\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem\|Typo=Yes`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `POS=INTJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, 
`Animacy=Inan\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `AdpType=Prep\|Case=Loc\|POS=ADP\|Typo=Yes`, `Animacy=Anim\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Animacy=Inan\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=ADV\|Typo=Yes`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Gender=Fem\|Number=Plur\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=ADV\|PronType=Neg`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Aspect=Imp,Perf\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, 
`POS=ADV\|PronType=Ind`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Anim\|Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Aspect=Imp,Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Imp\|POS=AUX\|Polarity=Pos\|VerbForm=Inf`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NUM`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NUM`, 
`Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Sup\|POS=ADV`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `AdpType=Voc\|Case=Ins\|POS=ADP`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Number=Sing\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Animacy=Anim\|Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Hyph=Yes\|POS=ADV`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=ADV\|PronType=Tot`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Ins\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Nom\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `NumType=Mult\|POS=ADV`, `Case=Dat\|Number=Sing\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, 
`Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Animacy=Anim\|Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `AdpType=Preppron\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, 
`Case=Loc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Aspect=Perf\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Animacy=Inan\|Case=Ins\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|Typo=Yes`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Abbr=Yes\|POS=ADV`, `Animacy=Anim\|Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|Typo=Yes`, `Case=Gen\|Gender=Fem\|NumType=Mult\|Number=Plur\|POS=ADJ`, `AdpType=Prep\|Case=Ins\|POS=ADP\|Typo=Yes`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Number=Sing\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Fem\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel\|Typo=Yes`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `POS=CCONJ\|Typo=Yes`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `AdpType=Prep\|Case=Gen\|POS=ADP\|Typo=Yes`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Loc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `POS=PRON\|PronType=Prs\|Reflex=Yes\|Typo=Yes`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `AdpType=Voc\|Case=Ins\|POS=ADP\|Typo=Yes`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NUM`, 
`Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NUM`, `Aspect=Perf\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Animacy=Inan\|Aspect=Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Ins\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Animacy=Inan\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, 
`Animacy=Anim\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp,Perf\|Gender=Neut\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|VerbForm=Conv`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `AdpType=Preppron\|Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADV`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|Gender=Neut\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Gender=Neut\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Imp,Perf\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Aspect=Imp,Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `POS=NUM`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Anim\|Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Imp,Perf\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Aspect=Imp,Perf\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Aspect=Imp\|Number=Plur\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp,Perf\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NUM`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Cnd\|POS=PART`, 
`Case=Loc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Aspect=Imp\|Gender=Fem\|Number=Sing\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADV`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Prs`, `AdpType=Preppron\|Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Gender=Neut\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Aspect=Imp,Perf\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADJ`, `NumType=Mult\|POS=ADV\|PronType=Ind`, `Aspect=Imp,Perf\|Gender=Neut\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=AUX\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Emp`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, 
`Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Animacy=Anim\|Case=Ins\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Aspect=Imp\|POS=VERB\|Polarity=Pos\|Typo=Yes\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Neg`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, 
`Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem\|Typo=Yes`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Ins\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Gender=Neut\|Number=Sing\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `ConjType=Oper\|POS=SYM`, `POS=SYM`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Aspect=Imp,Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, 
`Animacy=Inan\|Case=Ins\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Gender=Neut\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|Typo=Yes`, `POS=DET\|PronType=Dem`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Animacy=Anim\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NUM`, `Aspect=Imp\|Gender=Fem\|Number=Plur\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Emp`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADV`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Aspect=Imp,Perf\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, 
`Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NUM`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Fem\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=X`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|VerbForm=Fin`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes\|Typo=Yes`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Inan\|Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Aspect=Perf\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `AdpType=Prep\|Case=Acc\|POS=ADP\|Typo=Yes`, `Animacy=Inan\|Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NUM`, 
`Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Neut\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `AdpType=Voc\|Case=Acc\|POS=ADP`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, 
`Animacy=Inan\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `AdpType=Preppron\|Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Aspect=Imp\|Gender=Neut\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp,Perf\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|POS=VERB\|Polarity=Neg\|VerbForm=Conv`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, 
`Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Gender=Neut\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=ADV`, `Aspect=Imp\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|VerbForm=Part`, 
`Aspect=Imp\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `POS=PART\|Typo=Yes`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|PronType=Tot`, `Animacy=Anim\|Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `NumType=Mult\|POS=ADV\|PronType=Int,Rel`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Imp\|Number=Plur\|POS=VERB\|Polarity=Neg\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Animacy=Anim\|Aspect=Perf\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel\|Typo=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Emp`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Pos\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|VerbForm=Fin`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Anim\|Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, 
`Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Pos\|VerbForm=Fin`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `NumType=Card\|POS=DET\|PronType=Dem`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `AdpType=Preppron\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Dat\|Gender=Neut\|NumType=Mult\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADV`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Gender=Neut\|Number=Sing\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Case=Nom\|Gender=Neut\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Animacy=Anim\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Part`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel\|Typo=Yes`, `Animacy=Inan\|Aspect=Imp\|Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel\|Typo=Yes`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Aspect=Imp,Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Pos\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Polarity=Neg\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Case=Dat\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Aspect=Imp\|POS=VERB\|Polarity=Pos`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:emph`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `det:numgov`, `discourse`, `expl:pass`, `expl:pv`, `fixed`, `flat`, `flat:foreign`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:arg`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `6`, `8`, `10`, `11`, `13`, `14`, `16`, `17`, `19`, `20`, `23`, `25`, `27`, `29`, `32`, `34`, `36`, `40`, `42`, `44`, `46`, `48`, `50`, `52`, `54`, `55`, `57`, `59`, `61`, `64`, `66`, `67`, `71`, `73`, `74`, `75`, `77`, `79`, `81`, `83`, `86`, `87`, `89`, `91`, `95`, `97`, `99`, `102`, `12`, `104`, `106`, `108`, `110`, `112`, `114`, `116`, `117`, `119`, `121`, `123`, `125`, `127`, `129`, `131`, `133`, `136`, `138`, `141`, `143`, `146`, `150`, `151`, `153`, `156`, `157`, `159`, `160`, `162`, `163`, `165`, `167`, `168`, `170`, `173`, `174`, `176`, `178`, `180`, `181`, `182`, `183`, `185`, `187`, `190`, `191`, `193`, `195`, `28`, `198`, `200`, `202`, `204`, `206`, `209`, `210`, `212`, `213`, `216`, `219`, `220`, `222`, `223`, `225`, `228`, `230`, `232`, `235`, `237`, `238`, `240`, `241`, `242`, `243`, `244`, `247`, `249`, `251`, `252`, `255`, `257`, `258`, `260`, `262`, `264`, `266`, `267`, `269`, `270`, `271`, `272`, `62`, `274`, `275`, `277`, `278`, `279`, `280`, `281`, `282`, `284`, `286`, `287`, `31`, `289`, `290`, `291`, `293`, `294`, `295`, `297`, `299`, `300`, `301`, `303`, `304`, `305`, `306`, `307`, `308`, `309`, `310`, `312`, `313`, `314`, `315`, `318`, `319`, `321`, `323`, `324`, `326`, `328`, `330`, `331`, `333`, `334`, `335`, `336`, `337`, `338`, `339`, `341`, `343`, `345`, `346`, `348`, `349`, `350`, `351`, `352`, `353`, `354`, `357`, `359`, `360`, `361`, `362`, `364`, `366`, `368`, `369`, `370`, `372`, `373`, `374`, `375`, `376`, `377`, `378`, `379`, `380`, `381`, `382`, `384`, `385`, `386`, `387`, `388`, `389`, `391`, `394`, `397`, `398`, `399`, `401`, `402`, `403`, `404`, `405`, `406`, `407`, `408`, `409`, `410`, `412`, `413`, `415`, `417`, `418`, `420`, `422`, `423`, `424`, `427`, `428`, `430`, `432`, `433`, `434`, `435`, `436`, `439`, `440`, `441`, `442`, `443`, `444`, `445`, `447`, `449`, `452`, `454`, `456`, `457`, `458`, `459`, `460`, `461`, `462`, `464`, `465`, `466`, `468`, `469`, `471`, `473`, `474`, `476`, `478`, `480`, `481`, `482`, `483`, `486`, `488`, `489`, `490`, `491`, `492`, `493`, `495`, `496`, `497`, `256`, `498`, `499`, `500`, `501`, `502`, `503`, `504`, `505`, `506`, `507`, `508`, `509`, `510`, `512`, `513`, `514`, `515`, `516`, `517`, `518`, `519`, `520`, `521`, `522`, `523`, `524`, `525`, `0`, `526`, `528`, `529`, `530`, `533`, `534`, `536`, `537`, `538`, `539`, `540`, `541`, `542`, `543`, `544`, `545`, `547`, `548`, `549`, `550`, `552`, `553`, `554`, `555`, `557`, `558`, `559`, `561`, `563`, `564`, `565`, `566`, `567`, `568`, `569`, `570`, `571`, `572`, `573`, `574`, `575`, `576`, `577`, `578`, `579`, `580`, `581`, `583`, `584`, `586`, `587`, `588`, `589`, `590`, `591`, `592`, `594`, `596`, `597`, `599`, `601`, `603`, `605`, `607`, `609`, `612`, `614`, `616`, `618`, `619`, `41`, `621`, `623`, `625`, `626`, `627`, `628`, 
`630`, `633`, `634`, `636`, `640`, `641`, `643`, `644`, `646`, `647`, `649`, `651`, `653`, `655`, `657`, `659`, `661`, `663`, `665`, `666`, `667`, `669`, `671`, `672`, `673`, `674`, `675`, `677`, `679`, `680`, `681`, `684`, `685`, `686`, `687`, `688`, `690`, `691`, `692`, `694`, `695`, `696`, `697`, `698`, `700`, `703`, `705`, `707`, `708`, `710`, `711`, `712`, `713`, `714`, `113`, `716`, `717`, `719`, `84`, `721`, `723`, `724`, `726`, `727`, `728`, `729`, `730`, `732`, `734`, `735`, `737`, `740`, `742`, `744`, `747`, `748`, `750`, `751`, `753`, `755`, `757`, `760`, `764`, `47`, `765`, `766`, `768`, `769`, `771`, `772`, `773`, `774`, `777`, `234`, `779`, `781`, `782`, `783`, `785`, `786`, `788`, `790`, `792`, `793`, `794`, `797`, `798`, `800`, `802`, `804`, `805`, `806`, `807`, `809`, `810`, `812`, `815`, `816`, `817`, `818`, `820`, `821`, `822`, `824`, `825`, `826`, `827`, `828`, `830`, `832`, `834`, `835`, `836`, `837`, `838`, `839`, `840`, `842`, `844`, `846`, `848`, `850`, `852`, `855`, `857`, `859`, `861`, `862`, `863`, `864`, `865`, `866`, `867`, `98`, `868`, `869`, `870`, `873`, `263`, `874`, `875`, `876`, `877`, `878`, `879`, `881`, `882`, `884`, `885`, `887`, `888`, `889`, `891`, `892`, `896`, `897`, `898`, `899`, `900`, `901`, `902`, `903`, `904`, `905`, `906`, `907`, `908`, `909`, `910`, `911`, `913`, `914`, `915`, `917`, `919`, `920`, `921`, `922`, `923`, `926`, `927`, `928`, `929`, `931`, `932`, `934`, `936`, `939`, `941`, `943`, `945`, `946`, `947`, `949`, `951`, `952`, `953`, `955`, `958`, `959`, `960`, `961`, `962`, `964`, `965`, `967`, `968`, `969`, `971`, `973`, `975`, `976`, `977`, `978`, `980`, `982`, `983`, `984`, `985`, `986`, `987`, `989`, `992`, `993`, `996`, `998`, `999`, `1000`, `1001`, `1003`, `1004`, `1005`, `1006`, `1007`, `1008`, `1009`, `1010`, `1011`, `1012`, `1013`, `1015`, `1017`, `1019`, `1021`, `1022`, `1023`, `1024`, `1025`, `1027`, `1028`, `1030`, `1031`, `1035`, `1037`, `1038`, `1039`, `1042`, `1043`, `1045`, `1046`, `1047`, `1049`, `1051`, `1052`, `1054`, `1056`, `1057`, `1058`, `1060`, `1061`, `1062`, `1064`, `1065`, `1066`, `1067`, `1068`, `1069`, `1071`, `1073`, `1074`, `1076`, `1078`, `1079`, `1080`, `1081`, `1082`, `1083`, `1086`, `1088`, `1090`, `1091`, `1092`, `1094`, `1095`, `1096`, `1099`, `1100`, `1101`, `1102`, `1103`, `1104`, `1106`, `1107`, `1108`, `1109`, `1110`, `1112`, `1114`, `1116`, `1118`, `1119`, `1121`, `1122`, `1124`, `1125`, `1126`, `1127`, `107`, `1128`, `1129`, `1130`, `1132`, `1133`, `1136`, `1137`, `1139`, `1141`, `1143`, `1145`, `1146`, `1147`, `1148`, `1150`, `1151`, `1153`, `1155`, `1156`, `1157`, `1160`, `1161`, `1162`, `1163`, `1164`, `1167`, `1169`, `1170`, `1171`, `1172`, `1173`, `1174`, `1175`, `1176`, `1177`, `1178`, `1180`, `1181`, `1182`, `1183`, `1184`, `1186`, `1188`, `1189`, `1191`, `1193`, `1194`, `1195`, `1196`, `1198`, `1201`, `1203`, `1204`, `1205`, `7`, `80`, `1207`, `1209`, `1210`, `1212`, `1213`, `1215`, `1216`, `1217`, `1218`, `1219`, `1220`, `1221`, `1223`, `1225`, `1226`, `1228`, `1229`, `1232`, `1234`, `1236`, `1238`, `1239`, `1240`, `1241`, `1242`, `1243`, `1244`, `1246`, `1248`, `1250`, `1252`, `1254`, `1256`, `1257`, `1259`, `1260`, `1263`, `1265`, `1266`, `1267`, `1269`, `1272`, `1274`, `1275`, `1276`, `1277`, `1279`, `1281`, `1284`, `1285`, `1288`, `1289`, `1290`, `1291`, `1292`, `1293`, `1294`, `1295`, `1297`, `1298`, `1299`, `1301`, `1302`, `1303`, `1305`, `1307`, `1308`, `1309`, `1310`, `1311`, `1312`, `1314`, `1315`, `1316`, `1317`, `1318`, `1320`, `1322`, `1323`, `1325`, `1326`, `1328`, 
`1329`, `1331`, `194`, `1333`, `1335`, `1337`, `1339`, `1341`, `1342`, `1343`, `1344`, `1345`, `1347`, `1348`, `1350`, `1351`, `1353`, `1354`, `1356`, `1358`, `1359`, `1360`, `1361`, `1363`, `1364`, `1365`, `1366`, `1368`, `1370`, `1372`, `1374`, `1375`, `1377`, `1378`, `1380`, `1382`, `1383`, `1384`, `1385`, `1386`, `1387`, `1388`, `1392`, `1395`, `1397`, `1398`, `1400`, `1401`, `1402`, `1403`, `1404`, `1405`, `1406`, `1408`, `1410`, `1412`, `1413`, `1414`, `1235`, `1415`, `1417`, `1418`, `1419`, `1420`, `1421`, `1422`, `1423`, `1425`, `1426`, `1427`, `1428`, `1283`, `1430`, `1431`, `1432`, `1434`, `1435`, `1437`, `1439`, `1440`, `1442`, `1443`, `1444`, `1446`, `1447`, `1448`, `1449`, `1450`, `1452`, `1453`, `1454`, `1457`, `1458`, `1459`, `1461`, `1462`, `1464`, `1465`, `1466`, `1467`, `1469`, `1470`, `1471`, `1472`, `1473`, `1474`, `1475`, `1477`, `1478`, `1479`, `1480`, `1481`, `1483`, `1485`, `1487`, `1488`, `1489`, `1491`, `1493`, `1494`, `1496`, `1498`, `1500`, `1501`, `1503`, `1504`, `1271`, `1505`, `1506`, `1507`, `1508`, `1509`, `1511`, `1512`, `1513`, `1516`, `1517`, `1518`, `1519`, `1521`, `1522`, `1523`, `1526`, `1528`, `1530`, `1531`, `1532`, `1534`, `1535`, `1536`, `1537`, `1538`, `1539`, `1540`, `1542`, `966`, `1544`, `1545`, `1546`, `1547`, `1549`, `1551`, `1552`, `1553`, `1554`, `1555`, `1556`, `1557`, `1558`, `1559`, `950`, `1561`, `1562`, `1563`, `1564`, `1565`, `1567`, `1568`, `1569`, `1570`, `1571`, `1572`, `1574`, `1575`, `1577`, `1578`, `1580`, `1582`, `1584`, `1587`, `1590`, `1592`, `1593`, `1596`, `1598`, `1599`, `1600`, `1601`, `1602`, `1603`, `1604`, `1606`, `1607`, `1609`, `1610`, `1611`, `1612`, `1613`, `1615`, `1616`, `1618`, `1619`, `1620`, `1622`, `1624`, `1626`, `1628`, `1629`, `1631`, `1632`, `1633`, `1634`, `1635`, `1636`, `1637`, `1639`, `1640`, `1642`, `1646`, `1647`, `1648`, `1650`, `1651`, `1653`, `1654`, `1655`, `1656`, `1658`, `1660`, `1662`, `1664`, `1665`, `1666`, `1669`, `1671`, `1672`, `1673`, `1674`, `1676`, `1677`, `1678`, `1679`, `1680`, `1681`, `1683`, `1685`, `1686`, `1687`, `1688`, `1689`, `1690`, `1691`, `1692`, `1694`, `1695`, `1696`, `1700`, `1702`, `1703`, `1705`, `1706`, `1709`, `1710`, `1711`, `1712`, `1713`, `1714`, `1715`, `1716`, `1717`, `1721`, `1723`, `1725`, `1726`, `1727`, `1728`, `1729`, `1730`, `1731`, `1732`, `1733`, `1734`, `1736`, `1737`, `1738`, `1739`, `1741`, `1742`, `1745`, `1746`, `1747`, `1749`, `1751`, `1752`, `1753`, `1754`, `1755`, `1756`, `1758`, `1759`, `1760`, `1761`, `1763`, `1766`, `1767`, `1768`, `1770`, `1771`, `1772`, `1773`, `1774`, `1775`, `1777`, `1778`, `1779`, `1780`, `1781`, `1783`, `1785`, `1787`, `1789`, `1792`, `1793`, `1795`, `1797`, `1798`, `1799`, `1801`, `1802`, `1804`, `1805`, `1806`, `1807`, `1808`, `1809`, `1811`, `1813`, `1815`, `1816`, `1818`, `1819`, `1820`, `1822`, `1824`, `1826`, `1828`, `1829`, `1831`, `1832`, `1833`, `1835`, `1837`, `1839`, `1841`, `1842`, `1843`, `1844`, `1845`, `1846`, `1847`, `1848`, `1849`, `1850`, `1851`, `1852`, `1853`, `1855`, `1856`, `1857`, `1858`, `1859`, `1860`, `1862`, `1863`, `1864`, `1865`, `1866`, `1868`, `1869`, `1871`, `1873`, `1874`, `1876`, `1877`, `1878`, `1880`, `1882`, `1884`, `1885`, `1886`, `1887`, `1888`, `1889`, `1890`, `1891`, `1892`, `1894`, `1895`, `1897`, `1898`, `1900`, `1901`, `1903`, `1905`, `1906`, `1907`, `1909`, `1910`, `1911`, `1912`, `1913`, `1914`, `1915`, `1916`, `1917`, `1918`, `1919`, `1920`, `1923`, `1924`, `1925`, `1926`, `1928`, `1929`, `1930`, `1932`, `1934`, `1935`, `1938`, `1939`, `1940`, `1942`, `1943`, `1945`, 
`1946`, `1947`, `1949`, `1950`, `1951`, `1953`, `1954`, `1955`, `1956`, `1957`, `1958`, `1960`, `1961`, `1962`, `1963`, `1964`, `1965`, `1967`, `1968`, `1969`, `1970`, `1973`, `1975`, `1976`, `1977`, `1978`, `1979`, `1981`, `1983`, `1984`, `1985`, `1987`, `1989`, `1990`, `1991`, `1992`, `1994`, `1995`, `1996`, `1997`, `1998`, `2000`, `2002`, `2003`, `2004`, `2005`, `2007`, `2008`, `2010`, `2012`, `2013`, `2014`, `2015`, `1367`, `2017`, `2018`, `2019`, `2021`, `2023`, `2025`, `2026`, `2028`, `2029`, `2030`, `2031`, `2033`, `2035`, `2038`, `2040`, `2041`, `2042`, `2044`, `2046`, `2048`, `2051`, `2052`, `2053`, `2054`, `2055`, `2057`, `2058`, `2059`, `2061`, `2063`, `2064`, `2066`, `2067`, `2068`, `2070`, `2071`, `2072`, `2073`, `2074`, `2075`, `2076`, `2077`, `2078`, `2080`, `2081`, `2084`, `2085`, `2086`, `2087`, `2088`, `2089`, `2090`, `2091`, `2092`, `2094`, `2096`, `2097`, `2098`, `2099`, `2100`, `2101`, `2102`, `2103`, `2104`, `2105`, `2106`, `2108`, `2109`, `2111`, `2112`, `2113`, `2116`, `2117`, `2118`, `2119`, `2120`, `2121`, `2122`, `2123`, `2124`, `2126`, `2127`, `2128`, `2129`, `2130`, `2131`, `2133`, `2134`, `2135`, `2136`, `2139`, `2141`, `2142`, `2143`, `2144`, `2145`, `2146`, `2147`, `2148`, `2149`, `2150`, `2151`, `2152`, `2153`, `2154`, `2155`, `2156`, `2157`, `2158`, `2159`, `2160`, `2161`, `2162`, `2163`, `2164`, `2165`, `2166`, `2168`, `2169`, `2170`, `2171`, `2172`, `2174`, `2175`, `2176`, `2178`, `2180`, `2181`, `2183`, `2184`, `2185`, `2186`, `2187`, `2188`, `2190`, `2193`, `2194`, `2195`, `2196`, `2198`, `2199`, `2201`, `2202`, `2203`, `2205`, `2206`, `2207`, `2210`, `2211`, `2212`, `2214`, `2215`, `2216`, `2217`, `2219`, `2222`, `2225`, `2226`, `2227`, `2228`, `2229`, `2231`, `2233`, `2234`, `2235`, `2237`, `2238`, `2239`, `2240`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2247`, `2249`, `2251`, `2252`, `2253`, `2254`, `2256`, `2258`, `2259`, `2261`, `2262`, `2264`, `2265`, `2266`, `2267`, `2268`, `2269`, `2270`, `2272`, `2274`, `2275`, `2276`, `2277`, `2279`, `2281`, `2282`, `2284`, `2286`, `2290`, `2292`, `2293`, `2294`, `2295`, `2296`, `2297`, `2298`, `2299`, `2300`, `2301`, `2302`, `2303`, `2304`, `2306`, `2307`, `2308`, `2310`, `2311`, `2312`, `2313`, `2314`, `2315`, `2316`, `2317`, `2318`, `2319`, `2320`, `2322`, `2323`, `2324`, `2325`, `2326`, `2327`, `2328`, `2329`, `2330`, `2331`, `2332`, `2334`, `2336`, `2338`, `2339`, `2342`, `2343`, `2344`, `2345`, `2346`, `2347`, `2348`, `2349`, `2350`, `2351`, `2353`, `2355`, `2356`, `2357`, `2358`, `2359`, `2360`, `2361`, `2362`, `2363`, `2364`, `2365`, `2366`, `2368`, `2369`, `2370`, `2372`, `2373`, `2375`, `2376`, `2377`, `2378`, `2380`, `2381`, `2383`, `2385`, `2386`, `2387`, `2389`, `2391`, `2392`, `2393`, `2396`, `2397`, `2398`, `2399`, `2400`, `2401`, `2403`, `2405`, `2407`, `2408`, `2409`, `2410`, `2411`, `2413`, `2414`, `2415`, `2417`, `2419`, `2420`, `2422`, `2423`, `2424`, `2426`, `2427`, `2428`, `2429`, `2430`, `2431`, `2432`, `2433`, `2434`, `2435`, `2436`, `2438`, `2439`, `2441`, `2443`, `2444`, `2445`, `2448`, `2449`, `2451`, `2452`, `2453`, `2454`, `2455`, `2456`, `2457`, `2458`, `2459`, `2460`, `2461`, `2462`, `2463`, `2464`, `2466`, `2467`, `2469`, `2470`, `2471`, `2472`, `2475`, `2476`, `2477`, `2478`, `2479`, `2482`, `2483`, `2484`, `2485`, `2486`, `2487`, `2488`, `2489`, `2490`, `400`, `2491`, `2492`, `2494`, `2495`, `2497`, `2498`, `2500`, `2501`, `2503`, `2504`, `2506`, `2507`, `2508`, `2509`, `2510`, `2511`, `2513`, `2515`, `2516`, `2517`, `2518`, `2519`, `51`, `2521`, `2523`, `2524`, 
`2525`, `2526`, `2527`, `2528`, `2530`, `2531`, `2532`, `2534`, `2535`, `2536`, `2537`, `2539`, `2540`, `2541`, `2542`, `2543`, `2544`, `2545`, `2546`, `2547`, `2549`, `2550`, `2551`, `2552`, `2553`, `2555`, `2556`, `2557`, `2558`, `2559`, `2560`, `2561`, `2562`, `2563`, `2564`, `2566`, `2569`, `2571`, `2572`, `2573`, `2576`, `2577`, `2578`, `2580`, `2581`, `2582`, `2584`, `2585`, `2586`, `2587`, `2588`, `2589`, `2592`, `2593`, `2594`, `2595`, `2596`, `2598`, `2599`, `2600`, `203`, `2601`, `2602`, `2603`, `2604`, `2605`, `2606`, `2607`, `2609`, `2611`, `2612`, `2613`, `2614`, `2615`, `2616`, `2617`, `2618`, `2619`, `2620`, `2622`, `2624`, `2625`, `2626`, `2627`, `2628`, `2630`, `2632`, `2633`, `2634`, `2635`, `2636`, `2637`, `2638`, `2639`, `2640`, `2641`, `2642`, `2643`, `2644`, `2645`, `2646`, `2647`, `2648`, `2649`, `2651`, `2652`, `2654`, `2657`, `2658`, `2660`, `2661`, `2662`, `2663`, `2664`, `2665`, `2666`, `2667`, `2668`, `2669`, `2670`, `2672`, `2673`, `2675`, `2676`, `2677`, `2678`, `472`, `2679`, `2680`, `2681`, `2682`, `115`, `2683`, `2685`, `2686`, `2687`, `2688`, `2689`, `2690`, `2691`, `2692`, `2693`, `2694`, `2695`, `2697`, `2698`, `2335`, `2700`, `2701`, `2702`, `2703`, `2704`, `2705`, `2707`, `2708`, `2709`, `2710`, `2711`, `2712`, `2713`, `152`, `2714`, `2715`, `2716`, `2717`, `2718`, `2719`, `2722`, `2723`, `2724`, `2725`, `2728`, `2730`, `2731`, `2732`, `2733`, `2734`, `2736`, `2737`, `2738`, `2740`, `2741`, `2743`, `2744`, `2745`, `2746`, `2747`, `2749`, `2750`, `2751`, `2753`, `2755`, `2758`, `2759`, `2760`, `2761`, `2763`, `2764`, `2765`, `2766`, `2767`, `2768`, `2769`, `2770`, `2771`, `2772`, `2773`, `2775`, `2776`, `2777`, `2778`, `2779`, `2780`, `2781`, `2782`, `2783`, `2785`, `2786`, `2787`, `2790`, `2791`, `2792`, `2793`, `2794`, `2795`, `2798`, `2799`, `2800`, `2801`, `2802`, `2803`, `2804`, `2807`, `2808`, `2809`, `2810`, `2811`, `2814`, `2815`, `2816`, `2817`, `2818`, `2819`, `2821`, `2822`, `2823`, `2824`, `2825`, `2827`, `2829`, `2830`, `2831`, `2832`, `2833`, `2836`, `2837`, `2838`, `2839`, `2841`, `2842`, `2843`, `2844`, `2845`, `2847`, `2848`, `2849`, `2850`, `2851`, `2852`, `2853`, `2855`, `2857`, `2859`, `2860`, `2864`, `2865`, `2866`, `2867`, `2868`, `2870`, `2871`, `2872`, `2873`, `2875`, `2876`, `2878`, `2879`, `2880`, `2881`, `2882`, `2884`, `2885`, `2889`, `2890`, `2891`, `2892`, `2893`, `2894`, `2895`, `2896`, `2897`, `2898`, `2900`, `2901`, `2902`, `2903`, `2904`, `2905`, `2906`, `2907`, `2908`, `2909`, `2910`, `2911`, `2912`, `2913`, `2914`, `2915`, `2916`, `2917`, `2918`, `2919`, `2921`, `2922`, `2923`, `2925`, `2926`, `2927`, `2929`, `2931`, `2932`, `2934`, `2935`, `2936`, `2937`, `2938`, `2940`, `2941`, `2942`, `2944`, `2945`, `2946`, `2947`, `2948`, `2949`, `2950`, `2951`, `2952`, `2953`, `2954`, `2955`, `2956`, `2957`, `2959`, `2962`, `2963`, `2964`, `2965`, `2966`, `2967`, `2968`, `2969`, `2970`, `2971`, `2972`, `2974`, `2975`, `2976`, `2978`, `2979`, `2980`, `2981`, `2982`, `2983`, `2984`, `2985`, `2987`, `2988`, `2990`, `2991`, `2993`, `2994`, `2995`, `2996`, `2997`, `2998`, `2999`, `3000`, `3004`, `3005`, `3007`, `3008`, `3009`, `3011`, `3012`, `3014`, `3015`, `3016`, `3017`, `3018`, `3019`, `3020`, `3021`, `3022`, `3023`, `3024`, `3025`, `3026`, `3027`, `3028`, `3029`, `3030`, `3031`, `3032`, `3033`, `3034`, `3036`, `3038`, `3039`, `3041`, `3042`, `3043`, `3044`, `3045`, `3046`, `3047`, `3048`, `3049`, `3050`, `3051`, `3052`, `3053`, `3054`, `3055`, `3056`, `3057`, `3059`, `3060`, `3061`, `3063`, `3065`, `3066`, `3067`, `3068`, 
`3070`, `3071`, `3072`, `3073`, `3074`, `3075`, `3076`, `3077`, `3078`, `3079`, `3080`, `3082`, `3083`, `3084`, `3085`, `3086`, `3087`, `3088`, `3089`, `3092`, `3094`, `3095`, `3097`, `3098`, `3101`, `3102`, `3103`, `3105`, `3107`, `3109`, `3110`, `3111`, `3112`, `3114`, `3115`, `3116`, `3117`, `3118`, `3119`, `3120`, `3122`, `3123`, `3124`, `3125`, `3126`, `3127`, `3128`, `3130`, `3131`, `3132`, `3133`, `3134`, `3135`, `3137`, `3138`, `3140`, `3141`, `3143`, `3144`, `3145`, `3146`, `3147`, `3148`, `3150`, `3151`, `3152`, `3153`, `3155`, `3157`, `3159`, `3160`, `3162`, `3163`, `3165`, `3166`, `3168`, `3169`, `3170`, `3172`, `3173`, `3175`, `3176`, `3177`, `3179`, `3181`, `3182`, `3183`, `3184`, `3185`, `3186`, `3187`, `3188`, `3190`, `3192`, `3194`, `3195`, `3198`, `3200`, `3201`, `3203`, `3205`, `3206`, `3207`, `3208`, `3209`, `3210`, `3211`, `3212`, `3213`, `3214`, `3215`, `3216`, `3217`, `3219`, `3220`, `3222`, `3224`, `3225`, `3226`, `3227`, `3230`, `3231`, `3232`, `3234`, `3235`, `3236`, `3239`, `3241`, `3242`, `3243`, `3244`, `3247`, `3249`, `3250`, `3251`, `3253`, `3255`, `3256`, `3257`, `3259`, `3261`, `3262`, `3264`, `3265`, `3266`, `3267`, `3269`, `3270`, `3271`, `3272`, `3273`, `3274`, `3275`, `3276`, `3278`, `3279`, `3280`, `3281`, `3283`, `3284`, `3285`, `3286`, `3288`, `3289`, `3291`, `3292`, `3295`, `3297`, `3298`, `3300`, `3302`, `3304`, `3305`, `3306`, `3307`, `3308`, `3309`, `3310`, `3311`, `3312`, `3313`, `3314`, `3315`, `3316`, `3318`, `3319`, `3320`, `3321`, `3322`, `3323`, `3324`, `3326`, `3327`, `3328`, `3329`, `3330`, `3331`, `3332`, `3333`, `3334`, `3335`, `3336`, `3338`, `3339`, `3340`, `3341`, `3342`, `3343`, `3344`, `3346`, `3348`, `3349`, `3350`, `3351`, `3354`, `3355`, `3356`, `3357`, `3358`, `3359`, `3361`, `3362`, `3363`, `3364`, `3365`, `3366`, `3367`, `3369`, `3370`, `3371`, `3373`, `3374`, `3375`, `3376`, `3378`, `3379`, `3380`, `3381`, `3382`, `3383`, `3384`, `3385`, `3386`, `3387`, `3389`, `1251`, `3390`, `3391`, `3392`, `3393`, `3394`, `3395`, `3396`, `3399`, `3402`, `3403`, `3404`, `3406`, `3407`, `3408`, `3409`, `3410`, `3411`, `3413`, `3414`, `3415`, `3416`, `3417`, `3418`, `3419`, `3420`, `3421`, `3422`, `3423`, `3425`, `3427`, `3428`, `3430`, `3431`, `3432`, `3435`, `3436`, `3437`, `3438`, `3439`, `3440`, `3441`, `3443`, `3444`, `3445`, `3446`, `3447`, `3449`, `3450`, `3451`, `3452`, `3454`, `3455`, `3458`, `3459`, `3460`, `3461`, `3464`, `3466`, `3467`, `3468`, `3469`, `3470`, `3472`, `3473`, `3475`, `3476`, `3478`, `3480`, `3481`, `3484`, `3485`, `3486`, `3487`, `3488`, `3490`, `3491`, `3492`, `3496`, `3498`, `3499`, `3500`, `3501`, `3502`, `3503`, `3504`, `3505`, `3507`, `3508`, `3509`, `3510`, `3513`, `3514`, `3516`, `3518`, `3522`, `3523`, `3524`, `3526`, `3527`, `3529`, `3531`, `3532`, `3534`, `3535`, `3536`, `3537`, `3538`, `3539`, `3541`, `3542`, `3543`, `3544`, `3545`, `3546`, `3547`, `3549`, `3550`, `3551`, `3552`, `3554`, `3556`, `3558`, `3559`, `3560`, `3561`, `3562`, `3563`, `3565`, `3567`, `3568`, `3569`, `3570`, `3571`, `3573`, `3574`, `3576`, `3577`, `3579`, `3580`, `3581`, `3582`, `3584`, `3586`, `3588`, `3589`, `3590`, `3591`, `3592`, `3593`, `3594`, `3595`, `3596`, `3597`, `3598`, `3601`, `3602`, `3604`, `3605`, `3606`, `3608`, `3609`, `3610`, `3611`, `3612`, `3613`, `3614`, `3615`, `3616`, `3617`, `3618`, `3619`, `3620`, `3621`, `3622`, `3623`, `3625`, `3626`, `3627`, `3628`, `3629`, `3630`, `3631`, `3632`, `3633`, `3634`, `3635`, `3636`, `3637`, `3638`, `3639`, `3640`, `3641`, `3642`, `3643`, `3644`, `3645`, `3646`, `3647`, 
`3648`, `3650`, `3651`, `3652`, `3653`, `3654`, `3655`, `3656` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 100.00 | | `TOKEN_P` | 100.00 | | `TOKEN_R` | 100.00 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 92.13 | | `SENTS_P` | 92.08 | | `SENTS_R` | 92.17 | | `TAG_ACC` | 91.20 | | `POS_ACC` | 97.20 | | `MORPH_ACC` | 93.90 | | `DEP_UAS` | 94.14 | | `DEP_LAS` | 90.95 | | `LEMMA_ACC` | 91.74 |
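For reference, a minimal sketch of how one of these pipelines might be loaded and queried with spaCy is shown below. It assumes the pipeline package (here `sk_udv25_slovaksnk_trf`, taken from this record's model id and used purely as an example) has already been installed, together with `spacy-experimental`, which provides the `experimental_*` components; the example sentence is arbitrary.

```python
# Minimal sketch, assuming the pipeline package and spacy-experimental are installed.
import spacy

nlp = spacy.load("sk_udv25_slovaksnk_trf")  # model id from this record; illustrative only
doc = nlp("Toto je krátka veta.")           # any Slovak sentence

for token in doc:
    # pos_ = UPOS, tag_ = fine-grained tag, morph = UD feature bundle,
    # dep_ = dependency relation, lemma_ = lemma from the edit-tree lemmatizer
    print(token.text, token.pos_, token.tag_, str(token.morph), token.dep_, token.lemma_)

# Sentence boundaries: the parser sets them in the default pipeline
# (the senter component is listed but not enabled by default in these cards).
for sent in doc.sents:
    print(sent.text)
```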
{"language": ["sk"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/sk_udv25_slovaksnk_trf
null
[ "spacy", "token-classification", "sk", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sk" ]
TAGS #spacy #token-classification #sk #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Slovak-SNK ### Label Scheme View label scheme (4879 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (4879 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #sk #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (4879 labels for 6 components)", "### Accuracy" ]
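The tagger and morphologizer labels listed in these cards are UD-style strings of `Key=Value` pairs joined by `|` (escaped as `\|` in this dump). A minimal, purely illustrative sketch of splitting such a label into a feature dictionary follows; the `parse_feats` helper name is hypothetical.

```python
# Minimal sketch: split a UD FEATS-style morphologizer label into a dict.
# Labels in this dump escape "|" as "\|"; the helper name is hypothetical.
def parse_feats(label: str) -> dict:
    label = label.replace("\\|", "|")  # undo the dump's escaping
    return dict(
        pair.split("=", 1)             # e.g. "Case=Nom" -> ("Case", "Nom")
        for pair in label.split("|")
        if "=" in pair                 # defensively ignore any piece without a value
    )

print(parse_feats("Case=Nom\\|Gender=Fem\\|Number=Sing\\|POS=NOUN"))
# -> {'Case': 'Nom', 'Gender': 'Fem', 'Number': 'Sing', 'POS': 'NOUN'}
```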
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Serbian-SET | Feature | Description | | --- | --- | | **Name** | `sr_udv25_serbianset_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (2603 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `Agcfpay`, `Agcfpgy`, `Agcfpiy`, `Agcfply`, `Agcfpny`, `Agcfsay`, `Agcfsdy`, `Agcfsgy`, `Agcfsiy`, `Agcfsly`, `Agcfsny`, `Agcmpay`, `Agcmpgy`, `Agcmpiy`, `Agcmpny`, `Agcmsayn`, `Agcmsdy`, `Agcmsgy`, `Agcmsiy`, `Agcmsly`, `Agcmsny`, `Agcnply`, `Agcnpny`, `Agcnsay`, `Agcnsdy`, `Agcnsgy`, `Agcnsiy`, `Agcnsly`, `Agcnsny`, `Agpfpay`, `Agpfpdy`, `Agpfpgy`, `Agpfpiy`, `Agpfply`, `Agpfpny`, `Agpfsay`, `Agpfsdy`, `Agpfsgy`, `Agpfsiy`, `Agpfsly`, `Agpfsny`, `Agpmpay`, `Agpmpdy`, `Agpmpgy`, `Agpmpiy`, `Agpmply`, `Agpmpny`, `Agpmsann`, `Agpmsayn`, `Agpmsayy`, `Agpmsdy`, `Agpmsgn`, `Agpmsgy`, `Agpmsiy`, `Agpmsly`, `Agpmsnn`, `Agpmsny`, `Agpnpay`, `Agpnpdy`, `Agpnpgy`, `Agpnpiy`, `Agpnply`, `Agpnpny`, `Agpnsay`, `Agpnsdy`, `Agpnsgn`, `Agpnsgy`, `Agpnsiy`, `Agpnsly`, `Agpnsny`, `Agsfpay`, `Agsfpgy`, `Agsfpiy`, `Agsfpny`, `Agsfsay`, `Agsfsgy`, `Agsfsiy`, `Agsfsny`, `Agsmpay`, `Agsmpdy`, `Agsmpgy`, `Agsmply`, `Agsmpny`, `Agsmsayn`, `Agsmsayy`, `Agsmsgy`, `Agsmsiy`, `Agsmsly`, `Agsmsny`, `Agsnpay`, `Agsnpgy`, `Agsnpny`, `Agsnsgy`, `Agsnsiy`, `Agsnsny`, `Appfpay`, `Appfpdy`, `Appfpgy`, `Appfpiy`, `Appfply`, `Appfpny`, `Appfsay`, `Appfsgy`, `Appfsiy`, `Appfsly`, `Appfsny`, `Appmpay`, `Appmpgy`, `Appmpiy`, `Appmply`, `Appmpny`, `Appmsann`, `Appmsayn`, `Appmsayy`, `Appmsdy`, `Appmsgy`, `Appmsiy`, `Appmsly`, `Appmsnn`, `Appmsny`, `Appnpay`, `Appnpgy`, `Appnpiy`, `Appnpny`, `Appnsay`, `Appnsgy`, `Appnsiy`, `Appnsly`, `Appnsny`, `Aspfpay`, `Aspfpgy`, `Aspfply`, `Aspfpny`, `Aspfsay`, `Aspfsdy`, `Aspfsgy`, `Aspfsiy`, `Aspfsly`, `Aspfsny`, `Aspmpay`, `Aspmpgy`, `Aspmpny`, `Aspmsann`, `Aspmsayy`, `Aspmsdy`, `Aspmsgy`, `Aspmsiy`, `Aspmsly`, `Aspmsnn`, `Aspnpay`, `Aspnsay`, `Aspnsgy`, `Aspnsly`, `Aspnsny`, `Cc`, `Cs`, `I`, `Mdc`, `Mdm`, `Mdo`, `Mds`, `Mlc`, `Mlc--i`, `Mlcf-a`, `Mlcf-g`, `Mlcf-n`, `Mlcfpa`, `Mlcfpg`, `Mlcfsa`, `Mlcfsd`, `Mlcfsg`, `Mlcfsi`, `Mlcfsl`, `Mlcfsn`, `Mlcm-a`, `Mlcm-n`, `Mlcmpn`, `Mlcmsan`, `Mlcmsay`, `Mlcmsd`, `Mlcmsg`, `Mlcmsi`, `Mlcmsl`, `Mlcmsn`, `Mlcn-n`, `Mlcnsa`, `Mlcnsl`, `Mlcnsn`, `Mlofpa`, `Mlofpd`, `Mlofpg`, `Mlofpi`, `Mlofpl`, `Mlofpn`, `Mlofsa`, `Mlofsd`, `Mlofsg`, `Mlofsi`, `Mlofsl`, `Mlofsn`, `Mlompa`, `Mlompd`, `Mlompg`, `Mlompi`, `Mlompl`, `Mlompn`, `Mlomsan`, `Mlomsay`, `Mlomsd`, `Mlomsg`, `Mlomsi`, `Mlomsl`, `Mlomsn`, `Mlonpa`, `Mlonpg`, `Mlonpl`, `Mlonsa`, `Mlonsg`, `Mlonsi`, `Mlonsl`, `Mlonsn`, `Mls`, `Mlsf-a`, `Mlsf-g`, `Mlsf-i`, `Mlsf-l`, `Mlsf-n`, `Mlsm-a`, `Mlsm-n`, `Ncfpa`, `Ncfpd`, `Ncfpg`, `Ncfpi`, `Ncfpl`, `Ncfpn`, `Ncfsa`, `Ncfsd`, `Ncfsg`, `Ncfsi`, `Ncfsl`, `Ncfsn`, `Ncmpa`, `Ncmpd`, 
`Ncmpg`, `Ncmpi`, `Ncmpl`, `Ncmpn`, `Ncmsan`, `Ncmsay`, `Ncmsd`, `Ncmsg`, `Ncmsi`, `Ncmsl`, `Ncmsn`, `Ncmsv`, `Ncnpa`, `Ncnpd`, `Ncnpg`, `Ncnpi`, `Ncnpl`, `Ncnpn`, `Ncnsa`, `Ncnsd`, `Ncnsg`, `Ncnsi`, `Ncnsl`, `Ncnsn`, `Npfpd`, `Npfpg`, `Npfpn`, `Npfsa`, `Npfsd`, `Npfsg`, `Npfsi`, `Npfsl`, `Npfsn`, `Npmpa`, `Npmpd`, `Npmpg`, `Npmpi`, `Npmpn`, `Npmsan`, `Npmsay`, `Npmsd`, `Npmsg`, `Npmsi`, `Npmsl`, `Npmsn`, `Npnpn`, `Npnsa`, `Npnsd`, `Npnsg`, `Npnsi`, `Npnsl`, `Npnsn`, `Pd-fpa`, `Pd-fpd`, `Pd-fpg`, `Pd-fpi`, `Pd-fpl`, `Pd-fpn`, `Pd-fsa`, `Pd-fsd`, `Pd-fsg`, `Pd-fsi`, `Pd-fsl`, `Pd-fsn`, `Pd-mpa`, `Pd-mpd`, `Pd-mpg`, `Pd-mpl`, `Pd-mpn`, `Pd-msan`, `Pd-msay`, `Pd-msd`, `Pd-msg`, `Pd-msi`, `Pd-msl`, `Pd-msn`, `Pd-npa`, `Pd-npd`, `Pd-npg`, `Pd-npl`, `Pd-npn`, `Pd-nsa`, `Pd-nsd`, `Pd-nsg`, `Pd-nsi`, `Pd-nsl`, `Pd-nsn`, `Pi--sn`, `Pi-fpa`, `Pi-fpd`, `Pi-fpg`, `Pi-fpi`, `Pi-fpl`, `Pi-fpn`, `Pi-fsa`, `Pi-fsd`, `Pi-fsg`, `Pi-fsi`, `Pi-fsl`, `Pi-fsn`, `Pi-mpa`, `Pi-mpd`, `Pi-mpg`, `Pi-mpi`, `Pi-mpl`, `Pi-mpn`, `Pi-msan`, `Pi-msay`, `Pi-msd`, `Pi-msg`, `Pi-msi`, `Pi-msl`, `Pi-msn`, `Pi-npa`, `Pi-npd`, `Pi-npg`, `Pi-npl`, `Pi-npn`, `Pi-nsa`, `Pi-nsg`, `Pi-nsi`, `Pi-nsl`, `Pi-nsn`, `Pi3m-a`, `Pi3m-d`, `Pi3m-g`, `Pi3m-n`, `Pi3n-a`, `Pi3n-g`, `Pi3n-i`, `Pi3n-l`, `Pi3n-n`, `Pp1-pa`, `Pp1-pd`, `Pp1-pg`, `Pp1-pi`, `Pp1-pl`, `Pp1-pn`, `Pp1-sa`, `Pp1-sd`, `Pp1-sn`, `Pp2-pa`, `Pp2-pd`, `Pp2-pl`, `Pp2-pn`, `Pp3-pa`, `Pp3-pd`, `Pp3-pg`, `Pp3-pi`, `Pp3-pl`, `Pp3fpn`, `Pp3fsa`, `Pp3fsd`, `Pp3fsg`, `Pp3fsi`, `Pp3fsl`, `Pp3fsn`, `Pp3mpn`, `Pp3msa`, `Pp3msd`, `Pp3msg`, `Pp3msi`, `Pp3msl`, `Pp3msn`, `Pp3npn`, `Pp3nsa`, `Pp3nsn`, `Pq-fpa`, `Pq-fsl`, `Pq-fsn`, `Pq-mpn`, `Pq-msn`, `Pq-nsn`, `Pq3n-n`, `Ps1fpa`, `Ps1fpd`, `Ps1fpg`, `Ps1fpn`, `Ps1fsa`, `Ps1fsd`, `Ps1fsg`, `Ps1fsl`, `Ps1fsn`, `Ps1mpa`, `Ps1mpd`, `Ps1mpg`, `Ps1mpl`, `Ps1mpn`, `Ps1msan`, `Ps1msd`, `Ps1msg`, `Ps1msn`, `Ps1nsa`, `Ps1nsg`, `Ps1nsl`, `Ps1nsn`, `Ps2fpl`, `Ps2fpn`, `Ps2msan`, `Ps2nsl`, `Ps2nsn`, `Ps3fpa`, `Ps3fpg`, `Ps3fpl`, `Ps3fpn`, `Ps3fsa`, `Ps3fsd`, `Ps3fsg`, `Ps3fsi`, `Ps3fsl`, `Ps3fsn`, `Ps3mpa`, `Ps3mpd`, `Ps3mpg`, `Ps3mpl`, `Ps3mpn`, `Ps3msan`, `Ps3msd`, `Ps3msg`, `Ps3msi`, `Ps3msl`, `Ps3msn`, `Ps3npa`, `Ps3npg`, `Ps3npl`, `Ps3nsa`, `Ps3nsg`, `Ps3nsl`, `Ps3nsn`, `Px--sa`, `Px--sd`, `Px--sg`, `Px--si`, `Px--sl`, `Px-fpa`, `Px-fpg`, `Px-fpi`, `Px-fpl`, `Px-fsa`, `Px-fsd`, `Px-fsg`, `Px-fsi`, `Px-fsl`, `Px-mpa`, `Px-mpd`, `Px-mpg`, `Px-mpi`, `Px-mpl`, `Px-msan`, `Px-msay`, `Px-msd`, `Px-msg`, `Px-msi`, `Px-msl`, `Px-npa`, `Px-npg`, `Px-npl`, `Px-nsa`, `Px-nsg`, `Qo`, `Qq`, `Qz`, `Rgc`, `Rgp`, `Rgs`, `Rr`, `Sa`, `Sd`, `Sg`, `Si`, `Sl`, `Vaa1p`, `Vaa1s`, `Vaa3p`, `Vaa3s`, `Vaf3p`, `Vaf3s`, `Van`, `Vap-pf`, `Vap-pm`, `Vap-pn`, `Vap-sf`, `Vap-sm`, `Vap-sn`, `Var1p`, `Var1s`, `Var2p`, `Var3p`, `Var3s`, `Vma3s`, `Vmf1p`, `Vmf1s`, `Vmf2p`, `Vmf3p`, `Vmf3s`, `Vmm1p`, `Vmm2p`, `Vmn`, `Vmp-pf`, `Vmp-pm`, `Vmp-pn`, `Vmp-sf`, `Vmp-sm`, `Vmp-sn`, `Vmr1p`, `Vmr1s`, `Vmr2p`, `Vmr3p`, `Vmr3s`, `X`, `Xf`, `Y`, `Z` | | **`morphologizer`** | `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, 
`Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|POS=ADP`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=CCONJ`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `NumType=Card\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=X`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `NumType=Ord\|POS=NUM`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=PART\|Polarity=Neg`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|POS=ADP`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, 
`Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `POS=DET`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Degree=Cmp\|POS=ADV`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `POS=PART`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADV\|Tense=Past\|VerbForm=Conv`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, 
`Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=ADV\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Degree=Cmp\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, 
`Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Degree=Pos\|POS=ADV\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Ins\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Pos\|POS=ADV\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=ADV`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `NumType=Ord\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=ADP`, `Degree=Sup\|POS=ADV`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=DET`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `POS=ADV\|Tense=Pres\|VerbForm=Conv`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, 
`Case=Dat\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, 
`Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `NumType=Mult\|POS=NUM`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, 
`Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Degree=Pos\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Degree=Sup\|POS=DET`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, 
`Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, 
`Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, 
`Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Tot`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Ins\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, 
`Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Pos\|POS=DET\|PronType=Dem`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Loc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `POS=SYM`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `POS=INTJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Gen\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=DET`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, 
`Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Ins\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `POS=ADV\|VerbForm=Part`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs` | | **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `det:numgov`, `discourse`, `fixed`, `flat`, `list`, `mark`, `nmod`, `nsubj`, `nummod`, `nummod:gov`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `3`, `5`, `7`, `9`, `10`, `12`, `14`, `16`, `17`, `19`, `21`, `23`, `25`, `27`, `29`, `31`, `33`, `35`, `38`, `40`, `41`, `43`, `45`, `47`, `49`, `51`, `53`, `56`, `58`, `60`, `62`, `64`, `66`, `68`, `70`, `73`, `74`, `77`, `80`, `82`, `84`, `86`, `88`, `90`, `92`, `94`, `96`, `98`, `100`, `104`, `105`, `107`, `109`, `111`, `113`, `116`, `118`, `120`, `121`, `124`, `126`, `128`, `130`, `132`, `134`, `136`, `138`, `141`, `143`, `145`, `147`, `150`, `153`, `155`, `157`, `159`, `160`, `162`, `164`, `166`, `167`, `168`, `170`, `172`, `174`, `175`, `176`, `178`, `180`, `182`, `184`, `188`, `190`, `191`, `194`, `196`, `198`, `199`, `201`, `202`, `205`, `207`, `209`, `211`, `214`, `217`, `219`, `221`, `223`, `225`, `227`, `229`, `231`, `233`, `237`, `239`, `241`, `243`, `245`, `83`, `246`, `247`, `249`, `253`, `255`, `258`, `260`, `262`, `263`, `265`, `269`, `271`, `272`, `274`, `275`, `276`, `277`, `278`, `280`, `282`, `283`, `285`, `287`, `289`, `291`, `292`, `293`, `294`, `295`, `297`, `298`, `299`, `301`, `302`, `304`, `306`, `308`, `310`, `312`, `314`, `315`, `317`, `320`, `321`, `323`, `325`, `327`, `328`, `330`, `332`, `333`, `335`, `337`, `338`, `340`, `341`, `342`, `343`, `346`, `250`, `348`, `349`, `350`, `351`, `353`, `354`, `356`, `358`, `360`, `362`, `364`, `365`, `367`, `369`, `371`, `373`, `375`, `376`, `378`, `380`, `382`, `384`, `385`, 
`386`, `388`, `391`, `395`, `398`, `400`, `402`, `404`, `406`, `409`, `413`, `415`, `419`, `421`, `424`, `426`, `427`, `428`, `429`, `430`, `431`, `432`, `434`, `436`, `438`, `440`, `442`, `444`, `446`, `447`, `449`, `450`, `452`, `454`, `455`, `457`, `459`, `461`, `462`, `463`, `465`, `466`, `468`, `470`, `472`, `474`, `476`, `477`, `478`, `480`, `483`, `485`, `486`, `489`, `491`, `492`, `497`, `498`, `500`, `501`, `502`, `503`, `504`, `507`, `508`, `509`, `510`, `512`, `513`, `515`, `516`, `518`, `519`, `521`, `523`, `524`, `526`, `527`, `529`, `531`, `532`, `533`, `535`, `538`, `540`, `542`, `543`, `545`, `547`, `550`, `552`, `553`, `556`, `557`, `558`, `561`, `562`, `563`, `566`, `567`, `569`, `571`, `572`, `574`, `576`, `578`, `580`, `582`, `583`, `586`, `588`, `590`, `592`, `594`, `596`, `600`, `601`, `603`, `606`, `607`, `609`, `610`, `611`, `613`, `614`, `615`, `616`, `618`, `620`, `623`, `624`, `626`, `627`, `629`, `630`, `632`, `635`, `637`, `639`, `641`, `642`, `643`, `645`, `647`, `648`, `649`, `652`, `654`, `655`, `658`, `660`, `662`, `665`, `667`, `668`, `670`, `672`, `674`, `675`, `676`, `678`, `680`, `682`, `683`, `684`, `685`, `687`, `688`, `690`, `691`, `693`, `694`, `696`, `697`, `699`, `701`, `703`, `705`, `707`, `708`, `709`, `710`, `711`, `713`, `714`, `716`, `717`, `718`, `721`, `723`, `725`, `726`, `730`, `732`, `734`, `735`, `736`, `737`, `739`, `740`, `741`, `742`, `744`, `746`, `747`, `749`, `750`, `752`, `754`, `755`, `756`, `757`, `760`, `761`, `762`, `763`, `765`, `768`, `769`, `771`, `772`, `773`, `774`, `775`, `777`, `780`, `781`, `783`, `784`, `785`, `787`, `789`, `790`, `791`, `792`, `794`, `797`, `798`, `800`, `802`, `803`, `806`, `808`, `811`, `813`, `815`, `817`, `819`, `822`, `825`, `827`, `829`, `831`, `833`, `835`, `836`, `838`, `839`, `842`, `845`, `848`, `850`, `851`, `852`, `853`, `855`, `856`, `858`, `860`, `862`, `864`, `866`, `867`, `868`, `871`, `874`, `875`, `876`, `877`, `878`, `880`, `883`, `884`, `885`, `887`, `889`, `890`, `894`, `895`, `896`, `898`, `899`, `901`, `902`, `903`, `904`, `905`, `908`, `909`, `910`, `911`, `913`, `915`, `916`, `917`, `919`, `920`, `921`, `922`, `923`, `925`, `927`, `928`, `929`, `931`, `932`, `934`, `935`, `937`, `938`, `941`, `943`, `945`, `946`, `947`, `948`, `950`, `951`, `952`, `953`, `954`, `955`, `957`, `959`, `962`, `964`, `965`, `966`, `969`, `971`, `972`, `973`, `975`, `977`, `978`, `980`, `982`, `983`, `985`, `986`, `987`, `989`, `990`, `992`, `994`, `996`, `998`, `999`, `1001`, `1002`, `1005`, `1007`, `1009`, `1012`, `1014`, `1016`, `1018`, `1020`, `1022`, `1025`, `1026`, `1028`, `1029`, `1031`, `1033`, `1035`, `1036`, `374`, `1038`, `1039`, `1040`, `1041`, `1042`, `1043`, `1045`, `1047`, `1050`, `1053`, `1054`, `1055`, `1056`, `1057`, `1059`, `1060`, `1061`, `1063`, `1064`, `1065`, `1066`, `1068`, `1070`, `1072`, `1074`, `1075`, `1077`, `1079`, `1080`, `1082`, `1083`, `1084`, `1087`, `1090`, `1091`, `1092`, `1093`, `1095`, `1096`, `1097`, `1100`, `1101`, `1102`, `1103`, `1104`, `1106`, `1108`, `1110`, `1112`, `1114`, `1115`, `1116`, `1117`, `1118`, `1119`, `1120`, `1122`, `1123`, `1125`, `1126`, `1127`, `1129`, `1130`, `1131`, `1132`, `1133`, `1134`, `1135`, `1136`, `1137`, `1139`, `1141`, `1142`, `1143`, `1144`, `1146`, `1148`, `1150`, `1151`, `1152`, `1153`, `1155`, `1157`, `1158`, `1159`, `1160`, `1162`, `1165`, `1167`, `1168`, `1169`, `1170`, `1172`, `1174`, `1177`, `1179`, `1181`, `1183`, `1184`, `1187`, `1189`, `1190`, `1193`, `1194`, `1195`, `1196`, `1197`, `1199`, `1200`, `1201`, `1202`, 
`1203`, `1205`, `1207`, `1210`, `1211`, `1212`, `1213`, `1215`, `1216`, `1218`, `1219`, `1221`, `1222`, `1223`, `1224`, `1227`, `1228`, `1230`, `1232`, `1235`, `1236`, `1237`, `1239`, `1241`, `1242`, `1244`, `1246`, `1248`, `1250`, `1253`, `1255`, `1256`, `1258`, `1259`, `1260`, `1261`, `1262`, `1263`, `1265`, `1266`, `1267`, `1269`, `1272`, `1275`, `1277`, `1279`, `1281`, `1283`, `1285`, `1287`, `1289`, `1291`, `1292`, `1293`, `1295`, `1296`, `1297`, `1298`, `1300`, `1303`, `1304`, `1306`, `1308`, `1310`, `1311`, `1312`, `1313`, `1315`, `1316`, `1317`, `1318`, `1319`, `1320`, `1321`, `1322`, `1323`, `1325`, `1326`, `1327`, `1329`, `1331`, `1332`, `1333`, `1334`, `1335`, `1336`, `1338`, `1339`, `1340`, `1342`, `1344`, `1346`, `1347`, `1349`, `1350`, `1353`, `1356`, `1357`, `1358`, `1359`, `1360`, `1361`, `1362`, `1363`, `1364`, `1367`, `1368`, `1369`, `1370`, `1371`, `1372`, `1373`, `1374`, `1376`, `1377`, `1379`, `1381`, `1382`, `1384`, `1385`, `1386`, `1387`, `1388`, `1390`, `1391`, `1392`, `1393`, `1395`, `1396`, `1398`, `1399`, `1401`, `1402`, `1404`, `1405`, `1406`, `1408`, `1409`, `1410`, `1412`, `1413`, `1415`, `1417`, `1418`, `1420`, `1421`, `1422`, `1423`, `1424`, `1425`, `1426`, `1428`, `1429`, `1430`, `1431`, `1433`, `1434`, `1435`, `1436`, `1437`, `1438`, `1439`, `1441`, `1442`, `1444`, `1445`, `1447`, `1448`, `1449`, `1450`, `1451`, `1452`, `1453`, `1454`, `1456`, `1458`, `1461`, `1462`, `1463`, `1464`, `1467`, `1469`, `1471`, `1472`, `1474`, `1475`, `1477`, `1479`, `1481`, `1483`, `1484`, `1485`, `1486`, `1487`, `1488`, `1490`, `1492`, `1493`, `1496`, `1497`, `1499`, `1500`, `1502`, `1503`, `1504`, `1505`, `1052`, `1507`, `1508`, `1510`, `1513`, `1515`, `1516`, `1518`, `1519`, `1520`, `1521`, `1523`, `1524`, `1525`, `1526`, `1527`, `1528`, `1529`, `1531`, `1533`, `1534`, `1535`, `1536`, `1537`, `1538`, `1540`, `1541`, `1542`, `1543`, `1544`, `1545`, `1546`, `1547`, `1548`, `1549`, `1550`, `1551`, `1552`, `1553`, `1554`, `1556`, `1557`, `1558`, `1560`, `1561`, `1562`, `1563`, `1564`, `1565`, `1566`, `1567`, `1568`, `1569`, `1571`, `1573`, `1574`, `1576`, `1578`, `1580`, `1581`, `1583`, `1585`, `1586`, `1587`, `1588`, `1589`, `1591`, `1592`, `1593`, `1594`, `1596`, `1597`, `1598`, `1600`, `1602`, `1605`, `1606`, `1607`, `1609`, `1611`, `1612`, `1613`, `1614`, `1616`, `1619`, `1623`, `1624`, `1626`, `1627`, `1628`, `1629`, `1631`, `1633`, `1635`, `1637`, `1638`, `1639`, `1640`, `1641`, `1642`, `1643`, `1645`, `1646`, `1648`, `1649`, `1650`, `1651`, `1652`, `1653`, `1654`, `1655`, `1656`, `1658`, `1659`, `1660`, `1661`, `1662`, `1663`, `1664`, `1665`, `1666`, `1667`, `1669`, `1670`, `1672`, `1673`, `1674`, `1677`, `372`, `1678`, `1680`, `1682`, `1683`, `1684`, `1685`, `1687`, `1689`, `1690`, `1692`, `1694`, `1695`, `1696`, `1697`, `1698`, `1700`, `1702`, `1703`, `1704`, `1706`, `1708`, `1709`, `1710`, `1712`, `1713`, `1715`, `1717`, `1720`, `1722`, `1724`, `1726`, `1728`, `1730`, `1731`, `1732`, `1734`, `1735`, `1737`, `1739`, `1741`, `1743`, `1744`, `1745`, `1746`, `1747`, `1748`, `1751`, `1753`, `1755`, `1757`, `1758`, `1759`, `1760`, `1762`, `1764`, `1766`, `1767`, `1768`, `1770`, `1772`, `1773`, `1776`, `1777`, `1778`, `1779`, `1781`, `1783`, `1784`, `1785`, `1787`, `1789`, `1790`, `1791`, `1793`, `1794`, `1795`, `1799`, `1800`, `1802`, `1803`, `1804`, `1806`, `1807`, `1808`, `1809`, `1812`, `1813`, `1814`, `1816`, `1817`, `1819`, `1820`, `1821`, `1822`, `1823`, `1824`, `1826`, `1828`, `1830`, `1831`, `1832`, `1834`, `1837`, `1838`, `1840`, `1841`, `1842`, `1844`, `1845`, 
`1847`, `1848`, `1850`, `1851`, `1853`, `1854`, `1855`, `1856`, `1859`, `1860`, `1862`, `1864`, `1865`, `1866`, `1868`, `1869`, `1870`, `1871`, `1873`, `1874`, `1875`, `1876`, `1877`, `1878`, `1879`, `1880`, `1881`, `1883`, `1884`, `1885`, `1886`, `1888`, `1889`, `1890`, `1891`, `1892`, `1894`, `1895`, `61`, `1896`, `1898`, `1899`, `1901`, `1902`, `1903`, `1905`, `1907`, `1908`, `1910`, `1912`, `1914`, `1915`, `1916`, `1917`, `1919`, `133`, `1920`, `1921`, `1922`, `1923`, `1924`, `1925`, `1926`, `1928`, `1931`, `1932`, `1933`, `1934`, `1935`, `1936`, `1937`, `1939`, `1940`, `1943`, `1945`, `1946`, `1947`, `1949`, `1950`, `1952`, `1955`, `1956`, `1957`, `1958`, `1959`, `1960`, `1962`, `1964`, `1965`, `1968`, `1969`, `1970`, `1971`, `1972`, `1973`, `1975`, `1977`, `1979`, `1980`, `1982`, `1984`, `1986`, `1989`, `1990`, `1992`, `1993`, `1995`, `1997`, `1999`, `2001`, `2003`, `2005`, `2006`, `2008`, `2009`, `2010`, `2011`, `2012`, `2014`, `2016`, `2017`, `2018`, `2019`, `2021`, `2022`, `2024`, `2026`, `2027`, `2030`, `2032`, `2035`, `2037`, `2039`, `2040`, `2041`, `2042`, `2043`, `2045`, `2047` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.92 | | `TOKEN_P` | 99.91 | | `TOKEN_R` | 99.94 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 98.04 | | `SENTS_P` | 98.31 | | `SENTS_R` | 97.76 | | `TAG_ACC` | 95.86 | | `POS_ACC` | 98.56 | | `MORPH_ACC` | 96.05 | | `DEP_UAS` | 93.72 | | `DEP_LAS` | 90.25 | | `LEMMA_ACC` | 95.94 |
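The accuracy figures above score the individual annotation layers of the `sr_udv25_serbianset_trf` pipeline (fine-grained tags, UPOS, morphological features, dependencies, lemmas). Below is a minimal usage sketch, not taken from the model card itself: it assumes the `sr_udv25_serbianset_trf` package has been installed locally and that the `spacy-experimental` plugin providing the `experimental_*` components is available; the example sentence is purely illustrative. The same pattern applies to the UD_Swedish-Talbanken pipeline documented in the next record.

```python
# Minimal sketch (not from the model card): load the UD_Serbian-SET pipeline
# and inspect the annotation layers scored in the accuracy table above.
# Assumes the sr_udv25_serbianset_trf package and the spacy-experimental
# plugin (for the experimental_* components) are installed locally.
import spacy

nlp = spacy.load("sr_udv25_serbianset_trf")
print(nlp.pipe_names)  # transformer, senter, tagger, morphologizer, parser, lemmatizer components

doc = nlp("Ovo je kratka rečenica za probu.")  # hypothetical example sentence
for token in doc:
    # tag_      -> fine-grained tag     (TAG_ACC)
    # pos_      -> universal POS        (POS_ACC)
    # morph     -> feature bundle       (MORPH_ACC)
    # dep_/head -> dependency parse     (DEP_UAS / DEP_LAS)
    # lemma_    -> edit-tree lemmatizer (LEMMA_ACC)
    print(token.text, token.tag_, token.pos_, token.morph, token.dep_, token.head.text, token.lemma_)
```

Sentence boundaries predicted by the `senter` component (scored by `SENTS_F`/`SENTS_P`/`SENTS_R`) are likewise available via `doc.sents`.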
{"language": ["sr"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/sr_udv25_serbianset_trf
null
[ "spacy", "token-classification", "sr", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sr" ]
TAGS #spacy #token-classification #sr #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Serbian-SET ### Label Scheme View label scheme (2603 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (2603 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #sr #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (2603 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Swedish-Talbanken | Feature | Description | | --- | --- | | **Name** | `sv_udv25_swedishtalbanken_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (1206 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `AB`, `AB\|AN`, `AB\|KOM`, `AB\|POS`, `AB\|SMS`, `AB\|SUV`, `DT\|NEU\|SIN\|DEF`, `DT\|NEU\|SIN\|IND`, `DT\|NEU\|SIN\|IND/DEF`, `DT\|UTR/NEU\|PLU\|DEF`, `DT\|UTR/NEU\|PLU\|IND`, `DT\|UTR/NEU\|PLU\|IND/DEF`, `DT\|UTR/NEU\|SIN/PLU\|IND`, `DT\|UTR/NEU\|SIN\|DEF`, `DT\|UTR/NEU\|SIN\|IND`, `DT\|UTR\|SIN\|DEF`, `DT\|UTR\|SIN\|IND`, `DT\|UTR\|SIN\|IND/DEF`, `HA`, `HD\|NEU\|SIN\|IND`, `HD\|UTR/NEU\|PLU\|IND`, `HD\|UTR\|SIN\|IND`, `HP\|-\|-\|-`, `HP\|NEU\|SIN\|IND`, `HP\|UTR/NEU\|PLU\|IND`, `HP\|UTR\|SIN\|IND`, `HS\|DEF`, `IE`, `IN`, `JJ`, `JJ\|AN`, `JJ\|KOM\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `JJ\|POS\|MAS\|SIN\|DEF\|GEN`, `JJ\|POS\|MAS\|SIN\|DEF\|NOM`, `JJ\|POS\|NEU\|SIN\|IND/DEF\|NOM`, `JJ\|POS\|NEU\|SIN\|IND\|NOM`, `JJ\|POS\|UTR/NEU\|PLU\|IND/DEF\|GEN`, `JJ\|POS\|UTR/NEU\|PLU\|IND/DEF\|NOM`, `JJ\|POS\|UTR/NEU\|PLU\|IND\|NOM`, `JJ\|POS\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `JJ\|POS\|UTR/NEU\|SIN\|DEF\|NOM`, `JJ\|POS\|UTR\|-\|-\|SMS`, `JJ\|POS\|UTR\|SIN\|IND/DEF\|NOM`, `JJ\|POS\|UTR\|SIN\|IND\|GEN`, `JJ\|POS\|UTR\|SIN\|IND\|NOM`, `JJ\|SUV\|MAS\|SIN\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|PLU\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|SIN/PLU\|DEF\|NOM`, `JJ\|SUV\|UTR/NEU\|SIN/PLU\|IND\|NOM`, `KN`, `MAD`, `MID`, `NN`, `NN\|-\|-\|-\|-`, `NN\|AN`, `NN\|NEU\|-\|-\|SMS`, `NN\|NEU\|PLU\|DEF\|GEN`, `NN\|NEU\|PLU\|DEF\|NOM`, `NN\|NEU\|PLU\|IND\|GEN`, `NN\|NEU\|PLU\|IND\|NOM`, `NN\|NEU\|SIN\|DEF\|GEN`, `NN\|NEU\|SIN\|DEF\|NOM`, `NN\|NEU\|SIN\|IND`, `NN\|NEU\|SIN\|IND\|GEN`, `NN\|NEU\|SIN\|IND\|NOM`, `NN\|SMS`, `NN\|UTR\|-\|-\|-`, `NN\|UTR\|-\|-\|SMS`, `NN\|UTR\|PLU\|DEF\|GEN`, `NN\|UTR\|PLU\|DEF\|NOM`, `NN\|UTR\|PLU\|IND\|GEN`, `NN\|UTR\|PLU\|IND\|NOM`, `NN\|UTR\|SIN\|DEF\|GEN`, `NN\|UTR\|SIN\|DEF\|NOM`, `NN\|UTR\|SIN\|IND\|GEN`, `NN\|UTR\|SIN\|IND\|NOM`, `PAD`, `PC\|PRF\|NEU\|SIN\|IND\|NOM`, `PC\|PRF\|UTR/NEU\|PLU\|IND/DEF\|GEN`, `PC\|PRF\|UTR/NEU\|PLU\|IND/DEF\|NOM`, `PC\|PRF\|UTR/NEU\|SIN\|DEF\|NOM`, `PC\|PRF\|UTR\|SIN\|IND\|NOM`, `PC\|PRS\|UTR/NEU\|SIN/PLU\|IND/DEF\|NOM`, `PL`, `PM`, `PM\|GEN`, `PM\|NOM`, `PM\|SMS`, `PN\|MAS\|SIN\|DEF\|SUB/OBJ`, `PN\|NEU\|SIN\|DEF`, `PN\|NEU\|SIN\|DEF\|SUB/OBJ`, `PN\|NEU\|SIN\|IND\|SUB/OBJ`, `PN\|UTR/NEU\|PLU\|DEF\|OBJ`, `PN\|UTR/NEU\|PLU\|DEF\|SUB`, `PN\|UTR/NEU\|PLU\|DEF\|SUB/OBJ`, `PN\|UTR/NEU\|PLU\|IND\|SUB/OBJ`, `PN\|UTR/NEU\|SIN/PLU\|DEF\|OBJ`, `PN\|UTR\|PLU\|DEF\|OBJ`, `PN\|UTR\|PLU\|DEF\|SUB`, `PN\|UTR\|SIN\|DEF\|NOM`, `PN\|UTR\|SIN\|DEF\|OBJ`, `PN\|UTR\|SIN\|DEF\|SUB`, `PN\|UTR\|SIN\|DEF\|SUB/OBJ`, `PN\|UTR\|SIN\|IND\|NOM`, `PN\|UTR\|SIN\|IND\|SUB`, `PN\|UTR\|SIN\|IND\|SUB/OBJ`, `PP`, 
`PS\|NEU\|SIN\|DEF`, `PS\|UTR/NEU\|PLU\|DEF`, `PS\|UTR/NEU\|SIN/PLU\|DEF`, `PS\|UTR\|SIN\|DEF`, `RG\|NEU\|SIN\|IND\|NOM`, `RG\|NOM`, `RG\|SMS`, `RG\|UTR\|SIN\|IND\|NOM`, `RO\|MAS\|SIN\|IND/DEF\|NOM`, `RO\|NOM`, `SN`, `UO`, `VB\|AN`, `VB\|IMP\|AKT`, `VB\|IMP\|SFO`, `VB\|INF\|AKT`, `VB\|INF\|SFO`, `VB\|KON\|PRS\|AKT`, `VB\|KON\|PRT\|AKT`, `VB\|PRS\|AKT`, `VB\|PRS\|SFO`, `VB\|PRT\|AKT`, `VB\|PRT\|SFO`, `VB\|SUP\|AKT`, `VB\|SUP\|SFO` | | **`morphologizer`** | `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=ADP`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `POS=PUNCT`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|POS=ADV`, `POS=SCONJ`, `POS=ADV`, `Case=Nom\|Definite=Ind\|Gender=Com\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PART`, `POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=CCONJ`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PRON\|PronType=Rel`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=VERB\|VerbForm=Sup\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PART\|Polarity=Neg`, `Case=Nom\|Degree=Pos\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Degree=Sup\|POS=ADV`, `Case=Nom\|NumType=Card\|POS=NUM`, `Abbr=Yes\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Sup\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `POS=AUX\|VerbForm=Sup\|Voice=Act`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Rcp`, `POS=VERB\|VerbForm=Sup\|Voice=Pass`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|POS=ADJ`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|POS=DET\|PronType=Ind`, 
`Case=Nom\|Definite=Def\|Number=Sing\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Nom\|POS=ADJ\|Tense=Pres\|VerbForm=Part`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Dem`, `Definite=Def\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=NOUN`, `Case=Nom\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Ind`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Gender=Com\|POS=NOUN`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Case=Acc\|Definite=Def\|POS=PRON\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Prs`, `Case=Nom\|POS=PROPN`, `Case=Nom\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Plur\|POS=PRON\|PronType=Prs`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=ADJ\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Definite=Def\|Gender=Com\|Number=Plur\|POS=PRON\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Int`, `Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|POS=PROPN`, `POS=PROPN`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Neut\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Int`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Neg`, `POS=VERB\|VerbForm=Sup`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Ind\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=ADJ`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=SYM`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Definite=Ind\|Degree=Sup\|POS=ADJ`, 
`Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Neg`, `Mood=Sub\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Degree=Pos\|Gender=Com\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Nom\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Ind\|POS=DET\|PronType=Prs`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Rel`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Def\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Ind`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Abbr=Yes\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Rel`, `NumType=Card\|POS=NUM`, `POS=INTJ`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int`, `Degree=Sup\|POS=ADV\|Polarity=Neg`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Int`, `POS=ADV\|Polarity=Neg`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=ADJ`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|POS=PRON\|PronType=Ind`, `Foreign=Yes\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Dem`, `Abbr=Yes\|Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Act`, `Mood=Sub\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Rel`, `Foreign=Yes\|POS=CCONJ`, `POS=DET\|PronType=Art`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Prs`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Acc\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Degree=Pos\|POS=ADV\|Polarity=Neg`, `Mood=Sub\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=PRON\|PronType=Ind`, `Definite=Ind\|POS=DET\|PronType=Neg`, `Definite=Ind\|Number=Plur\|POS=PRON\|PronType=Neg`, `POS=CCONJ\|Polarity=Neg`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Number=Plur\|POS=PRON\|PronType=Tot`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Imp\|POS=AUX\|VerbForm=Fin\|Voice=Act`, `Foreign=Yes\|POS=ADV`, `Definite=Def\|POS=PRON\|Poss=Yes\|PronType=Rcp`, `Case=Acc\|Definite=Def\|POS=PRON\|Polarity=Neg\|PronType=Ind` | | **`parser`** | `ROOT`, `acl`, `acl:cleft`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `dislocated`, `expl`, `fixed`, `flat:name`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `orphan`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `6`, `8`, `10`, `13`, `15`, `17`, `18`, `20`, `22`, `24`, `27`, `30`, `32`, `34`, `37`, `39`, `41`, `43`, `45`, `47`, `50`, `54`, `56`, `60`, `62`, `64`, `66`, `68`, `70`, `72`, `0`, `73`, `76`, `77`, `79`, `81`, `83`, `85`, `87`, `88`, `90`, `92`, `94`, `97`, `99`, `102`, `104`, `105`, `107`, `108`, `109`, `110`, `111`, `112`, `114`, `116`, `117`, `119`, `120`, `122`, `123`, `125`, `126`, `129`, `130`, `131`, `134`, `139`, `140`, `141`, `143`, `146`, `148`, `149`, `151`, `153`, `155`, `157`, `158`, `160`, `162`, `164`, `166`, `167`, `169`, `173`, `176`, `178`, `179`, `181`, `182`, `183`, `184`, `186`, `189`, `193`, `195`, `197`, `198`, `199`, `203`, `204`, `205`, `206`, `207`, `208`, `209`, `210`, `212`, `215`, `217`, `218`, `219`, `222`, `224`, `225`, `227`, `229`, `232`, `233`, `234`, `236`, `238`, `240`, `241`, `243`, `246`, `248`, `249`, `250`, `252`, `255`, `257`, `260`, `262`, `264`, `265`, `266`, `269`, `271`, `274`, `276`, `278`, `279`, `281`, `282`, `285`, `286`, `288`, `290`, `292`, `293`, `295`, `296`, `298`, `300`, `301`, `302`, `303`, `304`, `305`, `306`, `307`, `309`, `311`, `312`, `315`, `316`, `317`, `319`, `322`, `323`, `324`, `326`, `329`, `331`, `333`, `334`, `335`, `337`, `339`, `341`, `342`, `343`, `345`, `347`, `348`, `350`, `351`, `353`, `354`, `356`, `357`, `359`, `360`, `362`, `363`, `365`, `369`, `372`, `374`, `377`, `378`, `380`, `381`, `383`, `384`, `386`, `388`, `389`, `390`, `392`, `395`, `397`, `398`, `399`, `401`, `402`, `403`, `404`, `405`, `406`, `407`, `408`, `410`, `411`, `412`, `413`, `414`, `415`, `418`, `419`, `420`, `421`, `423`, `424`, `425`, `426`, `428`, `430`, `431`, `432`, `433`, `434`, `436`, `440`, `442`, `444`, `446`, `448`, `449`, `453`, `454`, `457`, `458`, `459`, `460`, `462`, `463`, `464`, `466`, `468`, `469`, `471`, `472`, `474`, `475`, `478`, `479`, `480`, `481`, `482`, `483`, `486`, `487`, `488`, `489`, `490`, `492`, `494`, `495`, `498`, `500`, `501`, `502`, `503`, `504`, `506`, `507`, `508`, `509`, `513`, `514`, `516`, `517`, `519`, `520`, `521`, `522`, `523`, `525`, `526`, `528`, `530`, `534`, `536`, `537`, `538`, `539`, `540`, `543`, `545`, `547`, `549`, `550`, `551`, `552`, `554`, `555`, `557`, `559`, `560`, `562`, `565`, `568`, `571`, `574`, `575`, `576`, `577`, `578`, `582`, `583`, `585`, `586`, `588`, `589`, `591`, `592`, `594`, `596`, `598`, `601`, `602`, `604`, `605`, `606`, `607`, `608`, `609`, `610`, `611`, `612`, `613`, `615`, `616`, `617`, `618`, `620`, `622`, `623`, `624`, `625`, `627`, `628`, `629`, `631`, `633`, `635`, `637`, `638`, `640`, `641`, `644`, `645`, `649`, `650`, `652`, `653`, `655`, `656`, `658`, `660`, `662`, `663`, `664`, `666`, `669`, `671`, 
`672`, `676`, `677`, `680`, `681`, `682`, `685`, `687`, `688`, `690`, `691`, `693`, `694`, `696`, `697`, `698`, `699`, `700`, `702`, `703`, `704`, `706`, `709`, `711`, `712`, `713`, `714`, `715`, `716`, `718`, `719`, `720`, `723`, `724`, `726`, `728`, `730`, `731`, `732`, `734`, `735`, `736`, `737`, `738`, `739`, `740`, `742`, `743`, `745`, `746`, `748`, `750`, `751`, `752`, `753`, `754`, `756`, `757`, `758`, `760`, `762`, `763`, `764`, `765`, `766`, `767`, `768`, `769`, `770`, `771`, `772`, `774`, `776`, `777`, `779`, `780`, `781`, `782`, `783`, `784`, `785`, `787`, `788`, `789`, `790`, `791`, `793`, `794`, `797`, `799`, `801`, `802`, `803`, `806`, `808`, `809`, `810`, `812`, `813`, `815`, `816`, `817`, `819`, `820`, `822`, `824`, `825`, `826`, `828`, `829`, `832`, `833`, `835`, `837`, `839`, `840`, `841`, `842`, `843`, `845`, `846`, `849`, `851`, `854`, `857`, `858`, `861`, `862`, `863`, `865`, `866`, `867`, `868`, `869`, `870`, `871`, `873`, `875`, `876`, `878`, `880`, `883`, `884`, `887`, `888`, `889`, `890`, `891`, `893`, `894`, `897`, `898`, `529`, `900`, `901`, `902`, `903`, `904`, `905`, `906`, `909`, `911`, `913`, `914`, `915`, `916`, `918`, `919`, `920`, `922`, `923`, `925`, `926`, `927`, `928`, `929`, `931`, `932`, `934`, `936`, `938`, `939`, `940`, `941`, `942`, `943`, `944`, `945`, `946`, `947`, `948`, `949`, `950`, `952`, `953`, `954`, `956`, `957`, `958`, `959`, `961`, `962`, `965`, `967`, `968`, `970`, `971`, `972`, `973`, `976`, `977`, `979`, `982`, `983`, `984`, `985`, `986`, `988`, `989`, `990`, `993`, `994`, `996`, `998`, `999`, `1001`, `1002`, `1003`, `1005`, `1006`, `1007`, `1009`, `1010`, `1012`, `1016`, `1018`, `1020`, `1021`, `1023`, `1024`, `1026`, `1027`, `1029`, `1030`, `1031`, `1032`, `1033`, `1034`, `1036`, `1037`, `1038`, `1040`, `1042`, `1044`, `223`, `1045`, `1046`, `1049`, `1052`, `1054`, `1057`, `1058`, `1061`, `1062`, `1063`, `1064`, `1065`, `1067`, `1068`, `1069`, `1070`, `1071`, `1072`, `1074`, `1077`, `1079`, `1080`, `1081`, `1083`, `1084`, `1086`, `1087`, `1088`, `1090`, `1092`, `1093`, `1094`, `1095`, `1096`, `1097`, `1098`, `1099`, `1100`, `1102`, `1105`, `1106`, `1107`, `1109`, `1110`, `1111`, `1112`, `1113`, `1114`, `1115`, `1116`, `1117`, `1118`, `1121`, `1123`, `1126`, `1128`, `1129`, `1130`, `1131`, `1132`, `1133`, `1135`, `1136`, `1137`, `1138`, `1139`, `1141`, `1142`, `1143`, `1144`, `1145`, `1148`, `1149`, `1150`, `1152`, `1154`, `1155`, `1157`, `1158`, `1159`, `1160`, `1162`, `1163`, `1164`, `1166`, `1167`, `1168`, `1170`, `1173`, `1174`, `1176`, `1178`, `1179`, `1180`, `1182`, `1183`, `1184`, `1186`, `1187`, `1188`, `1191`, `1192`, `1193`, `1194`, `1195`, `1196`, `1197`, `1198`, `1199`, `1200`, `1201`, `1203`, `1204`, `1206`, `1207`, `1209`, `1211`, `1212`, `1213`, `1214`, `1215`, `1216`, `1218`, `1219`, `1220`, `1221`, `1224`, `1225`, `1227`, `1228`, `1229`, `1231`, `1232`, `1233`, `1235`, `1237`, `1240`, `1243`, `1246`, `1248`, `1249`, `1251`, `1252`, `1254`, `1257`, `1258`, `1260`, `1263`, `1264`, `1265`, `1267`, `1269`, `1270`, `1272`, `1273`, `1275`, `1276`, `1278`, `1280`, `1281`, `1282`, `1284`, `297`, `1285`, `1287`, `1289`, `1291`, `1292`, `1293`, `1294`, `1295`, `1297`, `1299`, `1301`, `1303`, `1305`, `1308`, `1309`, `1310`, `1312` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 99.95 | | `TOKEN_P` | 99.95 | | `TOKEN_R` | 99.96 | | `TOKEN_ACC` | 99.99 | | `SENTS_F` | 98.02 | | `SENTS_P` | 98.02 | | `SENTS_R` | 98.02 | | `TAG_ACC` | 97.87 | | `POS_ACC` | 98.83 | | `MORPH_ACC` | 97.97 | | `DEP_UAS` | 92.14 | 
| `DEP_LAS` | 89.39 | | `LEMMA_ACC` | 97.37 |
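For label schemes of this size, one hedged way to work with the components programmatically (assuming the `sv_udv25_swedishtalbanken_trf` package and `spacy-experimental` are installed so the experimental components can be created) is to read the labels off the loaded pipeline rather than from the card:

```python
# Minimal sketch: count the labels per trained component.
# Assumes the sv_udv25_swedishtalbanken_trf package and spacy-experimental
# are installed; component names follow the pipeline description above.
import spacy

nlp = spacy.load("sv_udv25_swedishtalbanken_trf")
for name in ("tagger", "morphologizer", "parser"):
    pipe = nlp.get_pipe(name)
    print(f"{name}: {len(pipe.labels)} labels")
```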
{"language": ["sv"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/sv_udv25_swedishtalbanken_trf
null
[ "spacy", "token-classification", "sv", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "sv" ]
TAGS #spacy #token-classification #sv #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Swedish-Talbanken ### Label Scheme View label scheme (1206 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (1206 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #sv #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (1206 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Vietnamese-VTB | Feature | Description | | --- | --- | | **Name** | `vi_udv25_vietnamesevtb_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (81 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `!`, `"`, `,`, `-`, `.`, `...`, `:`, `;`, `?`, `@`, `A`, `C`, `CC`, `E`, `I`, `L`, `LBKT`, `M`, `N`, `NP`, `Nb`, `Nc`, `Np`, `Nu`, `Ny`, `P`, `R`, `RBKT`, `T`, `V`, `VP`, `X`, `Y`, `Z` | | **`morphologizer`** | `POS=NOUN`, `POS=ADP`, `POS=X\|Polarity=Neg`, `POS=VERB`, `POS=ADJ`, `POS=PUNCT`, `POS=X`, `POS=SCONJ`, `NumType=Card\|POS=NUM`, `POS=DET`, `POS=CCONJ`, `POS=PROPN`, `POS=AUX`, `POS=PART`, `POS=INTJ` | | **`parser`** | `ROOT`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `iobj`, `list`, `mark`, `nmod`, `nsubj`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `0` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 87.90 | | `TOKEN_P` | 86.84 | | `TOKEN_R` | 89.00 | | `TOKEN_ACC` | 98.42 | | `SENTS_F` | 94.33 | | `SENTS_P` | 96.23 | | `SENTS_R` | 92.50 | | `TAG_ACC` | 88.05 | | `POS_ACC` | 90.19 | | `MORPH_ACC` | 96.95 | | `DEP_UAS` | 68.08 | | `DEP_LAS` | 60.64 | | `LEMMA_ACC` | 89.35 |
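A minimal usage sketch, assuming the `vi_udv25_vietnamesevtb_trf` package has been installed from this repository and that `spacy-experimental` is available to provide the experimental tokenizer and lemmatizer components:

```python
# Minimal sketch: load the pipeline and inspect its predictions.
# Assumes vi_udv25_vietnamesevtb_trf and spacy-experimental are installed;
# the example sentence is arbitrary.
import spacy

nlp = spacy.load("vi_udv25_vietnamesevtb_trf")
print(nlp.pipe_names)  # should mirror the component list in the card above

doc = nlp("Hà Nội là thủ đô của Việt Nam .")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)
```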
{"language": ["vi"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/vi_udv25_vietnamesevtb_trf
null
[ "spacy", "token-classification", "vi", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "vi" ]
TAGS #spacy #token-classification #vi #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Vietnamese-VTB ### Label Scheme View label scheme (81 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (81 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #vi #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (81 labels for 6 components)", "### Accuracy" ]
token-classification
spacy
UD v2.5 benchmarking pipeline for UD_Old_French-SRCMF | Feature | Description | | --- | --- | | **Name** | `xx_udv25_oldfrenchsrcmf_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) | | **License** | `CC BY-SA 4.0` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (16214 labels for 6 components)</summary> | Component | Labels | | --- | --- | | **`experimental_char_ner_tokenizer`** | `TOKEN` | | **`senter`** | `I`, `S` | | **`tagger`** | `ADJQUA`, `ADJcar`, `ADJind`, `ADJord`, `ADJpos`, `ADJqua`, `ADVgen`, `ADVgen.PROadv`, `ADVgen.PROper`, `ADVing`, `ADVint`, `ADVneg`, `ADVneg.PROper`, `ADVsub`, `CONcoo`, `CONsub`, `CONsub.PROper`, `CONsub_o`, `CONsub_pre`, `DETcar`, `DETdef`, `DETdem`, `DETind`, `DETint`, `DETndf`, `DETord`, `DETpos`, `DETrel`, `DETrel_o`, `ETR`, `INJ`, `NOMcom`, `NOMcom.PROper`, `NOMpro`, `PRE`, `PRE.DETdef`, `PRE.PROdem`, `PRE.PROper`, `PROadv`, `PROcar`, `PROdem`, `PROimp`, `PROind`, `PROint`, `PROint.PROper`, `PROint_adv`, `PROord`, `PROper`, `PROper.PROper`, `PROpos`, `PROrel`, `PROrel.ADVneg`, `PROrel.PROadv`, `PROrel.PROper`, `PROrel_adv`, `RED`, `VERcjg`, `VERinf`, `VERppa`, `VERppe` | | **`morphologizer`** | `POS=CCONJ`, `Definite=Def\|POS=DET\|PronType=Art`, `POS=NOUN`, `POS=PRON\|PronType=Prs`, `POS=VERB\|VerbForm=Fin`, `POS=PROPN`, `POS=PRON\|PronType=Prs,Rel`, `POS=ADV`, `POS=ADP`, `POS=ADV\|PronType=Dem`, `POS=PRON\|PronType=Dem`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=AUX\|VerbForm=Fin`, `POS=DET\|PronType=Int`, `POS=ADJ`, `POS=PRON\|PronType=Ind`, `POS=DET\|PronType=Ind`, `Morph=VPar\|POS=ADJ`, `POS=DET\|Poss=Yes`, `POS=ADV\|Polarity=Neg`, `Definite=Def\|POS=ADP\|PronType=Art`, `POS=PRON\|PronType=Int`, `POS=SCONJ`, `POS=VERB\|VerbForm=Inf`, `NumType=Card\|POS=PRON`, `POS=PRON`, `NumType=Card\|POS=DET`, `POS=PRON\|Polarity=Neg\|PronType=Prs`, `POS=ADJ\|Poss=Yes`, `POS=PRON\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|POS=DET\|PronType=Art`, `POS=DET\|PronType=Dem`, `POS=AUX\|VerbForm=Inf`, `POS=ADJ\|PronType=Ind`, `Morph=VPar\|POS=NOUN`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Morph=VPar\|POS=PROPN`, `Morph=VInf\|POS=NOUN`, `NumType=Ord\|POS=PRON`, `POS=INTJ`, `POS=SCONJ\|PronType=Prs`, `Morph=VFin\|POS=NOUN`, `POS=DET\|PronType=Rel`, `NumType=Card\|POS=ADJ`, `POS=ADJ\|PronType=Ord`, `Morph=VFin\|POS=ADV`, `Morph=VFin\|POS=PROPN`, `POS=DET`, `Morph=VPar\|POS=ADP`, `Morph=VPar\|POS=ADV`, `NumType=Ord\|POS=DET`, `Morph=VFin\|POS=ADP`, `Morph=VFin\|POS=CCONJ`, `Morph=VInf\|POS=ADJ`, `POS=ADP\|PronType=Dem`, `POS=ADV\|Polarity=Int`, `Morph=VFin\|POS=INTJ` | | **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `case:det`, `cc`, `cc:nc`, `ccomp`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `dislocated`, `expl`, `flat`, `iobj`, `mark`, `mark:advmod`, `mark:obj`, `mark:obl`, `nmod`, `nsubj`, `nsubj:obj`, `nummod`, `obj`, `obj:advmod`, `obl`, `obl:advmod`, `parataxis`, `vocative`, `xcomp` | | **`experimental_edit_tree_lemmatizer`** | `0`, `1`, 
`2`, `3`, `4`, `5`, `6`, `7`, `8`, `9`, `10`, `11`, `12`, `13`, `14`, `15`, `16`, `17`, `18`, `19`, `20`, `21`, `22`, `23`, `24`, `25`, `26`, `27`, `28`, `29`, `30`, `31`, `32`, `33`, `34`, `35`, `36`, `37`, `38`, `39`, `40`, `41`, `42`, `43`, `44`, `45`, `46`, `47`, `48`, `49`, `50`, `51`, `52`, `53`, `54`, `55`, `56`, `57`, `58`, `59`, `60`, `61`, `62`, `63`, `64`, `65`, `66`, `67`, `68`, `69`, `70`, `71`, `72`, `73`, `74`, `75`, `76`, `77`, `78`, `79`, `80`, `81`, `82`, `83`, `84`, `85`, `86`, `87`, `88`, `89`, `90`, `91`, `92`, `93`, `94`, `95`, `96`, `97`, `98`, `99`, `100`, `101`, `102`, `103`, `104`, `105`, `106`, `107`, `108`, `109`, `110`, `111`, `112`, `113`, `114`, `115`, `116`, `117`, `118`, `119`, `120`, `121`, `122`, `123`, `124`, `125`, `126`, `127`, `128`, `129`, `130`, `131`, `132`, `133`, `134`, `135`, `136`, `137`, `138`, `139`, `140`, `141`, `142`, `143`, `144`, `145`, `146`, `147`, `148`, `149`, `150`, `151`, `152`, `153`, `154`, `155`, `156`, `157`, `158`, `159`, `160`, `161`, `162`, `163`, `164`, `165`, `166`, `167`, `168`, `169`, `170`, `171`, `172`, `173`, `174`, `175`, `176`, `177`, `178`, `179`, `180`, `181`, `182`, `183`, `184`, `185`, `186`, `187`, `188`, `189`, `190`, `191`, `192`, `193`, `194`, `195`, `196`, `197`, `198`, `199`, `200`, `201`, `202`, `203`, `204`, `205`, `206`, `207`, `208`, `209`, `210`, `211`, `212`, `213`, `214`, `215`, `216`, `217`, `218`, `219`, `220`, `221`, `222`, `223`, `224`, `225`, `226`, `227`, `228`, `229`, `230`, `231`, `232`, `233`, `234`, `235`, `236`, `237`, `238`, `239`, `240`, `241`, `242`, `243`, `244`, `245`, `246`, `247`, `248`, `249`, `250`, `251`, `252`, `253`, `254`, `255`, `256`, `257`, `258`, `259`, `260`, `261`, `262`, `263`, `264`, `265`, `266`, `267`, `268`, `269`, `270`, `271`, `272`, `273`, `274`, `275`, `276`, `277`, `278`, `279`, `280`, `281`, `282`, `283`, `284`, `285`, `286`, `287`, `288`, `289`, `290`, `291`, `292`, `293`, `294`, `295`, `296`, `297`, `298`, `299`, `300`, `301`, `302`, `303`, `304`, `305`, `306`, `307`, `308`, `309`, `310`, `311`, `312`, `313`, `314`, `315`, `316`, `317`, `318`, `319`, `320`, `321`, `322`, `323`, `324`, `325`, `326`, `327`, `328`, `329`, `330`, `331`, `332`, `333`, `334`, `335`, `336`, `337`, `338`, `339`, `340`, `341`, `342`, `343`, `344`, `345`, `346`, `347`, `348`, `349`, `350`, `351`, `352`, `353`, `354`, `355`, `356`, `357`, `358`, `359`, `360`, `361`, `362`, `363`, `364`, `365`, `366`, `367`, `368`, `369`, `370`, `371`, `372`, `373`, `374`, `375`, `376`, `377`, `378`, `379`, `380`, `381`, `382`, `383`, `384`, `385`, `386`, `387`, `388`, `389`, `390`, `391`, `392`, `393`, `394`, `395`, `396`, `397`, `398`, `399`, `400`, `401`, `402`, `403`, `404`, `405`, `406`, `407`, `408`, `409`, `410`, `411`, `412`, `413`, `414`, `415`, `416`, `417`, `418`, `419`, `420`, `421`, `422`, `423`, `424`, `425`, `426`, `427`, `428`, `429`, `430`, `431`, `432`, `433`, `434`, `435`, `436`, `437`, `438`, `439`, `440`, `441`, `442`, `443`, `444`, `445`, `446`, `447`, `448`, `449`, `450`, `451`, `452`, `453`, `454`, `455`, `456`, `457`, `458`, `459`, `460`, `461`, `462`, `463`, `464`, `465`, `466`, `467`, `468`, `469`, `470`, `471`, `472`, `473`, `474`, `475`, `476`, `477`, `478`, `479`, `480`, `481`, `482`, `483`, `484`, `485`, `486`, `487`, `488`, `489`, `490`, `491`, `492`, `493`, `494`, `495`, `496`, `497`, `498`, `499`, `500`, `501`, `502`, `503`, `504`, `505`, `506`, `507`, `508`, `509`, `510`, `511`, `512`, `513`, `514`, `515`, `516`, `517`, `518`, `519`, `520`, `521`, `522`, `523`, `524`, 
`525`, `526`, `527`, `528`, `529`, `530`, `531`, `532`, `533`, `534`, `535`, `536`, `537`, `538`, `539`, `540`, `541`, `542`, `543`, `544`, `545`, `546`, `547`, `548`, `549`, `550`, `551`, `552`, `553`, `554`, `555`, `556`, `557`, `558`, `559`, `560`, `561`, `562`, `563`, `564`, `565`, `566`, `567`, `568`, `569`, `570`, `571`, `572`, `573`, `574`, `575`, `576`, `577`, `578`, `579`, `580`, `581`, `582`, `583`, `584`, `585`, `586`, `587`, `588`, `589`, `590`, `591`, `592`, `593`, `594`, `595`, `596`, `597`, `598`, `599`, `600`, `601`, `602`, `603`, `604`, `605`, `606`, `607`, `608`, `609`, `610`, `611`, `612`, `613`, `614`, `615`, `616`, `617`, `618`, `619`, `620`, `621`, `622`, `623`, `624`, `625`, `626`, `627`, `628`, `629`, `630`, `631`, `632`, `633`, `634`, `635`, `636`, `637`, `638`, `639`, `640`, `641`, `642`, `643`, `644`, `645`, `646`, `647`, `648`, `649`, `650`, `651`, `652`, `653`, `654`, `655`, `656`, `657`, `658`, `659`, `660`, `661`, `662`, `663`, `664`, `665`, `666`, `667`, `668`, `669`, `670`, `671`, `672`, `673`, `674`, `675`, `676`, `677`, `678`, `679`, `680`, `681`, `682`, `683`, `684`, `685`, `686`, `687`, `688`, `689`, `690`, `691`, `692`, `693`, `694`, `695`, `696`, `697`, `698`, `699`, `700`, `701`, `702`, `703`, `704`, `705`, `706`, `707`, `708`, `709`, `710`, `711`, `712`, `713`, `714`, `715`, `716`, `717`, `718`, `719`, `720`, `721`, `722`, `723`, `724`, `725`, `726`, `727`, `728`, `729`, `730`, `731`, `732`, `733`, `734`, `735`, `736`, `737`, `738`, `739`, `740`, `741`, `742`, `743`, `744`, `745`, `746`, `747`, `748`, `749`, `750`, `751`, `752`, `753`, `754`, `755`, `756`, `757`, `758`, `759`, `760`, `761`, `762`, `763`, `764`, `765`, `766`, `767`, `768`, `769`, `770`, `771`, `772`, `773`, `774`, `775`, `776`, `777`, `778`, `779`, `780`, `781`, `782`, `783`, `784`, `785`, `786`, `787`, `788`, `789`, `790`, `791`, `792`, `793`, `794`, `795`, `796`, `797`, `798`, `799`, `800`, `801`, `802`, `803`, `804`, `805`, `806`, `807`, `808`, `809`, `810`, `811`, `812`, `813`, `814`, `815`, `816`, `817`, `818`, `819`, `820`, `821`, `822`, `823`, `824`, `825`, `826`, `827`, `828`, `829`, `830`, `831`, `832`, `833`, `834`, `835`, `836`, `837`, `838`, `839`, `840`, `841`, `842`, `843`, `844`, `845`, `846`, `847`, `848`, `849`, `850`, `851`, `852`, `853`, `854`, `855`, `856`, `857`, `858`, `859`, `860`, `861`, `862`, `863`, `864`, `865`, `866`, `867`, `868`, `869`, `870`, `871`, `872`, `873`, `874`, `875`, `876`, `877`, `878`, `879`, `880`, `881`, `882`, `883`, `884`, `885`, `886`, `887`, `888`, `889`, `890`, `891`, `892`, `893`, `894`, `895`, `896`, `897`, `898`, `899`, `900`, `901`, `902`, `903`, `904`, `905`, `906`, `907`, `908`, `909`, `910`, `911`, `912`, `913`, `914`, `915`, `916`, `917`, `918`, `919`, `920`, `921`, `922`, `923`, `924`, `925`, `926`, `927`, `928`, `929`, `930`, `931`, `932`, `933`, `934`, `935`, `936`, `937`, `938`, `939`, `940`, `941`, `942`, `943`, `944`, `945`, `946`, `947`, `948`, `949`, `950`, `951`, `952`, `953`, `954`, `955`, `956`, `957`, `958`, `959`, `960`, `961`, `962`, `963`, `964`, `965`, `966`, `967`, `968`, `969`, `970`, `971`, `972`, `973`, `974`, `975`, `976`, `977`, `978`, `979`, `980`, `981`, `982`, `983`, `984`, `985`, `986`, `987`, `988`, `989`, `990`, `991`, `992`, `993`, `994`, `995`, `996`, `997`, `998`, `999`, `1000`, `1001`, `1002`, `1003`, `1004`, `1005`, `1006`, `1007`, `1008`, `1009`, `1010`, `1011`, `1012`, `1013`, `1014`, `1015`, `1016`, `1017`, `1018`, `1019`, `1020`, `1021`, `1022`, `1023`, `1024`, `1025`, `1026`, `1027`, 
`1028`, `1029`, `1030`, `1031`, `1032`, `1033`, `1034`, `1035`, `1036`, `1037`, `1038`, `1039`, `1040`, `1041`, `1042`, `1043`, `1044`, `1045`, `1046`, `1047`, `1048`, `1049`, `1050`, `1051`, `1052`, `1053`, `1054`, `1055`, `1056`, `1057`, `1058`, `1059`, `1060`, `1061`, `1062`, `1063`, `1064`, `1065`, `1066`, `1067`, `1068`, `1069`, `1070`, `1071`, `1072`, `1073`, `1074`, `1075`, `1076`, `1077`, `1078`, `1079`, `1080`, `1081`, `1082`, `1083`, `1084`, `1085`, `1086`, `1087`, `1088`, `1089`, `1090`, `1091`, `1092`, `1093`, `1094`, `1095`, `1096`, `1097`, `1098`, `1099`, `1100`, `1101`, `1102`, `1103`, `1104`, `1105`, `1106`, `1107`, `1108`, `1109`, `1110`, `1111`, `1112`, `1113`, `1114`, `1115`, `1116`, `1117`, `1118`, `1119`, `1120`, `1121`, `1122`, `1123`, `1124`, `1125`, `1126`, `1127`, `1128`, `1129`, `1130`, `1131`, `1132`, `1133`, `1134`, `1135`, `1136`, `1137`, `1138`, `1139`, `1140`, `1141`, `1142`, `1143`, `1144`, `1145`, `1146`, `1147`, `1148`, `1149`, `1150`, `1151`, `1152`, `1153`, `1154`, `1155`, `1156`, `1157`, `1158`, `1159`, `1160`, `1161`, `1162`, `1163`, `1164`, `1165`, `1166`, `1167`, `1168`, `1169`, `1170`, `1171`, `1172`, `1173`, `1174`, `1175`, `1176`, `1177`, `1178`, `1179`, `1180`, `1181`, `1182`, `1183`, `1184`, `1185`, `1186`, `1187`, `1188`, `1189`, `1190`, `1191`, `1192`, `1193`, `1194`, `1195`, `1196`, `1197`, `1198`, `1199`, `1200`, `1201`, `1202`, `1203`, `1204`, `1205`, `1206`, `1207`, `1208`, `1209`, `1210`, `1211`, `1212`, `1213`, `1214`, `1215`, `1216`, `1217`, `1218`, `1219`, `1220`, `1221`, `1222`, `1223`, `1224`, `1225`, `1226`, `1227`, `1228`, `1229`, `1230`, `1231`, `1232`, `1233`, `1234`, `1235`, `1236`, `1237`, `1238`, `1239`, `1240`, `1241`, `1242`, `1243`, `1244`, `1245`, `1246`, `1247`, `1248`, `1249`, `1250`, `1251`, `1252`, `1253`, `1254`, `1255`, `1256`, `1257`, `1258`, `1259`, `1260`, `1261`, `1262`, `1263`, `1264`, `1265`, `1266`, `1267`, `1268`, `1269`, `1270`, `1271`, `1272`, `1273`, `1274`, `1275`, `1276`, `1277`, `1278`, `1279`, `1280`, `1281`, `1282`, `1283`, `1284`, `1285`, `1286`, `1287`, `1288`, `1289`, `1290`, `1291`, `1292`, `1293`, `1294`, `1295`, `1296`, `1297`, `1298`, `1299`, `1300`, `1301`, `1302`, `1303`, `1304`, `1305`, `1306`, `1307`, `1308`, `1309`, `1310`, `1311`, `1312`, `1313`, `1314`, `1315`, `1316`, `1317`, `1318`, `1319`, `1320`, `1321`, `1322`, `1323`, `1324`, `1325`, `1326`, `1327`, `1328`, `1329`, `1330`, `1331`, `1332`, `1333`, `1334`, `1335`, `1336`, `1337`, `1338`, `1339`, `1340`, `1341`, `1342`, `1343`, `1344`, `1345`, `1346`, `1347`, `1348`, `1349`, `1350`, `1351`, `1352`, `1353`, `1354`, `1355`, `1356`, `1357`, `1358`, `1359`, `1360`, `1361`, `1362`, `1363`, `1364`, `1365`, `1366`, `1367`, `1368`, `1369`, `1370`, `1371`, `1372`, `1373`, `1374`, `1375`, `1376`, `1377`, `1378`, `1379`, `1380`, `1381`, `1382`, `1383`, `1384`, `1385`, `1386`, `1387`, `1388`, `1389`, `1390`, `1391`, `1392`, `1393`, `1394`, `1395`, `1396`, `1397`, `1398`, `1399`, `1400`, `1401`, `1402`, `1403`, `1404`, `1405`, `1406`, `1407`, `1408`, `1409`, `1410`, `1411`, `1412`, `1413`, `1414`, `1415`, `1416`, `1417`, `1418`, `1419`, `1420`, `1421`, `1422`, `1423`, `1424`, `1425`, `1426`, `1427`, `1428`, `1429`, `1430`, `1431`, `1432`, `1433`, `1434`, `1435`, `1436`, `1437`, `1438`, `1439`, `1440`, `1441`, `1442`, `1443`, `1444`, `1445`, `1446`, `1447`, `1448`, `1449`, `1450`, `1451`, `1452`, `1453`, `1454`, `1455`, `1456`, `1457`, `1458`, `1459`, `1460`, `1461`, `1462`, `1463`, `1464`, `1465`, `1466`, `1467`, `1468`, `1469`, `1470`, `1471`, 
`1472`, `1473`, `1474`, `1475`, `1476`, `1477`, `1478`, `1479`, `1480`, `1481`, `1482`, `1483`, `1484`, `1485`, `1486`, `1487`, `1488`, `1489`, `1490`, `1491`, `1492`, `1493`, `1494`, `1495`, `1496`, `1497`, `1498`, `1499`, `1500`, `1501`, `1502`, `1503`, `1504`, `1505`, `1506`, `1507`, `1508`, `1509`, `1510`, `1511`, `1512`, `1513`, `1514`, `1515`, `1516`, `1517`, `1518`, `1519`, `1520`, `1521`, `1522`, `1523`, `1524`, `1525`, `1526`, `1527`, `1528`, `1529`, `1530`, `1531`, `1532`, `1533`, `1534`, `1535`, `1536`, `1537`, `1538`, `1539`, `1540`, `1541`, `1542`, `1543`, `1544`, `1545`, `1546`, `1547`, `1548`, `1549`, `1550`, `1551`, `1552`, `1553`, `1554`, `1555`, `1556`, `1557`, `1558`, `1559`, `1560`, `1561`, `1562`, `1563`, `1564`, `1565`, `1566`, `1567`, `1568`, `1569`, `1570`, `1571`, `1572`, `1573`, `1574`, `1575`, `1576`, `1577`, `1578`, `1579`, `1580`, `1581`, `1582`, `1583`, `1584`, `1585`, `1586`, `1587`, `1588`, `1589`, `1590`, `1591`, `1592`, `1593`, `1594`, `1595`, `1596`, `1597`, `1598`, `1599`, `1600`, `1601`, `1602`, `1603`, `1604`, `1605`, `1606`, `1607`, `1608`, `1609`, `1610`, `1611`, `1612`, `1613`, `1614`, `1615`, `1616`, `1617`, `1618`, `1619`, `1620`, `1621`, `1622`, `1623`, `1624`, `1625`, `1626`, `1627`, `1628`, `1629`, `1630`, `1631`, `1632`, `1633`, `1634`, `1635`, `1636`, `1637`, `1638`, `1639`, `1640`, `1641`, `1642`, `1643`, `1644`, `1645`, `1646`, `1647`, `1648`, `1649`, `1650`, `1651`, `1652`, `1653`, `1654`, `1655`, `1656`, `1657`, `1658`, `1659`, `1660`, `1661`, `1662`, `1663`, `1664`, `1665`, `1666`, `1667`, `1668`, `1669`, `1670`, `1671`, `1672`, `1673`, `1674`, `1675`, `1676`, `1677`, `1678`, `1679`, `1680`, `1681`, `1682`, `1683`, `1684`, `1685`, `1686`, `1687`, `1688`, `1689`, `1690`, `1691`, `1692`, `1693`, `1694`, `1695`, `1696`, `1697`, `1698`, `1699`, `1700`, `1701`, `1702`, `1703`, `1704`, `1705`, `1706`, `1707`, `1708`, `1709`, `1710`, `1711`, `1712`, `1713`, `1714`, `1715`, `1716`, `1717`, `1718`, `1719`, `1720`, `1721`, `1722`, `1723`, `1724`, `1725`, `1726`, `1727`, `1728`, `1729`, `1730`, `1731`, `1732`, `1733`, `1734`, `1735`, `1736`, `1737`, `1738`, `1739`, `1740`, `1741`, `1742`, `1743`, `1744`, `1745`, `1746`, `1747`, `1748`, `1749`, `1750`, `1751`, `1752`, `1753`, `1754`, `1755`, `1756`, `1757`, `1758`, `1759`, `1760`, `1761`, `1762`, `1763`, `1764`, `1765`, `1766`, `1767`, `1768`, `1769`, `1770`, `1771`, `1772`, `1773`, `1774`, `1775`, `1776`, `1777`, `1778`, `1779`, `1780`, `1781`, `1782`, `1783`, `1784`, `1785`, `1786`, `1787`, `1788`, `1789`, `1790`, `1791`, `1792`, `1793`, `1794`, `1795`, `1796`, `1797`, `1798`, `1799`, `1800`, `1801`, `1802`, `1803`, `1804`, `1805`, `1806`, `1807`, `1808`, `1809`, `1810`, `1811`, `1812`, `1813`, `1814`, `1815`, `1816`, `1817`, `1818`, `1819`, `1820`, `1821`, `1822`, `1823`, `1824`, `1825`, `1826`, `1827`, `1828`, `1829`, `1830`, `1831`, `1832`, `1833`, `1834`, `1835`, `1836`, `1837`, `1838`, `1839`, `1840`, `1841`, `1842`, `1843`, `1844`, `1845`, `1846`, `1847`, `1848`, `1849`, `1850`, `1851`, `1852`, `1853`, `1854`, `1855`, `1856`, `1857`, `1858`, `1859`, `1860`, `1861`, `1862`, `1863`, `1864`, `1865`, `1866`, `1867`, `1868`, `1869`, `1870`, `1871`, `1872`, `1873`, `1874`, `1875`, `1876`, `1877`, `1878`, `1879`, `1880`, `1881`, `1882`, `1883`, `1884`, `1885`, `1886`, `1887`, `1888`, `1889`, `1890`, `1891`, `1892`, `1893`, `1894`, `1895`, `1896`, `1897`, `1898`, `1899`, `1900`, `1901`, `1902`, `1903`, `1904`, `1905`, `1906`, `1907`, `1908`, `1909`, `1910`, `1911`, `1912`, `1913`, `1914`, `1915`, 
`1916`, `1917`, `1918`, `1919`, `1920`, `1921`, `1922`, `1923`, `1924`, `1925`, `1926`, `1927`, `1928`, `1929`, `1930`, `1931`, `1932`, `1933`, `1934`, `1935`, `1936`, `1937`, `1938`, `1939`, `1940`, `1941`, `1942`, `1943`, `1944`, `1945`, `1946`, `1947`, `1948`, `1949`, `1950`, `1951`, `1952`, `1953`, `1954`, `1955`, `1956`, `1957`, `1958`, `1959`, `1960`, `1961`, `1962`, `1963`, `1964`, `1965`, `1966`, `1967`, `1968`, `1969`, `1970`, `1971`, `1972`, `1973`, `1974`, `1975`, `1976`, `1977`, `1978`, `1979`, `1980`, `1981`, `1982`, `1983`, `1984`, `1985`, `1986`, `1987`, `1988`, `1989`, `1990`, `1991`, `1992`, `1993`, `1994`, `1995`, `1996`, `1997`, `1998`, `1999`, `2000`, `2001`, `2002`, `2003`, `2004`, `2005`, `2006`, `2007`, `2008`, `2009`, `2010`, `2011`, `2012`, `2013`, `2014`, `2015`, `2016`, `2017`, `2018`, `2019`, `2020`, `2021`, `2022`, `2023`, `2024`, `2025`, `2026`, `2027`, `2028`, `2029`, `2030`, `2031`, `2032`, `2033`, `2034`, `2035`, `2036`, `2037`, `2038`, `2039`, `2040`, `2041`, `2042`, `2043`, `2044`, `2045`, `2046`, `2047`, `2048`, `2049`, `2050`, `2051`, `2052`, `2053`, `2054`, `2055`, `2056`, `2057`, `2058`, `2059`, `2060`, `2061`, `2062`, `2063`, `2064`, `2065`, `2066`, `2067`, `2068`, `2069`, `2070`, `2071`, `2072`, `2073`, `2074`, `2075`, `2076`, `2077`, `2078`, `2079`, `2080`, `2081`, `2082`, `2083`, `2084`, `2085`, `2086`, `2087`, `2088`, `2089`, `2090`, `2091`, `2092`, `2093`, `2094`, `2095`, `2096`, `2097`, `2098`, `2099`, `2100`, `2101`, `2102`, `2103`, `2104`, `2105`, `2106`, `2107`, `2108`, `2109`, `2110`, `2111`, `2112`, `2113`, `2114`, `2115`, `2116`, `2117`, `2118`, `2119`, `2120`, `2121`, `2122`, `2123`, `2124`, `2125`, `2126`, `2127`, `2128`, `2129`, `2130`, `2131`, `2132`, `2133`, `2134`, `2135`, `2136`, `2137`, `2138`, `2139`, `2140`, `2141`, `2142`, `2143`, `2144`, `2145`, `2146`, `2147`, `2148`, `2149`, `2150`, `2151`, `2152`, `2153`, `2154`, `2155`, `2156`, `2157`, `2158`, `2159`, `2160`, `2161`, `2162`, `2163`, `2164`, `2165`, `2166`, `2167`, `2168`, `2169`, `2170`, `2171`, `2172`, `2173`, `2174`, `2175`, `2176`, `2177`, `2178`, `2179`, `2180`, `2181`, `2182`, `2183`, `2184`, `2185`, `2186`, `2187`, `2188`, `2189`, `2190`, `2191`, `2192`, `2193`, `2194`, `2195`, `2196`, `2197`, `2198`, `2199`, `2200`, `2201`, `2202`, `2203`, `2204`, `2205`, `2206`, `2207`, `2208`, `2209`, `2210`, `2211`, `2212`, `2213`, `2214`, `2215`, `2216`, `2217`, `2218`, `2219`, `2220`, `2221`, `2222`, `2223`, `2224`, `2225`, `2226`, `2227`, `2228`, `2229`, `2230`, `2231`, `2232`, `2233`, `2234`, `2235`, `2236`, `2237`, `2238`, `2239`, `2240`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2247`, `2248`, `2249`, `2250`, `2251`, `2252`, `2253`, `2254`, `2255`, `2256`, `2257`, `2258`, `2259`, `2260`, `2261`, `2262`, `2263`, `2264`, `2265`, `2266`, `2267`, `2268`, `2269`, `2270`, `2271`, `2272`, `2273`, `2274`, `2275`, `2276`, `2277`, `2278`, `2279`, `2280`, `2281`, `2282`, `2283`, `2284`, `2285`, `2286`, `2287`, `2288`, `2289`, `2290`, `2291`, `2292`, `2293`, `2294`, `2295`, `2296`, `2297`, `2298`, `2299`, `2300`, `2301`, `2302`, `2303`, `2304`, `2305`, `2306`, `2307`, `2308`, `2309`, `2310`, `2311`, `2312`, `2313`, `2314`, `2315`, `2316`, `2317`, `2318`, `2319`, `2320`, `2321`, `2322`, `2323`, `2324`, `2325`, `2326`, `2327`, `2328`, `2329`, `2330`, `2331`, `2332`, `2333`, `2334`, `2335`, `2336`, `2337`, `2338`, `2339`, `2340`, `2341`, `2342`, `2343`, `2344`, `2345`, `2346`, `2347`, `2348`, `2349`, `2350`, `2351`, `2352`, `2353`, `2354`, `2355`, `2356`, `2357`, `2358`, `2359`, 
`2360`, `2361`, `2362`, `2363`, `2364`, `2365`, `2366`, `2367`, `2368`, `2369`, `2370`, `2371`, `2372`, `2373`, `2374`, `2375`, `2376`, `2377`, `2378`, `2379`, `2380`, `2381`, `2382`, `2383`, `2384`, `2385`, `2386`, `2387`, `2388`, `2389`, `2390`, `2391`, `2392`, `2393`, `2394`, `2395`, `2396`, `2397`, `2398`, `2399`, `2400`, `2401`, `2402`, `2403`, `2404`, `2405`, `2406`, `2407`, `2408`, `2409`, `2410`, `2411`, `2412`, `2413`, `2414`, `2415`, `2416`, `2417`, `2418`, `2419`, `2420`, `2421`, `2422`, `2423`, `2424`, `2425`, `2426`, `2427`, `2428`, `2429`, `2430`, `2431`, `2432`, `2433`, `2434`, `2435`, `2436`, `2437`, `2438`, `2439`, `2440`, `2441`, `2442`, `2443`, `2444`, `2445`, `2446`, `2447`, `2448`, `2449`, `2450`, `2451`, `2452`, `2453`, `2454`, `2455`, `2456`, `2457`, `2458`, `2459`, `2460`, `2461`, `2462`, `2463`, `2464`, `2465`, `2466`, `2467`, `2468`, `2469`, `2470`, `2471`, `2472`, `2473`, `2474`, `2475`, `2476`, `2477`, `2478`, `2479`, `2480`, `2481`, `2482`, `2483`, `2484`, `2485`, `2486`, `2487`, `2488`, `2489`, `2490`, `2491`, `2492`, `2493`, `2494`, `2495`, `2496`, `2497`, `2498`, `2499`, `2500`, `2501`, `2502`, `2503`, `2504`, `2505`, `2506`, `2507`, `2508`, `2509`, `2510`, `2511`, `2512`, `2513`, `2514`, `2515`, `2516`, `2517`, `2518`, `2519`, `2520`, `2521`, `2522`, `2523`, `2524`, `2525`, `2526`, `2527`, `2528`, `2529`, `2530`, `2531`, `2532`, `2533`, `2534`, `2535`, `2536`, `2537`, `2538`, `2539`, `2540`, `2541`, `2542`, `2543`, `2544`, `2545`, `2546`, `2547`, `2548`, `2549`, `2550`, `2551`, `2552`, `2553`, `2554`, `2555`, `2556`, `2557`, `2558`, `2559`, `2560`, `2561`, `2562`, `2563`, `2564`, `2565`, `2566`, `2567`, `2568`, `2569`, `2570`, `2571`, `2572`, `2573`, `2574`, `2575`, `2576`, `2577`, `2578`, `2579`, `2580`, `2581`, `2582`, `2583`, `2584`, `2585`, `2586`, `2587`, `2588`, `2589`, `2590`, `2591`, `2592`, `2593`, `2594`, `2595`, `2596`, `2597`, `2598`, `2599`, `2600`, `2601`, `2602`, `2603`, `2604`, `2605`, `2606`, `2607`, `2608`, `2609`, `2610`, `2611`, `2612`, `2613`, `2614`, `2615`, `2616`, `2617`, `2618`, `2619`, `2620`, `2621`, `2622`, `2623`, `2624`, `2625`, `2626`, `2627`, `2628`, `2629`, `2630`, `2631`, `2632`, `2633`, `2634`, `2635`, `2636`, `2637`, `2638`, `2639`, `2640`, `2641`, `2642`, `2643`, `2644`, `2645`, `2646`, `2647`, `2648`, `2649`, `2650`, `2651`, `2652`, `2653`, `2654`, `2655`, `2656`, `2657`, `2658`, `2659`, `2660`, `2661`, `2662`, `2663`, `2664`, `2665`, `2666`, `2667`, `2668`, `2669`, `2670`, `2671`, `2672`, `2673`, `2674`, `2675`, `2676`, `2677`, `2678`, `2679`, `2680`, `2681`, `2682`, `2683`, `2684`, `2685`, `2686`, `2687`, `2688`, `2689`, `2690`, `2691`, `2692`, `2693`, `2694`, `2695`, `2696`, `2697`, `2698`, `2699`, `2700`, `2701`, `2702`, `2703`, `2704`, `2705`, `2706`, `2707`, `2708`, `2709`, `2710`, `2711`, `2712`, `2713`, `2714`, `2715`, `2716`, `2717`, `2718`, `2719`, `2720`, `2721`, `2722`, `2723`, `2724`, `2725`, `2726`, `2727`, `2728`, `2729`, `2730`, `2731`, `2732`, `2733`, `2734`, `2735`, `2736`, `2737`, `2738`, `2739`, `2740`, `2741`, `2742`, `2743`, `2744`, `2745`, `2746`, `2747`, `2748`, `2749`, `2750`, `2751`, `2752`, `2753`, `2754`, `2755`, `2756`, `2757`, `2758`, `2759`, `2760`, `2761`, `2762`, `2763`, `2764`, `2765`, `2766`, `2767`, `2768`, `2769`, `2770`, `2771`, `2772`, `2773`, `2774`, `2775`, `2776`, `2777`, `2778`, `2779`, `2780`, `2781`, `2782`, `2783`, `2784`, `2785`, `2786`, `2787`, `2788`, `2789`, `2790`, `2791`, `2792`, `2793`, `2794`, `2795`, `2796`, `2797`, `2798`, `2799`, `2800`, `2801`, `2802`, `2803`, 
`2804`, `2805`, `2806`, `2807`, `2808`, `2809`, `2810`, `2811`, `2812`, `2813`, `2814`, `2815`, `2816`, `2817`, `2818`, `2819`, `2820`, `2821`, `2822`, `2823`, `2824`, `2825`, `2826`, `2827`, `2828`, `2829`, `2830`, `2831`, `2832`, `2833`, `2834`, `2835`, `2836`, `2837`, `2838`, `2839`, `2840`, `2841`, `2842`, `2843`, `2844`, `2845`, `2846`, `2847`, `2848`, `2849`, `2850`, `2851`, `2852`, `2853`, `2854`, `2855`, `2856`, `2857`, `2858`, `2859`, `2860`, `2861`, `2862`, `2863`, `2864`, `2865`, `2866`, `2867`, `2868`, `2869`, `2870`, `2871`, `2872`, `2873`, `2874`, `2875`, `2876`, `2877`, `2878`, `2879`, `2880`, `2881`, `2882`, `2883`, `2884`, `2885`, `2886`, `2887`, `2888`, `2889`, `2890`, `2891`, `2892`, `2893`, `2894`, `2895`, `2896`, `2897`, `2898`, `2899`, `2900`, `2901`, `2902`, `2903`, `2904`, `2905`, `2906`, `2907`, `2908`, `2909`, `2910`, `2911`, `2912`, `2913`, `2914`, `2915`, `2916`, `2917`, `2918`, `2919`, `2920`, `2921`, `2922`, `2923`, `2924`, `2925`, `2926`, `2927`, `2928`, `2929`, `2930`, `2931`, `2932`, `2933`, `2934`, `2935`, `2936`, `2937`, `2938`, `2939`, `2940`, `2941`, `2942`, `2943`, `2944`, `2945`, `2946`, `2947`, `2948`, `2949`, `2950`, `2951`, `2952`, `2953`, `2954`, `2955`, `2956`, `2957`, `2958`, `2959`, `2960`, `2961`, `2962`, `2963`, `2964`, `2965`, `2966`, `2967`, `2968`, `2969`, `2970`, `2971`, `2972`, `2973`, `2974`, `2975`, `2976`, `2977`, `2978`, `2979`, `2980`, `2981`, `2982`, `2983`, `2984`, `2985`, `2986`, `2987`, `2988`, `2989`, `2990`, `2991`, `2992`, `2993`, `2994`, `2995`, `2996`, `2997`, `2998`, `2999`, `3000`, `3001`, `3002`, `3003`, `3004`, `3005`, `3006`, `3007`, `3008`, `3009`, `3010`, `3011`, `3012`, `3013`, `3014`, `3015`, `3016`, `3017`, `3018`, `3019`, `3020`, `3021`, `3022`, `3023`, `3024`, `3025`, `3026`, `3027`, `3028`, `3029`, `3030`, `3031`, `3032`, `3033`, `3034`, `3035`, `3036`, `3037`, `3038`, `3039`, `3040`, `3041`, `3042`, `3043`, `3044`, `3045`, `3046`, `3047`, `3048`, `3049`, `3050`, `3051`, `3052`, `3053`, `3054`, `3055`, `3056`, `3057`, `3058`, `3059`, `3060`, `3061`, `3062`, `3063`, `3064`, `3065`, `3066`, `3067`, `3068`, `3069`, `3070`, `3071`, `3072`, `3073`, `3074`, `3075`, `3076`, `3077`, `3078`, `3079`, `3080`, `3081`, `3082`, `3083`, `3084`, `3085`, `3086`, `3087`, `3088`, `3089`, `3090`, `3091`, `3092`, `3093`, `3094`, `3095`, `3096`, `3097`, `3098`, `3099`, `3100`, `3101`, `3102`, `3103`, `3104`, `3105`, `3106`, `3107`, `3108`, `3109`, `3110`, `3111`, `3112`, `3113`, `3114`, `3115`, `3116`, `3117`, `3118`, `3119`, `3120`, `3121`, `3122`, `3123`, `3124`, `3125`, `3126`, `3127`, `3128`, `3129`, `3130`, `3131`, `3132`, `3133`, `3134`, `3135`, `3136`, `3137`, `3138`, `3139`, `3140`, `3141`, `3142`, `3143`, `3144`, `3145`, `3146`, `3147`, `3148`, `3149`, `3150`, `3151`, `3152`, `3153`, `3154`, `3155`, `3156`, `3157`, `3158`, `3159`, `3160`, `3161`, `3162`, `3163`, `3164`, `3165`, `3166`, `3167`, `3168`, `3169`, `3170`, `3171`, `3172`, `3173`, `3174`, `3175`, `3176`, `3177`, `3178`, `3179`, `3180`, `3181`, `3182`, `3183`, `3184`, `3185`, `3186`, `3187`, `3188`, `3189`, `3190`, `3191`, `3192`, `3193`, `3194`, `3195`, `3196`, `3197`, `3198`, `3199`, `3200`, `3201`, `3202`, `3203`, `3204`, `3205`, `3206`, `3207`, `3208`, `3209`, `3210`, `3211`, `3212`, `3213`, `3214`, `3215`, `3216`, `3217`, `3218`, `3219`, `3220`, `3221`, `3222`, `3223`, `3224`, `3225`, `3226`, `3227`, `3228`, `3229`, `3230`, `3231`, `3232`, `3233`, `3234`, `3235`, `3236`, `3237`, `3238`, `3239`, `3240`, `3241`, `3242`, `3243`, `3244`, `3245`, `3246`, `3247`, 
`3248`, `3249`, `3250`, `3251`, `3252`, `3253`, `3254`, `3255`, `3256`, `3257`, `3258`, `3259`, `3260`, `3261`, `3262`, `3263`, `3264`, `3265`, `3266`, `3267`, `3268`, `3269`, `3270`, `3271`, `3272`, `3273`, `3274`, `3275`, `3276`, `3277`, `3278`, `3279`, `3280`, `3281`, `3282`, `3283`, `3284`, `3285`, `3286`, `3287`, `3288`, `3289`, `3290`, `3291`, `3292`, `3293`, `3294`, `3295`, `3296`, `3297`, `3298`, `3299`, `3300`, `3301`, `3302`, `3303`, `3304`, `3305`, `3306`, `3307`, `3308`, `3309`, `3310`, `3311`, `3312`, `3313`, `3314`, `3315`, `3316`, `3317`, `3318`, `3319`, `3320`, `3321`, `3322`, `3323`, `3324`, `3325`, `3326`, `3327`, `3328`, `3329`, `3330`, `3331`, `3332`, `3333`, `3334`, `3335`, `3336`, `3337`, `3338`, `3339`, `3340`, `3341`, `3342`, `3343`, `3344`, `3345`, `3346`, `3347`, `3348`, `3349`, `3350`, `3351`, `3352`, `3353`, `3354`, `3355`, `3356`, `3357`, `3358`, `3359`, `3360`, `3361`, `3362`, `3363`, `3364`, `3365`, `3366`, `3367`, `3368`, `3369`, `3370`, `3371`, `3372`, `3373`, `3374`, `3375`, `3376`, `3377`, `3378`, `3379`, `3380`, `3381`, `3382`, `3383`, `3384`, `3385`, `3386`, `3387`, `3388`, `3389`, `3390`, `3391`, `3392`, `3393`, `3394`, `3395`, `3396`, `3397`, `3398`, `3399`, `3400`, `3401`, `3402`, `3403`, `3404`, `3405`, `3406`, `3407`, `3408`, `3409`, `3410`, `3411`, `3412`, `3413`, `3414`, `3415`, `3416`, `3417`, `3418`, `3419`, `3420`, `3421`, `3422`, `3423`, `3424`, `3425`, `3426`, `3427`, `3428`, `3429`, `3430`, `3431`, `3432`, `3433`, `3434`, `3435`, `3436`, `3437`, `3438`, `3439`, `3440`, `3441`, `3442`, `3443`, `3444`, `3445`, `3446`, `3447`, `3448`, `3449`, `3450`, `3451`, `3452`, `3453`, `3454`, `3455`, `3456`, `3457`, `3458`, `3459`, `3460`, `3461`, `3462`, `3463`, `3464`, `3465`, `3466`, `3467`, `3468`, `3469`, `3470`, `3471`, `3472`, `3473`, `3474`, `3475`, `3476`, `3477`, `3478`, `3479`, `3480`, `3481`, `3482`, `3483`, `3484`, `3485`, `3486`, `3487`, `3488`, `3489`, `3490`, `3491`, `3492`, `3493`, `3494`, `3495`, `3496`, `3497`, `3498`, `3499`, `3500`, `3501`, `3502`, `3503`, `3504`, `3505`, `3506`, `3507`, `3508`, `3509`, `3510`, `3511`, `3512`, `3513`, `3514`, `3515`, `3516`, `3517`, `3518`, `3519`, `3520`, `3521`, `3522`, `3523`, `3524`, `3525`, `3526`, `3527`, `3528`, `3529`, `3530`, `3531`, `3532`, `3533`, `3534`, `3535`, `3536`, `3537`, `3538`, `3539`, `3540`, `3541`, `3542`, `3543`, `3544`, `3545`, `3546`, `3547`, `3548`, `3549`, `3550`, `3551`, `3552`, `3553`, `3554`, `3555`, `3556`, `3557`, `3558`, `3559`, `3560`, `3561`, `3562`, `3563`, `3564`, `3565`, `3566`, `3567`, `3568`, `3569`, `3570`, `3571`, `3572`, `3573`, `3574`, `3575`, `3576`, `3577`, `3578`, `3579`, `3580`, `3581`, `3582`, `3583`, `3584`, `3585`, `3586`, `3587`, `3588`, `3589`, `3590`, `3591`, `3592`, `3593`, `3594`, `3595`, `3596`, `3597`, `3598`, `3599`, `3600`, `3601`, `3602`, `3603`, `3604`, `3605`, `3606`, `3607`, `3608`, `3609`, `3610`, `3611`, `3612`, `3613`, `3614`, `3615`, `3616`, `3617`, `3618`, `3619`, `3620`, `3621`, `3622`, `3623`, `3624`, `3625`, `3626`, `3627`, `3628`, `3629`, `3630`, `3631`, `3632`, `3633`, `3634`, `3635`, `3636`, `3637`, `3638`, `3639`, `3640`, `3641`, `3642`, `3643`, `3644`, `3645`, `3646`, `3647`, `3648`, `3649`, `3650`, `3651`, `3652`, `3653`, `3654`, `3655`, `3656`, `3657`, `3658`, `3659`, `3660`, `3661`, `3662`, `3663`, `3664`, `3665`, `3666`, `3667`, `3668`, `3669`, `3670`, `3671`, `3672`, `3673`, `3674`, `3675`, `3676`, `3677`, `3678`, `3679`, `3680`, `3681`, `3682`, `3683`, `3684`, `3685`, `3686`, `3687`, `3688`, `3689`, `3690`, `3691`, 
`3692`, `3693`, `3694`, `3695`, `3696`, `3697`, `3698`, `3699`, `3700`, `3701`, `3702`, `3703`, `3704`, `3705`, `3706`, `3707`, `3708`, `3709`, `3710`, `3711`, `3712`, `3713`, `3714`, `3715`, `3716`, `3717`, `3718`, `3719`, `3720`, `3721`, `3722`, `3723`, `3724`, `3725`, `3726`, `3727`, `3728`, `3729`, `3730`, `3731`, `3732`, `3733`, `3734`, `3735`, `3736`, `3737`, `3738`, `3739`, `3740`, `3741`, `3742`, `3743`, `3744`, `3745`, `3746`, `3747`, `3748`, `3749`, `3750`, `3751`, `3752`, `3753`, `3754`, `3755`, `3756`, `3757`, `3758`, `3759`, `3760`, `3761`, `3762`, `3763`, `3764`, `3765`, `3766`, `3767`, `3768`, `3769`, `3770`, `3771`, `3772`, `3773`, `3774`, `3775`, `3776`, `3777`, `3778`, `3779`, `3780`, `3781`, `3782`, `3783`, `3784`, `3785`, `3786`, `3787`, `3788`, `3789`, `3790`, `3791`, `3792`, `3793`, `3794`, `3795`, `3796`, `3797`, `3798`, `3799`, `3800`, `3801`, `3802`, `3803`, `3804`, `3805`, `3806`, `3807`, `3808`, `3809`, `3810`, `3811`, `3812`, `3813`, `3814`, `3815`, `3816`, `3817`, `3818`, `3819`, `3820`, `3821`, `3822`, `3823`, `3824`, `3825`, `3826`, `3827`, `3828`, `3829`, `3830`, `3831`, `3832`, `3833`, `3834`, `3835`, `3836`, `3837`, `3838`, `3839`, `3840`, `3841`, `3842`, `3843`, `3844`, `3845`, `3846`, `3847`, `3848`, `3849`, `3850`, `3851`, `3852`, `3853`, `3854`, `3855`, `3856`, `3857`, `3858`, `3859`, `3860`, `3861`, `3862`, `3863`, `3864`, `3865`, `3866`, `3867`, `3868`, `3869`, `3870`, `3871`, `3872`, `3873`, `3874`, `3875`, `3876`, `3877`, `3878`, `3879`, `3880`, `3881`, `3882`, `3883`, `3884`, `3885`, `3886`, `3887`, `3888`, `3889`, `3890`, `3891`, `3892`, `3893`, `3894`, `3895`, `3896`, `3897`, `3898`, `3899`, `3900`, `3901`, `3902`, `3903`, `3904`, `3905`, `3906`, `3907`, `3908`, `3909`, `3910`, `3911`, `3912`, `3913`, `3914`, `3915`, `3916`, `3917`, `3918`, `3919`, `3920`, `3921`, `3922`, `3923`, `3924`, `3925`, `3926`, `3927`, `3928`, `3929`, `3930`, `3931`, `3932`, `3933`, `3934`, `3935`, `3936`, `3937`, `3938`, `3939`, `3940`, `3941`, `3942`, `3943`, `3944`, `3945`, `3946`, `3947`, `3948`, `3949`, `3950`, `3951`, `3952`, `3953`, `3954`, `3955`, `3956`, `3957`, `3958`, `3959`, `3960`, `3961`, `3962`, `3963`, `3964`, `3965`, `3966`, `3967`, `3968`, `3969`, `3970`, `3971`, `3972`, `3973`, `3974`, `3975`, `3976`, `3977`, `3978`, `3979`, `3980`, `3981`, `3982`, `3983`, `3984`, `3985`, `3986`, `3987`, `3988`, `3989`, `3990`, `3991`, `3992`, `3993`, `3994`, `3995`, `3996`, `3997`, `3998`, `3999`, `4000`, `4001`, `4002`, `4003`, `4004`, `4005`, `4006`, `4007`, `4008`, `4009`, `4010`, `4011`, `4012`, `4013`, `4014`, `4015`, `4016`, `4017`, `4018`, `4019`, `4020`, `4021`, `4022`, `4023`, `4024`, `4025`, `4026`, `4027`, `4028`, `4029`, `4030`, `4031`, `4032`, `4033`, `4034`, `4035`, `4036`, `4037`, `4038`, `4039`, `4040`, `4041`, `4042`, `4043`, `4044`, `4045`, `4046`, `4047`, `4048`, `4049`, `4050`, `4051`, `4052`, `4053`, `4054`, `4055`, `4056`, `4057`, `4058`, `4059`, `4060`, `4061`, `4062`, `4063`, `4064`, `4065`, `4066`, `4067`, `4068`, `4069`, `4070`, `4071`, `4072`, `4073`, `4074`, `4075`, `4076`, `4077`, `4078`, `4079`, `4080`, `4081`, `4082`, `4083`, `4084`, `4085`, `4086`, `4087`, `4088`, `4089`, `4090`, `4091`, `4092`, `4093`, `4094`, `4095`, `4096`, `4097`, `4098`, `4099`, `4100`, `4101`, `4102`, `4103`, `4104`, `4105`, `4106`, `4107`, `4108`, `4109`, `4110`, `4111`, `4112`, `4113`, `4114`, `4115`, `4116`, `4117`, `4118`, `4119`, `4120`, `4121`, `4122`, `4123`, `4124`, `4125`, `4126`, `4127`, `4128`, `4129`, `4130`, `4131`, `4132`, `4133`, `4134`, `4135`, 
`4136`, `4137`, `4138`, `4139`, `4140`, `4141`, `4142`, `4143`, `4144`, `4145`, `4146`, `4147`, `4148`, `4149`, `4150`, `4151`, `4152`, `4153`, `4154`, `4155`, `4156`, `4157`, `4158`, `4159`, `4160`, `4161`, `4162`, `4163`, `4164`, `4165`, `4166`, `4167`, `4168`, `4169`, `4170`, `4171`, `4172`, `4173`, `4174`, `4175`, `4176`, `4177`, `4178`, `4179`, `4180`, `4181`, `4182`, `4183`, `4184`, `4185`, `4186`, `4187`, `4188`, `4189`, `4190`, `4191`, `4192`, `4193`, `4194`, `4195`, `4196`, `4197`, `4198`, `4199`, `4200`, `4201`, `4202`, `4203`, `4204`, `4205`, `4206`, `4207`, `4208`, `4209`, `4210`, `4211`, `4212`, `4213`, `4214`, `4215`, `4216`, `4217`, `4218`, `4219`, `4220`, `4221`, `4222`, `4223`, `4224`, `4225`, `4226`, `4227`, `4228`, `4229`, `4230`, `4231`, `4232`, `4233`, `4234`, `4235`, `4236`, `4237`, `4238`, `4239`, `4240`, `4241`, `4242`, `4243`, `4244`, `4245`, `4246`, `4247`, `4248`, `4249`, `4250`, `4251`, `4252`, `4253`, `4254`, `4255`, `4256`, `4257`, `4258`, `4259`, `4260`, `4261`, `4262`, `4263`, `4264`, `4265`, `4266`, `4267`, `4268`, `4269`, `4270`, `4271`, `4272`, `4273`, `4274`, `4275`, `4276`, `4277`, `4278`, `4279`, `4280`, `4281`, `4282`, `4283`, `4284`, `4285`, `4286`, `4287`, `4288`, `4289`, `4290`, `4291`, `4292`, `4293`, `4294`, `4295`, `4296`, `4297`, `4298`, `4299`, `4300`, `4301`, `4302`, `4303`, `4304`, `4305`, `4306`, `4307`, `4308`, `4309`, `4310`, `4311`, `4312`, `4313`, `4314`, `4315`, `4316`, `4317`, `4318`, `4319`, `4320`, `4321`, `4322`, `4323`, `4324`, `4325`, `4326`, `4327`, `4328`, `4329`, `4330`, `4331`, `4332`, `4333`, `4334`, `4335`, `4336`, `4337`, `4338`, `4339`, `4340`, `4341`, `4342`, `4343`, `4344`, `4345`, `4346`, `4347`, `4348`, `4349`, `4350`, `4351`, `4352`, `4353`, `4354`, `4355`, `4356`, `4357`, `4358`, `4359`, `4360`, `4361`, `4362`, `4363`, `4364`, `4365`, `4366`, `4367`, `4368`, `4369`, `4370`, `4371`, `4372`, `4373`, `4374`, `4375`, `4376`, `4377`, `4378`, `4379`, `4380`, `4381`, `4382`, `4383`, `4384`, `4385`, `4386`, `4387`, `4388`, `4389`, `4390`, `4391`, `4392`, `4393`, `4394`, `4395`, `4396`, `4397`, `4398`, `4399`, `4400`, `4401`, `4402`, `4403`, `4404`, `4405`, `4406`, `4407`, `4408`, `4409`, `4410`, `4411`, `4412`, `4413`, `4414`, `4415`, `4416`, `4417`, `4418`, `4419`, `4420`, `4421`, `4422`, `4423`, `4424`, `4425`, `4426`, `4427`, `4428`, `4429`, `4430`, `4431`, `4432`, `4433`, `4434`, `4435`, `4436`, `4437`, `4438`, `4439`, `4440`, `4441`, `4442`, `4443`, `4444`, `4445`, `4446`, `4447`, `4448`, `4449`, `4450`, `4451`, `4452`, `4453`, `4454`, `4455`, `4456`, `4457`, `4458`, `4459`, `4460`, `4461`, `4462`, `4463`, `4464`, `4465`, `4466`, `4467`, `4468`, `4469`, `4470`, `4471`, `4472`, `4473`, `4474`, `4475`, `4476`, `4477`, `4478`, `4479`, `4480`, `4481`, `4482`, `4483`, `4484`, `4485`, `4486`, `4487`, `4488`, `4489`, `4490`, `4491`, `4492`, `4493`, `4494`, `4495`, `4496`, `4497`, `4498`, `4499`, `4500`, `4501`, `4502`, `4503`, `4504`, `4505`, `4506`, `4507`, `4508`, `4509`, `4510`, `4511`, `4512`, `4513`, `4514`, `4515`, `4516`, `4517`, `4518`, `4519`, `4520`, `4521`, `4522`, `4523`, `4524`, `4525`, `4526`, `4527`, `4528`, `4529`, `4530`, `4531`, `4532`, `4533`, `4534`, `4535`, `4536`, `4537`, `4538`, `4539`, `4540`, `4541`, `4542`, `4543`, `4544`, `4545`, `4546`, `4547`, `4548`, `4549`, `4550`, `4551`, `4552`, `4553`, `4554`, `4555`, `4556`, `4557`, `4558`, `4559`, `4560`, `4561`, `4562`, `4563`, `4564`, `4565`, `4566`, `4567`, `4568`, `4569`, `4570`, `4571`, `4572`, `4573`, `4574`, `4575`, `4576`, `4577`, `4578`, `4579`, 
`4580`, `4581`, `4582`, `4583`, `4584`, `4585`, `4586`, `4587`, `4588`, `4589`, `4590`, `4591`, `4592`, `4593`, `4594`, `4595`, `4596`, `4597`, `4598`, `4599`, `4600`, `4601`, `4602`, `4603`, `4604`, `4605`, `4606`, `4607`, `4608`, `4609`, `4610`, `4611`, `4612`, `4613`, `4614`, `4615`, `4616`, `4617`, `4618`, `4619`, `4620`, `4621`, `4622`, `4623`, `4624`, `4625`, `4626`, `4627`, `4628`, `4629`, `4630`, `4631`, `4632`, `4633`, `4634`, `4635`, `4636`, `4637`, `4638`, `4639`, `4640`, `4641`, `4642`, `4643`, `4644`, `4645`, `4646`, `4647`, `4648`, `4649`, `4650`, `4651`, `4652`, `4653`, `4654`, `4655`, `4656`, `4657`, `4658`, `4659`, `4660`, `4661`, `4662`, `4663`, `4664`, `4665`, `4666`, `4667`, `4668`, `4669`, `4670`, `4671`, `4672`, `4673`, `4674`, `4675`, `4676`, `4677`, `4678`, `4679`, `4680`, `4681`, `4682`, `4683`, `4684`, `4685`, `4686`, `4687`, `4688`, `4689`, `4690`, `4691`, `4692`, `4693`, `4694`, `4695`, `4696`, `4697`, `4698`, `4699`, `4700`, `4701`, `4702`, `4703`, `4704`, `4705`, `4706`, `4707`, `4708`, `4709`, `4710`, `4711`, `4712`, `4713`, `4714`, `4715`, `4716`, `4717`, `4718`, `4719`, `4720`, `4721`, `4722`, `4723`, `4724`, `4725`, `4726`, `4727`, `4728`, `4729`, `4730`, `4731`, `4732`, `4733`, `4734`, `4735`, `4736`, `4737`, `4738`, `4739`, `4740`, `4741`, `4742`, `4743`, `4744`, `4745`, `4746`, `4747`, `4748`, `4749`, `4750`, `4751`, `4752`, `4753`, `4754`, `4755`, `4756`, `4757`, `4758`, `4759`, `4760`, `4761`, `4762`, `4763`, `4764`, `4765`, `4766`, `4767`, `4768`, `4769`, `4770`, `4771`, `4772`, `4773`, `4774`, `4775`, `4776`, `4777`, `4778`, `4779`, `4780`, `4781`, `4782`, `4783`, `4784`, `4785`, `4786`, `4787`, `4788`, `4789`, `4790`, `4791`, `4792`, `4793`, `4794`, `4795`, `4796`, `4797`, `4798`, `4799`, `4800`, `4801`, `4802`, `4803`, `4804`, `4805`, `4806`, `4807`, `4808`, `4809`, `4810`, `4811`, `4812`, `4813`, `4814`, `4815`, `4816`, `4817`, `4818`, `4819`, `4820`, `4821`, `4822`, `4823`, `4824`, `4825`, `4826`, `4827`, `4828`, `4829`, `4830`, `4831`, `4832`, `4833`, `4834`, `4835`, `4836`, `4837`, `4838`, `4839`, `4840`, `4841`, `4842`, `4843`, `4844`, `4845`, `4846`, `4847`, `4848`, `4849`, `4850`, `4851`, `4852`, `4853`, `4854`, `4855`, `4856`, `4857`, `4858`, `4859`, `4860`, `4861`, `4862`, `4863`, `4864`, `4865`, `4866`, `4867`, `4868`, `4869`, `4870`, `4871`, `4872`, `4873`, `4874`, `4875`, `4876`, `4877`, `4878`, `4879`, `4880`, `4881`, `4882`, `4883`, `4884`, `4885`, `4886`, `4887`, `4888`, `4889`, `4890`, `4891`, `4892`, `4893`, `4894`, `4895`, `4896`, `4897`, `4898`, `4899`, `4900`, `4901`, `4902`, `4903`, `4904`, `4905`, `4906`, `4907`, `4908`, `4909`, `4910`, `4911`, `4912`, `4913`, `4914`, `4915`, `4916`, `4917`, `4918`, `4919`, `4920`, `4921`, `4922`, `4923`, `4924`, `4925`, `4926`, `4927`, `4928`, `4929`, `4930`, `4931`, `4932`, `4933`, `4934`, `4935`, `4936`, `4937`, `4938`, `4939`, `4940`, `4941`, `4942`, `4943`, `4944`, `4945`, `4946`, `4947`, `4948`, `4949`, `4950`, `4951`, `4952`, `4953`, `4954`, `4955`, `4956`, `4957`, `4958`, `4959`, `4960`, `4961`, `4962`, `4963`, `4964`, `4965`, `4966`, `4967`, `4968`, `4969`, `4970`, `4971`, `4972`, `4973`, `4974`, `4975`, `4976`, `4977`, `4978`, `4979`, `4980`, `4981`, `4982`, `4983`, `4984`, `4985`, `4986`, `4987`, `4988`, `4989`, `4990`, `4991`, `4992`, `4993`, `4994`, `4995`, `4996`, `4997`, `4998`, `4999`, `5000`, `5001`, `5002`, `5003`, `5004`, `5005`, `5006`, `5007`, `5008`, `5009`, `5010`, `5011`, `5012`, `5013`, `5014`, `5015`, `5016`, `5017`, `5018`, `5019`, `5020`, `5021`, `5022`, `5023`, 
`5024`, `5025`, `5026`, `5027`, `5028`, `5029`, `5030`, `5031`, `5032`, `5033`, `5034`, `5035`, `5036`, `5037`, `5038`, `5039`, `5040`, `5041`, `5042`, `5043`, `5044`, `5045`, `5046`, `5047`, `5048`, `5049`, `5050`, `5051`, `5052`, `5053`, `5054`, `5055`, `5056`, `5057`, `5058`, `5059`, `5060`, `5061`, `5062`, `5063`, `5064`, `5065`, `5066`, `5067`, `5068`, `5069`, `5070`, `5071`, `5072`, `5073`, `5074`, `5075`, `5076`, `5077`, `5078`, `5079`, `5080`, `5081`, `5082`, `5083`, `5084`, `5085`, `5086`, `5087`, `5088`, `5089`, `5090`, `5091`, `5092`, `5093`, `5094`, `5095`, `5096`, `5097`, `5098`, `5099`, `5100`, `5101`, `5102`, `5103`, `5104`, `5105`, `5106`, `5107`, `5108`, `5109`, `5110`, `5111`, `5112`, `5113`, `5114`, `5115`, `5116`, `5117`, `5118`, `5119`, `5120`, `5121`, `5122`, `5123`, `5124`, `5125`, `5126`, `5127`, `5128`, `5129`, `5130`, `5131`, `5132`, `5133`, `5134`, `5135`, `5136`, `5137`, `5138`, `5139`, `5140`, `5141`, `5142`, `5143`, `5144`, `5145`, `5146`, `5147`, `5148`, `5149`, `5150`, `5151`, `5152`, `5153`, `5154`, `5155`, `5156`, `5157`, `5158`, `5159`, `5160`, `5161`, `5162`, `5163`, `5164`, `5165`, `5166`, `5167`, `5168`, `5169`, `5170`, `5171`, `5172`, `5173`, `5174`, `5175`, `5176`, `5177`, `5178`, `5179`, `5180`, `5181`, `5182`, `5183`, `5184`, `5185`, `5186`, `5187`, `5188`, `5189`, `5190`, `5191`, `5192`, `5193`, `5194`, `5195`, `5196`, `5197`, `5198`, `5199`, `5200`, `5201`, `5202`, `5203`, `5204`, `5205`, `5206`, `5207`, `5208`, `5209`, `5210`, `5211`, `5212`, `5213`, `5214`, `5215`, `5216`, `5217`, `5218`, `5219`, `5220`, `5221`, `5222`, `5223`, `5224`, `5225`, `5226`, `5227`, `5228`, `5229`, `5230`, `5231`, `5232`, `5233`, `5234`, `5235`, `5236`, `5237`, `5238`, `5239`, `5240`, `5241`, `5242`, `5243`, `5244`, `5245`, `5246`, `5247`, `5248`, `5249`, `5250`, `5251`, `5252`, `5253`, `5254`, `5255`, `5256`, `5257`, `5258`, `5259`, `5260`, `5261`, `5262`, `5263`, `5264`, `5265`, `5266`, `5267`, `5268`, `5269`, `5270`, `5271`, `5272`, `5273`, `5274`, `5275`, `5276`, `5277`, `5278`, `5279`, `5280`, `5281`, `5282`, `5283`, `5284`, `5285`, `5286`, `5287`, `5288`, `5289`, `5290`, `5291`, `5292`, `5293`, `5294`, `5295`, `5296`, `5297`, `5298`, `5299`, `5300`, `5301`, `5302`, `5303`, `5304`, `5305`, `5306`, `5307`, `5308`, `5309`, `5310`, `5311`, `5312`, `5313`, `5314`, `5315`, `5316`, `5317`, `5318`, `5319`, `5320`, `5321`, `5322`, `5323`, `5324`, `5325`, `5326`, `5327`, `5328`, `5329`, `5330`, `5331`, `5332`, `5333`, `5334`, `5335`, `5336`, `5337`, `5338`, `5339`, `5340`, `5341`, `5342`, `5343`, `5344`, `5345`, `5346`, `5347`, `5348`, `5349`, `5350`, `5351`, `5352`, `5353`, `5354`, `5355`, `5356`, `5357`, `5358`, `5359`, `5360`, `5361`, `5362`, `5363`, `5364`, `5365`, `5366`, `5367`, `5368`, `5369`, `5370`, `5371`, `5372`, `5373`, `5374`, `5375`, `5376`, `5377`, `5378`, `5379`, `5380`, `5381`, `5382`, `5383`, `5384`, `5385`, `5386`, `5387`, `5388`, `5389`, `5390`, `5391`, `5392`, `5393`, `5394`, `5395`, `5396`, `5397`, `5398`, `5399`, `5400`, `5401`, `5402`, `5403`, `5404`, `5405`, `5406`, `5407`, `5408`, `5409`, `5410`, `5411`, `5412`, `5413`, `5414`, `5415`, `5416`, `5417`, `5418`, `5419`, `5420`, `5421`, `5422`, `5423`, `5424`, `5425`, `5426`, `5427`, `5428`, `5429`, `5430`, `5431`, `5432`, `5433`, `5434`, `5435`, `5436`, `5437`, `5438`, `5439`, `5440`, `5441`, `5442`, `5443`, `5444`, `5445`, `5446`, `5447`, `5448`, `5449`, `5450`, `5451`, `5452`, `5453`, `5454`, `5455`, `5456`, `5457`, `5458`, `5459`, `5460`, `5461`, `5462`, `5463`, `5464`, `5465`, `5466`, `5467`, 
`5468`, `5469`, `5470`, `5471`, `5472`, `5473`, `5474`, `5475`, `5476`, `5477`, `5478`, `5479`, `5480`, `5481`, `5482`, `5483`, `5484`, `5485`, `5486`, `5487`, `5488`, `5489`, `5490`, `5491`, `5492`, `5493`, `5494`, `5495`, `5496`, `5497`, `5498`, `5499`, `5500`, `5501`, `5502`, `5503`, `5504`, `5505`, `5506`, `5507`, `5508`, `5509`, `5510`, `5511`, `5512`, `5513`, `5514`, `5515`, `5516`, `5517`, `5518`, `5519`, `5520`, `5521`, `5522`, `5523`, `5524`, `5525`, `5526`, `5527`, `5528`, `5529`, `5530`, `5531`, `5532`, `5533`, `5534`, `5535`, `5536`, `5537`, `5538`, `5539`, `5540`, `5541`, `5542`, `5543`, `5544`, `5545`, `5546`, `5547`, `5548`, `5549`, `5550`, `5551`, `5552`, `5553`, `5554`, `5555`, `5556`, `5557`, `5558`, `5559`, `5560`, `5561`, `5562`, `5563`, `5564`, `5565`, `5566`, `5567`, `5568`, `5569`, `5570`, `5571`, `5572`, `5573`, `5574`, `5575`, `5576`, `5577`, `5578`, `5579`, `5580`, `5581`, `5582`, `5583`, `5584`, `5585`, `5586`, `5587`, `5588`, `5589`, `5590`, `5591`, `5592`, `5593`, `5594`, `5595`, `5596`, `5597`, `5598`, `5599`, `5600`, `5601`, `5602`, `5603`, `5604`, `5605`, `5606`, `5607`, `5608`, `5609`, `5610`, `5611`, `5612`, `5613`, `5614`, `5615`, `5616`, `5617`, `5618`, `5619`, `5620`, `5621`, `5622`, `5623`, `5624`, `5625`, `5626`, `5627`, `5628`, `5629`, `5630`, `5631`, `5632`, `5633`, `5634`, `5635`, `5636`, `5637`, `5638`, `5639`, `5640`, `5641`, `5642`, `5643`, `5644`, `5645`, `5646`, `5647`, `5648`, `5649`, `5650`, `5651`, `5652`, `5653`, `5654`, `5655`, `5656`, `5657`, `5658`, `5659`, `5660`, `5661`, `5662`, `5663`, `5664`, `5665`, `5666`, `5667`, `5668`, `5669`, `5670`, `5671`, `5672`, `5673`, `5674`, `5675`, `5676`, `5677`, `5678`, `5679`, `5680`, `5681`, `5682`, `5683`, `5684`, `5685`, `5686`, `5687`, `5688`, `5689`, `5690`, `5691`, `5692`, `5693`, `5694`, `5695`, `5696`, `5697`, `5698`, `5699`, `5700`, `5701`, `5702`, `5703`, `5704`, `5705`, `5706`, `5707`, `5708`, `5709`, `5710`, `5711`, `5712`, `5713`, `5714`, `5715`, `5716`, `5717`, `5718`, `5719`, `5720`, `5721`, `5722`, `5723`, `5724`, `5725`, `5726`, `5727`, `5728`, `5729`, `5730`, `5731`, `5732`, `5733`, `5734`, `5735`, `5736`, `5737`, `5738`, `5739`, `5740`, `5741`, `5742`, `5743`, `5744`, `5745`, `5746`, `5747`, `5748`, `5749`, `5750`, `5751`, `5752`, `5753`, `5754`, `5755`, `5756`, `5757`, `5758`, `5759`, `5760`, `5763`, `5764`, `5765`, `5766`, `5767`, `5768`, `5769`, `5770`, `5771`, `5772`, `5773`, `5774`, `5775`, `5776`, `5777`, `5778`, `5779`, `5780`, `5781`, `5782`, `5783`, `5784`, `5785`, `5786`, `5787`, `5788`, `5789`, `5790`, `5791`, `5792`, `5793`, `5794`, `5795`, `5796`, `5797`, `5798`, `5799`, `5800`, `5801`, `5802`, `5803`, `5804`, `5805`, `5806`, `5807`, `5808`, `5809`, `5810`, `5811`, `5812`, `5813`, `5814`, `5815`, `5816`, `5817`, `5818`, `5819`, `5820`, `5821`, `5822`, `5823`, `5824`, `5825`, `5826`, `5827`, `5828`, `5829`, `5830`, `5831`, `5832`, `5833`, `5834`, `5835`, `5836`, `5837`, `5838`, `5839`, `5840`, `5841`, `5842`, `5843`, `5844`, `5845`, `5846`, `5847`, `5848`, `5849`, `5850`, `5851`, `5852`, `5853`, `5854`, `5855`, `5856`, `5857`, `5858`, `5859`, `5860`, `5861`, `5862`, `5863`, `5864`, `5865`, `5866`, `5867`, `5868`, `5869`, `5870`, `5871`, `5872`, `5873`, `5874`, `5875`, `5876`, `5877`, `5878`, `5879`, `5880`, `5881`, `5882`, `5883`, `5884`, `5885`, `5886`, `5887`, `5888`, `5889`, `5890`, `5891`, `5892`, `5893`, `5894`, `5895`, `5896`, `5897`, `5898`, `5899`, `5900`, `5901`, `5902`, `5903`, `5904`, `5905`, `5906`, `5907`, `5908`, `5909`, `5910`, `5911`, `5912`, `5913`, 
`5914`, `5915`, `5916`, `5917`, `5918`, `5919`, `5920`, `5921`, `5922`, `5923`, `5924`, `5925`, `5926`, `5927`, `5928`, `5929`, `5930`, `5931`, `5932`, `5933`, `5934`, `5935`, `5936`, `5937`, `5938`, `5939`, `5940`, `5941`, `5942`, `5943`, `5944`, `5945`, `5946`, `5947`, `5948`, `5949`, `5950`, `5951`, `5952`, `5953`, `5954`, `5955`, `5956`, `5957`, `5958`, `5959`, `5960`, `5961`, `5962`, `5963`, `5964`, `5965`, `5966`, `5967`, `5968`, `5969`, `5970`, `5971`, `5972`, `5973`, `5974`, `5975`, `5976`, `5977`, `5978`, `5979`, `5980`, `5981`, `5982`, `5983`, `5984`, `5985`, `5986`, `5987`, `5988`, `5989`, `5990`, `5991`, `5992`, `5993`, `5994`, `5995`, `5996`, `5997`, `5998`, `5999`, `6000`, `6001`, `6002`, `6003`, `6004`, `6005`, `6006`, `6007`, `6008`, `6009`, `6010`, `6011`, `6012`, `6013`, `6014`, `6015`, `6016`, `6017`, `6018`, `6019`, `6020`, `6021`, `6022`, `6023`, `6024`, `6025`, `6026`, `6027`, `6028`, `6029`, `6030`, `6031`, `6032`, `6033`, `6034`, `6035`, `6036`, `6037`, `6038`, `6039`, `6040`, `6041`, `6042`, `6043`, `6044`, `6045`, `6046`, `6047`, `6048`, `6049`, `6050`, `6051`, `6052`, `6053`, `6054`, `6055`, `6056`, `6057`, `6058`, `6059`, `6060`, `6061`, `6062`, `6063`, `6064`, `6065`, `6066`, `6067`, `6068`, `6069`, `6070`, `6071`, `6072`, `6073`, `6074`, `6075`, `6076`, `6077`, `6078`, `6079`, `6080`, `6081`, `6082`, `6083`, `6084`, `6085`, `6086`, `6087`, `6088`, `6089`, `6090`, `6091`, `6092`, `6093`, `6094`, `6095`, `6096`, `6097`, `6098`, `6099`, `6100`, `6101`, `6102`, `6103`, `6104`, `6105`, `6106`, `6107`, `6108`, `6109`, `6110`, `6111`, `6112`, `6113`, `6114`, `6115`, `6116`, `6117`, `6118`, `6119`, `6120`, `6121`, `6122`, `6123`, `6124`, `6125`, `6126`, `6127`, `6128`, `6129`, `6130`, `6131`, `6132`, `6133`, `6134`, `6135`, `6136`, `6137`, `6138`, `6139`, `6140`, `6141`, `6142`, `6143`, `6144`, `6145`, `6146`, `6147`, `6148`, `6149`, `6150`, `6151`, `6152`, `6153`, `6154`, `6155`, `6156`, `6157`, `6158`, `6159`, `6160`, `6161`, `6162`, `6163`, `6164`, `6165`, `6166`, `6167`, `6168`, `6169`, `6170`, `6171`, `6172`, `6173`, `6174`, `6175`, `6176`, `6177`, `6178`, `6179`, `6180`, `6181`, `6182`, `6183`, `6184`, `6185`, `6186`, `6187`, `6188`, `6189`, `6190`, `6191`, `6192`, `6193`, `6194`, `6195`, `6196`, `6197`, `6198`, `6199`, `6200`, `6201`, `6202`, `6203`, `6204`, `6205`, `6206`, `6207`, `6208`, `6209`, `6210`, `6211`, `6212`, `6213`, `6214`, `6215`, `6216`, `6217`, `6218`, `6219`, `6220`, `6221`, `6222`, `6223`, `6224`, `6225`, `6226`, `6227`, `6228`, `6229`, `6230`, `6231`, `6232`, `6233`, `6234`, `6235`, `6236`, `6237`, `6238`, `6239`, `6240`, `6241`, `6242`, `6243`, `6244`, `6245`, `6246`, `6247`, `6248`, `6249`, `6250`, `6251`, `6252`, `6253`, `6254`, `6255`, `6256`, `6257`, `6258`, `6259`, `6260`, `6261`, `6262`, `6263`, `6264`, `6265`, `6266`, `6267`, `6268`, `6269`, `6270`, `6271`, `6272`, `6273`, `6274`, `6275`, `6276`, `6277`, `6278`, `6279`, `6280`, `6281`, `6282`, `6283`, `6284`, `6285`, `6286`, `6287`, `6288`, `6289`, `6290`, `6291`, `6292`, `6293`, `6294`, `6295`, `6296`, `6297`, `6298`, `6299`, `6300`, `6301`, `6302`, `6303`, `6304`, `6305`, `6306`, `6307`, `6308`, `6309`, `6310`, `6311`, `6312`, `6313`, `6314`, `6315`, `6316`, `6317`, `6318`, `6319`, `6320`, `6321`, `6322`, `6323`, `6324`, `6325`, `6326`, `6327`, `6328`, `6329`, `6330`, `6331`, `6332`, `6333`, `6334`, `6335`, `6336`, `6337`, `6338`, `6339`, `6340`, `6341`, `6342`, `6343`, `6344`, `6345`, `6346`, `6347`, `6348`, `6349`, `6350`, `6351`, `6352`, `6353`, `6354`, `6355`, `6356`, `6357`, 
`6358`, `6359`, `6360`, `6361`, `6362`, `6363`, `6364`, `6365`, `6366`, `6367`, `6368`, `6369`, `6370`, `6371`, `6372`, `6373`, `6374`, `6375`, `6376`, `6377`, `6378`, `6379`, `6380`, `6381`, `6382`, `6383`, `6384`, `6385`, `6386`, `6387`, `6388`, `6389`, `6390`, `6391`, `6392`, `6393`, `6394`, `6395`, `6396`, `6397`, `6398`, `6399`, `6400`, `6401`, `6402`, `6403`, `6404`, `6405`, `6406`, `6407`, `6408`, `6409`, `6410`, `6411`, `6412`, `6413`, `6414`, `6415`, `6416`, `6417`, `6418`, `6419`, `6420`, `6421`, `6422`, `6423`, `6424`, `6425`, `6426`, `6427`, `6428`, `6429`, `6430`, `6431`, `6432`, `6433`, `6434`, `6435`, `6436`, `6437`, `6438`, `6439`, `6440`, `6441`, `6442`, `6443`, `6444`, `6445`, `6446`, `6447`, `6448`, `6449`, `6450`, `6451`, `6452`, `6453`, `6454`, `6455`, `6456`, `6457`, `6458`, `6459`, `6460`, `6461`, `6462`, `6463`, `6464`, `6465`, `6466`, `6467`, `6468`, `6469`, `6470`, `6471`, `6472`, `6473`, `6474`, `6475`, `6476`, `6477`, `6478`, `6479`, `6480`, `6481`, `6482`, `6483`, `6484`, `6485`, `6486`, `6487`, `6488`, `6489`, `6490`, `6491`, `6492`, `6493`, `6494`, `6495`, `6496`, `6497`, `6498`, `6499`, `6500`, `6501`, `6502`, `6503`, `6504`, `6505`, `6506`, `6507`, `6508`, `6509`, `6510`, `6511`, `6512`, `6513`, `6514`, `6515`, `6516`, `6517`, `6518`, `6519`, `6520`, `6521`, `6522`, `6523`, `6524`, `6525`, `6526`, `6527`, `6528`, `6529`, `6530`, `6531`, `6532`, `6533`, `6534`, `6535`, `6536`, `6537`, `6538`, `6539`, `6540`, `6541`, `6542`, `6543`, `6544`, `6545`, `6546`, `6547`, `6548`, `6549`, `6550`, `6551`, `6552`, `6553`, `6554`, `6555`, `6556`, `6557`, `6558`, `6559`, `6560`, `6561`, `6562`, `6563`, `6564`, `6565`, `6566`, `6567`, `6568`, `6569`, `6570`, `6571`, `6572`, `6573`, `6574`, `6575`, `6576`, `6577`, `6578`, `6579`, `6580`, `6581`, `6582`, `6583`, `6584`, `6585`, `6586`, `6587`, `6588`, `6589`, `6590`, `6591`, `6592`, `6593`, `6594`, `6595`, `6596`, `6597`, `6598`, `6599`, `6600`, `6601`, `6602`, `6603`, `6604`, `6605`, `6606`, `6607`, `6608`, `6609`, `6610`, `6611`, `6612`, `6613`, `6614`, `6615`, `6616`, `6617`, `6618`, `6619`, `6620`, `6621`, `6622`, `6623`, `6624`, `6625`, `6626`, `6627`, `6628`, `6629`, `6630`, `6631`, `6632`, `6633`, `6634`, `6635`, `6636`, `6637`, `6638`, `6639`, `6640`, `6641`, `6642`, `6643`, `6644`, `6645`, `6646`, `6647`, `6648`, `6649`, `6650`, `6651`, `6652`, `6653`, `6654`, `6655`, `6656`, `6657`, `6658`, `6659`, `6660`, `6661`, `6662`, `6663`, `6664`, `6665`, `6666`, `6667`, `6668`, `6669`, `6670`, `6671`, `6672`, `6673`, `6674`, `6675`, `6676`, `6677`, `6678`, `6679`, `6680`, `6681`, `6682`, `6683`, `6684`, `6685`, `6686`, `6687`, `6688`, `6689`, `6690`, `6691`, `6692`, `6693`, `6694`, `6695`, `6696`, `6697`, `6698`, `6699`, `6700`, `6701`, `6702`, `6703`, `6704`, `6705`, `6706`, `6707`, `6708`, `6709`, `6710`, `6711`, `6712`, `6713`, `6714`, `6715`, `6716`, `6717`, `6718`, `6719`, `6720`, `6721`, `6722`, `6723`, `6724`, `6725`, `6726`, `6727`, `6728`, `6729`, `6730`, `6731`, `6732`, `6733`, `6734`, `6735`, `6736`, `6737`, `6738`, `6739`, `6740`, `6741`, `6742`, `6743`, `6744`, `6745`, `6746`, `6747`, `6748`, `6749`, `6750`, `6751`, `6752`, `6753`, `6754`, `6755`, `6756`, `6757`, `6758`, `6759`, `6760`, `6761`, `6762`, `6763`, `6764`, `6765`, `6766`, `6767`, `6768`, `6769`, `6770`, `6771`, `6772`, `6773`, `6774`, `6775`, `6776`, `6777`, `6778`, `6779`, `6780`, `6781`, `6782`, `6783`, `6784`, `6785`, `6786`, `6787`, `6788`, `6789`, `6790`, `6791`, `6792`, `6793`, `6794`, `6795`, `6796`, `6797`, `6798`, `6799`, `6800`, `6801`, 
`6802`, `6803`, `6804`, `6805`, `6806`, `6807`, `6808`, `6809`, `6810`, `6811`, `6812`, `6813`, `6814`, `6815`, `6816`, `6817`, `6818`, `6819`, `6820`, `6821`, `6822`, `6823`, `6824`, `6825`, `6826`, `6827`, `6828`, `6829`, `6830`, `6831`, `6832`, `6833`, `6834`, `6835`, `6836`, `6837`, `6838`, `6839`, `6840`, `6841`, `6842`, `6843`, `6844`, `6845`, `6846`, `6847`, `6848`, `6849`, `6850`, `6851`, `6852`, `6853`, `6854`, `6855`, `6856`, `6857`, `6858`, `6859`, `6860`, `6861`, `6862`, `6863`, `6864`, `6865`, `6866`, `6867`, `6868`, `6869`, `6870`, `6871`, `6872`, `6873`, `6874`, `6875`, `6876`, `6877`, `6878`, `6879`, `6880`, `6881`, `6882`, `6883`, `6884`, `6885`, `6886`, `6887`, `6888`, `6889`, `6890`, `6891`, `6892`, `6893`, `6894`, `6895`, `6896`, `6897`, `6898`, `6899`, `6900`, `6901`, `6902`, `6903`, `6904`, `6905`, `6906`, `6907`, `6908`, `6909`, `6910`, `6911`, `6912`, `6913`, `6914`, `6915`, `6916`, `6917`, `6918`, `6919`, `6920`, `6921`, `6922`, `6923`, `6924`, `6925`, `6926`, `6927`, `6928`, `6929`, `6930`, `6931`, `6932`, `6933`, `6934`, `6935`, `6936`, `6937`, `6938`, `6939`, `6940`, `6941`, `6942`, `6943`, `6944`, `6945`, `6946`, `6947`, `6948`, `6949`, `6950`, `6951`, `6952`, `6953`, `6954`, `6955`, `6956`, `6957`, `6958`, `6959`, `6960`, `6961`, `6962`, `6963`, `6964`, `6965`, `6966`, `6967`, `6968`, `6969`, `6970`, `6971`, `6972`, `6973`, `6974`, `6975`, `6976`, `6977`, `6978`, `6979`, `6980`, `6981`, `6982`, `6983`, `6984`, `6985`, `6986`, `6987`, `6988`, `6989`, `6990`, `6991`, `6992`, `6993`, `6994`, `6995`, `6996`, `6997`, `6998`, `6999`, `7000`, `7001`, `7002`, `7003`, `7004`, `7005`, `7006`, `7007`, `7008`, `7009`, `7010`, `7011`, `7012`, `7013`, `7014`, `7015`, `7016`, `7017`, `7018`, `7019`, `7020`, `7021`, `7022`, `7023`, `7024`, `7025`, `7026`, `7027`, `7028`, `7029`, `7030`, `7031`, `7032`, `7033`, `7034`, `7035`, `7036`, `7037`, `7038`, `7039`, `7040`, `7041`, `7042`, `7043`, `7044`, `7045`, `7046`, `7047`, `7048`, `7049`, `7050`, `7051`, `7052`, `7053`, `7054`, `7055`, `7056`, `7057`, `7058`, `7059`, `7060`, `7061`, `7062`, `7063`, `7064`, `7065`, `7066`, `7067`, `7068`, `7069`, `7070`, `7071`, `7072`, `7073`, `7074`, `7075`, `7076`, `7077`, `7078`, `7079`, `7080`, `7081`, `7082`, `7083`, `7084`, `7085`, `7086`, `7087`, `7088`, `7089`, `7090`, `7091`, `7092`, `7093`, `7094`, `7095`, `7096`, `7097`, `7098`, `7099`, `7100`, `7101`, `7102`, `7103`, `7104`, `7105`, `7106`, `7107`, `7108`, `7109`, `7110`, `7111`, `7112`, `7113`, `7114`, `7115`, `7116`, `7117`, `7118`, `7119`, `7120`, `7121`, `7122`, `7123`, `7124`, `7125`, `7126`, `7127`, `7128`, `7129`, `7130`, `7131`, `7132`, `7133`, `7134`, `7135`, `7136`, `7137`, `7138`, `7139`, `7140`, `7141`, `7142`, `7143`, `7144`, `7145`, `7146`, `7147`, `7148`, `7149`, `7150`, `7151`, `7152`, `7153`, `7154`, `7155`, `7156`, `7157`, `7158`, `7159`, `7160`, `7161`, `7162`, `7163`, `7164`, `7165`, `7166`, `7167`, `7168`, `7169`, `7170`, `7171`, `7172`, `7173`, `7174`, `7175`, `7176`, `7177`, `7178`, `7179`, `7180`, `7181`, `7182`, `7183`, `7184`, `7185`, `7186`, `7187`, `7188`, `7189`, `7190`, `7191`, `7192`, `7193`, `7194`, `7195`, `7196`, `7197`, `7198`, `7199`, `7200`, `7201`, `7202`, `7203`, `7204`, `7205`, `7206`, `7207`, `7208`, `7209`, `7210`, `7211`, `7212`, `7213`, `7214`, `7215`, `7216`, `7217`, `7218`, `7219`, `7220`, `7221`, `7222`, `7223`, `7224`, `7225`, `7226`, `7227`, `7228`, `7229`, `7230`, `7231`, `7232`, `7233`, `7234`, `7235`, `7236`, `7237`, `7238`, `7239`, `7240`, `7241`, `7242`, `7243`, `7244`, `7245`, 
`7246`, `7247`, `7248`, `7249`, `7250`, `7251`, `7252`, `7253`, `7254`, `7255`, `7256`, `7257`, `7258`, `7259`, `7260`, `7261`, `7262`, `7263`, `7264`, `7265`, `7266`, `7267`, `7268`, `7269`, `7270`, `7271`, `7272`, `7273`, `7274`, `7275`, `7276`, `7277`, `7278`, `7279`, `7280`, `7281`, `7282`, `7283`, `7284`, `7285`, `7286`, `7287`, `7288`, `7289`, `7290`, `7291`, `7292`, `7293`, `7294`, `7295`, `7296`, `7297`, `7298`, `7299`, `7300`, `7301`, `7302`, `7303`, `7304`, `7305`, `7306`, `7307`, `7308`, `7309`, `7310`, `7311`, `7312`, `7313`, `7314`, `7315`, `7316`, `7317`, `7318`, `7319`, `7320`, `7321`, `7322`, `7323`, `7324`, `7325`, `7326`, `7327`, `7328`, `7329`, `7330`, `7331`, `7332`, `7333`, `7334`, `7335`, `7336`, `7337`, `7338`, `7339`, `7340`, `7341`, `7342`, `7343`, `7344`, `7345`, `7346`, `7347`, `7348`, `7349`, `7350`, `7351`, `7352`, `7353`, `7354`, `7355`, `7356`, `7357`, `7358`, `7359`, `7360`, `7361`, `7362`, `7363`, `7364`, `7365`, `7366`, `7367`, `7368`, `7369`, `7370`, `7371`, `7372`, `7373`, `7374`, `7375`, `7376`, `7377`, `7378`, `7379`, `7380`, `7381`, `7382`, `7383`, `7384`, `7385`, `7386`, `7387`, `7388`, `7389`, `7390`, `7391`, `7392`, `7393`, `7394`, `7395`, `7396`, `7397`, `7398`, `7399`, `7400`, `7401`, `7402`, `7403`, `7404`, `7405`, `7406`, `7407`, `7408`, `7409`, `7410`, `7411`, `7412`, `7413`, `7414`, `7415`, `7416`, `7417`, `7418`, `7419`, `7420`, `7421`, `7422`, `7423`, `7424`, `7425`, `7426`, `7427`, `7428`, `7429`, `7430`, `7431`, `7432`, `7433`, `7434`, `7435`, `7436`, `7437`, `7438`, `7439`, `7440`, `7441`, `7442`, `7443`, `7444`, `7445`, `7446`, `7447`, `7448`, `7449`, `7450`, `7451`, `7452`, `7453`, `7454`, `7455`, `7456`, `7457`, `7458`, `7459`, `7460`, `7461`, `7462`, `7463`, `7464`, `7465`, `7466`, `7467`, `7468`, `7469`, `7470`, `7471`, `7472`, `7473`, `7474`, `7475`, `7476`, `7477`, `7478`, `7479`, `7480`, `7481`, `7482`, `7483`, `7484`, `7485`, `7486`, `7487`, `7488`, `7489`, `7490`, `7491`, `7492`, `7493`, `7494`, `7495`, `7496`, `7497`, `7498`, `7499`, `7500`, `7501`, `7502`, `7503`, `7504`, `7505`, `7506`, `7507`, `7508`, `7509`, `7510`, `7511`, `7512`, `7513`, `7514`, `7515`, `7516`, `7517`, `7518`, `7519`, `7520`, `7521`, `7522`, `7523`, `7524`, `7525`, `7526`, `7527`, `7528`, `7529`, `7530`, `7531`, `7532`, `7533`, `7534`, `7535`, `7536`, `7537`, `7538`, `7539`, `7540`, `7541`, `7542`, `7543`, `7544`, `7545`, `7546`, `7547`, `7548`, `7549`, `7550`, `7551`, `7552`, `7553`, `7554`, `7555`, `7556`, `7557`, `7558`, `7559`, `7560`, `7561`, `7562`, `7563`, `7564`, `7565`, `7566`, `7567`, `7568`, `7569`, `7570`, `7571`, `7572`, `7573`, `7574`, `7575`, `7576`, `7577`, `7578`, `7579`, `7580`, `7581`, `7582`, `7583`, `7584`, `7585`, `7586`, `7587`, `7588`, `7589`, `7590`, `7591`, `7592`, `7593`, `7594`, `7595`, `7596`, `7597`, `7598`, `7599`, `7600`, `7601`, `7602`, `7603`, `7604`, `7605`, `7606`, `7607`, `7608`, `7609`, `7610`, `7611`, `7612`, `7613`, `7614`, `7615`, `7616`, `7617`, `7618`, `7619`, `7620`, `7621`, `7622`, `7623`, `7624`, `7625`, `7626`, `7627`, `7628`, `7629`, `7630`, `7631`, `7632`, `7633`, `7634`, `7635`, `7636`, `7637`, `7638`, `7639`, `7640`, `7641`, `7642`, `7643`, `7644`, `7645`, `7646`, `7647`, `7648`, `7649`, `7650`, `7651`, `7652`, `7653`, `7654`, `7655`, `7656`, `7657`, `7658`, `7659`, `7660`, `7661`, `7662`, `7663`, `7664`, `7665`, `7666`, `7667`, `7668`, `7669`, `7670`, `7671`, `7672`, `7673`, `7674`, `7675`, `7676`, `7677`, `7678`, `7679`, `7680`, `7681`, `7682`, `7683`, `7684`, `7685`, `7686`, `7687`, `7688`, `7689`, 
`7690`, `7691`, `7692`, `7693`, `7694`, `7695`, `7696`, `7697`, `7698`, `7699`, `7700`, `7701`, `7702`, `7703`, `7704`, `7705`, `7706`, `7707`, `7708`, `7709`, `7710`, `7711`, `7712`, `7713`, `7714`, `7715`, `7716`, `7717`, `7718`, `7719`, `7720`, `7721`, `7722`, `7723`, `7724`, `7725`, `7726`, `7727`, `7728`, `7729`, `7730`, `7731`, `7732`, `7733`, `7734`, `7735`, `7736`, `7737`, `7738`, `7739`, `7740`, `7741`, `7742`, `7743`, `7744`, `7745`, `7746`, `7747`, `7748`, `7749`, `7750`, `7751`, `7752`, `7753`, `7754`, `7755`, `7756`, `7757`, `7758`, `7759`, `7760`, `7761`, `7762`, `7763`, `7764`, `7765`, `7766`, `7767`, `7768`, `7769`, `7770`, `7771`, `7772`, `7773`, `7774`, `7775`, `7776`, `7777`, `7778`, `7779`, `7780`, `7781`, `7782`, `7783`, `7784`, `7785`, `7786`, `7787`, `7788`, `7789`, `7790`, `7791`, `7792`, `7793`, `7794`, `7795`, `7796`, `7797`, `7798`, `7799`, `7800`, `7801`, `7802`, `7803`, `7804`, `7805`, `7806`, `7807`, `7808`, `7809`, `7810`, `7811`, `7812`, `7813`, `7814`, `7815`, `7816`, `7817`, `7818`, `7819`, `7820`, `7821`, `7822`, `7823`, `7824`, `7825`, `7826`, `7827`, `7828`, `7829`, `7830`, `7831`, `7832`, `7833`, `7834`, `7835`, `7836`, `7837`, `7838`, `7839`, `7840`, `7841`, `7842`, `7843`, `7844`, `7845`, `7846`, `7847`, `7848`, `7849`, `7850`, `7851`, `7852`, `7853`, `7854`, `7855`, `7856`, `7857`, `7858`, `7859`, `7860`, `7861`, `7862`, `7863`, `7864`, `7865`, `7866`, `7867`, `7868`, `7869`, `7870`, `7871`, `7872`, `7873`, `7874`, `7875`, `7876`, `7877`, `7878`, `7879`, `7880`, `7881`, `7882`, `7883`, `7884`, `7885`, `7886`, `7887`, `7888`, `7889`, `7890`, `7891`, `7892`, `7893`, `7894`, `7895`, `7896`, `7897`, `7898`, `7899`, `7900`, `7901`, `7902`, `7903`, `7904`, `7905`, `7906`, `7907`, `7908`, `7909`, `7910`, `7911`, `7912`, `7913`, `7914`, `7915`, `7916`, `7917`, `7918`, `7919`, `7920`, `7921`, `7922`, `7923`, `7924`, `7925`, `7926`, `7927`, `7928`, `7929`, `7930`, `7931`, `7932`, `7933`, `7934`, `7935`, `7936`, `7937`, `7938`, `7939`, `7940`, `7941`, `7942`, `7943`, `7944`, `7945`, `7946`, `7947`, `7948`, `7949`, `7950`, `7951`, `7952`, `7953`, `7954`, `7955`, `7956`, `7957`, `7958`, `7959`, `7960`, `7961`, `7962`, `7963`, `7964`, `7965`, `7966`, `7967`, `7968`, `7969`, `7970`, `7971`, `7972`, `7973`, `7974`, `7975`, `7976`, `7977`, `7978`, `7979`, `7980`, `7981`, `7982`, `7983`, `7984`, `7985`, `7986`, `7987`, `7988`, `7989`, `7990`, `7991`, `7992`, `7993`, `7994`, `7995`, `7996`, `7997`, `7998`, `7999`, `8000`, `8001`, `8002`, `8003`, `8004`, `8005`, `8006`, `8007`, `8008`, `8009`, `8010`, `8011`, `8012`, `8013`, `8014`, `8015`, `8016`, `8017`, `8018`, `8019`, `8020`, `8021`, `8022`, `8023`, `8024`, `8025`, `8026`, `8027`, `8028`, `8029`, `8030`, `8031`, `8032`, `8033`, `8034`, `8035`, `8036`, `8037`, `8038`, `8039`, `8040`, `8041`, `8042`, `8043`, `8044`, `8045`, `8046`, `8047`, `8048`, `8049`, `8050`, `8051`, `8052`, `8053`, `8054`, `8055`, `8056`, `8057`, `8058`, `8059`, `8060`, `8061`, `8062`, `8063`, `8064`, `8065`, `8066`, `8067`, `8068`, `8069`, `8070`, `8071`, `8072`, `8073`, `8074`, `8075`, `8076`, `8077`, `8078`, `8079`, `8080`, `8081`, `8082`, `8083`, `8084`, `8085`, `8086`, `8087`, `8088`, `8089`, `8090`, `8091`, `8092`, `8093`, `8094`, `8095`, `8096`, `8097`, `8098`, `8099`, `8100`, `8101`, `8102`, `8103`, `8104`, `8105`, `8106`, `8107`, `8108`, `8109`, `8110`, `8111`, `8112`, `8113`, `8114`, `8115`, `8116`, `8117`, `8118`, `8119`, `8120`, `8121`, `8122`, `8123`, `8124`, `8125`, `8126`, `8127`, `8128`, `8129`, `8130`, `8131`, `8132`, `8133`, 
`8134`, `8135`, `8136`, `8137`, `8138`, `8139`, `8140`, `8141`, `8142`, `8143`, `8144`, `8145`, `8146`, `8147`, `8148`, `8149`, `8150`, `8151`, `8152`, `8153`, `8154`, `8155`, `8156`, `8157`, `8158`, `8159`, `8160`, `8161`, `8162`, `8163`, `8164`, `8165`, `8166`, `8167`, `8168`, `8169`, `8170`, `8171`, `8172`, `8173`, `8174`, `8175`, `8176`, `8177`, `8178`, `8179`, `8180`, `8181`, `8182`, `8183`, `8184`, `8185`, `8186`, `8187`, `8188`, `8189`, `8190`, `8191`, `8192`, `8193`, `8194`, `8195`, `8196`, `8197`, `8198`, `8199`, `8200`, `8201`, `8202`, `8203`, `8204`, `8205`, `8206`, `8207`, `8208`, `8209`, `8210`, `8211`, `8212`, `8213`, `8214`, `8215`, `8216`, `8217`, `8218`, `8219`, `8220`, `8221`, `8222`, `8223`, `8224`, `8225`, `8226`, `8227`, `8228`, `8229`, `8230`, `8231`, `8232`, `8233`, `8234`, `8235`, `8236`, `8237`, `8238`, `8239`, `8240`, `8241`, `8242`, `8243`, `8244`, `8245`, `8246`, `8247`, `8248`, `8249`, `8250`, `8251`, `8252`, `8253`, `8254`, `8255`, `8256`, `8257`, `8258`, `8259`, `8260`, `8261`, `8262`, `8263`, `8264`, `8265`, `8266`, `8267`, `8268`, `8269`, `8270`, `8271`, `8272`, `8273`, `8274`, `8275`, `8276`, `8277`, `8278`, `8279`, `8280`, `8281`, `8282`, `8283`, `8284`, `8285`, `8286`, `8287`, `8288`, `8289`, `8290`, `8291`, `8292`, `8293`, `8294`, `8295`, `8296`, `8297`, `8298`, `8299`, `8300`, `8301`, `8302`, `8303`, `8304`, `8305`, `8306`, `8307`, `8308`, `8309`, `8310`, `8311`, `8312`, `8313`, `8314`, `8315`, `8316`, `8317`, `8318`, `8319`, `8320`, `8321`, `8322`, `8323`, `8324`, `8325`, `8326`, `8327`, `8328`, `8329`, `8330`, `8331`, `8332`, `8333`, `8334`, `8335`, `8336`, `8337`, `8338`, `8339`, `8340`, `8341`, `8342`, `8343`, `8344`, `8345`, `8346`, `8347`, `8348`, `8349`, `8350`, `8351`, `8352`, `8353`, `8354`, `8355`, `8356`, `8357`, `8358`, `8359`, `8360`, `8361`, `8362`, `8363`, `8364`, `8365`, `8366`, `8367`, `8368`, `8369`, `8370`, `8371`, `8372`, `8373`, `8374`, `8375`, `8376`, `8377`, `8378`, `8379`, `8380`, `8381`, `8382`, `8383`, `8384`, `8385`, `8386`, `8387`, `8388`, `8389`, `8390`, `8391`, `8392`, `8393`, `8394`, `8395`, `8396`, `8397`, `8398`, `8399`, `8400`, `8401`, `8402`, `8403`, `8404`, `8405`, `8406`, `8407`, `8408`, `8409`, `8410`, `8411`, `8412`, `8413`, `8414`, `8415`, `8416`, `8417`, `8418`, `8419`, `8420`, `8421`, `8422`, `8423`, `8424`, `8425`, `8426`, `8427`, `8428`, `8429`, `8430`, `8431`, `8432`, `8433`, `8434`, `8435`, `8436`, `8437`, `8438`, `8439`, `8440`, `8441`, `8442`, `8443`, `8444`, `8445`, `8446`, `8447`, `8448`, `8449`, `8450`, `8451`, `8452`, `8453`, `8454`, `8455`, `8456`, `8457`, `8458`, `8459`, `8460`, `8461`, `8462`, `8463`, `8464`, `8465`, `8466`, `8467`, `8468`, `8469`, `8470`, `8471`, `8472`, `8473`, `8474`, `8475`, `8476`, `8477`, `8478`, `8479`, `8480`, `8481`, `8482`, `8483`, `8484`, `8485`, `8486`, `8487`, `8488`, `8489`, `8490`, `8491`, `8492`, `8493`, `8494`, `8495`, `8496`, `8497`, `8498`, `8499`, `8500`, `8501`, `8502`, `8503`, `8504`, `8505`, `8506`, `8507`, `8508`, `8509`, `8510`, `8511`, `8512`, `8513`, `8514`, `8515`, `8516`, `8517`, `8518`, `8519`, `8520`, `8521`, `8522`, `8523`, `8524`, `8525`, `8526`, `8527`, `8528`, `8529`, `8530`, `8531`, `8532`, `8533`, `8534`, `8535`, `8536`, `8537`, `8538`, `8539`, `8540`, `8541`, `8542`, `8543`, `8544`, `8545`, `8546`, `8547`, `8548`, `8549`, `8550`, `8551`, `8552`, `8553`, `8554`, `8555`, `8556`, `8557`, `8558`, `8559`, `8560`, `8561`, `8562`, `8563`, `8564`, `8565`, `8566`, `8567`, `8568`, `8569`, `8570`, `8571`, `8572`, `8573`, `8574`, `8575`, `8576`, `8577`, 
`8578`, `8579`, `8580`, `8581`, `8582`, `8583`, `8584`, `8585`, `8586`, `8587`, `8588`, `8589`, `8590`, `8591`, `8592`, `8593`, `8594`, `8595`, `8596`, `8597`, `8598`, `8599`, `8600`, `8601`, `8602`, `8603`, `8604`, `8605`, `8606`, `8607`, `8608`, `8609`, `8610`, `8611`, `8612`, `8613`, `8614`, `8615`, `8616`, `8617`, `8618`, `8619`, `8620`, `8621`, `8622`, `8623`, `8624`, `8625`, `8626`, `8627`, `8628`, `8629`, `8630`, `8631`, `8632`, `8633`, `8634`, `8635`, `8636`, `8637`, `8638`, `8639`, `8640`, `8641`, `8642`, `8643`, `8644`, `8645`, `8646`, `8647`, `8648`, `8649`, `8650`, `8651`, `8652`, `8653`, `8654`, `8655`, `8656`, `8657`, `8658`, `8659`, `8660`, `8661`, `8662`, `8663`, `8664`, `8665`, `8666`, `8667`, `8668`, `8669`, `8670`, `8671`, `8672`, `8673`, `8674`, `8675`, `8676`, `8677`, `8678`, `8679`, `8680`, `8681`, `8682`, `8683`, `8684`, `8685`, `8686`, `8687`, `8688`, `8689`, `8690`, `8691`, `8692`, `8693`, `8694`, `8695`, `8696`, `8697`, `8698`, `8699`, `8700`, `8701`, `8702`, `8703`, `8704`, `8705`, `8706`, `8707`, `8708`, `8709`, `8710`, `8711`, `8712`, `8713`, `8714`, `8715`, `8716`, `8717`, `8718`, `8719`, `8720`, `8721`, `8722`, `8723`, `8724`, `8725`, `8726`, `8727`, `8728`, `8729`, `8730`, `8731`, `8732`, `8733`, `8734`, `8735`, `8736`, `8737`, `8738`, `8739`, `8740`, `8741`, `8742`, `8743`, `8744`, `8745`, `8746`, `8747`, `8748`, `8749`, `8750`, `8751`, `8752`, `8753`, `8754`, `8755`, `8756`, `8757`, `8758`, `8759`, `8760`, `8761`, `8762`, `8763`, `8764`, `8765`, `8766`, `8767`, `8768`, `8769`, `8770`, `8771`, `8772`, `8773`, `8774`, `8775`, `8776`, `8777`, `8778`, `8779`, `8780`, `8781`, `8782`, `8783`, `8784`, `8785`, `8786`, `8787`, `8788`, `8789`, `8790`, `8791`, `8792`, `8793`, `8794`, `8795`, `8796`, `8797`, `8798`, `8799`, `8800`, `8801`, `8802`, `8803`, `8804`, `8805`, `8806`, `8807`, `8808`, `8809`, `8810`, `8811`, `8812`, `8813`, `8814`, `8815`, `8816`, `8817`, `8818`, `8819`, `8820`, `8821`, `8822`, `8823`, `8824`, `8825`, `8826`, `8827`, `8828`, `8829`, `8830`, `8831`, `8832`, `8833`, `8834`, `8835`, `8836`, `8837`, `8838`, `8839`, `8840`, `8841`, `8842`, `8843`, `8844`, `8845`, `8846`, `8847`, `8848`, `8849`, `8850`, `8851`, `8852`, `8853`, `8854`, `8855`, `8856`, `8857`, `8858`, `8859`, `8860`, `8861`, `8862`, `8863`, `8864`, `8865`, `8866`, `8867`, `8868`, `8869`, `8870`, `8871`, `8872`, `8873`, `8874`, `8875`, `8876`, `8877`, `8878`, `8879`, `8880`, `8881`, `8882`, `8883`, `8884`, `8885`, `8886`, `8887`, `8888`, `8889`, `8890`, `8891`, `8892`, `8893`, `8894`, `8895`, `8896`, `8897`, `8898`, `8899`, `8900`, `8901`, `8902`, `8903`, `8904`, `8905`, `8906`, `8907`, `8908`, `8909`, `8910`, `8911`, `8912`, `8913`, `8914`, `8915`, `8916`, `8917`, `8918`, `8919`, `8920`, `8921`, `8922`, `8923`, `8924`, `8925`, `8926`, `8927`, `8928`, `8929`, `8930`, `8931`, `8932`, `8933`, `8934`, `8935`, `8936`, `8937`, `8938`, `8939`, `8940`, `8941`, `8942`, `8943`, `8944`, `8945`, `8946`, `8947`, `8948`, `8949`, `8950`, `8951`, `8952`, `8953`, `8954`, `8955`, `8956`, `8957`, `8958`, `8959`, `8960`, `8961`, `8962`, `8963`, `8964`, `8965`, `8966`, `8967`, `8968`, `8969`, `8970`, `8971`, `8972`, `8973`, `8974`, `8975`, `8976`, `8977`, `8978`, `8979`, `8980`, `8981`, `8982`, `8983`, `8984`, `8985`, `8986`, `8987`, `8988`, `8989`, `8990`, `8991`, `8992`, `8993`, `8994`, `8995`, `8996`, `8997`, `8998`, `8999`, `9000`, `9001`, `9002`, `9003`, `9004`, `9005`, `9006`, `9007`, `9008`, `9009`, `9010`, `9011`, `9012`, `9013`, `9014`, `9015`, `9016`, `9017`, `9018`, `9019`, `9020`, `9021`, 
`9022`, `9023`, `9024`, `9025`, `9026`, `9027`, `9028`, `9029`, `9030`, `9031`, `9032`, `9033`, `9034`, `9035`, `9036`, `9037`, `9038`, `9039`, `9040`, `9041`, `9042`, `9043`, `9044`, `9045`, `9046`, `9047`, `9048`, `9049`, `9050`, `9051`, `9052`, `9053`, `9054`, `9055`, `9056`, `9057`, `9058`, `9059`, `9060`, `9061`, `9062`, `9063`, `9064`, `9065`, `9066`, `9067`, `9068`, `9069`, `9070`, `9071`, `9072`, `9073`, `9074`, `9075`, `9076`, `9077`, `9078`, `9079`, `9080`, `9081`, `9082`, `9083`, `9084`, `9085`, `9086`, `9087`, `9088`, `9089`, `9090`, `9091`, `9092`, `9093`, `9094`, `9095`, `9096`, `9097`, `9098`, `9099`, `9100`, `9101`, `9102`, `9103`, `9104`, `9105`, `9106`, `9107`, `9108`, `9109`, `9110`, `9111`, `9112`, `9113`, `9114`, `9115`, `9116`, `9117`, `9118`, `9119`, `9120`, `9121`, `9122`, `9123`, `9124`, `9125`, `9126`, `9127`, `9128`, `9129`, `9130`, `9131`, `9132`, `9133`, `9134`, `9135`, `9136`, `9137`, `9138`, `9139`, `9140`, `9141`, `9142`, `9143`, `9144`, `9145`, `9146`, `9147`, `9148`, `9149`, `9150`, `9151`, `9152`, `9153`, `9154`, `9155`, `9156`, `9157`, `9158`, `9159`, `9160`, `9161`, `9162`, `9163`, `9164`, `9165`, `9166`, `9167`, `9168`, `9169`, `9170`, `9171`, `9172`, `9173`, `9174`, `9175`, `9176`, `9177`, `9178`, `9179`, `9180`, `9181`, `9182`, `9183`, `9184`, `9185`, `9186`, `9187`, `9188`, `9189`, `9190`, `9191`, `9192`, `9193`, `9194`, `9195`, `9196`, `9197`, `9198`, `9199`, `9200`, `9201`, `9202`, `9203`, `9204`, `9205`, `9206`, `9207`, `9208`, `9209`, `9210`, `9211`, `9212`, `9213`, `9214`, `9215`, `9216`, `9217`, `9218`, `9219`, `9220`, `9221`, `9222`, `9223`, `9224`, `9225`, `9226`, `9227`, `9228`, `9229`, `9230`, `9231`, `9232`, `9233`, `9234`, `9235`, `9236`, `9237`, `9238`, `9239`, `9240`, `9241`, `9242`, `9243`, `9244`, `9245`, `9246`, `9247`, `9248`, `9249`, `9250`, `9251`, `9252`, `9253`, `9254`, `9255`, `9256`, `9257`, `9258`, `9259`, `9260`, `9261`, `9262`, `9263`, `9264`, `9265`, `9266`, `9267`, `9268`, `9269`, `9270`, `9271`, `9272`, `9273`, `9274`, `9275`, `9276`, `9277`, `9278`, `9279`, `9280`, `9281`, `9282`, `9283`, `9284`, `9285`, `9286`, `9287`, `9288`, `9289`, `9290`, `9291`, `9292`, `9293`, `9294`, `9295`, `9296`, `9297`, `9298`, `9299`, `9300`, `9301`, `9302`, `9303`, `9304`, `9305`, `9306`, `9307`, `9308`, `9309`, `9310`, `9311`, `9312`, `9313`, `9314`, `9315`, `9316`, `9317`, `9318`, `9319`, `9320`, `9321`, `9322`, `9323`, `9324`, `9325`, `9326`, `9327`, `9328`, `9329`, `9330`, `9331`, `9332`, `9333`, `9334`, `9335`, `9336`, `9337`, `9338`, `9339`, `9340`, `9341`, `9342`, `9343`, `9344`, `9345`, `9346`, `9347`, `9348`, `9349`, `9350`, `9351`, `9352`, `9353`, `9354`, `9355`, `9356`, `9357`, `9358`, `9359`, `9360`, `9361`, `9362`, `9363`, `9364`, `9365`, `9366`, `9367`, `9368`, `9369`, `9370`, `9371`, `9372`, `9373`, `9374`, `9375`, `9376`, `9377`, `9378`, `9379`, `9380`, `9381`, `9382`, `9383`, `9384`, `9385`, `9386`, `9387`, `9388`, `9389`, `9390`, `9391`, `9392`, `9393`, `9394`, `9395`, `9396`, `9397`, `9398`, `9399`, `9400`, `9401`, `9402`, `9403`, `9404`, `9405`, `9406`, `9407`, `9408`, `9409`, `9410`, `9411`, `9412`, `9413`, `9414`, `9415`, `9416`, `9417`, `9418`, `9419`, `9420`, `9421`, `9422`, `9423`, `9424`, `9425`, `9426`, `9427`, `9428`, `9429`, `9430`, `9431`, `9432`, `9433`, `9434`, `9435`, `9436`, `9437`, `9438`, `9439`, `9440`, `9441`, `9442`, `9443`, `9444`, `9445`, `9446`, `9447`, `9448`, `9449`, `9450`, `9451`, `9452`, `9453`, `9454`, `9455`, `9456`, `9457`, `9458`, `9459`, `9460`, `9461`, `9462`, `9463`, `9464`, `9465`, 
`9466`, `9467`, `9468`, `9469`, `9470`, `9471`, `9472`, `9473`, `9474`, `9475`, `9476`, `9477`, `9478`, `9479`, `9480`, `9481`, `9482`, `9483`, `9484`, `9485`, `9486`, `9487`, `9488`, `9489`, `9490`, `9491`, `9492`, `9493`, `9494`, `9495`, `9496`, `9497`, `9498`, `9499`, `9500`, `9501`, `9502`, `9503`, `9504`, `9505`, `9506`, `9507`, `9508`, `9509`, `9510`, `9511`, `9512`, `9513`, `9514`, `9515`, `9516`, `9517`, `9518`, `9519`, `9520`, `9521`, `9522`, `9523`, `9524`, `9525`, `9526`, `9527`, `9528`, `9529`, `9530`, `9531`, `9532`, `9533`, `9534`, `9535`, `9536`, `9537`, `9538`, `9539`, `9540`, `9541`, `9542`, `9543`, `9544`, `9545`, `9546`, `9547`, `9548`, `9549`, `9550`, `9551`, `9552`, `9553`, `9554`, `9555`, `9556`, `9557`, `9558`, `9559`, `9560`, `9561`, `9562`, `9563`, `9564`, `9565`, `9566`, `9567`, `9568`, `9569`, `9570`, `9571`, `9572`, `9573`, `9574`, `9575`, `9576`, `9577`, `9578`, `9579`, `9580`, `9581`, `9582`, `9583`, `9584`, `9585`, `9586`, `9587`, `9588`, `9589`, `9590`, `9591`, `9592`, `9593`, `9594`, `9595`, `9596`, `9597`, `9598`, `9599`, `9600`, `9601`, `9602`, `9603`, `9604`, `9605`, `9606`, `9607`, `9608`, `9609`, `9610`, `9611`, `9612`, `9613`, `9614`, `9615`, `9616`, `9617`, `9618`, `9619`, `9620`, `9621`, `9622`, `9623`, `9624`, `9625`, `9626`, `9627`, `9628`, `9629`, `9630`, `9631`, `9632`, `9633`, `9634`, `9635`, `9636`, `9637`, `9638`, `9639`, `9640`, `9641`, `9642`, `9643`, `9644`, `9645`, `9646`, `9647`, `9648`, `9649`, `9650`, `9651`, `9652`, `9653`, `9654`, `9655`, `9656`, `9657`, `9658`, `9659`, `9660`, `9661`, `9662`, `9663`, `9664`, `9665`, `9666`, `9667`, `9668`, `9669`, `9670`, `9671`, `9672`, `9673`, `9674`, `9675`, `9676`, `9677`, `9678`, `9679`, `9680`, `9681`, `9682`, `9683`, `9684`, `9685`, `9686`, `9687`, `9688`, `9689`, `9690`, `9691`, `9692`, `9693`, `9694`, `9695`, `9696`, `9697`, `9698`, `9699`, `9700`, `9701`, `9702`, `9703`, `9704`, `9705`, `9706`, `9707`, `9708`, `9709`, `9710`, `9711`, `9712`, `9713`, `9714`, `9715`, `9716`, `9717`, `9718`, `9719`, `9720`, `9721`, `9722`, `9723`, `9724`, `9725`, `9726`, `9727`, `9728`, `9729`, `9730`, `9731`, `9732`, `9733`, `9734`, `9735`, `9736`, `9737`, `9738`, `9739`, `9740`, `9741`, `9742`, `9743`, `9744`, `9745`, `9746`, `9747`, `9748`, `9749`, `9750`, `9751`, `9752`, `9753`, `9754`, `9755`, `9756`, `9757`, `9758`, `9759`, `9760`, `9761`, `9762`, `9763`, `9764`, `9765`, `9766`, `9767`, `9768`, `9769`, `9770`, `9771`, `9772`, `9773`, `9774`, `9775`, `9776`, `9777`, `9778`, `9779`, `9780`, `9781`, `9782`, `9783`, `9784`, `9785`, `9786`, `9787`, `9788`, `9789`, `9790`, `9791`, `9792`, `9793`, `9794`, `9795`, `9796`, `9797`, `9798`, `9799`, `9800`, `9801`, `9802`, `9803`, `9804`, `9805`, `9806`, `9807`, `9808`, `9809`, `9810`, `9811`, `9812`, `9813`, `9814`, `9815`, `9816`, `9817`, `9818`, `9819`, `9820`, `9821`, `9822`, `9823`, `9824`, `9825`, `9826`, `9827`, `9828`, `9829`, `9830`, `9831`, `9832`, `9833`, `9834`, `9835`, `9836`, `9837`, `9838`, `9839`, `9840`, `9841`, `9842`, `9843`, `9844`, `9845`, `9846`, `9847`, `9848`, `9849`, `9850`, `9851`, `9852`, `9853`, `9854`, `9855`, `9856`, `9857`, `9858`, `9859`, `9860`, `9861`, `9862`, `9863`, `9864`, `9865`, `9866`, `9867`, `9868`, `9869`, `9870`, `9871`, `9872`, `9873`, `9874`, `9875`, `9876`, `9877`, `9878`, `9879`, `9880`, `9881`, `9882`, `9883`, `9884`, `9885`, `9886`, `9887`, `9888`, `9889`, `9890`, `9891`, `9892`, `9893`, `9894`, `9895`, `9896`, `9897`, `9898`, `9899`, `9900`, `9901`, `9902`, `9903`, `9904`, `9905`, `9906`, `9907`, `9908`, `9909`, 
`9910`, `9911`, `9912`, `9913`, `9914`, `9915`, `9916`, `9917`, `9918`, `9919`, `9920`, `9921`, `9922`, `9923`, `9924`, `9925`, `9926`, `9927`, `9928`, `9929`, `9930`, `9931`, `9932`, `9933`, `9934`, `9935`, `9936`, `9937`, `9938`, `9939`, `9940`, `9941`, `9942`, `9943`, `9944`, `9945`, `9946`, `9947`, `9948`, `9949`, `9950`, `9951`, `9952`, `9953`, `9954`, `9955`, `9956`, `9957`, `9958`, `9959`, `9960`, `9961`, `9962`, `9963`, `9964`, `9965`, `9966`, `9967`, `9968`, `9971`, `9972`, `9973`, `9974`, `9975`, `9976`, `9977`, `9978`, `9979`, `9980`, `9981`, `9982`, `9983`, `9984`, `9985`, `9986`, `9987`, `9988`, `9989`, `9990`, `9991`, `9992`, `9993`, `9994`, `9995`, `9996`, `9997`, `9998`, `9999`, `10000`, `10001`, `10002`, `10003`, `10004`, `10005`, `10006`, `10007`, `10008`, `10009`, `10010`, `10011`, `10012`, `10013`, `10014`, `10015`, `10016`, `10017`, `10018`, `10019`, `10020`, `10021`, `10022`, `10023`, `10024`, `10025`, `10026`, `10027`, `10028`, `10029`, `10030`, `10031`, `10032`, `10033`, `10034`, `10035`, `10036`, `10037`, `10038`, `10039`, `10040`, `10041`, `10042`, `10043`, `10044`, `10045`, `10046`, `10047`, `10048`, `10049`, `10050`, `10051`, `10052`, `10053`, `10054`, `10055`, `10056`, `10057`, `10058`, `10059`, `10060`, `10061`, `10062`, `10063`, `10064`, `10065`, `10066`, `10067`, `10068`, `10069`, `10070`, `10071`, `10072`, `10073`, `10074`, `10075`, `10076`, `10077`, `10078`, `10079`, `10080`, `10081`, `10082`, `10083`, `10084`, `10085`, `10086`, `10087`, `10088`, `10089`, `10090`, `10091`, `10092`, `10093`, `10094`, `10095`, `10096`, `10097`, `10098`, `10099`, `10100`, `10101`, `10102`, `10103`, `10104`, `10105`, `10106`, `10107`, `10108`, `10109`, `10110`, `10111`, `10112`, `10113`, `10114`, `10115`, `10116`, `10117`, `10118`, `10119`, `10120`, `10121`, `10122`, `10123`, `10124`, `10125`, `10126`, `10127`, `10128`, `10129`, `10130`, `10131`, `10132`, `10133`, `10134`, `10135`, `10136`, `10137`, `10138`, `10139`, `10140`, `10141`, `10142`, `10143`, `10144`, `10145`, `10146`, `10147`, `10148`, `10149`, `10150`, `10151`, `10152`, `10153`, `10154`, `10155`, `10156`, `10157`, `10158`, `10159`, `10160`, `10161`, `10162`, `10163`, `10164`, `10165`, `10166`, `10167`, `10168`, `10169`, `10170`, `10171`, `10172`, `10173`, `10174`, `10175`, `10176`, `10177`, `10178`, `10179`, `10180`, `10181`, `10182`, `10183`, `10184`, `10185`, `10186`, `10187`, `10188`, `10189`, `10190`, `10191`, `10192`, `10193`, `10194`, `10195`, `10196`, `10197`, `10198`, `10199`, `10200`, `10201`, `10202`, `10203`, `10204`, `10205`, `10206`, `10207`, `10208`, `10209`, `10210`, `10211`, `10212`, `10213`, `10214`, `10215`, `10216`, `10217`, `10218`, `10219`, `10220`, `10221`, `10222`, `10223`, `10224`, `10225`, `10226`, `10227`, `10228`, `10229`, `10230`, `10231`, `10232`, `10233`, `10234`, `10235`, `10236`, `10237`, `10238`, `10239`, `10240`, `10241`, `10242`, `10243`, `10244`, `10245`, `10246`, `10247`, `10248`, `10249`, `10250`, `10251`, `10252`, `10253`, `10254`, `10255`, `10256`, `10257`, `10258`, `10259`, `10260`, `10261`, `10262`, `10263`, `10264`, `10265`, `10266`, `10267`, `10268`, `10269`, `10270`, `10271`, `10272`, `10273`, `10274`, `10275`, `10276`, `10277`, `10278`, `10279`, `10280`, `10281`, `10282`, `10283`, `10284`, `10285`, `10286`, `10287`, `10288`, `10289`, `10290`, `10291`, `10292`, `10293`, `10294`, `10295`, `10296`, `10297`, `10298`, `10299`, `10300`, `10301`, `10302`, `10303`, `10304`, `10305`, `10306`, `10307`, `10308`, `10309`, `10310`, `10311`, `10312`, `10313`, `10314`, `10315`, 
`10316`, `10317`, `10318`, `10319`, `10320`, `10321`, `10322`, `10323`, `10324`, `10325`, `10326`, `10327`, `10328`, `10329`, `10330`, `10331`, `10332`, `10333`, `10334`, `10335`, `10336`, `10337`, `10338`, `10339`, `10340`, `10341`, `10342`, `10343`, `10344`, `10345`, `10346`, `10347`, `10348`, `10349`, `10350`, `10351`, `10352`, `10353`, `10354`, `10355`, `10356`, `10357`, `10358`, `10359`, `10360`, `10361`, `10362`, `10363`, `10364`, `10365`, `10366`, `10367`, `10368`, `10369`, `10370`, `10371`, `10372`, `10373`, `10374`, `10375`, `10376`, `10377`, `10378`, `10379`, `10380`, `10381`, `10382`, `10383`, `10384`, `10385`, `10386`, `10387`, `10388`, `10389`, `10390`, `10391`, `10392`, `10393`, `10394`, `10395`, `10396`, `10397`, `10398`, `10399`, `10400`, `10401`, `10402`, `10403`, `10404`, `10405`, `10406`, `10407`, `10408`, `10409`, `10410`, `10411`, `10412`, `10413`, `10414`, `10415`, `10416`, `10417`, `10418`, `10419`, `10420`, `10421`, `10422`, `10423`, `10424`, `10425`, `10426`, `10427`, `10428`, `10429`, `10430`, `10431`, `10432`, `10433`, `10434`, `10435`, `10436`, `10437`, `10438`, `10439`, `10440`, `10441`, `10442`, `10443`, `10444`, `10445`, `10446`, `10447`, `10448`, `10449`, `10450`, `10451`, `10452`, `10453`, `10454`, `10455`, `10456`, `10457`, `10458`, `10459`, `10460`, `10461`, `10462`, `10463`, `10464`, `10465`, `10466`, `10467`, `10468`, `10469`, `10470`, `10471`, `10472`, `10473`, `10474`, `10475`, `10476`, `10477`, `10478`, `10479`, `10480`, `10481`, `10482`, `10483`, `10484`, `10485`, `10486`, `10487`, `10488`, `10489`, `10490`, `10491`, `10492`, `10493`, `10494`, `10495`, `10496`, `10497`, `10498`, `10499`, `10500`, `10501`, `10502`, `10503`, `10504`, `10505`, `10506`, `10507`, `10508`, `10509`, `10510`, `10511`, `10512`, `10513`, `10514`, `10515`, `10516`, `10517`, `10518`, `10519`, `10520`, `10521`, `10522`, `10523`, `10524`, `10525`, `10526`, `10527`, `10528`, `10529`, `10530`, `10531`, `10532`, `10533`, `10534`, `10535`, `10536`, `10537`, `10538`, `10539`, `10540`, `10541`, `10542`, `10543`, `10544`, `10545`, `10546`, `10547`, `10548`, `10549`, `10550`, `10551`, `10552`, `10553`, `10554`, `10555`, `10556`, `10557`, `10558`, `10559`, `10560`, `10561`, `10562`, `10563`, `10564`, `10565`, `10566`, `10567`, `10568`, `10569`, `10570`, `10571`, `10572`, `10573`, `10574`, `10575`, `10576`, `10577`, `10578`, `10579`, `10580`, `10581`, `10582`, `10583`, `10584`, `10585`, `10586`, `10587`, `10588`, `10589`, `10590`, `10591`, `10592`, `10593`, `10594`, `10595`, `10596`, `10597`, `10598`, `10599`, `10600`, `10601`, `10602`, `10603`, `10604`, `10605`, `10606`, `10607`, `10608`, `10609`, `10610`, `10611`, `10612`, `10613`, `10614`, `10615`, `10616`, `10617`, `10618`, `10619`, `10620`, `10621`, `10622`, `10623`, `10624`, `10625`, `10626`, `10627`, `10628`, `10629`, `10630`, `10631`, `10632`, `10633`, `10634`, `10635`, `10636`, `10637`, `10638`, `10639`, `10640`, `10641`, `10642`, `10643`, `10644`, `10645`, `10646`, `10647`, `10648`, `10649`, `10650`, `10651`, `10652`, `10653`, `10654`, `10655`, `10656`, `10657`, `10658`, `10659`, `10660`, `10661`, `10662`, `10663`, `10664`, `10665`, `10666`, `10667`, `10668`, `10669`, `10670`, `10671`, `10672`, `10673`, `10674`, `10675`, `10676`, `10677`, `10678`, `10679`, `10680`, `10681`, `10682`, `10683`, `10684`, `10685`, `10686`, `10687`, `10688`, `10689`, `10690`, `10691`, `10692`, `10693`, `10694`, `10695`, `10696`, `10697`, `10698`, `10699`, `10700`, `10701`, `10702`, `10703`, `10704`, `10705`, `10706`, `10707`, `10708`, `10709`, `10710`, 
`10711`, `10712`, `10713`, `10714`, `10715`, `10716`, `10717`, `10718`, `10719`, `10720`, `10721`, `10722`, `10723`, `10724`, `10725`, `10726`, `10727`, `10728`, `10729`, `10730`, `10731`, `10732`, `10733`, `10734`, `10735`, `10736`, `10737`, `10738`, `10739`, `10740`, `10741`, `10742`, `10743`, `10744`, `10745`, `10746`, `10747`, `10748`, `10749`, `10750`, `10751`, `10752`, `10753`, `10754`, `10755`, `10756`, `10757`, `10758`, `10759`, `10760`, `10761`, `10762`, `10763`, `10764`, `10765`, `10766`, `10767`, `10768`, `10769`, `10770`, `10771`, `10772`, `10773`, `10774`, `10775`, `10776`, `10777`, `10778`, `10779`, `10780`, `10781`, `10782`, `10783`, `10784`, `10785`, `10786`, `10787`, `10788`, `10789`, `10790`, `10791`, `10792`, `10793`, `10794`, `10795`, `10796`, `10797`, `10798`, `10799`, `10800`, `10801`, `10802`, `10803`, `10804`, `10805`, `10806`, `10807`, `10808`, `10809`, `10810`, `10811`, `10812`, `10813`, `10814`, `10815`, `10816`, `10817`, `10818`, `10819`, `10820`, `10821`, `10822`, `10823`, `10824`, `10825`, `10826`, `10827`, `10828`, `10829`, `10830`, `10831`, `10832`, `10833`, `10834`, `10835`, `10836`, `10837`, `10838`, `10839`, `10840`, `10841`, `10842`, `10843`, `10844`, `10845`, `10846`, `10847`, `10848`, `10849`, `10850`, `10851`, `10852`, `10853`, `10854`, `10855`, `10856`, `10857`, `10858`, `10859`, `10860`, `10861`, `10862`, `10863`, `10864`, `10865`, `10866`, `10867`, `10868`, `10869`, `10870`, `10871`, `10872`, `10873`, `10874`, `10875`, `10876`, `10877`, `10878`, `10879`, `10880`, `10881`, `10882`, `10883`, `10884`, `10885`, `10886`, `10887`, `10888`, `10889`, `10890`, `10891`, `10892`, `10893`, `10894`, `10895`, `10896`, `10897`, `10898`, `10899`, `10900`, `10901`, `10902`, `10903`, `10904`, `10905`, `10906`, `10907`, `10908`, `10909`, `10910`, `10911`, `10912`, `10913`, `10914`, `10915`, `10916`, `10917`, `10918`, `10919`, `10920`, `10921`, `10922`, `10923`, `10924`, `10925`, `10926`, `10927`, `10928`, `10929`, `10930`, `10931`, `10932`, `10933`, `10934`, `10935`, `10936`, `10937`, `10938`, `10939`, `10940`, `10941`, `10942`, `10943`, `10944`, `10945`, `10946`, `10947`, `10948`, `10949`, `10950`, `10951`, `10952`, `10953`, `10954`, `10955`, `10956`, `10957`, `10958`, `10959`, `10960`, `10961`, `10962`, `10963`, `10964`, `10965`, `10966`, `10967`, `10968`, `10969`, `10970`, `10971`, `10972`, `10973`, `10974`, `10975`, `10976`, `10977`, `10978`, `10979`, `10980`, `10981`, `10982`, `10983`, `10984`, `10985`, `10986`, `10987`, `10988`, `10989`, `10990`, `10991`, `10992`, `10993`, `10994`, `10995`, `10996`, `10997`, `10998`, `10999`, `11000`, `11001`, `11002`, `11003`, `11004`, `11005`, `11006`, `11007`, `11008`, `11009`, `11010`, `11011`, `11012`, `11013`, `11014`, `11015`, `11016`, `11017`, `11018`, `11019`, `11020`, `11021`, `11022`, `11023`, `11024`, `11025`, `11026`, `11027`, `11028`, `11029`, `11030`, `11031`, `11032`, `11033`, `11034`, `11035`, `11036`, `11037`, `11038`, `11039`, `11040`, `11041`, `11042`, `11043`, `11044`, `11045`, `11046`, `11047`, `11048`, `11049`, `11050`, `11051`, `11052`, `11053`, `11054`, `11055`, `11056`, `11057`, `11058`, `11059`, `11060`, `11061`, `11062`, `11063`, `11064`, `11065`, `11066`, `11067`, `11068`, `11069`, `11070`, `11071`, `11072`, `11073`, `11074`, `11075`, `11076`, `11077`, `11078`, `11079`, `11080`, `11081`, `11082`, `11083`, `11084`, `11085`, `11086`, `11087`, `11088`, `11089`, `11090`, `11091`, `11092`, `11093`, `11094`, `11095`, `11096`, `11097`, `11098`, `11099`, `11100`, `11101`, `11102`, `11103`, `11104`, `11105`, 
`11106`, `11107`, `11108`, `11109`, `11110`, `11111`, `11112`, `11113`, `11114`, `11115`, `11116`, `11117`, `11118`, `11119`, `11120`, `11121`, `11122`, `11123`, `11124`, `11125`, `11126`, `11127`, `11128`, `11129`, `11130`, `11131`, `11132`, `11133`, `11134`, `11135`, `11136`, `11137`, `11138`, `11139`, `11140`, `11141`, `11142`, `11143`, `11144`, `11145`, `11146`, `11147`, `11148`, `11149`, `11150`, `11151`, `11152`, `11153`, `11154`, `11155`, `11156`, `11157`, `11158`, `11159`, `11160`, `11161`, `11162`, `11163`, `11164`, `11165`, `11166`, `11167`, `11168`, `11169`, `11170`, `11171`, `11172`, `11173`, `11174`, `11175`, `11176`, `11177`, `11178`, `11179`, `11180`, `11181`, `11182`, `11183`, `11184`, `11185`, `11186`, `11187`, `11188`, `11189`, `11190`, `11191`, `11192`, `11193`, `11194`, `11195`, `11196`, `11197`, `11198`, `11199`, `11200`, `11201`, `11202`, `11203`, `11204`, `11205`, `11206`, `11207`, `11208`, `11209`, `11210`, `11211`, `11212`, `11213`, `11214`, `11215`, `11216`, `11217`, `11218`, `11219`, `11220`, `11221`, `11222`, `11223`, `11224`, `11225`, `11226`, `11227`, `11228`, `11229`, `11230`, `11231`, `11232`, `11233`, `11234`, `11235`, `11236`, `11237`, `11238`, `11239`, `11240`, `11241`, `11242`, `11243`, `11244`, `11245`, `11246`, `11247`, `11248`, `11249`, `11250`, `11251`, `11252`, `11253`, `11254`, `11255`, `11256`, `11257`, `11258`, `11259`, `11260`, `11261`, `11262`, `11263`, `11264`, `11265`, `11266`, `11267`, `11268`, `11269`, `11270`, `11271`, `11272`, `11273`, `11274`, `11275`, `11276`, `11277`, `11278`, `11279`, `11280`, `11281`, `11282`, `11283`, `11284`, `11285`, `11286`, `11287`, `11288`, `11289`, `11290`, `11291`, `11292`, `11293`, `11294`, `11295`, `11296`, `11297`, `11298`, `11299`, `11300`, `11301`, `11302`, `11303`, `11304`, `11305`, `11306`, `11307`, `11308`, `11309`, `11310`, `11311`, `11312`, `11313`, `11314`, `11315`, `11316`, `11317`, `11318`, `11319`, `11320`, `11321`, `11322`, `11323`, `11324`, `11325`, `11326`, `11327`, `11328`, `11329`, `11330`, `11331`, `11332`, `11333`, `11334`, `11335`, `11336`, `11337`, `11338`, `11339`, `11340`, `11341`, `11342`, `11343`, `11344`, `11345`, `11346`, `11347`, `11348`, `11349`, `11350`, `11351`, `11352`, `11353`, `11354`, `11355`, `11356`, `11357`, `11358`, `11359`, `11360`, `11361`, `11362`, `11363`, `11364`, `11365`, `11366`, `11367`, `11368`, `11369`, `11370`, `11371`, `11372`, `11373`, `11374`, `11375`, `11376`, `11377`, `11378`, `11379`, `11380`, `11381`, `11382`, `11383`, `11384`, `11385`, `11386`, `11387`, `11388`, `11389`, `11390`, `11391`, `11392`, `11393`, `11394`, `11395`, `11396`, `11397`, `11398`, `11399`, `11400`, `11401`, `11402`, `11403`, `11404`, `11405`, `11406`, `11407`, `11408`, `11409`, `11410`, `11411`, `11412`, `11413`, `11414`, `11415`, `11416`, `11417`, `11418`, `11419`, `11420`, `11421`, `11422`, `11423`, `11424`, `11425`, `11426`, `11427`, `11428`, `11429`, `11430`, `11431`, `11432`, `11433`, `11434`, `11435`, `11436`, `11437`, `11438`, `11439`, `11440`, `11441`, `11442`, `11443`, `11444`, `11445`, `11446`, `11447`, `11448`, `11449`, `11450`, `11451`, `11452`, `11453`, `11454`, `11455`, `11456`, `11457`, `11458`, `11459`, `11460`, `11461`, `11462`, `11463`, `11464`, `11465`, `11466`, `11467`, `11468`, `11469`, `11470`, `11471`, `11472`, `11473`, `11474`, `11475`, `11476`, `11477`, `11478`, `11479`, `11480`, `11481`, `11482`, `11483`, `11484`, `11485`, `11486`, `11487`, `11488`, `11489`, `11490`, `11491`, `11492`, `11493`, `11494`, `11495`, `11496`, `11497`, `11498`, `11499`, `11500`, 
`11501`, `11502`, `11503`, `11504`, `11505`, `11506`, `11507`, `11508`, `11509`, `11510`, `11511`, `11512`, `11513`, `11514`, `11515`, `11516`, `11517`, `11518`, `11519`, `11520`, `11521`, `11522`, `11523`, `11524`, `11525`, `11526`, `11527`, `11528`, `11529`, `11530`, `11531`, `11532`, `11533`, `11534`, `11535`, `11536`, `11537`, `11538`, `11539`, `11540`, `11541`, `11542`, `11543`, `11544`, `11545`, `11546`, `11547`, `11548`, `11549`, `11550`, `11551`, `11552`, `11553`, `11554`, `11555`, `11556`, `11557`, `11558`, `11559`, `11560`, `11561`, `11562`, `11563`, `11564`, `11565`, `11566`, `11567`, `11568`, `11569`, `11570`, `11571`, `11572`, `11573`, `11574`, `11575`, `11576`, `11577`, `11578`, `11579`, `11580`, `11581`, `11582`, `11583`, `11584`, `11585`, `11586`, `11587`, `11588`, `11589`, `11590`, `11591`, `11592`, `11593`, `11594`, `11595`, `11596`, `11597`, `11598`, `11599`, `11600`, `11601`, `11602`, `11603`, `11604`, `11605`, `11606`, `11607`, `11608`, `11609`, `11610`, `11611`, `11612`, `11613`, `11614`, `11615`, `11616`, `11617`, `11618`, `11619`, `11620`, `11621`, `11622`, `11623`, `11624`, `11625`, `11626`, `11627`, `11628`, `11629`, `11630`, `11631`, `11632`, `11633`, `11634`, `11635`, `11636`, `11637`, `11638`, `11639`, `11640`, `11641`, `11642`, `11643`, `11644`, `11645`, `11646`, `11647`, `11648`, `11649`, `11650`, `11651`, `11652`, `11653`, `11654`, `11655`, `11656`, `11657`, `11658`, `11659`, `11660`, `11661`, `11662`, `11663`, `11664`, `11665`, `11666`, `11667`, `11668`, `11669`, `11670`, `11671`, `11672`, `11673`, `11674`, `11675`, `11676`, `11677`, `11678`, `11679`, `11680`, `11681`, `11682`, `11683`, `11684`, `11685`, `11686`, `11687`, `11688`, `11689`, `11690`, `11691`, `11692`, `11693`, `11694`, `11695`, `11696`, `11697`, `11698`, `11699`, `11700`, `11701`, `11702`, `11703`, `11704`, `11705`, `11706`, `11707`, `11708`, `11709`, `11710`, `11711`, `11712`, `11713`, `11714`, `11715`, `11716`, `11717`, `11718`, `11719`, `11720`, `11721`, `11722`, `11723`, `11724`, `11725`, `11726`, `11727`, `11728`, `11729`, `11730`, `11731`, `11732`, `11733`, `11734`, `11735`, `11736`, `11737`, `11738`, `11739`, `11740`, `11741`, `11742`, `11743`, `11744`, `11745`, `11746`, `11747`, `11748`, `11749`, `11750`, `11751`, `11752`, `11753`, `11754`, `11755`, `11756`, `11757`, `11758`, `11759`, `11760`, `11761`, `11762`, `11763`, `11764`, `11765`, `11766`, `11767`, `11768`, `11769`, `11770`, `11771`, `11772`, `11773`, `11774`, `11775`, `11776`, `11777`, `11778`, `11779`, `11780`, `11781`, `11782`, `11783`, `11784`, `11785`, `11786`, `11787`, `11788`, `11789`, `11790`, `11791`, `11792`, `11793`, `11794`, `11795`, `11796`, `11797`, `11798`, `11799`, `11800`, `11801`, `11802`, `11803`, `11804`, `11805`, `11806`, `11807`, `11808`, `11809`, `11810`, `11811`, `11812`, `11813`, `11814`, `11815`, `11816`, `11817`, `11818`, `11819`, `11820`, `11821`, `11822`, `11823`, `11824`, `11825`, `11826`, `11827`, `11828`, `11829`, `11830`, `11831`, `11832`, `11833`, `11834`, `11835`, `11836`, `11837`, `11838`, `11839`, `11840`, `11841`, `11842`, `11843`, `11844`, `11845`, `11846`, `11847`, `11848`, `11849`, `11850`, `11851`, `11852`, `11853`, `11854`, `11855`, `11856`, `11857`, `11858`, `11859`, `11860`, `11861`, `11862`, `11863`, `11864`, `11865`, `11866`, `11867`, `11868`, `11869`, `11870`, `11871`, `11872`, `11873`, `11874`, `11875`, `11876`, `11877`, `11878`, `11879`, `11880`, `11881`, `11882`, `11883`, `11884`, `11885`, `11886`, `11887`, `11888`, `11889`, `11890`, `11891`, `11892`, `11893`, `11894`, `11895`, 
`11896`, `11897`, `11898`, `11899`, `11900`, `11901`, `11902`, `11903`, `11904`, `11905`, `11906`, `11907`, `11908`, `11909`, `11910`, `11911`, `11912`, `11913`, `11914`, `11915`, `11916`, `11917`, `11918`, `11919`, `11920`, `11921`, `11922`, `11923`, `11924`, `11925`, `11926`, `11927`, `11928`, `11929`, `11930`, `11931`, `11932`, `11933`, `11934`, `11935`, `11936`, `11937`, `11938`, `11939`, `11940`, `11941`, `11942`, `11943`, `11944`, `11945`, `11946`, `11947`, `11948`, `11949`, `11950`, `11951`, `11952`, `11953`, `11954`, `11955`, `11956`, `11957`, `11958`, `11959`, `11960`, `11961`, `11962`, `11963`, `11964`, `11965`, `11966`, `11967`, `11968`, `11969`, `11970`, `11971`, `11972`, `11973`, `11974`, `11975`, `11976`, `11977`, `11978`, `11979`, `11980`, `11981`, `11982`, `11983`, `11984`, `11985`, `11986`, `11987`, `11988`, `11989`, `11990`, `11991`, `11992`, `11993`, `11994`, `11995`, `11996`, `11997`, `11998`, `11999`, `12000`, `12001`, `12002`, `12003`, `12004`, `12005`, `12006`, `12007`, `12008`, `12009`, `12010`, `12011`, `12012`, `12013`, `12014`, `12015`, `12016`, `12017`, `12018`, `12019`, `12020`, `12021`, `12022`, `12023`, `12024`, `12025`, `12026`, `12027`, `12028`, `12029`, `12030`, `12031`, `12032`, `12033`, `12034`, `12035`, `12036`, `12037`, `12038`, `12039`, `12040`, `12041`, `12042`, `12043`, `12044`, `12045`, `12046`, `12047`, `12048`, `12049`, `12050`, `12051`, `12052`, `12053`, `12054`, `12055`, `12056`, `12057`, `12058`, `12059`, `12060`, `12061`, `12062`, `12063`, `12064`, `12065`, `12066`, `12067`, `12068`, `12069`, `12070`, `12071`, `12072`, `12073`, `12074`, `12075`, `12076`, `12077`, `12078`, `12079`, `12080`, `12081`, `12082`, `12083`, `12084`, `12085`, `12086`, `12087`, `12088`, `12089`, `12090`, `12091`, `12092`, `12093`, `12094`, `12095`, `12096`, `12097`, `12098`, `12099`, `12100`, `12101`, `12102`, `12103`, `12104`, `12105`, `12106`, `12107`, `12108`, `12109`, `12110`, `12111`, `12112`, `12113`, `12114`, `12115`, `12116`, `12117`, `12118`, `12119`, `12120`, `12121`, `12122`, `12123`, `12124`, `12125`, `12126`, `12127`, `12128`, `12129`, `12130`, `12131`, `12132`, `12133`, `12134`, `12135`, `12136`, `12137`, `12138`, `12139`, `12140`, `12141`, `12142`, `12143`, `12144`, `12145`, `12146`, `12147`, `12148`, `12149`, `12150`, `12151`, `12152`, `12153`, `12154`, `12155`, `12156`, `12157`, `12158`, `12159`, `12160`, `12161`, `12162`, `12163`, `12164`, `12165`, `12166`, `12167`, `12168`, `12169`, `12170`, `12171`, `12172`, `12173`, `12174`, `12175`, `12176`, `12177`, `12178`, `12179`, `12180`, `12181`, `12182`, `12183`, `12184`, `12185`, `12186`, `12187`, `12188`, `12189`, `12190`, `12191`, `12192`, `12193`, `12194`, `12195`, `12196`, `12197`, `12198`, `12199`, `12200`, `12201`, `12202`, `12203`, `12204`, `12205`, `12206`, `12207`, `12208`, `12209`, `12210`, `12211`, `12212`, `12213`, `12214`, `12215`, `12216`, `12217`, `12218`, `12219`, `12220`, `12221`, `12222`, `12223`, `12224`, `12225`, `12226`, `12227`, `12228`, `12229`, `12230`, `12231`, `12232`, `12233`, `12234`, `12235`, `12236`, `12237`, `12238`, `12239`, `12240`, `12241`, `12242`, `12243`, `12244`, `12245`, `12246`, `12247`, `12248`, `12249`, `12250`, `12251`, `12252`, `12253`, `12254`, `12255`, `12256`, `12257`, `12258`, `12259`, `12260`, `12261`, `12262`, `12263`, `12264`, `12265`, `12266`, `12267`, `12268`, `12269`, `12270`, `12271`, `12272`, `12273`, `12274`, `12275`, `12276`, `12277`, `12278`, `12279`, `12280`, `12281`, `12282`, `12283`, `12284`, `12285`, `12286`, `12287`, `12288`, `12289`, `12290`, 
`12291`, `12292`, `12293`, `12294`, `12295`, `12296`, `12297`, `12298`, `12299`, `12300`, `12301`, `12302`, `12303`, `12304`, `12305`, `12306`, `12307`, `12308`, `12309`, `12310`, `12311`, `12312`, `12313`, `12314`, `12315`, `12316`, `12317`, `12318`, `12319`, `12320`, `12321`, `12322`, `12323`, `12324`, `12325`, `12326`, `12327`, `12328`, `12329`, `12330`, `12331`, `12332`, `12333`, `12334`, `12335`, `12336`, `12337`, `12338`, `12339`, `12340`, `12341`, `12342`, `12343`, `12344`, `12345`, `12346`, `12347`, `12348`, `12349`, `12350`, `12351`, `12352`, `12353`, `12354`, `12355`, `12356`, `12357`, `12358`, `12359`, `12360`, `12361`, `12362`, `12363`, `12364`, `12365`, `12366`, `12367`, `12368`, `12369`, `12370`, `12371`, `12372`, `12373`, `12374`, `12375`, `12376`, `12377`, `12378`, `12379`, `12380`, `12381`, `12382`, `12383`, `12384`, `12385`, `12386`, `12387`, `12388`, `12389`, `12390`, `12391`, `12392`, `12393`, `12394`, `12395`, `12396`, `12397`, `12398`, `12399`, `12400`, `12401`, `12402`, `12403`, `12404`, `12405`, `12406`, `12407`, `12408`, `12409`, `12410`, `12411`, `12412`, `12413`, `12414`, `12415`, `12416`, `12417`, `12418`, `12419`, `12420`, `12421`, `12422`, `12423`, `12424`, `12425`, `12426`, `12427`, `12428`, `12429`, `12430`, `12431`, `12432`, `12433`, `12434`, `12435`, `12436`, `12437`, `12438`, `12439`, `12440`, `12441`, `12442`, `12443`, `12444`, `12445`, `12446`, `12447`, `12448`, `12449`, `12450`, `12451`, `12452`, `12453`, `12454`, `12455`, `12456`, `12457`, `12458`, `12459`, `12460`, `12461`, `12462`, `12463`, `12464`, `12465`, `12466`, `12467`, `12468`, `12469`, `12470`, `12471`, `12472`, `12473`, `12474`, `12475`, `12476`, `12477`, `12478`, `12479`, `12480`, `12481`, `12482`, `12483`, `12484`, `12485`, `12486`, `12487`, `12488`, `12489`, `12490`, `12491`, `12492`, `12493`, `12494`, `12495`, `12496`, `12497`, `12498`, `12499`, `12500`, `12501`, `12502`, `12503`, `12504`, `12505`, `12506`, `12507`, `12508`, `12509`, `12510`, `12511`, `12512`, `12513`, `12514`, `12515`, `12516`, `12517`, `12518`, `12519`, `12520`, `12521`, `12522`, `12523`, `12524`, `12525`, `12526`, `12527`, `12528`, `12529`, `12530`, `12531`, `12532`, `12533`, `12534`, `12535`, `12536`, `12537`, `12538`, `12539`, `12540`, `12541`, `12542`, `12543`, `12544`, `12545`, `12546`, `12547`, `12548`, `12549`, `12550`, `12551`, `12552`, `12553`, `12554`, `12555`, `12556`, `12557`, `12558`, `12559`, `12560`, `12561`, `12562`, `12563`, `12564`, `12565`, `12566`, `12567`, `12568`, `12569`, `12570`, `12571`, `12572`, `12573`, `12574`, `12575`, `12576`, `12577`, `12578`, `12579`, `12580`, `12581`, `12582`, `12583`, `12584`, `12585`, `12586`, `12587`, `12588`, `12589`, `12590`, `12591`, `12592`, `12593`, `12594`, `12595`, `12596`, `12597`, `12598`, `12599`, `12600`, `12601`, `12602`, `12603`, `12604`, `12605`, `12606`, `12607`, `12608`, `12609`, `12610`, `12611`, `12612`, `12613`, `12614`, `12615`, `12616`, `12617`, `12618`, `12619`, `12620`, `12621`, `12622`, `12623`, `12624`, `12625`, `12626`, `12627`, `12628`, `12629`, `12630`, `12631`, `12632`, `12633`, `12634`, `12635`, `12636`, `12637`, `12638`, `12639`, `12640`, `12641`, `12642`, `12643`, `12644`, `12645`, `12646`, `12647`, `12648`, `12649`, `12650`, `12651`, `12652`, `12653`, `12654`, `12655`, `12656`, `12657`, `12658`, `12659`, `12660`, `12661`, `12662`, `12663`, `12664`, `12665`, `12666`, `12667`, `12668`, `12669`, `12670`, `12671`, `12672`, `12673`, `12674`, `12675`, `12676`, `12677`, `12678`, `12679`, `12680`, `12681`, `12682`, `12683`, `12684`, `12685`, 
`12686`, `12687`, `12688`, `12689`, `12690`, `12691`, `12692`, `12693`, `12694`, `12695`, `12696`, `12697`, `12698`, `12699`, `12700`, `12701`, `12702`, `12703`, `12704`, `12705`, `12706`, `12707`, `12708`, `12709`, `12710`, `12711`, `12712`, `12713`, `12714`, `12715`, `12716`, `12717`, `12718`, `12719`, `12720`, `12721`, `12722`, `12723`, `12724`, `12725`, `12726`, `12727`, `12728`, `12729`, `12730`, `12731`, `12732`, `12733`, `12734`, `12735`, `12736`, `12737`, `12738`, `12739`, `12740`, `12741`, `12742`, `12743`, `12744`, `12745`, `12746`, `12747`, `12748`, `12749`, `12750`, `12751`, `12752`, `12753`, `12754`, `12755`, `12756`, `12757`, `12758`, `12759`, `12760`, `12761`, `12762`, `12763`, `12764`, `12765`, `12766`, `12767`, `12768`, `12769`, `12770`, `12771`, `12772`, `12773`, `12774`, `12775`, `12776`, `12777`, `12778`, `12779`, `12780`, `12781`, `12782`, `12783`, `12784`, `12785`, `12786`, `12787`, `12788`, `12789`, `12790`, `12791`, `12792`, `12793`, `12794`, `12795`, `12796`, `12797`, `12798`, `12799`, `12800`, `12801`, `12802`, `12803`, `12804`, `12805`, `12806`, `12807`, `12808`, `12809`, `12810`, `12811`, `12812`, `12813`, `12814`, `12815`, `12816`, `12817`, `12818`, `12819`, `12820`, `12821`, `12822`, `12823`, `12824`, `12825`, `12826`, `12827`, `12828`, `12829`, `12830`, `12831`, `12832`, `12833`, `12834`, `12835`, `12836`, `12837`, `12838`, `12839`, `12840`, `12841`, `12842`, `12843`, `12844`, `12845`, `12846`, `12847`, `12848`, `12849`, `12850`, `12851`, `12852`, `12853`, `12854`, `12855`, `12856`, `12857`, `12858`, `12859`, `12860`, `12861`, `12862`, `12863`, `12864`, `12865`, `12866`, `12867`, `12868`, `12869`, `12870`, `12871`, `12872`, `12873`, `12874`, `12875`, `12876`, `12877`, `12878`, `12879`, `12880`, `12881`, `12882`, `12883`, `12884`, `12885`, `12886`, `12887`, `12888`, `12889`, `12890`, `12891`, `12892`, `12893`, `12894`, `12895`, `12896`, `12897`, `12898`, `12899`, `12900`, `12901`, `12902`, `12903`, `12904`, `12905`, `12906`, `12907`, `12908`, `12909`, `12910`, `12911`, `12912`, `12913`, `12914`, `12915`, `12916`, `12917`, `12918`, `12919`, `12920`, `12921`, `12922`, `12923`, `12924`, `12925`, `12926`, `12927`, `12928`, `12929`, `12930`, `12931`, `12932`, `12933`, `12934`, `12935`, `12936`, `12937`, `12938`, `12939`, `12940`, `12941`, `12942`, `12943`, `12944`, `12945`, `12946`, `12947`, `12948`, `12949`, `12950`, `12951`, `12952`, `12953`, `12954`, `12955`, `12956`, `12957`, `12958`, `12959`, `12960`, `12961`, `12962`, `12963`, `12964`, `12965`, `12966`, `12967`, `12968`, `12969`, `12970`, `12971`, `12972`, `12973`, `12974`, `12975`, `12976`, `12977`, `12978`, `12979`, `12980`, `12981`, `12982`, `12983`, `12984`, `12985`, `12986`, `12987`, `12988`, `12989`, `12990`, `12991`, `12992`, `12993`, `12994`, `12995`, `12996`, `12997`, `12998`, `12999`, `13000`, `13001`, `13002`, `13003`, `13004`, `13005`, `13006`, `13007`, `13008`, `13009`, `13010`, `13011`, `13012`, `13013`, `13014`, `13015`, `13016`, `13017`, `13018`, `13019`, `13020`, `13021`, `13022`, `13023`, `13024`, `13025`, `13026`, `13027`, `13028`, `13029`, `13030`, `13031`, `13032`, `13033`, `13034`, `13035`, `13036`, `13037`, `13038`, `13039`, `13040`, `13041`, `13042`, `13043`, `13044`, `13045`, `13046`, `13047`, `13048`, `13049`, `13050`, `13051`, `13052`, `13053`, `13054`, `13055`, `13056`, `13057`, `13058`, `13059`, `13060`, `13061`, `13062`, `13063`, `13064`, `13065`, `13066`, `13067`, `13068`, `13069`, `13070`, `13071`, `13072`, `13073`, `13074`, `13075`, `13076`, `13077`, `13078`, `13079`, `13080`, 
`13081`, `13082`, `13083`, `13084`, `13085`, `13086`, `13087`, `13088`, `13089`, `13090`, `13091`, `13092`, `13093`, `13094`, `13095`, `13096`, `13097`, `13098`, `13099`, `13100`, `13101`, `13102`, `13103`, `13104`, `13105`, `13106`, `13107`, `13108`, `13109`, `13110`, `13111`, `13112`, `13113`, `13114`, `13115`, `13116`, `13117`, `13118`, `13119`, `13120`, `13121`, `13122`, `13123`, `13124`, `13125`, `13126`, `13127`, `13128`, `13129`, `13130`, `13131`, `13132`, `13133`, `13134`, `13135`, `13136`, `13137`, `13138`, `13139`, `13140`, `13141`, `13142`, `13143`, `13144`, `13145`, `13146`, `13147`, `13148`, `13149`, `13150`, `13151`, `13152`, `13153`, `13154`, `13155`, `13156`, `13157`, `13158`, `13159`, `13160`, `13161`, `13162`, `13163`, `13164`, `13165`, `13166`, `13167`, `13168`, `13169`, `13170`, `13171`, `13172`, `13173`, `13174`, `13175`, `13176`, `13177`, `13178`, `13179`, `13180`, `13181`, `13182`, `13183`, `13184`, `13185`, `13186`, `13187`, `13188`, `13189`, `13190`, `13191`, `13192`, `13193`, `13194`, `13195`, `13196`, `13197`, `13198`, `13199`, `13200`, `13201`, `13202`, `13203`, `13204`, `13205`, `13206`, `13207`, `13208`, `13209`, `13210`, `13211`, `13212`, `13213`, `13214`, `13215`, `13216`, `13217`, `13218`, `13219`, `13220`, `13221`, `13222`, `13223`, `13224`, `13225`, `13226`, `13227`, `13228`, `13229`, `13230`, `13231`, `13232`, `13233`, `13234`, `13235`, `13236`, `13237`, `13238`, `13239`, `13240`, `13241`, `13242`, `13243`, `13244`, `13245`, `13246`, `13247`, `13248`, `13249`, `13250`, `13251`, `13252`, `13253`, `13254`, `13255`, `13256`, `13257`, `13258`, `13259`, `13260`, `13261`, `13262`, `13263`, `13264`, `13265`, `13266`, `13267`, `13268`, `13269`, `13270`, `13271`, `13272`, `13273`, `13274`, `13275`, `13276`, `13277`, `13278`, `13279`, `13280`, `13281`, `13282`, `13283`, `13284`, `13285`, `13286`, `13287`, `13288`, `13289`, `13290`, `13291`, `13292`, `13293`, `13294`, `13295`, `13296`, `13297`, `13298`, `13299`, `13300`, `13301`, `13302`, `13303`, `13304`, `13305`, `13306`, `13307`, `13308`, `13309`, `13310`, `13311`, `13312`, `13313`, `13314`, `13315`, `13316`, `13317`, `13318`, `13319`, `13320`, `13321`, `13322`, `13323`, `13324`, `13325`, `13326`, `13327`, `13328`, `13329`, `13330`, `13331`, `13332`, `13333`, `13334`, `13335`, `13336`, `13337`, `13338`, `13339`, `13340`, `13341`, `13342`, `13343`, `13344`, `13345`, `13346`, `13347`, `13348`, `13349`, `13350`, `13351`, `13352`, `13353`, `13354`, `13355`, `13356`, `13357`, `13358`, `13359`, `13360`, `13361`, `13362`, `13363`, `13364`, `13365`, `13366`, `13367`, `13368`, `13369`, `13370`, `13371`, `13372`, `13373`, `13374`, `13375`, `13376`, `13377`, `13378`, `13379`, `13380`, `13381`, `13382`, `13383`, `13384`, `13385`, `13386`, `13387`, `13388`, `13389`, `13390`, `13391`, `13392`, `13393`, `13394`, `13395`, `13396`, `13397`, `13398`, `13399`, `13400`, `13401`, `13402`, `13403`, `13404`, `13405`, `13406`, `13407`, `13408`, `13409`, `13410`, `13411`, `13412`, `13413`, `13414`, `13415`, `13416`, `13417`, `13418`, `13419`, `13420`, `13421`, `13422`, `13423`, `13424`, `13425`, `13426`, `13427`, `13428`, `13429`, `13430`, `13431`, `13432`, `13433`, `13434`, `13435`, `13436`, `13437`, `13438`, `13439`, `13440`, `13441`, `13442`, `13443`, `13444`, `13445`, `13446`, `13447`, `13448`, `13449`, `13450`, `13451`, `13452`, `13453`, `13454`, `13455`, `13456`, `13457`, `13458`, `13459`, `13460`, `13461`, `13462`, `13463`, `13464`, `13465`, `13466`, `13467`, `13468`, `13469`, `13470`, `13471`, `13472`, `13473`, `13474`, `13475`, 
`13476`, `13477`, `13478`, `13479`, `13480`, `13481`, `13482`, `13483`, `13484`, `13485`, `13486`, `13487`, `13488`, `13489`, `13490`, `13491`, `13492`, `13493`, `13494`, `13495`, `13496`, `13497`, `13498`, `13499`, `13500`, `13501`, `13502`, `13503`, `13504`, `13505`, `13506`, `13507`, `13508`, `13509`, `13510`, `13511`, `13512`, `13513`, `13514`, `13515`, `13516`, `13517`, `13518`, `13519`, `13520`, `13521`, `13522`, `13523`, `13524`, `13525`, `13526`, `13527`, `13528`, `13529`, `13530`, `13531`, `13532`, `13533`, `13534`, `13535`, `13536`, `13537`, `13538`, `13539`, `13540`, `13541`, `13542`, `13543`, `13544`, `13545`, `13546`, `13547`, `13548`, `13549`, `13550`, `13551`, `13552`, `13553`, `13554`, `13555`, `13556`, `13557`, `13558`, `13559`, `13560`, `13561`, `13562`, `13563`, `13564`, `13565`, `13566`, `13567`, `13568`, `13569`, `13570`, `13571`, `13572`, `13573`, `13574`, `13575`, `13576`, `13577`, `13578`, `13579`, `13580`, `13581`, `13582`, `13583`, `13584`, `13585`, `13586`, `13587`, `13588`, `13589`, `13590`, `13591`, `13592`, `13593`, `13594`, `13595`, `13596`, `13597`, `13598`, `13599`, `13600`, `13601`, `13602`, `13603`, `13604`, `13605`, `13606`, `13607`, `13608`, `13609`, `13610`, `13611`, `13612`, `13613`, `13614`, `13615`, `13616`, `13617`, `13618`, `13619`, `13620`, `13621`, `13622`, `13623`, `13624`, `13625`, `13626`, `13627`, `13628`, `13629`, `13630`, `13631`, `13632`, `13633`, `13634`, `13635`, `13636`, `13637`, `13638`, `13639`, `13640`, `13641`, `13642`, `13643`, `13644`, `13645`, `13646`, `13647`, `13648`, `13649`, `13650`, `13651`, `13652`, `13653`, `13654`, `13655`, `13656`, `13657`, `13658`, `13659`, `13660`, `13661`, `13662`, `13663`, `13664`, `13665`, `13666`, `13667`, `13668`, `13669`, `13670`, `13671`, `13672`, `13673`, `13674`, `13675`, `13676`, `13677`, `13678`, `13679`, `13680`, `13681`, `13682`, `13683`, `13684`, `13685`, `13686`, `13687`, `13688`, `13689`, `13690`, `13691`, `13692`, `13693`, `13694`, `13695`, `13696`, `13697`, `13698`, `13699`, `13700`, `13701`, `13702`, `13703`, `13704`, `13705`, `13706`, `13707`, `13708`, `13709`, `13710`, `13711`, `13712`, `13713`, `13714`, `13715`, `13716`, `13717`, `13718`, `13719`, `13720`, `13721`, `13722`, `13723`, `13724`, `13725`, `13726`, `13727`, `13728`, `13729`, `13730`, `13731`, `13732`, `13733`, `13734`, `13735`, `13736`, `13737`, `13738`, `13739`, `13740`, `13741`, `13742`, `13743`, `13744`, `13745`, `13746`, `13747`, `13748`, `13749`, `13750`, `13751`, `13752`, `13753`, `13754`, `13755`, `13756`, `13757`, `13758`, `13759`, `13760`, `13761`, `13762`, `13763`, `13764`, `13765`, `13766`, `13767`, `13768`, `13769`, `13770`, `13771`, `13772`, `13773`, `13774`, `13775`, `13776`, `13777`, `13778`, `13779`, `13780`, `13781`, `13782`, `13783`, `13784`, `13785`, `13786`, `13787`, `13788`, `13789`, `13790`, `13791`, `13792`, `13793`, `13794`, `13795`, `13796`, `13797`, `13798`, `13799`, `13800`, `13801`, `13802`, `13803`, `13804`, `13805`, `13806`, `13807`, `13808`, `13809`, `13810`, `13811`, `13812`, `13813`, `13814`, `13815`, `13816`, `13817`, `13818`, `13819`, `13820`, `13821`, `13822`, `13823`, `13824`, `13825`, `13826`, `13827`, `13828`, `13829`, `13830`, `13831`, `13832`, `13833`, `13834`, `13835`, `13836`, `13837`, `13838`, `13839`, `13840`, `13841`, `13842`, `13843`, `13844`, `13845`, `13846`, `13847`, `13848`, `13849`, `13850`, `13851`, `13852`, `13853`, `13854`, `13855`, `13856`, `13857`, `13858`, `13859`, `13860`, `13861`, `13862`, `13863`, `13864`, `13865`, `13866`, `13867`, `13868`, `13869`, `13870`, 
`13871`, `13872`, `13873`, `13874`, `13875`, `13876`, `13877`, `13878`, `13879`, `13880`, `13881`, `13882`, `13883`, `13884`, `13885`, `13886`, `13887`, `13888`, `13889`, `13890`, `13891`, `13892`, `13893`, `13894`, `13895`, `13896`, `13897`, `13898`, `13899`, `13900`, `13901`, `13902`, `13903`, `13904`, `13905`, `13906`, `13907`, `13908`, `13909`, `13910`, `13911`, `13912`, `13913`, `13914`, `13915`, `13916`, `13917`, `13918`, `13919`, `13920`, `13921`, `13922`, `13923`, `13924`, `13925`, `13926`, `13927`, `13928`, `13929`, `13930`, `13931`, `13932`, `13933`, `13934`, `13935`, `13936`, `13937`, `13938`, `13939`, `13940`, `13941`, `13942`, `13943`, `13944`, `13945`, `13946`, `13947`, `13948`, `13949`, `13950`, `13951`, `13952`, `13953`, `13954`, `13955`, `13956`, `13957`, `13958`, `13959`, `13960`, `13961`, `13962`, `13963`, `13964`, `13965`, `13966`, `13967`, `13968`, `13969`, `13970`, `13971`, `13972`, `13973`, `13974`, `13975`, `13976`, `13977`, `13978`, `13979`, `13980`, `13981`, `13982`, `13983`, `13984`, `13985`, `13986`, `13987`, `13988`, `13989`, `13990`, `13991`, `13992`, `13993`, `13994`, `13995`, `13996`, `13997`, `13998`, `13999`, `14000`, `14001`, `14002`, `14003`, `14004`, `14005`, `14006`, `14007`, `14008`, `14009`, `14010`, `14011`, `14012`, `14013`, `14014`, `14015`, `14016`, `14017`, `14018`, `14019`, `14020`, `14021`, `14022`, `14023`, `14024`, `14025`, `14026`, `14027`, `14028`, `14029`, `14030`, `14031`, `14032`, `14033`, `14034`, `14035`, `14036`, `14037`, `14038`, `14039`, `14040`, `14041`, `14042`, `14043`, `14044`, `14045`, `14046`, `14047`, `14048`, `14049`, `14050`, `14051`, `14052`, `14053`, `14054`, `14055`, `14056`, `14057`, `14058`, `14059`, `14060`, `14061`, `14062`, `14063`, `14064`, `14065`, `14066`, `14067`, `14068`, `14069`, `14070`, `14071`, `14072`, `14073`, `14074`, `14075`, `14076`, `14077`, `14078`, `14079`, `14080`, `14081`, `14082`, `14083`, `14084`, `14085`, `14086`, `14087`, `14088`, `14089`, `14090`, `14091`, `14092`, `14093`, `14094`, `14095`, `14096`, `14097`, `14098`, `14099`, `14100`, `14101`, `14102`, `14103`, `14104`, `14105`, `14106`, `14107`, `14108`, `14109`, `14110`, `14111`, `14112`, `14113`, `14114`, `14115`, `14116`, `14117`, `14118`, `14119`, `14120`, `14121`, `14122`, `14123`, `14124`, `14125`, `14126`, `14127`, `14128`, `14129`, `14130`, `14131`, `14132`, `14133`, `14134`, `14135`, `14136`, `14137`, `14138`, `14139`, `14140`, `14141`, `14142`, `14143`, `14144`, `14145`, `14146`, `14147`, `14148`, `14149`, `14150`, `14151`, `14152`, `14153`, `14154`, `14155`, `14156`, `14157`, `14158`, `14159`, `14160`, `14161`, `14162`, `14163`, `14164`, `14165`, `14166`, `14167`, `14168`, `14169`, `14170`, `14171`, `14172`, `14173`, `14174`, `14175`, `14176`, `14177`, `14178`, `14179`, `14180`, `14181`, `14182`, `14183`, `14184`, `14185`, `14186`, `14187`, `14188`, `14189`, `14190`, `14191`, `14192`, `14193`, `14194`, `14195`, `14196`, `14197`, `14198`, `14199`, `14200`, `14201`, `14202`, `14203`, `14204`, `14205`, `14206`, `14207`, `14208`, `14209`, `14210`, `14211`, `14212`, `14213`, `14214`, `14215`, `14216`, `14217`, `14218`, `14219`, `14220`, `14221`, `14222`, `14223`, `14224`, `14225`, `14226`, `14227`, `14228`, `14229`, `14230`, `14231`, `14232`, `14233`, `14234`, `14235`, `14236`, `14237`, `14238`, `14239`, `14240`, `14241`, `14242`, `14243`, `14244`, `14245`, `14246`, `14247`, `14248`, `14249`, `14250`, `14251`, `14252`, `14253`, `14254`, `14255`, `14256`, `14257`, `14258`, `14259`, `14260`, `14261`, `14262`, `14263`, `14264`, `14265`, 
`14266`, `14267`, `14268`, `14269`, `14270`, `14271`, `14272`, `14273`, `14274`, `14275`, `14276`, `14277`, `14278`, `14279`, `14280`, `14281`, `14282`, `14283`, `14284`, `14285`, `14286`, `14287`, `14288`, `14289`, `14290`, `14291`, `14292`, `14293`, `14294`, `14295`, `14296`, `14297`, `14298`, `14299`, `14300`, `14301`, `14302`, `14303`, `14304`, `14305`, `14306`, `14307`, `14308`, `14309`, `14310`, `14311`, `14312`, `14313`, `14314`, `14315`, `14316`, `14317`, `14318`, `14319`, `14320`, `14321`, `14322`, `14323`, `14324`, `14325`, `14326`, `14327`, `14328`, `14329`, `14330`, `14331`, `14332`, `14333`, `14334`, `14335`, `14336`, `14337`, `14338`, `14339`, `14340`, `14341`, `14342`, `14343`, `14344`, `14345`, `14346`, `14347`, `14348`, `14349`, `14350`, `14351`, `14352`, `14353`, `14354`, `14355`, `14356`, `14357`, `14358`, `14359`, `14360`, `14361`, `14362`, `14363`, `14364`, `14365`, `14366`, `14367`, `14368`, `14369`, `14370`, `14371`, `14372`, `14373`, `14374`, `14375`, `14376`, `14377`, `14378`, `14379`, `14380`, `14381`, `14382`, `14383`, `14384`, `14385`, `14386`, `14387`, `14388`, `14389`, `14390`, `14391`, `14392`, `14393`, `14394`, `14395`, `14396`, `14397`, `14398`, `14399`, `14400`, `14401`, `14402`, `14403`, `14404`, `14405`, `14406`, `14407`, `14408`, `14409`, `14410`, `14411`, `14412`, `14413`, `14414`, `14415`, `14416`, `14417`, `14418`, `14419`, `14420`, `14421`, `14422`, `14423`, `14424`, `14425`, `14426`, `14427`, `14428`, `14429`, `14430`, `14431`, `14432`, `14433`, `14434`, `14435`, `14436`, `14437`, `14438`, `14439`, `14440`, `14441`, `14442`, `14443`, `14444`, `14445`, `14446`, `14447`, `14448`, `14449`, `14450`, `14451`, `14452`, `14453`, `14454`, `14455`, `14456`, `14457`, `14458`, `14459`, `14460`, `14461`, `14462`, `14463`, `14464`, `14465`, `14466`, `14467`, `14468`, `14469`, `14470`, `14471`, `14472`, `14473`, `14474`, `14475`, `14476`, `14477`, `14478`, `14479`, `14480`, `14481`, `14482`, `14483`, `14484`, `14485`, `14486`, `14487`, `14488`, `14489`, `14490`, `14491`, `14492`, `14493`, `14494`, `14495`, `14496`, `14497`, `14498`, `14499`, `14500`, `14501`, `14502`, `14503`, `14504`, `14505`, `14506`, `14507`, `14508`, `14509`, `14510`, `14511`, `14512`, `14513`, `14514`, `14515`, `14516`, `14517`, `14518`, `14519`, `14520`, `14521`, `14522`, `14523`, `14524`, `14525`, `14526`, `14527`, `14528`, `14529`, `14530`, `14531`, `14532`, `14533`, `14534`, `14535`, `14536`, `14537`, `14538`, `14539`, `14540`, `14541`, `14542`, `14543`, `14544`, `14545`, `14546`, `14547`, `14548`, `14549`, `14550`, `14551`, `14552`, `14553`, `14554`, `14555`, `14556`, `14557`, `14558`, `14559`, `14560`, `14561`, `14562`, `14563`, `14564`, `14565`, `14566`, `14567`, `14568`, `14569`, `14570`, `14571`, `14572`, `14573`, `14574`, `14575`, `14576`, `14577`, `14578`, `14579`, `14580`, `14581`, `14582`, `14583`, `14584`, `14585`, `14586`, `14587`, `14588`, `14589`, `14590`, `14591`, `14592`, `14593`, `14594`, `14595`, `14596`, `14597`, `14598`, `14599`, `14600`, `14601`, `14602`, `14603`, `14604`, `14605`, `14606`, `14607`, `14608`, `14609`, `14610`, `14611`, `14612`, `14613`, `14614`, `14615`, `14616`, `14617`, `14618`, `14619`, `14620`, `14621`, `14622`, `14623`, `14624`, `14625`, `14626`, `14627`, `14628`, `14629`, `14630`, `14631`, `14632`, `14633`, `14634`, `14635`, `14636`, `14637`, `14638`, `14639`, `14640`, `14641`, `14642`, `14643`, `14644`, `14645`, `14646`, `14647`, `14648`, `14649`, `14650`, `14651`, `14652`, `14653`, `14654`, `14655`, `14656`, `14657`, `14658`, `14659`, `14660`, 
`14661`, `14662`, `14663`, `14664`, `14665`, `14666`, `14667`, `14668`, `14669`, `14670`, `14671`, `14672`, `14673`, `14674`, `14675`, `14676`, `14677`, `14678`, `14679`, `14680`, `14681`, `14682`, `14683`, `14684`, `14685`, `14686`, `14687`, `14688`, `14689`, `14690`, `14691`, `14692`, `14693`, `14694`, `14695`, `14696`, `14697`, `14698`, `14699`, `14700`, `14701`, `14702`, `14703`, `14704`, `14705`, `14706`, `14707`, `14708`, `14709`, `14710`, `14711`, `14712`, `14713`, `14714`, `14715`, `14716`, `14717`, `14718`, `14719`, `14720`, `14721`, `14722`, `14723`, `14724`, `14725`, `14726`, `14727`, `14728`, `14729`, `14730`, `14731`, `14732`, `14733`, `14734`, `14735`, `14736`, `14737`, `14738`, `14739`, `14740`, `14741`, `14742`, `14743`, `14744`, `14745`, `14746`, `14747`, `14748`, `14749`, `14750`, `14751`, `14752`, `14753`, `14754`, `14755`, `14756`, `14757`, `14758`, `14759`, `14760`, `14761`, `14762`, `14763`, `14764`, `14765`, `14766`, `14767`, `14768`, `14769`, `14770`, `14771`, `14772`, `14773`, `14774`, `14775`, `14776`, `14777`, `14778`, `14779`, `14780`, `14781`, `14782`, `14783`, `14784`, `14785`, `14786`, `14787`, `14788`, `14789`, `14790`, `14791`, `14792`, `14793`, `14794`, `14795`, `14796`, `14797`, `14798`, `14799`, `14800`, `14801`, `14802`, `14803`, `14804`, `14805`, `14806`, `14807`, `14808`, `14809`, `14810`, `14811`, `14812`, `14813`, `14814`, `14815`, `14816`, `14817`, `14818`, `14819`, `14820`, `14821`, `14822`, `14823`, `14824`, `14825`, `14826`, `14827`, `14828`, `14829`, `14830`, `14831`, `14832`, `14833`, `14834`, `14835`, `14836`, `14837`, `14838`, `14839`, `14840`, `14841`, `14842`, `14843`, `14844`, `14845`, `14846`, `14847`, `14848`, `14849`, `14850`, `14851`, `14852`, `14853`, `14854`, `14855`, `14856`, `14857`, `14858`, `14859`, `14860`, `14861`, `14862`, `14863`, `14864`, `14865`, `14866`, `14867`, `14868`, `14869`, `14870`, `14871`, `14872`, `14873`, `14874`, `14875`, `14876`, `14877`, `14878`, `14879`, `14880`, `14881`, `14882`, `14883`, `14884`, `14885`, `14886`, `14887`, `14888`, `14889`, `14890`, `14891`, `14892`, `14893`, `14894`, `14895`, `14896`, `14897`, `14898`, `14899`, `14900`, `14901`, `14902`, `14903`, `14904`, `14905`, `14906`, `14907`, `14908`, `14909`, `14910`, `14911`, `14912`, `14913`, `14914`, `14915`, `14916`, `14917`, `14918`, `14919`, `14920`, `14921`, `14922`, `14923`, `14924`, `14925`, `14926`, `14927`, `14928`, `14929`, `14930`, `14931`, `14932`, `14933`, `14934`, `14935`, `14936`, `14937`, `14938`, `14939`, `14940`, `14941`, `14942`, `14943`, `14944`, `14945`, `14946`, `14947`, `14948`, `14949`, `14950`, `14951`, `14952`, `14953`, `14954`, `14955`, `14956`, `14957`, `14958`, `14959`, `14960`, `14961`, `14962`, `14963`, `14964`, `14965`, `14966`, `14967`, `14968`, `14969`, `14970`, `14971`, `14972`, `14973`, `14974`, `14975`, `14976`, `14977`, `14978`, `14979`, `14980`, `14981`, `14982`, `14983`, `14984`, `14985`, `14986`, `14987`, `14988`, `14989`, `14990`, `14991`, `14992`, `14993`, `14994`, `14995`, `14996`, `14997`, `14998`, `14999`, `15000`, `15001`, `15002`, `15003`, `15004`, `15005`, `15006`, `15007`, `15008`, `15009`, `15010`, `15011`, `15012`, `15013`, `15014`, `15015`, `15016`, `15017`, `15018`, `15019`, `15020`, `15021`, `15022`, `15023`, `15024`, `15025`, `15026`, `15027`, `15028`, `15029`, `15030`, `15031`, `15032`, `15033`, `15034`, `15035`, `15036`, `15037`, `15038`, `15039`, `15040`, `15041`, `15042`, `15043`, `15044`, `15045`, `15046`, `15047`, `15048`, `15049`, `15050`, `15051`, `15052`, `15053`, `15054`, `15055`, 
`15056`, `15057`, `15058`, `15059`, `15060`, `15061`, `15062`, `15063`, `15064`, `15065`, `15066`, `15067`, `15068`, `15069`, `15070`, `15071`, `15072`, `15073`, `15074`, `15075`, `15076`, `15077`, `15078`, `15079`, `15080`, `15081`, `15082`, `15083`, `15084`, `15085`, `15086`, `15087`, `15088`, `15089`, `15090`, `15091`, `15092`, `15093`, `15094`, `15095`, `15096`, `15097`, `15098`, `15099`, `15100`, `15101`, `15102`, `15103`, `15104`, `15105`, `15106`, `15107`, `15108`, `15109`, `15110`, `15111`, `15112`, `15113`, `15114`, `15115`, `15116`, `15117`, `15118`, `15119`, `15120`, `15121`, `15122`, `15123`, `15124`, `15125`, `15126`, `15127`, `15128`, `15129`, `15130`, `15131`, `15132`, `15133`, `15134`, `15135`, `15136`, `15137`, `15138`, `15139`, `15140`, `15141`, `15142`, `15143`, `15144`, `15145`, `15146`, `15147`, `15148`, `15149`, `15150`, `15151`, `15152`, `15153`, `15154`, `15155`, `15156`, `15157`, `15158`, `15159`, `15160`, `15161`, `15162`, `15163`, `15164`, `15165`, `15166`, `15167`, `15168`, `15169`, `15170`, `15171`, `15172`, `15173`, `15174`, `15175`, `15176`, `15177`, `15178`, `15179`, `15180`, `15181`, `15182`, `15183`, `15184`, `15185`, `15186`, `15187`, `15188`, `15189`, `15190`, `15191`, `15192`, `15193`, `15194`, `15195`, `15196`, `15197`, `15198`, `15199`, `15200`, `15201`, `15202`, `15203`, `15204`, `15205`, `15206`, `15207`, `15208`, `15209`, `15210`, `15211`, `15212`, `15213`, `15214`, `15215`, `15216`, `15217`, `15218`, `15219`, `15220`, `15221`, `15222`, `15223`, `15224`, `15225`, `15226`, `15227`, `15228`, `15229`, `15230`, `15231`, `15232`, `15233`, `15234`, `15235`, `15236`, `15237`, `15238`, `15239`, `15240`, `15241`, `15242`, `15243`, `15244`, `15245`, `15246`, `15247`, `15248`, `15249`, `15250`, `15251`, `15252`, `15253`, `15254`, `15255`, `15256`, `15257`, `15258`, `15259`, `15260`, `15261`, `15262`, `15263`, `15264`, `15265`, `15266`, `15267`, `15268`, `15269`, `15270`, `15271`, `15272`, `15273`, `15274`, `15275`, `15276`, `15277`, `15278`, `15279`, `15280`, `15281`, `15282`, `15283`, `15284`, `15285`, `15286`, `15287`, `15288`, `15289`, `15290`, `15291`, `15292`, `15293`, `15294`, `15295`, `15296`, `15297`, `15298`, `15299`, `15300`, `15301`, `15302`, `15303`, `15304`, `15305`, `15306`, `15307`, `15308`, `15309`, `15310`, `15311`, `15312`, `15313`, `15314`, `15315`, `15316`, `15317`, `15318`, `15319`, `15320`, `15321`, `15322`, `15323`, `15324`, `15325`, `15326`, `15327`, `15328`, `15329`, `15330`, `15331`, `15332`, `15333`, `15334`, `15335`, `15336`, `15337`, `15338`, `15339`, `15340`, `15341`, `15342`, `15343`, `15344`, `15345`, `15346`, `15347`, `15348`, `15349`, `15350`, `15351`, `15352`, `15353`, `15354`, `15355`, `15356`, `15357`, `15358`, `15359`, `15360`, `15361`, `15362`, `15363`, `15364`, `15365`, `15366`, `15367`, `15368`, `15369`, `15370`, `15371`, `15372`, `15373`, `15374`, `15375`, `15376`, `15377`, `15378`, `15379`, `15380`, `15381`, `15382`, `15383`, `15384`, `15385`, `15386`, `15387`, `15388`, `15389`, `15390`, `15391`, `15392`, `15393`, `15394`, `15395`, `15396`, `15397`, `15398`, `15399`, `15400`, `15401`, `15402`, `15403`, `15404`, `15405`, `15406`, `15407`, `15408`, `15409`, `15410`, `15411`, `15412`, `15413`, `15414`, `15415`, `15416`, `15417`, `15418`, `15419`, `15420`, `15421`, `15422`, `15423`, `15424`, `15425`, `15426`, `15427`, `15428`, `15429`, `15430`, `15431`, `15432`, `15433`, `15434`, `15435`, `15436`, `15437`, `15438`, `15439`, `15440`, `15441`, `15442`, `15443`, `15444`, `15445`, `15446`, `15447`, `15448`, `15449`, `15450`, 
`15451`, `15452`, `15453`, `15454`, `15455`, `15456`, `15457`, `15458`, `15459`, `15460`, `15461`, `15462`, `15463`, `15464`, `15465`, `15466`, `15467`, `15468`, `15469`, `15470`, `15471`, `15472`, `15473`, `15474`, `15475`, `15476`, `15477`, `15478`, `15479`, `15480`, `15481`, `15482`, `15483`, `15484`, `15485`, `15486`, `15487`, `15488`, `15489`, `15490`, `15491`, `15492`, `15493`, `15494`, `15495`, `15496`, `15497`, `15498`, `15499`, `15500`, `15501`, `15502`, `15503`, `15504`, `15505`, `15506`, `15507`, `15508`, `15509`, `15510`, `15511`, `15512`, `15513`, `15514`, `15515`, `15516`, `15517`, `15518`, `15519`, `15520`, `15521`, `15522`, `15523`, `15524`, `15525`, `15526`, `15527`, `15528`, `15529`, `15530`, `15531`, `15532`, `15533`, `15534`, `15535`, `15536`, `15537`, `15538`, `15539`, `15540`, `15541`, `15542`, `15543`, `15544`, `15545`, `15546`, `15547`, `15548`, `15549`, `15550`, `15551`, `15552`, `15553`, `15554`, `15555`, `15556`, `15557`, `15558`, `15559`, `15560`, `15561`, `15562`, `15563`, `15564`, `15565`, `15566`, `15567`, `15568`, `15569`, `15570`, `15571`, `15572`, `15573`, `15574`, `15575`, `15576`, `15577`, `15578`, `15579`, `15580`, `15581`, `15582`, `15583`, `15584`, `15585`, `15586`, `15587`, `15588`, `15589`, `15590`, `15591`, `15592`, `15593`, `15594`, `15595`, `15596`, `15597`, `15598`, `15599`, `15600`, `15601`, `15602`, `15603`, `15604`, `15605`, `15606`, `15607`, `15608`, `15609`, `15610`, `15611`, `15612`, `15613`, `15614`, `15615`, `15616`, `15617`, `15618`, `15619`, `15620`, `15621`, `15622`, `15623`, `15624`, `15625`, `15626`, `15627`, `15628`, `15629`, `15630`, `15631`, `15632`, `15633`, `15634`, `15635`, `15636`, `15637`, `15638`, `15639`, `15640`, `15641`, `15642`, `15643`, `15644`, `15645`, `15646`, `15647`, `15648`, `15649`, `15650`, `15651`, `15652`, `15653`, `15654`, `15655`, `15656`, `15657`, `15658`, `15659`, `15660`, `15661`, `15662`, `15663`, `15664`, `15665`, `15666`, `15667`, `15668`, `15669`, `15670`, `15671`, `15672`, `15673`, `15674`, `15675`, `15676`, `15677`, `15678`, `15679`, `15680`, `15681`, `15682`, `15683`, `15684`, `15685`, `15686`, `15687`, `15688`, `15689`, `15690`, `15691`, `15692`, `15693`, `15694`, `15695`, `15696`, `15697`, `15698`, `15699`, `15700`, `15701`, `15702`, `15703`, `15704`, `15705`, `15706`, `15707`, `15708`, `15709`, `15710`, `15711`, `15712`, `15713`, `15714`, `15715`, `15716`, `15717`, `15718`, `15719`, `15720`, `15721`, `15722`, `15723`, `15724`, `15725`, `15726`, `15727`, `15728`, `15729`, `15730`, `15731`, `15732`, `15733`, `15734`, `15735`, `15736`, `15737`, `15738`, `15739`, `15740`, `15741`, `15742`, `15743`, `15744`, `15745`, `15746`, `15747`, `15748`, `15749`, `15750`, `15751`, `15752`, `15753`, `15754`, `15755`, `15756`, `15757`, `15758`, `15759`, `15760`, `15761`, `15762`, `15763`, `15764`, `15765`, `15766`, `15767`, `15768`, `15769`, `15770`, `15771`, `15772`, `15773`, `15774`, `15775`, `15776`, `15777`, `15778`, `15779`, `15780`, `15781`, `15782`, `15783`, `15784`, `15785`, `15786`, `15787`, `15788`, `15789`, `15790`, `15791`, `15792`, `15793`, `15794`, `15795`, `15796`, `15797`, `15798`, `15799`, `15800`, `15801`, `15802`, `15803`, `15804`, `15805`, `15806`, `15807`, `15808`, `15809`, `15810`, `15811`, `15812`, `15813`, `15814`, `15815`, `15816`, `15817`, `15818`, `15819`, `15820`, `15821`, `15822`, `15823`, `15824`, `15825`, `15826`, `15827`, `15828`, `15829`, `15830`, `15831`, `15832`, `15833`, `15834`, `15835`, `15836`, `15837`, `15838`, `15839`, `15840`, `15841`, `15842`, `15843`, `15844`, `15845`, 
`15846`, `15847`, `15848`, `15849`, `15850`, `15851`, `15852`, `15853`, `15854`, `15855`, `15856`, `15857`, `15858`, `15859`, `15860`, `15861`, `15862`, `15863`, `15864`, `15865`, `15866`, `15867`, `15868`, `15869`, `15870`, `15871`, `15872`, `15873`, `15874`, `15875`, `15876`, `15877`, `15878`, `15879`, `15880`, `15881`, `15882`, `15883`, `15884`, `15885`, `15886`, `15887`, `15888`, `15889`, `15890`, `15891`, `15892`, `15893`, `15894`, `15895`, `15896`, `15897`, `15898`, `15899`, `15900`, `15901`, `15902`, `15903`, `15904`, `15905`, `15906`, `15907`, `15908`, `15909`, `15910`, `15911`, `15912`, `15913`, `15914`, `15915`, `15916`, `15917`, `15918`, `15919`, `15920`, `15921`, `15922`, `15923`, `15924`, `15925`, `15926`, `15927`, `15928`, `15929`, `15930`, `15931`, `15932`, `15933`, `15934`, `15935`, `15936`, `15937`, `15938`, `15939`, `15940`, `15941`, `15942`, `15943`, `15944`, `15945`, `15946`, `15947`, `15948`, `15949`, `15950`, `15951`, `15952`, `15953`, `15954`, `15955`, `15956`, `15957`, `15958`, `15959`, `15960`, `15961`, `15962`, `15963`, `15964`, `15965`, `15966`, `15967`, `15968`, `15969`, `15970`, `15971`, `15972`, `15973`, `15974`, `15975`, `15976`, `15977`, `15978`, `15979`, `15980`, `15981`, `15982`, `15983`, `15984`, `15985`, `15986`, `15987`, `15988`, `15989`, `15990`, `15991`, `15992`, `15993`, `15994`, `15995`, `15996`, `15997`, `15998`, `15999`, `16000`, `16001`, `16002`, `16003`, `16004`, `16005`, `16006`, `16007`, `16008`, `16009`, `16010`, `16011`, `16012`, `16013`, `16014`, `16015`, `16016`, `16017`, `16018`, `16019`, `16020`, `16021`, `16022`, `16023`, `16024`, `16025`, `16026`, `16027`, `16028`, `16029`, `16030`, `16031`, `16032`, `16033`, `16034`, `16035`, `16036`, `16037`, `16038`, `16039`, `16040`, `16041`, `16042`, `16043`, `16044`, `16045`, `16046`, `16047`, `16048`, `16049`, `16050`, `16051`, `16052`, `16053`, `16054`, `16055`, `16056`, `16057`, `16058` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_F` | 100.00 | | `TOKEN_P` | 100.00 | | `TOKEN_R` | 100.00 | | `TOKEN_ACC` | 100.00 | | `SENTS_F` | 81.11 | | `SENTS_P` | 79.75 | | `SENTS_R` | 82.52 | | `TAG_ACC` | 96.41 | | `POS_ACC` | 96.52 | | `MORPH_ACC` | 97.74 | | `DEP_UAS` | 90.21 | | `DEP_LAS` | 85.42 | | `LEMMA_ACC` | 90.34 |
{"language": ["multilingual"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
explosion/xx_udv25_oldfrenchsrcmf_trf
null
[ "spacy", "token-classification", "multilingual", "license:cc-by-sa-4.0", "model-index", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "multilingual" ]
TAGS #spacy #token-classification #multilingual #license-cc-by-sa-4.0 #model-index #region-us
UD v2.5 benchmarking pipeline for UD\_Old\_French-SRCMF ### Label Scheme View label scheme (16214 labels for 6 components) ### Accuracy
[ "### Label Scheme\n\n\n\nView label scheme (16214 labels for 6 components)", "### Accuracy" ]
[ "TAGS\n#spacy #token-classification #multilingual #license-cc-by-sa-4.0 #model-index #region-us \n", "### Label Scheme\n\n\n\nView label scheme (16214 labels for 6 components)", "### Accuracy" ]
text-generation
transformers
# Peppa Pig chat bot
{"tags": ["conversational"]}
f00d4tehg0dz/Peppa
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Peppa Pig chat bot
[]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Yoda chat bot
{"tags": ["conversational"]}
f00d4tehg0dz/Yoda
null
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Yoda chat bot
[]
[ "TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
token-classification
transformers
# Italian Legal Named Entity Recognition (NER)

ELECTRA-based model trained to extract entities of interest from Italian civil judgments issued by the Corte Suprema di Cassazione.

## Dataset

The model has been fine-tuned on 9000 judgments from 2016 to 2021 (1500 per year), labeled with a combination of rule-based and manual approaches. It can be used to extract the following named entities from the text.

| Tag | Italian name | English name |
|:----|:-------------|:-------------|
| RIC | ricorso | appeal |
| RCR | ricorrente | petitioner |
| CTR | controricorrente | respondent |
| AVV | avvocato | lawyer |
| CNS | consigliere | counselor |
| PMI | pubblico ministero | prosecutor |
| DOM | domicilio | domicile |
| CDA | corte d’appello | appeal court |
| SNT | sentenza | judgment |
{"language": ["it"], "tags": ["legal"], "widget": [{"text": "la seguente SENTENZA sul ricorso 24817-2015 proposto da: ANDREA FORMISANO, elettivamente domiciliato in ROMA VIA S. TOMMASO D'AQUINO 7, presso lo studio dell'avvocato CARLO BORELLO, che lo rappresenta e difende giusta delega in calce; - ricorrente - contro SOGET SPA, CAMERA DI COMMERCIO DI PESCARA; - intimati - avverso la sentenza n. 169/2012 della COMM.TRIB.REG.SEZ.DIST. di PESCARA, depositata il 13/03/2012; udita la relazione della causa svolta nella pubblica udienza del 04/04/2018 dal Consigliere Dott. MILENA BALSAMO; udito il P.M. in persona del Sostituto Procuratore Generale Dott. GIOVANNI GIACALONE che ha concluso per l'inammissibilit\u00e0 in subordine rigetto del ricorso.", "example_title": "Judgment example"}]}
fabiod20/italian-legal-ner
null
[ "transformers", "pytorch", "electra", "token-classification", "legal", "it", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[ "it" ]
TAGS #transformers #pytorch #electra #token-classification #legal #it #autotrain_compatible #endpoints_compatible #has_space #region-us
Italian Legal Named Entity Recognition (NER) ============================================ ELECTRA-based model trained to extract entities of interest from Italian civil judgments issued by the Corte Suprema di Cassazione. Dataset ------- The model has been fine-tuned on 9000 judgments from 2016 to 2021 (1500 per year), labeled with a combination of rule-based and manual approaches. It can be used to extract the following named entities from the text.
[]
[ "TAGS\n#transformers #pytorch #electra #token-classification #legal #it #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-de-finetuned-en-to-de-wd01-fp16false This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "opus-mt-en-de-finetuned-en-to-de-wd01-fp16false", "results": []}]}
fabiogr/opus-mt-en-de-finetuned-en-to-de-wd01-fp16false
null
[ "transformers", "pytorch", "marian", "text2text-generation", "generated_from_trainer", "dataset:wmt16", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# opus-mt-en-de-finetuned-en-to-de-wd01-fp16false This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-de on the wmt16 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
[ "# opus-mt-en-de-finetuned-en-to-de-wd01-fp16false\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-de on the wmt16 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# opus-mt-en-de-finetuned-en-to-de-wd01-fp16false\n\nThis model is a fine-tuned version of Helsinki-NLP/opus-mt-en-de on the wmt16 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0\n- Datasets 1.16.1\n- Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-ag_news This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the ag_news dataset. It achieves the following results on the evaluation set: - Loss: 0.3284 - Accuracy: 0.9375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 7425 - training_steps: 74250 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5773 | 0.13 | 2000 | 0.3627 | 0.8875 | | 0.3101 | 0.27 | 4000 | 0.2938 | 0.9208 | | 0.3076 | 0.4 | 6000 | 0.3114 | 0.9092 | | 0.3114 | 0.54 | 8000 | 0.4545 | 0.9008 | | 0.3154 | 0.67 | 10000 | 0.3875 | 0.9083 | | 0.3095 | 0.81 | 12000 | 0.3390 | 0.9142 | | 0.2948 | 0.94 | 14000 | 0.3341 | 0.9133 | | 0.2557 | 1.08 | 16000 | 0.4573 | 0.9092 | | 0.258 | 1.21 | 18000 | 0.3356 | 0.9217 | | 0.2455 | 1.35 | 20000 | 0.3348 | 0.9283 | | 0.2361 | 1.48 | 22000 | 0.3218 | 0.93 | | 0.254 | 1.62 | 24000 | 0.3814 | 0.9033 | | 0.2528 | 1.75 | 26000 | 0.3628 | 0.9158 | | 0.2282 | 1.89 | 28000 | 0.3302 | 0.9308 | | 0.224 | 2.02 | 30000 | 0.3967 | 0.9225 | | 0.174 | 2.15 | 32000 | 0.3669 | 0.9333 | | 0.1848 | 2.29 | 34000 | 0.3435 | 0.9283 | | 0.19 | 2.42 | 36000 | 0.3552 | 0.93 | | 0.1865 | 2.56 | 38000 | 0.3996 | 0.9258 | | 0.1877 | 2.69 | 40000 | 0.3749 | 0.9258 | | 0.1951 | 2.83 | 42000 | 0.3963 | 0.9258 | | 0.1702 | 2.96 | 44000 | 0.3655 | 0.9317 | | 0.1488 | 3.1 | 46000 | 0.3942 | 0.9292 | | 0.1231 | 3.23 | 48000 | 0.3998 | 0.9267 | | 0.1319 | 3.37 | 50000 | 0.4292 | 0.9242 | | 0.1334 | 3.5 | 52000 | 0.4904 | 0.9192 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.6.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer", "sibyl"], "datasets": ["ag_news"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-ag_news", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "ag_news", "type": "ag_news", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9375, "name": "Accuracy"}]}]}]}
fabriceyhc/bert-base-uncased-ag_news
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:ag_news", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-ag_news #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-ag\_news ========================== This model is a fine-tuned version of bert-base-uncased on the ag\_news dataset. It achieves the following results on the evaluation set: * Loss: 0.3284 * Accuracy: 0.9375 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 7425 * training\_steps: 74250 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.7.1 * Datasets 1.6.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 7425\n* training\\_steps: 74250", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-ag_news #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 7425\n* training\\_steps: 74250", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-amazon_polarity This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the amazon_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.2945 - Accuracy: 0.9465 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1782000 - training_steps: 17820000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 0.7155 | 0.0 | 2000 | 0.7060 | 0.4622 | | 0.7054 | 0.0 | 4000 | 0.6925 | 0.5165 | | 0.6842 | 0.0 | 6000 | 0.6653 | 0.6116 | | 0.6375 | 0.0 | 8000 | 0.5721 | 0.7909 | | 0.4671 | 0.0 | 10000 | 0.3238 | 0.8770 | | 0.3403 | 0.0 | 12000 | 0.3692 | 0.8861 | | 0.4162 | 0.0 | 14000 | 0.4560 | 0.8908 | | 0.4728 | 0.0 | 16000 | 0.5071 | 0.8980 | | 0.5111 | 0.01 | 18000 | 0.5204 | 0.9015 | | 0.4792 | 0.01 | 20000 | 0.5193 | 0.9076 | | 0.544 | 0.01 | 22000 | 0.4835 | 0.9133 | | 0.4745 | 0.01 | 24000 | 0.4689 | 0.9170 | | 0.4403 | 0.01 | 26000 | 0.4778 | 0.9177 | | 0.4405 | 0.01 | 28000 | 0.4754 | 0.9163 | | 0.4375 | 0.01 | 30000 | 0.4808 | 0.9175 | | 0.4628 | 0.01 | 32000 | 0.4340 | 0.9244 | | 0.4488 | 0.01 | 34000 | 0.4162 | 0.9265 | | 0.4608 | 0.01 | 36000 | 0.4031 | 0.9271 | | 0.4478 | 0.01 | 38000 | 0.4502 | 0.9253 | | 0.4237 | 0.01 | 40000 | 0.4087 | 0.9279 | | 0.4601 | 0.01 | 42000 | 0.4133 | 0.9269 | | 0.4153 | 0.01 | 44000 | 0.4230 | 0.9306 | | 0.4096 | 0.01 | 46000 | 0.4108 | 0.9301 | | 0.4348 | 0.01 | 48000 | 0.4138 | 0.9309 | | 0.3787 | 0.01 | 50000 | 0.4066 | 0.9324 | | 0.4172 | 0.01 | 52000 | 0.4812 | 0.9206 | | 0.3897 | 0.02 | 54000 | 0.4013 | 0.9325 | | 0.3787 | 0.02 | 56000 | 0.3837 | 0.9344 | | 0.4253 | 0.02 | 58000 | 0.3925 | 0.9347 | | 0.3959 | 0.02 | 60000 | 0.3907 | 0.9353 | | 0.4402 | 0.02 | 62000 | 0.3708 | 0.9341 | | 0.4115 | 0.02 | 64000 | 0.3477 | 0.9361 | | 0.3876 | 0.02 | 66000 | 0.3634 | 0.9373 | | 0.4286 | 0.02 | 68000 | 0.3778 | 0.9378 | | 0.422 | 0.02 | 70000 | 0.3540 | 0.9361 | | 0.3732 | 0.02 | 72000 | 0.3853 | 0.9378 | | 0.3641 | 0.02 | 74000 | 0.3951 | 0.9386 | | 0.3701 | 0.02 | 76000 | 0.3582 | 0.9388 | | 0.4498 | 0.02 | 78000 | 0.3268 | 0.9375 | | 0.3587 | 0.02 | 80000 | 0.3825 | 0.9401 | | 0.4474 | 0.02 | 82000 | 0.3155 | 0.9391 | | 0.3598 | 0.02 | 84000 | 0.3666 | 0.9388 | | 0.389 | 0.02 | 86000 | 0.3745 | 0.9377 | | 0.3625 | 0.02 | 88000 | 0.3776 | 0.9387 | | 0.3511 | 0.03 | 90000 | 0.4275 | 0.9336 | | 0.3428 | 0.03 | 92000 | 0.4301 | 0.9336 | | 0.4042 | 0.03 | 94000 | 0.3547 | 0.9359 | | 0.3583 | 0.03 | 96000 | 0.3763 | 0.9396 | | 0.3887 | 0.03 | 98000 | 0.3213 | 0.9412 | | 0.3915 | 0.03 | 100000 | 0.3557 | 0.9409 | | 0.3378 | 0.03 | 102000 | 0.3627 | 0.9418 | | 0.349 | 0.03 | 104000 | 0.3614 | 0.9402 | | 0.3596 | 0.03 | 106000 | 0.3834 | 0.9381 | | 0.3519 | 0.03 | 108000 | 0.3560 | 0.9421 | | 0.3598 | 0.03 | 110000 | 0.3485 | 0.9419 | | 0.3642 | 0.03 | 112000 | 0.3754 | 0.9395 | | 0.3477 | 
0.03 | 114000 | 0.3634 | 0.9426 | | 0.4202 | 0.03 | 116000 | 0.3071 | 0.9427 | | 0.3656 | 0.03 | 118000 | 0.3155 | 0.9441 | | 0.3709 | 0.03 | 120000 | 0.2923 | 0.9433 | | 0.374 | 0.03 | 122000 | 0.3272 | 0.9441 | | 0.3142 | 0.03 | 124000 | 0.3348 | 0.9444 | | 0.3452 | 0.04 | 126000 | 0.3603 | 0.9436 | | 0.3365 | 0.04 | 128000 | 0.3339 | 0.9434 | | 0.3353 | 0.04 | 130000 | 0.3471 | 0.9450 | | 0.343 | 0.04 | 132000 | 0.3508 | 0.9418 | | 0.3174 | 0.04 | 134000 | 0.3753 | 0.9436 | | 0.3009 | 0.04 | 136000 | 0.3687 | 0.9422 | | 0.3785 | 0.04 | 138000 | 0.3818 | 0.9396 | | 0.3199 | 0.04 | 140000 | 0.3291 | 0.9438 | | 0.4049 | 0.04 | 142000 | 0.3372 | 0.9454 | | 0.3435 | 0.04 | 144000 | 0.3315 | 0.9459 | | 0.3814 | 0.04 | 146000 | 0.3462 | 0.9401 | | 0.359 | 0.04 | 148000 | 0.3981 | 0.9361 | | 0.3552 | 0.04 | 150000 | 0.3226 | 0.9469 | | 0.345 | 0.04 | 152000 | 0.3731 | 0.9384 | | 0.3228 | 0.04 | 154000 | 0.2956 | 0.9471 | | 0.3637 | 0.04 | 156000 | 0.2869 | 0.9477 | | 0.349 | 0.04 | 158000 | 0.3331 | 0.9430 | | 0.3374 | 0.04 | 160000 | 0.4159 | 0.9340 | | 0.3718 | 0.05 | 162000 | 0.3241 | 0.9459 | | 0.315 | 0.05 | 164000 | 0.3544 | 0.9391 | | 0.3215 | 0.05 | 166000 | 0.3311 | 0.9451 | | 0.3464 | 0.05 | 168000 | 0.3682 | 0.9453 | | 0.3495 | 0.05 | 170000 | 0.3193 | 0.9469 | | 0.305 | 0.05 | 172000 | 0.4132 | 0.9389 | | 0.3479 | 0.05 | 174000 | 0.3465 | 0.947 | | 0.3537 | 0.05 | 176000 | 0.3277 | 0.9449 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.12.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer", "sibyl"], "datasets": ["amazon_polarity"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-amazon_polarity", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_polarity", "type": "amazon_polarity", "args": "amazon_polarity"}, "metrics": [{"type": "accuracy", "value": 0.94647, "name": "Accuracy"}]}, {"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_polarity", "type": "amazon_polarity", "config": "amazon_polarity", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.9464875, "name": "Accuracy", "verified": true}, {"type": "precision", "value": 0.9528844934702675, "name": "Precision", "verified": true}, {"type": "recall", "value": 0.939425, "name": "Recall", "verified": true}, {"type": "auc", "value": 0.9863499156250001, "name": "AUC", "verified": true}, {"type": "f1", "value": 0.9461068798388619, "name": "F1", "verified": true}, {"type": "loss", "value": 0.2944573760032654, "name": "loss", "verified": true}]}]}]}
fabriceyhc/bert-base-uncased-amazon_polarity
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:amazon_polarity", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-amazon_polarity #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-amazon\_polarity ================================== This model is a fine-tuned version of bert-base-uncased on the amazon\_polarity dataset. It achieves the following results on the evaluation set: * Loss: 0.2945 * Accuracy: 0.9465 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1782000 * training\_steps: 17820000 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.7.1 * Datasets 1.12.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1782000\n* training\\_steps: 17820000", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-amazon_polarity #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1782000\n* training\\_steps: 17820000", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.12.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-dbpedia_14 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the dbpedia_14 dataset. It achieves the following results on the evaluation set: - Loss: 0.0547 - Accuracy: 0.9903 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 34650 - training_steps: 346500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.7757 | 0.03 | 2000 | 0.2732 | 0.9880 | | 0.1002 | 0.06 | 4000 | 0.0620 | 0.9891 | | 0.0547 | 0.09 | 6000 | 0.0723 | 0.9879 | | 0.0558 | 0.12 | 8000 | 0.0678 | 0.9875 | | 0.0534 | 0.14 | 10000 | 0.0554 | 0.9896 | | 0.0632 | 0.17 | 12000 | 0.0670 | 0.9888 | | 0.0612 | 0.2 | 14000 | 0.0733 | 0.9873 | | 0.0667 | 0.23 | 16000 | 0.0623 | 0.9896 | | 0.0636 | 0.26 | 18000 | 0.0836 | 0.9868 | | 0.0705 | 0.29 | 20000 | 0.0776 | 0.9855 | | 0.0726 | 0.32 | 22000 | 0.0805 | 0.9861 | | 0.0778 | 0.35 | 24000 | 0.0713 | 0.9870 | | 0.0713 | 0.38 | 26000 | 0.1277 | 0.9805 | | 0.0965 | 0.4 | 28000 | 0.0810 | 0.9855 | | 0.0881 | 0.43 | 30000 | 0.0910 | 0.985 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.6.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer", "sibyl"], "datasets": ["dbpedia_14"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-dbpedia_14", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "dbpedia_14", "type": "dbpedia_14", "args": "dbpedia_14"}, "metrics": [{"type": "accuracy", "value": 0.9902857142857143, "name": "Accuracy"}]}]}]}
fabriceyhc/bert-base-uncased-dbpedia_14
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:dbpedia_14", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-dbpedia_14 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-dbpedia\_14 ============================= This model is a fine-tuned version of bert-base-uncased on the dbpedia\_14 dataset. It achieves the following results on the evaluation set: * Loss: 0.0547 * Accuracy: 0.9903 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 34650 * training\_steps: 346500 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.7.1 * Datasets 1.6.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 34650\n* training\\_steps: 346500", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-dbpedia_14 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 34650\n* training\\_steps: 346500", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-imdb This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4942 - Accuracy: 0.9126 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1546 - training_steps: 15468 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3952 | 0.65 | 2000 | 0.4012 | 0.86 | | 0.2954 | 1.29 | 4000 | 0.4535 | 0.892 | | 0.2595 | 1.94 | 6000 | 0.4320 | 0.892 | | 0.1516 | 2.59 | 8000 | 0.5309 | 0.896 | | 0.1167 | 3.23 | 10000 | 0.4070 | 0.928 | | 0.0624 | 3.88 | 12000 | 0.5055 | 0.908 | | 0.0329 | 4.52 | 14000 | 0.4342 | 0.92 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.6.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer", "sibyl"], "datasets": ["imdb"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-imdb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.91264, "name": "Accuracy"}]}]}]}
fabriceyhc/bert-base-uncased-imdb
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
bert-base-uncased-imdb ====================== This model is a fine-tuned version of bert-base-uncased on the imdb dataset. It achieves the following results on the evaluation set: * Loss: 0.4942 * Accuracy: 0.9126 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 1546 * training\_steps: 15468 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.7.1 * Datasets 1.6.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1546\n* training\\_steps: 15468", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1546\n* training\\_steps: 15468", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-yahoo_answers_topics This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the yahoo_answers_topics dataset. It achieves the following results on the evaluation set: - Loss: 0.8092 - Accuracy: 0.7499 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 86625 - training_steps: 866250 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.162 | 0.01 | 2000 | 1.7444 | 0.5681 | | 1.3126 | 0.02 | 4000 | 1.0081 | 0.7054 | | 0.9592 | 0.03 | 6000 | 0.9021 | 0.7234 | | 0.8903 | 0.05 | 8000 | 0.8827 | 0.7276 | | 0.8685 | 0.06 | 10000 | 0.8540 | 0.7341 | | 0.8422 | 0.07 | 12000 | 0.8547 | 0.7365 | | 0.8535 | 0.08 | 14000 | 0.8264 | 0.7372 | | 0.8178 | 0.09 | 16000 | 0.8331 | 0.7389 | | 0.8325 | 0.1 | 18000 | 0.8242 | 0.7411 | | 0.8181 | 0.12 | 20000 | 0.8356 | 0.7437 | | 0.8171 | 0.13 | 22000 | 0.8090 | 0.7451 | | 0.8092 | 0.14 | 24000 | 0.8469 | 0.7392 | | 0.8057 | 0.15 | 26000 | 0.8185 | 0.7478 | | 0.8085 | 0.16 | 28000 | 0.8090 | 0.7467 | | 0.8229 | 0.17 | 30000 | 0.8225 | 0.7417 | | 0.8151 | 0.18 | 32000 | 0.8262 | 0.7419 | | 0.81 | 0.2 | 34000 | 0.8149 | 0.7383 | | 0.8073 | 0.21 | 36000 | 0.8225 | 0.7441 | | 0.816 | 0.22 | 38000 | 0.8037 | 0.744 | | 0.8217 | 0.23 | 40000 | 0.8409 | 0.743 | | 0.82 | 0.24 | 42000 | 0.8286 | 0.7385 | | 0.8101 | 0.25 | 44000 | 0.8282 | 0.7413 | | 0.8254 | 0.27 | 46000 | 0.8170 | 0.7414 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.6.1 - Tokenizers 0.10.3
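The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as sketched below. This is not the authors' training script: `output_dir` is a placeholder, and the reported batch sizes are assumed to be per-device values.

```python
from transformers import TrainingArguments

# Sketch of the reported configuration; dataset loading, model creation, and the
# Trainer itself are omitted.
training_args = TrainingArguments(
    output_dir="./bert-base-uncased-yahoo_answers_topics",  # placeholder path
    learning_rate=5e-05,
    per_device_train_batch_size=8,   # assumed per-device
    per_device_eval_batch_size=16,   # assumed per-device
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_steps=86625,
    max_steps=866250,
)
```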
{"license": "apache-2.0", "tags": ["generated_from_trainer", "sibyl"], "datasets": ["yahoo_answers_topics"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-yahoo_answers_topics", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "yahoo_answers_topics", "type": "yahoo_answers_topics", "args": "yahoo_answers_topics"}, "metrics": [{"type": "accuracy", "value": 0.7499166666666667, "name": "Accuracy"}]}]}]}
fabriceyhc/bert-base-uncased-yahoo_answers_topics
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:yahoo_answers_topics", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-yahoo_answers_topics #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-yahoo\_answers\_topics ======================================== This model is a fine-tuned version of bert-base-uncased on the yahoo\_answers\_topics dataset. It achieves the following results on the evaluation set: * Loss: 0.8092 * Accuracy: 0.7499 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 86625 * training\_steps: 866250 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.7.1 * Datasets 1.6.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 86625\n* training\\_steps: 866250", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-yahoo_answers_topics #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 86625\n* training\\_steps: 866250", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-yelp_polarity This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the yelp_polarity dataset. It achieves the following results on the evaluation set: - Loss: 0.3222 - Accuracy: 0.9516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 277200 - training_steps: 2772000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.8067 | 0.0 | 2000 | 0.8241 | 0.4975 | | 0.5482 | 0.01 | 4000 | 0.3507 | 0.8591 | | 0.3427 | 0.01 | 6000 | 0.3750 | 0.9139 | | 0.4133 | 0.01 | 8000 | 0.5520 | 0.9016 | | 0.4301 | 0.02 | 10000 | 0.3803 | 0.9304 | | 0.3716 | 0.02 | 12000 | 0.4168 | 0.9337 | | 0.4076 | 0.03 | 14000 | 0.5042 | 0.9170 | | 0.3674 | 0.03 | 16000 | 0.4806 | 0.9268 | | 0.3813 | 0.03 | 18000 | 0.4227 | 0.9261 | | 0.3723 | 0.04 | 20000 | 0.3360 | 0.9418 | | 0.3876 | 0.04 | 22000 | 0.3255 | 0.9407 | | 0.3351 | 0.04 | 24000 | 0.3283 | 0.9404 | | 0.34 | 0.05 | 26000 | 0.3489 | 0.9430 | | 0.3006 | 0.05 | 28000 | 0.3302 | 0.9464 | | 0.349 | 0.05 | 30000 | 0.3853 | 0.9375 | | 0.3696 | 0.06 | 32000 | 0.2992 | 0.9454 | | 0.3301 | 0.06 | 34000 | 0.3484 | 0.9464 | | 0.3151 | 0.06 | 36000 | 0.3529 | 0.9455 | | 0.3682 | 0.07 | 38000 | 0.3052 | 0.9420 | | 0.3184 | 0.07 | 40000 | 0.3323 | 0.9466 | | 0.3207 | 0.08 | 42000 | 0.3133 | 0.9532 | | 0.3346 | 0.08 | 44000 | 0.3826 | 0.9414 | | 0.3008 | 0.08 | 46000 | 0.3059 | 0.9484 | | 0.3306 | 0.09 | 48000 | 0.3089 | 0.9475 | | 0.342 | 0.09 | 50000 | 0.3611 | 0.9486 | | 0.3424 | 0.09 | 52000 | 0.3227 | 0.9445 | | 0.3044 | 0.1 | 54000 | 0.3130 | 0.9489 | | 0.3278 | 0.1 | 56000 | 0.3827 | 0.9368 | | 0.288 | 0.1 | 58000 | 0.3080 | 0.9504 | | 0.3342 | 0.11 | 60000 | 0.3252 | 0.9471 | | 0.3737 | 0.11 | 62000 | 0.4250 | 0.9343 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.7.1 - Datasets 1.6.1 - Tokenizers 0.10.3
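For completeness (not part of the original card), here is a minimal sketch of manual inference with the checkpoint listed in this record, returning per-class probabilities; the review text is illustrative only.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "fabriceyhc/bert-base-uncased-yelp_polarity"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input only.
inputs = tokenizer("The service was slow, but the food more than made up for it.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two Yelp polarity classes.
print(logits.softmax(dim=-1))
```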
{"license": "apache-2.0", "tags": ["generated_from_trainer", "sibyl"], "datasets": ["yelp_polarity"], "metrics": ["accuracy"], "model-index": [{"name": "bert-base-uncased-yelp_polarity", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "yelp_polarity", "type": "yelp_polarity", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.9516052631578947, "name": "Accuracy"}]}]}]}
fabriceyhc/bert-base-uncased-yelp_polarity
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "sibyl", "dataset:yelp_polarity", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-yelp_polarity #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
bert-base-uncased-yelp\_polarity ================================ This model is a fine-tuned version of bert-base-uncased on the yelp\_polarity dataset. It achieves the following results on the evaluation set: * Loss: 0.3222 * Accuracy: 0.9516 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 277200 * training\_steps: 2772000 ### Training results ### Framework versions * Transformers 4.10.2 * Pytorch 1.7.1 * Datasets 1.6.1 * Tokenizers 0.10.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 277200\n* training\\_steps: 2772000", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
[ "TAGS\n#transformers #pytorch #bert #text-classification #generated_from_trainer #sibyl #dataset-yelp_polarity #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 277200\n* training\\_steps: 2772000", "### Training results", "### Framework versions\n\n\n* Transformers 4.10.2\n* Pytorch 1.7.1\n* Datasets 1.6.1\n* Tokenizers 0.10.3" ]
feature-extraction
transformers
# BART (base-sized model) BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import BartTokenizer, BartModel tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = BartModel.from_pretrained('facebook/bart-base') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"language": "en", "license": "apache-2.0"}
facebook/bart-base
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bart", "feature-extraction", "en", "arxiv:1910.13461", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1910.13461" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #safetensors #bart #feature-extraction #en #arxiv-1910.13461 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# BART (base-sized model) BART model pre-trained on English language. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository. Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ### BibTeX entry and citation info
[ "# BART (base-sized model) \n\nBART model pre-trained on English language. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository. \n\nDisclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).", "## Intended uses & limitations\n\nYou can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model in PyTorch:", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #jax #safetensors #bart #feature-extraction #en #arxiv-1910.13461 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# BART (base-sized model) \n\nBART model pre-trained on English language. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository. \n\nDisclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).", "## Intended uses & limitations\n\nYou can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model in PyTorch:", "### BibTeX entry and citation info" ]
summarization
transformers
# BART (large-sized model), fine-tuned on CNN Daily Mail BART model pre-trained on English language, and fine-tuned on [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail). It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs. ## Intended uses & limitations You can use this model for text summarization. ### How to use Here is how to use this model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html): ```python from transformers import pipeline summarizer = pipeline("summarization", model="facebook/bart-large-cnn") ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York. A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband. Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other. In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage. Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the 2010 marriage license application, according to court documents. Prosecutors said the marriages were part of an immigration scam. On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further. After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002. All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say. Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages. Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted. The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s Investigation Division. 
Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali. Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force. If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18. """ print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False)) >>> [{'summary_text': 'Liana Barrientos, 39, is charged with two counts of "offering a false instrument for filing in the first degree" In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men.'}] ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"language": ["en"], "license": "mit", "datasets": ["cnn_dailymail"], "pipeline_tag": "summarization", "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png", "model-index": [{"name": "facebook/bart-large-cnn", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "cnn_dailymail", "type": "cnn_dailymail", "config": "3.0.0", "split": "train"}, "metrics": [{"type": "rouge", "value": 42.9486, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 20.8149, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 30.6186, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 40.0376, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.529000997543335, "name": "loss", "verified": true}, {"type": "gen_len", "value": 78.5866, "name": "gen_len", "verified": true}]}]}]}
facebook/bart-large-cnn
null
[ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "bart", "text2text-generation", "summarization", "en", "dataset:cnn_dailymail", "arxiv:1910.13461", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1910.13461" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #safetensors #bart #text2text-generation #summarization #en #dataset-cnn_dailymail #arxiv-1910.13461 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
# BART (large-sized model), fine-tuned on CNN Daily Mail BART model pre-trained on English language, and fine-tuned on CNN Daily Mail. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository (URL). Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs. ## Intended uses & limitations You can use this model for text summarization. ### How to use Here is how to use this model with the pipeline API: ### BibTeX entry and citation info
[ "# BART (large-sized model), fine-tuned on CNN Daily Mail \n\nBART model pre-trained on English language, and fine-tuned on CNN Daily Mail. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository (URL \n\nDisclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.", "## Intended uses & limitations\n\nYou can use this model for text summarization.", "### How to use\n\nHere is how to use this model with the [pipeline API:", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #bart #text2text-generation #summarization #en #dataset-cnn_dailymail #arxiv-1910.13461 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# BART (large-sized model), fine-tuned on CNN Daily Mail \n\nBART model pre-trained on English language, and fine-tuned on CNN Daily Mail. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository (URL \n\nDisclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBART is a transformer encoder-encoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.", "## Intended uses & limitations\n\nYou can use this model for text summarization.", "### How to use\n\nHere is how to use this model with the [pipeline API:", "### BibTeX entry and citation info" ]
zero-shot-classification
transformers
# bart-large-mnli This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset. Additional information about this model: - The [bart-large](https://huggingface.co/facebook/bart-large) model page - [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension ](https://arxiv.org/abs/1910.13461) - [BART fairseq implementation](https://github.com/pytorch/fairseq/tree/master/fairseq/models/bart) ## NLI-based Zero Shot Text Classification [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a method for using pre-trained NLI models as ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and constructing a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities. This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code. #### With the zero-shot classification pipeline The model can be loaded with the `zero-shot-classification` pipeline like so: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli") ``` You can then use this pipeline to classify sequences into any of the class names you specify. ```python sequence_to_classify = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing'] classifier(sequence_to_classify, candidate_labels) #{'labels': ['travel', 'dancing', 'cooking'], # 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289], # 'sequence': 'one day I will see the world'} ``` If more than one candidate label can be correct, pass `multi_label=True` to calculate each class independently: ```python candidate_labels = ['travel', 'cooking', 'dancing', 'exploration'] classifier(sequence_to_classify, candidate_labels, multi_label=True) #{'labels': ['travel', 'exploration', 'dancing', 'cooking'], # 'scores': [0.9945111274719238, # 0.9383890628814697, # 0.0057061901316046715, # 0.0018193122232332826], # 'sequence': 'one day I will see the world'} ``` #### With manual PyTorch ```python # pose sequence as a NLI premise and label as a hypothesis from transformers import AutoModelForSequenceClassification, AutoTokenizer nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli') tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli') premise = sequence hypothesis = f'This example is {label}.' 
# run through model pre-trained on MNLI x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation_strategy='only_first') logits = nli_model(x.to(device))[0] # we throw away "neutral" (dim 1) and take the probability of # "entailment" (2) as the probability of the label being true entail_contradiction_logits = logits[:,[0,2]] probs = entail_contradiction_logits.softmax(dim=1) prob_label_is_true = probs[:,1] ```
{"license": "mit", "datasets": ["multi_nli"], "thumbnail": "https://huggingface.co/front/thumbnails/facebook.png", "pipeline_tag": "zero-shot-classification"}
facebook/bart-large-mnli
null
[ "transformers", "pytorch", "jax", "rust", "safetensors", "bart", "text-classification", "zero-shot-classification", "dataset:multi_nli", "arxiv:1910.13461", "arxiv:1909.00161", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1910.13461", "1909.00161" ]
[]
TAGS #transformers #pytorch #jax #rust #safetensors #bart #text-classification #zero-shot-classification #dataset-multi_nli #arxiv-1910.13461 #arxiv-1909.00161 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# bart-large-mnli This is the checkpoint for bart-large after being trained on the MultiNLI (MNLI) dataset. Additional information about this model: - The bart-large model page - BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension - BART fairseq implementation ## NLI-based Zero Shot Text Classification Yin et al. proposed a method for using pre-trained NLI models as ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and constructing a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of 'This text is about politics.'. The probabilities for entailment and contradiction are then converted to label probabilities. This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See this blog post for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code. #### With the zero-shot classification pipeline The model can be loaded with the 'zero-shot-classification' pipeline like so: You can then use this pipeline to classify sequences into any of the class names you specify. If more than one candidate label can be correct, pass 'multi_label=True' to calculate each class independently: #### With manual PyTorch
[ "# bart-large-mnli\n\nThis is the checkpoint for bart-large after being trained on the MultiNLI (MNLI) dataset.\n\nAdditional information about this model:\n- The bart-large model page\n- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension\n\n- BART fairseq implementation", "## NLI-based Zero Shot Text Classification\n\nYin et al. proposed a method for using pre-trained NLI models as a ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and to construct a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class \"politics\", we could construct a hypothesis of 'This text is about politics.'. The probabilities for entailment and contradiction are then converted to label probabilities.\n\nThis method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See this blog post for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.", "#### With the zero-shot classification pipeline\n\nThe model can be loaded with the 'zero-shot-classification' pipeline like so:\n\n\n\nYou can then use this pipeline to classify sequences into any of the class names you specify.\n\n\n\nIf more than one candidate label can be correct, pass 'multi_label=True' to calculate each class independently:", "#### With manual PyTorch" ]
[ "TAGS\n#transformers #pytorch #jax #rust #safetensors #bart #text-classification #zero-shot-classification #dataset-multi_nli #arxiv-1910.13461 #arxiv-1909.00161 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# bart-large-mnli\n\nThis is the checkpoint for bart-large after being trained on the MultiNLI (MNLI) dataset.\n\nAdditional information about this model:\n- The bart-large model page\n- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension\n\n- BART fairseq implementation", "## NLI-based Zero Shot Text Classification\n\nYin et al. proposed a method for using pre-trained NLI models as a ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and to construct a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class \"politics\", we could construct a hypothesis of 'This text is about politics.'. The probabilities for entailment and contradiction are then converted to label probabilities.\n\nThis method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See this blog post for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code.", "#### With the zero-shot classification pipeline\n\nThe model can be loaded with the 'zero-shot-classification' pipeline like so:\n\n\n\nYou can then use this pipeline to classify sequences into any of the class names you specify.\n\n\n\nIf more than one candidate label can be correct, pass 'multi_label=True' to calculate each class independently:", "#### With manual PyTorch" ]
summarization
transformers
### Bart model finetuned on xsum docs: https://huggingface.co/transformers/model_doc/bart.html finetuning: examples/seq2seq/ (as of Aug 20, 2020) Metrics: ROUGE > 22 on xsum. variants: search for distilbart paper: https://arxiv.org/abs/1910.13461
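Although this card is terse, the checkpoint can be used in the same way as the CNN/DailyMail model shown earlier in this collection, via the summarization `pipeline`; the input text and generation lengths below are illustrative only.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-xsum")

# Illustrative input only; XSum-style models produce short, single-sentence summaries.
text = (
    "The tower is 324 metres tall, about the same height as an 81-storey building, "
    "and is the tallest structure in Paris. Its base is square, measuring 125 metres on each side."
)
print(summarizer(text, max_length=60, min_length=10, do_sample=False))
```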
{"language": ["en"], "license": "mit", "tags": ["summarization"], "model-index": [{"name": "facebook/bart-large-xsum", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "cnn_dailymail", "type": "cnn_dailymail", "config": "3.0.0", "split": "test"}, "metrics": [{"type": "rouge", "value": 25.2697, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 7.6638, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 17.1808, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 21.7933, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 3.5042972564697266, "name": "loss", "verified": true}, {"type": "gen_len", "value": 27.4462, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "xsum", "type": "xsum", "config": "default", "split": "test"}, "metrics": [{"type": "rouge", "value": 45.4525, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 22.3455, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 37.2302, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 37.2323, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.3128726482391357, "name": "loss", "verified": true}, {"type": "gen_len", "value": 25.5435, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "samsum", "type": "samsum", "config": "samsum", "split": "train"}, "metrics": [{"type": "rouge", "value": 24.7852, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 5.2533, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 18.6792, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 20.629, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 3.746837854385376, "name": "loss", "verified": true}, {"type": "gen_len", "value": 23.1206, "name": "gen_len", "verified": true}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "samsum", "type": "samsum", "config": "samsum", "split": "test"}, "metrics": [{"type": "rouge", "value": 24.9158, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 5.5837, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 18.8935, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 20.76, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 3.775235891342163, "name": "loss", "verified": true}, {"type": "gen_len", "value": 23.0928, "name": "gen_len", "verified": true}]}]}]}
facebook/bart-large-xsum
null
[ "transformers", "pytorch", "tf", "jax", "rust", "bart", "text2text-generation", "summarization", "en", "arxiv:1910.13461", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1910.13461" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #bart #text2text-generation #summarization #en #arxiv-1910.13461 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
### Bart model finetuned on xsum docs: URL finetuning: examples/seq2seq/ (as of Aug 20, 2020) Metrics: ROUGE > 22 on xsum. variants: search for distilbart paper: URL
[ "### Bart model finetuned on xsum\n\ndocs: URL\n\nfinetuning: examples/seq2seq/ (as of Aug 20, 2020)\n\nMetrics: ROUGE > 22 on xsum.\n\nvariants: search for distilbart\n\npaper: URL" ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #bart #text2text-generation #summarization #en #arxiv-1910.13461 #license-mit #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Bart model finetuned on xsum\n\ndocs: URL\n\nfinetuning: examples/seq2seq/ (as of Aug 20, 2020)\n\nMetrics: ROUGE > 22 on xsum.\n\nvariants: search for distilbart\n\npaper: URL" ]
feature-extraction
transformers
# BART (large-sized model) BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart). Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import BartTokenizer, BartModel tokenizer = BartTokenizer.from_pretrained('facebook/bart-large') model = BartModel.from_pretrained('facebook/bart-large') inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1910-13461, author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov and Luke Zettlemoyer}, title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension}, journal = {CoRR}, volume = {abs/1910.13461}, year = {2019}, url = {http://arxiv.org/abs/1910.13461}, eprinttype = {arXiv}, eprint = {1910.13461}, timestamp = {Thu, 31 Oct 2019 14:02:26 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"language": "en", "license": "apache-2.0"}
facebook/bart-large
null
[ "transformers", "pytorch", "tf", "jax", "rust", "bart", "feature-extraction", "en", "arxiv:1910.13461", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1910.13461" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #rust #bart #feature-extraction #en #arxiv-1910.13461 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# BART (large-sized model) BART model pre-trained on English language. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository. Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). ## Intended uses & limitations You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ### BibTeX entry and citation info
[ "# BART (large-sized model) \n\nBART model pre-trained on English language. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository. \n\nDisclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).", "## Intended uses & limitations\n\nYou can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model in PyTorch:", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #jax #rust #bart #feature-extraction #en #arxiv-1910.13461 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# BART (large-sized model) \n\nBART model pre-trained on English language. It was introduced in the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. and first released in this repository. \n\nDisclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nBART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.\n\nBART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).", "## Intended uses & limitations\n\nYou can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the model hub to look for fine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model in PyTorch:", "### BibTeX entry and citation info" ]
text-generation
transformers
## Model description + Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
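The card does not include usage code, so the following is a minimal single-turn generation sketch using the standard Blenderbot classes; the prompt is illustrative only, and multi-turn history formatting is not covered here.

```python
from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

model_id = "facebook/blenderbot-1B-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_id)
model = BlenderbotForConditionalGeneration.from_pretrained(model_id)

# Illustrative single-turn prompt.
utterance = "My friends are cool but they eat too many carbs."
inputs = tokenizer([utterance], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```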
{"language": ["en"], "license": "apache-2.0", "tags": ["convAI", "conversational", "facebook"], "datasets": ["blended_skill_talk"], "metrics": ["perplexity"]}
facebook/blenderbot-1B-distill
null
[ "transformers", "pytorch", "blenderbot", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06616" ]
[ "en" ]
TAGS #transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
## Model description + Paper: Recipes for building an open-domain chatbot + Original PARLAI Code ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
[ "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
[ "TAGS\n#transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
text-generation
transformers
## Model description + Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
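The blenderbot-3B card above stops at the paper abstract and carries no usage snippet. As a minimal, non-authoritative sketch (the example utterance and the default generation settings are illustrative and not taken from the card), the checkpoint can be loaded through the `BlenderbotTokenizer` and `BlenderbotForConditionalGeneration` classes in `transformers`:

```python
# Hedged sketch: single-turn generation with facebook/blenderbot-3B.
# Note: this checkpoint has ~3B parameters and needs substantial memory;
# facebook/blenderbot-400M-distill exposes the same interface if resources are tight.
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-3B"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

utterance = "My friends are cool but they eat too many carbs."  # illustrative input
inputs = tokenizer([utterance], return_tensors="pt")

reply_ids = model.generate(**inputs)  # default generation settings, for illustration only
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```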
{"language": ["en"], "license": "apache-2.0", "tags": ["convAI", "conversational", "facebook"], "datasets": ["blended_skill_talk"], "metrics": ["perplexity"]}
facebook/blenderbot-3B
null
[ "transformers", "pytorch", "blenderbot", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06616" ]
[ "en" ]
TAGS #transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
## Model description + Paper: Recipes for building an open-domain chatbot + Original PARLAI Code ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
[ "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
[ "TAGS\n#transformers #pytorch #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
text-generation
transformers
## Model description + Paper: [Recipes for building an open-domain chatbot]( https://arxiv.org/abs/2004.13637) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
{"language": ["en"], "license": "apache-2.0", "tags": ["convAI", "conversational", "facebook"], "datasets": ["blended_skill_talk"], "metrics": ["perplexity"]}
facebook/blenderbot-400M-distill
null
[ "transformers", "pytorch", "tf", "jax", "blenderbot", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:2004.13637", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.13637" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-2004.13637 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
## Model description + Paper: Recipes for building an open-domain chatbot + Original PARLAI Code ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
[ "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
[ "TAGS\n#transformers #pytorch #tf #jax #blenderbot #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-2004.13637 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
text-generation
transformers
# 🚨🚨**IMPORTANT**🚨🚨 **This model is deprecated! Please use the identical model** **https://huggingface.co/facebook/blenderbot_small-90M instead** ## Model description + Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
{"language": ["en"], "license": "apache-2.0", "tags": ["convAI", "conversational", "facebook"], "datasets": ["blended_skill_talk"], "metrics": ["perplexity"]}
facebook/blenderbot-90M
null
[ "transformers", "pytorch", "blenderbot-small", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06616" ]
[ "en" ]
TAGS #transformers #pytorch #blenderbot-small #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# IMPORTANT This model is deprecated! Please use the identical model URL instead ## Model description + Paper: Recipes for building an open-domain chatbot + Original PARLAI Code ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
[ "# IMPORTANT\n\nThis model is deprecated! Please use the identical model URL instead", "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
[ "TAGS\n#transformers #pytorch #blenderbot-small #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# IMPORTANT\n\nThis model is deprecated! Please use the identical model URL instead", "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
text-generation
transformers
## Model description + Paper: [Recipes for building an open-domain chatbot](https://arxiv.org/abs/1907.06616) + [Original PARLAI Code](https://parl.ai/projects/recipes/) ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
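Like the larger checkpoints, the blenderbot_small-90M card above gives no usage code. A minimal sketch, assuming the `BlenderbotSmall*` classes in `transformers` (the small checkpoints use a different tokenizer/model pair than the 3B and 400M ones; the input sentence and default generation settings are illustrative):

```python
# Hedged sketch: single-turn generation with facebook/blenderbot_small-90M,
# which is served by the BlenderbotSmall classes rather than the Blenderbot ones.
from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration

name = "facebook/blenderbot_small-90M"
tokenizer = BlenderbotSmallTokenizer.from_pretrained(name)
model = BlenderbotSmallForConditionalGeneration.from_pretrained(name)

utterance = "Can you recommend a good book?"  # illustrative input
inputs = tokenizer([utterance], return_tensors="pt")

reply_ids = model.generate(**inputs)  # default generation settings, for illustration only
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```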
{"language": ["en"], "license": "apache-2.0", "tags": ["convAI", "conversational", "facebook"], "datasets": ["blended_skill_talk"], "metrics": ["perplexity"]}
facebook/blenderbot_small-90M
null
[ "transformers", "pytorch", "tf", "jax", "blenderbot-small", "text2text-generation", "convAI", "conversational", "facebook", "en", "dataset:blended_skill_talk", "arxiv:1907.06616", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "1907.06616" ]
[ "en" ]
TAGS #transformers #pytorch #tf #jax #blenderbot-small #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
## Model description + Paper: Recipes for building an open-domain chatbot + Original PARLAI Code ### Abstract Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
[ "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
[ "TAGS\n#transformers #pytorch #tf #jax #blenderbot-small #text2text-generation #convAI #conversational #facebook #en #dataset-blended_skill_talk #arxiv-1907.06616 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "## Model description\n\n+ Paper: Recipes for building an open-domain chatbot\n+ Original PARLAI Code", "### Abstract\n\n\nBuilding open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, both asking and answering questions, and displaying knowledge, empathy and personality appropriately, depending on the situation. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter neural models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models." ]
feature-extraction
transformers
This model is the fine-tuned version of the pre-trained contriever model available here https://huggingface.co/facebook/contriever, following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available here https://github.com/facebookresearch/contriever. ## Usage (HuggingFace Transformers) Using the model directly available in HuggingFace transformers requires adding a mean pooling operation to obtain a sentence embedding. ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('facebook/contriever-msmarco') model = AutoModel.from_pretrained('facebook/contriever-msmarco') sentences = [ "Where was Marie Curie born?", "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.", "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace." ] # Apply tokenizer inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings outputs = model(**inputs) # Mean pooling def mean_pooling(token_embeddings, mask): token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.) sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None] return sentence_embeddings embeddings = mean_pooling(outputs[0], inputs['attention_mask']) ```
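The snippet in the card above stops at sentence embeddings. As a short follow-up sketch (not part of the original card), those embeddings can be turned into retrieval scores with a plain dot product, which is how Contriever-style dense retrieval ranks passages against a query; the variable names reuse `embeddings` and `sentences` from the card's example:

```python
# Hedged follow-up: rank the two passages against the query by dot-product similarity.
# `embeddings` and `sentences` come from the mean-pooling example in the card above.
query_emb, passage_embs = embeddings[0], embeddings[1:]
scores = passage_embs @ query_emb            # one similarity score per passage
best = int(scores.argmax())
print(f"Best passage: {sentences[1 + best]!r} (score={scores[best].item():.2f})")
```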
{"tags": ["feature-extraction"], "pipeline_tag": "feature-extraction"}
facebook/contriever-msmarco
null
[ "transformers", "pytorch", "bert", "feature-extraction", "arxiv:2112.09118", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2112.09118" ]
[]
TAGS #transformers #pytorch #bert #feature-extraction #arxiv-2112.09118 #endpoints_compatible #has_space #region-us
This model is the fine-tuned version of the pre-trained contriever model available here URL, following the approach described in Towards Unsupervised Dense Information Retrieval with Contrastive Learning. The associated GitHub repository is available here URL. ## Usage (HuggingFace Transformers) Using the model directly available in HuggingFace transformers requires adding a mean pooling operation to obtain a sentence embedding.
[ "## Usage (HuggingFace Transformers)\nUsing the model directly available in HuggingFace transformers requires to add a mean pooling operation to obtain a sentence embedding." ]
[ "TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2112.09118 #endpoints_compatible #has_space #region-us \n", "## Usage (HuggingFace Transformers)\nUsing the model directly available in HuggingFace transformers requires to add a mean pooling operation to obtain a sentence embedding." ]
null
transformers
This model has been trained without supervision, following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available here https://github.com/facebookresearch/contriever. ## Usage (HuggingFace Transformers) Using the model directly available in HuggingFace transformers requires adding a mean pooling operation to obtain a sentence embedding. ```python import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('facebook/contriever') model = AutoModel.from_pretrained('facebook/contriever') sentences = [ "Where was Marie Curie born?", "Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.", "Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace." ] # Apply tokenizer inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings outputs = model(**inputs) # Mean pooling def mean_pooling(token_embeddings, mask): token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.) sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None] return sentence_embeddings embeddings = mean_pooling(outputs[0], inputs['attention_mask']) ```
{}
facebook/contriever
null
[ "transformers", "pytorch", "bert", "arxiv:2112.09118", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2112.09118" ]
[]
TAGS #transformers #pytorch #bert #arxiv-2112.09118 #endpoints_compatible #has_space #region-us
This model has been trained without supervision, following the approach described in Towards Unsupervised Dense Information Retrieval with Contrastive Learning. The associated GitHub repository is available here URL. ## Usage (HuggingFace Transformers) Using the model directly available in HuggingFace transformers requires adding a mean pooling operation to obtain a sentence embedding.
[ "## Usage (HuggingFace Transformers)\nUsing the model directly available in HuggingFace transformers requires to add a mean pooling operation to obtain a sentence embedding." ]
[ "TAGS\n#transformers #pytorch #bert #arxiv-2112.09118 #endpoints_compatible #has_space #region-us \n", "## Usage (HuggingFace Transformers)\nUsing the model directly available in HuggingFace transformers requires to add a mean pooling operation to obtain a sentence embedding." ]
image-classification
transformers
# ConvNeXT (base-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-base-224-22k-1k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k-1k") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k", "imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-base-224-22k-1k
null
[ "transformers", "pytorch", "tf", "safetensors", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #safetensors #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (base-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
[ "# ConvNeXT (base-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (base-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (base-sized model) ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-22k classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-base-224-22k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224-22k") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 22k ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-base-224-22k
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# ConvNeXT (base-sized model) ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-22k classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
[ "# ConvNeXT (base-sized model) \n\nConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# ConvNeXT (base-sized model) \n\nConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (base-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-base-224") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-224") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-base-224
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# ConvNeXT (base-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
[ "# ConvNeXT (base-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# ConvNeXT (base-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (base-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-base-384-22k-1k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-384-22k-1k") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k", "imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-base-384-22k-1k
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (base-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
[ "# ConvNeXT (base-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (base-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (base-sized model) ConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-base-384") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-base-384") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-base-384
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (base-sized model) ConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
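The stripped copy above omits the code block. A short sketch using the high-level `pipeline` API; the image URL is an arbitrary COCO 2017 example and is not part of the original card:

```python
from transformers import pipeline

# Build an image-classification pipeline backed by this checkpoint.
classifier = pipeline("image-classification", model="facebook/convnext-base-384")

# The pipeline accepts a URL, a local path, or a PIL.Image.
for pred in classifier("http://images.cocodataset.org/val2017/000000039769.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```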
[ "# ConvNeXT (base-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (base-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (large-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-224-22k-1k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224-22k-1k") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1k ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-large-224-22k-1k
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (large-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
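A possible way to inspect more than the top prediction for `facebook/convnext-large-224-22k-1k`; the image URL and the top-5 reporting are illustrative additions, not taken from the card:

```python
from PIL import Image
import requests
import torch
from transformers import ConvNextImageProcessor, ConvNextForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224-22k-1k")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Show the five highest-scoring ImageNet-1k labels rather than only the argmax.
probs = logits.softmax(-1)[0]
top5 = torch.topk(probs, k=5)
for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```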
[ "# ConvNeXT (large-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (large-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (large-sized model) ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-large-224-22k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224-22k") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 22k ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-large-224-22k
null
[ "transformers", "pytorch", "tf", "safetensors", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #safetensors #convnext #image-classification #vision #dataset-imagenet-21k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (large-sized model) ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
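A minimal sketch for `facebook/convnext-large-224-22k`, which keeps the ImageNet-22k classification head (the record's own snippet notes it predicts one of the 22k classes); the image URL is an assumed example:

```python
from PIL import Image
import requests
import torch
from transformers import ConvNextImageProcessor, ConvNextForImageClassification

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-224-22k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224-22k")

with torch.no_grad():
    logits = model(**processor(image, return_tensors="pt")).logits

# This checkpoint keeps the ImageNet-22k head, so the prediction is one of the 22k labels.
print(model.config.id2label[logits.argmax(-1).item()])
```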
[ "# ConvNeXT (large-sized model) \n\nConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #convnext #image-classification #vision #dataset-imagenet-21k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (large-sized model) \n\nConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (large-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-224") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-large-224
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (large-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
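A hedged sketch showing optional GPU placement for `facebook/convnext-large-224`; the device handling and the image URL are additions for illustration:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

device = "cuda" if torch.cuda.is_available() else "cpu"

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)

processor = AutoImageProcessor.from_pretrained("facebook/convnext-large-224")
model = AutoModelForImageClassification.from_pretrained("facebook/convnext-large-224").to(device)
model.eval()

# BatchFeature supports .to(device), so inputs can follow the model.
inputs = processor(image, return_tensors="pt").to(device)
with torch.no_grad():
    logits = model(**inputs).logits

print(model.config.id2label[logits.argmax(-1).item()])
```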
[ "# ConvNeXT (large-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (large-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (large-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-large-384-22k-1k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-384-22k-1k") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k", "imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-large-384-22k-1k
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (large-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
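A sketch of batched inference with `facebook/convnext-large-384-22k-1k`; the two COCO URLs and the batching pattern are illustrative assumptions:

```python
import requests
import torch
from PIL import Image
from transformers import ConvNextImageProcessor, ConvNextForImageClassification

urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "http://images.cocodataset.org/val2017/000000397133.jpg",
]
images = [Image.open(requests.get(u, stream=True).raw) for u in urls]

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-384-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-384-22k-1k")

# The processor stacks a list of images into a single pixel_values tensor.
inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch_size, 1000)

for row in logits:
    print(model.config.id2label[row.argmax(-1).item()])
```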
[ "# ConvNeXT (large-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (large-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (large-sized model) ConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-large-384") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-384") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-large-384
null
[ "transformers", "pytorch", "tf", "safetensors", "convnext", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #safetensors #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (large-sized model) ConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
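The tags above list TensorFlow weights, so a TF sketch may be useful; this assumes the checkpoint loads through `TFConvNextForImageClassification` in a recent `transformers` release, and the image URL is an arbitrary example:

```python
import requests
import tensorflow as tf
from PIL import Image
from transformers import ConvNextImageProcessor, TFConvNextForImageClassification

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-384")
model = TFConvNextForImageClassification.from_pretrained("facebook/convnext-large-384")

inputs = processor(image, return_tensors="tf")
logits = model(**inputs).logits

predicted_label = int(tf.math.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted_label])
```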
[ "# ConvNeXT (large-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (large-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (small-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-small-224") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-small-224") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-small-224
null
[ "transformers", "pytorch", "tf", "safetensors", "convnext", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #safetensors #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (small-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
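This processed copy drops the snippet kept in the full card earlier in the record; the same pattern, reconstructed as a runnable sketch:

```python
import torch
from datasets import load_dataset
from transformers import ConvNextImageProcessor, ConvNextForImageClassification

# Same sample image used in the full card: a single cat photo from this small dataset.
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-small-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-small-224")

inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The model predicts one of the 1,000 ImageNet-1k classes.
print(model.config.id2label[logits.argmax(-1).item()])
```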
[ "# ConvNeXT (large-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (large-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (tiny-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-tiny-224") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]), ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-tiny-224
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# ConvNeXT (tiny-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
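The card points readers to fine-tuned variants; a common starting point is to reload this checkpoint with a fresh classification head sized for your own label set. The three-class label set below is hypothetical and only illustrates the head replacement:

```python
from transformers import ConvNextForImageClassification

labels = ["cat", "dog", "bird"]  # hypothetical label set
model = ConvNextForImageClassification.from_pretrained(
    "facebook/convnext-tiny-224",
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # replace the original 1,000-way ImageNet head
)
# The new head is randomly initialized and still needs fine-tuning on your data.
print(model.config.num_labels)  # 3
```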
[ "# ConvNeXT (tiny-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# ConvNeXT (tiny-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (xlarge-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-xlarge-224-22k-1k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-xlarge-224-22k-1k") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k", "imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-xlarge-224-22k-1k
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (xlarge-sized model) ConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
[ "# ConvNeXT (xlarge-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (xlarge-sized model) \n\nConvNeXT model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (xlarge-sized model) ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextFeatureExtractor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = ConvNextFeatureExtractor.from_pretrained("facebook/convnext-xlarge-224-22k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-xlarge-224-22k") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 22k ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-xlarge-224-22k
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# ConvNeXT (xlarge-sized model) ConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
[ "# ConvNeXT (xlarge-sized model) \n\nConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# ConvNeXT (xlarge-sized model) \n\nConvNeXT model trained on ImageNet-22k at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
image-classification
transformers
# ConvNeXT (xlarge-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt). Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import ConvNextImageProcessor, ConvNextForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-xlarge-384-22k-1k") model = ConvNextForImageClassification.from_pretrained("facebook/convnext-xlarge-384-22k-1k") inputs = processor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2201-03545, author = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {CoRR}, volume = {abs/2201.03545}, year = {2022}, url = {https://arxiv.org/abs/2201.03545}, eprinttype = {arXiv}, eprint = {2201.03545}, timestamp = {Thu, 20 Jan 2022 14:21:35 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
{"license": "apache-2.0", "tags": ["vision", "image-classification"], "datasets": ["imagenet-21k", "imagenet-1k"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg", "example_title": "Tiger"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg", "example_title": "Teapot"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg", "example_title": "Palace"}]}
facebook/convnext-xlarge-384-22k-1k
null
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2201.03545" ]
[]
TAGS #transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# ConvNeXT (xlarge-sized model) ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. Disclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration. !model image ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: For more code examples, we refer to the documentation. ### BibTeX entry and citation info
[ "# ConvNeXT (xlarge-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #convnext #image-classification #vision #dataset-imagenet-21k #dataset-imagenet-1k #arxiv-2201.03545 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# ConvNeXT (xlarge-sized model) \n\nConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 384x384. It was introduced in the paper A ConvNet for the 2020s by Liu et al. and first released in this repository. \n\nDisclaimer: The team releasing ConvNeXT did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and \"modernized\" its design by taking the Swin Transformer as inspiration.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\n\nFor more code examples, we refer to the documentation.", "### BibTeX entry and citation info" ]
automatic-speech-recognition
transformers
# Data2Vec-Audio-Base-100h [Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/) The base model pretrained and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. [Paper](https://arxiv.org/abs/2202.03555) Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli **Abstract** While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec . # Pre-Training method ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png) For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555). # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Data2VecAudioForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-100h") model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-100h") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ```
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
facebook/data2vec-audio-base-100h
null
[ "transformers", "pytorch", "data2vec-audio", "automatic-speech-recognition", "speech", "en", "dataset:librispeech_asr", "arxiv:2202.03555", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2202.03555" ]
[ "en" ]
TAGS #transformers #pytorch #data2vec-audio #automatic-speech-recognition #speech #en #dataset-librispeech_asr #arxiv-2202.03555 #license-apache-2.0 #endpoints_compatible #region-us
# Data2Vec-Audio-Base-100h Facebook's Data2Vec The base model pretrained and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Paper Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli Abstract While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under URL . # Pre-Training method !model image For more information, please take a look at the official paper. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows:
[ "# Data2Vec-Audio-Base-100h\n\nFacebook's Data2Vec\n\nThe base model pretrained and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model\nmake sure that your speech input is also sampled at 16Khz.\n\nPaper\n\nAuthors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli\n\nAbstract\n\nWhile the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.\n\nThe original model can be found under URL .", "# Pre-Training method\n\n!model image\n\nFor more information, please take a look at the official paper.", "# Usage\n\nTo transcribe audio files the model can be used as a standalone acoustic model as follows:" ]
[ "TAGS\n#transformers #pytorch #data2vec-audio #automatic-speech-recognition #speech #en #dataset-librispeech_asr #arxiv-2202.03555 #license-apache-2.0 #endpoints_compatible #region-us \n", "# Data2Vec-Audio-Base-100h\n\nFacebook's Data2Vec\n\nThe base model pretrained and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model\nmake sure that your speech input is also sampled at 16Khz.\n\nPaper\n\nAuthors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli\n\nAbstract\n\nWhile the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.\n\nThe original model can be found under URL .", "# Pre-Training method\n\n!model image\n\nFor more information, please take a look at the official paper.", "# Usage\n\nTo transcribe audio files the model can be used as a standalone acoustic model as follows:" ]
automatic-speech-recognition
transformers
# Data2Vec-Audio-Base-10m [Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/) The base model pretrained and fine-tuned on 10 minutes of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. [Paper](https://arxiv.org/abs/2202.03555) Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli **Abstract** While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec . # Pre-Training method ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png) For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555). # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Data2VecAudioForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-10m") model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-10m") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ```
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
facebook/data2vec-audio-base-10m
null
[ "transformers", "pytorch", "data2vec-audio", "automatic-speech-recognition", "speech", "en", "dataset:librispeech_asr", "arxiv:2202.03555", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2202.03555" ]
[ "en" ]
TAGS #transformers #pytorch #data2vec-audio #automatic-speech-recognition #speech #en #dataset-librispeech_asr #arxiv-2202.03555 #license-apache-2.0 #endpoints_compatible #region-us
# Data2Vec-Audio-Base-10m Facebook's Data2Vec The base model pretrained and fine-tuned on 10 minutes of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Paper Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli Abstract While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under URL . # Pre-Training method !model image For more information, please take a look at the official paper. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows:
[ "# Data2Vec-Audio-Base-10m\n\nFacebook's Data2Vec\n\nThe base model pretrained and fine-tuned on 10 minutes of Librispeech on 16kHz sampled speech audio. When using the model\nmake sure that your speech input is also sampled at 16Khz.\n\nPaper\n\nAuthors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli\n\nAbstract\n\nWhile the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.\n\nThe original model can be found under URL .", "# Pre-Training method\n\n!model image\n\nFor more information, please take a look at the official paper.", "# Usage\n\nTo transcribe audio files the model can be used as a standalone acoustic model as follows:" ]
[ "TAGS\n#transformers #pytorch #data2vec-audio #automatic-speech-recognition #speech #en #dataset-librispeech_asr #arxiv-2202.03555 #license-apache-2.0 #endpoints_compatible #region-us \n", "# Data2Vec-Audio-Base-10m\n\nFacebook's Data2Vec\n\nThe base model pretrained and fine-tuned on 10 minutes of Librispeech on 16kHz sampled speech audio. When using the model\nmake sure that your speech input is also sampled at 16Khz.\n\nPaper\n\nAuthors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli\n\nAbstract\n\nWhile the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.\n\nThe original model can be found under URL .", "# Pre-Training method\n\n!model image\n\nFor more information, please take a look at the official paper.", "# Usage\n\nTo transcribe audio files the model can be used as a standalone acoustic model as follows:" ]
automatic-speech-recognition
transformers
# Data2Vec-Audio-Base-960h [Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/) The base model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. [Paper](https://arxiv.org/abs/2202.03555) Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli **Abstract** While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec . # Pre-Training method ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png) For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555). # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Data2VecAudioForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h") model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/data2vec-audio-base-960h** on LibriSpeech's "clean" and "other" test data.
```python from transformers import Wav2Vec2Processor, Data2VecAudioForCTC from datasets import load_dataset import torch from jiwer import wer # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/data2vec-audio-base-960h") model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h").to("cuda") librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") def map_to_pred(batch): input_values = processor(batch["audio"][0]["array"], return_tensors="pt", padding="longest").input_values with torch.no_grad(): logits = model(input_values.to("cuda")).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 2.77 | 7.08 |
{"language": "en", "license": "apache-2.0", "tags": ["speech", "hf-asr-leaderboard"], "datasets": ["librispeech_asr"], "widget": [{"example_title": "Librispeech sample 1", "src": "https://cdn-media.huggingface.co/speech_samples/sample1.flac"}, {"example_title": "Librispeech sample 2", "src": "https://cdn-media.huggingface.co/speech_samples/sample2.flac"}], "model-index": [{"name": "data2vec-audio-base-960h", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 2.77, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (other)", "type": "librispeech_asr", "config": "other", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 7.08, "name": "Test WER"}]}]}]}
facebook/data2vec-audio-base-960h
null
[ "transformers", "pytorch", "data2vec-audio", "automatic-speech-recognition", "speech", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2202.03555", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2202.03555" ]
[ "en" ]
TAGS #transformers #pytorch #data2vec-audio #automatic-speech-recognition #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2202.03555 #license-apache-2.0 #model-index #endpoints_compatible #region-us
Data2Vec-Audio-Base-960h ======================== Facebook's Data2Vec The base model pretrained and fine-tuned on 960 hours of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Paper Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli Abstract While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under URL . Pre-Training method =================== !model image For more information, please take a look at the official paper. Usage ===== To transcribe audio files the model can be used as a standalone acoustic model as follows: Evaluation ---------- This code snippet shows how to evaluate facebook/data2vec-audio-base-960h on LibriSpeech's "clean" and "other" test data. *Result (WER)*:
[]
[ "TAGS\n#transformers #pytorch #data2vec-audio #automatic-speech-recognition #speech #hf-asr-leaderboard #en #dataset-librispeech_asr #arxiv-2202.03555 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
# Data2Vec-Audio-Base [Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more in-detail explanation of how to fine-tune the model. [Paper](https://arxiv.org/abs/2202.03555) Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli **Abstract** While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec . # Pre-Training method ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png) For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555). # Usage See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
facebook/data2vec-audio-base
null
[ "transformers", "pytorch", "data2vec-audio", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2202.03555", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2202.03555" ]
[ "en" ]
TAGS #transformers #pytorch #data2vec-audio #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2202.03555 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Data2Vec-Audio-Base Facebook's Data2Vec The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model. Paper Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli Abstract While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. The original model can be found under URL . # Pre-Training method !model image For more information, please take a look at the official paper. # Usage See this notebook for more information on how to fine-tune the model.
[ "# Data2Vec-Audio-Base\n\nFacebook's Data2Vec\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. \n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model.\n\nPaper\n\nAuthors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli\n\nAbstract\n\nWhile the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.\n\nThe original model can be found under URL .", "# Pre-Training method\n\n!model image\n\nFor more information, please take a look at the official paper.", "# Usage\n\nSee this notebook for more information on how to fine-tune the model." ]
[ "TAGS\n#transformers #pytorch #data2vec-audio #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2202.03555 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Data2Vec-Audio-Base\n\nFacebook's Data2Vec\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. \n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model.\n\nPaper\n\nAuthors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli\n\nAbstract\n\nWhile the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.\n\nThe original model can be found under URL .", "# Pre-Training method\n\n!model image\n\nFor more information, please take a look at the official paper.", "# Usage\n\nSee this notebook for more information on how to fine-tune the model." ]
feature-extraction
transformers
# Data2Vec-Text base model Pretrained model on English language using the *data2vec* objective. It was introduced in [this paper](https://arxiv.org/abs/2202.03555) and first released in [this repository](https://github.com/pytorch/fairseq/tree/main/examples/data2vec). This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing Data2Vec-Text did not write a model card for this model so this model card has been written by the Hugging Face team. ## Pre-Training method ![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png) For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555). ## Abstract *While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a selfdistillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.* ## Intended uses & limitations The model is intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=data2vec-text) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ## Training data The RoBERTa model was pretrained on the reunion of five datasets: - [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books; - [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers) ; - [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/), a dataset containing 63 millions English news articles crawled between September 2016 and February 2019. - [OpenWebText](https://github.com/jcpeterson/openwebtext), an opensource recreation of the WebText dataset used to train GPT-2, - [Stories](https://arxiv.org/abs/1806.02847) a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas. Together theses datasets weight 160GB of text. 
### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2202.03555, doi = {10.48550/ARXIV.2202.03555}, url = {https://arxiv.org/abs/2202.03555}, author = {Baevski, Alexei and Hsu, Wei-Ning and Xu, Qiantong and Babu, Arun and Gu, Jiatao and Auli, Michael}, keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
facebook/data2vec-text-base
null
[ "transformers", "pytorch", "data2vec-text", "feature-extraction", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2202.03555", "arxiv:1806.02847", "license:mit", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2202.03555", "1806.02847" ]
[ "en" ]
TAGS #transformers #pytorch #data2vec-text #feature-extraction #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2202.03555 #arxiv-1806.02847 #license-mit #endpoints_compatible #has_space #region-us
# Data2Vec-Text base model Pretrained model on English language using the *data2vec* objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing Data2Vec-Text did not write a model card for this model so this model card has been written by the Hugging Face team. ## Pre-Training method !model image For more information, please take a look at the official paper. ## Abstract *While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a selfdistillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.* ## Intended uses & limitations The model is intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ## Training data The RoBERTa model was pretrained on the reunion of five datasets: - BookCorpus, a dataset consisting of 11,038 unpublished books; - English Wikipedia (excluding lists, tables and headers) ; - CC-News, a dataset containing 63 millions English news articles crawled between September 2016 and February 2019. - OpenWebText, an opensource recreation of the WebText dataset used to train GPT-2, - Stories a dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas. Together theses datasets weight 160GB of text. ### BibTeX entry and citation info
[ "# Data2Vec-Text base model\n\nPretrained model on English language using the *data2vec* objective. It was introduced in\nthis paper and first released in\nthis repository. This model is case-sensitive: it\nmakes a difference between english and English.\n\nDisclaimer: The team releasing Data2Vec-Text did not write a model card for this model so this model card has been written by\nthe Hugging Face team.", "## Pre-Training method\n\n!model image\n\nFor more information, please take a look at the official paper.", "## Abstract\n\n*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because\nthey were developed with a single modality in\nmind. To get us closer to general self-supervised\nlearning, we present data2vec, a framework that\nuses the same learning method for either speech,\nNLP or computer vision. The core idea is to predict latent representations of the full input data\nbased on a masked view of the input in a selfdistillation setup using a standard Transformer architecture. Instead of predicting modality-specific\ntargets such as words, visual tokens or units of\nhuman speech which are local in nature, data2vec\npredicts contextualized latent representations that\ncontain information from the entire input. Experiments on the major benchmarks of speech\nrecognition, image classification, and natural language understanding demonstrate a new state of\nthe art or competitive performance to predominant approaches.*", "## Intended uses & limitations\n\nThe model is intended to be fine-tuned on a downstream task.\nSee the model hub to look for fine-tuned versions on a task that\ninterests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.", "## Training data\n\nThe RoBERTa model was pretrained on the reunion of five datasets:\n- BookCorpus, a dataset consisting of 11,038 unpublished books;\n- English Wikipedia (excluding lists, tables and headers) ;\n- CC-News, a dataset containing 63 millions English news\n articles crawled between September 2016 and February 2019.\n- OpenWebText, an opensource recreation of the WebText dataset used to\n train GPT-2,\n- Stories a dataset containing a subset of CommonCrawl data filtered to match the\n story-like style of Winograd schemas.\n\nTogether theses datasets weight 160GB of text.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #data2vec-text #feature-extraction #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-2202.03555 #arxiv-1806.02847 #license-mit #endpoints_compatible #has_space #region-us \n", "# Data2Vec-Text base model\n\nPretrained model on English language using the *data2vec* objective. It was introduced in\nthis paper and first released in\nthis repository. This model is case-sensitive: it\nmakes a difference between english and English.\n\nDisclaimer: The team releasing Data2Vec-Text did not write a model card for this model so this model card has been written by\nthe Hugging Face team.", "## Pre-Training method\n\n!model image\n\nFor more information, please take a look at the official paper.", "## Abstract\n\n*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because\nthey were developed with a single modality in\nmind. To get us closer to general self-supervised\nlearning, we present data2vec, a framework that\nuses the same learning method for either speech,\nNLP or computer vision. The core idea is to predict latent representations of the full input data\nbased on a masked view of the input in a selfdistillation setup using a standard Transformer architecture. Instead of predicting modality-specific\ntargets such as words, visual tokens or units of\nhuman speech which are local in nature, data2vec\npredicts contextualized latent representations that\ncontain information from the entire input. Experiments on the major benchmarks of speech\nrecognition, image classification, and natural language understanding demonstrate a new state of\nthe art or competitive performance to predominant approaches.*", "## Intended uses & limitations\n\nThe model is intended to be fine-tuned on a downstream task.\nSee the model hub to look for fine-tuned versions on a task that\ninterests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.", "## Training data\n\nThe RoBERTa model was pretrained on the reunion of five datasets:\n- BookCorpus, a dataset consisting of 11,038 unpublished books;\n- English Wikipedia (excluding lists, tables and headers) ;\n- CC-News, a dataset containing 63 millions English news\n articles crawled between September 2016 and February 2019.\n- OpenWebText, an opensource recreation of the WebText dataset used to\n train GPT-2,\n- Stories a dataset containing a subset of CommonCrawl data filtered to match the\n story-like style of Winograd schemas.\n\nTogether theses datasets weight 160GB of text.", "### BibTeX entry and citation info" ]
image-classification
transformers
# Distilled Data-efficient Image Transformer (base-sized model) Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-distilled-patch16-224') model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-base-distilled-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") # forward pass outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. 
For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | **DeiT-base distilled** | **83.4** | **96.5** | **87M** | **https://huggingface.co/facebook/deit-base-distilled-patch16-224** | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
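The preprocessing described above (resize/rescale to 256x256, center-crop to 224x224, normalize with the ImageNet mean and standard deviation) is applied automatically by AutoFeatureExtractor; purely as an illustration, a roughly equivalent torchvision pipeline might look like the sketch below (the exact statistics are an assumption, since the card only says "ImageNet mean and standard deviation"):

```python
from torchvision import transforms

# Illustrative sketch only — AutoFeatureExtractor already performs these steps.
# The mean/std values are the standard ImageNet statistics.
inference_transform = transforms.Compose([
    transforms.Resize((256, 256)),   # resize/rescale to 256x256
    transforms.CenterCrop(224),      # center-crop to the training resolution
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Usage: pixel_values = inference_transform(pil_image).unsqueeze(0)  # (1, 3, 224, 224)
```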
{"license": "apache-2.0", "tags": ["image-classification", "vision"], "datasets": ["imagenet"]}
facebook/deit-base-distilled-patch16-224
null
[ "transformers", "pytorch", "tf", "deit", "image-classification", "vision", "dataset:imagenet", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.12877", "2006.03677" ]
[]
TAGS #transformers #pytorch #tf #deit #image-classification #vision #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Distilled Data-efficient Image Transformer (base-sized model) ============================================================= Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Intended uses & limitations --------------------------- You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. Training data ------------- This model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes. Training procedure ------------------ ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. Evaluation results ------------------ Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info
[ "### How to use\n\n\nSince this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThis model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #deit #image-classification #vision #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nSince this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThis model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
image-classification
transformers
# Distilled Data-efficient Image Transformer (base-sized model) Distilled data-efficient Image Transformer (DeiT) model pre-trained at resolution 224x224 and fine-tuned at resolution 384x384 on ImageNet-1k (1 million images, 1,000 classes). It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-distilled-patch16-384') model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-base-distilled-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. 
For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |-------------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | **DeiT-base distilled 384 (1000 epochs)** | **85.2** | **97.2** | **88M** | **https://huggingface.co/facebook/deit-base-distilled-patch16-384** | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
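Since the model description above stresses the separate distillation token, it may help to see that DeiTForImageClassificationWithTeacher exposes the class-token head and the distillation-token head separately alongside their average (a sketch assuming a current transformers release; these output attributes are not mentioned in the card itself):

```python
import torch
import requests
from PIL import Image
from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/deit-base-distilled-patch16-384")
model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-384")

with torch.no_grad():
    outputs = model(**feature_extractor(images=image, return_tensors="pt"))

# `logits` is the average of the two heads; the class-token and the
# distillation-token predictions are also available individually.
for name, head in [("averaged", outputs.logits),
                   ("class token", outputs.cls_logits),
                   ("distillation token", outputs.distillation_logits)]:
    print(name, "->", model.config.id2label[head.argmax(-1).item()])
```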
{"license": "apache-2.0", "tags": ["image-classification", "vision"], "datasets": ["imagenet"]}
facebook/deit-base-distilled-patch16-384
null
[ "transformers", "pytorch", "tf", "safetensors", "deit", "image-classification", "vision", "dataset:imagenet", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.12877", "2006.03677" ]
[]
TAGS #transformers #pytorch #tf #safetensors #deit #image-classification #vision #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Distilled Data-efficient Image Transformer (base-sized model) ============================================================= Distilled data-efficient Image Transformer (DeiT) model pre-trained at resolution 224x224 and fine-tuned at resolution 384x384 on ImageNet-1k (1 million images, 1,000 classes). It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Intended uses & limitations --------------------------- You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. Training data ------------- This model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes. Training procedure ------------------ ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. Evaluation results ------------------ Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info
[ "### How to use\n\n\nSince this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThis model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #deit #image-classification #vision #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nSince this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThis model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
image-classification
transformers
# Data-efficient Image Transformer (base-sized model) Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-224') model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | **DeiT-base** | **81.8** | **95.6** | **86M** | **https://huggingface.co/facebook/deit-base-patch16-224** | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
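The model description above notes that the final hidden state of the [CLS] token can serve as a representation of the entire image; the following is a minimal sketch (not part of the original card) of extracting that representation with ViTModel:

```python
import torch
import requests
from PIL import Image
from transformers import AutoFeatureExtractor, ViTModel

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/deit-base-patch16-224")
model = ViTModel.from_pretrained("facebook/deit-base-patch16-224")

with torch.no_grad():
    outputs = model(**feature_extractor(images=image, return_tensors="pt"))

# Token 0 is the [CLS] token; its final hidden state can feed a downstream
# linear classifier as a whole-image representation.
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)  # (1, 768) for the base-sized model
```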
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet-1k"]}
facebook/deit-base-patch16-224
null
[ "transformers", "pytorch", "tf", "vit", "image-classification", "dataset:imagenet-1k", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.12877", "2006.03677" ]
[]
TAGS #transformers #pytorch #tf #vit #image-classification #dataset-imagenet-1k #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Data-efficient Image Transformer (base-sized model) =================================================== Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Intended uses & limitations --------------------------- You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. Training data ------------- The ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes. Training procedure ------------------ ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. Evaluation results ------------------ Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info
[ "### How to use\n\n\nSince this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThe ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #vit #image-classification #dataset-imagenet-1k #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nSince this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThe ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
image-classification
transformers
# Data-efficient Image Transformer (base-sized model) Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 384x384. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained at resolution 224 and fine-tuned at resolution 384 on a large collection of images in a supervised fashion, namely ImageNet-1k. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-384') model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | **DeiT-base 384** | **82.9** | **96.2** | **87M** | **https://huggingface.co/facebook/deit-base-patch16-384** | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
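For readers who want to fine-tune the raw checkpoint rather than search the hub for an existing fine-tuned version, here is a minimal sketch of swapping the 1,000-way ImageNet head for a custom one; the label names are hypothetical and the keyword arguments assume a recent transformers release:

```python
from transformers import ViTForImageClassification

# Hypothetical downstream label set; the pretrained 1,000-way ImageNet head is
# replaced, so ignore_mismatched_sizes=True is needed when loading.
labels = ["cat", "dog", "bird"]
model = ViTForImageClassification.from_pretrained(
    "facebook/deit-base-patch16-384",
    num_labels=len(labels),
    id2label={i: label for i, label in enumerate(labels)},
    label2id={label: i for i, label in enumerate(labels)},
    ignore_mismatched_sizes=True,  # new classification head has a different shape
)
# The model can now be fine-tuned with the Trainer API or a plain PyTorch loop.
```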
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet-1k"]}
facebook/deit-base-patch16-384
null
[ "transformers", "pytorch", "tf", "vit", "image-classification", "dataset:imagenet-1k", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.12877", "2006.03677" ]
[]
TAGS #transformers #pytorch #tf #vit #image-classification #dataset-imagenet-1k #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Data-efficient Image Transformer (base-sized model) =================================================== Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 384x384. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained at resolution 224 and fine-tuned at resolution 384 on a large collection of images in a supervised fashion, namely ImageNet-1k. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Intended uses & limitations --------------------------- You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. Training data ------------- The ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes. Training procedure ------------------ ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. Evaluation results ------------------ Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info
[ "### How to use\n\n\nSince this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThe ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #vit #image-classification #dataset-imagenet-1k #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nSince this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThe ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
image-classification
transformers
# Distilled Data-efficient Image Transformer (small-sized model) Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-small-distilled-patch16-224') model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-small-distilled-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. 
For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | **DeiT-small distilled** | **81.2** | **95.4** | **22M** | **https://huggingface.co/facebook/deit-small-distilled-patch16-224** | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
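The inference-time preprocessing described in the card (resize to 256x256, center-crop to 224x224, normalization with the ImageNet mean and standard deviation) can also be reproduced outside of DeiTFeatureExtractor. A minimal torchvision sketch, assuming the standard ImageNet statistics (0.485, 0.456, 0.406) / (0.229, 0.224, 0.225):

```python
from torchvision import transforms
from PIL import Image
import requests

# Inference-time pipeline as described above: resize to 256x256, center-crop to
# 224x224, then normalize each RGB channel with the ImageNet mean / std.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
print(pixel_values.shape)
```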
{"license": "apache-2.0", "tags": ["image-classification", "vision"], "datasets": ["imagenet"]}
facebook/deit-small-distilled-patch16-224
null
[ "transformers", "pytorch", "tf", "deit", "image-classification", "vision", "dataset:imagenet", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.12877", "2006.03677" ]
[]
TAGS #transformers #pytorch #tf #deit #image-classification #vision #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Distilled Data-efficient Image Transformer (small-sized model) ============================================================== Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Intended uses & limitations --------------------------- You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. Training data ------------- This model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes. Training procedure ------------------ ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. Evaluation results ------------------ Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info
[ "### How to use\n\n\nSince this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThis model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #deit #image-classification #vision #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nSince this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThis model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
image-classification
transformers
# Data-efficient Image Transformer (small-sized model) Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-small-patch16-224') model = ViTForImageClassification.from_pretrained('facebook/deit-small-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | **DeiT-small** | **79.9** | **95.0** | **22M** | **https://huggingface.co/facebook/deit-small-patch16-224** | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
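The model description notes that DeiT can be plugged into ViTModel and that the last hidden state of the [CLS] token can serve as a representation of the whole image. A minimal feature-extraction sketch along those lines, using the `facebook/deit-small-patch16-224` checkpoint from the card above:

```python
import torch
from transformers import AutoFeatureExtractor, ViTModel
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-small-patch16-224')
model = ViTModel.from_pretrained('facebook/deit-small-patch16-224')

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Sequence = 1 [CLS] token + 196 patch tokens (a 224x224 image cut into 16x16 patches).
# The [CLS] hidden state (index 0) is what a downstream linear classifier would sit on.
cls_embedding = outputs.last_hidden_state[:, 0]
print(cls_embedding.shape)  # (1, hidden_size)
```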
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet-1k"]}
facebook/deit-small-patch16-224
null
[ "transformers", "pytorch", "tf", "vit", "image-classification", "dataset:imagenet-1k", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.12877", "2006.03677" ]
[]
TAGS #transformers #pytorch #tf #vit #image-classification #dataset-imagenet-1k #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Data-efficient Image Transformer (small-sized model) ==================================================== Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Intended uses & limitations --------------------------- You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. Training data ------------- The ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes. Training procedure ------------------ ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. Evaluation results ------------------ Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info
[ "### How to use\n\n\nSince this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThe ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #vit #image-classification #dataset-imagenet-1k #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nSince this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThe ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
image-classification
transformers
# Distilled Data-efficient Image Transformer (tiny-sized model) Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, DeiTForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-tiny-distilled-patch16-224') model = DeiTForImageClassificationWithTeacher.from_pretrained('facebook/deit-tiny-distilled-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data This model was pretrained and fine-tuned with distillation on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. 
For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | **DeiT-tiny distilled** | **74.5** | **91.9** | **6M** | **https://huggingface.co/facebook/deit-tiny-distilled-patch16-224** | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
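As a quick sanity check of the patch arithmetic in the model description above (16x16 patches of a 224x224 input, plus the class and distillation tokens), a minimal sketch:

```python
# Number of tokens the distilled DeiT encoder sees, per the model description.
image_size, patch_size = 224, 16
num_patches = (image_size // patch_size) ** 2   # 14 * 14 = 196 linearly embedded patches
num_tokens = num_patches + 1 + 1                # + [CLS] token + distillation token
print(num_patches, num_tokens)                  # 196 198
```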
{"license": "apache-2.0", "tags": ["image-classification", "vision"], "datasets": ["imagenet"]}
facebook/deit-tiny-distilled-patch16-224
null
[ "transformers", "pytorch", "tf", "deit", "image-classification", "vision", "dataset:imagenet", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.12877", "2006.03677" ]
[]
TAGS #transformers #pytorch #tf #deit #image-classification #vision #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Distilled Data-efficient Image Transformer (tiny-sized model) ============================================================= Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- This model is a distilled Vision Transformer (ViT). It uses a distillation token, besides the class token, to effectively learn from a teacher (CNN) during both pre-training and fine-tuning. The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Intended uses & limitations --------------------------- You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. Training data ------------- This model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes. Training procedure ------------------ ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. Evaluation results ------------------ Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info
[ "### How to use\n\n\nSince this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThis model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #deit #image-classification #vision #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nSince this model is a distilled ViT model, you can plug it into DeiTModel, DeiTForImageClassification or DeiTForImageClassificationWithTeacher. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThis model was pretrained and fine-tuned with distillation on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
image-classification
transformers
# Data-efficient Image Transformer (tiny-sized model) Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-tiny-patch16-224') model = ViTForImageClassification.from_pretrained('facebook/deit-tiny-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | **DeiT-tiny** | **72.2** | **91.1** | **5M** | **https://huggingface.co/facebook/deit-tiny-patch16-224** | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
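The model description suggests fine-tuning by placing a linear layer on top of the pre-trained encoder. A hedged sketch of that setup, assuming a hypothetical downstream dataset with 10 labels (the label count and `ignore_mismatched_sizes` flag are illustrative, not part of the original card):

```python
from transformers import ViTForImageClassification

# Swap the 1000-way ImageNet head for a fresh linear layer sized for a downstream task.
# num_labels=10 is a placeholder for your own dataset's label count.
model = ViTForImageClassification.from_pretrained(
    'facebook/deit-tiny-patch16-224',
    num_labels=10,
    ignore_mismatched_sizes=True,  # discard the original classification head weights
)
print(model.classifier)  # a freshly initialized linear head with 10 outputs
```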
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
facebook/deit-tiny-patch16-224
null
[ "transformers", "pytorch", "tf", "vit", "image-classification", "dataset:imagenet", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2012.12877", "2006.03677" ]
[]
TAGS #transformers #pytorch #tf #vit #image-classification #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
Data-efficient Image Transformer (tiny-sized model) =================================================== Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through attention by Touvron et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. Model description ----------------- This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Intended uses & limitations --------------------------- You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. Training data ------------- The ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes. Training procedure ------------------ ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. Evaluation results ------------------ Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info
[ "### How to use\n\n\nSince this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThe ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #vit #image-classification #dataset-imagenet #arxiv-2012.12877 #arxiv-2006.03677 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### How to use\n\n\nSince this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name.\n\n\nHere is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:\n\n\nCurrently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon.\n\n\nTraining data\n-------------\n\n\nThe ViT model was pretrained on ImageNet-1k, a dataset consisting of 1 million images and 1k classes.\n\n\nTraining procedure\n------------------", "### Preprocessing\n\n\nThe exact details of preprocessing of images during training/validation can be found here.\n\n\nAt inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation.", "### Pretraining\n\n\nThe model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper.\n\n\nEvaluation results\n------------------\n\n\n\nNote that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.", "### BibTeX entry and citation info" ]
object-detection
transformers
# DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage) DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. ### How to use Here is how to use this model: ```python from transformers import DetrFeatureExtractor, DetrForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-dc5') model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-101-dc5') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes ``` Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of **44.9** on COCO 2017 validation. 
For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
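The snippet in the "How to use" section above stops at the raw `logits` and `pred_boxes`. As a hedged follow-up, here is one way to turn those outputs into thresholded, labelled detections, assuming a transformers release in which the DETR feature extractor/image processor exposes `post_process_object_detection` (the `facebook/detr-resnet-101` card further down uses the same API):

```python
import torch

# continuing from the snippet above (`feature_extractor`, `model`, `outputs`, `image`)
target_sizes = torch.tensor([image.size[::-1]])  # (height, width) of the original image
results = feature_extractor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(coord, 2) for coord in box.tolist()]
    print(f"{model.config.id2label[label.item()]}: {round(score.item(), 3)} at {box}")
```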
{"license": "apache-2.0", "tags": ["object-detection"], "datasets": ["coco"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg", "example_title": "Savanna"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg", "example_title": "Football Match"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg", "example_title": "Airport"}]}
facebook/detr-resnet-101-dc5
null
[ "transformers", "pytorch", "safetensors", "detr", "object-detection", "dataset:coco", "arxiv:2005.12872", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2005.12872" ]
[]
TAGS #transformers #pytorch #safetensors #detr #object-detection #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage) DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the model hub to look for all available DETR models. ### How to use Here is how to use this model: Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of 44.9 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info
[ "# DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage)\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.", "## Intended uses & limitations\n\nYou can use the raw model for object detection. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves an AP (average precision) of 44.9 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #safetensors #detr #object-detection #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# DETR (End-to-End Object Detection) model with ResNet-101 backbone (dilated C5 stage)\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.", "## Intended uses & limitations\n\nYou can use the raw model for object detection. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves an AP (average precision) of 44.9 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.", "### BibTeX entry and citation info" ]
image-segmentation
transformers
# DETR (End-to-End Object Detection) model with ResNet-101 backbone

DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).

Disclaimer: The team releasing DETR did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.

DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.

## Intended uses & limitations

You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models.

### How to use

Here is how to use this model:

```python
import io

import numpy
import requests
import torch
from PIL import Image

from transformers import DetrFeatureExtractor, DetrForSegmentation
# in recent transformers releases rgb_to_id lives in transformers.image_transforms;
# older releases expose it from transformers.models.detr.feature_extraction_detr
from transformers.image_transforms import rgb_to_id

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-panoptic')
model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-101-panoptic')

# prepare inputs for the model
inputs = feature_extractor(images=image, return_tensors="pt")

# forward pass
outputs = model(**inputs)

# use the `post_process_panoptic` method of `DetrFeatureExtractor` to convert to COCO format
processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]

# the segmentation is stored in a special-format png
panoptic_seg = Image.open(io.BytesIO(result["png_string"]))
panoptic_seg = numpy.array(panoptic_seg, dtype=numpy.uint8)

# retrieve the ids corresponding to each mask
panoptic_seg_id = rgb_to_id(panoptic_seg)
```

Currently, both the feature extractor and model support PyTorch.

## Training data

The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.

## Training procedure

### Preprocessing

The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py).
Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **40.1**, a segmentation AP (average precision) of **33** and a PQ (panoptic quality) of **45.1**. For more details regarding evaluation results, we refer to table 5 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
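The "bipartite matching loss" described in the model description boils down to a Hungarian assignment between the object queries and the ground-truth objects. The toy sketch below illustrates just that matching step on made-up 2D box centers using `scipy.optimize.linear_sum_assignment`; DETR's real matcher additionally folds class probabilities and a generalized IoU term into the cost and works on batched tensors.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# 4 object queries, 2 ground-truth objects; the cost here is just an L1 distance
# between predicted and ground-truth box centers (class and GIoU terms omitted).
pred_centers = np.array([[0.20, 0.20], [0.80, 0.80], [0.50, 0.50], [0.10, 0.90]])
gt_centers = np.array([[0.25, 0.20], [0.55, 0.45]])

cost = np.abs(pred_centers[:, None, :] - gt_centers[None, :, :]).sum(-1)  # shape (4, 2)

# Hungarian algorithm: each ground-truth object is matched to exactly one query
rows, cols = linear_sum_assignment(cost)
matched = dict(zip(rows, cols))
for q in range(len(pred_centers)):
    print(f"query {q} ->", f"ground truth {matched[q]}" if q in matched else '"no object"')
```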
{"license": "apache-2.0", "tags": ["image-segmentation", "vision"], "datasets": ["coco"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/dog-cat.jpg", "example_title": "Dog & Cat"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/construction-site.jpg", "example_title": "Construction Site"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/apple-orange.jpg", "example_title": "Apple & Orange"}]}
facebook/detr-resnet-101-panoptic
null
[ "transformers", "pytorch", "safetensors", "detr", "image-segmentation", "vision", "dataset:coco", "arxiv:2005.12872", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2005.12872" ]
[]
TAGS #transformers #pytorch #safetensors #detr #image-segmentation #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# DETR (End-to-End Object Detection) model with ResNet-101 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs. ## Intended uses & limitations You can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models. ### How to use Here is how to use this model: Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves the following results on COCO 2017 validation: a box AP (average precision) of 40.1, a segmentation AP (average precision) of 33 and a PQ (panoptic quality) of 45.1. For more details regarding evaluation results, we refer to table 5 of the original paper. ### BibTeX entry and citation info
[ "# DETR (End-to-End Object Detection) model with ResNet-101 backbone\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\nDETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.", "## Intended uses & limitations\n\nYou can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves the following results on COCO 2017 validation: a box AP (average precision) of 40.1, a segmentation AP (average precision) of 33 and a PQ (panoptic quality) of 45.1.\n\nFor more details regarding evaluation results, we refer to table 5 of the original paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #safetensors #detr #image-segmentation #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# DETR (End-to-End Object Detection) model with ResNet-101 backbone\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\nDETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.", "## Intended uses & limitations\n\nYou can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves the following results on COCO 2017 validation: a box AP (average precision) of 40.1, a segmentation AP (average precision) of 33 and a PQ (panoptic quality) of 45.1.\n\nFor more details regarding evaluation results, we refer to table 5 of the original paper.", "### BibTeX entry and citation info" ]
object-detection
transformers
# DETR (End-to-End Object Detection) model with ResNet-101 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/detr_architecture.png) ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. 
### How to use Here is how to use this model: ```python from transformers import DetrImageProcessor, DetrForObjectDetection import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # you can specify the revision tag if you don't want the timm dependency processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-101", revision="no_timm") model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-101", revision="no_timm") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API # let's only keep detections with score > 0.9 target_sizes = torch.tensor([image.size[::-1]]) results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ``` This should output (something along the lines of): ``` Detected cat with confidence 0.998 at location [344.06, 24.85, 640.34, 373.74] Detected remote with confidence 0.997 at location [328.13, 75.93, 372.81, 187.66] Detected remote with confidence 0.997 at location [39.34, 70.13, 175.56, 118.78] Detected cat with confidence 0.998 at location [15.36, 51.75, 316.89, 471.16] Detected couch with confidence 0.995 at location [-0.19, 0.71, 639.73, 474.17] ``` Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of **43.5** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
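The loss described in the model description combines cross-entropy for the classes with an L1 term and a generalized IoU term for the boxes. For reference only, here is a minimal, unbatched sketch of the generalized IoU of two axis-aligned `[x0, y0, x1, y1]` boxes; DETR's actual loss uses a batched, differentiable tensor implementation.

```python
def generalized_iou(box_a, box_b):
    """Generalized IoU of two [x0, y0, x1, y1] boxes (toy, unbatched sketch)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b

    inter_w = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    inter_h = max(0.0, min(ay1, by1) - max(ay0, by0))
    intersection = inter_w * inter_h

    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    union = area_a + area_b - intersection
    iou = intersection / union

    # smallest axis-aligned box enclosing both inputs
    enclosing = (max(ax1, bx1) - min(ax0, bx0)) * (max(ay1, by1) - min(ay0, by0))

    return iou - (enclosing - union) / enclosing


print(generalized_iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7 - 2/9 ≈ -0.079
```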
{"license": "apache-2.0", "tags": ["object-detection", "vision"], "datasets": ["coco"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg", "example_title": "Savanna"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg", "example_title": "Football Match"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg", "example_title": "Airport"}]}
facebook/detr-resnet-101
null
[ "transformers", "pytorch", "safetensors", "detr", "object-detection", "vision", "dataset:coco", "arxiv:2005.12872", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2005.12872" ]
[]
TAGS #transformers #pytorch #safetensors #detr #object-detection #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# DETR (End-to-End Object Detection) model with ResNet-101 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. !model image ## Intended uses & limitations You can use the raw model for object detection. See the model hub to look for all available DETR models. ### How to use Here is how to use this model: This should output (something along the lines of): Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of 43.5 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info
[ "# DETR (End-to-End Object Detection) model with ResNet-101 backbone\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for object detection. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\nThis should output (something along the lines of):\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves an AP (average precision) of 43.5 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #safetensors #detr #object-detection #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# DETR (End-to-End Object Detection) model with ResNet-101 backbone\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for object detection. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\nThis should output (something along the lines of):\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves an AP (average precision) of 43.5 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.", "### BibTeX entry and citation info" ]
image-segmentation
transformers
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage) DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs. ## Intended uses & limitations You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. ### How to use Here is how to use this model: ```python from transformers import DetrFeatureExtractor, DetrForSegmentation from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-dc5-panoptic') model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-50-dc5-panoptic') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts COCO classes, bounding boxes, and masks logits = outputs.logits bboxes = outputs.pred_boxes masks = outputs.pred_masks ``` Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). 
## Evaluation results This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **40.2**, a segmentation AP (average precision) of **31.9** and a PQ (panoptic quality) of **44.6**. For more details regarding evaluation results, we refer to table 5 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
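The card's snippet above stops at the raw `logits`, `pred_boxes` and `pred_masks` of the mask head. Below is a deliberately naive sketch (not the procedure behind the reported PQ numbers) of how those low-resolution mask logits can be turned into a per-pixel query assignment with plain PyTorch; the library's panoptic post-processing utilities, or the `rgb_to_id` route shown in the `facebook/detr-resnet-101-panoptic` card above, are the proper way to obtain COCO-format panoptic output.

```python
import torch

# continuing from the snippet above (`outputs` from DetrForSegmentation, `image` from PIL)
probs = outputs.logits.softmax(-1)[0, :, :-1]   # per-query class probabilities, "no object" dropped
scores, labels = probs.max(-1)
keep = scores > 0.85                            # keep only confident queries

mask_logits = outputs.pred_masks[0][keep]       # low-resolution mask logits for the kept queries
mask_logits = torch.nn.functional.interpolate(
    mask_logits[None], size=image.size[::-1], mode="bilinear", align_corners=False
)[0]

segmentation = mask_logits.argmax(0)            # per-pixel index into the kept queries
print(segmentation.shape, labels[keep])
```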
{"license": "apache-2.0", "tags": ["image-segmentation"], "datasets": ["coco"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/dog-cat.jpg", "example_title": "Dog & Cat"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/construction-site.jpg", "example_title": "Construction Site"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/apple-orange.jpg", "example_title": "Apple & Orange"}]}
facebook/detr-resnet-50-dc5-panoptic
null
[ "transformers", "pytorch", "safetensors", "detr", "image-segmentation", "dataset:coco", "arxiv:2005.12872", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2005.12872" ]
[]
TAGS #transformers #pytorch #safetensors #detr #image-segmentation #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage) DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs. ## Intended uses & limitations You can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models. ### How to use Here is how to use this model: Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves the following results on COCO 2017 validation: a box AP (average precision) of 40.2, a segmentation AP (average precision) of 31.9 and a PQ (panoptic quality) of 44.6. For more details regarding evaluation results, we refer to table 5 of the original paper. ### BibTeX entry and citation info
[ "# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\nDETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.", "## Intended uses & limitations\n\nYou can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves the following results on COCO 2017 validation: a box AP (average precision) of 40.2, a segmentation AP (average precision) of 31.9 and a PQ (panoptic quality) of 44.6.\n\nFor more details regarding evaluation results, we refer to table 5 of the original paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #safetensors #detr #image-segmentation #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\nDETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.", "## Intended uses & limitations\n\nYou can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves the following results on COCO 2017 validation: a box AP (average precision) of 40.2, a segmentation AP (average precision) of 31.9 and a PQ (panoptic quality) of 44.6.\n\nFor more details regarding evaluation results, we refer to table 5 of the original paper.", "### BibTeX entry and citation info" ]
object-detection
transformers
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage) DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. ### How to use Here is how to use this model: ```python from transformers import DetrFeatureExtractor, DetrForObjectDetection from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-50-dc5') model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50-dc5') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts bounding boxes and corresponding COCO classes logits = outputs.logits bboxes = outputs.pred_boxes ``` Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of **43.3** on COCO 2017 validation. 
For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
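The usage snippet in the card above stops at the raw class logits and normalized boxes. As a hedged follow-up sketch (it assumes a recent transformers release in which the DETR feature extractor / image processor exposes `post_process_object_detection`; the 0.9 threshold is an arbitrary choice for illustration), those outputs can be converted into thresholded, pixel-space detections:

```python
import torch

# Convert the raw outputs from the snippet above into thresholded detections.
# Assumes `image`, `feature_extractor`, `model` and `outputs` from that snippet,
# and that the installed transformers version provides post_process_object_detection.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width) of the original image
results = feature_extractor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(
        f"Detected {model.config.id2label[label.item()]} "
        f"with confidence {round(score.item(), 3)} at {[round(c, 2) for c in box.tolist()]}"
    )
```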
{"license": "apache-2.0", "tags": ["object-detection", "vision"], "datasets": ["coco"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg", "example_title": "Savanna"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg", "example_title": "Football Match"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg", "example_title": "Airport"}]}
facebook/detr-resnet-50-dc5
null
[ "transformers", "pytorch", "safetensors", "detr", "object-detection", "vision", "dataset:coco", "arxiv:2005.12872", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2005.12872" ]
[]
TAGS #transformers #pytorch #safetensors #detr #object-detection #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage) DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ## Intended uses & limitations You can use the raw model for object detection. See the model hub to look for all available DETR models. ### How to use Here is how to use this model: Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of 43.3 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info
[ "# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.", "## Intended uses & limitations\n\nYou can use the raw model for object detection. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves an AP (average precision) of 43.3 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #safetensors #detr #object-detection #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# DETR (End-to-End Object Detection) model with ResNet-50 backbone (dilated C5 stage)\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.", "## Intended uses & limitations\n\nYou can use the raw model for object detection. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves an AP (average precision) of 43.3 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.", "### BibTeX entry and citation info" ]
image-segmentation
transformers
# DETR (End-to-End Object Detection) model with ResNet-50 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/detr_architecture.png) ## Intended uses & limitations You can use the raw model for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. ### How to use Here is how to use this model: ```python import io import requests from PIL import Image import torch import numpy from transformers import DetrFeatureExtractor, DetrForSegmentation from transformers.models.detr.feature_extraction_detr import rgb_to_id url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50-panoptic") model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic") # prepare image for the model inputs = feature_extractor(images=image, return_tensors="pt") # forward pass outputs = model(**inputs) # use the `post_process_panoptic` method of `DetrFeatureExtractor` to convert to COCO format processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0) result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0] # the segmentation is stored in a special-format png panoptic_seg = Image.open(io.BytesIO(result["png_string"])) panoptic_seg = numpy.array(panoptic_seg, dtype=numpy.uint8) # retrieve the ids corresponding to each mask panoptic_seg_id = rgb_to_id(panoptic_seg) ``` Currently, both the feature extractor and model support PyTorch. 
## Training data The DETR model was trained on [COCO 2017 panoptic](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **38.8**, a segmentation AP (average precision) of **31.1** and a PQ (panoptic quality) of **43.4**. For more details regarding evaluation results, we refer to table 5 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
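As an illustrative extension of the panoptic snippet in the card above (a minimal sketch, assuming the dictionaries returned by `post_process_panoptic` also carry a `segments_info` list with `id` and `category_id` entries, as in the DETR reference implementation), the id map can be split into one boolean mask per predicted segment:

```python
# Assumes `result`, `panoptic_seg_id` and `model` from the panoptic snippet above.
masks = {}
for segment in result.get("segments_info", []):
    segment_id = segment["id"]
    masks[segment_id] = panoptic_seg_id == segment_id  # boolean H x W mask for this segment
    label = model.config.id2label.get(segment["category_id"], "unknown")
    print(f"segment {segment_id}: {label}, {masks[segment_id].sum()} pixels")
```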
{"license": "apache-2.0", "tags": ["image-segmentation", "vision"], "datasets": ["coco"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg", "example_title": "Football Match"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/dog-cat.jpg", "example_title": "Dog & Cat"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/construction-site.jpg", "example_title": "Construction Site"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/apple-orange.jpg", "example_title": "Apple & Orange"}]}
facebook/detr-resnet-50-panoptic
null
[ "transformers", "pytorch", "detr", "image-segmentation", "vision", "dataset:coco", "arxiv:2005.12872", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2005.12872" ]
[]
TAGS #transformers #pytorch #detr #image-segmentation #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# DETR (End-to-End Object Detection) model with ResNet-50 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs. !model image ## Intended uses & limitations You can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models. ### How to use Here is how to use this model: Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves the following results on COCO 2017 validation: a box AP (average precision) of 38.8, a segmentation AP (average precision) of 31.1 and a PQ (panoptic quality) of 43.4. For more details regarding evaluation results, we refer to table 5 of the original paper. ### BibTeX entry and citation info
[ "# DETR (End-to-End Object Detection) model with ResNet-50 backbone\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\nDETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves the following results on COCO 2017 validation: a box AP (average precision) of 38.8, a segmentation AP (average precision) of 31.1 and a PQ (panoptic quality) of 43.4.\n\nFor more details regarding evaluation results, we refer to table 5 of the original paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #detr #image-segmentation #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# DETR (End-to-End Object Detection) model with ResNet-50 backbone\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\nDETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for panoptic segmentation. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 panoptic, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves the following results on COCO 2017 validation: a box AP (average precision) of 38.8, a segmentation AP (average precision) of 31.1 and a PQ (panoptic quality) of 43.4.\n\nFor more details regarding evaluation results, we refer to table 5 of the original paper.", "### BibTeX entry and citation info" ]
object-detection
transformers
# DETR (End-to-End Object Detection) model with ResNet-50 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr). Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/detr_architecture.png) ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=facebook/detr) to look for all available DETR models. 
### How to use Here is how to use this model: ```python from transformers import DetrImageProcessor, DetrForObjectDetection import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) # you can specify the revision tag if you don't want the timm dependency processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm") model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API # let's only keep detections with score > 0.9 target_sizes = torch.tensor([image.size[::-1]]) results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ``` This should output: ``` Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98] Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66] Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76] Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93] Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72] ``` Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of **42.0** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2005-12872, author = {Nicolas Carion and Francisco Massa and Gabriel Synnaeve and Nicolas Usunier and Alexander Kirillov and Sergey Zagoruyko}, title = {End-to-End Object Detection with Transformers}, journal = {CoRR}, volume = {abs/2005.12872}, year = {2020}, url = {https://arxiv.org/abs/2005.12872}, archivePrefix = {arXiv}, eprint = {2005.12872}, timestamp = {Thu, 28 May 2020 17:38:09 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2005-12872.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
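To sanity-check the detections printed by the snippet above, they can be drawn back onto the image with Pillow. This is an illustrative sketch only; it reuses `image`, `results` and `model` from that snippet, and the output file name is arbitrary:

```python
from PIL import ImageDraw

# Draw the thresholded detections from the snippet above onto a copy of the input image.
annotated = image.copy()
draw = ImageDraw.Draw(annotated)
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    x0, y0, x1, y1 = box.tolist()
    draw.rectangle((x0, y0, x1, y1), outline="red", width=3)
    draw.text((x0, y0), f"{model.config.id2label[label.item()]}: {score.item():.2f}", fill="red")
annotated.save("detections.png")  # arbitrary output path
```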
{"license": "apache-2.0", "tags": ["object-detection", "vision"], "datasets": ["coco"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg", "example_title": "Savanna"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg", "example_title": "Football Match"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg", "example_title": "Airport"}]}
facebook/detr-resnet-50
null
[ "transformers", "pytorch", "detr", "object-detection", "vision", "dataset:coco", "arxiv:2005.12872", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2005.12872" ]
[]
TAGS #transformers #pytorch #detr #object-detection #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# DETR (End-to-End Object Detection) model with ResNet-50 backbone DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. Disclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. !model image ## Intended uses & limitations You can use the raw model for object detection. See the model hub to look for all available DETR models. ### How to use Here is how to use this model: This should output: Currently, both the feature extractor and model support PyTorch. ## Training data The DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found here. Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). ### Training The model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64). ## Evaluation results This model achieves an AP (average precision) of 42.0 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper. ### BibTeX entry and citation info
[ "# DETR (End-to-End Object Detection) model with ResNet-50 backbone\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for object detection. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\nThis should output:\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves an AP (average precision) of 42.0 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #detr #object-detection #vision #dataset-coco #arxiv-2005.12872 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# DETR (End-to-End Object Detection) model with ResNet-50 backbone\n\nDEtection TRansformer (DETR) model trained end-to-end on COCO 2017 object detection (118k annotated images). It was introduced in the paper End-to-End Object Detection with Transformers by Carion et al. and first released in this repository. \n\nDisclaimer: The team releasing DETR did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. \n\nThe model is trained using a \"bipartite matching loss\": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a \"no object\" as class and \"no bounding box\" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.\n\n!model image", "## Intended uses & limitations\n\nYou can use the raw model for object detection. See the model hub to look for all available DETR models.", "### How to use\n\nHere is how to use this model:\n\n\nThis should output:\n\n\nCurrently, both the feature extractor and model support PyTorch.", "## Training data\n\nThe DETR model was trained on COCO 2017 object detection, a dataset consisting of 118k/5k annotated images for training/validation respectively.", "## Training procedure", "### Preprocessing\n\nThe exact details of preprocessing of images during training/validation can be found here. \n\nImages are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).", "### Training\n\nThe model was trained for 300 epochs on 16 V100 GPUs. This takes 3 days, with 4 images per GPU (hence a total batch size of 64).", "## Evaluation results\n\nThis model achieves an AP (average precision) of 42.0 on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.", "### BibTeX entry and citation info" ]
feature-extraction
transformers
# Vision Transformer (base-sized model, patch size 16) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino). Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import ViTImageProcessor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = ViTImageProcessor.from_pretrained('facebook/dino-vitb16') model = ViTModel.from_pretrained('facebook/dino-vitb16') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2104-14294, author = {Mathilde Caron and Hugo Touvron and Ishan Misra and Herv{\'{e}} J{\'{e}}gou and Julien Mairal and Piotr Bojanowski and Armand Joulin}, title = {Emerging Properties in Self-Supervised Vision Transformers}, journal = {CoRR}, volume = {abs/2104.14294}, year = {2021}, url = {https://arxiv.org/abs/2104.14294}, archivePrefix = {arXiv}, eprint = {2104.14294}, timestamp = {Tue, 04 May 2021 15:12:43 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
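Since the card above notes that the last hidden state of the [CLS] token can be read as a representation of the whole image, here is a hedged sketch of using it as a global descriptor and comparing two images with cosine similarity (the second URL simply reuses the same example COCO image; it is not prescribed by the card):

```python
import torch
import torch.nn.functional as F

# Assumes `processor`, `model`, `Image` and `requests` from the snippet above.
urls = [
    'http://images.cocodataset.org/val2017/000000039769.jpg',
    'http://images.cocodataset.org/val2017/000000039769.jpg',  # same image, for illustration
]
images = [Image.open(requests.get(u, stream=True).raw) for u in urls]

with torch.no_grad():
    outputs = model(**processor(images=images, return_tensors="pt"))
cls_features = outputs.last_hidden_state[:, 0]  # (2, 768) [CLS] descriptors

# ~1.0 here, since both descriptors come from the same image
print(F.cosine_similarity(cls_features[0], cls_features[1], dim=0).item())
```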
{"license": "apache-2.0", "tags": ["dino", "vision"], "datasets": ["imagenet-1k"]}
facebook/dino-vitb16
null
[ "transformers", "pytorch", "tf", "vit", "feature-extraction", "dino", "vision", "dataset:imagenet-1k", "arxiv:2104.14294", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2104.14294" ]
[]
TAGS #transformers #pytorch #tf #vit #feature-extraction #dino #vision #dataset-imagenet-1k #arxiv-2104.14294 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Vision Transformer (base-sized model, patch size 16) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ### BibTeX entry and citation info
[ "# Vision Transformer (base-sized model, patch size 16) trained using DINO \n\nVision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. \n\nDisclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. \n\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\n\nNote that this model does not include any fine-tuned heads. \n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #tf #vit #feature-extraction #dino #vision #dataset-imagenet-1k #arxiv-2104.14294 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Vision Transformer (base-sized model, patch size 16) trained using DINO \n\nVision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. \n\nDisclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. \n\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\n\nNote that this model does not include any fine-tuned heads. \n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "### BibTeX entry and citation info" ]
feature-extraction
transformers
# Vision Transformer (base-sized model, patch size 8) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino). Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import ViTImageProcessor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = ViTImageProcessor.from_pretrained('facebook/dino-vitb8') model = ViTModel.from_pretrained('facebook/dino-vitb8') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2104-14294, author = {Mathilde Caron and Hugo Touvron and Ishan Misra and Herv{\'{e}} J{\'{e}}gou and Julien Mairal and Piotr Bojanowski and Armand Joulin}, title = {Emerging Properties in Self-Supervised Vision Transformers}, journal = {CoRR}, volume = {abs/2104.14294}, year = {2021}, url = {https://arxiv.org/abs/2104.14294}, archivePrefix = {arXiv}, eprint = {2104.14294}, timestamp = {Tue, 04 May 2021 15:12:43 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
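DINO is particularly known for the object-centric self-attention maps of its [CLS] token. A hedged sketch on top of the snippet above follows; it assumes the processor resizes inputs to 224x224, so a patch size of 8 gives 28x28 = 784 patches plus the [CLS] token:

```python
import torch

# Assumes `model` and `inputs` from the snippet above.
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

last_attention = outputs.attentions[-1]      # (batch, num_heads, 785, 785) under the 224x224 assumption
cls_attention = last_attention[0, :, 0, 1:]  # attention from [CLS] to every image patch
num_heads = cls_attention.shape[0]
attention_maps = cls_attention.reshape(num_heads, 28, 28)
print(attention_maps.shape)  # torch.Size([12, 28, 28]) for ViT-B/8
```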
{"license": "apache-2.0", "tags": ["dino", "vision"], "datasets": ["imagenet-1k"]}
facebook/dino-vitb8
null
[ "transformers", "pytorch", "vit", "feature-extraction", "dino", "vision", "dataset:imagenet-1k", "arxiv:2104.14294", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2104.14294" ]
[]
TAGS #transformers #pytorch #vit #feature-extraction #dino #vision #dataset-imagenet-1k #arxiv-2104.14294 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Vision Transformer (base-sized model, patch size 8) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ### BibTeX entry and citation info
[ "# Vision Transformer (base-sized model, patch size 8) trained using DINO \n\nVision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. \n\nDisclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. \n\nImages are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\n\nNote that this model does not include any fine-tuned heads. \n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vit #feature-extraction #dino #vision #dataset-imagenet-1k #arxiv-2104.14294 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Vision Transformer (base-sized model, patch size 8) trained using DINO \n\nVision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. \n\nDisclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. \n\nImages are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\n\nNote that this model does not include any fine-tuned heads. \n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "### BibTeX entry and citation info" ]
feature-extraction
transformers
# Vision Transformer (small-sized model, patch size 16) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino). Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import ViTImageProcessor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = ViTImageProcessor.from_pretrained('facebook/dino-vits16') model = ViTModel.from_pretrained('facebook/dino-vits16') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2104-14294, author = {Mathilde Caron and Hugo Touvron and Ishan Misra and Herv{\'{e}} J{\'{e}}gou and Julien Mairal and Piotr Bojanowski and Armand Joulin}, title = {Emerging Properties in Self-Supervised Vision Transformers}, journal = {CoRR}, volume = {abs/2104.14294}, year = {2021}, url = {https://arxiv.org/abs/2104.14294}, archivePrefix = {arXiv}, eprint = {2104.14294}, timestamp = {Tue, 04 May 2021 15:12:43 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
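The description above notes that one typically places a linear layer on top of the [CLS] token for classification. A minimal sketch of that setup; the 10-class head and its random initialization are placeholders and not part of the released model.

```python
# Sketch: a linear classification head on top of the frozen DINO ViT-S/16 [CLS] token.
# The number of classes (10) is a placeholder; the head is untrained.
import torch
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained("facebook/dino-vits16")
backbone = ViTModel.from_pretrained("facebook/dino-vits16")
classifier = torch.nn.Linear(backbone.config.hidden_size, 10)  # placeholder head

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    cls_token = backbone(**inputs).last_hidden_state[:, 0]  # [CLS] representation
logits = classifier(cls_token)
```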
{"license": "apache-2.0", "tags": ["dino", "vision"], "datasets": ["imagenet-1k"]}
facebook/dino-vits16
null
[ "transformers", "pytorch", "vit", "feature-extraction", "dino", "vision", "dataset:imagenet-1k", "arxiv:2104.14294", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2104.14294" ]
[]
TAGS #transformers #pytorch #vit #feature-extraction #dino #vision #dataset-imagenet-1k #arxiv-2104.14294 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Vision Transformer (small-sized model, patch size 16) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ### BibTeX entry and citation info
[ "# Vision Transformer (small-sized model, patch size 16) trained using DINO \n\nVision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. \n\nDisclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. \n\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\n\nNote that this model does not include any fine-tuned heads. \n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vit #feature-extraction #dino #vision #dataset-imagenet-1k #arxiv-2104.14294 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Vision Transformer (small-sized model, patch size 16) trained using DINO \n\nVision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. \n\nDisclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. \n\nImages are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\n\nNote that this model does not include any fine-tuned heads. \n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "### BibTeX entry and citation info" ]
feature-extraction
transformers
# Vision Transformer (small-sized model, patch size 8) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino). Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import ViTImageProcessor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = ViTImageProcessor.from_pretrained('facebook/dino-vits8') model = ViTModel.from_pretrained('facebook/dino-vits8') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2104-14294, author = {Mathilde Caron and Hugo Touvron and Ishan Misra and Herv{\'{e}} J{\'{e}}gou and Julien Mairal and Piotr Bojanowski and Armand Joulin}, title = {Emerging Properties in Self-Supervised Vision Transformers}, journal = {CoRR}, volume = {abs/2104.14294}, year = {2021}, url = {https://arxiv.org/abs/2104.14294}, archivePrefix = {arXiv}, eprint = {2104.14294}, timestamp = {Tue, 04 May 2021 15:12:43 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
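Since the card above presents the model as a frozen feature extractor, a common use is comparing image embeddings directly (for example, retrieval or k-NN classification). A minimal sketch, assuming the [CLS] embedding as the image descriptor; the rotated copy of the sample image merely stands in for a second image.

```python
# Sketch: compare two images via cosine similarity of their DINO ViT-S/8 [CLS] embeddings.
import torch
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
other = image.rotate(180)  # placeholder "second image"

processor = ViTImageProcessor.from_pretrained("facebook/dino-vits8")
model = ViTModel.from_pretrained("facebook/dino-vits8")

inputs = processor(images=[image, other], return_tensors="pt")
with torch.no_grad():
    cls = model(**inputs).last_hidden_state[:, 0]  # (2, hidden_size)

similarity = torch.nn.functional.cosine_similarity(cls[0], cls[1], dim=0)
print(float(similarity))
```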
{"license": "apache-2.0", "tags": ["dino", "vision"], "datasets": ["imagenet-1k"]}
facebook/dino-vits8
null
[ "transformers", "pytorch", "vit", "feature-extraction", "dino", "vision", "dataset:imagenet-1k", "arxiv:2104.14294", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2104.14294" ]
[]
TAGS #transformers #pytorch #vit #feature-extraction #dino #vision #dataset-imagenet-1k #arxiv-2104.14294 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Vision Transformer (small-sized model, patch size 8) trained using DINO Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ### BibTeX entry and citation info
[ "# Vision Transformer (small-sized model, patch size 8) trained using DINO \n\nVision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. \n\nDisclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. \n\nImages are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\n\nNote that this model does not include any fine-tuned heads. \n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #vit #feature-extraction #dino #vision #dataset-imagenet-1k #arxiv-2104.14294 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Vision Transformer (small-sized model, patch size 8) trained using DINO \n\nVision Transformer (ViT) model trained using the DINO method. It was introduced in the paper Emerging Properties in Self-Supervised Vision Transformers by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in this repository. \n\nDisclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nThe Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. \n\nImages are presented to the model as a sequence of fixed-size patches (resolution 8x8), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.\n\nNote that this model does not include any fine-tuned heads. \n\nBy pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.", "## Intended uses & limitations\n\nYou can use the raw model for image classification. See the model hub to look for\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:", "### BibTeX entry and citation info" ]
null
transformers
# `dpr-ctx_encoder-multiset-base` ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation-results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-ctx_encoder-multiset-base` is the context encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), and [CuratedTREC (TREC)](https://huggingface.co/datasets/trec). - **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers - **Model Type:** BERT-based encoder - **Language(s):** English - **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md) - **Related Models:** - [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) - [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) - [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) - [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base) - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2004.04906) - [GitHub Repo](https://github.com/facebookresearch/DPR) - [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr) - [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased) ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import DPRContextEncoder, DPRContextEncoderTokenizer tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base") model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base") input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"] embeddings = model(input_ids).pooler_output ``` ## Uses #### Direct Use `dpr-ctx_encoder-multiset-base`, [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base), and [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
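The Direct Use section above pairs this context encoder with the matching question encoder for open-domain QA. A minimal sketch of scoring candidate passages against a question with a dot product, in the spirit of the retrieval setup the card describes; the question and passages below are placeholder examples.

```python
# Sketch: dot-product relevance scores between a question and candidate passages.
# The passages and question below are placeholder examples.
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")

passages = [
    "Paris is the capital and most populous city of France.",
    "The Great Barrier Reef is the world's largest coral reef system.",
]
question = "What is the capital of France?"

with torch.no_grad():
    ctx_emb = ctx_encoder(**ctx_tokenizer(passages, padding=True, return_tensors="pt")).pooler_output
    q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output

scores = q_emb @ ctx_emb.T  # (1, num_passages); higher means more relevant
print(scores)
```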
## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Training #### Training Data This model was trained using the following datasets: - **[Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open)** ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)) - **[TriviaQA](https://huggingface.co/datasets/trivia_qa)** ([Joshi et al., 2017](https://aclanthology.org/P17-1147/)) - **[WebQuestions (WQ)](https://huggingface.co/datasets/web_questions)** ([Berant et al., 2013](https://aclanthology.org/D13-1160/)) - **[CuratedTREC (TREC)](https://huggingface.co/datasets/trec)** ([Baudiš & Šedivý, 2015](https://www.aminer.cn/pub/599c7953601a182cd263079b/reading-wikipedia-to-answer-open-domain-questions)) #### Training Procedure The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf): > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d-dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. ## Evaluation The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf). #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad).
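The Training Procedure quoted above describes indexing all passage vectors offline and retrieving the top-k vectors closest to the question vector at run-time, with FAISS used for the index. A minimal sketch of that step, assuming the `faiss-cpu` package is installed; the random vectors stand in for real encoder outputs.

```python
# Sketch of the offline indexing / run-time top-k retrieval step described above.
# Assumes the faiss-cpu package; random vectors stand in for EP(passage) / EQ(question) outputs.
import faiss
import numpy as np

d = 768  # hidden size of the BERT-base encoders
passage_vectors = np.random.rand(1000, d).astype("float32")  # placeholder passage embeddings
question_vector = np.random.rand(1, d).astype("float32")     # placeholder question embedding

index = faiss.IndexFlatIP(d)   # exact inner-product (dot-product) search
index.add(passage_vectors)     # index all M passages offline
scores, ids = index.search(question_vector, 20)  # retrieve the top-20 passages at run-time
print(ids[0])
```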
#### Results | | Top 20 | | | | | Top 100| | | | | |:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:| | | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD | | | 79.4 | 78.8 |75.0| 89.1 | 51.6 | 86.0 | 84.7 |82.9| 93.9 | 67.6 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/abs/2004.04906). - **Hardware Type:** 8 32GB GPUs - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", } ``` ## Model Card Authors This model card was written by the team at Hugging Face.
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["dpr"], "datasets": ["nq_open"], "inference": false}
facebook/dpr-ctx_encoder-multiset-base
null
[ "transformers", "pytorch", "tf", "dpr", "en", "dataset:nq_open", "arxiv:2004.04906", "arxiv:1702.08734", "arxiv:1910.09700", "license:cc-by-nc-4.0", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.04906", "1702.08734", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #dpr #en #dataset-nq_open #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us
'dpr-ctx\_encoder-multiset-base' ================================ Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Description: Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. 'dpr-ctx\_encoder-multiset-base' is the context encoder trained using the Natural Questions (NQ) dataset, TriviaQA, WebQuestions (WQ), and CuratedTREC (TREC). * Developed by: See GitHub repo for model developers * Model Type: BERT-based encoder * Language(s): CC-BY-NC-4.0, also see Code of Conduct * License: English * Related Models: + 'dpr-question\_encoder-multiset-base' + 'dpr-reader-multiset-base' + 'dpr-question-encoder-single-nq-base' + 'dpr-reader-single-nq-base' + 'dpr-ctx\_encoder-single-nq-base' * Resources for more information: + Research Paper + GitHub Repo + Hugging Face DPR docs + BERT Base Uncased Model Card How to Get Started with the Model --------------------------------- Use the code below to get started with the model. Uses ---- #### Direct Use 'dpr-ctx\_encoder-multiset-base', 'dpr-question\_encoder-multiset-base', and 'dpr-reader-multiset-base' can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Risks, Limitations and Biases ----------------------------- CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Training -------- #### Training Data This model was trained using the following datasets: * Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019) * TriviaQA (Joshi et al., 2017) * WebQuestions (WQ) (Berant et al., 2013) * CuratedTREC (TREC) (Baudiš & Šedivý, 2015) #### Training Procedure The training procedure is described in the associated paper: > > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > > > > > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. > > > The authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. 
See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. Evaluation ---------- The following evaluation information is extracted from the associated paper. #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1. #### Results Environmental Impact -------------------- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and based on the associated paper. * Hardware Type: 8 32GB GPUs * Hours used: Unknown * Cloud Provider: Unknown * Compute Region: Unknown * Carbon Emitted: Unknown Technical Specifications ------------------------ See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. Model Card Authors ------------------ This model card was written by the team at Hugging Face.
[ "#### Direct Use\n\n\n'dpr-ctx\\_encoder-multiset-base', 'dpr-question\\_encoder-multiset-base', and 'dpr-reader-multiset-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the following datasets:\n\n\n* Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019)\n* TriviaQA (Joshi et al., 2017)\n* WebQuestions (WQ) (Berant et al., 2013)\n* CuratedTREC (TREC) (Baudiš & Šedivý, 2015)", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
[ "TAGS\n#transformers #pytorch #tf #dpr #en #dataset-nq_open #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us \n", "#### Direct Use\n\n\n'dpr-ctx\\_encoder-multiset-base', 'dpr-question\\_encoder-multiset-base', and 'dpr-reader-multiset-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the following datasets:\n\n\n* Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019)\n* TriviaQA (Joshi et al., 2017)\n* WebQuestions (WQ) (Berant et al., 2013)\n* CuratedTREC (TREC) (Baudiš & Šedivý, 2015)", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
null
transformers
# `dpr-ctx_encoder-single-nq-base` ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation-results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-ctx_encoder-single-nq-base` is the Context Encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). - **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers - **Model Type:** BERT-based encoder - **Language(s):** English - **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md) - **Related Models:** - [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) - [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) - [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base) - [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2004.04906) - [GitHub Repo](https://github.com/facebookresearch/DPR) - [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr) - [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased) ## How to Get Started with the Model Use the code below to get started with the model. ```python >>> from transformers import DPRContextEncoder, DPRContextEncoderTokenizer >>> tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") >>> model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") >>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"] >>> embeddings = model(input_ids).pooler_output ``` ## Uses #### Direct Use `dpr-ctx_encoder-single-nq-base`, [`dpr-question-encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base), and [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
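The Direct Use section above combines this context encoder with a question encoder and a reader. A minimal sketch of the reader stage using the companion `dpr-reader-single-nq-base` checkpoint listed under Related Models; the question, title, and passage are placeholders.

```python
# Sketch: score a retrieved passage and its answer span with the companion DPR reader.
# The question, title, and passage below are placeholder examples.
import torch
from transformers import DPRReader, DPRReaderTokenizer

tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base")

encoded = tokenizer(
    questions=["What is the capital of France?"],
    titles=["Paris"],
    texts=["Paris is the capital and most populous city of France."],
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**encoded)

# Per-token span scores plus a passage-level relevance score.
print(outputs.start_logits.shape, outputs.end_logits.shape, outputs.relevance_logits)
```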
## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Training #### Training Data This model was trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). The model authors write that: > [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators. #### Training Procedure The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf): > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d-dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. ## Evaluation The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf). #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad). #### Results | | Top 20 | | | | | Top 100| | | | | |:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:| | | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD | | | 78.4 | 79.4 |73.2| 79.8 | 63.2 | 85.4 | 85.0 |81.4| 89.1 | 77.2 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", } ``` ## Model Card Authors This model card was written by the team at Hugging Face.
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["dpr"], "datasets": ["nq_open"], "inference": false}
facebook/dpr-ctx_encoder-single-nq-base
null
[ "transformers", "pytorch", "tf", "dpr", "en", "dataset:nq_open", "arxiv:2004.04906", "arxiv:1702.08734", "arxiv:1910.09700", "license:cc-by-nc-4.0", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.04906", "1702.08734", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #dpr #en #dataset-nq_open #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us
'dpr-ctx\_encoder-single-nq-base' ================================= Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Description: Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. 'dpr-ctx\_encoder-single-nq-base' is the Context Encoder trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). * Developed by: See GitHub repo for model developers * Model Type: BERT-based encoder * Language(s): CC-BY-NC-4.0, also see Code of Conduct * License: English * Related Models: + 'dpr-question-encoder-single-nq-base' + 'dpr-reader-single-nq-base' + 'dpr-ctx\_encoder-multiset-base' + 'dpr-question\_encoder-multiset-base' + 'dpr-reader-multiset-base' * Resources for more information: + Research Paper + GitHub Repo + Hugging Face DPR docs + BERT Base Uncased Model Card How to Get Started with the Model --------------------------------- Use the code below to get started with the model. Uses ---- #### Direct Use 'dpr-ctx\_encoder-single-nq-base', 'dpr-question-encoder-single-nq-base', and 'dpr-reader-single-nq-base' can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Risks, Limitations and Biases ----------------------------- CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Training -------- #### Training Data This model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that: > > [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators. > > > #### Training Procedure The training procedure is described in the associated paper: > > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > > > > > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. 
> > > The authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. Evaluation ---------- The following evaluation information is extracted from the associated paper. #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1. #### Results Environmental Impact -------------------- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and based on the associated paper. * Hardware Type: 8 32GB GPUs * Hours used: Unknown * Cloud Provider: Unknown * Compute Region: Unknown * Carbon Emitted: Unknown Technical Specifications ------------------------ See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. Model Card Authors ------------------ This model card was written by the team at Hugging Face.
[ "#### Direct Use\n\n\n'dpr-ctx\\_encoder-single-nq-base', 'dpr-question-encoder-single-nq-base', and 'dpr-reader-single-nq-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that:\n\n\n\n> \n> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.\n> \n> \n>", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
[ "TAGS\n#transformers #pytorch #tf #dpr #en #dataset-nq_open #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us \n", "#### Direct Use\n\n\n'dpr-ctx\\_encoder-single-nq-base', 'dpr-question-encoder-single-nq-base', and 'dpr-reader-single-nq-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that:\n\n\n\n> \n> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.\n> \n> \n>", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
feature-extraction
transformers
# `dpr-question_encoder-multiset-base` ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-question_encoder-multiset-base` is the question encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), and [CuratedTREC (TREC)](https://huggingface.co/datasets/trec). - **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers - **Model Type:** BERT-based encoder - **Language(s):** English - **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md) - **Related Models:** - [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base) - [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) - [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base) - [`dpr-question_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) - [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2004.04906) - [GitHub Repo](https://github.com/facebookresearch/DPR) - [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr) - [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased) ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base") model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base") input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"] embeddings = model(input_ids).pooler_output ``` ## Uses #### Direct Use `dpr-question_encoder-multiset-base`, [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base), and [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
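As an illustrative aside (not part of the original usage instructions), the question embedding produced in the snippet above is only useful together with passage embeddings from the companion context encoder. A minimal sketch of scoring a question against a couple of made-up passages by inner product, the similarity used by DPR, might look like the following; the passage texts are toy examples and not from any real corpus:

```python
import torch
from transformers import (
    DPRContextEncoder,
    DPRContextEncoderTokenizer,
    DPRQuestionEncoder,
    DPRQuestionEncoderTokenizer,
)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-multiset-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-multiset-base")
ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")

# Toy passages for illustration only.
passages = [
    "'What Is Love' is a song recorded by the artist Haddaway.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
]

with torch.no_grad():
    q_emb = q_enc(**q_tok("What is love?", return_tensors="pt")).pooler_output
    p_emb = ctx_enc(**ctx_tok(passages, padding=True, return_tensors="pt")).pooler_output

# DPR ranks passages by the inner product between question and passage vectors.
scores = q_emb @ p_emb.T
print(passages[int(scores.argmax())])
```

In a real open-domain QA pipeline the passage embeddings would be precomputed for a large corpus and indexed (for example with FAISS, as noted in the Training Procedure section) rather than encoded on the fly.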
## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Training #### Training Data This model was trained using the following datasets: - **[Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open)** ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)) - **[TriviaQA](https://huggingface.co/datasets/trivia_qa)** ([Joshi et al., 2017](https://aclanthology.org/P17-1147/)) - **[WebQuestions (WQ)](https://huggingface.co/datasets/web_questions)** ([Berant et al., 2013](https://aclanthology.org/D13-1160/)) - **[CuratedTREC (TREC)](https://huggingface.co/datasets/trec)** ([Baudiš & Šedivý, 2015](https://www.aminer.cn/pub/599c7953601a182cd263079b/reading-wikipedia-to-answer-open-domain-questions)) #### Training Procedure The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf): > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and used FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. ## Evaluation The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf). #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad).
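For readers unfamiliar with the metric, top-k retrieval accuracy is the fraction of questions for which at least one of the top-k retrieved passages contains a correct answer. A small, purely illustrative helper (not the authors' evaluation code; the gold "answer-bearing passage id" sets are a simplifying assumption) could look like:

```python
from typing import Sequence, Set

def top_k_accuracy(
    retrieved_ids: Sequence[Sequence[int]],  # ranked passage ids per question
    gold_ids: Sequence[Set[int]],            # answer-bearing passage ids per question
    k: int,
) -> float:
    """Fraction of questions with at least one answer-bearing passage in the top k."""
    hits = sum(
        any(pid in gold for pid in ranked[:k])
        for ranked, gold in zip(retrieved_ids, gold_ids)
    )
    return hits / len(gold_ids)

# Example: top-20 accuracy over two questions (only the first one is a hit).
print(top_k_accuracy([[3, 7, 9], [5, 1, 2]], [{9}, {8}], k=20))  # 0.5
```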
#### Results | | Top 20 | | | | | Top 100| | | | | |:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:| | | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD | | | 79.4 | 78.8 |75.0| 89.1 | 51.6 | 86.0 | 84.7 |82.9| 93.9 | 67.6 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/abs/2004.04906). - **Hardware Type:** 8 32GB GPUs - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", } ``` ## Model Card Authors This model card was written by the team at Hugging Face.
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["dpr"], "datasets": ["nq_open", "trivia_qa", "web_questions", "trec"], "inference": false}
facebook/dpr-question_encoder-multiset-base
null
[ "transformers", "pytorch", "tf", "dpr", "feature-extraction", "en", "dataset:nq_open", "dataset:trivia_qa", "dataset:web_questions", "dataset:trec", "arxiv:2004.04906", "arxiv:1702.08734", "arxiv:1910.09700", "license:cc-by-nc-4.0", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.04906", "1702.08734", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #dpr #feature-extraction #en #dataset-nq_open #dataset-trivia_qa #dataset-web_questions #dataset-trec #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us
'dpr-question\_encoder-multiset-base' ===================================== Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Description: Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. 'dpr-question\_encoder-multiset-base' is the question encoder trained using the Natural Questions (NQ) dataset, TriviaQA, WebQuestions (WQ), and CuratedTREC (TREC). * Developed by: See GitHub repo for model developers * Model Type: BERT-based encoder * Language(s): CC-BY-NC-4.0, also see Code of Conduct * License: English * Related Models: + 'dpr-ctx\_encoder-multiset-base' + 'dpr-reader-multiset-base' + 'dpr-ctx\_encoder-single-nq-base' + 'dpr-question\_encoder-single-nq-base' + 'dpr-reader-single-nq-base' * Resources for more information: + Research Paper + GitHub Repo + Hugging Face DPR docs + BERT Base Uncased Model Card How to Get Started with the Model --------------------------------- Use the code below to get started with the model. Uses ---- #### Direct Use 'dpr-question\_encoder-multiset-base', 'dpr-ctx\_encoder-multiset-base', and 'dpr-reader-multiset-base' can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Risks, Limitations and Biases ----------------------------- CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Training -------- #### Training Data This model was trained using the following datasets: * Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019) * TriviaQA (Joshi et al., 2017) * WebQuestions (WQ) (Berant et al., 2013) * CuratedTREC (TREC) (Baudiš & Šedivý, 2015) #### Training Procedure The training procedure is described in the associated paper: > > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > > > > > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. 
> > > The authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. Evaluation ---------- The following evaluation information is extracted from the associated paper. #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1. #### Results Environmental Impact -------------------- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and based on the associated paper. * Hardware Type: 8 32GB GPUs * Hours used: Unknown * Cloud Provider: Unknown * Compute Region: Unknown * Carbon Emitted: Unknown Technical Specifications ------------------------ See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. Model Card Authors ------------------ This model card was written by the team at Hugging Face.
[ "#### Direct Use\n\n\n'dpr-question\\_encoder-multiset-base', 'dpr-ctx\\_encoder-multiset-base', and 'dpr-reader-multiset-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the following datasets:\n\n\n* Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019)\n* TriviaQA (Joshi et al., 2017)\n* WebQuestions (WQ) (Berant et al., 2013)\n* CuratedTREC (TREC) (Baudiš & Šedivý, 2015)", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
[ "TAGS\n#transformers #pytorch #tf #dpr #feature-extraction #en #dataset-nq_open #dataset-trivia_qa #dataset-web_questions #dataset-trec #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us \n", "#### Direct Use\n\n\n'dpr-question\\_encoder-multiset-base', 'dpr-ctx\\_encoder-multiset-base', and 'dpr-reader-multiset-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the following datasets:\n\n\n* Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019)\n* TriviaQA (Joshi et al., 2017)\n* WebQuestions (WQ) (Berant et al., 2013)\n* CuratedTREC (TREC) (Baudiš & Šedivý, 2015)", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
feature-extraction
transformers
# `dpr-question_encoder-single-nq-base` ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-question_encoder-single-nq-base` is the question encoder trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). - **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers - **Model Type:** BERT-based encoder - **Language(s):** English - **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md) - **Related Models:** - [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base) - [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) - [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base) - [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2004.04906) - [GitHub Repo](https://github.com/facebookresearch/DPR) - [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr) - [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased) ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base") model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base") input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"] embeddings = model(input_ids).pooler_output ``` ## Uses #### Direct Use `dpr-question_encoder-single-nq-base`, [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base), and [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
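To make the direct-use scenario above more concrete: the associated paper reports that passage embeddings are indexed with FAISS at inference time, so retrieval with this question encoder might look roughly like the sketch below. This is a hypothetical example, not the authors' code; the `passage_embeddings.npy` file is assumed to hold 768-dimensional vectors produced offline with the companion [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base) encoder.

```python
import faiss
import numpy as np
import torch
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# Assumed to exist: passage vectors precomputed with the context encoder.
passage_embeddings = np.load("passage_embeddings.npy").astype("float32")

# DPR scores question/passage pairs by inner product, so use an inner-product index.
index = faiss.IndexFlatIP(passage_embeddings.shape[1])
index.add(passage_embeddings)

tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

with torch.no_grad():
    question_vec = model(
        **tokenizer("who sang what is love", return_tensors="pt")
    ).pooler_output.numpy().astype("float32")

scores, passage_indices = index.search(question_vec, 20)  # top-20 passages
```

The retrieved passages would then be handed to the reader model to extract an answer span.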
## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Training #### Training Data This model was trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). The model authors write that: > [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators. #### Training Procedure The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf): > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and used FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. ## Evaluation The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf). #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad). #### Results | | Top 20 | | | | | Top 100| | | | | |:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:| | | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD | | | 78.4 | 79.4 |73.2| 79.8 | 63.2 | 85.4 | 85.0 |81.4| 89.1 | 77.2 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", } ``` ## Model Card Authors This model card was written by the team at Hugging Face.
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["dpr"], "datasets": ["nq_open"], "inference": false}
facebook/dpr-question_encoder-single-nq-base
null
[ "transformers", "pytorch", "tf", "dpr", "feature-extraction", "en", "dataset:nq_open", "arxiv:2004.04906", "arxiv:1702.08734", "arxiv:1910.09700", "license:cc-by-nc-4.0", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.04906", "1702.08734", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #dpr #feature-extraction #en #dataset-nq_open #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us
'dpr-question\_encoder-single-nq-base' ====================================== Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Description: Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. 'dpr-question\_encoder-single-nq-base' is the question encoder trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). * Developed by: See GitHub repo for model developers * Model Type: BERT-based encoder * Language(s): CC-BY-NC-4.0, also see Code of Conduct * License: English * Related Models: + 'dpr-ctx\_encoder-single-nq-base' + 'dpr-reader-single-nq-base' + 'dpr-ctx\_encoder-multiset-base' + 'dpr-question\_encoder-multiset-base' + 'dpr-reader-multiset-base' * Resources for more information: + Research Paper + GitHub Repo + Hugging Face DPR docs + BERT Base Uncased Model Card How to Get Started with the Model --------------------------------- Use the code below to get started with the model. Uses ---- #### Direct Use 'dpr-question\_encoder-single-nq-base', 'dpr-ctx\_encoder-single-nq-base', and 'dpr-reader-single-nq-base' can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Risks, Limitations and Biases ----------------------------- CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Training -------- #### Training Data This model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that: > > [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators. > > > #### Training Procedure The training procedure is described in the associated paper: > > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > > > > > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. 
> > > The authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. Evaluation ---------- The following evaluation information is extracted from the associated paper. #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1. #### Results Environmental Impact -------------------- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and based on the associated paper. * Hardware Type: 8 32GB GPUs * Hours used: Unknown * Cloud Provider: Unknown * Compute Region: Unknown * Carbon Emitted: Unknown Technical Specifications ------------------------ See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. Model Card Authors ------------------ This model card was written by the team at Hugging Face.
[ "#### Direct Use\n\n\n'dpr-question\\_encoder-single-nq-base', 'dpr-ctx\\_encoder-single-nq-base', and 'dpr-reader-single-nq-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that:\n\n\n\n> \n> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.\n> \n> \n>", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
[ "TAGS\n#transformers #pytorch #tf #dpr #feature-extraction #en #dataset-nq_open #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us \n", "#### Direct Use\n\n\n'dpr-question\\_encoder-single-nq-base', 'dpr-ctx\\_encoder-single-nq-base', and 'dpr-reader-single-nq-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that:\n\n\n\n> \n> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.\n> \n> \n>", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
null
transformers
# `dpr-reader-multiset-base` ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-reader-multiset-base` is the reader model trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), and [CuratedTREC (TREC)](https://huggingface.co/datasets/trec). - **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers - **Model Type:** BERT-based encoder - **Language(s):** English - **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md) - **Related Models:** - [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base) - [`dpr-question_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) - [`dpr-reader-single-nq-base`](https://huggingface.co/facebook/dpr-reader-single-nq-base) - [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base) - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2004.04906) - [GitHub Repo](https://github.com/facebookresearch/DPR) - [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr) - [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased) ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import DPRReader, DPRReaderTokenizer tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-multiset-base") model = DPRReader.from_pretrained("facebook/dpr-reader-multiset-base") encoded_inputs = tokenizer( questions=["What is love ?"], titles=["Haddaway"], texts=["'What Is Love' is a song recorded by the artist Haddaway"], return_tensors="pt", ) outputs = model(**encoded_inputs) start_logits = outputs.start_logits end_logits = outputs.end_logits relevance_logits = outputs.relevance_logits ``` ## Uses #### Direct Use `dpr-reader-multiset-base`, [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base), and [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base) can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
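As a rough illustration (not from the original card) of how the reader outputs in the snippet above might be turned into an answer string, one simple heuristic is to take the passage with the highest relevance logit and the best start/end pair inside it; production pipelines typically use more careful span decoding:

```python
import torch

# Assumes `tokenizer`, `encoded_inputs`, and `outputs` from the snippet above.
best_passage = int(torch.argmax(outputs.relevance_logits))      # most relevant passage
start = int(torch.argmax(outputs.start_logits[best_passage]))   # best span start
# Restrict the end index to positions at or after the chosen start.
end = start + int(torch.argmax(outputs.end_logits[best_passage, start:]))

answer_ids = encoded_inputs["input_ids"][best_passage, start : end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```

This greedy decoding is only a sketch; it ignores span-length limits and ties that a full implementation would handle.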
## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Training #### Training Data This model was trained using the following datasets: - **[Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open)** ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)) - **[TriviaQA](https://huggingface.co/datasets/trivia_qa)** ([Joshi et al., 2017](https://aclanthology.org/P17-1147/)) - **[WebQuestions (WQ)](https://huggingface.co/datasets/web_questions)** ([Berant et al., 2013](https://aclanthology.org/D13-1160/)) - **[CuratedTREC (TREC)](https://huggingface.co/datasets/trec)** ([Baudiš & Šedivý, 2015](https://www.aminer.cn/pub/599c7953601a182cd263079b/reading-wikipedia-to-answer-open-domain-questions)) #### Training Procedure The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf): > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and used FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. ## Evaluation The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf). #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad).
#### Results | | Top 20 | | | | | Top 100| | | | | |:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:| | | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD | | | 79.4 | 78.8 |75.0| 89.1 | 51.6 | 86.0 | 84.7 |82.9| 93.9 | 67.6 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/abs/2004.04906). - **Hardware Type:** 8 32GB GPUs - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", } ``` ## Model Card Authors This model card was written by the team at Hugging Face.
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["dpr"], "datasets": ["nq_open", "trivia_qa", "web_questions", "trec"], "inference": false}
facebook/dpr-reader-multiset-base
null
[ "transformers", "pytorch", "tf", "dpr", "en", "dataset:nq_open", "dataset:trivia_qa", "dataset:web_questions", "dataset:trec", "arxiv:2004.04906", "arxiv:1702.08734", "arxiv:1910.09700", "license:cc-by-nc-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.04906", "1702.08734", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #dpr #en #dataset-nq_open #dataset-trivia_qa #dataset-web_questions #dataset-trec #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #region-us
'dpr-reader-multiset-base' ========================== Table of Contents ----------------- * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Description: Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. 'dpr-reader-multiset-base' is the reader model trained using the Natural Questions (NQ) dataset, TriviaQA, WebQuestions (WQ), and CuratedTREC (TREC). * Developed by: See GitHub repo for model developers * Model Type: BERT-based encoder * Language(s): CC-BY-NC-4.0, also see Code of Conduct * License: English * Related Models: + 'dpr-question\_encoder-multiset-base' + 'dpr-ctx\_encoder-multiset-base' + 'dpr-question-encoder-single-nq-base' + 'dpr-reader-single-nq-base' + 'dpr-ctx\_encoder-single-nq-base' * Resources for more information: + Research Paper + GitHub Repo + Hugging Face DPR docs + BERT Base Uncased Model Card How to Get Started with the Model --------------------------------- Use the code below to get started with the model. Uses ---- #### Direct Use 'dpr-reader-multiset-base', 'dpr-question\_encoder-multiset-base', and 'dpr-ctx\_encoder-multiset-base' can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Risks, Limitations and Biases ----------------------------- CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Training -------- #### Training Data This model was trained using the following datasets: * Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019) * TriviaQA (Joshi et al., 2017) * WebQuestions (WQ) (Berant et al., 2013) * CuratedTREC (TREC) (Baudiš & Šedivý, 2015) #### Training Procedure The training procedure is described in the associated paper: > > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > > > > > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. > > > The authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. 
See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. Evaluation ---------- The following evaluation information is extracted from the associated paper. #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1. #### Results Environmental Impact -------------------- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. * Hardware Type: 8 32GB GPUs * Hours used: Unknown * Cloud Provider: Unknown * Compute Region: Unknown * Carbon Emitted: Unknown Technical Specifications ------------------------ See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. Model Card Authors ------------------ This model card was written by the team at Hugging Face.
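The "Use the code below" reference above lost its snippet in this plain-text rendering. A minimal sketch, adapted from the `dpr-reader-single-nq-base` example later in this file and assuming `facebook/dpr-reader-multiset-base` exposes the same `DPRReader` interface:

```python
from transformers import DPRReader, DPRReaderTokenizer

# Assumed to mirror the single-nq reader usage; only the checkpoint name differs.
tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-multiset-base")
model = DPRReader.from_pretrained("facebook/dpr-reader-multiset-base")

encoded_inputs = tokenizer(
    questions=["What is love ?"],
    titles=["Haddaway"],
    texts=["'What Is Love' is a song recorded by the artist Haddaway"],
    return_tensors="pt",
)
outputs = model(**encoded_inputs)
start_logits = outputs.start_logits          # answer-span start scores per passage
end_logits = outputs.end_logits              # answer-span end scores per passage
relevance_logits = outputs.relevance_logits  # passage re-ranking scores
```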
[ "#### Direct Use\n\n\n'dpr-reader-multiset-base', 'dpr-question\\_encoder-multiset-base', and 'dpr-ctx\\_encoder-multiset-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the following datasets:\n\n\n* Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019)\n* TriviaQA (Joshi et al., 2017)\n* WebQuestions (WQ) (Berant et al., 2013)\n* CuratedTREC (TREC) (Baudiš & Šedivý, 2015)", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
[ "TAGS\n#transformers #pytorch #tf #dpr #en #dataset-nq_open #dataset-trivia_qa #dataset-web_questions #dataset-trec #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #region-us \n", "#### Direct Use\n\n\n'dpr-reader-multiset-base', 'dpr-question\\_encoder-multiset-base', and 'dpr-ctx\\_encoder-multiset-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the following datasets:\n\n\n* Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019)\n* TriviaQA (Joshi et al., 2017)\n* WebQuestions (WQ) (Berant et al., 2013)\n* CuratedTREC (TREC) (Baudiš & Šedivý, 2015)", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
null
transformers
`dpr-reader-single-nq-base` # Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation-results) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-reader-single-nq-base` is the reader model trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). - **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers - **Model Type:** QA Reader Model - **Language(s):** English - **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md) - **Related Models:** - [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base) - [`dpr-question_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) - [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base) - [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base) - [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base) - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2004.04906) - [GitHub Repo](https://github.com/facebookresearch/DPR) - [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr) - [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased) ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import DPRReader, DPRReaderTokenizer tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base") model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base") encoded_inputs = tokenizer( questions=["What is love ?"], titles=["Haddaway"], texts=["'What Is Love' is a song recorded by the artist Haddaway"], return_tensors="pt", ) outputs = model(**encoded_inputs) start_logits = outputs.start_logits end_logits = outputs.end_logits relevance_logits = outputs.relevance_logits ``` ## Uses #### Direct Use `dpr-reader-single-nq-base`, [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base), and [`dpr-question_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Training #### Training Data This model was trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). The model authors write that: > [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators. #### Training Procedure The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf): > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector. The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. ## Evaluation The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf). #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad). #### Results | | Top 20 | | | | | Top 100| | | | | |:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:| | | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD | | | 78.4 | 79.4 |73.2| 79.8 | 63.2 | 85.4 | 85.0 |81.4| 89.1 | 77.2 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type and hours used based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @inproceedings{karpukhin-etal-2020-dense, title = "Dense Passage Retrieval for Open-Domain Question Answering", author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.550", doi = "10.18653/v1/2020.emnlp-main.550", pages = "6769--6781", } ``` ## Model Card Authors This model card was written by the team at Hugging Face.
{"language": "en", "license": "cc-by-nc-4.0", "tags": ["dpr"], "datasets": ["nq_open"], "inference": false}
facebook/dpr-reader-single-nq-base
null
[ "transformers", "pytorch", "tf", "dpr", "en", "dataset:nq_open", "arxiv:2004.04906", "arxiv:1702.08734", "arxiv:1910.09700", "license:cc-by-nc-4.0", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2004.04906", "1702.08734", "1910.09700" ]
[ "en" ]
TAGS #transformers #pytorch #tf #dpr #en #dataset-nq_open #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us
'dpr-reader-single-nq-base' Table of Contents ================= * Model Details * How To Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Technical Specifications * Citation Information * Model Card Authors Model Details ------------- Model Description: Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. 'dpr-reader-single-nq-base' is the reader model trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). * Developed by: See GitHub repo for model developers * Model Type: QA Reader Model * Language(s): English * License: CC-BY-NC-4.0, also see Code of Conduct * Related Models: + 'dpr-ctx\_encoder-single-nq-base' + 'dpr-question\_encoder-single-nq-base' + 'dpr-ctx\_encoder-multiset-base' + 'dpr-question\_encoder-multiset-base' + 'dpr-reader-multiset-base' * Resources for more information: + Research Paper + GitHub Repo + Hugging Face DPR docs + BERT Base Uncased Model Card How to Get Started with the Model --------------------------------- Use the code below to get started with the model. Uses ---- #### Direct Use 'dpr-reader-single-nq-base', 'dpr-ctx\_encoder-single-nq-base', and 'dpr-question\_encoder-single-nq-base' can be used for the task of open-domain question answering. #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model. Risks, Limitations and Biases ----------------------------- CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. Training -------- #### Training Data This model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that: > > [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators. > > > #### Training Procedure The training procedure is described in the associated paper: > > Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time. > > > > > Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.
> > > The authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives. Evaluation ---------- The following evaluation information is extracted from the associated paper. #### Testing Data, Factors and Metrics The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1. #### Results Environmental Impact -------------------- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type and hours used based on the associated paper. * Hardware Type: 8 32GB GPUs * Hours used: Unknown * Cloud Provider: Unknown * Compute Region: Unknown * Carbon Emitted: Unknown Technical Specifications ------------------------ See the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details. Model Card Authors ------------------ This model card was written by the team at Hugging Face.
[ "#### Direct Use\n\n\n'dpr-reader-single-nq-base', 'dpr-ctx\\_encoder-single-nq-base', and 'dpr-question\\_encoder-single-nq-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that:\n\n\n\n> \n> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.\n> \n> \n>", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
[ "TAGS\n#transformers #pytorch #tf #dpr #en #dataset-nq_open #arxiv-2004.04906 #arxiv-1702.08734 #arxiv-1910.09700 #license-cc-by-nc-4.0 #has_space #region-us \n", "#### Direct Use\n\n\n'dpr-reader-single-nq-base', 'dpr-ctx\\_encoder-single-nq-base', and 'dpr-question\\_encoder-single-nq-base' can be used for the task of open-domain question answering.", "#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nCONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propogate historical and current stereotypes.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al., 2021 and Bender et al., 2021). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.\n\n\nTraining\n--------", "#### Training Data\n\n\nThis model was trained using the Natural Questions (NQ) dataset (Lee et al., 2019; Kwiatkowski et al., 2019). The model authors write that:\n\n\n\n> \n> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.\n> \n> \n>", "#### Training Procedure\n\n\nThe training procedure is described in the associated paper:\n\n\n\n> \n> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.\n> \n> \n> \n\n\n\n> \n> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d- dimensional real-valued vectors and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.\n> \n> \n> \n\n\nThe authors report that for encoders, they used two independent BERT (Devlin et al., 2019) networks (base, un-cased) and use FAISS (Johnson et al., 2017) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.\n\n\nEvaluation\n----------\n\n\nThe following evaluation information is extracted from the associated paper.", "#### Testing Data, Factors and Metrics\n\n\nThe model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were NQ, TriviaQA, WebQuestions (WQ), CuratedTREC (TREC), and SQuAD v1.1.", "#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). 
We present the hardware type and based on the associated paper.\n\n\n* Hardware Type: 8 32GB GPUs\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture, objective, compute infrastructure, and training details.\n\n\nModel Card Authors\n------------------\n\n\nThis model card was written by the team at Hugging Face." ]
fill-mask
transformers
This repository has been deprecated and will be deleted shortly. All ESM models have been moved to their official names to match their naming at the original FAIR repo. You can now find the ESM-1b model at [facebook/esm1b_t33_650M_UR50S](https://huggingface.co/facebook/esm1b_t33_650M_UR50S).
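Since the weights now live under the official name, a minimal fill-mask sketch against the new repository may help with migration. The classes are the standard `transformers` ESM classes rather than anything specific to this deprecated repo, and the protein sequence and masked position below are illustrative only:

```python
import torch
from transformers import AutoTokenizer, EsmForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("facebook/esm1b_t33_650M_UR50S")
model = EsmForMaskedLM.from_pretrained("facebook/esm1b_t33_650M_UR50S")

# Dummy protein sequence with one residue masked out (illustrative only).
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
masked = sequence[:10] + tokenizer.mask_token + sequence[11:]

inputs = tokenizer(masked, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Most likely residue at the masked position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_residue = tokenizer.decode(logits[0, mask_index].argmax(dim=-1))
```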
{}
facebook/esm-1b
null
[ "transformers", "pytorch", "safetensors", "esm", "fill-mask", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #esm #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us
This repository has been deprecated and will be deleted shortly. All ESM models have been moved to their official names to match their naming at the original FAIR repo. You can now find the ESM-1b model at facebook/esm1b_t33_650M_UR50S.
[]
[ "TAGS\n#transformers #pytorch #safetensors #esm #fill-mask #autotrain_compatible #endpoints_compatible #has_space #region-us \n" ]
text-to-speech
fairseq
# fastspeech2-en-200_speaker-cv4 [FastSpeech 2](https://arxiv.org/abs/2006.04558) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - English - 200 male/female voices (random speaker when using the widget) - Trained on [Common Voice v4](https://commonvoice.mozilla.org/en/datasets) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/fastspeech2-en-200_speaker-cv4", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Hello, this is a test run." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/common_voice_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
{"language": "en", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech", "multi-speaker"], "datasets": ["common_voice"], "task": "text-to-speech", "widget": [{"text": "Hello, this is a test run.", "example_title": "Hello, this is a test run."}]}
facebook/fastspeech2-en-200_speaker-cv4
null
[ "fairseq", "audio", "text-to-speech", "multi-speaker", "en", "dataset:common_voice", "arxiv:2006.04558", "arxiv:2109.06912", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2006.04558", "2109.06912" ]
[ "en" ]
TAGS #fairseq #audio #text-to-speech #multi-speaker #en #dataset-common_voice #arxiv-2006.04558 #arxiv-2109.06912 #has_space #region-us
# fastspeech2-en-200_speaker-cv4 FastSpeech 2 text-to-speech model from fairseq S^2 (paper/code): - English - 200 male/female voices (random speaker when using the widget) - Trained on Common Voice v4 ## Usage See also fairseq S^2 example.
[ "# fastspeech2-en-200_speaker-cv4\n\nFastSpeech 2 text-to-speech model from fairseq S^2 (paper/code):\n- English\n- 200 male/female voices (random speaker when using the widget)\n- Trained on Common Voice v4", "## Usage\n\n\n\nSee also fairseq S^2 example." ]
[ "TAGS\n#fairseq #audio #text-to-speech #multi-speaker #en #dataset-common_voice #arxiv-2006.04558 #arxiv-2109.06912 #has_space #region-us \n", "# fastspeech2-en-200_speaker-cv4\n\nFastSpeech 2 text-to-speech model from fairseq S^2 (paper/code):\n- English\n- 200 male/female voices (random speaker when using the widget)\n- Trained on Common Voice v4", "## Usage\n\n\n\nSee also fairseq S^2 example." ]
text-to-speech
fairseq
# fastspeech2-en-ljspeech [FastSpeech 2](https://arxiv.org/abs/2006.04558) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)): - English - Single-speaker female voice - Trained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) ## Usage ```python from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub from fairseq.models.text_to_speech.hub_interface import TTSHubInterface import IPython.display as ipd models, cfg, task = load_model_ensemble_and_task_from_hf_hub( "facebook/fastspeech2-en-ljspeech", arg_overrides={"vocoder": "hifigan", "fp16": False} ) model = models[0] TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg) generator = task.build_generator(model, cfg) text = "Hello, this is a test run." sample = TTSHubInterface.get_model_input(task, text) wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample) ipd.Audio(wav, rate=rate) ``` See also [fairseq S^2 example](https://github.com/pytorch/fairseq/blob/main/examples/speech_synthesis/docs/ljspeech_example.md). ## Citation ```bibtex @inproceedings{wang-etal-2021-fairseq, title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit", author = "Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-demo.17", doi = "10.18653/v1/2021.emnlp-demo.17", pages = "143--152", } ```
{"language": "en", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["ljspeech"], "task": "text-to-speech", "widget": [{"text": "Hello, this is a test run.", "example_title": "Hello, this is a test run."}]}
facebook/fastspeech2-en-ljspeech
null
[ "fairseq", "audio", "text-to-speech", "en", "dataset:ljspeech", "arxiv:2006.04558", "arxiv:2109.06912", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2006.04558", "2109.06912" ]
[ "en" ]
TAGS #fairseq #audio #text-to-speech #en #dataset-ljspeech #arxiv-2006.04558 #arxiv-2109.06912 #has_space #region-us
# fastspeech2-en-ljspeech FastSpeech 2 text-to-speech model from fairseq S^2 (paper/code): - English - Single-speaker female voice - Trained on LJSpeech ## Usage See also fairseq S^2 example.
[ "# fastspeech2-en-ljspeech\n\nFastSpeech 2 text-to-speech model from fairseq S^2 (paper/code):\n- English\n- Single-speaker female voice\n- Trained on LJSpeech", "## Usage\n\n\n\nSee also fairseq S^2 example." ]
[ "TAGS\n#fairseq #audio #text-to-speech #en #dataset-ljspeech #arxiv-2006.04558 #arxiv-2109.06912 #has_space #region-us \n", "# fastspeech2-en-ljspeech\n\nFastSpeech 2 text-to-speech model from fairseq S^2 (paper/code):\n- English\n- Single-speaker female voice\n- Trained on LJSpeech", "## Usage\n\n\n\nSee also fairseq S^2 example." ]
feature-extraction
transformers
# Hubert-Base [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
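The card links out for fine-tuning instructions but ships no snippet here. A minimal feature-extraction sketch, assuming you already have 16kHz mono audio as a float tensor (the random waveform below is a stand-in, not real speech):

```python
import torch
from transformers import HubertModel

model = HubertModel.from_pretrained("facebook/hubert-base-ls960")
model.eval()

# Stand-in for one second of 16 kHz speech; replace with a real waveform.
input_values = torch.randn(1, 16000)

with torch.no_grad():
    outputs = model(input_values)

# Frame-level speech representations, shape (batch, frames, hidden_size = 768 for the base model).
hidden_states = outputs.last_hidden_state
```

These hidden states are what the feature-extraction pipeline tag refers to; for transcription the model still needs the CTC fine-tuning step described above.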
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["librispeech_asr"]}
facebook/hubert-base-ls960
null
[ "transformers", "pytorch", "tf", "hubert", "feature-extraction", "speech", "en", "dataset:librispeech_asr", "arxiv:2106.07447", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.07447" ]
[ "en" ]
TAGS #transformers #pytorch #tf #hubert #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2106.07447 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Hubert-Base Facebook's Hubert The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for a more detailed explanation of how to fine-tune the model. Paper Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed Abstract Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'.
[ "# Hubert-Base \n\nFacebook's Hubert\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'." ]
[ "TAGS\n#transformers #pytorch #tf #hubert #feature-extraction #speech #en #dataset-librispeech_asr #arxiv-2106.07447 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Hubert-Base \n\nFacebook's Hubert\n\nThe base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'." ]
feature-extraction
transformers
# Hubert-Large [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model. The model was pretrained on [Libri-Light](https://github.com/facebookresearch/libri-light). [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
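For the speech-recognition route the card describes (swap `Wav2Vec2ForCTC` for `HubertForCTC` and fine-tune on labeled data), a minimal warm-start sketch; the vocabulary size of 32 and the frozen feature encoder are illustrative assumptions modeled on the linked fine-tuning blog, not values prescribed by this card:

```python
from transformers import HubertForCTC

# Warm-start a CTC head on top of the pretrained encoder. The randomly
# initialized lm_head triggers a warning until the model is fine-tuned.
model = HubertForCTC.from_pretrained(
    "facebook/hubert-large-ll60k",
    vocab_size=32,               # must match the tokenizer built for your labeled data
    ctc_loss_reduction="mean",   # illustrative; mirrors the linked fine-tuning recipe
)

# The convolutional feature encoder is commonly kept frozen during fine-tuning.
model.freeze_feature_encoder()
```

Actual training (data collator, CTC labels, Trainer setup) then follows the blog referenced in the card.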
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["libri-light"]}
facebook/hubert-large-ll60k
null
[ "transformers", "pytorch", "tf", "hubert", "feature-extraction", "speech", "en", "dataset:libri-light", "arxiv:2106.07447", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.07447" ]
[ "en" ]
TAGS #transformers #pytorch #tf #hubert #feature-extraction #speech #en #dataset-libri-light #arxiv-2106.07447 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Hubert-Large Facebook's Hubert The large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. Note: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for a more detailed explanation of how to fine-tune the model. The model was pretrained on Libri-Light. Paper Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed Abstract Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'.
[ "# Hubert-Large \n\nFacebook's Hubert\n\nThe large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model.\n\nThe model was pretrained on Libri-Light.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'." ]
[ "TAGS\n#transformers #pytorch #tf #hubert #feature-extraction #speech #en #dataset-libri-light #arxiv-2106.07447 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Hubert-Large \n\nFacebook's Hubert\n\nThe large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz.\n\nNote: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Check out this blog for more in-detail explanation of how to fine-tune the model.\n\nThe model was pretrained on Libri-Light.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'." ]
automatic-speech-recognition
transformers
# Hubert-Large-Finetuned [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. The model is a fine-tuned version of [hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k). [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage The model can be used for automatic-speech-recognition as follows: ```python import torch from transformers import Wav2Vec2Processor, HubertForCTC from datasets import load_dataset processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft") model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft") ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.decode(predicted_ids[0]) # ->"A MAN SAID TO THE UNIVERSE SIR I EXIST" ```
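Since the model expects 16 kHz input, a minimal sketch of resampling audio recorded at another rate before running the snippet above (assuming the `datasets` and `torchaudio` packages; the local file path is a placeholder):

```python
import torchaudio
from datasets import Audio, load_dataset

# Option 1: have `datasets` decode and resample the audio column to 16 kHz on the fly.
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Option 2: resample a raw waveform loaded with torchaudio ("speech.wav" is a placeholder).
waveform, orig_sr = torchaudio.load("speech.wav")
waveform_16k = torchaudio.functional.resample(waveform, orig_freq=orig_sr, new_freq=16_000)
```

Either route yields 16 kHz audio that can be passed to the processor exactly as in the snippet above.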
{"language": "en", "license": "apache-2.0", "tags": ["speech", "audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["libri-light", "librispeech_asr"], "model-index": [{"name": "hubert-large-ls960-ft", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 1.9, "name": "Test WER"}]}]}]}
facebook/hubert-large-ls960-ft
null
[ "transformers", "pytorch", "tf", "hubert", "automatic-speech-recognition", "speech", "audio", "hf-asr-leaderboard", "en", "dataset:libri-light", "dataset:librispeech_asr", "arxiv:2106.07447", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.07447" ]
[ "en" ]
TAGS #transformers #pytorch #tf #hubert #automatic-speech-recognition #speech #audio #hf-asr-leaderboard #en #dataset-libri-light #dataset-librispeech_asr #arxiv-2106.07447 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# Hubert-Large-Finetuned Facebook's Hubert The large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. The model is a fine-tuned version of hubert-large-ll60k. Paper Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed Abstract Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under URL . # Usage The model can be used for automatic-speech-recognition as follows:
[ "# Hubert-Large-Finetuned\n\nFacebook's Hubert\n\nThe large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. \n\nThe model is a fine-tuned version of hubert-large-ll60k.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nThe model can be used for automatic-speech-recognition as follows:" ]
[ "TAGS\n#transformers #pytorch #tf #hubert #automatic-speech-recognition #speech #audio #hf-asr-leaderboard #en #dataset-libri-light #dataset-librispeech_asr #arxiv-2106.07447 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "# Hubert-Large-Finetuned\n\nFacebook's Hubert\n\nThe large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. \n\nThe model is a fine-tuned version of hubert-large-ll60k.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nThe model can be used for automatic-speech-recognition as follows:" ]
feature-extraction
transformers
# Hubert-Extra-Large [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The extra large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... The model was pretrained on [Libri-Light](https://github.com/facebookresearch/libri-light). [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `HubertForCTC`.
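Because this checkpoint ships without a CTC head or tokenizer, a common use is extracting per-frame hidden states as speech features. The following is a hedged sketch (it assumes the hosted preprocessor config loads as a `Wav2Vec2FeatureExtractor` and reuses the dummy LibriSpeech split from the other cards on this page):

```python
import torch
from datasets import load_dataset
from transformers import HubertModel, Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-xlarge-ll60k")
model = HubertModel.from_pretrained("facebook/hubert-xlarge-ll60k")

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    # Per-frame contextual representations, shape (batch, frames, hidden_size).
    hidden_states = model(**inputs).last_hidden_state
```

These representations can then feed a downstream head, e.g. for speaker identification or intent classification, as suggested above.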
{"language": "en", "license": "apache-2.0", "tags": ["speech"], "datasets": ["libri-light"]}
facebook/hubert-xlarge-ll60k
null
[ "transformers", "pytorch", "tf", "hubert", "feature-extraction", "speech", "en", "dataset:libri-light", "arxiv:2106.07447", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.07447" ]
[ "en" ]
TAGS #transformers #pytorch #tf #hubert #feature-extraction #speech #en #dataset-libri-light #arxiv-2106.07447 #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Hubert-Extra-Large Facebook's Hubert The extra large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... The model was pretrained on Libri-Light. Paper Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed Abstract Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under URL . # Usage See this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'.
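To make the class swap concrete, a hedged sketch of loading this checkpoint with a freshly initialized CTC head for fine-tuning (the processor path and the CTC-related keyword arguments are illustrative assumptions, not values prescribed by this card):

```python
from transformers import HubertForCTC, Wav2Vec2Processor

# Hypothetical: a Wav2Vec2Processor previously built from your own CTC vocabulary,
# as described in the fine-tuning blog post.
processor = Wav2Vec2Processor.from_pretrained("path/to/your_processor")

model = HubertForCTC.from_pretrained(
    "facebook/hubert-xlarge-ll60k",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
```

From there, training proceeds as in the blog, with `HubertForCTC` simply taking the place of `Wav2Vec2ForCTC`.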
[ "# Hubert-Extra-Large \n\nFacebook's Hubert\n\nThe extra large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nThe model was pretrained on Libri-Light.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'." ]
[ "TAGS\n#transformers #pytorch #tf #hubert #feature-extraction #speech #en #dataset-libri-light #arxiv-2106.07447 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Hubert-Extra-Large \n\nFacebook's Hubert\n\nThe extra large model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc...\n\nThe model was pretrained on Libri-Light.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nSee this blog for more information on how to fine-tune the model. Note that the class 'Wav2Vec2ForCTC' has to be replaced by 'HubertForCTC'." ]
automatic-speech-recognition
transformers
# Hubert-Extra-Large-Finetuned [Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression) The extra large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. The model is a fine-tuned version of [hubert-xlarge-ll60k](https://huggingface.co/facebook/hubert-xlarge-ll60k). [Paper](https://arxiv.org/abs/2106.07447) Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed **Abstract** Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert . # Usage The model can be used for automatic-speech-recognition as follows: ```python import torch from transformers import Wav2Vec2Processor, HubertForCTC from datasets import load_dataset processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-xlarge-ls960-ft") model = HubertForCTC.from_pretrained("facebook/hubert-xlarge-ls960-ft") ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.decode(predicted_ids[0]) # ->"A MAN SAID TO THE UNIVERSE SIR I EXIST" ```
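As a quick, single-utterance sanity check of the transcription above (a hedged sketch, not the official evaluation protocol; it assumes the Hugging Face `evaluate` package and reuses `transcription` and `ds` from the snippet above):

```python
import evaluate

# Word error rate between the greedy transcription and the reference transcript
# of the same dummy LibriSpeech utterance.
wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=[transcription.upper()],
    references=[ds[0]["text"].upper()],
)
print(f"WER on this utterance: {wer:.3f}")
```

The 1.8% test WER reported in the model index is measured over the full LibriSpeech test (clean) split, not over a single example.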
{"language": "en", "license": "apache-2.0", "tags": ["speech", "audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "datasets": ["libri-light", "librispeech_asr"], "model-index": [{"name": "hubert-large-ls960-ft", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "LibriSpeech (clean)", "type": "librispeech_asr", "config": "clean", "split": "test", "args": {"language": "en"}}, "metrics": [{"type": "wer", "value": 1.8, "name": "Test WER"}]}]}]}
facebook/hubert-xlarge-ls960-ft
null
[ "transformers", "pytorch", "tf", "safetensors", "hubert", "automatic-speech-recognition", "speech", "audio", "hf-asr-leaderboard", "en", "dataset:libri-light", "dataset:librispeech_asr", "arxiv:2106.07447", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2106.07447" ]
[ "en" ]
TAGS #transformers #pytorch #tf #safetensors #hubert #automatic-speech-recognition #speech #audio #hf-asr-leaderboard #en #dataset-libri-light #dataset-librispeech_asr #arxiv-2106.07447 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# Hubert-Extra-Large-Finetuned Facebook's Hubert The extra large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. The model is a fine-tuned version of hubert-xlarge-ll60k. Paper Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed Abstract Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. The original model can be found under URL . # Usage The model can be used for automatic-speech-recognition as follows:
[ "# Hubert-Extra-Large-Finetuned\n\nFacebook's Hubert\n\nThe extra large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. \n\nThe model is a fine-tuned version of hubert-xlarge-ll60k.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nThe model can be used for automatic-speech-recognition as follows:" ]
[ "TAGS\n#transformers #pytorch #tf #safetensors #hubert #automatic-speech-recognition #speech #audio #hf-asr-leaderboard #en #dataset-libri-light #dataset-librispeech_asr #arxiv-2106.07447 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", "# Hubert-Extra-Large-Finetuned\n\nFacebook's Hubert\n\nThe extra large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. \n\nThe model is a fine-tuned version of hubert-xlarge-ll60k.\n\nPaper\n\nAuthors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed\n\nAbstract\nSelf-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.\n\nThe original model can be found under URL .", "# Usage\n\nThe model can be used for automatic-speech-recognition as follows:" ]
null
null
# <p align="center"> IC-GAN: Instance-Conditioned GAN </p> Official Pytorch code of [Instance-Conditioned GAN](https://arxiv.org/abs/2109.05070) by Arantxa Casanova, Marlène Careil, Jakob Verbeek, Michał Drożdżal, Adriana Romero-Soriano. ![IC-GAN results](./figures/github_image.png?raw=true) ## Generate images with IC-GAN in a Colab Notebook We provide a [Google Colab notebook](https://colab.research.google.com/github/facebookresearch/ic_gan/blob/main/inference/icgan_colab.ipynb) to generate images with IC-GAN and its class-conditional counter part. We also invite users to check out the [demo on Replicate](https://replicate.ai/arantxacasanova/ic_gan), courtesy of [Replicate](https://replicate.ai/home). The figure below depicts two instances, unseen during training and downloaded from [Creative Commons search](https://search.creativecommons.org), and the generated images with IC-GAN and class-conditional IC-GAN when conditioning on the class "castle": ![IC-GAN results transfer](./figures/icgan_transfer_all_github.png?raw=true) Additionally, and inspired by [this Colab](https://colab.research.google.com/github/eyaler/clip_biggan/blob/main/ClipBigGAN.ipynb), we provide the funcionality in the same Colab notebook to guide generations with text captions, using the [CLIP model](https://github.com/openai/CLIP). As an example, the following Figure shows three instance conditionings and a text caption (top), followed by the resulting generated images with IC-GAN (bottom), when optimizing the noise vector following CLIP's gradient for 100 iterations. ![IC-GAN results transfer CLIP](./figures/icgan_clip.png?raw=true) *Credit for the three instance conditionings, from left to right, that were modified with a resize and central crop:* [1: "Landscape in Bavaria" by shining.darkness, licensed under CC BY 2.0](https://search.creativecommons.org/photos/92ef279c-4469-49a5-aa4b-48ad746f2dc4), [2: "Fantasy Landscape - slolsss" by Douglas Tofoli is marked with CC PDM 1.0](https://search.creativecommons.org/photos/13646adc-f1df-437a-a0dd-8223452ee46c), [3: "How to Draw Landscapes Simply" by Kuwagata Keisai is marked with CC0 1.0](https://search.creativecommons.org/photos/2ab9c3b7-de99-4536-81ed-604ee988bd5f) ## Requirements * Python 3.8 * Cuda v10.2 / Cudnn v7.6.5 * gcc v7.3.0 * Pytorch 1.8.0 * A conda environment can be created from `environment.yaml` by entering the command: `conda env create -f environment.yml`, that contains the aforemention version of Pytorch and other required packages. * Faiss: follow the instructions in the [original repository](https://github.com/facebookresearch/faiss). ## Overview This repository consists of four main folders: * `data_utils`: A common folder to obtain and format the data needed to train and test IC-GAN, agnostic of the specific backbone. * `inference`: Scripts to test the models both qualitatively and quantitatively. * `BigGAN_PyTorch`: It provides the training, evaluation and sampling scripts for IC-GAN with a BigGAN backbone. The code base comes from [Pytorch BigGAN repository](https://github.com/ajbrock/BigGAN-PyTorch), made available under the MIT License. It has been modified to [add additional utilities](#biggan-changelog) and it enables IC-GAN training on top of it. * `stylegan2_ada_pytorch`: It provides the training, evaluation and sampling scripts for IC-GAN with a StyleGAN2 backbone. 
The code base comes from [StyleGAN2 Pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch), made available under the [Nvidia Source Code License](https://nvlabs.github.io/stylegan2-ada-pytorch/license.html). It has been modified to [add additional utilities](#stylegan-changelog) and it enables IC-GAN training on top of it. ## (Python script) Generate images with IC-GAN Alternatively, we can <b> generate images with IC-GAN models </b> directly from a python script, by following the next steps: 1) Download the desired pretrained models (links below) and the [pre-computed 1000 instance features from ImageNet](https://dl.fbaipublicfiles.com/ic_gan/stored_instances.tar.gz) and extract them into a folder `pretrained_models_path`. | model | backbone | class-conditional? | training dataset | resolution | url | |-------------------|-------------------|-------------------|---------------------|--------------------|--------------------| | IC-GAN | BigGAN | No | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res256.tar.gz) | | IC-GAN (half capacity) | BigGAN | No | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res256_halfcap.tar.gz) | | IC-GAN | BigGAN | No | ImageNet | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res128.tar.gz) | | IC-GAN | BigGAN | No | ImageNet | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_imagenet_res64.tar.gz) | | IC-GAN | BigGAN | Yes | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res256.tar.gz) | | IC-GAN (half capacity) | BigGAN | Yes | ImageNet | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res256_halfcap.tar.gz) | | IC-GAN | BigGAN | Yes | ImageNet | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res128.tar.gz) | | IC-GAN | BigGAN | Yes | ImageNet | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenet_res64.tar.gz) | | IC-GAN | BigGAN | Yes | ImageNet-LT | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res256.tar.gz) | | IC-GAN | BigGAN | Yes | ImageNet-LT | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res128.tar.gz) | | IC-GAN | BigGAN | Yes | ImageNet-LT | 64x64 | [model](https://dl.fbaipublicfiles.com/ic_gan/cc_icgan_biggan_imagenetlt_res64.tar.gz) | | IC-GAN | BigGAN | No | COCO-Stuff | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_coco_res256.tar.gz) | | IC-GAN | BigGAN | No | COCO-Stuff | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_biggan_coco_res128.tar.gz) | | IC-GAN | StyleGAN2 | No | COCO-Stuff | 256x256 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_stylegan2_coco_res256.tar.gz) | | IC-GAN | StyleGAN2 | No | COCO-Stuff | 128x128 | [model](https://dl.fbaipublicfiles.com/ic_gan/icgan_stylegan2_coco_res128.tar.gz) | 2) Execute: ``` python inference/generate_images.py --root_path [pretrained_models_path] --model [model] --model_backbone [backbone] --resolution [res] ``` * `model` can be chosen from `["icgan", "cc_icgan"]` to use the IC-GAN or the class-conditional IC-GAN model respectively. * `backbone` can be chosen from `["biggan", "stylegan2"]`. * `res` indicates the resolution at which the model has been trained. For ImageNet, choose one in `[64, 128, 256]`, and for COCO-Stuff, one in `[128, 256]`. 
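For illustration only (the root path below is a placeholder), sampling from the non-class-conditional BigGAN-backbone ImageNet model at 256x256 would then be:
```
python inference/generate_images.py --root_path ./pretrained_models_path --model icgan --model_backbone biggan --resolution 256
```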
This script results in a .PNG file where several generated images are shown, given an instance feature (each row), and a sampled noise vector (each grid position). <b>Additional and optional parameters</b>: * `index`: (None by default), is an integer from 0 to 999 that choses a specific instance feature vector out of the 1000 instances that have been selected with k-means on the ImageNet dataset and stored in `pretrained_models_path/stored_instances`. * `swap_target`: (None by default) is an integer from 0 to 999 indicating an ImageNet class label. This label will be used to condition the class-conditional IC-GAN, regardless of which instance features are being used. * `which_dataset`: (ImageNet by default) can be chosen from `["imagenet", "coco"]` to indicate which dataset (training split) to sample the instances from. * `trained_dataset`: (ImageNet by default) can be chosen from `["imagenet", "coco"]` to indicate the dataset in which the IC-GAN model has been trained on. * `num_imgs_gen`: (5 by default), it changes the number of noise vectors to sample per conditioning. Increasing this number results in a bigger .PNG file to save and load. * `num_conditionings_gen`: (5 by default), it changes the number of conditionings to sample. Increasing this number results in a bigger .PNG file to save and load. * `z_var`: (1.0 by default) controls the truncation factor for the generation. * Optionally, the script can be run with the following additional options `--visualize_instance_images --dataset_path [dataset_path]` to visualize the ground-truth images corresponding to the conditioning instance features, given a path to the dataset's ground-truth images `dataset_path`. Ground-truth instances will be plotted as the leftmost image for each row. ## Data preparation <div id="data-preparation"> <details> <summary>ImageNet</summary> <br> <ol> <li>Download dataset from <a href="https://image-net.org/download.php"> here </a>. </li> <li>Download <a href="https://github.com/facebookresearch/swav"> SwAV </a> feature extractor weights from <a href="https://dl.fbaipublicfiles.com/deepcluster/swav_800ep_pretrain.pth.tar"> here </a>. </li> <li> Replace the paths in data_utils/prepare_data.sh: <code>out_path</code> by the path where hdf5 files will be stored, <code>path_imnet</code> by the path where ImageNet dataset is downloaded, and <code>path_swav</code> by the path where SwAV weights are stored. </li> <li> Execute <code>./data_utils/prepare_data.sh imagenet [resolution]</code>, where <code>[resolution]</code> can be an integer in {64,128,256}. This script will create several hdf5 files: <ul> <li> <code>ILSVRC[resolution]_xy.hdf5</code> and <code>ILSVRC[resolution]_val_xy.hdf5</code>, where images and labels are stored for the training and validation set respectively. </li> <li> <code>ILSVRC[resolution]_feats_[feature_extractor]_resnet50.hdf5</code> that contains the instance features for each image. </li> <li> <code>ILSVRC[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5</code> that contains the list of [k_nn] neighbors for each of the instance features. </li> </ul> </li> </ol> </br> </details> <details> <summary>ImageNet-LT</summary> <br> <ol> <li>Download ImageNet dataset from <a href="https://image-net.org/download.php"> here </a>. 
Following <a href="https://github.com/zhmiao/OpenLongTailRecognition-OLTR"> ImageNet-LT </a>, the file <code>ImageNet_LT_train.txt</code> can be downloaded from <a href="https://drive.google.com/drive/u/1/folders/1j7Nkfe6ZhzKFXePHdsseeeGI877Xu1yf" > this link </a> and later stored in the folder <code>./BigGAN_PyTorch/imagenet_lt</code>. </li> <li>Download the pre-trained weights of the ResNet on ImageNet-LT from <a href="https://dl.fbaipublicfiles.com/classifier-balancing/ImageNet_LT/models/resnet50_uniform_e90.pth"> this link</a>, provided by the <a href="https://github.com/facebookresearch/classifier-balancing"> classifier-balancing repository </a>. </li> <li> Replace the paths in data_utils/prepare_data.sh: <code>out_path</code> by the path where hdf5 files will be stored, <code>path_imnet</code> by the path where ImageNet dataset is downloaded, and <code>path_classifier_lt</code> by the path where the pre-trained ResNet50 weights are stored. </li> <li> Execute <code>./data_utils/prepare_data.sh imagenet_lt [resolution]</code>, where <code>[resolution]</code> can be an integer in {64,128,256}. This script will create several hdf5 files: <ul> <li> <code>ILSVRC[resolution]longtail_xy.hdf5</code>, where images and labels are stored for the training and validation set respectively. </li> <li> <code>ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50.hdf5</code> that contains the instance features for each image. </li> <li> <code>ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5</code> that contains the list of [k_nn] neighbors for each of the instance features. </li> </ul> </li> </ol> </br> </details> <details> <summary>COCO-Stuff</summary> <br> <ol> <li>Download the dataset following the <a href="https://github.com/WillSuen/LostGANs/blob/master/INSTALL.md"> LostGANs' repository instructions </a>. </li> <li>Download <a href="https://github.com/facebookresearch/swav"> SwAV </a> feature extractor weights from <a href="https://dl.fbaipublicfiles.com/deepcluster/swav_800ep_pretrain.pth.tar"> here </a>. </li> <li> Replace the paths in data_utils/prepare_data.sh: <code>out_path</code> by the path where hdf5 files will be stored, <code>path_imnet</code> by the path where ImageNet dataset is downloaded, and <code>path_swav</code> by the path where SwAV weights are stored. </li> <li> Execute <code>./data_utils/prepare_data.sh coco [resolution]</code>, where <code>[resolution]</code> can be an integer in {128,256}. This script will create several hdf5 files: <ul> <li> <code>COCO[resolution]_xy.hdf5</code> and <code>COCO[resolution]_val_test_xy.hdf5</code>, where images and labels are stored for the training and evaluation set respectively. </li> <li> <code>COCO[resolution]_feats_[feature_extractor]_resnet50.hdf5</code> that contains the instance features for each image. </li> <li> <code>COCO[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5</code> that contains the list of [k_nn] neighbors for each of the instance features. </li> </ul> </li> </ol> </br> </details> <details> <summary>Other datasets</summary> <br> <ol> <li>Download the corresponding dataset and store in a folder <code>dataset_path</code>. </li> <li>Download <a href="https://github.com/facebookresearch/swav"> SwAV </a> feature extractor weights from <a href="https://dl.fbaipublicfiles.com/deepcluster/swav_800ep_pretrain.pth.tar"> here </a>. 
</li> <li> Replace the paths in data_utils/prepare_data.sh: <code>out_path</code> by the path where hdf5 files will be stored and <code>path_swav</code> by the path where SwAV weights are stored. </li> <li> Execute <code>./data_utils/prepare_data.sh [dataset_name] [resolution] [dataset_path]</code>, where <code>[dataset_name]</code> will be the dataset name, <code>[resolution]</code> can be an integer, for example 128 or 256, and <code>dataset_path</code> contains the dataset images. This script will create several hdf5 files: <ul> <li> <code>[dataset_name][resolution]_xy.hdf5</code>, where images and labels are stored for the training set. </li> <li> <code>[dataset_name][resolution]_feats_[feature_extractor]_resnet50.hdf5</code> that contains the instance features for each image. </li> <li> <code>[dataset_name][resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5</code> that contains the list of <code>k_nn</code> neighbors for each of the instance features. </li> </ul> </li> </ol> </br> </details>

<details>
<summary>How to subsample an instance feature dataset with k-means</summary>
<br>
To downsample the instance feature vector dataset, after we have prepared the data, we can use the k-means algorithm: 
<code> python data_utils/store_kmeans_indexes.py --resolution [resolution] --which_dataset [dataset_name] --data_root [data_path] </code>
<ul>
<li> Adding <code>--gpu</code> allows the faiss library to compute k-means leveraging GPUs, resulting in faster execution. </li>
<li> Adding the parameter <code>--feature_extractor [feature_extractor]</code> chooses which feature extractor to use, with <code>feature_extractor</code> in <code>['selfsupervised', 'classification'] </code>, if we are using SwAV as the feature extractor or the ResNet pretrained on the classification task on ImageNet, respectively. </li>
<li> The number of k-means clusters can be set with <code>--kmeans_subsampled [centers]</code>, where <code>centers</code> is an integer. </li>
</ul>
</br>
</details>
</div>

## How to train the models

#### BigGAN or StyleGAN2 backbone
Training parameters are stored in JSON files in `[backbone_folder]/config_files/[dataset]/*.json`, where `[backbone_folder]` is either BigGAN_Pytorch or stylegan2_ada_pytorch and `[dataset]` can either be ImageNet, ImageNet-LT or COCO_Stuff.
```
cd BigGAN_PyTorch
python run.py --json_config config_files/<dataset>/<selected_config>.json --data_root [data_root] --base_root [base_root]
```
or
```
cd stylegan_ada_pytorch
python run.py --json_config config_files/<dataset>/<selected_config>.json --data_root [data_root] --base_root [base_root]
```
where:
* `data_root`: path where the data has been prepared and stored, following the previous section (<a href="./README.md#data-preparation">Data preparation</a>). 
* `base_root`: path where to store the model weights and logs.

Note that one can create other JSON files to modify the training parameters.

#### Other backbones
To be able to run IC-GAN with other backbones, we provide some guiding steps:
* Place the new backbone code in a new folder under `ic_gan` (`ic_gan/new_backbone`). 
* Modify the relevant piece of code in the GAN architecture to allow instance features as conditionings (for both generator and discriminator). 
* Create a `trainer.py` file with the training loop to train an IC-GAN with the new backbone. The `data_utils` folder provides the tools to prepare the dataset, load the data, and sample conditionings to train an IC-GAN. 
The IC-GAN with BigGAN backbone [`trainer.py`](BigGAN_PyTorch/trainer.py) file can be used as inspiration.

## How to test the models

<b>To obtain the FID and IS metrics on ImageNet and ImageNet-LT</b>:
1) Execute:
```
python inference/test.py --json_config [BigGAN-PyTorch or stylegan-ada-pytorch]/config_files/<dataset>/<selected_config>.json --num_inception_images [num_imgs] --sample_num_npz [num_imgs] --eval_reference_set [ref_set] --sample_npz --base_root [base_root] --data_root [data_root] --kmeans_subsampled [kmeans_centers] --model_backbone [backbone]
```

To obtain the TensorFlow IS and FID metrics, use an environment with Python < 3.7 and TensorFlow 1.15. Then:

2) Obtain Inception Scores and pre-computed FID moments:
```
python ../data_utils/inception_tf13.py --experiment_name [exp_name] --experiment_root [base_root] --kmeans_subsampled [kmeans_centers] 
```
For stratified FIDs in the ImageNet-LT dataset, the following parameters can be added: `--which_dataset 'imagenet_lt' --split 'val' --strat_name [stratified_split]`, where `stratified_split` can be in `[few, low, many]`.

3) (Only needed once) Pre-compute reference moments with TensorFlow code:
```
python ../data_utils/inception_tf13.py --use_ground_truth_data --data_root [data_root] --split [ref_set] --resolution [res] --which_dataset [dataset] 
```

4) (Using this [repository](https://github.com/bioinf-jku/TTUR)) FID can be computed using the pre-computed statistics obtained in 2) and the pre-computed ground-truth statistics obtained in 3). For example, to compute the FID with reference ImageNet validation set: 
```
python TTUR/fid.py [base_root]/[exp_name]/TF_pool_.npz [data_root]/imagenet_val_res[res]_tf_inception_moments_ground_truth.npz
```

<b>To obtain the FID metric on COCO-Stuff</b>:
1) Obtain ground-truth jpeg images:
```
python data_utils/store_coco_jpeg_images.py --resolution [res] --split [ref_set] --data_root [data_root] --out_path [gt_coco_images] --filter_hd [filter_hd] 
```
2) Store generated images as jpeg images:
```
python sample.py --json_config ../[BigGAN-PyTorch or stylegan-ada-pytorch]/config_files/<dataset>/<selected_config>.json --data_root [data_root] --base_root [base_root] --sample_num_npz [num_imgs] --which_dataset 'coco' --eval_instance_set [ref_set] --eval_reference_set [ref_set] --filter_hd [filter_hd] --model_backbone [backbone]
```
3) Using this [repository](https://github.com/bioinf-jku/TTUR), compute FID on the two folders of ground-truth and generated images.

where:
* `dataset`: option to select the dataset in `['imagenet', 'imagenet_lt', 'coco']`.
* `exp_name`: name of the experiment folder.
* `data_root`: path where the data has been prepared and stored, following the previous section ["Data preparation"](#data-preparation).
* `base_root`: path where to find the model (for example, where the pretrained models have been downloaded).
* `num_imgs`: needs to be set to 50000 for ImageNet and ImageNet-LT (with validation set as reference) and set to 11500 for ImageNet-LT (with training set as reference). For COCO-Stuff, set to 75777, 2050, 675, 1375 if using the training, evaluation, evaluation seen or evaluation unseen set as reference.
* `ref_set`: set to `'val'` for ImageNet, ImageNet-LT (and COCO) to obtain metrics with the validation (evaluation) set as reference, or set to `'train'` for ImageNet-LT or COCO to obtain metrics with the training set as reference.
* `kmeans_centers`: set to 1000 for ImageNet and to -1 for ImageNet-LT. 
* `backbone`: model backbone architecture in `['biggan','stylegan2']`. 
* `res`: integer indicating the resolution of the images (64,128,256). * `gt_coco_images`: folder to store the ground-truth JPEG images of that specific split. * `filter_hd`: only valid for `ref_set=val`. If -1, use the entire evaluation set; if 0, use only conditionings and their ground-truth images with seen class combinations during training (eval seen); if 1, use only conditionings and their ground-truth images with unseen class combinations during training (eval unseen). ## Utilities for GAN backbones We change and provide extra utilities to facilitate the training, for both BigGAN and StyleGAN2 base repositories. ### BigGAN change log The following changes were made: * BigGAN architecture: * In `train_fns.py`: option to either have the optimizers inside the generator and discriminator class, or directly in the `G_D` wrapper module. Additionally, added an option to augment both generated and real images with augmentations from [DiffAugment](https://github.com/mit-han-lab/data-efficient-gans). * In `BigGAN.py`: added a function `get_condition_embeddings` to handle the conditioning separately. * Small modifications to `layers.py` to adapt the batchnorm function calls to the pytorch 1.8 version. * Training utilities: * Added `trainer.py` file (replacing train.py): * Training now allows the usage of DDP for faster single-node and multi-node training. * Training is performed by epochs instead of by iterations. * Option to stop the training by using early stopping or when experiments diverge. * In `utils.py`: * Replaced `MultiEpochSampler` for `CheckpointedSampler` to allow experiments to be resumable when using epochs and fixing a bug where `MultiEpochSampler` would require a long time to fetch data permutations when the number of epochs increased. * ImageNet-LT: Added option to use different class distributions when sampling a class label for the generator. * ImageNet-LT: Added class balancing (uniform and temperature annealed). * Added data augmentations from [DiffAugment](https://github.com/mit-han-lab/data-efficient-gans). * Testing utilities: * In `calculate_inception_moments.py`: added option to obtain moments for ImageNet-LT dataset, as well as stratified moments for many, medium and few-shot classes (stratified FID computation). * In `inception_utils.py`: added option to compute [Precision, Recall, Density, Coverage](https://github.com/clovaai/generative-evaluation-prdc) and stratified FID. * Data utilities: * In `datasets.py`, added option to load ImageNet-LT dataset. * Added ImageNet-LT.txt files with image indexes for training and validation split. * In `utils.py`: * Separate functions to obtain the data from hdf5 files (`get_dataset_hdf5`) or from directory (`get_dataset_images`), as well as a function to obtain only the data loader (`get_dataloader`). * Added the function `sample_conditionings` to handle possible different conditionings to train G with. * Experiment utilities: * Added JSON files to launch experiments with the proposed hyper-parameter configuration. * Script to launch experiments with either the [submitit tool](https://github.com/facebookincubator/submitit) or locally in the same machine (run.py). ### StyleGAN2 change log <div id="stylegan-changelog"> <ul> <li> Multi-node DistributedDataParallel training. </li> <li> Added early stopping based on the training FID metric. </li> <li> Automatic checkpointing when jobs are automatically rescheduled on a cluster. </li> <li> Option to load dataset from hdf5 file. 
</li>
<li> Replaced the usage of the Click Python package with an `ArgumentParser`. 
</li>
<li> Only saving best and last model weights. </li>
</ul>
</div>

## Acknowledgements
We would like to thank the authors of the [Pytorch BigGAN repository](https://github.com/ajbrock/BigGAN-PyTorch) and [StyleGAN2 Pytorch](https://github.com/NVlabs/stylegan2-ada-pytorch), as our model requires their repositories to train IC-GAN with a BigGAN or StyleGAN2 backbone, respectively. Moreover, we would like to further thank the authors of [generative-evaluation-prdc](https://github.com/clovaai/generative-evaluation-prdc), [data-efficient-gans](https://github.com/mit-han-lab/data-efficient-gans), [faiss](https://github.com/facebookresearch/faiss) and [sg2im](https://github.com/google/sg2im), as some components were borrowed and modified from their code bases. Finally, we thank the author of [WanderCLIP](https://colab.research.google.com/github/eyaler/clip_biggan/blob/main/WanderCLIP.ipynb) as well as the following repositories, which we use in our Colab notebook: [pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN) and [CLIP](https://github.com/openai/CLIP).

## License
The majority of IC-GAN is licensed under CC-BY-NC; however, portions of the project are available under separate license terms: BigGAN and [PRDC](https://github.com/facebookresearch/ic_gan/blob/main/data_utils/compute_pdrc.py) are licensed under the MIT license; [COCO-Stuff loader](https://github.com/facebookresearch/ic_gan/blob/main/data_utils/cocostuff_dataset.py) is licensed under Apache License 2.0; [DiffAugment](https://github.com/facebookresearch/ic_gan/blob/main/BigGAN_PyTorch/diffaugment_utils.py) is licensed under the BSD 2-Clause Simplified license; StyleGAN2 is licensed under an NVIDIA license, available here: https://github.com/NVlabs/stylegan2-ada-pytorch/blob/main/LICENSE.txt. In the Colab notebook, [CLIP](https://github.com/openai/CLIP) and [pytorch-pretrained-BigGAN](https://github.com/huggingface/pytorch-pretrained-BigGAN) code is used, both licensed under the MIT license.

## Disclaimers
THE DIFFAUGMENT SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE CLIP SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
THE PYTORCH-PRETRAINED-BIGGAN SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ## Cite the paper If this repository, the paper or any of its content is useful for your research, please cite: ``` @inproceedings{casanova2021instanceconditioned, title={Instance-Conditioned GAN}, author={Arantxa Casanova and Marlène Careil and Jakob Verbeek and Michal Drozdzal and Adriana Romero-Soriano}, booktitle={Advances in Neural Information Processing Systems (NeurIPS)}, year={2021} } ```
{"license": "cc-by-nc-4.0", "tags": ["image-generation", "conditional-image-generation", "generative-model"], "library": "pytorch"}
facebook/ic_gan
null
[ "image-generation", "conditional-image-generation", "generative-model", "arxiv:2109.05070", "license:cc-by-nc-4.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2109.05070" ]
[]
TAGS #image-generation #conditional-image-generation #generative-model #arxiv-2109.05070 #license-cc-by-nc-4.0 #region-us
IC-GAN: Instance-Conditioned GAN ================================= Official Pytorch code of Instance-Conditioned GAN by Arantxa Casanova, Marlène Careil, Jakob Verbeek, Michał Drożdżal, Adriana Romero-Soriano. !IC-GAN results Generate images with IC-GAN in a Colab Notebook ----------------------------------------------- We provide a Google Colab notebook to generate images with IC-GAN and its class-conditional counterpart. We also invite users to check out the demo on Replicate, courtesy of Replicate. The figure below depicts two instances, unseen during training and downloaded from Creative Commons search, and the generated images with IC-GAN and class-conditional IC-GAN when conditioning on the class "castle": !IC-GAN results transfer Additionally, and inspired by this Colab, we provide the functionality in the same Colab notebook to guide generations with text captions, using the CLIP model. As an example, the following figure shows three instance conditionings and a text caption (top), followed by the resulting generated images with IC-GAN (bottom), when optimizing the noise vector following CLIP's gradient for 100 iterations. !IC-GAN results transfer CLIP *Credit for the three instance conditionings, from left to right, that were modified with a resize and central crop:* 1: "Landscape in Bavaria" by shining.darkness, licensed under CC BY 2.0, 2: "Fantasy Landscape - slolsss" by Douglas Tofoli is marked with CC PDM 1.0, 3: "How to Draw Landscapes Simply" by Kuwagata Keisai is marked with CC0 1.0 Requirements ------------ * Python 3.8 * Cuda v10.2 / Cudnn v7.6.5 * gcc v7.3.0 * Pytorch 1.8.0 * A conda environment can be created from 'URL' by entering the command 'conda env create -f URL'; it contains the aforementioned version of Pytorch and other required packages. * Faiss: follow the instructions in the original repository. Overview -------- This repository consists of four main folders: * 'data\_utils': A common folder to obtain and format the data needed to train and test IC-GAN, agnostic of the specific backbone. * 'inference': Scripts to test the models both qualitatively and quantitatively. * 'BigGAN\_PyTorch': It provides the training, evaluation and sampling scripts for IC-GAN with a BigGAN backbone. The code base comes from the Pytorch BigGAN repository, made available under the MIT License. It has been modified to add additional utilities and to enable IC-GAN training on top of it. * 'stylegan2\_ada\_pytorch': It provides the training, evaluation and sampling scripts for IC-GAN with a StyleGAN2 backbone. The code base comes from StyleGAN2 Pytorch, made available under the Nvidia Source Code License. It has been modified to add additional utilities and to enable IC-GAN training on top of it. (Python script) Generate images with IC-GAN ------------------------------------------- Alternatively, we can **generate images with IC-GAN models** directly from a python script, by following these steps: 1. Download the desired pretrained models (links below) and the pre-computed 1000 instance features from ImageNet and extract them into a folder 'pretrained\_models\_path'. 2. Execute: * 'model' can be chosen from '["icgan", "cc\_icgan"]' to use the IC-GAN or the class-conditional IC-GAN model respectively. * 'backbone' can be chosen from '["biggan", "stylegan2"]'. * 'res' indicates the resolution at which the model has been trained. For ImageNet, choose one in '[64, 128, 256]', and for COCO-Stuff, one in '[128, 256]'.
This script results in a .PNG file where several generated images are shown, given an instance feature (each row), and a sampled noise vector (each grid position).

**Additional and optional parameters**:

* `index`: (None by default) an integer from 0 to 999 that chooses a specific instance feature vector out of the 1000 instances that have been selected with k-means on the ImageNet dataset and stored in `pretrained_models_path/stored_instances`.
* `swap_target`: (None by default) an integer from 0 to 999 indicating an ImageNet class label. This label will be used to condition the class-conditional IC-GAN, regardless of which instance features are being used.
* `which_dataset`: (ImageNet by default) can be chosen from `["imagenet", "coco"]` to indicate which dataset (training split) to sample the instances from.
* `trained_dataset`: (ImageNet by default) can be chosen from `["imagenet", "coco"]` to indicate the dataset on which the IC-GAN model has been trained.
* `num_imgs_gen`: (5 by default) changes the number of noise vectors to sample per conditioning. Increasing this number results in a bigger .PNG file to save and load.
* `num_conditionings_gen`: (5 by default) changes the number of conditionings to sample. Increasing this number results in a bigger .PNG file to save and load.
* `z_var`: (1.0 by default) controls the truncation factor for the generation.
* Optionally, the script can be run with the additional options `--visualize_instance_images --dataset_path [dataset_path]` to visualize the ground-truth images corresponding to the conditioning instance features, given a path to the dataset's ground-truth images `dataset_path`. Ground-truth instances will be plotted as the leftmost image for each row.

Data preparation
----------------

**ImageNet**

1. Download the ImageNet dataset.
2. Download the SwAV feature extractor weights.
3. Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where hdf5 files will be stored, `path_imnet` by the path where the ImageNet dataset is downloaded, and `path_swav` by the path where the SwAV weights are stored.
4. Execute `./data_utils/prepare_data.sh imagenet [resolution]`, where `[resolution]` can be an integer in {64,128,256}. This script will create several hdf5 files:
   * `ILSVRC[resolution]_xy.hdf5` and `ILSVRC[resolution]_val_xy.hdf5`, where images and labels are stored for the training and validation set respectively.
   * `ILSVRC[resolution]_feats_[feature_extractor]_resnet50.hdf5`, which contains the instance features for each image.
   * `ILSVRC[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5`, which contains the list of `[k_nn]` neighbors for each of the instance features.

**ImageNet-LT**

1. Download the ImageNet dataset. Following the ImageNet-LT setup, the file `ImageNet_LT_train.txt` can be downloaded and should later be stored in the folder `./BigGAN_PyTorch/imagenet_lt`.
2. Download the pre-trained weights of the ResNet trained on ImageNet-LT.
3. Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where hdf5 files will be stored, `path_imnet` by the path where the ImageNet dataset is downloaded, and `path_classifier_lt` by the path where the pre-trained ResNet50 weights are stored.
4. Execute `./data_utils/prepare_data.sh imagenet_lt [resolution]`, where `[resolution]` can be an integer in {64,128,256}. This script will create several hdf5 files:
   * `ILSVRC[resolution]longtail_xy.hdf5`, where images and labels are stored for the training and validation set respectively.
   * `ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50.hdf5`, which contains the instance features for each image.
   * `ILSVRC[resolution]longtail_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5`, which contains the list of `[k_nn]` neighbors for each of the instance features.
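For reference, the hdf5 files produced above can be inspected with `h5py` as in the short sketch below. The dataset key names used here (`imgs`, `labels`, `feats`) are assumptions for illustration and may differ from the keys actually written by `prepare_data.sh`; printing `f.keys()` shows the real names.

```python
# Minimal sketch for inspecting the prepared hdf5 files (key names are assumptions).
import h5py

resolution = 128
with h5py.File(f"ILSVRC{resolution}_xy.hdf5", "r") as f:
    print(list(f.keys()))      # check the dataset names actually stored in the file
    images = f["imgs"]         # assumed key: images for the training split
    labels = f["labels"]       # assumed key: integer class labels
    print(images.shape, labels.shape)

with h5py.File(f"ILSVRC{resolution}_feats_selfsupervised_resnet50.hdf5", "r") as f:
    feats = f["feats"]         # assumed key: one instance feature vector per image
    print(feats.shape)         # e.g. (N, 2048) for a ResNet50-based feature extractor
```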
**COCO-Stuff**

1. Download the dataset following the LostGANs' repository instructions.
2. Download the SwAV feature extractor weights.
3. Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where hdf5 files will be stored, `path_imnet` by the path where the ImageNet dataset is downloaded, and `path_swav` by the path where the SwAV weights are stored.
4. Execute `./data_utils/prepare_data.sh coco [resolution]`, where `[resolution]` can be an integer in {128,256}. This script will create several hdf5 files:
   * `COCO[resolution]_xy.hdf5` and `COCO[resolution]_val_test_xy.hdf5`, where images and labels are stored for the training and evaluation set respectively.
   * `COCO[resolution]_feats_[feature_extractor]_resnet50.hdf5`, which contains the instance features for each image.
   * `COCO[resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5`, which contains the list of `[k_nn]` neighbors for each of the instance features.

**Other datasets**

1. Download the corresponding dataset and store it in a folder `dataset_path`.
2. Download the SwAV feature extractor weights.
3. Replace the paths in `data_utils/prepare_data.sh`: `out_path` by the path where hdf5 files will be stored and `path_swav` by the path where the SwAV weights are stored.
4. Execute `./data_utils/prepare_data.sh [dataset_name] [resolution] [dataset_path]`, where `[dataset_name]` will be the dataset name, `[resolution]` can be an integer, for example 128 or 256, and `[dataset_path]` contains the dataset images. This script will create several hdf5 files:
   * `[dataset_name][resolution]_xy.hdf5`, where images and labels are stored for the training set.
   * `[dataset_name][resolution]_feats_[feature_extractor]_resnet50.hdf5`, which contains the instance features for each image.
   * `[dataset_name][resolution]_feats_[feature_extractor]_resnet50_nn_k[k_nn].hdf5`, which contains the list of `[k_nn]` neighbors for each of the instance features.

**How to subsample an instance feature dataset with k-means**

To downsample the instance feature vector dataset, after we have prepared the data, we can use the k-means algorithm:

`python data_utils/store_kmeans_indexes.py --resolution [resolution] --which_dataset [dataset_name] --data_root [data_path]`

* Adding `--gpu` allows the faiss library to compute k-means leveraging GPUs, resulting in faster execution.
* Adding the parameter `--feature_extractor [feature_extractor]` chooses which feature extractor to use, with `feature_extractor` in `['selfsupervised', 'classification']`, if we are using SwAV as feature extractor or the ResNet pretrained on the classification task on ImageNet, respectively.
* The number of k-means clusters can be set with `--kmeans_subsampled [centers]`, where `centers` is an integer.
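Conceptually, this subsampling clusters the instance features with k-means and keeps, for each centroid, the closest real instance feature. The sketch below illustrates that idea with the faiss library; it is a simplified illustration under these assumptions, not the `store_kmeans_indexes.py` script itself.

```python
# Simplified illustration of k-means subsampling of instance features with faiss.
import faiss
import numpy as np

def kmeans_subsample(feats: np.ndarray, centers: int = 1000, use_gpu: bool = False) -> np.ndarray:
    feats = np.ascontiguousarray(feats.astype("float32"))
    d = feats.shape[1]
    kmeans = faiss.Kmeans(d, centers, niter=100, seed=0, gpu=use_gpu)
    kmeans.train(feats)
    # For each centroid, keep the index of the nearest real instance feature.
    index = faiss.IndexFlatL2(d)
    index.add(feats)
    _, nearest = index.search(kmeans.centroids, 1)
    return np.unique(nearest.ravel())

# e.g. selected_indexes = kmeans_subsample(feats, centers=1000, use_gpu=True)
```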
How to train the models
-----------------------

#### BigGAN or StyleGAN2 backbone

Training parameters are stored in JSON files in '[backbone_folder]/config_files/[dataset]/*.json', where '[backbone_folder]' is either BigGAN_Pytorch or stylegan2_ada_pytorch and '[dataset]' can either be ImageNet, ImageNet-LT or COCO_Stuff. Experiments can then be launched either with the submitit tool or locally on the same machine, where:

* 'data\_root': path where the data has been prepared and stored, following the previous section ("Data preparation").
* 'base\_root': path where to store the model weights and logs.

Note that one can create other JSON files to modify the training parameters.

#### Other backbones

To be able to run IC-GAN with other backbones, we provide some orientative steps:

* Place the new backbone code in a new folder under 'ic\_gan' ('ic\_gan/new\_backbone').
* Modify the relevant piece of code in the GAN architecture to allow instance features as conditionings (for both generator and discriminator).
* Create a 'URL' file with the training loop to train an IC-GAN with the new backbone. The 'data\_utils' folder provides the tools to prepare the dataset, load the data and conditioning sampling to train an IC-GAN. The IC-GAN with BigGAN backbone 'URL' file can be used as an inspiration.

How to test the models
----------------------

**To obtain the FID and IS metrics on ImageNet and ImageNet-LT**:

1. Execute:

To obtain the tensorflow IS and FID metrics, use an environment with Python <3.7 and Tensorflow 1.15. Then:

2. Obtain Inception Scores and pre-computed FID moments:

For stratified FIDs in the ImageNet-LT dataset, the following parameters can be added: '--which\_dataset 'imagenet\_lt' --split 'val' --strat\_name [stratified\_split]', where 'stratified\_split' can be in '[few, low, many]'.

3. (Only needed once) Pre-compute reference moments with tensorflow code:
4. (Using this repository) FID can be computed using the pre-computed statistics obtained in 2) and the pre-computed ground-truth statistics obtained in 3). For example, to compute the FID with reference ImageNet validation set:

**To obtain the FID metric on COCO-Stuff**:

1. Obtain ground-truth jpeg images:
2. Store generated images as jpeg images:
3. Using this repository, compute FID on the two folders of ground-truth and generated images.

where:

* 'dataset': option to select the dataset in '['imagenet', 'imagenet\_lt', 'coco']'.
* 'exp\_name': name of the experiment folder.
* 'data\_root': path where the data has been prepared and stored, following the previous section "Data preparation".
* 'base\_root': path where to find the model (for example, where the pretrained models have been downloaded).
* 'num\_imgs': needs to be set to 50000 for ImageNet and ImageNet-LT (with validation set as reference) and set to 11500 for ImageNet-LT (with training set as reference). For COCO-Stuff, set to 75777, 2050, 675, 1375 if using the training, evaluation, evaluation seen or evaluation unseen set as reference. * 'ref\_set': set to ''val'' for ImageNet, ImageNet-LT (and COCO) to obtain metrics with the validation (evaluation) set as reference, or set to ''train'' for ImageNet-LT or COCO to obtain metrics with the training set as reference. * 'kmeans\_centers': set to 1000 for ImageNet and to -1 for ImageNet-LT. * 'backbone': model backbone architecture in '['biggan','stylegan2']'. * 'res': integer indicating the resolution of the images (64,128,256). * 'gt\_coco\_images': folder to store the ground-truth JPEG images of that specific split. * 'filter\_hd': only valid for 'ref\_set=val'. If -1, use the entire evaluation set; if 0, use only conditionings and their ground-truth images with seen class combinations during training (eval seen); if 1, use only conditionings and their ground-truth images with unseen class combinations during training (eval unseen). Utilities for GAN backbones --------------------------- We change and provide extra utilities to facilitate the training, for both BigGAN and StyleGAN2 base repositories. ### BigGAN change log The following changes were made: * BigGAN architecture: + In 'train\_fns.py': option to either have the optimizers inside the generator and discriminator class, or directly in the 'G\_D' wrapper module. Additionally, added an option to augment both generated and real images with augmentations from DiffAugment. + In 'URL': added a function 'get\_condition\_embeddings' to handle the conditioning separately. + Small modifications to 'URL' to adapt the batchnorm function calls to the pytorch 1.8 version. * Training utilities: + Added 'URL' file (replacing URL): - Training now allows the usage of DDP for faster single-node and multi-node training. - Training is performed by epochs instead of by iterations. - Option to stop the training by using early stopping or when experiments diverge. + In 'URL': - Replaced 'MultiEpochSampler' for 'CheckpointedSampler' to allow experiments to be resumable when using epochs and fixing a bug where 'MultiEpochSampler' would require a long time to fetch data permutations when the number of epochs increased. - ImageNet-LT: Added option to use different class distributions when sampling a class label for the generator. - ImageNet-LT: Added class balancing (uniform and temperature annealed). - Added data augmentations from DiffAugment. * Testing utilities: + In 'calculate\_inception\_moments.py': added option to obtain moments for ImageNet-LT dataset, as well as stratified moments for many, medium and few-shot classes (stratified FID computation). + In 'inception\_utils.py': added option to compute Precision, Recall, Density, Coverage and stratified FID. * Data utilities: + In 'URL', added option to load ImageNet-LT dataset. + Added URL files with image indexes for training and validation split. + In 'URL': - Separate functions to obtain the data from hdf5 files ('get\_dataset\_hdf5') or from directory ('get\_dataset\_images'), as well as a function to obtain only the data loader ('get\_dataloader'). - Added the function 'sample\_conditionings' to handle possible different conditionings to train G with. * Experiment utilities: + Added JSON files to launch experiments with the proposed hyper-parameter configuration. 
+ Script to launch experiments with either the submitit tool or locally in the same machine (URL). ### StyleGAN2 change log * Multi-node DistributedDataParallel training. * Added early stopping based on the training FID metric. * Automatic checkpointing when jobs are automatically rescheduled on a cluster. * Option to load dataset from hdf5 file. * Replaced the usage of Click python package by an 'ArgumentParser'. * Only saving best and last model weights. Acknowledgements ---------------- We would like to thanks the authors of the Pytorch BigGAN repository and StyleGAN2 Pytorch, as our model requires their repositories to train IC-GAN with BigGAN or StyleGAN2 bakcbone respectively. Moreover, we would like to further thank the authors of generative-evaluation-prdc, data-efficient-gans, faiss and sg2im as some components were borrowed and modified from their code bases. Finally, we thank the author of WanderCLIP as well as the following repositories, that we use in our Colab notebook: pytorch-pretrained-BigGAN and CLIP. License ------- The majority of IC-GAN is licensed under CC-BY-NC, however portions of the project are available under separate license terms: BigGAN and PRDC are licensed under the MIT license; COCO-Stuff loader is licensed under Apache License 2.0; DiffAugment is licensed under BSD 2-Clause Simplified license; StyleGAN2 is licensed under a NVIDIA license, available here: URL In the Colab notebook, CLIP and pytorch-pretrained-BigGAN code is used, both licensed under the MIT license. Disclaimers ----------- THE DIFFAUGMENT SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE CLIP SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. THE PYTORCH-PRETRAINED-BIGGAN SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Cite the paper -------------- If this repository, the paper or any of its content is useful for your research, please cite:
[ "#### Other backbones\n\n\nTo be able to run IC-GAN with other backbones, we provide some orientative steps:\n\n* Place the new backbone code in a new folder under 'ic\\_gan' ('ic\\_gan/new\\_backbone').\n* Modify the relevant piece of code in the GAN architecture to allow instance features as conditionings (for both generator and discriminator).\n* Create a 'URL' file with the training loop to train an IC-GAN with the new backbone. The 'data\\_utils' folder provides the tools to prepare the dataset, load the data and conditioning sampling to train an IC-GAN. The IC-GAN with BigGAN backbone 'URL' file can be used as an inspiration.\n\nHow to test the models\n----------------------\n\n\n**To obtain the FID and IS metrics on ImageNet and ImageNet-LT**:\n\n1. Execute:\n\nTo obtain the tensorflow IS and FID metrics, use an environment with the Python <3.7 and Tensorflow 1.15. Then:\n\n2. Obtain Inception Scores and pre-computed FID moments:\n\nFor stratified FIDs in the ImageNet-LT dataset, the following parameters can be added '--which\\_dataset 'imagenet\\_lt' --split 'val' --strat\\_name [stratified\\_split]', where 'stratified\\_split' can be in '[few,low, many]'.\n\n3. (Only needed once) Pre-compute reference moments with tensorflow code:\n4. (Using this repository) FID can be computed using the pre-computed statistics obtained in 2) and the pre-computed ground-truth statistics obtain in 3). For example, to compute the FID with reference ImageNet validation set:\n\n**To obtain the FID metric on COCO-Stuff**:\n\n1. Obtain ground-truth jpeg images:\n2. Store generated images as jpeg images:\n3. Using this repository, compute FID on the two folders of ground-truth and generated images.\n\nwhere:\n\n* 'dataset': option to select the dataset in '['imagenet', 'imagenet\\_lt', 'coco']\n* 'exp\\_name': name of the experiment folder.\n* 'data\\_root': path where the data has been prepared and stored, following the previous section \"Data preparation\".\n* 'base\\_root': path where to find the model (for example, where the pretrained models have been downloaded).\n* 'num\\_imgs': needs to be set to 50000 for ImageNet and ImageNet-LT (with validation set as reference) and set to 11500 for ImageNet-LT (with training set as reference). For COCO-Stuff, set to 75777, 2050, 675, 1375 if using the training, evaluation, evaluation seen or evaluation unseen set as reference.\n* 'ref\\_set': set to ''val'' for ImageNet, ImageNet-LT (and COCO) to obtain metrics with the validation (evaluation) set as reference, or set to ''train'' for ImageNet-LT or COCO to obtain metrics with the training set as reference.\n* 'kmeans\\_centers': set to 1000 for ImageNet and to -1 for ImageNet-LT.\n* 'backbone': model backbone architecture in '['biggan','stylegan2']'.\n* 'res': integer indicating the resolution of the images (64,128,256).\n* 'gt\\_coco\\_images': folder to store the ground-truth JPEG images of that specific split.\n* 'filter\\_hd': only valid for 'ref\\_set=val'. 
If -1, use the entire evaluation set; if 0, use only conditionings and their ground-truth images with seen class combinations during training (eval seen); if 1, use only conditionings and their ground-truth images with unseen class combinations during training (eval unseen).\n\nUtilities for GAN backbones\n---------------------------\n\n\nWe change and provide extra utilities to facilitate the training, for both BigGAN and StyleGAN2 base repositories.", "### BigGAN change log\n\n\nThe following changes were made:\n\n* BigGAN architecture:\n\n\n\t+ In 'train\\_fns.py': option to either have the optimizers inside the generator and discriminator class, or directly in the 'G\\_D' wrapper module. Additionally, added an option to augment both generated and real images with augmentations from DiffAugment.\n\t+ In 'URL': added a function 'get\\_condition\\_embeddings' to handle the conditioning separately.\n\t+ Small modifications to 'URL' to adapt the batchnorm function calls to the pytorch 1.8 version.\n* Training utilities:\n\n\n\t+ Added 'URL' file (replacing URL):\n\t\t- Training now allows the usage of DDP for faster single-node and multi-node training.\n\t\t- Training is performed by epochs instead of by iterations.\n\t\t- Option to stop the training by using early stopping or when experiments diverge.\n\t+ In 'URL':\n\t\t- Replaced 'MultiEpochSampler' for 'CheckpointedSampler' to allow experiments to be resumable when using epochs and fixing a bug where 'MultiEpochSampler' would require a long time to fetch data permutations when the number of epochs increased.\n\t\t- ImageNet-LT: Added option to use different class distributions when sampling a class label for the generator.\n\t\t- ImageNet-LT: Added class balancing (uniform and temperature annealed).\n\t\t- Added data augmentations from DiffAugment.\n* Testing utilities:\n\n\n\t+ In 'calculate\\_inception\\_moments.py': added option to obtain moments for ImageNet-LT dataset, as well as stratified moments for many, medium and few-shot classes (stratified FID computation).\n\t+ In 'inception\\_utils.py': added option to compute Precision, Recall, Density, Coverage and stratified FID.\n* Data utilities:\n\n\n\t+ In 'URL', added option to load ImageNet-LT dataset.\n\t+ Added URL files with image indexes for training and validation split.\n\t+ In 'URL':\n\t\t- Separate functions to obtain the data from hdf5 files ('get\\_dataset\\_hdf5') or from directory ('get\\_dataset\\_images'), as well as a function to obtain only the data loader ('get\\_dataloader').\n\t\t- Added the function 'sample\\_conditionings' to handle possible different conditionings to train G with.\n* Experiment utilities:\n\n\n\t+ Added JSON files to launch experiments with the proposed hyper-parameter configuration.\n\t+ Script to launch experiments with either the submitit tool or locally in the same machine (URL).", "### StyleGAN2 change log\n\n\n\n* Multi-node DistributedDataParallel training.\n* Added early stopping based on the training FID metric.\n* Automatic checkpointing when jobs are automatically rescheduled on a cluster.\n* Option to load dataset from hdf5 file.\n* Replaced the usage of Click python package by an 'ArgumentParser'.\n* Only saving best and last model weights.\n\n\n\nAcknowledgements\n----------------\n\n\nWe would like to thanks the authors of the Pytorch BigGAN repository and StyleGAN2 Pytorch, as our model requires their repositories to train IC-GAN with BigGAN or StyleGAN2 bakcbone respectively.\nMoreover, we would like to further thank the 
authors of generative-evaluation-prdc, data-efficient-gans, faiss and sg2im as some components were borrowed and modified from their code bases. Finally, we thank the author of WanderCLIP as well as the following repositories, that we use in our Colab notebook: pytorch-pretrained-BigGAN and CLIP.\n\n\nLicense\n-------\n\n\nThe majority of IC-GAN is licensed under CC-BY-NC, however portions of the project are available under separate license terms: BigGAN and PRDC are licensed under the MIT license; COCO-Stuff loader is licensed under Apache License 2.0; DiffAugment is licensed under BSD 2-Clause Simplified license; StyleGAN2 is licensed under a NVIDIA license, available here: URL In the Colab notebook, CLIP and pytorch-pretrained-BigGAN code is used, both licensed under the MIT license.\n\n\nDisclaimers\n-----------\n\n\nTHE DIFFAUGMENT SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nTHE CLIP SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n\nTHE PYTORCH-PRETRAINED-BIGGAN SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n\nCite the paper\n--------------\n\n\nIf this repository, the paper or any of its content is useful for your research, please cite:" ]
[ "TAGS\n#image-generation #conditional-image-generation #generative-model #arxiv-2109.05070 #license-cc-by-nc-4.0 #region-us \n", "#### Other backbones\n\n\nTo be able to run IC-GAN with other backbones, we provide some orientative steps:\n\n* Place the new backbone code in a new folder under 'ic\\_gan' ('ic\\_gan/new\\_backbone').\n* Modify the relevant piece of code in the GAN architecture to allow instance features as conditionings (for both generator and discriminator).\n* Create a 'URL' file with the training loop to train an IC-GAN with the new backbone. The 'data\\_utils' folder provides the tools to prepare the dataset, load the data and conditioning sampling to train an IC-GAN. The IC-GAN with BigGAN backbone 'URL' file can be used as an inspiration.\n\nHow to test the models\n----------------------\n\n\n**To obtain the FID and IS metrics on ImageNet and ImageNet-LT**:\n\n1. Execute:\n\nTo obtain the tensorflow IS and FID metrics, use an environment with the Python <3.7 and Tensorflow 1.15. Then:\n\n2. Obtain Inception Scores and pre-computed FID moments:\n\nFor stratified FIDs in the ImageNet-LT dataset, the following parameters can be added '--which\\_dataset 'imagenet\\_lt' --split 'val' --strat\\_name [stratified\\_split]', where 'stratified\\_split' can be in '[few,low, many]'.\n\n3. (Only needed once) Pre-compute reference moments with tensorflow code:\n4. (Using this repository) FID can be computed using the pre-computed statistics obtained in 2) and the pre-computed ground-truth statistics obtain in 3). For example, to compute the FID with reference ImageNet validation set:\n\n**To obtain the FID metric on COCO-Stuff**:\n\n1. Obtain ground-truth jpeg images:\n2. Store generated images as jpeg images:\n3. Using this repository, compute FID on the two folders of ground-truth and generated images.\n\nwhere:\n\n* 'dataset': option to select the dataset in '['imagenet', 'imagenet\\_lt', 'coco']\n* 'exp\\_name': name of the experiment folder.\n* 'data\\_root': path where the data has been prepared and stored, following the previous section \"Data preparation\".\n* 'base\\_root': path where to find the model (for example, where the pretrained models have been downloaded).\n* 'num\\_imgs': needs to be set to 50000 for ImageNet and ImageNet-LT (with validation set as reference) and set to 11500 for ImageNet-LT (with training set as reference). For COCO-Stuff, set to 75777, 2050, 675, 1375 if using the training, evaluation, evaluation seen or evaluation unseen set as reference.\n* 'ref\\_set': set to ''val'' for ImageNet, ImageNet-LT (and COCO) to obtain metrics with the validation (evaluation) set as reference, or set to ''train'' for ImageNet-LT or COCO to obtain metrics with the training set as reference.\n* 'kmeans\\_centers': set to 1000 for ImageNet and to -1 for ImageNet-LT.\n* 'backbone': model backbone architecture in '['biggan','stylegan2']'.\n* 'res': integer indicating the resolution of the images (64,128,256).\n* 'gt\\_coco\\_images': folder to store the ground-truth JPEG images of that specific split.\n* 'filter\\_hd': only valid for 'ref\\_set=val'. 
If -1, use the entire evaluation set; if 0, use only conditionings and their ground-truth images with seen class combinations during training (eval seen); if 1, use only conditionings and their ground-truth images with unseen class combinations during training (eval unseen).\n\nUtilities for GAN backbones\n---------------------------\n\n\nWe change and provide extra utilities to facilitate the training, for both BigGAN and StyleGAN2 base repositories.", "### BigGAN change log\n\n\nThe following changes were made:\n\n* BigGAN architecture:\n\n\n\t+ In 'train\\_fns.py': option to either have the optimizers inside the generator and discriminator class, or directly in the 'G\\_D' wrapper module. Additionally, added an option to augment both generated and real images with augmentations from DiffAugment.\n\t+ In 'URL': added a function 'get\\_condition\\_embeddings' to handle the conditioning separately.\n\t+ Small modifications to 'URL' to adapt the batchnorm function calls to the pytorch 1.8 version.\n* Training utilities:\n\n\n\t+ Added 'URL' file (replacing URL):\n\t\t- Training now allows the usage of DDP for faster single-node and multi-node training.\n\t\t- Training is performed by epochs instead of by iterations.\n\t\t- Option to stop the training by using early stopping or when experiments diverge.\n\t+ In 'URL':\n\t\t- Replaced 'MultiEpochSampler' for 'CheckpointedSampler' to allow experiments to be resumable when using epochs and fixing a bug where 'MultiEpochSampler' would require a long time to fetch data permutations when the number of epochs increased.\n\t\t- ImageNet-LT: Added option to use different class distributions when sampling a class label for the generator.\n\t\t- ImageNet-LT: Added class balancing (uniform and temperature annealed).\n\t\t- Added data augmentations from DiffAugment.\n* Testing utilities:\n\n\n\t+ In 'calculate\\_inception\\_moments.py': added option to obtain moments for ImageNet-LT dataset, as well as stratified moments for many, medium and few-shot classes (stratified FID computation).\n\t+ In 'inception\\_utils.py': added option to compute Precision, Recall, Density, Coverage and stratified FID.\n* Data utilities:\n\n\n\t+ In 'URL', added option to load ImageNet-LT dataset.\n\t+ Added URL files with image indexes for training and validation split.\n\t+ In 'URL':\n\t\t- Separate functions to obtain the data from hdf5 files ('get\\_dataset\\_hdf5') or from directory ('get\\_dataset\\_images'), as well as a function to obtain only the data loader ('get\\_dataloader').\n\t\t- Added the function 'sample\\_conditionings' to handle possible different conditionings to train G with.\n* Experiment utilities:\n\n\n\t+ Added JSON files to launch experiments with the proposed hyper-parameter configuration.\n\t+ Script to launch experiments with either the submitit tool or locally in the same machine (URL).", "### StyleGAN2 change log\n\n\n\n* Multi-node DistributedDataParallel training.\n* Added early stopping based on the training FID metric.\n* Automatic checkpointing when jobs are automatically rescheduled on a cluster.\n* Option to load dataset from hdf5 file.\n* Replaced the usage of Click python package by an 'ArgumentParser'.\n* Only saving best and last model weights.\n\n\n\nAcknowledgements\n----------------\n\n\nWe would like to thanks the authors of the Pytorch BigGAN repository and StyleGAN2 Pytorch, as our model requires their repositories to train IC-GAN with BigGAN or StyleGAN2 bakcbone respectively.\nMoreover, we would like to further thank the 
authors of generative-evaluation-prdc, data-efficient-gans, faiss and sg2im as some components were borrowed and modified from their code bases. Finally, we thank the author of WanderCLIP as well as the following repositories, that we use in our Colab notebook: pytorch-pretrained-BigGAN and CLIP.\n\n\nLicense\n-------\n\n\nThe majority of IC-GAN is licensed under CC-BY-NC, however portions of the project are available under separate license terms: BigGAN and PRDC are licensed under the MIT license; COCO-Stuff loader is licensed under Apache License 2.0; DiffAugment is licensed under BSD 2-Clause Simplified license; StyleGAN2 is licensed under a NVIDIA license, available here: URL In the Colab notebook, CLIP and pytorch-pretrained-BigGAN code is used, both licensed under the MIT license.\n\n\nDisclaimers\n-----------\n\n\nTHE DIFFAUGMENT SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nTHE CLIP SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n\nTHE PYTORCH-PRETRAINED-BIGGAN SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n\nCite the paper\n--------------\n\n\nIf this repository, the paper or any of its content is useful for your research, please cite:" ]
text2text-generation
transformers
# M2M100 1.2B

M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.

The model can directly translate between the 9,900 translation directions of its 100 languages. To translate into a target language, the target language id is forced as the first generated token; to do this, pass the `forced_bos_token_id` parameter to the `generate` method.

*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.* To install `sentencepiece`, run `pip install sentencepiece`.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."

# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```

See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
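The same tokenizer and model also work on batches of sentences; the snippet below is an illustrative addition (not part of the original example) that translates two English sentences into German in a single `generate` call.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

# translate a batch of English sentences to German
tokenizer.src_lang = "en"
batch = ["Life is like a box of chocolate.", "The weather is nice today."]
encoded_en = tokenizer(batch, return_tensors="pt", padding=True)
generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.get_lang_id("de"))
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```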
## Languages covered Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greeek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) ## BibTeX entry and citation info ``` @misc{fan2020englishcentric, title={Beyond English-Centric Multilingual Machine Translation}, author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin}, year={2020}, eprint={2010.11125}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": ["multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", false, "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu"], "license": "mit"}
facebook/m2m100_1.2B
null
[ "transformers", "pytorch", "rust", "m2m_100", "text2text-generation", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "arxiv:2010.11125", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.11125" ]
[ "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #rust #m2m_100 #text2text-generation #multilingual #af #am #ar #ast #az #ba #be #bg #bn #br #bs #ca #ceb #cs #cy #da #de #el #en #es #et #fa #ff #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #ht #hu #hy #id #ig #ilo #is #it #ja #jv #ka #kk #km #kn #ko #lb #lg #ln #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #ns #oc #or #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #so #sq #sr #ss #su #sv #sw #ta #th #tl #tn #tr #uk #ur #uz #vi #wo #xh #yi #yo #zh #zu #arxiv-2010.11125 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# M2M100 1.2B M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this paper and first released in this repository. The model that can directly translate between the 9,900 directions of 100 languages. To translate into a target language, the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method. *Note: 'M2M100Tokenizer' depends on 'sentencepiece', so make sure to install it before running the example.* To install 'sentencepiece' run 'pip install sentencepiece' See the model hub to look for more fine-tuned versions. ## Languages covered Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greeek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) ## BibTeX entry and citation info
[ "# M2M100 1.2B\n\nM2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.\nIt was introduced in this paper and first released in this repository.\n\nThe model that can directly translate between the 9,900 directions of 100 languages.\nTo translate into a target language, the target language id is forced as the first generated token.\nTo force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.\n\n*Note: 'M2M100Tokenizer' depends on 'sentencepiece', so make sure to install it before running the example.*\n\nTo install 'sentencepiece' run 'pip install sentencepiece'\n\n\n\n\n\nSee the model hub to look for more fine-tuned versions.", "## Languages covered\nAfrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greeek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #rust #m2m_100 #text2text-generation #multilingual #af #am #ar #ast #az #ba #be #bg #bn #br #bs #ca #ceb #cs #cy #da #de #el #en #es #et #fa #ff #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #ht #hu #hy #id #ig #ilo #is #it #ja #jv #ka #kk #km #kn #ko #lb #lg #ln #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #ns #oc #or #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #so #sq #sr #ss #su #sv #sw #ta #th #tl #tn #tr #uk #ur #uz #vi #wo #xh #yi #yo #zh #zu #arxiv-2010.11125 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# M2M100 1.2B\n\nM2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.\nIt was introduced in this paper and first released in this repository.\n\nThe model that can directly translate between the 9,900 directions of 100 languages.\nTo translate into a target language, the target language id is forced as the first generated token.\nTo force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.\n\n*Note: 'M2M100Tokenizer' depends on 'sentencepiece', so make sure to install it before running the example.*\n\nTo install 'sentencepiece' run 'pip install sentencepiece'\n\n\n\n\n\nSee the model hub to look for more fine-tuned versions.", "## Languages covered\nAfrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greeek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)", "## BibTeX entry and citation info" ]
text2text-generation
transformers
# M2M100 418M

M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2010.11125) and first released in [this](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100) repository.

The model can directly translate between the 9,900 translation directions of its 100 languages. To translate into a target language, the target language id is forced as the first generated token; to do this, pass the `forced_bos_token_id` parameter to the `generate` method.

*Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.* To install `sentencepiece`, run `pip install sentencepiece`.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।"
chinese_text = "生活就像一盒巧克力。"

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

# translate Hindi to French
tokenizer.src_lang = "hi"
encoded_hi = tokenizer(hi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.get_lang_id("fr"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "La vie est comme une boîte de chocolat."

# translate Chinese to English
tokenizer.src_lang = "zh"
encoded_zh = tokenizer(chinese_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_zh, forced_bos_token_id=tokenizer.get_lang_id("en"))
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Life is like a box of chocolate."
```

See the [model hub](https://huggingface.co/models?filter=m2m_100) to look for more fine-tuned versions.
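As an illustrative alternative to calling `generate` directly (not part of the original model card), recent versions of Transformers also let you wrap M2M100 in a translation `pipeline` by passing `src_lang` and `tgt_lang`; exact support may depend on your installed version.

```python
from transformers import pipeline

# Assumes a recent transformers release where the translation pipeline
# accepts src_lang / tgt_lang for multilingual models such as M2M100.
translator = pipeline(
    "translation",
    model="facebook/m2m100_418M",
    src_lang="hi",
    tgt_lang="fr",
)
print(translator("जीवन एक चॉकलेट बॉक्स की तरह है।"))
```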
## Languages covered Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greeek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) ## BibTeX entry and citation info ``` @misc{fan2020englishcentric, title={Beyond English-Centric Multilingual Machine Translation}, author={Angela Fan and Shruti Bhosale and Holger Schwenk and Zhiyi Ma and Ahmed El-Kishky and Siddharth Goyal and Mandeep Baines and Onur Celebi and Guillaume Wenzek and Vishrav Chaudhary and Naman Goyal and Tom Birch and Vitaliy Liptchinsky and Sergey Edunov and Edouard Grave and Michael Auli and Armand Joulin}, year={2020}, eprint={2010.11125}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
{"language": ["multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu"], "license": "mit"}
facebook/m2m100_418M
null
[ "transformers", "pytorch", "rust", "m2m_100", "text2text-generation", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "arxiv:2010.11125", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2010.11125" ]
[ "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu" ]
TAGS #transformers #pytorch #rust #m2m_100 #text2text-generation #multilingual #af #am #ar #ast #az #ba #be #bg #bn #br #bs #ca #ceb #cs #cy #da #de #el #en #es #et #fa #ff #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #ht #hu #hy #id #ig #ilo #is #it #ja #jv #ka #kk #km #kn #ko #lb #lg #ln #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #ns #oc #or #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #so #sq #sr #ss #su #sv #sw #ta #th #tl #tn #tr #uk #ur #uz #vi #wo #xh #yi #yo #zh #zu #arxiv-2010.11125 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
# M2M100 418M M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this paper and first released in this repository. The model can directly translate between the 9,900 directions of 100 languages. To translate into a target language, the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method. *Note: 'M2M100Tokenizer' depends on 'sentencepiece', so make sure to install it before running the example.* To install 'sentencepiece' run 'pip install sentencepiece' See the model hub to look for more fine-tuned versions. ## Languages covered Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) ## BibTeX entry and citation info
[ "# M2M100 418M\n\nM2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.\nIt was introduced in this paper and first released in this repository.\n\nThe model can directly translate between the 9,900 directions of 100 languages.\nTo translate into a target language, the target language id is forced as the first generated token.\nTo force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.\n\n*Note: 'M2M100Tokenizer' depends on 'sentencepiece', so make sure to install it before running the example.*\n\nTo install 'sentencepiece' run 'pip install sentencepiece'\n\n\n\n\n\nSee the model hub to look for more fine-tuned versions.", "## Languages covered\nAfrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)", "## BibTeX entry and citation info" ]
[ "TAGS\n#transformers #pytorch #rust #m2m_100 #text2text-generation #multilingual #af #am #ar #ast #az #ba #be #bg #bn #br #bs #ca #ceb #cs #cy #da #de #el #en #es #et #fa #ff #fi #fr #fy #ga #gd #gl #gu #ha #he #hi #hr #ht #hu #hy #id #ig #ilo #is #it #ja #jv #ka #kk #km #kn #ko #lb #lg #ln #lo #lt #lv #mg #mk #ml #mn #mr #ms #my #ne #nl #no #ns #oc #or #pa #pl #ps #pt #ro #ru #sd #si #sk #sl #so #sq #sr #ss #su #sv #sw #ta #th #tl #tn #tr #uk #ur #uz #vi #wo #xh #yi #yo #zh #zu #arxiv-2010.11125 #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# M2M100 418M\n\nM2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation.\nIt was introduced in this paper and first released in this repository.\n\nThe model can directly translate between the 9,900 directions of 100 languages.\nTo translate into a target language, the target language id is forced as the first generated token.\nTo force the target language id as the first generated token, pass the 'forced_bos_token_id' parameter to the 'generate' method.\n\n*Note: 'M2M100Tokenizer' depends on 'sentencepiece', so make sure to install it before running the example.*\n\nTo install 'sentencepiece' run 'pip install sentencepiece'\n\n\n\n\n\nSee the model hub to look for more fine-tuned versions.", "## Languages covered\nAfrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu)", "## BibTeX entry and citation info" ]
image-segmentation
transformers
# MaskFormer

MaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).

Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png)

## Intended uses & limitations

You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests

url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-ade")
inputs = feature_extractor(images=image, return_tensors="pt")

model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-ade")
outputs = model(**inputs)

# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits

# you can pass them to feature_extractor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = feature_extractor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
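The `predicted_semantic_map` returned above is a `(height, width)` tensor of ADE20k class ids. The short sketch below is not from the original card; it assumes the snippet above has already run and uses `model.config.id2label` (the standard label mapping on transformers configs) to show which classes dominate the prediction:

```python
import torch

# predicted_semantic_map comes from the snippet above: a (height, width) tensor of class ids
class_ids, pixel_counts = torch.unique(predicted_semantic_map, return_counts=True)

# sort classes by how much of the image they cover and print human-readable labels
for class_id, count in sorted(zip(class_ids.tolist(), pixel_counts.tolist()),
                              key=lambda pair: pair[1], reverse=True):
    label = model.config.id2label.get(class_id, f"class {class_id}")
    share = 100.0 * count / predicted_semantic_map.numel()
    print(f"{label}: {share:.1f}% of pixels")
```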
{"license": "other", "tags": ["vision", "image-segmentation"], "datasets": ["scene_parse_150"], "widget": [{"src": "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg", "example_title": "House"}, {"src": "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg", "example_title": "Castle"}]}
facebook/maskformer-swin-base-ade
null
[ "transformers", "pytorch", "maskformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2107.06278", "license:other", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2107.06278" ]
[]
TAGS #transformers #pytorch #maskformer #vision #image-segmentation #dataset-scene_parse_150 #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us
# MaskFormer MaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. !model image ## Intended uses & limitations You can use this particular checkpoint for semantic segmentation. See the model hub to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: For more code examples, we refer to the documentation.
[ "# MaskFormer\n\nMaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image", "## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation." ]
[ "TAGS\n#transformers #pytorch #maskformer #vision #image-segmentation #dataset-scene_parse_150 #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us \n", "# MaskFormer\n\nMaskFormer model trained on ADE20k semantic segmentation (base-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image", "## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation." ]
image-segmentation
transformers
# MaskFormer

MaskFormer model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).

Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png)

## Intended uses & limitations

You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests

# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-base-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")

outputs = model(**inputs)

# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits

# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]

# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
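The post-processed `result` above contains more than the `segmentation` map. As a hedged sketch (not part of the original card, and assuming the `segments_info` list that recent transformers versions return alongside the map), you can list the detected segments with their COCO labels and scores like this:

```python
# result comes from post_process_panoptic_segmentation in the snippet above
predicted_panoptic_map = result["segmentation"]  # (height, width) tensor of segment ids

# each entry describes one predicted segment: its id in the map, its class, and a confidence score
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"segment {segment['id']}: {label} (score {segment['score']:.3f})")
```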
{"license": "other", "tags": ["vision", "image-segmentation"], "datasets": ["coco"], "widget": [{"src": "http://images.cocodataset.org/val2017/000000039769.jpg", "example_title": "Cats"}, {"src": "http://images.cocodataset.org/val2017/000000039770.jpg", "example_title": "Castle"}]}
facebook/maskformer-swin-base-coco
null
[ "transformers", "pytorch", "maskformer", "vision", "image-segmentation", "dataset:coco", "arxiv:2107.06278", "license:other", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2107.06278" ]
[]
TAGS #transformers #pytorch #maskformer #vision #image-segmentation #dataset-coco #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us
# MaskFormer MaskFormer model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. !model image ## Intended uses & limitations You can use this particular checkpoint for semantic segmentation. See the model hub to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: For more code examples, we refer to the documentation.
[ "# MaskFormer\n\nMaskFormer model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image", "## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation." ]
[ "TAGS\n#transformers #pytorch #maskformer #vision #image-segmentation #dataset-coco #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us \n", "# MaskFormer\n\nMaskFormer model trained on COCO panoptic segmentation (base-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image", "## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation." ]
image-segmentation
transformers
# MaskFormer

MaskFormer model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).

Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.

## Model description

MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png)

## Intended uses & limitations

You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
from transformers import MaskFormerImageProcessor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests

url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = MaskFormerImageProcessor.from_pretrained("facebook/maskformer-swin-large-ade")
inputs = processor(images=image, return_tensors="pt")

model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-ade")
outputs = model(**inputs)

# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits

# you can pass them to processor for postprocessing
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
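If you only need the final segmentation output, the high-level `pipeline` API wraps the same pre- and post-processing. This is a minimal sketch, under the assumption that the `image-segmentation` pipeline accepts this checkpoint (it dispatches to the processor and model classes used above):

```python
from transformers import pipeline

# the pipeline bundles image preprocessing, the forward pass and post-processing
segmenter = pipeline("image-segmentation", model="facebook/maskformer-swin-large-ade")

url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
results = segmenter(url)

# each entry is expected to carry a class label and a PIL mask for one predicted region
for r in results:
    print(r["label"], r["mask"].size)
```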
{"license": "other", "tags": ["vision", "image-segmentation"], "datasets": ["scene_parse_150"], "widget": [{"src": "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg", "example_title": "House"}, {"src": "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg", "example_title": "Castle"}]}
facebook/maskformer-swin-large-ade
null
[ "transformers", "pytorch", "maskformer", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2107.06278", "license:other", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
[ "2107.06278" ]
[]
TAGS #transformers #pytorch #maskformer #vision #image-segmentation #dataset-scene_parse_150 #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us
# MaskFormer MaskFormer model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. !model image ## Intended uses & limitations You can use this particular checkpoint for semantic segmentation. See the model hub to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: For more code examples, we refer to the documentation.
[ "# MaskFormer\n\nMaskFormer model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image", "## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation." ]
[ "TAGS\n#transformers #pytorch #maskformer #vision #image-segmentation #dataset-scene_parse_150 #arxiv-2107.06278 #license-other #endpoints_compatible #has_space #region-us \n", "# MaskFormer\n\nMaskFormer model trained on ADE20k semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper Per-Pixel Classification is Not All You Need for Semantic Segmentation and first released in this repository. \n\nDisclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.", "## Model description\n\nMaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.\n\n!model image", "## Intended uses & limitations\n\nYou can use this particular checkpoint for semantic segmentation. See the model hub to look for other\nfine-tuned versions on a task that interests you.", "### How to use\n\nHere is how to use this model:\n\n\n\nFor more code examples, we refer to the documentation." ]