| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.06918
|
John Pavlopoulos
|
John Pavlopoulos, Alv Romell, Jacob Curman, Olof Steinert, Tony
Lindgren, Markus Borg
|
Automotive Multilingual Fault Diagnosis
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Automated fault diagnosis can facilitate diagnostics assistance, speedier
troubleshooting, and better-organised logistics. Currently, AI-based
prognostics and health management in the automotive industry ignore the textual
descriptions of the experienced problems or symptoms. With this study, however,
we show that a multilingual pre-trained Transformer can effectively classify
the textual claims from a large company with vehicle fleets, despite the task's
challenging nature due to the 38 languages and 1,357 classes involved. Overall,
we report an accuracy of more than 80% for high-frequency classes and above 60%
for above-low-frequency classes, bringing novel evidence that multilingual
classification can benefit automotive troubleshooting management.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 11:33:10 GMT"
}
] | 2022-10-14T00:00:00 |
[
[
"Pavlopoulos",
"John",
""
],
[
"Romell",
"Alv",
""
],
[
"Curman",
"Jacob",
""
],
[
"Steinert",
"Olof",
""
],
[
"Lindgren",
"Tony",
""
],
[
"Borg",
"Markus",
""
]
] |
new_dataset
| 0.996365 |
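The abstract above describes classifying multilingual textual claims with a pre-trained Transformer. As a hedged illustration of that setup (the row does not name the model; `xlm-roberta-base`, the toy claims, and the label ids are assumptions), one fine-tuning step might look like:

```python
# Sketch: multi-class claim classification with a multilingual
# Transformer. Model choice and data are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=1357)  # 1,357 classes, per the abstract

claims = ["Motorn startar inte vid kall väderlek",  # Swedish
          "Bremsen quietschen beim Anfahren"]       # German
labels = torch.tensor([0, 1])                       # hypothetical class ids

enc = tok(claims, truncation=True, padding=True, return_tensors="pt")
loss = model(**enc, labels=labels).loss
loss.backward()  # one gradient step of an ordinary fine-tuning loop
```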
2210.06924
|
Rui Qin
|
Rui Qin, Bin Wang and Yu-Wing Tai
|
Scene Text Image Super-Resolution via Content Perceptual Loss and
Criss-Cross Transformer Blocks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text image super-resolution is a unique and important task to enhance
readability of text images to humans. It is widely used as pre-processing in
scene text recognition. However, due to the complex degradation in natural
scenes, recovering high-resolution texts from the low-resolution inputs is
ambiguous and challenging. Existing methods mainly leverage deep neural
networks trained with pixel-wise losses designed for natural image
reconstruction, which ignore the unique character characteristics of texts. A
few works proposed content-based losses. However, they only focus on text
recognizers' accuracy, while the reconstructed images may still be ambiguous to
humans. Further, they often have weak generalizability to handle cross
languages. To this end, we present TATSR, a Text-Aware Text Super-Resolution
framework, which effectively learns the unique text characteristics using
Criss-Cross Transformer Blocks (CCTBs) and a novel Content Perceptual (CP)
Loss. The CCTB extracts vertical and horizontal content information from text
images by two orthogonal transformers, respectively. The CP Loss supervises the
text reconstruction with content semantics by multi-scale text recognition
features, which effectively incorporates content awareness into the framework.
Extensive experiments on various language datasets demonstrate that TATSR
outperforms state-of-the-art methods in terms of both recognition accuracy and
human perception.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 11:48:45 GMT"
}
] | 2022-10-14T00:00:00 |
[
[
"Qin",
"Rui",
""
],
[
"Wang",
"Bin",
""
],
[
"Tai",
"Yu-Wing",
""
]
] |
new_dataset
| 0.999256 |
2210.06926
|
Aleksey Buzmakov
|
Aleksey Buzmakov, Tatiana Makhalova, Sergei O. Kuznetsov, Amedeo
Napoli
|
Delta-Closure Structure for Studying Data Distribution
| null | null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we revisit pattern mining and study the distribution
underlying a binary dataset thanks to the closure structure which is based on
passkeys, i.e., minimum generators in equivalence classes robust to noise. We
introduce $\Delta$-closedness, a generalization of the closure operator, where
$\Delta$ measures how a closed set differs from its upper neighbors in the
partial order induced by closure. A $\Delta$-class of equivalence includes
minimum and maximum elements and allows us to characterize the distribution
underlying the data. Moreover, the set of $\Delta$-classes of equivalence can
be partitioned into the so-called $\Delta$-closure structure. In particular, a
$\Delta$-class of equivalence with a high level demonstrates correlations among
many attributes, which are supported by more observations when $\Delta$ is
large. In the experiments, we study the $\Delta$-closure structure of several
real-world datasets and show that this structure is very stable for large
$\Delta$ and does not substantially depend on the data sampling used for the
analysis.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 11:50:27 GMT"
}
] | 2022-10-14T00:00:00 |
[
[
"Buzmakov",
"Aleksey",
""
],
[
"Makhalova",
"Tatiana",
""
],
[
"Kuznetsov",
"Sergei O.",
""
],
[
"Napoli",
"Amedeo",
""
]
] |
new_dataset
| 0.985586 |
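The abstract above builds on the standard closure operator from formal concept analysis. A minimal sketch under illustrative assumptions (the toy dataset and names are not from the paper): the closure of an attribute set X is the set of attributes shared by every object containing X, and $\Delta$ then compares a closed set's support with that of its upper neighbors.

```python
# Sketch of a closure operator over a binary dataset (toy example).
D = {
    "o1": {"a", "b", "c"},
    "o2": {"a", "b"},
    "o3": {"a", "c"},
}

def extent(X):
    """Objects that contain every attribute in X."""
    return {o for o, attrs in D.items() if X <= attrs}

def closure(X):
    """Attributes common to all objects in the extent of X."""
    objs = extent(X)
    return set.intersection(*(D[o] for o in objs)) if objs else set()

print(closure({"b"}))           # {'a', 'b'}: 'b' never occurs without 'a'
print(len(extent({"a", "b"})))  # support of the closed set {'a','b'}: 2
```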
2210.07158
|
Qing Li
|
Qing Li, Yu-Shen Liu, Jin-San Cheng, Cheng Wang, Yi Fang, Zhizhong Han
|
HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper
Surfaces
|
Accepted by NeurIPS 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a novel normal estimation method called HSurf-Net, which can
accurately predict normals from point clouds with noise and density variations.
Previous methods focus on learning point weights to fit neighborhoods into a
geometric surface approximated by a polynomial function with a predefined
order, based on which normals are estimated. However, fitting surfaces
explicitly from raw point clouds suffers from overfitting or underfitting
issues caused by inappropriate polynomial orders and outliers, which
significantly limits the performance of existing methods. To address these
issues, we introduce hyper surface fitting to implicitly learn hyper surfaces,
which are represented by multi-layer perceptron (MLP) layers that take point
features as input and output surface patterns in a high dimensional feature
space. We introduce a novel space transformation module, which consists of a
sequence of local aggregation layers and global shift layers, to learn an
optimal feature space, and a relative position encoding module to effectively
convert point clouds into the learned feature space. Our model learns hyper
surfaces from the noise-less features and directly predicts normal vectors. We
jointly optimize the MLP weights and module parameters in a data-driven manner
to make the model adaptively find the most suitable surface pattern for various
points. Experimental results show that our HSurf-Net achieves state-of-the-art performance on the synthetic shape dataset and on real-world indoor and outdoor scene datasets. The code, data and pretrained models are
publicly available.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 16:39:53 GMT"
}
] | 2022-10-14T00:00:00 |
[
[
"Li",
"Qing",
""
],
[
"Liu",
"Yu-Shen",
""
],
[
"Cheng",
"Jin-San",
""
],
[
"Wang",
"Cheng",
""
],
[
"Fang",
"Yi",
""
],
[
"Han",
"Zhizhong",
""
]
] |
new_dataset
| 0.9869 |
2210.07212
|
Adnan Aijaz
|
Joseph Bolarinwa, Alex Smith, Adnan Aijaz, Aleksandar Stanoev, Mahesh
Sooriyabandara, Manuel Giuliani
|
Haptic Teleoperation goes Wireless: Evaluation and Benchmarking of a
High-Performance Low-Power Wireless Control Technology
|
Accepted for publication in IEEE International Symposium on Safety,
Security, and Rescue Robotics (SSRR) 2022
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Communication delays and packet losses are commonly investigated issues in
the area of robotic teleoperation. This paper investigates the application of a
novel low-power wireless control technology (GALLOP) in a haptic teleoperation
scenario developed to aid in nuclear decommissioning. The new wireless control
protocol, which is based on an off-the-shelf Bluetooth chipset, is compared
against standard implementations of wired and wireless TCP/IP data transport.
Results, through objective and subjective data, show that GALLOP can be a
reasonable substitute for a wired TCP/IP connection, and performs better than a
standard wireless TCP/IP method based on Wi-Fi connectivity.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 17:39:59 GMT"
}
] | 2022-10-14T00:00:00 |
[
[
"Bolarinwa",
"Joseph",
""
],
[
"Smith",
"Alex",
""
],
[
"Aijaz",
"Adnan",
""
],
[
"Stanoev",
"Aleksandar",
""
],
[
"Sooriyabandara",
"Mahesh",
""
],
[
"Giuliani",
"Manuel",
""
]
] |
new_dataset
| 0.955831 |
2210.07242
|
Jingkang Yang
|
Jingkang Yang, Pengyun Wang, Dejian Zou, Zitang Zhou, Kunyuan Ding,
Wenxuan Peng, Haoqi Wang, Guangyao Chen, Bo Li, Yiyou Sun, Xuefeng Du,
Kaiyang Zhou, Wayne Zhang, Dan Hendrycks, Yixuan Li, Ziwei Liu
|
OpenOOD: Benchmarking Generalized Out-of-Distribution Detection
|
Accepted by NeurIPS 2022 Datasets and Benchmarks Track. Codebase:
https://github.com/Jingkang50/OpenOOD
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Out-of-distribution (OOD) detection is vital to safety-critical machine
learning applications and has thus been extensively studied, with a plethora of
methods developed in the literature. However, the field currently lacks a
unified, strictly formulated, and comprehensive benchmark, which often results
in unfair comparisons and inconclusive results. From the problem setting
perspective, OOD detection is closely related to neighboring fields including
anomaly detection (AD), open set recognition (OSR), and model uncertainty,
since methods developed for one domain are often applicable to each other. To
help the community to improve the evaluation and advance, we build a unified,
well-structured codebase called OpenOOD, which implements over 30 methods
developed in relevant fields and provides a comprehensive benchmark under the
recently proposed generalized OOD detection framework. With a comprehensive
comparison of these methods, we are gratified to find that the field has progressed significantly over the past few years, with both preprocessing methods and orthogonal post-hoc methods showing strong potential.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 17:59:57 GMT"
}
] | 2022-10-14T00:00:00 |
[
[
"Yang",
"Jingkang",
""
],
[
"Wang",
"Pengyun",
""
],
[
"Zou",
"Dejian",
""
],
[
"Zhou",
"Zitang",
""
],
[
"Ding",
"Kunyuan",
""
],
[
"Peng",
"Wenxuan",
""
],
[
"Wang",
"Haoqi",
""
],
[
"Chen",
"Guangyao",
""
],
[
"Li",
"Bo",
""
],
[
"Sun",
"Yiyou",
""
],
[
"Du",
"Xuefeng",
""
],
[
"Zhou",
"Kaiyang",
""
],
[
"Zhang",
"Wayne",
""
],
[
"Hendrycks",
"Dan",
""
],
[
"Li",
"Yixuan",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.96196 |
2002.06635
|
Pedro Oliveira
|
Pedro Oliveira, Alexandre Silva, Rui Valadas
|
HPIM-DM: a fast and reliable dense-mode multicast routing protocol
(extended version)
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes the HPIM-DM (Hard-state Protocol Independent Multicast -
Dense Mode) multicast routing protocol. HPIM-DM is a hard-state version of
PIM-DM that keeps its main characteristics but has faster convergence and
better resilience to replay attacks. Like PIM-DM, HPIM-DM is meant for dense
networks and bases its operation on a unicast routing protocol and reverse path forwarding checks. However, routers maintain full awareness of the multicast trees
at all times, allowing fast reconfiguration in the presence of network failures
or unicast route changes. This is achieved by (i) keeping information on all
upstream neighbors from which multicast data can be received, (ii) ensuring the
reliable transmission and sequencing of control messages, and (iii)
synchronizing the routing information immediately when a new router joins the
network. The protocol was fully implemented in Python, and the implementation
is publicly available. Finally, the correctness of the protocol was extensively
validated using model checking, logical reasoning and tests performed over the
protocol implementation.
|
[
{
"version": "v1",
"created": "Sun, 16 Feb 2020 18:16:47 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Dec 2021 12:18:46 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Feb 2022 13:18:55 GMT"
},
{
"version": "v4",
"created": "Wed, 12 Oct 2022 14:38:31 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Oliveira",
"Pedro",
""
],
[
"Silva",
"Alexandre",
""
],
[
"Valadas",
"Rui",
""
]
] |
new_dataset
| 0.998288 |
2105.11605
|
Tianxing Xu
|
Tian-Xing Xu, Yuan-Chen Guo, Zhiqiang Li, Ge Yu, Yu-Kun Lai, Song-Hai
Zhang
|
TransLoc3D : Point Cloud based Large-scale Place Recognition using
Adaptive Receptive Fields
|
Appeared as a poster at Computational Visual Media 2022. Accepted to Communications in Information and Systems
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Place recognition plays an essential role in the field of autonomous driving
and robot navigation. Point cloud based methods mainly focus on extracting
global descriptors from local features of point clouds. Despite having achieved
promising results, existing solutions neglect the following aspects, which may
cause performance degradation: (1) huge size difference between objects in
outdoor scenes; (2) moving objects that are unrelated to place recognition; (3)
long-range contextual information. We illustrate that the above aspects bring
challenges to extracting discriminative global descriptors. To mitigate these
problems, we propose a novel method named TransLoc3D, utilizing adaptive
receptive fields with a point-wise reweighting scheme to handle objects of
different sizes while suppressing noises, and an external transformer to
capture long-range feature dependencies. As opposed to existing architectures
which adopt fixed and limited receptive fields, our method benefits from
size-adaptive receptive fields as well as global contextual information, and
outperforms current state-of-the-arts with significant improvements on popular
datasets.
|
[
{
"version": "v1",
"created": "Tue, 25 May 2021 01:54:31 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Jun 2021 09:38:58 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Oct 2022 09:22:30 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Xu",
"Tian-Xing",
""
],
[
"Guo",
"Yuan-Chen",
""
],
[
"Li",
"Zhiqiang",
""
],
[
"Yu",
"Ge",
""
],
[
"Lai",
"Yu-Kun",
""
],
[
"Zhang",
"Song-Hai",
""
]
] |
new_dataset
| 0.970607 |
2112.13593
|
Shwai He
|
Shwai He and Shi Gu
|
Multi-modal Attention Network for Stock Movements Prediction
|
The AAAI-22 Workshop on Knowledge Discovery from Unstructured Data in
Financial Services (KDF 2022)
| null | null | null |
cs.LG cs.CL q-fin.TR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Stock prices move as piece-wise trending fluctuations rather than as a purely
random walk. Traditionally, the prediction of future stock movements is based
on the historical trading record. Nowadays, with the development of social
media, many active participants in the market choose to publicize their
strategies, which provides a window onto the whole market's attitude
towards future movements by extracting the semantics behind social media.
However, social media contains conflicting information and cannot replace
historical records completely. In this work, we propose a multi-modality
attention network to reduce conflicts and integrate semantic and numeric
features to predict future stock movements comprehensively. Specifically, we
first extract semantic information from social media and estimate their
credibility based on posters' identity and public reputation. Then we
incorporate the semantics from online posts and numeric features from historical
records to make the trading strategy. Experimental results show that our
approach outperforms previous methods by a significant margin in both
prediction accuracy (61.20\%) and trading profits (9.13\%). It demonstrates
that our method improves the performance of stock movements prediction and
informs future research on multi-modality fusion towards stock prediction.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 10:03:09 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jan 2022 10:13:31 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2022 06:46:51 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Sep 2022 07:33:29 GMT"
},
{
"version": "v5",
"created": "Wed, 12 Oct 2022 13:00:01 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"He",
"Shwai",
""
],
[
"Gu",
"Shi",
""
]
] |
new_dataset
| 0.972846 |
2201.03533
|
Uri Shaham
|
Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv,
Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy
|
SCROLLS: Standardized CompaRison Over Long Language Sequences
|
EMNLP 2022
| null | null | null |
cs.CL cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
NLP benchmarks have largely focused on short texts, such as sentences and
paragraphs, even though long texts comprise a considerable amount of natural
language in the wild. We introduce SCROLLS, a suite of tasks that require
reasoning over long texts. We examine existing long-text datasets, and handpick
ones where the text is naturally long, while prioritizing tasks that involve
synthesizing information across the input. SCROLLS contains summarization,
question answering, and natural language inference tasks, covering multiple
domains, including literature, science, business, and entertainment. Initial
baselines, including Longformer Encoder-Decoder, indicate that there is ample
room for improvement on SCROLLS. We make all datasets available in a unified
text-to-text format and host a live leaderboard to facilitate research on model
architecture and pretraining methods.
|
[
{
"version": "v1",
"created": "Mon, 10 Jan 2022 18:47:15 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 21:30:28 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Shaham",
"Uri",
""
],
[
"Segal",
"Elad",
""
],
[
"Ivgi",
"Maor",
""
],
[
"Efrat",
"Avia",
""
],
[
"Yoran",
"Ori",
""
],
[
"Haviv",
"Adi",
""
],
[
"Gupta",
"Ankit",
""
],
[
"Xiong",
"Wenhan",
""
],
[
"Geva",
"Mor",
""
],
[
"Berant",
"Jonathan",
""
],
[
"Levy",
"Omer",
""
]
] |
new_dataset
| 0.998734 |
2203.12865
|
Shaily Bhatt
|
Karthikeyan K, Shaily Bhatt, Pankaj Singh, Somak Aditya, Sandipan
Dandapat, Sunayana Sitaram, Monojit Choudhury
|
Multilingual CheckList: Generation and Evaluation
|
Accepted to Findings of AACL-IJCNLP 2022
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multilingual evaluation benchmarks usually contain limited high-resource
languages and do not test models for specific linguistic capabilities.
CheckList is a template-based evaluation approach that tests models for
specific capabilities. The CheckList template creation process requires native
speakers, posing a challenge in scaling to hundreds of languages. In this work,
we explore multiple approaches to generating Multilingual CheckLists. We devise an algorithm, the Template Extraction Algorithm (TEA), for automatically extracting target-language CheckList templates from machine-translated instances of source-language templates. We compare the TEA CheckLists with CheckLists
created with different levels of human intervention. We further introduce
metrics along the dimensions of cost, diversity, utility, and correctness to
compare the CheckLists. We thoroughly analyze different approaches to creating
CheckLists in Hindi. Furthermore, we experiment with 9 more different
languages. We find that TEA followed by human verification is ideal for scaling
Checklist-based evaluation to multiple languages, while TEA alone gives a good estimate of model performance.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 06:05:28 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 11:29:17 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Oct 2022 03:43:20 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"K",
"Karthikeyan",
""
],
[
"Bhatt",
"Shaily",
""
],
[
"Singh",
"Pankaj",
""
],
[
"Aditya",
"Somak",
""
],
[
"Dandapat",
"Sandipan",
""
],
[
"Sitaram",
"Sunayana",
""
],
[
"Choudhury",
"Monojit",
""
]
] |
new_dataset
| 0.997677 |
2205.02177
|
Sebastian M\"uller
|
Sebastian M\"uller, Andreas Penzkofer, Nikita Polyanskii, Jonas Theis,
William Sanders, Hans Moog
|
Tangle 2.0 Leaderless Nakamoto Consensus on the Heaviest DAG
|
revised version, to appear in IEEE Access
| null |
10.1109/ACCESS.2022.3211422
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the theoretical foundations of the Tangle 2.0, a probabilistic
leaderless consensus protocol based on a directed acyclic graph (DAG) called
the Tangle. The Tangle naturally succeeds the blockchain as its next
evolutionary step as it offers features suited to establish more efficient and
scalable distributed ledger solutions.
Consensus is no longer found in the longest chain but on the heaviest DAG,
where PoW is replaced by a stake- or reputation-based weight function. The DAG
structure and the underlying Reality-based UTXO Ledger allow parallel
validation of transactions without the need for total ordering. Moreover, it
enables the removal of the intermediary of miners and validators, allowing a
pure two-step process that follows the \emph{propose-vote} paradigm at the node
level and not at the validator level.
We propose a framework to analyse liveness and safety under different
communication and adversary models. This allows us to provide impossibility results
in some edge cases and in the asynchronous communication model. We provide
formal proof of the security of the protocol assuming a common random coin.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 16:46:53 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 10:15:10 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Müller",
"Sebastian",
""
],
[
"Penzkofer",
"Andreas",
""
],
[
"Polyanskii",
"Nikita",
""
],
[
"Theis",
"Jonas",
""
],
[
"Sanders",
"William",
""
],
[
"Moog",
"Hans",
""
]
] |
new_dataset
| 0.980042 |
2206.04916
|
Yuchen Rao
|
Yuchen Rao, Yinyu Nie, Angela Dai
|
PatchComplete: Learning Multi-Resolution Patch Priors for 3D Shape
Completion on Unseen Categories
|
Video link: https://www.youtube.com/watch?v=Ch1rvw2D_Kc ; Project
page: https://yuchenrao.github.io/projects/patchComplete/patchComplete.html ;
Accepted to NeurIPS'22
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While 3D shape representations enable powerful reasoning in many visual and
perception applications, learning 3D shape priors tends to be constrained to
the specific categories trained on, leading to an inefficient learning process,
particularly for general applications with unseen categories. Thus, we propose
PatchComplete, which learns effective shape priors based on multi-resolution
local patches, which are often more general than full shapes (e.g., chairs and
tables often both share legs) and thus enable geometric reasoning about unseen
class categories. To learn these shared substructures, we learn
multi-resolution patch priors across all train categories, which are then
associated with input partial shape observations by attention across the patch
priors, and finally decoded into a complete shape reconstruction. Such
patch-based priors avoid overfitting to specific train categories and enable
reconstruction on entirely unseen categories at test time. We demonstrate the
effectiveness of our approach on synthetic ShapeNet data as well as challenging
real-scanned objects from ScanNet, which include noise and clutter, improving
over state of the art in novel-category shape completion by 19.3% in chamfer
distance on ShapeNet, and 9.0% for ScanNet.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 07:34:10 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 11:50:45 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Rao",
"Yuchen",
""
],
[
"Nie",
"Yinyu",
""
],
[
"Dai",
"Angela",
""
]
] |
new_dataset
| 0.999751 |
2206.07307
|
Fabian Mentzer
|
Fabian Mentzer, George Toderici, David Minnen, Sung-Jin Hwang, Sergi
Caelles, Mario Lucic, Eirikur Agustsson
|
VCT: A Video Compression Transformer
|
NeurIPS'22 Camera Ready Version. Code: https://goo.gle/vct-paper
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show how transformers can be used to vastly simplify neural video
compression. Previous methods have relied on an increasing number of
architectural biases and priors, including motion prediction and warping
operations, resulting in complex models. Instead, we independently map input
frames to representations and use a transformer to model their dependencies,
letting it predict the distribution of future representations given the past.
The resulting video compression transformer outperforms previous methods on
standard video compression data sets. Experiments on synthetic data show that
our model learns to handle complex motion patterns such as panning, blurring
and fading purely from data. Our approach is easy to implement, and we release
code to facilitate future research.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 05:31:32 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 09:01:27 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Mentzer",
"Fabian",
""
],
[
"Toderici",
"George",
""
],
[
"Minnen",
"David",
""
],
[
"Hwang",
"Sung-Jin",
""
],
[
"Caelles",
"Sergi",
""
],
[
"Lucic",
"Mario",
""
],
[
"Agustsson",
"Eirikur",
""
]
] |
new_dataset
| 0.989268 |
2206.10558
|
Jiayi Weng
|
Jiayi Weng, Min Lin, Shengyi Huang, Bo Liu, Denys Makoviichuk, Viktor
Makoviychuk, Zichen Liu, Yufan Song, Ting Luo, Yukun Jiang, Zhongwen Xu,
Shuicheng Yan
|
EnvPool: A Highly Parallel Reinforcement Learning Environment Execution
Engine
|
NeurIPS'22 camera-ready version
| null | null | null |
cs.LG cs.AI cs.DC cs.PF cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been significant progress in developing reinforcement learning (RL)
training systems. Past works such as IMPALA, Apex, Seed RL, Sample Factory, and
others aim to improve the system's overall throughput. In this paper, we aim
to address a common bottleneck in the RL training system, i.e., parallel
environment execution, which is often the slowest part of the whole system but
receives little attention. With a curated design for parallelizing RL environments, we have improved the RL environment simulation speed across
different hardware setups, ranging from a laptop and a modest workstation, to a
high-end machine such as NVIDIA DGX-A100. On a high-end machine, EnvPool
achieves one million frames per second for the environment execution on Atari
environments and three million frames per second on MuJoCo environments. When
running EnvPool on a laptop, the speed is 2.8x that of the Python subprocess.
Moreover, great compatibility with existing RL training libraries has been
demonstrated in the open-sourced community, including CleanRL, rl_games,
DeepMind Acme, etc. Finally, EnvPool allows researchers to iterate their ideas
at a much faster pace and has great potential to become the de facto RL
environment execution engine. Example runs show that it only takes five minutes
to train agents to play Atari Pong and MuJoCo Ant on a laptop. EnvPool is
open-sourced at https://github.com/sail-sg/envpool.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 17:36:15 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 16:53:29 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Weng",
"Jiayi",
""
],
[
"Lin",
"Min",
""
],
[
"Huang",
"Shengyi",
""
],
[
"Liu",
"Bo",
""
],
[
"Makoviichuk",
"Denys",
""
],
[
"Makoviychuk",
"Viktor",
""
],
[
"Liu",
"Zichen",
""
],
[
"Song",
"Yufan",
""
],
[
"Luo",
"Ting",
""
],
[
"Jiang",
"Yukun",
""
],
[
"Xu",
"Zhongwen",
""
],
[
"Yan",
"Shuicheng",
""
]
] |
new_dataset
| 0.991991 |
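As a usage illustration of the batched environment API described above (the calls follow the README of the linked repository; treat exact signatures as assumptions and check the project docs):

```python
# Sketch: stepping many Atari environments in one call via EnvPool.
import numpy as np
import envpool

env = envpool.make("Pong-v5", env_type="gym", num_envs=8)
obs = env.reset()                                   # batched observations
for _ in range(100):
    actions = np.random.randint(0, env.action_space.n, size=8)
    obs, rewards, dones, info = env.step(actions)   # all envs advance at once
```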
2207.11690
|
Xinyu Li
|
Xinyu Li
|
HouseX: A Fine-grained House Music Dataset and its Potential in the
Music Industry
|
7 pages. Accepted by APSIPA ASC 2022 to be held during Nov. 2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Machine sound classification has been one of the fundamental tasks of music
technology. A major branch of sound classification is the classification of
music genres. However, though covering most genres of music, existing music
genre datasets often do not contain fine-grained labels that indicate the
detailed sub-genres of music. In consideration of the consistency of genres of
songs in a mixtape or in a DJ (live) set, we have collected and annotated a
dataset of house music that provides 4 sub-genre labels, namely future house,
bass house, progressive house and melodic house. Experiments show that our
annotations well exhibit the characteristics of different categories. Also, we
have built baseline models that classify the sub-genre based on the
mel-spectrograms of a track, achieving strongly competitive results. In addition, we propose several application scenarios for our dataset and baseline model, including a short demo of a simulated sci-fi tunnel built and rendered in 3D modeling software, with the colors of the lights driven by the output of our model.
|
[
{
"version": "v1",
"created": "Sun, 24 Jul 2022 08:19:19 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 00:34:19 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Li",
"Xinyu",
""
]
] |
new_dataset
| 0.999864 |
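The baseline described above classifies tracks from their mel-spectrograms. A minimal featurization sketch (the file name and parameters are illustrative, not the paper's settings):

```python
# Sketch: compute a log-scaled mel-spectrogram as classifier input.
import librosa
import numpy as np

y, sr = librosa.load("track.wav", sr=22050, mono=True)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)  # decibel scale
print(mel_db.shape)  # (128, n_frames): one "image" per track
```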
2208.04950
|
Amel Dechemi
|
Amel Dechemi, Vikarn Bhakri, Ipsita Sahin, Arjun Modi, Julya Mestas,
Pamodya Peiris, Dannya Enriquez Barrundia, Elena Kokkoni, and Konstantinos
Karydis
|
BabyNet: A Lightweight Network for Infant Reaching Action Recognition in
Unconstrained Environments to Support Future Pediatric Rehabilitation
Applications
|
Accepted to RO-MAN 2021
| null |
10.1109/RO-MAN50785.2021.9515507
| null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action recognition is an important component to improve autonomy of physical
rehabilitation devices, such as wearable robotic exoskeletons. Existing human
action recognition algorithms focus on adult applications rather than pediatric
ones. In this paper, we introduce BabyNet, a light-weight (in terms of
trainable parameters) network structure to recognize infant reaching action
from off-body stationary cameras. We develop an annotated dataset that includes
diverse reaches performed while in a sitting posture by different infants in
unconstrained environments (e.g., in home settings, etc.). Our approach uses
the spatial and temporal connection of annotated bounding boxes to interpret
onset and offset of reaching, and to detect a complete reaching action. We
evaluate the efficiency of our proposed approach and compare its performance
against other learning-based network structures in terms of capability of
capturing temporal inter-dependencies and accuracy of detection of reaching
onset and offset. Results indicate our BabyNet can attain solid performance in
terms of (average) testing accuracy that exceeds that of other larger networks,
and can hence serve as a light-weight data-driven framework for video-based
infant reaching action recognition.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 07:38:36 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 04:34:41 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Dechemi",
"Amel",
""
],
[
"Bhakri",
"Vikarn",
""
],
[
"Sahin",
"Ipsita",
""
],
[
"Modi",
"Arjun",
""
],
[
"Mestas",
"Julya",
""
],
[
"Peiris",
"Pamodya",
""
],
[
"Barrundia",
"Dannya Enriquez",
""
],
[
"Kokkoni",
"Elena",
""
],
[
"Karydis",
"Konstantinos",
""
]
] |
new_dataset
| 0.999571 |
2209.05471
|
Yaping Zhao
|
Yaping Zhao, Ramgopal Ravi, Shuhui Shi, Zhongrui Wang, Edmund Y. Lam,
Jichang Zhao
|
PATE: Property, Amenities, Traffic and Emotions Coming Together for Real
Estate Price Prediction
|
Accepted by IEEE DSAA 2022. 10 pages, 3 figures
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Real estate prices have a significant impact on individuals, families,
businesses, and governments. The general objective of real estate price
prediction is to identify and exploit socioeconomic patterns arising from real
estate transactions over multiple aspects, ranging from the property itself to
other contributing factors. However, price prediction is a challenging
multidimensional problem that involves estimating many characteristics beyond
the property itself. In this paper, we use multiple sources of data to evaluate
the economic contribution of different socioeconomic characteristics such as
surrounding amenities, traffic conditions and social emotions. Our experiments
were conducted on 28,550 houses in Beijing, China and we rank each
characteristic by its importance. Since the use of multi-source information
improves the accuracy of predictions, the aforementioned characteristics can be
an invaluable resource to assess the economic and social value of real estate.
Code and data are available at: https://github.com/IndigoPurple/PATE
|
[
{
"version": "v1",
"created": "Mon, 29 Aug 2022 12:31:10 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 01:10:06 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Zhao",
"Yaping",
""
],
[
"Ravi",
"Ramgopal",
""
],
[
"Shi",
"Shuhui",
""
],
[
"Wang",
"Zhongrui",
""
],
[
"Lam",
"Edmund Y.",
""
],
[
"Zhao",
"Jichang",
""
]
] |
new_dataset
| 0.997998 |
2210.04068
|
Prashant Pandey
|
Prashant Pandey, Michael A. Bender, Alex Conway, Mart\'in
Farach-Colton, William Kuszmaul, Guido Tagliavini, Rob Johnson
|
IcebergHT: High Performance PMEM Hash Tables Through Stability and Low
Associativity
| null | null | null | null |
cs.DS cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Modern hash table designs strive to minimize space while maximizing speed.
The most important factor in speed is the number of cache lines accessed during
updates and queries. This is especially important on PMEM, which is slower than
DRAM and in which writes are more expensive than reads.
This paper proposes two stronger design objectives: stability and
low-associativity. A stable hash table doesn't move items around, and a hash
table has low associativity if there are only a few locations where an item can
be stored. Low associativity ensures that queries need to examine only a few
memory locations, and stability ensures that insertions write to very few cache
lines. Stability also simplifies scaling and crash safety.
We present IcebergHT, a fast, crash-safe, concurrent, and space-efficient
hash table for PMEM based on the design principles of stability and low
associativity. IcebergHT combines in-memory metadata with a new hashing
technique, iceberg hashing, that is (1) space efficient, (2) stable, and (3)
supports low associativity. In contrast, existing hash-tables either modify
numerous cache lines during insertions (e.g. cuckoo hashing), access numerous
cache lines during queries (e.g. linear probing), or waste space (e.g.
chaining). Moreover, the combination of (1)-(3) yields several emergent
benefits: IcebergHT scales better than other hash tables, supports
crash-safety, and has excellent performance on PMEM (where writes are
particularly expensive).
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 17:32:59 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 22:23:04 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Pandey",
"Prashant",
""
],
[
"Bender",
"Michael A.",
""
],
[
"Conway",
"Alex",
""
],
[
"Farach-Colton",
"Martín",
""
],
[
"Kuszmaul",
"William",
""
],
[
"Tagliavini",
"Guido",
""
],
[
"Johnson",
"Rob",
""
]
] |
new_dataset
| 0.998972 |
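The two design objectives named in the abstract can be made concrete with a toy table; this sketches the principles only, not the IcebergHT algorithm itself. Stability means an item never moves after insertion; low associativity means each key has only a few candidate slots, so a query touches few memory locations.

```python
# Toy stable, low-associativity hash table (illustration only).
class LowAssocTable:
    SLOTS_PER_BUCKET = 8  # low associativity: few candidate slots per key

    def __init__(self, n_buckets=1024):
        self.buckets = [[None] * self.SLOTS_PER_BUCKET
                        for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, slot in enumerate(bucket):
            if slot is None or slot[0] == key:
                bucket[i] = (key, value)  # stable: written once, never moved
                return True
        return False  # bucket full; a real design resizes or spills

    def get(self, key):
        # a query probes at most SLOTS_PER_BUCKET slots in one bucket
        for slot in self._bucket(key):
            if slot is not None and slot[0] == key:
                return slot[1]
        return None
```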
2210.04600
|
Herman Kamper
|
Kayode Olaleye, Dan Oneata, Herman Kamper
|
YFACC: A Yor\`ub\'a speech-image dataset for cross-lingual keyword
localisation through visual grounding
|
Accepted to IEEE SLT 2022
| null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visually grounded speech (VGS) models are trained on images paired with
unlabelled spoken captions. Such models could be used to build speech systems
in settings where it is impossible to get labelled data, e.g. for documenting
unwritten languages. However, most VGS studies are in English or other
high-resource languages. This paper attempts to address this shortcoming. We
collect and release a new single-speaker dataset of audio captions for 6k
Flickr images in Yor\`ub\'a -- a real low-resource language spoken in Nigeria.
We train an attention-based VGS model where images are automatically tagged
with English visual labels and paired with Yor\`ub\'a utterances. This enables
cross-lingual keyword localisation: a written English query is detected and
located in Yor\`ub\'a speech. To quantify the effect of the smaller dataset, we
compare to English systems trained on similar and more data. We hope that this
new dataset will stimulate research in the use of VGS models for real
low-resource languages.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 11:58:10 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 07:55:39 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Olaleye",
"Kayode",
""
],
[
"Oneata",
"Dan",
""
],
[
"Kamper",
"Herman",
""
]
] |
new_dataset
| 0.999042 |
2210.05425
|
Rabin Adhikari
|
Rabin Adhikari, Safal Thapaliya, Nirajan Basnet, Samip Poudel, Aman
Shakya, Bishesh Khanal
|
COVID-19-related Nepali Tweets Classification in a Low Resource Setting
|
Accepted at the 7th Social Media Mining for Health (#SMM4H) Workshop,
co-located at Coling 2022
| null | null | null |
cs.CL cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Billions of people across the globe have been using social media platforms in
their local languages to voice their opinions about the various topics related
to the COVID-19 pandemic. Several organizations, including the World Health
Organization, have developed automated social media analysis tools that
classify COVID-19-related tweets into various topics. However, these tools that
help combat the pandemic are limited to very few languages, leaving several countries unable to benefit from them. While multi-lingual or low-resource
language-specific tools are being developed, they still need to expand their
coverage, such as for the Nepali language. In this paper, we identify the eight
most common COVID-19 discussion topics among the Twitter community using the
Nepali language, set up an online platform to automatically gather Nepali
tweets containing the COVID-19-related keywords, classify the tweets into the
eight topics, and visualize the results across the period in a web-based
dashboard. We compare the performance of two state-of-the-art multi-lingual
language models for Nepali tweet classification, one generic (mBERT) and the
other Nepali language family-specific model (MuRIL). Our results show that the
models' relative performance depends on the data size, with MuRIL doing better
for a larger dataset. The annotated data, models, and the web-based dashboard
are open-sourced at https://github.com/naamiinepal/covid-tweet-classification.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 13:08:37 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Adhikari",
"Rabin",
""
],
[
"Thapaliya",
"Safal",
""
],
[
"Basnet",
"Nirajan",
""
],
[
"Poudel",
"Samip",
""
],
[
"Shakya",
"Aman",
""
],
[
"Khanal",
"Bishesh",
""
]
] |
new_dataset
| 0.979713 |
2210.05726
|
Anastasia Safonova
|
Anastasia Safonova, Tatiana Yudina, Emil Nadimanov, Cydnie Davenport
|
Automatic Speech Recognition of Low-Resource Languages Based on Chukchi
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The following paper presents a project focused on the research and creation of a new Automatic Speech Recognition (ASR) system for the Chukchi language. There is no complete corpus of the Chukchi language, so most of the work consisted of collecting audio and texts in the Chukchi language from open sources and processing them. We managed to collect 21:34:23 hours of audio
recordings and 112,719 sentences (or 2,068,273 words) of text in the Chukchi
language. An XLSR model trained on the collected data showed good results even with this small amount of data. Besides being a low-resource language, Chukchi is also polysynthetic, which significantly complicates any automatic processing. Thus, the usual WER metric
for evaluating ASR becomes less indicative for a polysynthetic language.
However, the CER metric showed good results. The question of metrics for
polysynthetic languages remains open.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 18:37:15 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Safonova",
"Anastasia",
""
],
[
"Yudina",
"Tatiana",
""
],
[
"Nadimanov",
"Emil",
""
],
[
"Davenport",
"Cydnie",
""
]
] |
new_dataset
| 0.999155 |
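The abstract's point about WER versus CER for a polysynthetic language can be illustrated with a small sketch (the near-miss word below is invented, not Chukchi data): one wrong character fails the whole long word under WER, while CER still credits the rest.

```python
# Sketch: word vs. character error rate via edit distance.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def wer(ref, hyp):
    return levenshtein(ref.split(), hyp.split()) / len(ref.split())

def cer(ref, hyp):
    return levenshtein(ref, hyp) / len(ref)

ref, hyp = "mytqykenat", "mytqikenat"  # invented one-character near miss
print(wer(ref, hyp))  # 1.0: the single long word counts as fully wrong
print(cer(ref, hyp))  # 0.1: only one character in ten differs
```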
2210.05765
|
Alex Lecavalier
|
Alex Lecavalier, Jeff Denis, Jean-S\'ebastien Plante, Alexandre Girard
|
A Bimodal Hydrostatic Actuator for Robotic Legs with Compliant Fast
Motion and High Lifting Force
|
7 pages, 15 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic legs have bimodal operations: swing phases when the leg needs to move
quickly in the air (high-speed, low-force) and stance phases when the leg bears
the weight of the system (low-speed, high-force). Sizing a traditional
single-ratio actuation system for such extremum operations leads to oversized
heavy electric motor and poor energy efficiency, which hinder the capability of
legged systems that bear the mass of their actuators and energy source. This
paper explores an actuation concept where a hydrostatic transmission is
dynamically reconfigured using valves to suit the requirements of each phase of
a robotic leg. An analysis of the mass-delay-flow trade-off for the switching
valve is presented. Then, a custom actuation system is built and integrated on
a robotic leg test bench to evaluate the concept. Experimental results show
that 1) small motorized ball valves can make fast transitions between operating
modes when designed for this task, 2) the proposed operating principle and
control schemes allow for seamless transitions, even during an impact with the
ground and 3) the actuator characteristics address the needs of a leg bimodal
operation in terms of force, speed and compliance.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 20:17:26 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Lecavalier",
"Alex",
""
],
[
"Denis",
"Jeff",
""
],
[
"Plante",
"Jean-Sébastien",
""
],
[
"Girard",
"Alexandre",
""
]
] |
new_dataset
| 0.999548 |
2210.05772
|
Zirong Chen
|
Zirong Chen
|
Applying FrameNet to Chinese(Poetry)
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
FrameNet (Fillmore and Baker [2009]) is well known for its wide use for knowledge representation in the form of inheritance-based ontologies and lexica (Trott et al. [2020]). Although FrameNet is usually applied to languages like English, Spanish and Italian, there are still plenty of FrameNet data sets available for other languages like Chinese, which differs significantly from those languages based on Latin alphabets. In this paper, a translation from ancient Chinese poetry to modern Chinese is first conducted so that the Chinese FrameNet (CFN, provided by Shanxi University) can be applied. Afterwards, a translation from modern Chinese is conducted as well, enabling a comparison between the applications of CFN and English FrameNet. Finally, an overall comparison is drawn between CFN on modern Chinese and English FrameNet.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 20:28:20 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Chen",
"Zirong",
""
]
] |
new_dataset
| 0.986858 |
2210.05773
|
Zirong Chen
|
Zirong Chen and Haotian Xue
|
Bil-DOS: A Bi-lingual Dialogue Ordering System (for Subway)
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Due to unfamiliarity with particular words (or proper nouns) for ingredients, non-native English speakers can be extremely confused by the ordering process in restaurants like Subway. Thus, we developed a dialogue system that supports Chinese (Mandarin) and English at the same time. In other words, users can switch arbitrarily between Chinese (Mandarin) and English as the conversation is being conducted. The system is specifically designed for Subway ordering. In Bil-DOS, we designed a Discriminator module to tell which language is being used in an inputted user utterance, a Translator module to translate the used language into English if it is not English, and a Dialogue Manager module to detect the intention within inputted user utterances, handle outlier inputs by throwing clarification requests, map the detected Intention and detailed Keyword into a particular intention class, locate the current ordering process, continue to issue queries to finish the order, conclude the order details once the order is completed, and activate the evaluation process when the conversation is done.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 20:32:02 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Chen",
"Zirong",
""
],
[
"Xue",
"Haotian",
""
]
] |
new_dataset
| 0.99952 |
2210.05784
|
Yusuke Tanaka
|
Yusuke Tanaka, Ankur Mehta
|
REMS: Middleware for Robotics Education and Development
|
Submission to ICRA2023
| null | null | null |
cs.RO cs.MA cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces REMS, a robotics middleware and control framework that is designed to bring the Zen of Python to robotics and to improve robotics education and development flow. Although existing middleware can provide hardware abstraction and modularity, setting up environments and learning middleware-specific syntax and procedures are less viable in education. They can curb opportunities to understand robotics concepts, theories, and algorithms. Robotics is a field of integration; students and developers from various backgrounds will be involved in programming. Establishing a Pythonic and object-oriented robotic framework in a natural way not only enhances modular and abstracted programming for better readability, reusability, and simplicity, but also supports useful and practical coding skills in general. REMS is intended to be a valuable educational medium, not just a tool, and a platform that scales from one robot to multi-agent systems across hardware, simulation, and analytical-model implementations.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 21:05:08 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Tanaka",
"Yusuke",
""
],
[
"Mehta",
"Ankur",
""
]
] |
new_dataset
| 0.996452 |
2210.05828
|
Peiye Zhuang
|
Peiye Zhuang, Jia-bin Huang, Ayush Saraf, Xuejian Rong, Changil Kim,
Denis Demandolx
|
AMICO: Amodal Instance Composition
|
Accepted to BMVC 2021, 20 pages, 12 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Image composition aims to blend multiple objects to form a harmonized image.
Existing approaches often assume precisely segmented and intact objects. Such
assumptions, however, are hard to satisfy in unconstrained scenarios. We
present Amodal Instance Composition for compositing imperfect -- potentially
incomplete and/or coarsely segmented -- objects onto a target image. We first
develop object shape prediction and content completion modules to synthesize
the amodal contents. We then propose a neural composition model to blend the
objects seamlessly. Our primary technical novelty lies in using separate
foreground/background representations and blending mask prediction to alleviate
segmentation errors. Our results show state-of-the-art performance on public
COCOA and KINS benchmarks and attain favorable visual results across diverse
scenes. We demonstrate various image composition applications such as object
insertion and de-occlusion.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 23:23:14 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Zhuang",
"Peiye",
""
],
[
"Huang",
"Jia-bin",
""
],
[
"Saraf",
"Ayush",
""
],
[
"Rong",
"Xuejian",
""
],
[
"Kim",
"Changil",
""
],
[
"Demandolx",
"Denis",
""
]
] |
new_dataset
| 0.999671 |
2210.05836
|
An Yan
|
An Yan, Jiacheng Li, Wanrong Zhu, Yujie Lu, William Yang Wang, Julian
McAuley
|
CLIP also Understands Text: Prompting CLIP for Phrase Understanding
|
Work in progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contrastive Language-Image Pretraining (CLIP) efficiently learns visual
concepts by pre-training with natural language supervision. CLIP and its visual
encoder have been explored on various vision and language tasks and achieve
strong zero-shot or transfer learning performance. However, the application of
its text encoder solely for text understanding has been less explored. In this
paper, we find that the text encoder of CLIP actually demonstrates strong
ability for phrase understanding, and can even significantly outperform popular
language models such as BERT with a properly designed prompt. Extensive
experiments validate the effectiveness of our method across different datasets
and domains on entity clustering and entity set expansion tasks.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 23:35:18 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Yan",
"An",
""
],
[
"Li",
"Jiacheng",
""
],
[
"Zhu",
"Wanrong",
""
],
[
"Lu",
"Yujie",
""
],
[
"Wang",
"William Yang",
""
],
[
"McAuley",
"Julian",
""
]
] |
new_dataset
| 0.996973 |
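As a hedged sketch of the idea above (the prompt template and model checkpoint are assumptions, not the paper's exact choices): embed phrases with CLIP's text encoder inside a prompt and compare them by cosine similarity.

```python
# Sketch: phrase similarity from CLIP's text encoder.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

phrases = ["golden retriever", "labrador", "espresso machine"]
prompts = [f"a photo of a {p}" for p in phrases]  # assumed template

with torch.no_grad():
    feats = model.get_text_features(
        **tok(prompts, padding=True, return_tensors="pt"))
feats = feats / feats.norm(dim=-1, keepdim=True)
print(feats @ feats.T)  # the two dog breeds should score most similar
```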
2210.05840
|
Jielin Qiu
|
Jielin Qiu, Franck Dernoncourt, Trung Bui, Zhaowen Wang, Ding Zhao,
Hailin Jin
|
LiveSeg: Unsupervised Multimodal Temporal Segmentation of Long
Livestream Videos
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Livestream videos have become a significant part of online learning, where
design, digital marketing, creative painting, and other skills are taught by
experienced experts in the sessions, making them valuable materials. However,
Livestream tutorial videos are usually hours long, recorded, and uploaded to
the Internet directly after the live sessions, making it hard for other people
to catch up quickly. An outline will be a beneficial solution, which requires
the video to be temporally segmented according to topics. In this work, we
introduced a large Livestream video dataset named MultiLive, and formulated the task of temporal segmentation of long Livestream videos (TSLLV). We propose
LiveSeg, an unsupervised Livestream video temporal Segmentation solution, which
takes advantage of multimodal features from different domains. Our method
achieved a $16.8\%$ F1-score performance improvement compared with the
state-of-the-art method.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 00:08:17 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Qiu",
"Jielin",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Bui",
"Trung",
""
],
[
"Wang",
"Zhaowen",
""
],
[
"Zhao",
"Ding",
""
],
[
"Jin",
"Hailin",
""
]
] |
new_dataset
| 0.99965 |
2210.05875
|
Sunjae Kwon
|
Sunjae Kwon, Zonghai Yao, Harmon S. Jordan, David A. Levy, Brian
Corner, Hong Yu
|
MedJEx: A Medical Jargon Extraction Model with Wiki's Hyperlink Span and
Contextualized Masked Language Model Score
|
Accepted to EMNLP 22
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a new natural language processing (NLP) application for
identifying medical jargon terms potentially difficult for patients to
comprehend from electronic health record (EHR) notes. We first present a novel
and publicly available dataset with expert-annotated medical jargon terms from
18K+ EHR note sentences ($MedJ$). Then, we introduce a novel medical jargon
extraction ($MedJEx$) model which has been shown to outperform existing
state-of-the-art NLP models. First, MedJEx improved the overall performance
when it was trained on an auxiliary Wikipedia hyperlink span dataset, where
hyperlink spans provide additional Wikipedia articles to explain the spans (or
terms), and then fine-tuned on the annotated MedJ data. Secondly, we found that
a contextualized masked language model score was beneficial for detecting
domain-specific unfamiliar jargon terms. Moreover, our results show that
training on the auxiliary Wikipedia hyperlink span datasets improved six out of
eight biomedical named entity recognition benchmark datasets. Both MedJ and
MedJEx are publicly available.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 02:27:32 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Kwon",
"Sunjae",
""
],
[
"Yao",
"Zonghai",
""
],
[
"Jordan",
"Harmon S.",
""
],
[
"Levy",
"David A.",
""
],
[
"Corner",
"Brian",
""
],
[
"Yu",
"Hong",
""
]
] |
new_dataset
| 0.999612 |
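The contextualized masked-language-model score mentioned above can be sketched as a pseudo-log-likelihood: mask the term's wordpieces one at a time and average the model's surprisal. The checkpoint and scoring details here are assumptions for illustration, not MedJEx's exact formulation.

```python
# Sketch: mean masked-LM surprisal of a term in context.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def term_surprisal(sentence, term):
    enc = tok(sentence, return_tensors="pt")
    term_ids = tok(term, add_special_tokens=False).input_ids
    ids = enc.input_ids[0].tolist()
    # locate the term's wordpiece span inside the encoded sentence
    start = next(i for i in range(len(ids))
                 if ids[i:i + len(term_ids)] == term_ids)
    total = 0.0
    for k, piece in enumerate(term_ids):
        masked = enc.input_ids.clone()
        masked[0, start + k] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(input_ids=masked,
                         attention_mask=enc.attention_mask).logits
        total -= torch.log_softmax(logits[0, start + k], -1)[piece].item()
    return total / len(term_ids)  # higher = more surprising = likelier jargon

s = "The patient presented with dyspnea after mild exertion."
print(term_surprisal(s, "dyspnea"))
```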
2210.05895
|
Haodong Duan
|
Haodong Duan, Jiaqi Wang, Kai Chen, Dahua Lin
|
DG-STGCN: Dynamic Spatial-Temporal Modeling for Skeleton-based Action
Recognition
|
Codes will be released in https://github.com/kennymckormick/pyskl
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph convolution networks (GCN) have been widely used in skeleton-based
action recognition. We note that existing GCN-based approaches primarily rely
on prescribed graphical structures (i.e., a manually defined topology of
skeleton joints), which limits their flexibility to capture complicated
correlations between joints. To move beyond this limitation, we propose a new
framework for skeleton-based action recognition, namely Dynamic Group
Spatio-Temporal GCN (DG-STGCN). It consists of two modules, DG-GCN and DG-TCN,
respectively, for spatial and temporal modeling. In particular, DG-GCN uses
learned affinity matrices to capture dynamic graphical structures instead of
relying on a prescribed one, while DG-TCN performs group-wise temporal
convolutions with varying receptive fields and incorporates a dynamic
joint-skeleton fusion module for adaptive multi-level temporal modeling. On a
wide range of benchmarks, including NTURGB+D, Kinetics-Skeleton, BABEL, and
Toyota SmartHome, DG-STGCN consistently outperforms state-of-the-art methods,
often by a notable margin.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 03:17:37 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Duan",
"Haodong",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Chen",
"Kai",
""
],
[
"Lin",
"Dahua",
""
]
] |
new_dataset
| 0.966643 |
2210.05896
|
Zhijie Wang
|
Shuangzhi Li, Zhijie Wang, Felix Juefei-Xu, Qing Guo, Xingyu Li and
Lei Ma
|
Common Corruption Robustness of Point Cloud Detectors: Benchmark and
Enhancement
|
16 pages, 6 figures
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Object detection through LiDAR-based point cloud has recently been important
in autonomous driving. Although achieving high accuracy on public benchmarks,
the state-of-the-art detectors may still go wrong and cause a heavy loss due to
the widespread corruptions in the real world like rain, snow, sensor noise,
etc. Nevertheless, there is a lack of a large-scale dataset covering diverse
scenes and realistic corruption types with different severities to develop
practical and robust point cloud detectors, which is challenging due to the
heavy collection costs. To alleviate the challenge and start the first step for
robust point cloud detection, we propose the physical-aware simulation methods
to generate degraded point clouds under different real-world common
corruptions. Then, for the first attempt, we construct a benchmark based on the
physical-aware common corruptions for point cloud detectors, which contains a
total of 1,122,150 examples covering 7,481 scenes, 25 common corruption types,
and 6 severities. With such a novel benchmark, we conduct extensive empirical
studies on 8 state-of-the-art detectors spanning 6 different detection frameworks. We thus obtain several insightful observations revealing the
vulnerabilities of the detectors and indicating the enhancement directions.
Moreover, we further study the effectiveness of existing robustness enhancement
methods based on data augmentation and data denoising. The benchmark can
potentially be a new platform for evaluating point cloud detectors, opening a
door for developing novel robustness enhancement methods.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 03:23:35 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Li",
"Shuangzhi",
""
],
[
"Wang",
"Zhijie",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Guo",
"Qing",
""
],
[
"Li",
"Xingyu",
""
],
[
"Ma",
"Lei",
""
]
] |
new_dataset
| 0.997551 |
2210.05912
|
Runmin Cong
|
Runmin Cong, Weiyu Song, Jianjun Lei, Guanghui Yue, Yao Zhao, and Sam
Kwong
|
PSNet: Parallel Symmetric Network for Video Salient Object Detection
|
Accepted by IEEE Transactions on Emerging Topics in Computational
Intelligence 2022, 13 pages, 8 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
  For the video salient object detection (VSOD) task, how to exploit the
information from the appearance modality and the motion modality has always
been a topic of great concern. The two-stream structure, including an RGB
appearance stream and an optical flow motion stream, has been widely used as a
typical pipeline for VSOD tasks, but the existing methods usually only use
motion features to unidirectionally guide appearance features or adaptively but
blindly fuse two modality features. However, these methods underperform in
diverse scenarios due to their incomplete and unspecific learning schemes.
In this paper, following a more secure modeling philosophy, we deeply
investigate the importance of appearance modality and motion modality in a more
comprehensive way and propose a VSOD network with up and down parallel
symmetry, named PSNet. Two parallel branches with different dominant modalities
are set to achieve complete video saliency decoding with the cooperation of the
Gather Diffusion Reinforcement (GDR) module and Cross-modality Refinement and
Complement (CRC) module. Finally, we use the Importance Perception Fusion (IPF)
module to fuse the features from two parallel branches according to their
different importance in different scenarios. Experiments on four dataset
benchmarks demonstrate that our method achieves desirable and competitive
performance.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 04:11:48 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Cong",
"Runmin",
""
],
[
"Song",
"Weiyu",
""
],
[
"Lei",
"Jianjun",
""
],
[
"Yue",
"Guanghui",
""
],
[
"Zhao",
"Yao",
""
],
[
"Kwong",
"Sam",
""
]
] |
new_dataset
| 0.99942 |
2210.05984
|
Xu Xuecheng
|
Xuecheng Xu, Sha Lu, Jun Wu, Haojian Lu, Qiuguo Zhu, Yiyi Liao, Rong
Xiong and Yue Wang
|
RING++: Roto-translation Invariant Gram for Global Localization on a
Sparse Scan Map
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Global localization plays a critical role in many robot applications.
LiDAR-based global localization draws the community's focus with its robustness
against illumination and seasonal changes. To further improve the localization
under large viewpoint differences, we propose RING++ which has roto-translation
invariant representation for place recognition, and global convergence for both
rotation and translation estimation. With the theoretical guarantee, RING++ is
able to address the large viewpoint difference using a lightweight map with
sparse scans. In addition, we derive sufficient conditions on feature
extractors for the representation to preserve roto-translation invariance,
making RING++ a framework applicable to generic multi-channel features. To the
best of our knowledge, this is the first learning-free framework to address all
subtasks of global localization in the sparse scan map. Validations on
real-world datasets show that our approach demonstrates better performance than
state-of-the-art learning-free methods, and competitive performance with
learning-based methods. Finally, we integrate RING++ into a multi-robot/session
SLAM system, demonstrating its effectiveness in collaborative applications.
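  To illustrate the invariance principle alone (not RING++'s actual
representation), note that a yaw rotation of a scan becomes a circular shift
along the angular axis of a polar bird's-eye-view grid, so the FFT magnitude
along that axis is unchanged; a minimal sketch:

import numpy as np

def rotation_invariant_descriptor(polar_bev):
    # polar_bev: (num_angles, num_ranges) occupancy grid of one scan.
    # Shift theorem: |FFT| is unchanged by circular shifts along that axis.
    return np.abs(np.fft.fft(polar_bev, axis=0))

bev = np.random.rand(120, 40)
rotated = np.roll(bev, 17, axis=0)                 # yaw rotation = circular shift
d1 = rotation_invariant_descriptor(bev)
d2 = rotation_invariant_descriptor(rotated)
print(np.allclose(d1, d2))                         # True: descriptor unchanged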
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 07:49:24 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Xu",
"Xuecheng",
""
],
[
"Lu",
"Sha",
""
],
[
"Wu",
"Jun",
""
],
[
"Lu",
"Haojian",
""
],
[
"Zhu",
"Qiuguo",
""
],
[
"Liao",
"Yiyi",
""
],
[
"Xiong",
"Rong",
""
],
[
"Wang",
"Yue",
""
]
] |
new_dataset
| 0.967816 |
2210.06023
|
Tim Schopf
|
Tim Schopf, Daniel Braun, Florian Matthes
|
Lbl2Vec: An Embedding-Based Approach for Unsupervised Document Retrieval
on Predefined Topics
| null |
In Proceedings of the 17th International Conference on Web
Information Systems and Technologies - WEBIST, ISBN 978-989-758-536-4; ISSN
2184-3252, pages 124-132 (2021)
|
10.5220/0010710300003058
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we consider the task of retrieving documents with predefined
topics from an unlabeled document dataset using an unsupervised approach. The
proposed unsupervised approach requires only a small number of keywords
describing the respective topics and no labeled documents. Existing approaches
either rely heavily on a large amount of additionally encoded world knowledge
or on term-document frequencies. In contrast, we introduce a method that
learns jointly embedded document and word vectors solely from the unlabeled
document dataset in order to find documents that are semantically similar to
the topics described by the keywords. The proposed method requires almost no
text preprocessing but is simultaneously effective at retrieving relevant
documents with high probability. When successively retrieving documents on
different predefined topics from publicly available and commonly used datasets,
we achieved an average area under the receiver operating characteristic curve
value of 0.95 on one dataset and 0.92 on another. Further, our method can be
used for multiclass document classification, without the need to assign labels
to the dataset in advance. Compared with an unsupervised classification
baseline, we increased F1 scores from 76.6 to 82.7 and from 61.0 to 75.1 on the
respective datasets. For easy replication of our approach, we make the
developed Lbl2Vec code publicly available as a ready-to-use tool under the
3-Clause BSD license.
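  The retrieval step can be pictured with a minimal sketch (illustrative names
and stand-in vectors, not the released Lbl2Vec API): rank documents by cosine
similarity between their vectors and the centroid of a topic's keyword vectors
in the jointly learned embedding space.

import numpy as np

def retrieve_on_topic(doc_vectors, word_vectors, topic_keywords, top_k=10):
    # Centroid of the keyword vectors describes the topic in the shared space.
    centroid = np.mean([word_vectors[w] for w in topic_keywords], axis=0)
    sims = doc_vectors @ centroid / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(centroid) + 1e-9)
    return np.argsort(-sims)[:top_k]   # indices of the most on-topic documents

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 50))                  # stand-in document vectors
words = {"goal": rng.normal(size=50), "match": rng.normal(size=50)}
print(retrieve_on_topic(docs, words, ["goal", "match"], top_k=3))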
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 08:57:01 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Schopf",
"Tim",
""
],
[
"Braun",
"Daniel",
""
],
[
"Matthes",
"Florian",
""
]
] |
new_dataset
| 0.999134 |
2210.06033
|
Max Spahn
|
Max Spahn, Chadi Salmi, Javier Alonso-Mora
|
Local Planner Bench: Benchmarking for Local Motion Planning
|
Workshop @IROS2022: Evaluating Motion Planning Performance, 4 pages
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Local motion planning is a heavily researched topic in the field of robotics
with many promising algorithms being published every year. However, it is
difficult and time-consuming to compare different methods in the field. In this
paper, we present localPlannerBench, a new benchmarking suite that allows quick
and seamless comparison between local motion planning algorithms. The key focus
of the project lies in the extensibility of the environment and the simulation
cases. Out-of-the-box, localPlannerBench already supports many simulation cases
ranging from a simple 2D point mass to full-fledged 3D 7DoF manipulators, and
it is straightforward to add your own custom robot using a URDF file. A
post-processor is built-in that can be extended with custom metrics and plots.
To integrate your own motion planner, simply create a wrapper that derives from
the provided base class. Ultimately we aim to improve the reproducibility of
local motion planning algorithms and encourage standardized open-source
comparison.
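  The wrapper pattern described above might look roughly as follows; the class
and method names here are our assumptions for illustration, not the suite's
real base class:

import numpy as np

class PlannerBase:                        # stand-in for the suite's base class
    def setup(self, goal): ...
    def compute_action(self, observation): ...

class ProportionalPlanner(PlannerBase):   # trivial example local planner
    def setup(self, goal, gain=0.5):
        self.goal, self.gain = np.asarray(goal, dtype=float), gain

    def compute_action(self, observation):
        # Joint-velocity command pulling the robot towards the goal.
        return self.gain * (self.goal - observation["joint_positions"])

planner = ProportionalPlanner()
planner.setup(goal=[0.0, 1.0])
print(planner.compute_action({"joint_positions": np.zeros(2)}))  # [0.  0.5]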
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 09:09:46 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Spahn",
"Max",
""
],
[
"Salmi",
"Chadi",
""
],
[
"Alonso-Mora",
"Javier",
""
]
] |
new_dataset
| 0.997396 |
2210.06063
|
Heyuan Yao
|
Heyuan Yao, Zhenhua Song, Baoquan Chen, Libin Liu
|
ControlVAE: Model-Based Learning of Generative Controllers for
Physics-Based Characters
|
SIGGRAPH Asia 2022 (Journal Track);
| null |
10.1145/3550454.3555434
| null |
cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce ControlVAE, a novel model-based framework for
learning generative motion control policies based on variational autoencoders
(VAE). Our framework can learn a rich and flexible latent representation of
skills and a skill-conditioned generative control policy from a diverse set of
unorganized motion sequences, which enables the generation of realistic human
behaviors by sampling in the latent space and allows high-level control
policies to reuse the learned skills to accomplish a variety of downstream
tasks. In the training of ControlVAE, we employ a learnable world model to
realize direct supervision of the latent space and the control policy. This
world model effectively captures the unknown dynamics of the simulation system,
enabling efficient model-based learning of high-level downstream tasks. We also
learn a state-conditional prior distribution in the VAE-based generative
control policy, which generates a skill embedding that outperforms the
non-conditional priors in downstream tasks. We demonstrate the effectiveness of
ControlVAE using a diverse set of tasks, which allows realistic and interactive
control of the simulated characters.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 10:11:36 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Yao",
"Heyuan",
""
],
[
"Song",
"Zhenhua",
""
],
[
"Chen",
"Baoquan",
""
],
[
"Liu",
"Libin",
""
]
] |
new_dataset
| 0.992543 |
2210.06094
|
Achraf Ben-Hamadou
|
Achraf Ben-Hamadou and Oussama Smaoui and Houda Chaabouni-Chouayakh
and Ahmed Rekik and Sergi Pujades and Edmond Boyer and Julien Strippoli and
Aur\'elien Thollot and Hugo Setbon and Cyril Trosset and Edouard Ladroit
|
Teeth3DS: a benchmark for teeth segmentation and labeling from
intra-oral 3D scans
|
8 pages, 5 figures, 1 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Teeth segmentation and labeling are critical components of Computer-Aided
Dentistry (CAD) systems. Indeed, before any orthodontic or prosthetic treatment
planning, a CAD system needs to first accurately segment and label each
instance of teeth visible in the 3D dental scan in order to avoid
time-consuming manual adjustments by the dentist. Nevertheless, developing such
an automated and accurate dental segmentation and labeling tool is very
challenging, especially given the lack of publicly available datasets or
benchmarks. This article introduces the first public benchmark, named Teeth3DS,
which has been created in the frame of the 3DTeethSeg 2022 MICCAI challenge to
boost the research field and inspire the 3D vision research community to work
on intra-oral 3D scans analysis such as teeth identification, segmentation,
labeling, 3D modeling and 3D reconstruction. Teeth3DS is made of 1800
intra-oral scans (23,999 annotated teeth) collected from 900 patients covering
the upper and lower jaws separately, acquired and validated by
orthodontists/dental surgeons with more than 5 years of professional
experience.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 11:18:35 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Ben-Hamadou",
"Achraf",
""
],
[
"Smaoui",
"Oussama",
""
],
[
"Chaabouni-Chouayakh",
"Houda",
""
],
[
"Rekik",
"Ahmed",
""
],
[
"Pujades",
"Sergi",
""
],
[
"Boyer",
"Edmond",
""
],
[
"Strippoli",
"Julien",
""
],
[
"Thollot",
"Aurélien",
""
],
[
"Setbon",
"Hugo",
""
],
[
"Trosset",
"Cyril",
""
],
[
"Ladroit",
"Edouard",
""
]
] |
new_dataset
| 0.999513 |
2210.06104
|
Amir Hadifar
|
Amir Hadifar, Semere Kiros Bitew, Johannes Deleu, Chris Develder,
Thomas Demeester
|
EduQG: A Multi-format Multiple Choice Dataset for the Educational Domain
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a high-quality dataset that contains 3,397 samples comprising
(i) multiple choice questions, (ii) answers (including distractors), and (iii)
their source documents, from the educational domain. Each question is phrased
in two forms, normal and cloze. Correct answers are linked to source documents
with sentence-level annotations. Thus, our versatile dataset can be used for
both question and distractor generation, as well as to explore new challenges
such as question format conversion. Furthermore, 903 questions are accompanied
by their cognitive complexity level as per Bloom's taxonomy. All questions have
been generated by educational experts rather than crowd workers to ensure they
maintain educational and learning standards. Our analysis and
experiments suggest distinguishable differences between our dataset and
commonly used ones for question generation for educational purposes. We believe
this new dataset can serve as a valuable resource for research and evaluation
in the educational domain. The dataset and baselines will be released to
support further research in question generation.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 11:28:34 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Hadifar",
"Amir",
""
],
[
"Bitew",
"Semere Kiros",
""
],
[
"Deleu",
"Johannes",
""
],
[
"Develder",
"Chris",
""
],
[
"Demeester",
"Thomas",
""
]
] |
new_dataset
| 0.999705 |
2210.06150
|
Samia Touileb
|
Petter M{\ae}hlum, Andre K{\aa}sen, Samia Touileb, Jeremy Barnes
|
Annotating Norwegian Language Varieties on Twitter for Part-of-Speech
|
Accepted at the Ninth Workshop on NLP for Similar Languages,
Varieties and Dialects (Vardial2022). Collocated with COLING2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Norwegian Twitter data poses an interesting challenge for Natural Language
Processing (NLP) tasks. These texts are difficult for models trained on
standardized text in one of the two Norwegian written forms (Bokm{\aa}l and
Nynorsk), as they contain both the typical variation of social media text, as
well as a large amount of dialectal variety. In this paper we present a novel
Norwegian Twitter dataset annotated with POS-tags. We show that models trained
on Universal Dependency (UD) data perform worse when evaluated against this
dataset, and that models trained on Bokm{\aa}l generally perform better than
those trained on Nynorsk. We also see that performance on dialectal tweets is
comparable to the written standards for some models. Finally we perform a
detailed analysis of the errors that models commonly make on this data.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 12:53:30 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Mæhlum",
"Petter",
""
],
[
"Kåsen",
"Andre",
""
],
[
"Touileb",
"Samia",
""
],
[
"Barnes",
"Jeremy",
""
]
] |
new_dataset
| 0.994067 |
2210.06160
|
Yu Wei Tan
|
Yu Wei Tan, Nicholas Chua, Clarence Koh and Anand Bhojan
|
RTSDF: Real-time Signed Distance Fields for Soft Shadow Approximation in
Games
| null | null |
10.5220/0010996200003124
| null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Signed distance fields (SDFs) are a form of surface representation widely
used in computer graphics, having applications in rendering, collision
detection and modelling. In interactive media such as games, high-resolution
SDFs are commonly produced offline and subsequently loaded into the
application, representing rigid meshes only. This work develops a novel
technique that combines jump flooding and ray tracing to generate approximate
SDFs in real-time. Our approach can produce relatively accurate scene
representation for rendering soft shadows while maintaining interactive frame
rates. We extend our previous work with details on the design and
implementation as well as visual quality and performance evaluation of the
technique.
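  For intuition, a toy CPU version of jump flooding on a 2D grid is sketched
below; the paper's technique is a GPU implementation combined with ray tracing,
so this shows only the flooding idea (and wraps at borders, acceptable for a
sketch):

import numpy as np

def jump_flood_distance(seeds):
    """Unsigned distance field from a boolean seed grid (True = surface)."""
    h, w = seeds.shape
    ys, xs = np.mgrid[0:h, 0:w]
    nearest = np.stack([ys, xs], axis=-1).astype(float)
    nearest[~seeds] = np.inf                     # non-seeds know no seed yet
    step = max(h, w) // 2
    while step >= 1:
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cand = np.roll(nearest, (dy, dx), axis=(0, 1))
                d_old = np.hypot(nearest[..., 0] - ys, nearest[..., 1] - xs)
                d_new = np.hypot(cand[..., 0] - ys, cand[..., 1] - xs)
                closer = d_new < d_old           # keep the nearer seed coordinate
                nearest[closer] = cand[closer]
        step //= 2
    return np.hypot(nearest[..., 0] - ys, nearest[..., 1] - xs)

seeds = np.zeros((64, 64), dtype=bool)
seeds[32, 32] = True
print(jump_flood_distance(seeds)[32, 40])        # 8.0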
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 11:47:12 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Tan",
"Yu Wei",
""
],
[
"Chua",
"Nicholas",
""
],
[
"Koh",
"Clarence",
""
],
[
"Bhojan",
"Anand",
""
]
] |
new_dataset
| 0.998285 |
2210.06177
|
Junjie Li
|
Junjie Li, Meng Ge, Zexu Pan, Longbiao Wang, Jianwu Dang
|
VCSE: Time-Domain Visual-Contextual Speaker Extraction Network
| null | null | null | null |
cs.CV cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speaker extraction seeks to extract the target speech in a multi-talker
scenario given an auxiliary reference. Such a reference can be auditory, i.e., a
pre-recorded speech, visual, i.e., lip movements, or contextual, i.e., phonetic
sequence. References in different modalities provide distinct and complementary
information that could be fused to form top-down attention on the target
speaker. Previous studies have introduced visual and contextual modalities in a
single model. In this paper, we propose a two-stage time-domain
visual-contextual speaker extraction network named VCSE, which incorporates
visual and self-enrolled contextual cues stage by stage to take full advantage
of every modality. In the first stage, we pre-extract a target speech with
visual cues and estimate the underlying phonetic sequence. In the second stage,
we refine the pre-extracted target speech with the self-enrolled contextual
cues. Experimental results on the real-world Lip Reading Sentences 3 (LRS3)
database demonstrate that our proposed VCSE network consistently outperforms
other state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 12:29:38 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Li",
"Junjie",
""
],
[
"Ge",
"Meng",
""
],
[
"Pan",
"Zexu",
""
],
[
"Wang",
"Longbiao",
""
],
[
"Dang",
"Jianwu",
""
]
] |
new_dataset
| 0.998952 |
2210.06249
|
Tianyi Yang
|
Tianyi Yang, Baitong Li, Jiacheng Shen, Yuxin Su, Yongqiang Yang,
Michael R. Lyu
|
Managing Service Dependency for Cloud Reliability: The Industrial
Practice
|
In Proceedings of the 33rd IEEE International Symposium on Software
Reliability Engineering Workshops (ISSREW'22)
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Interactions between cloud services result in service dependencies.
Evaluating and managing the cascading impacts caused by service dependencies is
critical to the reliability of cloud systems. This paper summarizes the
dependency types in cloud systems and demonstrates the design of the Dependency
Management System (DMS), a platform for managing the service dependencies in
the production cloud system. DMS features full-lifecycle support for service
reliability (i.e., initial service deployment, service upgrade, proactive
architectural optimization, and reactive failure mitigation) and refined
characterization of the intensity of dependencies.
|
[
{
"version": "v1",
"created": "Sun, 28 Aug 2022 08:15:26 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Yang",
"Tianyi",
""
],
[
"Li",
"Baitong",
""
],
[
"Shen",
"Jiacheng",
""
],
[
"Su",
"Yuxin",
""
],
[
"Yang",
"Yongqiang",
""
],
[
"Lyu",
"Michael R.",
""
]
] |
new_dataset
| 0.989421 |
2210.06307
|
Yu Zhao
|
Yu Zhao and Brent Harrison and Tingting Yu
|
DinoDroid: Testing Android Apps Using Deep Q-Networks
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The large demand of mobile devices creates significant concerns about the
quality of mobile applications (apps). Developers need to guarantee the quality
of mobile apps before they are released to the market. There have been many
approaches using different strategies to test the GUI of mobile apps. However,
they still need improvement due to their limited effectiveness. In this paper,
we propose DinoDroid, an approach based on deep Q-networks to automate testing
of Android apps. DinoDroid learns a behavior model from a set of existing apps
and the learned model can be used to explore and generate tests for new apps.
DinoDroid is able to capture the fine-grained details of GUI events (e.g., the
content of GUI widgets) and use them as features that are fed into a deep
neural network, which acts as the agent to guide app exploration. DinoDroid
automatically adapts the learned model during the exploration without the need
of any modeling strategies or pre-defined rules. We conduct experiments on 64
open-source Android apps. The results showed that DinoDroid outperforms
existing Android testing tools in terms of code coverage and bug detection.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 15:20:24 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Zhao",
"Yu",
""
],
[
"Harrison",
"Brent",
""
],
[
"Yu",
"Tingting",
""
]
] |
new_dataset
| 0.995315 |
2210.06350
|
R\'obert Csord\'as
|
R\'obert Csord\'as, Kazuki Irie, J\"urgen Schmidhuber
|
CTL++: Evaluating Generalization on Never-Seen Compositional Patterns of
Known Functions, and Compatibility of Neural Representations
|
Accepted to EMNLP 2022
| null | null | null |
cs.LG cs.AI cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Well-designed diagnostic tasks have played a key role in studying the failure
of neural nets (NNs) to generalize systematically. Famous examples include SCAN
and Compositional Table Lookup (CTL). Here we introduce CTL++, a new diagnostic
dataset based on compositions of unary symbolic functions. While the original
CTL is used to test length generalization or productivity, CTL++ is designed to
test systematicity of NNs, that is, their capability to generalize to unseen
compositions of known functions. CTL++ splits functions into groups and tests
performance on group elements composed in a way not seen during training. We
show that recent CTL-solving Transformer variants fail on CTL++. The simplicity
of the task design allows for fine-grained control of task difficulty, as well
as many insightful analyses. For example, we measure how much overlap between
groups is needed by tested NNs for learning to compose. We also visualize how
learned symbol representations in outputs of functions from different groups
are compatible in case of success but not in case of failure. These results
provide insights into failure cases reported on more complex compositions in
the natural language domain. Our code is public.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 16:01:57 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Csordás",
"Róbert",
""
],
[
"Irie",
"Kazuki",
""
],
[
"Schmidhuber",
"Jürgen",
""
]
] |
new_dataset
| 0.998729 |
2210.06353
|
George Chernishev
|
Platon Fedorov, Alexey Mironov, George Chernishev
|
Russian Web Tables: A Public Corpus of Web Tables for Russian Language
Based on Wikipedia
| null | null | null | null |
cs.CL cs.DL cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Corpora that contain tabular data such as WebTables are a vital resource for
the academic community. Essentially, they are the backbone of any modern
research in information management. They are used for various tasks of data
extraction, knowledge base construction, question answering, column semantic
type detection, and many others. Such corpora are useful not only as a source of
data, but also as a base for building test datasets. So far, there were no such
corpora for the Russian language and this seriously hindered research in the
aforementioned areas.
In this paper, we present the first corpus of Web tables created specifically
out of Russian language material. It was built via a special toolkit we have
developed to crawl the Russian Wikipedia. Both the corpus and the toolkit are
open-source and publicly available. Finally, we present a short study that
describes Russian Wikipedia tables and their statistics.
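  As a rough illustration of the extraction step (not the released toolkit),
tables can be pulled from a Wikipedia page with pandas; the URL below is only
an example page, and an HTML parser such as lxml must be installed:

import pandas as pd

url = "https://ru.wikipedia.org/wiki/%D0%9C%D0%BE%D1%81%D0%BA%D0%B2%D0%B0"
tables = pd.read_html(url)          # one DataFrame per <table> on the page
print(len(tables), "tables; first one has shape", tables[0].shape)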
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 16:15:48 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Fedorov",
"Platon",
""
],
[
"Mironov",
"Alexey",
""
],
[
"Chernishev",
"George",
""
]
] |
new_dataset
| 0.998421 |
2210.06397
|
Jason Parker
|
Jason Parker and Dan Barker
|
Star Anagram Detection and Classification
|
14 pages, 14 figures in main article. Appendix contains several
thousand figures over 250+ pages. In preparation for submission to
Computational Geometry
| null | null | null |
cs.OH
|
http://creativecommons.org/licenses/by/4.0/
|
A star anagram is a rearrangement of the letters of one word to produce
another word where no letter retains its original neighbors. These maximally
shuffled anagrams are rare, comprising only about 5.7% of anagrams in English.
They can also be depicted as unicursal polygons with varying forms, including
the eponymous stars. We develop automated methods for detecting stars among
other anagrams and for classifying them based on their polygon's degree of both
rotational and reflective symmetry. Next, we explore several properties of star
anagrams including proofs for two results about the edge lengths of perfect,
i.e., maximally symmetric, stars leveraging perhaps surprising connections to
modular arithmetic and the celebrated Chinese Remainder Theorem. Finally, we
conduct an exhaustive search of English for star anagrams and provide numerical
results about their clustering into common shapes along with examples of
geometrically noteworthy stars.
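  The detection criterion can be sketched directly from the definition,
treating letters as types (which only approximates words with repeated
letters):

def neighbour_pairs(word):
    # Unordered adjacent-letter pairs, e.g. "dog" -> {{'d','o'}, {'o','g'}}.
    return {frozenset(p) for p in zip(word, word[1:])}

def is_star_anagram(a, b):
    return (sorted(a) == sorted(b) and a != b
            and not (neighbour_pairs(a) & neighbour_pairs(b)))

print(is_star_anagram("dealer", "leader"))  # False: 'ea' and 'er' survive
print(is_star_anagram("abcd", "cadb"))      # True (toy strings, not words)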
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 16:54:35 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Parker",
"Jason",
""
],
[
"Barker",
"Dan",
""
]
] |
new_dataset
| 0.999627 |
2210.06407
|
Corey Lynch
|
Corey Lynch, Ayzaan Wahid, Jonathan Tompson, Tianli Ding, James
Betker, Robert Baruch, Travis Armstrong, Pete Florence
|
Interactive Language: Talking to Robots in Real Time
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a framework for building interactive, real-time, natural
language-instructable robots in the real world, and we open source related
assets (dataset, environment, benchmark, and policies). Trained with behavioral
cloning on a dataset of hundreds of thousands of language-annotated
trajectories, a produced policy can proficiently execute an order of magnitude
more commands than previous works: specifically, we estimate a 93.5% success
rate on a set of 87,000 unique natural language strings specifying raw
end-to-end visuo-linguo-motor skills in the real world. We find that the same
policy is capable of being guided by a human via real-time language to address
a wide range of precise long-horizon rearrangement goals, e.g. "make a smiley
face out of blocks". The dataset we release comprises nearly 600,000
language-labeled trajectories, an order of magnitude larger than prior
available datasets. We hope the demonstrated results and associated assets
enable further advancement of helpful, capable, natural-language-interactable
robots. See videos at https://interactive-language.github.io.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 17:03:41 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Lynch",
"Corey",
""
],
[
"Wahid",
"Ayzaan",
""
],
[
"Tompson",
"Jonathan",
""
],
[
"Ding",
"Tianli",
""
],
[
"Betker",
"James",
""
],
[
"Baruch",
"Robert",
""
],
[
"Armstrong",
"Travis",
""
],
[
"Florence",
"Pete",
""
]
] |
new_dataset
| 0.953335 |
2210.06431
|
Yan Sym
|
Yan V. Sym, Jo\~ao Gabriel M. Campos, Fabio G. Cozman
|
BLAB Reporter: Automated journalism covering the Blue Amazon
|
Accepted at the 15th International Natural Language Generation
Conference (INLG 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This demo paper introduces the BLAB Reporter, a robot-journalist covering the
Brazilian Blue Amazon. The Reporter is based on a pipeline architecture for
Natural Language Generation; it offers daily reports, news summaries and
curious facts in Brazilian Portuguese. By collecting, storing and analysing
structured data from publicly available sources, the robot-journalist uses
domain knowledge to generate and publish texts on Twitter. Code and corpus are
publicly available.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 21:51:50 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Sym",
"Yan V.",
""
],
[
"Campos",
"João Gabriel M.",
""
],
[
"Cozman",
"Fabio G.",
""
]
] |
new_dataset
| 0.999471 |
2210.06463
|
Sridhar Pandian Arunachalam
|
Sridhar Pandian Arunachalam, Irmak G\"uzey, Soumith Chintala, Lerrel
Pinto
|
Holo-Dex: Teaching Dexterity with Immersive Mixed Reality
|
Data, code and videos are available at https://holo-dex.github.io
| null | null | null |
cs.RO cs.AI cs.CV cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A fundamental challenge in teaching robots is to provide an effective
interface for human teachers to demonstrate useful skills to a robot. This
challenge is exacerbated in dexterous manipulation, where teaching
high-dimensional, contact-rich behaviors often require esoteric teleoperation
tools. In this work, we present Holo-Dex, a framework for dexterous
manipulation that places a teacher in an immersive mixed reality through
commodity VR headsets. The high-fidelity hand pose estimator onboard the
headset is used to teleoperate the robot and collect demonstrations for a
variety of general-purpose dexterous tasks. Given these demonstrations, we use
powerful feature learning combined with non-parametric imitation to train
dexterous skills. Our experiments on six common dexterous tasks, including
in-hand rotation, spinning, and bottle opening, indicate that Holo-Dex can both
collect high-quality demonstration data and train skills in a matter of hours.
Finally, we find that our trained skills can exhibit generalization on objects
not seen in training. Videos of Holo-Dex are available at
https://holo-dex.github.io.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 17:59:02 GMT"
}
] | 2022-10-13T00:00:00 |
[
[
"Arunachalam",
"Sridhar Pandian",
""
],
[
"Güzey",
"Irmak",
""
],
[
"Chintala",
"Soumith",
""
],
[
"Pinto",
"Lerrel",
""
]
] |
new_dataset
| 0.998539 |
2109.14743
|
Mahnoosh Sadeghi
|
Mahnoosh Sadeghi, Anthony D McDonald, Farzan Sasangohar
|
Posttraumatic Stress Disorder Hyperarousal Event Detection Using
Smartwatch Physiological and Activity Data
|
23 pages, 3 figures
| null |
10.1371/journal.pone.0267749
| null |
cs.LG cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Posttraumatic Stress Disorder (PTSD) is a psychiatric condition affecting
nearly a quarter of the United States war veterans who return from war zones.
Treatment for PTSD typically consists of a combination of in-session therapy
and medication. However, patients often experience their most severe PTSD
symptoms outside of therapy sessions. Mobile health applications may address
this gap, but their effectiveness is limited by the current gap in continuous
monitoring and detection capabilities enabling timely intervention. The goal of
this article is to develop a novel method to detect hyperarousal events using
physiological and activity-based machine learning algorithms. Physiological
data including heart rate and body acceleration as well as self-reported
hyperarousal events were collected using a tool developed for commercial
off-the-shelf wearable devices from 99 United States veterans diagnosed with
PTSD over several days. The data were used to develop four machine learning
algorithms: Random Forest, Support Vector Machine, Logistic Regression and
XGBoost. The XGBoost model had the best performance in detecting onset of PTSD
symptoms with over 83% accuracy and an AUC of 0.70. Post-hoc SHapley Additive
exPlanations (SHAP) additive explanation analysis showed that algorithm
predictions were correlated with average heart rate, minimum heart rate and
average body acceleration. Findings show promise in detecting onset of PTSD
symptoms which could be the basis for developing remote and continuous
monitoring systems for PTSD. Such systems may address a vital gap in
just-in-time interventions for PTSD self-management outside of scheduled
clinical appointments.
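  A hedged sketch of the modelling step is given below; the feature names and
synthetic data are placeholders standing in for the study's windowed heart-rate
and acceleration features, not its actual schema:

import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # stand-ins: mean HR, min HR, mean acceleration
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))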
|
[
{
"version": "v1",
"created": "Wed, 29 Sep 2021 22:24:10 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Oct 2021 00:55:40 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Sadeghi",
"Mahnoosh",
""
],
[
"McDonald",
"Anthony D",
""
],
[
"Sasangohar",
"Farzan",
""
]
] |
new_dataset
| 0.999065 |
2110.09886
|
Gholamreza Jafari
|
Saeedeh Mohammadi, Parham Moradi, S. Mahdi Firouzabadi, G. Reza Jafari
|
The Footprint of Campaign Strategies in Farsi Twitter: A case for 2021
Iranian presidential election
|
11 pages, 6 figures
| null |
10.1371/journal.pone.0270822
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise of social media accompanied by the Covid-19 Pandemic has instigated
a shift in paradigm in the presidential campaigns in Iran from the real world
to social media. Unlike previous presidential elections, there was a decrease
in physical events and advertisements for the candidates; in turn, the online
presence of presidential candidates increased significantly. Farsi Twitter
played a specific role in this matter, as it became the platform for creating
political content. In this study, we found traces of organizational activities
in Farsi Twitter. Our investigation reveals that the discussion network of the
2021 election is heterogeneous and highly polarized. However, unlike in other
elections, the candidates' supporters are very close to one another, while
"anti-voters" who endorse boycotting the election stand at the opposite end of
the discussion. Furthermore, a high presence of bot activity is observed among
the most influential users in all of the involved communities.
|
[
{
"version": "v1",
"created": "Mon, 4 Oct 2021 12:56:27 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Mohammadi",
"Saeedeh",
""
],
[
"Moradi",
"Parham",
""
],
[
"Firouzabadi",
"S. Mahdi",
""
],
[
"Jafari",
"G. Reza",
""
]
] |
new_dataset
| 0.984791 |
2111.10073
|
Shivam Garg
|
Shivam Garg, Nandini Venkatraman, Elizabeth Serena Bentley, Sunil
Kumar
|
An Asynchronous Multi-Beam MAC Protocol for Multi-Hop Wireless Networks
|
Medium access control (MAC), directional communication, wireless
network, multi-beam antenna
| null |
10.1109/ICCCN54977.2022.9868910
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
A node equipped with a multi-beam antenna can achieve a throughput of up to m
times as compared to a single-beam antenna, by simultaneously communicating on
its m non-interfering beams. However, the existing multi-beam medium access
control (MAC) schemes can achieve concurrent data communication only when the
transmitter nodes are locally synchronized. Asynchronous packet arrival at a
multi-beam receiver node would increase the node deafness and MAC layer capture
problems, and thereby limit the data throughput. This paper presents an
asynchronous multi-beam MAC protocol for multi-hop wireless networks, which
makes the following enhancements to existing multi-beam MAC schemes: (i) a
windowing mechanism to achieve concurrent communication when packet arrival
is asynchronous, (ii) a smart packet processing mechanism which reduces the
node deafness, hidden terminal, and MAC-layer capture problems, and (iii) a
channel access mechanism which decreases resource wastage and node starvation.
Our proposed protocol also works in heterogeneous networks that deploy the
nodes equipped with single-beam as well as multi-beam antennas. Simulation
results demonstrate a superior performance of our proposed protocol.
|
[
{
"version": "v1",
"created": "Fri, 19 Nov 2021 07:23:01 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Garg",
"Shivam",
""
],
[
"Venkatraman",
"Nandini",
""
],
[
"Bentley",
"Elizabeth Serena",
""
],
[
"Kumar",
"Sunil",
""
]
] |
new_dataset
| 0.999444 |
2202.01003
|
Antonio Sgorbissa
|
Luca Morando, Carmine Tommaso Recchiuto, Jacopo Call\`a, Paolo Scuteri
and Antonio Sgorbissa
|
Thermal and Visual Tracking of Photovoltaic Plants for Autonomous UAV
inspection
|
17 pages, 34 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since photovoltaic (PV) plants require periodic maintenance, using Unmanned
Aerial Vehicles (UAV) for inspections can help reduce costs. The thermal and
visual inspection of PV installations is currently based on UAV photogrammetry.
A UAV equipped with a Global Positioning System (GPS) receiver is assigned a
flight zone: the UAV will cover it back and forth to collect images to be later
composed in an orthomosaic. The UAV typically flies at a height above the
ground that is appropriate to ensure that images overlap even in the presence
of GPS positioning errors. However, this approach has two limitations. Firstly,
it requires covering the whole flight zone, including "empty" areas between PV
module rows. Secondly, flying high above the ground limits the resolution of
the images to be later inspected. The article proposes a novel approach using
an autonomous UAV equipped with an RGB and a thermal camera for PV module
tracking. The UAV moves along PV module rows at a lower height than usual and
inspects them back and forth in a boustrophedon way by ignoring "empty" areas
with no PV modules. Experimental tests performed in simulation and an actual PV
plant are reported.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 12:41:28 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2022 14:48:59 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Oct 2022 12:56:29 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Morando",
"Luca",
""
],
[
"Recchiuto",
"Carmine Tommaso",
""
],
[
"Callà",
"Jacopo",
""
],
[
"Scuteri",
"Paolo",
""
],
[
"Sgorbissa",
"Antonio",
""
]
] |
new_dataset
| 0.999008 |
2202.03202
|
Caroline S. Wagner
|
Caroline S. Wagner, Xiaojing Cai, Yi Zhang, Caroline V. Fry
|
One-Year In: COVID-19 Research at the International Level in CORD-19
Data
|
39 pages, 8 figures, Appendix
| null |
10.1371/journal.pone.0261624
| null |
cs.DL cs.SI physics.soc-ph stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
The appearance of a novel coronavirus in late 2019 radically changed the
community of researchers working on coronaviruses since the 2002 SARS epidemic.
In 2020, coronavirus-related publications grew by 20 times over the previous
two years, with 130,000 more researchers publishing on related topics. The
United States, the United Kingdom and China led dozens of nations working on
coronavirus prior to the pandemic, but leadership consolidated among these
three nations in 2020, which collectively accounted for 50% of all papers,
garnering well more than 60% of citations. China took an early lead on COVID-19
research, but dropped rapidly in production and international participation
through the year. Europe showed an opposite pattern, beginning slowly in
publications but growing in contributions during the year. The share of
internationally collaborative publications dropped from pre-pandemic rates;
single-authored publications grew. For all nations, including China, the number
of publications about COVID tracks closely with the outbreak of COVID-19 cases.
Lower-income nations participated very little in COVID-19 research in 2020.
Topic maps of internationally collaborative work show the rise of patient care
and public health clusters, two topics that were largely absent from
coronavirus research in the two years prior to 2020. Findings are consistent
with global science as a self-organizing system operating on a reputation-based
dynamic.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 21:13:34 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Wagner",
"Caroline S.",
""
],
[
"Cai",
"Xiaojing",
""
],
[
"Zhang",
"Yi",
""
],
[
"Fry",
"Caroline V.",
""
]
] |
new_dataset
| 0.973092 |
2202.08413
|
Luis A. Pineda
|
Rafael Morales and No\'e Hern\'andez and Ricardo Cruz and Victor D.
Cruz and Luis A. Pineda
|
Entropic Associative Memory for Manuscript Symbols
|
24 pages, 13 figures
| null |
10.1371/journal.pone.0272386
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Manuscript symbols can be stored, recognized and retrieved from an entropic
digital memory that is associative and distributed, yet declarative; memory
retrieval is a constructive operation, memory cues to objects not contained in
the memory are rejected directly without search, and memory operations can be
performed through parallel computations. Manuscript symbols, both letters and
numerals, are represented in Associative Memory Registers that have an
associated entropy. The memory recognition operation obeys an entropy trade-off
between precision and recall, and the entropy level impacts the quality of
the objects recovered through the memory retrieval operation. The present
proposal is contrasted in several dimensions with neural networks models of
associative memory. We discuss the operational characteristics of the entropic
associative memory for retrieving objects with both complete and incomplete
information, such as severe occlusions. The experiments reported in this paper
add evidence on the potential of this framework for developing practical
applications and computational models of natural memory.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 02:29:33 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Morales",
"Rafael",
""
],
[
"Hernández",
"Noé",
""
],
[
"Cruz",
"Ricardo",
""
],
[
"Cruz",
"Victor D.",
""
],
[
"Pineda",
"Luis A.",
""
]
] |
new_dataset
| 0.998269 |
2203.00431
|
Zhuofa Chen
|
Zhuofa Chen, Yousif Khaireddin, Anna K. Swan
|
Identifying charge density and dielectric environment of graphene using
Raman spectroscopy and deep learning
|
5 figures, 22 pages
| null |
10.1039/D2AN00129B
| null |
cs.LG physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The impact of the environment on graphene's properties such as strain, charge
density, and dielectric environment can be evaluated by Raman spectroscopy.
These environmental interactions are not trivial to determine, since they
affect the spectra in overlapping ways. Data preprocessing such as background
subtraction and peak fitting is typically used. Moreover, collected
spectroscopic data vary due to different experimental setups and environments.
Such variations, artifacts, and environmental differences pose a challenge in
accurate spectral analysis. In this work, we developed a deep learning model to
overcome the effects of such variations and classify graphene Raman spectra
according to different charge densities and dielectric environments. We
consider two approaches: deep learning models and machine learning algorithms
to classify spectra with slightly different charge density or dielectric
environment. These two approaches show similar success rates for high
Signal-to-Noise data. However, deep learning models are less sensitive to
noise. To improve the accuracy and generalization of all models, we use data
augmentation through additive noise and peak shifting. We demonstrated the
spectra classification with 99% accuracy using a convolutional neural net (CNN)
model. The CNN model is able to classify Raman spectra of graphene with
different charge doping levels and even subtle variation in the spectra between
graphene on SiO$_2$ and graphene on silanized SiO$_2$. Our approach has the
potential for fast and reliable estimation of graphene doping levels and
dielectric environments. The proposed model paves the way for achieving
efficient analytical tools to evaluate the properties of graphene.
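  The two augmentations named above can be sketched in a few lines; the peak
position, shift range, and noise scale below are illustrative, not the paper's
settings:

import numpy as np

def augment(spectrum, rng, max_shift=5, noise_std=0.01):
    shifted = np.roll(spectrum, rng.integers(-max_shift, max_shift + 1))  # peak shift
    return shifted + rng.normal(scale=noise_std, size=spectrum.shape)     # additive noise

rng = np.random.default_rng(0)
wavenumbers = np.arange(1000.0)
spectrum = np.exp(-((wavenumbers - 520.0) ** 2) / 50.0)  # one toy Raman peak
augmented = [augment(spectrum, rng) for _ in range(8)]   # 8 training variants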
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 00:25:01 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Chen",
"Zhuofa",
""
],
[
"Khaireddin",
"Yousif",
""
],
[
"Swan",
"Anna K.",
""
]
] |
new_dataset
| 0.992634 |
2204.09672
|
Alberto Alvarez
|
Alberto Alvarez, Jose Font
|
TropeTwist: Trope-based Narrative Structure Generation
|
8 pages, Accepted and to appear in Proceedings of the 13th Workshop
on Procedural Content Generation, at the Foundations of Digital Games (FDG),
2022
| null | null | null |
cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Games are complex, multi-faceted systems that share common elements and
underlying narratives, such as the conflict between a hero and a big bad enemy
or pursuing a goal that requires overcoming challenges. However, identifying
and describing these elements together is non-trivial as they might differ in
certain properties and how players might encounter the narratives. Likewise,
generating narratives also pose difficulties when encoding, interpreting, and
evaluating them. To address this, we present TropeTwist, a trope-based system
that can describe narrative structures in games in a more abstract and generic
level, allowing the definition of games' narrative structures and their
generation using interconnected tropes, called narrative graphs. To demonstrate
the system, we represent the narrative structure of three different games. We
use MAP-Elites to generate and evaluate novel quality-diverse narrative graphs
encoded as graph grammars, using these three hand-made narrative structures as
targets. Both hand-made and generated narrative graphs are evaluated based on
their coherence and interestingness, which are improved through evolution.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 16:02:17 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 16:07:25 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Alvarez",
"Alberto",
""
],
[
"Font",
"Jose",
""
]
] |
new_dataset
| 0.999761 |
2204.12581
|
Marc Rigter
|
Marc Rigter, Bruno Lacerda, Nick Hawes
|
RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning
|
NeurIPS 2022
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Offline reinforcement learning (RL) aims to find performant policies from
logged data without further environment interaction. Model-based algorithms,
which learn a model of the environment from the dataset and perform
conservative policy optimisation within that model, have emerged as a promising
approach to this problem. In this work, we present Robust Adversarial
Model-Based Offline RL (RAMBO), a novel approach to model-based offline RL. We
formulate the problem as a two-player zero-sum game against an adversarial
environment model. The model is trained to minimise the value function while
still accurately predicting the transitions in the dataset, forcing the policy
to act conservatively in areas not covered by the dataset. To approximately
solve the two-player game, we alternate between optimising the policy and
adversarially optimising the model. The problem formulation that we address is
theoretically grounded, resulting in a probably approximately correct (PAC)
performance guarantee and a pessimistic value function which lower bounds the
value function in the true environment. We evaluate our approach on widely
studied offline RL benchmarks, and demonstrate that it outperforms existing
state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 20:42:14 GMT"
},
{
"version": "v2",
"created": "Wed, 25 May 2022 14:41:42 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Oct 2022 06:19:27 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Rigter",
"Marc",
""
],
[
"Lacerda",
"Bruno",
""
],
[
"Hawes",
"Nick",
""
]
] |
new_dataset
| 0.978617 |
2205.11966
|
Avishai Gretz
|
Shai Gretz, Assaf Toledo, Roni Friedman, Dan Lahav, Rose Weeks, Naor
Bar-Zeev, Jo\~ao Sedoc, Pooja Sangha, Yoav Katz, Noam Slonim
|
Benchmark Data and Evaluation Framework for Intent Discovery Around
COVID-19 Vaccine Hesitancy
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The COVID-19 pandemic has made a huge global impact and cost millions of
lives. As COVID-19 vaccines were rolled out, they were quickly met with
widespread hesitancy. To address the concerns of hesitant people, we launched
VIRA, a public dialogue system aimed at addressing questions and concerns
surrounding the COVID-19 vaccines. Here, we release VIRADialogs, a dataset of
over 8k dialogues conducted by actual users with VIRA, providing a unique
real-world conversational dataset. In light of rapid changes in users' intents,
due to updates in guidelines or in response to new information, we highlight
the important task of intent discovery in this use-case. We introduce a novel
automatic evaluation framework for intent discovery, leveraging the existing
intent classifier of VIRA. We use this framework to report baseline intent
discovery results over VIRADialogs, that highlight the difficulty of this task.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 10:58:11 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 06:56:07 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Gretz",
"Shai",
""
],
[
"Toledo",
"Assaf",
""
],
[
"Friedman",
"Roni",
""
],
[
"Lahav",
"Dan",
""
],
[
"Weeks",
"Rose",
""
],
[
"Bar-Zeev",
"Naor",
""
],
[
"Sedoc",
"João",
""
],
[
"Sangha",
"Pooja",
""
],
[
"Katz",
"Yoav",
""
],
[
"Slonim",
"Noam",
""
]
] |
new_dataset
| 0.999539 |
2207.12576
|
Yonatan Bitton
|
Yonatan Bitton, Nitzan Bitton Guetta, Ron Yosef, Yuval Elovici, Mohit
Bansal, Gabriel Stanovsky, Roy Schwartz
|
WinoGAViL: Gamified Association Benchmark to Challenge
Vision-and-Language Models
|
Accepted to NeurIPS 2022, Datasets and Benchmarks. Website:
https://winogavil.github.io/
| null | null | null |
cs.CL cs.AI cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While vision-and-language models perform well on tasks such as visual
question answering, they struggle when it comes to basic human commonsense
reasoning skills. In this work, we introduce WinoGAViL: an online game of
vision-and-language associations (e.g., between werewolves and a full moon),
used as a dynamic evaluation benchmark. Inspired by the popular card game
Codenames, a spymaster gives a textual cue related to several visual
candidates, and another player tries to identify them. Human players are
rewarded for creating associations that are challenging for a rival AI model
but still solvable by other human players. We use the game to collect 3.5K
instances, finding that they are intuitive for humans (>90% Jaccard index) but
challenging for state-of-the-art AI models, where the best model (ViLT)
achieves a score of 52%, succeeding mostly where the cue is visually salient.
Our analysis as well as the feedback we collect from players indicate that the
collected associations require diverse reasoning skills, including general
knowledge, common sense, abstraction, and more. We release the dataset, the
code and the interactive game, allowing future data collection that can be used
to develop models with better association abilities.
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 23:57:44 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 13:59:53 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Bitton",
"Yonatan",
""
],
[
"Guetta",
"Nitzan Bitton",
""
],
[
"Yosef",
"Ron",
""
],
[
"Elovici",
"Yuval",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Stanovsky",
"Gabriel",
""
],
[
"Schwartz",
"Roy",
""
]
] |
new_dataset
| 0.998609 |
2208.04682
|
Philip Bourne
|
Philip E. Bourne, Vivien Bonazzi, Amy Brand, Bonnie Carroll, Ian
Foster, Ramanathan V. Guha, Robert Hanisch, Sallie Ann Keller, Mary Lee
Kennedy, Christine Kirkpatrick, Barend Mons, Sarah M. Nusser, Michael
Stebbins, George Strawn, and Alex Szalay
|
Playing catch-up in building an open research commons
|
3 pages on the AAS template
| null |
10.1126/science.abo5947
| null |
cs.DL cs.GL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
On August 2, 2021 a group of concerned scientists and US funding agency and
federal government officials met for an informal discussion to explore the
value and need for a well-coordinated US Open Research Commons (ORC); an
interoperable collection of data and compute resources within both the public
and private sectors which are easy to use and accessible to all.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2022 17:34:00 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Bourne",
"Philip E.",
""
],
[
"Bonazzi",
"Vivien",
""
],
[
"Brand",
"Amy",
""
],
[
"Carroll",
"Bonnie",
""
],
[
"Foster",
"Ian",
""
],
[
"Guha",
"Ramanathan V.",
""
],
[
"Hanisch",
"Robert",
""
],
[
"Keller",
"Sallie Ann",
""
],
[
"Kennedy",
"Mary Lee",
""
],
[
"Kirkpatrick",
"Christine",
""
],
[
"Mons",
"Barend",
""
],
[
"Nusser",
"Sarah M.",
""
],
[
"Stebbins",
"Michael",
""
],
[
"Strawn",
"George",
""
],
[
"Szalay",
"Alex",
""
]
] |
new_dataset
| 0.973687 |
2209.02529
|
Mengdi Sun
|
Mengdi Sun, Ligan Cai, Weiwei Cui, Yanqiu Wu, Yang Shi, Nan Cao
|
Erato: Cooperative Data Story Editing via Fact Interpolation
| null | null |
10.1109/TVCG.2022.3209428
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As an effective form of narrative visualization, visual data stories are
widely used in data-driven storytelling to communicate complex insights and
support data understanding. Although important, they are difficult to create,
as a variety of interdisciplinary skills, such as data analysis and design, are
required. In this work, we introduce Erato, a human-machine cooperative data
story editing system, which allows users to generate insightful and fluent data
stories together with the computer. Specifically, Erato only requires a number
of keyframes provided by the user to briefly describe the topic and structure
of a data story. Meanwhile, our system leverages a novel interpolation
algorithm to help users insert intermediate frames between the keyframes to
smooth the transition. We evaluated the effectiveness and usefulness of the
Erato system via a series of evaluations including a Turing test, a controlled
user study, a performance validation, and interviews with three expert users.
The evaluation results showed that the proposed interpolation technique was
able to generate coherent story content and help users create data stories more
efficiently.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 14:32:27 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Sun",
"Mengdi",
""
],
[
"Cai",
"Ligan",
""
],
[
"Cui",
"Weiwei",
""
],
[
"Wu",
"Yanqiu",
""
],
[
"Shi",
"Yang",
""
],
[
"Cao",
"Nan",
""
]
] |
new_dataset
| 0.96871 |
2209.13017
|
Karish Grover
|
Karish Grover, S.M. Phaneendra Angara, Md. Shad Akhtar, Tanmoy
Chakraborty
|
Public Wisdom Matters! Discourse-Aware Hyperbolic Fourier Co-Attention
for Social-Text Classification
|
NeurIPS 2022
| null | null | null |
cs.CL cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Social media has become the fulcrum of all forms of communication.
Classifying social texts such as fake news, rumour, sarcasm, etc. has gained
significant attention. The surface-level signals expressed by a social-text
itself may not be adequate for such tasks; therefore, recent methods attempted
to incorporate other intrinsic signals such as user behavior and the underlying
graph structure. Oftentimes, the `public wisdom' expressed through the
comments/replies to a social-text acts as a surrogate of the crowd-sourced view
and may provide us with complementary signals. State-of-the-art methods for
social-text classification tend to ignore such a rich hierarchical signal.
Here, we propose Hyphen, a discourse-aware hyperbolic spectral co-attention
network. Hyphen is a fusion of hyperbolic graph representation learning with a
novel Fourier co-attention mechanism in an attempt to generalise the
social-text classification tasks by incorporating public discourse. We parse
public discourse as an Abstract Meaning Representation (AMR) graph and use the
powerful hyperbolic geometric representation to model graphs with hierarchical
structure. Finally, we equip it with a novel Fourier co-attention mechanism to
capture the correlation between the source post and public discourse. Extensive
experiments on four different social-text classification tasks, namely
detecting fake news, hate speech, rumour, and sarcasm, show that Hyphen
generalises well, and achieves state-of-the-art results on ten benchmark
datasets. We also employ a sentence-level fact-checked and annotated dataset to
evaluate how Hyphen is capable of producing explanations as analogous evidence
to the final prediction.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 16:04:32 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 15:57:31 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Grover",
"Karish",
""
],
[
"Angara",
"S. M. Phaneendra",
""
],
[
"Akhtar",
"Md. Shad",
""
],
[
"Chakraborty",
"Tanmoy",
""
]
] |
new_dataset
| 0.958068 |
2210.02040
|
Jinsung Jeon
|
Jinsung Jeon, Jeonghak Kim, Haryong Song, Seunghyeon Cho, Noseong Park
|
GT-GAN: General Purpose Time Series Synthesis with Generative
Adversarial Networks
|
NeurIPs 2022
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Time series synthesis is an important research topic in the field of deep
learning, which can be used for data augmentation. Time series data types can
be broadly classified into regular or irregular. However, there are no existing
generative models that show good performance for both types without any model
changes. Therefore, we present a general purpose model capable of synthesizing
regular and irregular time series data. To our knowledge, we are the first to
design a general purpose time series synthesis model, which is one of the
most challenging settings for time series synthesis. To this end, we design a
generative adversarial network-based method, where many related techniques are
carefully integrated into a single framework, ranging from neural
ordinary/controlled differential equations to continuous time-flow processes.
Our method outperforms all existing methods.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 06:18:06 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Oct 2022 05:09:45 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Oct 2022 06:41:27 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Jeon",
"Jinsung",
""
],
[
"Kim",
"Jeonghak",
""
],
[
"Song",
"Haryong",
""
],
[
"Cho",
"Seunghyeon",
""
],
[
"Park",
"Noseong",
""
]
] |
new_dataset
| 0.993857 |
2210.04080
|
Jared Coleman
|
Jared Coleman, Evangelos Kranakis, Danny Krizanc, Oscar Morales-Ponce
|
Delivery to Safety with Two Cooperating Robots
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Two cooperating, autonomous mobile robots with arbitrary nonzero max speeds
are placed at arbitrary initial positions in the plane. A remotely detonated
bomb is discovered at some source location and must be moved to a safe distance
away from its initial location as quickly as possible. In the Bomb Squad
problem, the robots cooperate by communicating face-to-face in order to pick up
the bomb from the source and carry it away to the boundary of a disk centered
at the source in the shortest possible time. The goal is to specify
trajectories which define the robots' paths from start to finish and their
meeting points which enable face-to-face collaboration by exchanging
information and passing the bomb from robot to robot.
We design algorithms reflecting the robots' knowledge about orientation and
each other's speed and location. In the offline case, we design an optimal
algorithm. For the limited knowledge cases, we provide online algorithms which
consider robots' level of agreement on orientation as per OneAxis and NoAxis
models, and knowledge of the boundary as per Visible, Discoverable, and
Invisible. In all cases, we provide upper and lower bounds for the competitive
ratios of the online problems.
|
[
{
"version": "v1",
"created": "Sat, 8 Oct 2022 18:19:07 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2022 17:22:05 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Coleman",
"Jared",
""
],
[
"Kranakis",
"Evangelos",
""
],
[
"Krizanc",
"Danny",
""
],
[
"Morales-Ponce",
"Oscar",
""
]
] |
new_dataset
| 0.962596 |
2210.04951
|
Abel Souza
|
Abel Souza, Noman Bashir, Jorge Murillo, Walid Hanafy, Qianlin Liang,
David Irwin, Prashant Shenoy
|
Ecovisor: A Virtual Energy System for Carbon-Efficient Applications
| null | null | null | null |
cs.OS cs.DC cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Cloud platforms' rapid growth is raising significant concerns about their
carbon emissions. To reduce emissions, future cloud platforms will need to
increase their reliance on renewable energy sources, such as solar and wind,
which have zero emissions but are highly unreliable. Unfortunately, today's
energy systems effectively mask this unreliability in hardware, which prevents
applications from optimizing their carbon-efficiency, or work done per kilogram
of carbon emitted. To address this problem, we design an "ecovisor", which
virtualizes the energy system and exposes software-defined control of it to
applications. An ecovisor enables each application to handle clean energy's
unreliability in software based on its own specific requirements. We implement
a small-scale ecovisor prototype that virtualizes a physical energy system to
enable software-based application-level i) visibility into variable grid
carbon-intensity and renewable generation and ii) control of server power usage
and battery charging/discharging. We evaluate the ecovisor approach by showing
how multiple applications can concurrently exercise their virtual energy system
in different ways to better optimize carbon-efficiency based on their specific
requirements compared to a general system-wide policy.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 18:41:56 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Souza",
"Abel",
""
],
[
"Bashir",
"Noman",
""
],
[
"Murillo",
"Jorge",
""
],
[
"Hanafy",
"Walid",
""
],
[
"Liang",
"Qianlin",
""
],
[
"Irwin",
"David",
""
],
[
"Shenoy",
"Prashant",
""
]
] |
new_dataset
| 0.99919 |
2210.05001
|
Pavithiran Ganeshkumar
|
Pavithiran G, Sharan Padmanabhan, Ashwin Kumar BR, Vetriselvi A
|
Social Media Personal Event Notifier Using NLP and Machine Learning
|
4 pages, 5 figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Social media apps have become very promising and omnipresent in daily life.
Most social media apps are used to deliver vital information to those nearby
and far away. As our lives become more hectic, many of us strive to limit our
usage of social media apps because they are too addictive, and the majority of
us have gotten preoccupied with our daily lives. Because of this, we frequently
overlook crucial information, such as invitations to weddings, interviews,
birthday parties, etc., or find ourselves unable to attend the event. In most
cases, this happens because users are likely to discover the invitation or
information only shortly before the event, giving them little time to prepare. To solve
this issue, in this study, we created a system that will collect social media
chat and filter it using Natural Language Processing (NLP) methods like
Tokenization, Stop Words Removal, Lemmatization, Segmentation, and Named Entity
Recognition (NER). Also, machine learning algorithms such as the K-Nearest
Neighbor (KNN) algorithm are implemented to prioritize the received invitations
and sort their level of priority. Finally, a customized notification is
delivered to the users so that they can acknowledge the upcoming event,
reducing the chances of missing it and leaving time to plan.
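A hedged sketch of such a pipeline with spaCy and scikit-learn (assuming the small English spaCy model is installed; the label set and training messages are hypothetical placeholders, not from the paper):
```python
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

nlp = spacy.load("en_core_web_sm")

def preprocess(text):
    # Tokenization, stop-word removal, lemmatization and NER in one pass.
    doc = nlp(text)
    lemmas = [t.lemma_.lower() for t in doc if not (t.is_stop or t.is_punct)]
    events = [e.text for e in doc.ents if e.label_ in ("DATE", "TIME", "EVENT")]
    return " ".join(lemmas), events

train_msgs = ["interview on Monday at 10am", "wedding invitation next week",
              "random meme", "birthday party this Friday"]
train_prio = ["high", "medium", "low", "medium"]   # hypothetical labels

vec = TfidfVectorizer()
X = vec.fit_transform(p for p, _ in map(preprocess, train_msgs))
knn = KNeighborsClassifier(n_neighbors=3).fit(X, train_prio)

msg, ents = preprocess("You are invited for an interview on 12 October")
print(knn.predict(vec.transform([msg])), ents)
```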
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 20:11:40 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"G",
"Pavithiran",
""
],
[
"Padmanabhan",
"Sharan",
""
],
[
"BR",
"Ashwin Kumar",
""
],
[
"A",
"Vetriselvi",
""
]
] |
new_dataset
| 0.992596 |
2210.05018
|
Chenxi Liu
|
Chenxi Liu, Zhaoqi Leng, Pei Sun, Shuyang Cheng, Charles R. Qi, Yin
Zhou, Mingxing Tan, Dragomir Anguelov
|
LidarNAS: Unifying and Searching Neural Architectures for 3D Point
Clouds
|
ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing neural models that accurately understand objects in 3D point
clouds is essential for the success of robotics and autonomous driving.
However, arguably due to the higher-dimensional nature of the data (as compared
to images), existing neural architectures exhibit a large variety in their
designs, including but not limited to the views considered, the format of the
neural features, and the neural operations used. Lack of a unified framework
and interpretation makes it hard to put these designs in perspective, as well
as systematically explore new ones. In this paper, we begin by proposing such a
unified framework, with the key idea of factorizing the neural
networks into a series of view transforms and neural layers. We demonstrate
that this modular framework can reproduce a variety of existing works while
allowing a fair comparison of backbone designs. Then, we show how this
framework can easily materialize into a concrete neural architecture search
(NAS) space, allowing a principled NAS-for-3D exploration. In performing
evolutionary NAS on the 3D object detection task on the Waymo Open Dataset, not
only do we outperform the state-of-the-art models, but also report the
interesting finding that NAS tends to discover the same macro-level
architecture concept for both the vehicle and pedestrian classes.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 21:21:41 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Liu",
"Chenxi",
""
],
[
"Leng",
"Zhaoqi",
""
],
[
"Sun",
"Pei",
""
],
[
"Cheng",
"Shuyang",
""
],
[
"Qi",
"Charles R.",
""
],
[
"Zhou",
"Yin",
""
],
[
"Tan",
"Mingxing",
""
],
[
"Anguelov",
"Dragomir",
""
]
] |
new_dataset
| 0.970866 |
2210.05068
|
Rhys Newbury
|
Jason Toskov, Rhys Newbury, Mustafa Mukadam, Dana Kuli\'c, Akansel
Cosgun
|
In-Hand Gravitational Pivoting Using Tactile Sensing
|
Accepted as poster presentation to Conference on Robot Learning
(CoRL) 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study gravitational pivoting, a constrained version of in-hand
manipulation, where we aim to control the rotation of an object around the grip
point of a parallel gripper. To achieve this, instead of controlling the
gripper to avoid slip, we embrace slip to allow the object to rotate in-hand.
We collect two real-world datasets, a static tracking dataset and a
controller-in-the-loop dataset, both annotated with object angle and angular
velocity labels. Both datasets contain force-based tactile information on ten
different household objects. We train an LSTM model to predict the angular
position and velocity of the held object from purely tactile data. We integrate
this model with a controller that opens and closes the gripper allowing the
object to rotate to desired relative angles. We conduct real-world experiments
where the robot is tasked to achieve a relative target angle. We show that our
approach outperforms a sliding-window based MLP in a zero-shot generalization
setting with unseen objects. Furthermore, we show a 16.6% improvement in
performance when the LSTM model is fine-tuned on a small set of data collected
with both the LSTM model and the controller in-the-loop. Code and videos are
available at https://rhys-newbury.github.io/projects/pivoting/
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 00:41:38 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Toskov",
"Jason",
""
],
[
"Newbury",
"Rhys",
""
],
[
"Mukadam",
"Mustafa",
""
],
[
"Kulić",
"Dana",
""
],
[
"Cosgun",
"Akansel",
""
]
] |
new_dataset
| 0.998875 |
2210.05076
|
Yuanzhi Su
|
Wanpeng Fan, Yuanzhi Su, Yuxin Huang
|
ConchShell: A Generative Adversarial Networks that Turns Pictures into
Piano Music
|
5 pages
| null | null | null |
cs.SD cs.IR eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ConchShell, a multi-modal generative adversarial framework that
takes pictures as input to the network and generates piano music samples that
match the picture context. Inspired by I3D, we introduce a novel image feature
representation method: time-convolutional neural network (TCNN), which is used
to forge features for images in the temporal dimension. Although our image data
consist of only six categories, our proposed framework is innovative and
commercially meaningful. The project will provide technical ideas for work such
as 3D game voice-overs, short-video soundtracks, and real-time generation of
metaverse background music. We have also released a new dataset, the
Beach-Ocean-Piano Dataset (BOPD), which contains more than 3,000 images and
more than 1,500 piano pieces. This dataset will support multimodal
image-to-music research.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 01:04:39 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Fan",
"Wanpeng",
""
],
[
"Su",
"Yuanzhi",
""
],
[
"Huang",
"Yuxin",
""
]
] |
new_dataset
| 0.998717 |
2210.05092
|
Xiaoyi Qin
|
Xiaoyi Qin, Na Li, Yuke Lin, Yiwei Ding, Chao Weng, Dan Su, Ming Li
|
The DKU-Tencent System for the VoxCeleb Speaker Recognition Challenge
2022
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper is the system description of the DKU-Tencent System for the
VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC22). In this challenge, we
focus on track1 and track3. For track1, multiple backbone networks are adopted
to extract frame-level features. Since track1 focuses on cross-age scenarios,
we adopt the cross-age trials and apply QMF to calibrate the scores. The
magnitude-based quality measures achieve a large improvement. For track3, the
semi-supervised domain adaptation task, the pseudo-label method is adopted to
perform domain adaptation. Considering the noisy labels from clustering,
ArcFace is replaced by Sub-center ArcFace. The final submission achieves 0.107 mDCF in
task1 and 7.135% EER in task3.
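QMF details are not given in this summary; one common quality-aware calibration pattern, learning fusion weights over the raw trial score and quality measures with logistic regression, can be sketched as follows (all trial data below are synthetic placeholders):
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
raw = np.r_[rng.normal(2, 1, n), rng.normal(-2, 1, n)]      # raw cosine scores
quality = np.r_[rng.normal(12, 2, n), rng.normal(9, 2, n)]  # e.g. magnitude
labels = np.r_[np.ones(n), np.zeros(n)]                     # target / non-target

X = np.c_[raw, quality]
calib = LogisticRegression().fit(X, labels)   # learn fusion weights
calibrated = calib.decision_function(X)       # quality-calibrated scores
print(calibrated[:3])
```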
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 02:09:40 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Qin",
"Xiaoyi",
""
],
[
"Li",
"Na",
""
],
[
"Lin",
"Yuke",
""
],
[
"Ding",
"Yiwei",
""
],
[
"Weng",
"Chao",
""
],
[
"Su",
"Dan",
""
],
[
"Li",
"Ming",
""
]
] |
new_dataset
| 0.970989 |
2210.05093
|
Christian Jung
|
Christian Jung, Claudia Redenbach
|
Crack Modeling via Minimum-Weight Surfaces in 3d Voronoi Diagrams
| null | null | null | null |
cs.GR cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shortest paths play an important role in mathematical modeling and image
processing. Usually, shortest path problems are formulated on planar graphs
that consist of vertices and weighted arcs. In this context, one is interested
in finding a path of minimum weight from a start vertex to an end vertex. The
concept of minimum-weight surfaces extends shortest paths to 3d. The
minimum-weight surface problem is formulated on a cellular complex with
weighted facets. A cycle on the arcs of the complex serves as input and one is
interested in finding a surface of minimum weight bounded by that cycle. In
practice, minimum-weight surfaces can be used to segment 3d images. Vice versa,
it is possible to use them as a modeling tool for geometric structures such as
cracks. In this work, we present an approach for using minimum-weight surfaces
in bounded Voronoi diagrams to generate synthetic 3d images of cracks.
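For the shortest-path half of the analogy, a minimal Dijkstra implementation on a weighted graph (the lower-dimensional counterpart of the minimum-weight surface problem) looks like this; the toy graph is our own:
```python
import heapq

def dijkstra(graph, start):
    """Minimum-weight path distances from `start`.
    graph: dict mapping vertex -> list of (neighbor, weight) pairs."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0.0, 'b': 2.0, 'c': 3.0}
```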
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 02:12:11 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Jung",
"Christian",
""
],
[
"Redenbach",
"Claudia",
""
]
] |
new_dataset
| 0.990801 |
2210.05109
|
Rifat Shahriyar
|
Ajwad Akil, Najrin Sultana, Abhik Bhattacharjee and Rifat Shahriyar
|
BanglaParaphrase: A High-Quality Bangla Paraphrase Dataset
|
AACL 2022 (camera-ready)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present BanglaParaphrase, a high-quality synthetic Bangla
Paraphrase dataset curated by a novel filtering pipeline. We aim to take a step
towards alleviating the low resource status of the Bangla language in the NLP
domain through the introduction of BanglaParaphrase, which ensures quality by
preserving both semantics and diversity, making it particularly useful to
enhance other Bangla datasets. We show a detailed comparative analysis between
our dataset and models trained on it with other existing works to establish the
viability of our synthetic paraphrase data generation pipeline. We are making
the dataset and models publicly available at
https://github.com/csebuetnlp/banglaparaphrase to further the state of Bangla
NLP.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 02:52:31 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Akil",
"Ajwad",
""
],
[
"Sultana",
"Najrin",
""
],
[
"Bhattacharjee",
"Abhik",
""
],
[
"Shahriyar",
"Rifat",
""
]
] |
new_dataset
| 0.999719 |
2210.05112
|
Haneul Yoo
|
Haneul Yoo, Jiho Jin, Juhee Son, JinYeong Bak, Kyunghyun Cho, Alice Oh
|
HUE: Pretrained Model and Dataset for Understanding Hanja Documents of
Ancient Korea
|
Findings of NAACL 2022
| null |
10.18653/v1/2022.findings-naacl.140
| null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Historical records in Korea before the 20th century were primarily written in
Hanja, an extinct language based on Chinese characters and not understood by
modern Korean or Chinese speakers. Historians with expertise in this time
period have been analyzing the documents, but that process is very difficult
and time-consuming, and language models would significantly speed up the
process. Toward building and evaluating language models for Hanja, we release
the Hanja Understanding Evaluation dataset consisting of chronological
attribution, topic classification, named entity recognition, and summary
retrieval tasks. We also present BERT-based models obtained by continued training on the
two major corpora from the 14th to the 19th centuries: the Annals of the Joseon
Dynasty and Diaries of the Royal Secretariats. We compare the models with
several baselines on all tasks and show there are significant improvements
gained by training on the two corpora. Additionally, we run zero-shot
experiments on the Daily Records of the Royal Court and Important Officials
(DRRI). The DRRI dataset has not been studied much by historians, and not
at all by the NLP community.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 03:04:28 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Yoo",
"Haneul",
""
],
[
"Jin",
"Jiho",
""
],
[
"Son",
"Juhee",
""
],
[
"Bak",
"JinYeong",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Oh",
"Alice",
""
]
] |
new_dataset
| 0.999741 |
2210.05168
|
Lev Utkin
|
Andrei V. Konstantinov and Lev V. Utkin
|
LARF: Two-level Attention-based Random Forests with a Mixture of
Contamination Models
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
New models of the attention-based random forests called LARF (Leaf
Attention-based Random Forest) are proposed. The first idea behind the models
is to introduce a two-level attention, where one of the levels is the "leaf"
attention and the attention mechanism is applied to every leaf of trees. The
second level is the tree attention depending on the "leaf" attention. The
second idea is to replace the softmax operation in the attention with the
weighted sum of the softmax operations with different parameters. It is
implemented by applying a mixture of the Huber's contamination models and can
be regarded as an analog of the multi-head attention with "heads" defined by
selecting a value of the softmax parameter. Attention parameters are simply
trained by solving the quadratic optimization problem. To simplify the tuning
process of the models, we propose to make the contamination parameters
trainable and to compute them by solving the quadratic optimization problem.
Many numerical experiments with real datasets are performed for studying
LARFs. The code of the proposed algorithms can be found at
https://github.com/andruekonst/leaf-attention-forest.
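A minimal NumPy sketch of the mixture-of-softmax idea described above, with fixed illustrative temperatures and mixture weights standing in for the parameters the paper trains via quadratic programming:
```python
import numpy as np

def softmax(x, tau):
    z = x / tau
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def mixture_attention(scores, taus, weights):
    """Replace a single softmax with a weighted sum of softmaxes with
    different temperatures -- the 'heads defined by the softmax
    parameter' idea. `weights` would be trained in the paper; they are
    fixed here for illustration."""
    heads = np.stack([softmax(scores, t) for t in taus])  # (H, n)
    return weights @ heads                                # (n,)

scores = np.array([1.0, 0.2, -0.5, 2.0])  # e.g. leaf/tree similarities
taus = [0.5, 1.0, 2.0]                    # contamination "heads"
weights = np.array([0.2, 0.5, 0.3])       # mixture weights (sum to 1)
print(mixture_attention(scores, taus, weights))
```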
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 06:14:12 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Konstantinov",
"Andrei V.",
""
],
[
"Utkin",
"Lev V.",
""
]
] |
new_dataset
| 0.993204 |
2210.05170
|
Ziling Heng
|
Ziling Heng, Xinran Wang, Xiaoru Li
|
Constructions of cyclic codes and extended primitive cyclic codes with
their applications
|
21 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear codes with a few weights have many nice applications, including
combinatorial designs, distributed storage systems, secret sharing schemes,
and so on. In this paper, we construct two families of linear codes with a few weights
based on special polynomials over finite fields. The first family of linear
codes are extended primitive cyclic codes which are affine-invariant. The
second family of linear codes are reducible cyclic codes. The parameters of
these codes and their duals are determined. As the first application, we prove
that these two families of linear codes hold $t$-designs, where $t=2,3$. As the
second application, the minimum localities of the codes are also determined and
optimal locally recoverable codes are derived.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 06:16:14 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Heng",
"Ziling",
""
],
[
"Wang",
"Xinran",
""
],
[
"Li",
"Xiaoru",
""
]
] |
new_dataset
| 0.999324 |
2210.05180
|
Xiaowu Sun
|
Xiaowu Sun, Yasser Shoukry
|
Neurosymbolic Motion and Task Planning for Linear Temporal Logic Tasks
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a neurosymbolic framework to solve motion planning
problems for mobile robots involving temporal goals. The temporal goals are
described using temporal logic formulas such as Linear Temporal Logic (LTL) to
capture complex tasks. The proposed framework trains Neural Network (NN)-based
planners that enjoy strong correctness guarantees when applied to unseen
tasks, i.e., the exact task (including workspace, LTL formula, and dynamic
constraints of a robot) is unknown during the training of NNs. Our approach to
achieving theoretical guarantees and computational efficiency is based on two
insights. First, we incorporate a symbolic model into the training of NNs such
that the resulting NN-based planner inherits the interpretability and
correctness guarantees of the symbolic model. Moreover, the symbolic model
serves as a discrete "memory", which is necessary for satisfying temporal logic
formulas. Second, we train a library of neural networks offline and combine a
subset of the trained NNs into a single NN-based planner at runtime when a task
is revealed. In particular, we develop a novel constrained NN training
procedure, named formal NN training, to enforce that each neural network in the
library represents a "symbol" in the symbolic model. As a result, our
neurosymbolic framework enjoys the scalability and flexibility benefits of
machine learning and inherits the provable guarantees from control-theoretic
and formal-methods techniques. We demonstrate the effectiveness of our
framework in both simulations and on an actual robotic vehicle, and show that
our framework can generalize to unknown tasks where state-of-the-art
meta-reinforcement learning techniques fail.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 06:33:58 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Sun",
"Xiaowu",
""
],
[
"Shoukry",
"Yasser",
""
]
] |
new_dataset
| 0.999114 |
2210.05217
|
Guillaume Bau
|
Guillaume Bau, Antoine Min\'e, Vincent Botbol, Mehdi Bouaziz
|
Abstract interpretation of Michelson smart-contracts
| null |
SOAP '22: 11th ACM SIGPLAN International Workshop on the State Of
the Art in Program Analysis, Jun 2022, San Diego, CA, United States. pp.36-43
|
10.1145/3520313.3534660
| null |
cs.CR cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Static analysis of smart-contracts is becoming more widespread on blockchain
platforms. Analyzers rely on techniques like symbolic execution or model
checking, but few of them can provide strong soundness properties and guarantee
the analysis termination at the same time. As smart-contracts often manipulate
economic assets, proving numerical properties beyond the absence of runtime
errors is also desirable. Smart-contract execution models differ considerably
from mainstream programming languages and vary from one blockchain to another,
making state-of-the-art analyses hard to adapt. For instance, smart-contract
calls may modify a persistent storage impacting subsequent calls. This makes it
difficult for tools to infer invariants required to formally ensure the absence
of exploitable vulnerabilities. The Michelson smart-contract language, used in
the Tezos blockchain, is strongly typed, stack-based, and has a strict
execution model leaving few opportunities for implicit runtime errors. We
present a work-in-progress static analyzer for Michelson based on Abstract
Interpretation and implemented within MOPSA, a modular static analyzer. Our
tool supports the Michelson semantic features, including inner calls to
external contracts. It can prove the absence of runtime errors and infer
invariants on the persistent storage over an unbounded number of calls. It is
also being extended to prove high-level numerical and security properties.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 07:32:56 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Bau",
"Guillaume",
""
],
[
"Miné",
"Antoine",
""
],
[
"Botbol",
"Vincent",
""
],
[
"Bouaziz",
"Mehdi",
""
]
] |
new_dataset
| 0.994709 |
2210.05265
|
Fan Yu
|
Fan Yu, Shiliang Zhang, Pengcheng Guo, Yuhao Liang, Zhihao Du, Yuxiao
Lin, Lei Xie
|
MFCCA:Multi-Frame Cross-Channel attention for multi-speaker ASR in
Multi-party meeting scenario
|
Accepted by SLT 2022
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recently, cross-channel attention, which better leverages multi-channel
signals from microphone array, has shown promising results in the multi-party
meeting scenario. Cross-channel attention focuses on either learning global
correlations between sequences of different channels or exploiting fine-grained
channel-wise information effectively at each time step. Considering the delay
with which a microphone array receives sound, we propose a multi-frame cross-channel
attention, which models cross-channel information between adjacent frames to
exploit the complementarity of both frame-wise and channel-wise knowledge.
Besides, we also propose a multi-layer convolutional mechanism to fuse the
multi-channel output and a channel masking strategy to combat the channel
number mismatch problem between training and inference. Experiments on the
AliMeeting, a real-world corpus, reveal that our proposed model outperforms the
single-channel model with 31.7\% and 37.0\% CER reductions on the Eval and Test sets.
Moreover, with comparable model parameters and training data, our proposed
model achieves a new SOTA performance on the AliMeeting corpus, as compared
with the top ranking systems in the ICASSP2022 M2MeT challenge, a recently held
multi-channel multi-speaker ASR challenge.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 08:54:17 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Yu",
"Fan",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Guo",
"Pengcheng",
""
],
[
"Liang",
"Yuhao",
""
],
[
"Du",
"Zhihao",
""
],
[
"Lin",
"Yuxiao",
""
],
[
"Xie",
"Lei",
""
]
] |
new_dataset
| 0.99953 |
2210.05313
|
Loic Themyr
|
Loic Themyr, Cl\'ement Rambour, Nicolas Thome, Toby Collins, Alexandre
Hostettler
|
Memory transformers for full context and high-resolution 3D Medical
Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer models achieve state-of-the-art results for image segmentation.
However, achieving long-range attention, necessary to capture global context,
with high-resolution 3D images is a fundamental challenge. This paper
introduces the Full resolutIoN mEmory (FINE) transformer to overcome this
issue. The core idea behind FINE is to learn memory tokens to indirectly model
full range interactions while scaling well in both memory and computational
costs. FINE introduces memory tokens at two levels: the first one allows full
interaction between voxels within local image regions (patches), the second one
allows full interactions between all regions of the 3D volume. Combined, they
allow full attention over high resolution images, e.g. 512 x 512 x 256 voxels
and above. Experiments on the BCV image segmentation dataset show better
performance than state-of-the-art CNN and transformer baselines, highlighting
the superiority of our full attention mechanism compared to recent transformer
baselines, e.g. CoTr, and nnFormer.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 10:11:05 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Themyr",
"Loic",
""
],
[
"Rambour",
"Clément",
""
],
[
"Thome",
"Nicolas",
""
],
[
"Collins",
"Toby",
""
],
[
"Hostettler",
"Alexandre",
""
]
] |
new_dataset
| 0.976318 |
2210.05372
|
Md. Bakhtiar Hasan
|
Mohsinul Kabir, Tasnim Ahmed, Md. Bakhtiar Hasan, Md Tahmid Rahman
Laskar, Tarun Kumar Joarder, Hasan Mahmud, Kamrul Hasan
|
DEPTWEET: A Typology for Social Media Texts to Detect Depression
Severities
|
17 pages, 6 figures, 6 tables, Accepted in Computers in Human
Behavior
|
Computers in Human Behavior, 107503 (2022)
|
10.1016/j.chb.2022.107503
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Mental health research through data-driven methods has been hindered by a
lack of standard typology and scarcity of adequate data. In this study, we
leverage the clinical articulation of depression to build a typology for social
media texts for detecting the severity of depression. It emulates the standard
clinical assessment procedure Diagnostic and Statistical Manual of Mental
Disorders (DSM-5) and Patient Health Questionnaire (PHQ-9) to encompass subtle
indications of depressive disorders from tweets. Along with the typology, we
present a new dataset of 40191 tweets labeled by expert annotators. Each tweet
is labeled as 'non-depressed' or 'depressed'. Moreover, three severity levels
are considered for 'depressed' tweets: (1) mild, (2) moderate, and (3) severe.
An associated confidence score is provided with each label to validate the
quality of annotation. We examine the quality of the dataset by presenting
summary statistics and by setting strong baseline results using attention-based
models like BERT and DistilBERT. Finally, we extensively address the
limitations of the study to provide directions for further research.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 08:23:57 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Kabir",
"Mohsinul",
""
],
[
"Ahmed",
"Tasnim",
""
],
[
"Hasan",
"Md. Bakhtiar",
""
],
[
"Laskar",
"Md Tahmid Rahman",
""
],
[
"Joarder",
"Tarun Kumar",
""
],
[
"Mahmud",
"Hasan",
""
],
[
"Hasan",
"Kamrul",
""
]
] |
new_dataset
| 0.991611 |
2210.05401
|
Cagri Toraman
|
Cagri Toraman, Oguzhan Ozcelik, Furkan \c{S}ahinu\c{c}, Fazli Can
|
Not Good Times for Lies: Misinformation Detection on the Russia-Ukraine
War, COVID-19, and Refugees
| null | null | null | null |
cs.SI cs.CL cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Misinformation spread in online social networks is an urgent problem with
harmful consequences that threaten human health, public safety, the economy,
and so on. In this study, we construct a novel dataset, called
MiDe-22, containing 5,284 English and 5,064 Turkish tweets with their
misinformation labels under several recent events, including the Russia-Ukraine
war, the COVID-19 pandemic, and Refugees. Moreover, we provide the user
engagements with the tweets in terms of likes, replies, retweets, and quotes. We present a
detailed data analysis with descriptive statistics and temporal analysis, and
provide the experimental results of a benchmark evaluation for misinformation
detection on our novel dataset.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 12:25:26 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Toraman",
"Cagri",
""
],
[
"Ozcelik",
"Oguzhan",
""
],
[
"Şahinuç",
"Furkan",
""
],
[
"Can",
"Fazli",
""
]
] |
new_dataset
| 0.996801 |
2210.05405
|
Ruolin Xing
|
Ruolin Xing, Xiao Ma, Ao Zhou, Schahram Dustdar, Shangguang Wang
|
From Earth to Space: A First Deployment of 5G Core Network on Satellite
|
This paper has been accepted by China Communications
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent developments in the aerospace industry have led to a dramatic
reduction in the manufacturing and launch costs of low Earth orbit satellites.
The new trend enables the paradigm shift of satellite-terrestrial integrated
networks with global coverage. In particular, the integration of 5G
communication systems and satellites has the potential to restructure
next-generation mobile networks. By leveraging the network function
virtualization and network slicing, the orbital 5G core networks will
facilitate the coordination and management of network functions in
satellite-terrestrial integrated networks. We are the first to deploy a
lightweight 5G core network on a real-world satellite to investigate its
feasibility. We conducted experiments to validate the onboard 5G core network
functions, covering the registration and session setup procedures. The
results show that the 5G core network can function normally and
generate correct signaling.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 12:28:54 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Xing",
"Ruolin",
""
],
[
"Ma",
"Xiao",
""
],
[
"Zhou",
"Ao",
""
],
[
"Dustdar",
"Schahram",
""
],
[
"Wang",
"Shangguang",
""
]
] |
new_dataset
| 0.996287 |
2210.05480
|
Tosin Adewumi
|
Tosin Adewumi, Sana Sabah Sabry, Nosheen Abid, Foteini Liwicki and
Marcus Liwicki
|
T5 for Hate Speech, Augmented Data and Ensemble
|
15 pages, 18 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We conduct relatively extensive investigations of automatic hate speech (HS)
detection using different state-of-the-art (SoTA) baselines over 11 subtasks of
6 different datasets. Our motivation is to determine which of the recent SoTA
models is best for automatic hate speech detection and what advantage methods
like data augmentation and ensemble may have on the best model, if any. We
carry out 6 cross-task investigations. We achieve new SoTA on two subtasks -
macro F1 scores of 91.73% and 53.21% for subtasks A and B of the HASOC 2020
dataset, where previous SoTA are 51.52% and 26.52%, respectively. We achieve
near-SoTA on two others - macro F1 scores of 81.66% for subtask A of the OLID
2019 dataset and 82.54% for subtask A of the HASOC 2021 dataset, where SoTA are
82.9% and 83.05%, respectively. We perform error analysis and use two
explainable artificial intelligence (XAI) algorithms (IG and SHAP) to reveal
how two of the models (Bi-LSTM and T5) make the predictions they do by using
examples. Other contributions of this work are 1) the introduction of a simple,
novel mechanism for correcting out-of-class (OOC) predictions in T5, 2) a
detailed description of the data augmentation methods, 3) the revelation of the
poor data annotations in the HASOC 2021 dataset by using several examples and
XAI (buttressing the need for better quality control), and 4) the public
release of our model checkpoints and codes to foster transparency.
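The OOC-correction mechanism is only named here; one plausible hedged reading, snapping a free-form generation onto the closest valid class label, can be sketched as follows (the label set is hypothetical, not the paper's):
```python
import difflib

VALID_LABELS = ["hateful", "offensive", "neither"]   # hypothetical label set

def correct_ooc(generated: str) -> str:
    """Map an out-of-class generation to the nearest valid label.
    This is our illustrative reading, not necessarily the paper's
    exact mechanism."""
    text = generated.strip().lower()
    if text in VALID_LABELS:
        return text
    close = difflib.get_close_matches(text, VALID_LABELS, n=1, cutoff=0.0)
    return close[0]

print(correct_ooc("offensive language"))  # -> "offensive"
print(correct_ooc("hatefull"))            # -> "hateful"
```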
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 14:32:39 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Adewumi",
"Tosin",
""
],
[
"Sabry",
"Sana Sabah",
""
],
[
"Abid",
"Nosheen",
""
],
[
"Liwicki",
"Foteini",
""
],
[
"Liwicki",
"Marcus",
""
]
] |
new_dataset
| 0.951363 |
2210.05513
|
Nicholas Meegan
|
Nicholas Meegan, Hansi Liu, Bryan Cao, Abrar Alali, Kristin Dana,
Marco Gruteser, Shubham Jain and Ashwin Ashok
|
ViFiCon: Vision and Wireless Association Via Self-Supervised Contrastive
Learning
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce ViFiCon, a self-supervised contrastive learning scheme which
uses synchronized information across vision and wireless modalities to perform
cross-modal association. Specifically, the system uses pedestrian data
collected from RGB-D camera footage as well as WiFi Fine Time Measurements
(FTM) from a user's smartphone device. We represent the temporal sequence by
stacking multi-person depth data spatially within a banded image. Depth data
from RGB-D (vision domain) is inherently linked with an observable pedestrian,
but FTM data (wireless domain) is associated only to a smartphone on the
network. To formulate the cross-modal association problem as self-supervised,
the network learns a scene-wide synchronization of the two modalities as a
pretext task, and then uses that learned representation for the downstream task
of associating individual bounding boxes to specific smartphones, i.e.
associating vision and wireless information. We use a pre-trained region
proposal model on the camera footage and then feed the extrapolated bounding
box information into a dual-branch convolutional neural network along with the
FTM data. We show that compared to fully supervised SoTA models, ViFiCon
achieves high-performance vision-to-wireless association, finding which
bounding box corresponds to which smartphone device, without hand-labeled
association examples for training data.
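A minimal sketch of the kind of batch-level contrastive objective this setup implies, with synchronized vision/FTM pairs as positives and all other batch pairings as negatives (an InfoNCE-style loss; random embeddings stand in for the dual-branch encoder outputs):
```python
import numpy as np

def info_nce(vision_emb, wireless_emb, tau=0.1):
    """InfoNCE-style loss over a batch of synchronized pairs."""
    v = vision_emb / np.linalg.norm(vision_emb, axis=1, keepdims=True)
    w = wireless_emb / np.linalg.norm(wireless_emb, axis=1, keepdims=True)
    logits = v @ w.T / tau                  # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))      # matched pairs on the diagonal

B, d = 8, 32
rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(B, d)), rng.normal(size=(B, d))))
```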
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 15:04:05 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Meegan",
"Nicholas",
""
],
[
"Liu",
"Hansi",
""
],
[
"Cao",
"Bryan",
""
],
[
"Alali",
"Abrar",
""
],
[
"Dana",
"Kristin",
""
],
[
"Gruteser",
"Marco",
""
],
[
"Jain",
"Shubham",
""
],
[
"Ashok",
"Ashwin",
""
]
] |
new_dataset
| 0.966861 |
2210.05665
|
Yue Jiang
|
Yue Jiang, Marc Habermann, Vladislav Golyanik, Christian Theobalt
|
HiFECap: Monocular High-Fidelity and Expressive Capture of Human
Performances
|
Accepted by BMVC 2022
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular 3D human performance capture is indispensable for many applications
in computer graphics and vision for enabling immersive experiences. However,
detailed capture of humans requires tracking of multiple aspects, including the
skeletal pose, the dynamic surface, which includes clothing, hand gestures as
well as facial expressions. No existing monocular method allows joint tracking
of all these components. To this end, we propose HiFECap, a new neural human
performance capture approach, which simultaneously captures human pose,
clothing, facial expression, and hands just from a single RGB video. We
demonstrate that our proposed network architecture, the carefully designed
training strategy, and the tight integration of parametric face and hand models
to a template mesh enable the capture of all these individual aspects.
Importantly, our method also captures high-frequency details, such as deforming
wrinkles on the clothes, better than the previous works. Furthermore, we show
that HiFECap outperforms the state-of-the-art human performance capture
approaches qualitatively and quantitatively while for the first time capturing
all aspects of the human.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 17:57:45 GMT"
}
] | 2022-10-12T00:00:00 |
[
[
"Jiang",
"Yue",
""
],
[
"Habermann",
"Marc",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Theobalt",
"Christian",
""
]
] |
new_dataset
| 0.99921 |
1908.06504
|
Daniel Perz
|
Oswin Aichholzer and Matias Korman and Yoshio Okamoto and Irene Parada
and Daniel Perz and Andr\'e van Renssen and Birgit Vogtenhuber
|
Graphs with large total angular resolution
|
Some parts appeared in the Proceedings of the 27th International
Symposium on Graph Drawing and Network Visualization (GD 2019)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
The total angular resolution of a straight-line drawing is the minimum angle
between two edges of the drawing. It combines two properties contributing to
the readability of a drawing: the angular resolution, which is the minimum
angle between incident edges, and the crossing resolution, which is the minimum
angle between crossing edges. We consider the total angular resolution of a
graph, which is the maximum total angular resolution of a straight-line drawing
of this graph. We prove that, up to a finite number of well specified
exceptions of constant size, the number of edges of a graph with $n$ vertices
and a total angular resolution greater than $60^{\circ}$ is bounded by $2n-6$.
This bound is tight. In addition, we show that deciding whether a graph has
total angular resolution at least $60^{\circ}$ is NP-hard.
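To make the definition concrete, here is a small script computing the incident-edge part of the total angular resolution for a toy drawing (the crossing-angle part is omitted for brevity; a triangle has no crossings):
```python
import numpy as np
from itertools import combinations

pos = {0: (0, 0), 1: (1, 0), 2: (0.2, 1.0)}   # vertex coordinates
edges = [(0, 1), (0, 2), (1, 2)]              # a triangle

def angle(u, v, w):
    """Angle at vertex u between edges (u, v) and (u, w), in degrees."""
    a = np.subtract(pos[v], pos[u])
    b = np.subtract(pos[w], pos[u])
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

incident = []
for u in pos:
    nbrs = [v for e in edges for v in e if u in e and v != u]
    incident += [angle(u, v, w) for v, w in combinations(nbrs, 2)]
print(min(incident))   # angular resolution of this drawing
```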
|
[
{
"version": "v1",
"created": "Sun, 18 Aug 2019 19:29:53 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 23:08:01 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Aichholzer",
"Oswin",
""
],
[
"Korman",
"Matias",
""
],
[
"Okamoto",
"Yoshio",
""
],
[
"Parada",
"Irene",
""
],
[
"Perz",
"Daniel",
""
],
[
"van Renssen",
"André",
""
],
[
"Vogtenhuber",
"Birgit",
""
]
] |
new_dataset
| 0.99983 |
2010.02870
|
Mert Kayaalp
|
Mert Kayaalp, Stefan Vlaski, Ali H. Sayed
|
Dif-MAML: Decentralized Multi-Agent Meta-Learning
| null |
IEEE Open Journal of Signal Processing, vol: 3, p. 71 - 93 , Jan.
2022
|
10.1109/OJSP.2021.3140000
| null |
cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of meta-learning is to exploit the knowledge obtained from
observed tasks to improve adaptation to unseen tasks. As such, meta-learners
are able to generalize better when they are trained with a larger number of
observed tasks and with a larger amount of data per task. Given the amount of
resources that are needed, it is generally difficult to expect the tasks, their
respective data, and the necessary computational capacity to be available at a
single central location. It is more natural to encounter situations where these
resources are spread across several agents connected by some graph topology.
The formalism of meta-learning is actually well-suited to this decentralized
setting, where the learner would be able to benefit from information and
computational power spread across the agents. Motivated by this observation, in
this work, we propose a cooperative fully-decentralized multi-agent
meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML.
Decentralized optimization algorithms are superior to centralized
implementations in terms of scalability, avoidance of communication
bottlenecks, and privacy guarantees. The work provides a detailed theoretical
analysis to show that the proposed strategy allows a collection of agents to
attain agreement at a linear rate and to converge to a stationary point of the
aggregate MAML objective even in non-convex environments. Simulation results
illustrate the theoretical findings and the superior performance relative to
the traditional non-cooperative setting.
|
[
{
"version": "v1",
"created": "Tue, 6 Oct 2020 16:51:09 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Kayaalp",
"Mert",
""
],
[
"Vlaski",
"Stefan",
""
],
[
"Sayed",
"Ali H.",
""
]
] |
new_dataset
| 0.996224 |
2107.06056
|
Prathamesh Kalamkar
|
Prathamesh Kalamkar, Janani Venugopalan Ph.D., Vivek Raghavan Ph.D
|
Indian Legal NLP Benchmarks : A Survey
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Availability of challenging benchmarks is the key to advancement of AI in a
specific field. Since Legal Text is significantly different from normal English
text, there is a need to create separate Natural Language Processing benchmarks
for Indian Legal Text which are challenging and focus on tasks specific to
Legal Systems. This will spur innovation in applications of Natural Language
Processing for Indian Legal Text and will benefit the AI community and Legal
fraternity. We review the existing work in this area and propose ideas to
create new benchmarks for Indian Legal Natural Language Processing.
|
[
{
"version": "v1",
"created": "Tue, 13 Jul 2021 13:10:10 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Kalamkar",
"Prathamesh",
""
],
[
"D.",
"Janani Venugopalan Ph.",
""
],
[
"D",
"Vivek Raghavan Ph.",
""
]
] |
new_dataset
| 0.998854 |
2109.08833
|
Mingda Chen
|
Mingda Chen, Kevin Gimpel
|
TVStoryGen: A Dataset for Generating Stories with Character Descriptions
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce TVStoryGen, a story generation dataset that requires generating
detailed TV show episode recaps from a brief summary and a set of documents
describing the characters involved. Unlike other story generation datasets,
TVStoryGen contains stories that are authored by professional screen-writers
and that feature complex interactions among multiple characters. Generating
stories in TVStoryGen requires drawing relevant information from the lengthy
provided documents about characters based on the brief summary. In addition, we
propose to train reverse models on our dataset for evaluating the faithfulness
of generated stories. We create TVStoryGen from fan-contributed websites, which
allows us to collect 26k episode recaps with 1868.7 tokens on average.
Empirically, we take a hierarchical story generation approach and find that the
neural model that uses oracle content selectors for character descriptions
demonstrates the best performance on automatic metrics, showing the potential
of our dataset to inspire future research on story generation with constraints.
Qualitative analysis shows that the best-performing model sometimes generates
content that is unfaithful to the short summaries, suggesting promising
directions for future work.
|
[
{
"version": "v1",
"created": "Sat, 18 Sep 2021 05:02:29 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 04:29:11 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Chen",
"Mingda",
""
],
[
"Gimpel",
"Kevin",
""
]
] |
new_dataset
| 0.99984 |
2110.12942
|
Jiajun Deng
|
Hao Feng, Yuechen Wang, Wengang Zhou, Jiajun Deng, Houqiang Li
|
DocTr: Document Image Transformer for Geometric Unwarping and
Illumination Correction
|
This paper has been accepted by ACM Multimedia 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a new framework, called Document Image Transformer
(DocTr), to address the issue of geometry and illumination distortion of the
document images. Specifically, DocTr consists of a geometric unwarping
transformer and an illumination correction transformer. By setting a set of
learned query embedding, the geometric unwarping transformer captures the
global context of the document image by self-attention mechanism and decodes
the pixel-wise displacement solution to correct the geometric distortion. After
geometric unwarping, our illumination correction transformer further removes
the shading artifacts to improve the visual quality and OCR accuracy. Extensive
evaluations are conducted on several datasets, and superior results are
reported against the state-of-the-art methods. Remarkably, our DocTr achieves
20.02% Character Error Rate (CER), a 15% absolute improvement over the
state-of-the-art methods. Moreover, it also shows high efficiency on running
time and parameter count. The results will be available at
https://github.com/fh2019ustc/DocTr for further comparison.
|
[
{
"version": "v1",
"created": "Mon, 25 Oct 2021 13:27:10 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Oct 2022 06:29:24 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Feng",
"Hao",
""
],
[
"Wang",
"Yuechen",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Deng",
"Jiajun",
""
],
[
"Li",
"Houqiang",
""
]
] |
new_dataset
| 0.999145 |
2111.05223
|
Ivan Heibi
|
Ivan Heibi, Silvio Peroni
|
A quantitative and qualitative open citation analysis of retracted
articles in the humanities
| null | null | null | null |
cs.DL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we show and discuss the results of a quantitative and
qualitative analysis of open citations to retracted publications in the
humanities domain. Our study was conducted by selecting retracted papers in the
humanities domain and marking their main characteristics (e.g., retraction
reason). Then, we gathered the citing entities and annotated their basic
metadata (e.g., title, venue, subject, etc.) and the characteristics of their
in-text citations (e.g., intent, sentiment, etc.). Using these data, we
performed a quantitative and qualitative study of retractions in the
humanities, presenting descriptive statistics and a topic modeling analysis of
the citing entities' abstracts and the in-text citation contexts. As part of
our main findings, we noticed that there was no drop in the overall number of
citations after the year of retraction, with few entities which have either
mentioned the retraction or expressed a negative sentiment toward the cited
publication. In addition, on several occasions, we noticed a higher
concern/awareness when it was about citing a retracted publication, by the
citing entities belonging to the health sciences domain, if compared to the
humanities and the social science domains. Philosophy, arts, and history are
the humanities areas that showed the higher concern toward the retraction.
|
[
{
"version": "v1",
"created": "Tue, 9 Nov 2021 16:02:16 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 10:51:06 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Oct 2022 14:11:54 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Heibi",
"Ivan",
""
],
[
"Peroni",
"Silvio",
""
]
] |
new_dataset
| 0.991365 |
2111.07867
|
Zichao Zhang
|
Zichao Zhang, Melda Yuksel, Halim Yanikomeroglu
|
Faster-than-Nyquist Signaling for MIMO Communications
|
Submitted to IEEE Transactions on Wireless Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Faster-than-Nyquist (FTN) signaling is a non-orthogonal transmission
technique, which has the potential to provide significant spectral efficiency
improvement. This paper studies the capacity of FTN signaling for both
frequency-flat and for frequency-selective multiple-input multiple-output
(MIMO) channels. We show that precoding in time and waterfilling in space is
capacity achieving for frequency-flat MIMO FTN. For frequency-selective fading,
joint waterfilling in time, space and frequency is required.
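As a concrete instance of the waterfilling step mentioned above, here is a bisection-based power allocation over parallel eigenmodes; the gains and power budget are illustrative, not from the paper:
```python
import numpy as np

def waterfill(gains, total_power, iters=60):
    """Allocate power p_i = max(mu - 1/g_i, 0) with sum(p_i) = P,
    solving for the water level mu by bisection."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + inv.max()   # root is bracketed here
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv, 0.0)
        lo, hi = (mu, hi) if p.sum() < total_power else (lo, mu)
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

gains = [2.0, 1.0, 0.25]          # effective channel gains (eigenmodes)
p = waterfill(gains, total_power=3.0)
print(p, p.sum())                 # stronger modes get more power
```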
|
[
{
"version": "v1",
"created": "Mon, 15 Nov 2021 16:15:11 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Nov 2021 02:08:57 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Mar 2022 20:09:01 GMT"
},
{
"version": "v4",
"created": "Sat, 23 Jul 2022 15:49:09 GMT"
},
{
"version": "v5",
"created": "Tue, 27 Sep 2022 04:05:44 GMT"
},
{
"version": "v6",
"created": "Thu, 29 Sep 2022 14:24:18 GMT"
},
{
"version": "v7",
"created": "Fri, 7 Oct 2022 20:49:34 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Zhang",
"Zichao",
""
],
[
"Yuksel",
"Melda",
""
],
[
"Yanikomeroglu",
"Halim",
""
]
] |
new_dataset
| 0.985707 |
2202.02056
|
Didem Makaroglu
|
Didem Makaroglu, Altan Cakir, Behcet Ugur Toreyin
|
Unsupervised Behaviour Analysis of News Consumption in Turkish Media
|
Submitted to Big Data Research
| null | null | null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clickstream data, which come with a massive volume generated by human
activities on websites, have become a prominent feature for identifying
readers' characteristics by newsrooms after the digitization of news outlets.
Although clickstream data follow a similar logic across websites, they have
inherent limitations in recognizing human behaviours from a broad perspective,
which brings the need to limit the problem to niche areas.
This study investigates the anonymized readers' click activities on the
organizations' websites to identify news consumption patterns following
referrals from Twitter, where readers arrive incidentally but are mainly routed
to news content. Methodologies for ensemble cluster analysis with mixed-type
embedding strategies are applied and compared to find similar reader groups and
interests independent of time. Various internal validation perspectives are
used to determine the optimality of the quality of clusters, where the Calinski
Harabasz Index (CHI) is found to give a generalizable result. Our findings
demonstrate that clustering the mixed-type dataset approaches the optimal
internal validation scores, which we use to discriminate between clusters and
algorithms across the applied strategies, when the data are embedded by
Uniform Manifold Approximation and Projection (UMAP) and a consensus function
is used as a key to access the most applicable hyperparameter configurations
in the given ensemble rather than using the consensus function results
directly. Evaluation of the
resulting clusters highlights specific clusters repeatedly present in the
separated monthly samples by Adjusted Mutual Information scores greater than
0.5, which provide insights to the news organizations and overcome the
degradation of the modeling behaviours due to the change in the interest over
time.
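A hedged sketch of the validation loop this describes, UMAP embedding followed by candidate clusterings scored with the Calinski-Harabasz Index, using synthetic stand-in features (assumes the umap-learn package is installed):
```python
import numpy as np
import umap                                   # umap-learn package
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

X = np.random.default_rng(0).normal(size=(300, 12))   # stand-in features
emb = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

# Score each candidate cluster count with CHI and keep the best one.
best = max(
    ((k, calinski_harabasz_score(emb, KMeans(n_clusters=k, n_init=10,
                                             random_state=0).fit_predict(emb)))
     for k in range(2, 8)),
    key=lambda kv: kv[1],
)
print("best k by CHI:", best)
```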
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 09:57:13 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Oct 2022 18:19:49 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Makaroglu",
"Didem",
""
],
[
"Cakir",
"Altan",
""
],
[
"Toreyin",
"Behcet Ugur",
""
]
] |
new_dataset
| 0.950453 |
2202.13665
|
Tomer Gafni
|
Tomer Gafni, Michal Yemini, Kobi Cohen
|
Restless Multi-Armed Bandits under Exogenous Global Markov Process
|
Accepted for presentation at IEEE ICASSP 2022. arXiv admin note:
substantial text overlap with arXiv:2112.09484
| null | null | null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider an extension to the restless multi-armed bandit (RMAB) problem
with unknown arm dynamics, where an unknown exogenous global Markov process
governs the rewards distribution of each arm. Under each global state, the
rewards process of each arm evolves according to an unknown Markovian rule,
which is non-identical among different arms. At each time, a player chooses an
arm out of N arms to play, and receives a random reward from a finite set of
reward states. The arms are restless, that is, their local state evolves
regardless of the player's actions. Motivated by recent studies on related RMAB
settings, the regret is defined as the reward loss with respect to a player
that knows the dynamics of the problem, and plays at each time t the arm that
maximizes the expected immediate value. The objective is to develop an
arm-selection policy that minimizes the regret. To that end, we develop the
Learning under Exogenous Markov Process (LEMP) algorithm. We analyze LEMP
theoretically and establish a finite-sample bound on the regret. We show that
LEMP achieves a logarithmic regret order with time. We further analyze LEMP
numerically and present simulation results that support the theoretical
findings and demonstrate that LEMP significantly outperforms alternative
algorithms.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 10:29:42 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 11:31:49 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Gafni",
"Tomer",
""
],
[
"Yemini",
"Michal",
""
],
[
"Cohen",
"Kobi",
""
]
] |
new_dataset
| 0.999235 |
2203.02035
|
M Charity
|
M Charity, Isha Dave, Ahmed Khalifa, Julian Togelius
|
Baba is Y'all 2.0: Design and Investigation of a Collaborative
Mixed-Initiative System
|
15 pages
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a new version of the mixed-initiative collaborative
level designing system: Baba is Y'all, as well as the results of a user study
on the system. Baba is Y'all is a prototype for AI-assisted game design in
collaboration with others. The updated version includes a more user-friendly
interface, a better level-evolver and recommendation system, and extended site
features. The system was evaluated via a user study where participants were
required to play a previously submitted level from the site and then create
their own levels using the editor. They reported on their individual process
creating the level and their overall experience interacting with the site. The
results have shown both the benefits and limitations of this mixed-initiative
system and how it can help with creating a diversity of `Baba is You' levels
that are both human and AI designed while maintaining their quality.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 22:04:15 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2022 13:19:20 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Charity",
"M",
""
],
[
"Dave",
"Isha",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Togelius",
"Julian",
""
]
] |
new_dataset
| 0.997957 |
2205.05177
|
Chirag Raman
|
Chirag Raman, Jose Vargas-Quiros, Stephanie Tan, Ashraful Islam, Ekin
Gedik, Hayley Hung
|
ConfLab: A Data Collection Concept, Dataset, and Benchmark for Machine
Analysis of Free-Standing Social Interactions in the Wild
|
In Proceedings of the Neural Information Processing Systems Track on
Datasets and Benchmarks (NeurIPS D&B)
| null | null | null |
cs.MM cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recording the dynamics of unscripted human interactions in the wild is
challenging due to the delicate trade-offs between several factors: participant
privacy, ecological validity, data fidelity, and logistical overheads. To
address these, following a 'datasets for the community by the community' ethos,
we propose the Conference Living Lab (ConfLab): a new concept for multimodal
multisensor data collection of in-the-wild free-standing social conversations.
For the first instantiation of ConfLab described here, we organized a real-life
professional networking event at a major international conference. Involving 48
conference attendees, the dataset captures a diverse mix of status,
acquaintance, and networking motivations. Our capture setup improves upon the
data fidelity of prior in-the-wild datasets while retaining privacy
sensitivity: 8 videos (1920x1080, 60 fps) from a non-invasive overhead view,
and custom wearable sensors with onboard recording of body motion (full 9-axis
IMU), privacy-preserving low-frequency audio (1250 Hz), and Bluetooth-based
proximity. Additionally, we developed custom solutions for distributed hardware
synchronization at acquisition and time-efficient continuous annotation of body
keypoints and actions at high sampling rates. Our benchmarks showcase some of
the open research tasks related to in-the-wild privacy-preserving social data
analysis: keypoints detection from overhead camera views, skeleton-based
no-audio speaker detection, and F-formation detection.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 21:30:10 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Jul 2022 10:35:21 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Oct 2022 18:30:10 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Raman",
"Chirag",
""
],
[
"Vargas-Quiros",
"Jose",
""
],
[
"Tan",
"Stephanie",
""
],
[
"Islam",
"Ashraful",
""
],
[
"Gedik",
"Ekin",
""
],
[
"Hung",
"Hayley",
""
]
] |
new_dataset
| 0.999818 |
2205.10184
|
Shane Gilroy
|
Shane Gilroy, Darragh Mullins, Edward Jones, Ashkan Parsi and Martin
Glavin
|
E-Scooter Rider Detection and Classification in Dense Urban Environments
| null | null |
10.1016/j.rineng.2022.100677
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Accurate detection and classification of vulnerable road users is a safety
critical requirement for the deployment of autonomous vehicles in heterogeneous
traffic. Although similar in physical appearance to pedestrians, e-scooter
riders follow distinctly different characteristics of movement and can reach
speeds of up to 45kmph. The challenge of detecting e-scooter riders is
exacerbated in urban environments where the frequency of partial occlusion is
increased as riders navigate between vehicles, traffic infrastructure and other
road users. This can lead to the non-detection or mis-classification of
e-scooter riders as pedestrians, providing inaccurate information for accident
mitigation and path planning in autonomous vehicle applications. This research
introduces a novel benchmark for partially occluded e-scooter rider detection
to facilitate the objective characterization of detection models. A novel,
occlusion-aware method of e-scooter rider detection is presented that achieves
a 15.93% improvement in detection performance over the current state of the
art.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 13:50:36 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Gilroy",
"Shane",
""
],
[
"Mullins",
"Darragh",
""
],
[
"Jones",
"Edward",
""
],
[
"Parsi",
"Ashkan",
""
],
[
"Glavin",
"Martin",
""
]
] |
new_dataset
| 0.965761 |
2205.12522
|
Ashish V. Thapliyal
|
Ashish V. Thapliyal, Jordi Pont-Tuset, Xi Chen, Radu Soricut
|
Crossmodal-3600: A Massively Multilingual Multimodal Evaluation Dataset
|
EMNLP 2022
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Research in massively multilingual image captioning has been severely
hampered by a lack of high-quality evaluation datasets. In this paper we
present the Crossmodal-3600 dataset (XM3600 in short), a geographically diverse
set of 3600 images annotated with human-generated reference captions in 36
languages. The images were selected from across the world, covering regions
where the 36 languages are spoken, and annotated with captions that achieve
consistency in terms of style across all languages, while avoiding annotation
artifacts due to direct translation. We apply this benchmark to model selection
for massively multilingual image captioning models, and show superior
correlation results with human evaluations when using XM3600 as golden
references for automatic metrics.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 06:30:19 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2022 10:39:10 GMT"
}
] | 2022-10-11T00:00:00 |
[
[
"Thapliyal",
"Ashish V.",
""
],
[
"Pont-Tuset",
"Jordi",
""
],
[
"Chen",
"Xi",
""
],
[
"Soricut",
"Radu",
""
]
] |
new_dataset
| 0.999093 |