id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2305.05977
|
Ilan Tennenhouse
|
Ilan Tennenhouse and Netanel Raviv
|
Transaction Confirmation in Coded Blockchain
|
To appear in 2023 IEEE International Symposium on Information Theory
| null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As blockchains continue to seek to scale to a larger number of nodes, the
communication complexity of protocols has become a significant priority as the
network can quickly become overburdened. Several schemes have attempted to
address this, one of which uses coded computation to lighten the load. Here we
seek to address one issue with all such coded blockchain schemes known to the
authors: transaction confirmation. In a coded blockchain, only the leader has
access to the uncoded block, while the nodes receive encoded data that makes it
effectively impossible for them to identify which transactions were included in
the block. As a result, a Byzantine leader might choose not to notify a sender
or receiver of a transaction that the transaction went into the block, and even
with an honest leader, they would not be able to produce a proof of a
transaction's inclusion. To address this, we have constructed a protocol to
send the nodes enough information so that a client sending or receiving a
transaction is guaranteed to not only be notified but also to receive a proof
of that transaction's inclusion in the block. Crucially, we do this without
substantially increasing the bit complexity of the original coded blockchain
protocol.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 08:38:14 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Tennenhouse",
"Ilan",
""
],
[
"Raviv",
"Netanel",
""
]
] |
new_dataset
| 0.960434 |
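The abstract above centers on giving clients a proof of a transaction's inclusion in a block. The standard primitive for such proofs is a Merkle tree; the paper's actual coded-blockchain protocol differs, but as a hedged illustration of what an inclusion proof looks like (all names and the toy transactions are hypothetical):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Build the tree level by level; duplicate the last node on odd levels.
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Collect the sibling hash at each level, tagged with its side.
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], 'left' if sib < index else 'right'))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sib, side in proof:
        node = h(sib + node) if side == 'left' else h(node + sib)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(txs)
assert verify(b"tx-b", merkle_proof(txs, 1), root)
assert not verify(b"tx-x", merkle_proof(txs, 1), root)
```

A proof is logarithmic in the number of transactions, which is why it can be handed to a client without shipping the whole block.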
2305.05991
|
Chu Chen
|
Chu Chen, Yanqi Ma, Bingcheng Dong, Junjie Cao
|
DMNR: Unsupervised De-noising of Point Clouds Corrupted by Airborne
Particles
|
8 pages, 6 figures, 15 references, submitted paper
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
LiDAR sensors are critical for autonomous driving and robotics applications
due to their ability to provide accurate range measurements and their
robustness to lighting conditions. However, airborne particles, such as fog,
rain, snow, and dust, degrade their performance, and encountering such
inclement environmental conditions outdoors is inevitable. A straightforward
approach would be to remove these particles via supervised semantic
segmentation, but annotating them point-wise is too laborious. To address this
problem and enhance perception under inclement conditions, we develop two
dynamic filtering methods called Dynamic Multi-threshold Noise Removal (DMNR)
and DMNR-H by accurate analysis of the position distribution and intensity
characteristics of noisy points and clean points on publicly available WADS and
DENSE datasets. Both DMNR and DMNR-H outperform state-of-the-art unsupervised
methods by a significant margin on the two datasets and are slightly better
than supervised deep learning-based methods. Furthermore, our methods are more
robust to different LiDAR sensors and airborne particles, such as snow and fog.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 08:58:54 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Chen",
"Chu",
""
],
[
"Ma",
"Yanqi",
""
],
[
"Dong",
"Bingcheng",
""
],
[
"Cao",
"Junjie",
""
]
] |
new_dataset
| 0.999632 |
2305.05992
|
Jianbin Zheng
|
Jianbin Zheng, Daqing Liu, Chaoyue Wang, Minghui Hu, Zuopeng Yang,
Changxing Ding, Dacheng Tao
|
MMoT: Mixture-of-Modality-Tokens Transformer for Composed Multimodal
Conditional Image Synthesis
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing multimodal conditional image synthesis (MCIS) methods generate
images conditioned on combinations of various modalities but require all of
them to conform exactly, hindering synthesis controllability and leaving the
potential of cross-modality under-exploited. To this end, we
propose to generate images conditioned on the compositions of multimodal
control signals, where modalities are imperfectly complementary, i.e., composed
multimodal conditional image synthesis (CMCIS). Specifically, we observe two
challenging issues of the proposed CMCIS task, i.e., the modality coordination
problem and the modality imbalance problem. To tackle these issues, we
introduce a Mixture-of-Modality-Tokens Transformer (MMoT) that adaptively fuses
fine-grained multimodal control signals, a multimodal balanced training loss to
stabilize the optimization of each modality, and a multimodal sampling guidance
to balance the strength of each modality control signal. Comprehensive
experimental results demonstrate that MMoT achieves superior performance on
both unimodal conditional image synthesis (UCIS) and MCIS tasks with
high-quality and faithful image synthesis on complex multimodal conditions. The
project website is available at https://jabir-zheng.github.io/MMoT.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 09:00:04 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Zheng",
"Jianbin",
""
],
[
"Liu",
"Daqing",
""
],
[
"Wang",
"Chaoyue",
""
],
[
"Hu",
"Minghui",
""
],
[
"Yang",
"Zuopeng",
""
],
[
"Ding",
"Changxing",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.99966 |
2305.05994
|
Siyu Yuan
|
Siyu Yuan, Jiangjie Chen, Changzhi Sun, Jiaqing Liang, Yanghua Xiao,
Deqing Yang
|
ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A
Million-scale Knowledge Base
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Analogical reasoning is a fundamental cognitive ability of humans. However,
current language models (LMs) still struggle to achieve human-like performance
in analogical reasoning tasks due to a lack of resources for model training. In
this work, we address this gap by proposing ANALOGYKB, a million-scale analogy
knowledge base (KB) derived from existing knowledge graphs (KGs). ANALOGYKB
identifies two types of analogies from the KGs: 1) analogies of the same
relations, which can be directly extracted from the KGs, and 2) analogies of
analogous relations, which are identified with a selection and filtering
pipeline enabled by large LMs (InstructGPT), followed by minor human efforts
for data quality control. Evaluations on a series of datasets of two analogical
reasoning tasks (analogy recognition and generation) demonstrate that ANALOGYKB
successfully enables LMs to achieve much better results than previous
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 09:03:01 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Yuan",
"Siyu",
""
],
[
"Chen",
"Jiangjie",
""
],
[
"Sun",
"Changzhi",
""
],
[
"Liang",
"Jiaqing",
""
],
[
"Xiao",
"Yanghua",
""
],
[
"Yang",
"Deqing",
""
]
] |
new_dataset
| 0.994441 |
2305.06006
|
Bastian Heinlein
|
Bastian Heinlein, Lukas Brand, Malcolm Egan, Maximilian Sch\"afer,
Robert Schober, Sebastian Lotter
|
Stochastic Chemical Reaction Networks for MAP Detection in Cellular
Receivers
|
7 pages, 4 figures. This paper has been submitted to the 10th ACM
International Conference on Nanoscale Computing and Communication, Coventry,
UK
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to fully exploit the potential of molecular communication (MC) for
intra-body communication, practically implementable cellular receivers are an
important long-term goal. A variety of receiver architectures based on chemical
reaction networks (CRNs) and gene-regulatory networks (GRNs) has been
introduced in the literature, because cells use these concepts to perform
computations in nature. However, practical feasibility is still limited by
stochastic fluctuations of chemical reactions and long computation times in
GRNs. Therefore, in this paper, we propose two receiver designs based on
stochastic CRNs, i.e., CRNs that perform computations by exploiting the
intrinsic fluctuations of chemical reactions with very low molecule counts. The
first CRN builds on a recent result from chemistry that showed how Boltzmann
machines (BMs), a commonly used machine learning model, can be implemented with
CRNs. We show that BMs with optimal parameter values and their CRN
implementations can act as maximum-a-posteriori (MAP) detectors. Furthermore,
we show that BMs can be efficiently trained from simulation data to achieve
close-to-MAP performance. While this approach yields a fixed CRN once deployed,
our second approach based on a manually designed CRN can be trained with pilot
symbols even within the cell and thus adapt to changing channel conditions. We
extend the literature by showing that practical robust detectors can achieve
close-to-MAP performance even without explicit channel knowledge.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 09:33:29 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Heinlein",
"Bastian",
""
],
[
"Brand",
"Lukas",
""
],
[
"Egan",
"Malcolm",
""
],
[
"Schäfer",
"Maximilian",
""
],
[
"Schober",
"Robert",
""
],
[
"Lotter",
"Sebastian",
""
]
] |
new_dataset
| 0.982709 |
2305.06043
|
Hongwei Sheng
|
Hongwei Sheng, Xin Yu, Feiyu Wang, MD Wahiduzzaman Khan, Hexuan Weng,
Sahar Shariflou, S.Mojtaba Golzan
|
Autonomous Stabilization of Retinal Videos for Streamlining Assessment
of Spontaneous Venous Pulsations
|
EMBC, 4 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spontaneous retinal Venous Pulsations (SVP) are rhythmic changes in the
caliber of the central retinal vein and are observed in the optic disc region
(ODR) of the retina. Their absence is a critical indicator of various ocular or
neurological abnormalities. Recent advances in imaging technology have enabled
the development of portable smartphone-based devices for observing the retina
and assessing SVPs. However, the quality of smartphone-based retinal videos is
often poor due to noise and image jitter, which, in turn, can severely
obstruct the observation of SVPs. In this work, we developed a fully automated
retinal video stabilization method that enables the examination of SVPs
captured by various mobile devices. Specifically, we first propose an ODR
Spatio-Temporal Localization (ODR-STL) module to localize visible ODR and
remove noisy and jittering frames. Then, we introduce a Noise-Aware Template
Matching (NATM) module to stabilize high-quality video segments at a fixed
position in the field of view. After the processing, the SVPs can be easily
observed in the stabilized videos, significantly facilitating user
observations. Furthermore, our method is cost-effective and has been tested in
both subjective and objective evaluations. Both of the evaluations support its
effectiveness in facilitating the observation of SVPs. This can improve the
timely diagnosis and treatment of associated diseases, making it a valuable
tool for eye health professionals.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 10:52:11 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Sheng",
"Hongwei",
""
],
[
"Yu",
"Xin",
""
],
[
"Wang",
"Feiyu",
""
],
[
"Khan",
"MD Wahiduzzaman",
""
],
[
"Weng",
"Hexuan",
""
],
[
"Shariflou",
"Sahar",
""
],
[
"Golzan",
"S. Mojtaba",
""
]
] |
new_dataset
| 0.990764 |
2305.06074
|
Nikolas Vitsakis
|
Nikolas Vitsakis, Amit Parekh, Tanvi Dinkar, Gavin Abercrombie,
Ioannis Konstas, Verena Rieser
|
iLab at SemEval-2023 Task 11 Le-Wi-Di: Modelling Disagreement or
Modelling Perspectives?
|
To appear in the Proceedings of the 17th International Workshop on
Semantic Evaluation (SemEval-2023). Association for Computational
Linguistics, 2023
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
There are two competing approaches for modelling annotator disagreement:
distributional soft-labelling approaches (which aim to capture the level of
disagreement) and modelling perspectives of individual annotators or groups
thereof. We adapt a multi-task architecture -- which has previously shown
success in modelling perspectives -- to evaluate its performance on the SEMEVAL
Task 11. We do so by combining both approaches, i.e. predicting individual
annotator perspectives as an interim step towards predicting annotator
disagreement. Despite its previous success, we found that a multi-task approach
performed poorly on datasets which contained distinct annotator opinions,
suggesting that this approach may not always be suitable when modelling
perspectives. Furthermore, our results show that while strongly
perspectivist approaches might not achieve state-of-the-art performance
according to evaluation metrics used by distributional approaches, our approach
allows for a more nuanced understanding of individual perspectives present in
the data. We argue that perspectivist approaches are preferable because they
enable decision makers to amplify minority views, and that it is important to
re-evaluate metrics to reflect this goal.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 11:55:17 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Vitsakis",
"Nikolas",
""
],
[
"Parekh",
"Amit",
""
],
[
"Dinkar",
"Tanvi",
""
],
[
"Abercrombie",
"Gavin",
""
],
[
"Konstas",
"Ioannis",
""
],
[
"Rieser",
"Verena",
""
]
] |
new_dataset
| 0.994064 |
2305.06099
|
Long Ma
|
Long Ma, Kai Lu, Tianbo Che, Hailong Huang, Weiguo Gao, Xuan Li
|
PAI at SemEval-2023 Task 2: A Universal System for Named Entity
Recognition with External Entity Information
|
win 2 first places, 4 second places, and 1 third place out of 13
tracks
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The MultiCoNER II task aims to detect complex, ambiguous, and fine-grained
named entities in low-context situations and noisy scenarios like the presence
of spelling mistakes and typos for multiple languages. The task poses
significant challenges due to the scarcity of contextual information, the high
granularity of the entities (up to 33 classes), and the interference of noisy
data. To address these issues, our team {\bf PAI} proposes a universal Named
Entity Recognition (NER) system that integrates external entity information to
improve performance. Specifically, our system retrieves entities with
properties from the knowledge base (i.e. Wikipedia) for a given text, then
concatenates entity information with the input sentence and feeds it into
Transformer-based models. Finally, our system wins 2 first places, 4 second
places, and 1 third place out of 13 tracks. The code is publicly available at
\url{https://github.com/diqiuzhuanzhuan/semeval-2023}.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 12:40:48 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Ma",
"Long",
""
],
[
"Lu",
"Kai",
""
],
[
"Che",
"Tianbo",
""
],
[
"Huang",
"Hailong",
""
],
[
"Gao",
"Weiguo",
""
],
[
"Li",
"Xuan",
""
]
] |
new_dataset
| 0.998209 |
2305.06123
|
Marc Leinweber
|
Marc Leinweber and Hannes Hartenstein
|
Let It TEE: Asynchronous Byzantine Atomic Broadcast with $n \geq 2f+1$
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Asynchronous Byzantine Atomic Broadcast (ABAB) promises, in comparison to
partially synchronous approaches, simplicity in implementation, increased
performance, and increased robustness. For partially synchronous approaches, it
is well-known that small Trusted Execution Environments (TEE), e.g., MinBFT's
unique sequential identifier generator (USIG), are capable of reducing the
communication effort while increasing the fault tolerance. For ABAB, the
research community assumes that the use of TEEs increases performance and
robustness. However, despite the existence of a fault-model compiler, a
concrete TEE-based approach is not directly available yet. In this brief
announcement, we show that the recently proposed DAG-Rider approach can be
transformed to provide ABAB with $n\geq 2f+1$ processes, of which $f$ are
faulty. We leverage MinBFT's USIG to implement Reliable Broadcast with $n>f$
processes and show that the quorum-critical proofs of DAG-Rider still hold when
adapting the quorum size to $\lfloor \frac{n}{2} \rfloor + 1$.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 13:11:35 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Leinweber",
"Marc",
""
],
[
"Hartenstein",
"Hannes",
""
]
] |
new_dataset
| 0.986158 |
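The abstract above adapts DAG-Rider's quorum size to $\lfloor n/2 \rfloor + 1$ under $n \geq 2f+1$. The arithmetic behind that choice can be checked directly; this is an illustrative sanity check of the bounds stated in the abstract, not code from the paper:

```python
def quorum(n: int) -> int:
    # Quorum size used when adapting DAG-Rider with a TEE: floor(n/2) + 1.
    return n // 2 + 1

def max_faults(n: int) -> int:
    # Tolerated Byzantine faults under n >= 2f + 1, i.e. f = floor((n-1)/2).
    return (n - 1) // 2

for n in range(3, 50):
    f, q = max_faults(n), quorum(n)
    assert q > f            # every quorum contains a correct process
    assert n - f >= q       # the correct processes can always form a quorum
    assert 2 * q - n >= 1   # any two quorums intersect
```

These three properties are exactly what quorum-critical proofs rely on, which is why halving the quorum (relative to the usual $2f+1$ out of $3f+1$) is safe once a TEE rules out equivocation.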
2305.06133
|
Chenghao Li
|
Chenghao Li, Chaoning Zhang
|
When ChatGPT for Computer Vision Will Come? From 2D to 3D
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
ChatGPT and its improved variant GPT-4 have revolutionized the NLP field, with
a single model solving almost all text-related tasks. However, no such model
exists for computer vision, especially for 3D vision. This article first
provides a brief overview of the progress of deep learning in the text, image,
and 3D fields from the model perspective. Moreover, this work further discusses how
AIGC evolves from the data perspective. On top of that, this work presents an
outlook on the development of AIGC in 3D from the data perspective.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 13:29:51 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Li",
"Chenghao",
""
],
[
"Zhang",
"Chaoning",
""
]
] |
new_dataset
| 0.98326 |
2305.06147
|
Md Tahmid Rahman Laskar
|
Md Tahmid Rahman Laskar, Mizanur Rahman, Israt Jahan, Enamul Hoque,
Jimmy Huang
|
CQSumDP: A ChatGPT-Annotated Resource for Query-Focused Abstractive
Summarization Based on Debatepedia
|
10 Pages + References
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Debatepedia is a publicly available dataset consisting of arguments and
counter-arguments on controversial topics that has been widely used for the
single-document query-focused abstractive summarization task in recent years.
However, it has recently been found that this dataset is limited by noise and
that most of its queries have no relevance to the respective document. In this
paper, we present a methodology for cleaning the Debatepedia
dataset by leveraging the generative power of large language models to make it
suitable for query-focused abstractive summarization. More specifically, we
harness the language generation capabilities of ChatGPT to regenerate its
queries. We evaluate the effectiveness of the proposed ChatGPT annotated
version of the Debatepedia dataset using several benchmark summarization models
and demonstrate that the newly annotated version of Debatepedia outperforms the
original dataset in terms of both query relevance as well as summary generation
quality. We will make this annotated and cleaned version of the dataset
publicly available.
|
[
{
"version": "v1",
"created": "Fri, 31 Mar 2023 15:39:54 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Laskar",
"Md Tahmid Rahman",
""
],
[
"Rahman",
"Mizanur",
""
],
[
"Jahan",
"Israt",
""
],
[
"Hoque",
"Enamul",
""
],
[
"Huang",
"Jimmy",
""
]
] |
new_dataset
| 0.999801 |
2305.06156
|
Nghi D. Q. Bui
|
Dung Nguyen Manh, Nam Le Hai, Anh T. V. Dau, Anh Minh Nguyen, Khanh
Nghiem, Jin Guo, Nghi D. Q. Bui
|
The Vault: A Comprehensive Multilingual Dataset for Advancing Code
Understanding and Generation
| null | null | null | null |
cs.CL cs.AI cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present The Vault, an open-source, large-scale code-text dataset designed
to enhance the training of code-focused large language models (LLMs). Existing
open-source datasets for training code-based LLMs often face challenges in
terms of size, quality (due to noisy signals), and format (only containing code
function and text explanation pairings). The Vault overcomes these limitations
by providing 40 million code-text pairs across 10 popular programming
languages, thorough cleaning for 10+ prevalent issues, and various levels of
code-text pairings, including class, function, and line levels. Researchers and
practitioners can utilize The Vault for training diverse code-focused LLMs or
incorporate the provided data cleaning methods and scripts to improve their
datasets. By employing The Vault as the training dataset for code-centric LLMs,
we anticipate significant advancements in code understanding and generation
tasks, fostering progress in both artificial intelligence research and software
development practices.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 09:35:03 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Manh",
"Dung Nguyen",
""
],
[
"Hai",
"Nam Le",
""
],
[
"Dau",
"Anh T. V.",
""
],
[
"Nguyen",
"Anh Minh",
""
],
[
"Nghiem",
"Khanh",
""
],
[
"Guo",
"Jin",
""
],
[
"Bui",
"Nghi D. Q.",
""
]
] |
new_dataset
| 0.999173 |
2305.06158
|
Guangyuan Shen
|
Guangyuan Shen, Shengjie Sun, Dehong Gao, Libin Yang, Yongping Shi and
Wei Ning
|
EdgeNet : Encoder-decoder generative Network for Auction Design in
E-commerce Online Advertising
|
under review. arXiv admin note: substantial text overlap with
arXiv:2106.03593 by other authors
| null | null | null |
cs.IR cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present a new encoder-decoder generative network dubbed EdgeNet, which
introduces a novel encoder-decoder framework for data-driven auction design in
online e-commerce advertising. We break with the Generalized Second-Price
(GSP) neural auction paradigm and improve the utilization efficiency of data
while preserving the economic characteristics of the auction mechanism.
Specifically, EdgeNet introduces a transformer-based encoder to better capture
the mutual influence among different candidate advertisements. In contrast to
GSP-based neural auction models, we design an autoregressive decoder to better
utilize the rich context information in online advertising auctions. EdgeNet is
conceptually simple and easy to extend to existing end-to-end neural auction
frameworks. We validate the efficiency of EdgeNet on a wide range of e-commerce
advertising auctions, demonstrating its potential to improve user experience
and platform revenue.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 09:14:28 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Shen",
"Guangyuan",
""
],
[
"Sun",
"Shengjie",
""
],
[
"Gao",
"Dehong",
""
],
[
"Yang",
"Libin",
""
],
[
"Shi",
"Yongping",
""
],
[
"Ning",
"Wei",
""
]
] |
new_dataset
| 0.993515 |
2305.06161
|
Harm de Vries
|
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis
Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim,
Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier
Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, Jo\~ao Monteiro, Oleh
Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh
Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang,
Rudra Murthy, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco
Zocca, Manan Dey, Zhihan Zhang, Nour Fahmy, Urvashi Bhattacharyya, Wenhao Yu,
Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov,
Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger,
Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer
Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor,
Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Mu\~noz
Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, Harm de
Vries
|
StarCoder: may the source be with you!
| null | null | null | null |
cs.CL cs.AI cs.PL cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The BigCode community, an open-scientific collaboration working on the
responsible development of Large Language Models for Code (Code LLMs),
introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context
length, infilling capabilities and fast large-batch inference enabled by
multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced
from The Stack, a large collection of permissively licensed GitHub repositories
with inspection tools and an opt-out process. We fine-tuned StarCoderBase on
35B Python tokens, resulting in the creation of StarCoder. We perform the most
comprehensive evaluation of Code LLMs to date and show that StarCoderBase
outperforms every open Code LLM that supports multiple programming languages
and matches or outperforms the OpenAI code-cushman-001 model. Furthermore,
StarCoder outperforms every model that is fine-tuned on Python, can be prompted
to achieve 40\% pass@1 on HumanEval, and still retains its performance on other
programming languages. We take several important steps towards a safe
open-access model release, including an improved PII redaction pipeline and a
novel attribution tracing tool, and make the StarCoder models publicly
available under a more commercially viable version of the Open Responsible AI
Model license.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 08:16:42 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Li",
"Raymond",
""
],
[
"Allal",
"Loubna Ben",
""
],
[
"Zi",
"Yangtian",
""
],
[
"Muennighoff",
"Niklas",
""
],
[
"Kocetkov",
"Denis",
""
],
[
"Mou",
"Chenghao",
""
],
[
"Marone",
"Marc",
""
],
[
"Akiki",
"Christopher",
""
],
[
"Li",
"Jia",
""
],
[
"Chim",
"Jenny",
""
],
[
"Liu",
"Qian",
""
],
[
"Zheltonozhskii",
"Evgenii",
""
],
[
"Zhuo",
"Terry Yue",
""
],
[
"Wang",
"Thomas",
""
],
[
"Dehaene",
"Olivier",
""
],
[
"Davaadorj",
"Mishig",
""
],
[
"Lamy-Poirier",
"Joel",
""
],
[
"Monteiro",
"João",
""
],
[
"Shliazhko",
"Oleh",
""
],
[
"Gontier",
"Nicolas",
""
],
[
"Meade",
"Nicholas",
""
],
[
"Zebaze",
"Armel",
""
],
[
"Yee",
"Ming-Ho",
""
],
[
"Umapathi",
"Logesh Kumar",
""
],
[
"Zhu",
"Jian",
""
],
[
"Lipkin",
"Benjamin",
""
],
[
"Oblokulov",
"Muhtasham",
""
],
[
"Wang",
"Zhiruo",
""
],
[
"Murthy",
"Rudra",
""
],
[
"Stillerman",
"Jason",
""
],
[
"Patel",
"Siva Sankalp",
""
],
[
"Abulkhanov",
"Dmitry",
""
],
[
"Zocca",
"Marco",
""
],
[
"Dey",
"Manan",
""
],
[
"Zhang",
"Zhihan",
""
],
[
"Fahmy",
"Nour",
""
],
[
"Bhattacharyya",
"Urvashi",
""
],
[
"Yu",
"Wenhao",
""
],
[
"Singh",
"Swayam",
""
],
[
"Luccioni",
"Sasha",
""
],
[
"Villegas",
"Paulo",
""
],
[
"Kunakov",
"Maxim",
""
],
[
"Zhdanov",
"Fedor",
""
],
[
"Romero",
"Manuel",
""
],
[
"Lee",
"Tony",
""
],
[
"Timor",
"Nadav",
""
],
[
"Ding",
"Jennifer",
""
],
[
"Schlesinger",
"Claire",
""
],
[
"Schoelkopf",
"Hailey",
""
],
[
"Ebert",
"Jan",
""
],
[
"Dao",
"Tri",
""
],
[
"Mishra",
"Mayank",
""
],
[
"Gu",
"Alex",
""
],
[
"Robinson",
"Jennifer",
""
],
[
"Anderson",
"Carolyn Jane",
""
],
[
"Dolan-Gavitt",
"Brendan",
""
],
[
"Contractor",
"Danish",
""
],
[
"Reddy",
"Siva",
""
],
[
"Fried",
"Daniel",
""
],
[
"Bahdanau",
"Dzmitry",
""
],
[
"Jernite",
"Yacine",
""
],
[
"Ferrandis",
"Carlos Muñoz",
""
],
[
"Hughes",
"Sean",
""
],
[
"Wolf",
"Thomas",
""
],
[
"Guha",
"Arjun",
""
],
[
"von Werra",
"Leandro",
""
],
[
"de Vries",
"Harm",
""
]
] |
new_dataset
| 0.999745 |
2305.06173
|
Elwin Huaman
|
Elwin Huaman, David Lindemann, Valeria Caruso, Jorge Luis Huaman
|
QICHWABASE: A Quechua Language and Knowledge Base for Quechua
Communities
|
3 pages, 2 figures, submitted to The Terminology & Ontology: Theories
and applications Conference (TOTh 2023)
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Over the last decade, the Web has increasingly become a space of language and
knowledge representation. However, this holds only for widely spoken languages
and well-established communities, while minority communities and their
resources have received less attention. In this paper, we propose QICHWABASE to
support the harmonization of the Quechua language, knowledge, and community.
To do so, we adopt methods and tools that could become a game changer in favour
of Quechua communities around the world. We conclude that the methodology and
tools adopted in building QICHWABASE, which is a Wikibase instance, could
enhance the presence of minorities on the Web.
|
[
{
"version": "v1",
"created": "Sat, 29 Apr 2023 09:14:55 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Huaman",
"Elwin",
""
],
[
"Lindemann",
"David",
""
],
[
"Caruso",
"Valeria",
""
],
[
"Huaman",
"Jorge Luis",
""
]
] |
new_dataset
| 0.995823 |
2305.06185
|
Chidi Agbo
|
Chidi Agbo, Hoda Mehrpouyan
|
Conflict Analysis and Resolution of Safety and Security Boundary
Conditions for Industrial Control Systems
|
12 pages, 10 figures, 2022 6th International Conference on System
Reliability and Safety (ICSRS)|978-1-6654-7092-6 @IEEE
| null |
10.1109/ICSRS56243.2022.10067393
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Safety and security are the two most important properties of industrial
control systems (ICS), and their integration is necessary to ensure that safety
goals do not undermine security goals and vice versa. Sometimes, safety and
security co-engineering leads to conflicting requirements or violations capable
of impacting the normal behavior of the system. Identification, analysis, and
resolution of conflicts arising from safety and security co-engineering is a
major challenge and an under-researched area in safety-critical systems such as
ICS. This
paper presents an STPA-SafeSec-CDCL approach that addresses the challenge. Our
proposed methodology combines the STPA-SafeSec approach for safety and security
analysis and the Conflict-Driven Clause Learning (CDCL) approach for the
identification, analysis, and resolution of conflicts where conflicting
constraints are encoded in satisfiability (SAT) problems. We apply our
framework to the Tennessee Eastman Plant process model, a chemical process
model developed specifically for the study of industrial control processes, to
demonstrate how to use the proposed method. Our methodology goes beyond the
requirement analysis phase and can be applied to the early stages of system
design and development to increase system reliability, robustness, and
resilience.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 14:16:49 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Agbo",
"Chidi",
""
],
[
"Mehrpouyan",
"Hoda",
""
]
] |
new_dataset
| 0.991863 |
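The abstract above encodes conflicting safety/security constraints as satisfiability (SAT) problems. A minimal, hedged sketch of that idea, with entirely hypothetical requirements over Boolean design choices and a brute-force satisfiability check standing in for CDCL:

```python
from itertools import product

# Variables: remote_access, air_gap, patching (all hypothetical).
def safety(remote_access, air_gap, patching):
    # Safety wants timely patching, which here requires remote access.
    return patching and remote_access

def security(remote_access, air_gap, patching):
    # Security wants an air gap, which rules out remote access.
    return air_gap and not remote_access

models = [v for v in product([False, True], repeat=3)
          if safety(*v) and security(*v)]
# An empty model set means the conjunction is UNSAT: the two requirements
# conflict and must be resolved, e.g. by relaxing one of them.
assert models == []
```

A real CDCL solver reaches the same verdict without enumeration, and its learned clauses indicate which constraints participate in the conflict.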
2305.06194
|
Milad Azizkhani
|
Jia Shen, Yifan Wang, Milad Azizkhani, Deqiang Qiu, Yue Chen
|
Concentric Tube Robot Redundancy Resolution via Velocity/Compliance
Manipulability Optimization
|
8 pages, 5 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concentric Tube Robots (CTR) have the potential to enable effective minimally
invasive surgeries. While extensive modeling and control schemes have been
proposed in the past decade, limited efforts have been made to improve the
trajectory tracking performance from the perspective of manipulability, which
can be critical to generate safe motion and feasible actuator commands. In this
paper, we propose a gradient-based redundancy resolution framework that
optimizes velocity/compliance manipulability-based performance indices during
trajectory tracking for a kinematically redundant CTR. We efficiently calculate
the gradients of manipulabilities by propagating the first- and second-order
derivatives of state variables of the Cosserat rod model along the CTR arc
length, reducing the gradient computation time by 68\% compared to the finite
difference method. Task-specific performance indices are optimized by
projecting the gradient into the null-space of trajectory tracking. The
proposed method is validated in three exemplary scenarios that involve
trajectory tracking, obstacle avoidance, and external load compensation,
respectively. Simulation results show that the proposed method is able to
accomplish the required tasks while commonly used redundancy resolution
approaches underperform or even fail.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 14:26:33 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Shen",
"Jia",
""
],
[
"Wang",
"Yifan",
""
],
[
"Azizkhani",
"Milad",
""
],
[
"Qiu",
"Deqiang",
""
],
[
"Chen",
"Yue",
""
]
] |
new_dataset
| 0.986991 |
2305.06226
|
Piotr Sowinski
|
Piotr Sowinski, Maria Ganzha, Marcin Paprzycki
|
RiverBench: an Open RDF Streaming Benchmark Suite
|
RiverBench is available online here: https://w3id.org/riverbench
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
RDF data streaming has been explored by the Semantic Web community from many
angles, resulting in multiple task formulations and streaming methods. However,
for many existing formulations of the problem, reliably benchmarking streaming
solutions has been challenging due to the lack of well-described and
appropriately diverse benchmark datasets. Existing datasets and evaluations,
except for a few notable cases, suffer from unclear streaming task scopes,
underspecified benchmarks, and errors in the data. To address these issues, we
first systematize the different RDF data streaming tasks in a clear taxonomy
and outline practical requirements for benchmark datasets. We then propose
RiverBench, an open and collaborative RDF streaming benchmark suite that
applies these principles in practice. RiverBench leverages continuous,
community-driven processes, established best practices (e.g., FAIR), and
built-in quality guarantees. The suite distributes datasets in a common,
accessible format, with clear documentation, licensing, and machine-readable
metadata. The current release includes a diverse collection of non-synthetic
datasets generated by the Semantic Web community, representing many
applications of RDF data streaming, all major task formulations, and emerging
RDF features (RDF-star). Finally, we present a list of research applications
for the suite, demonstrating its versatility and value even beyond the realm of
RDF streaming.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 15:03:33 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Sowinski",
"Piotr",
""
],
[
"Ganzha",
"Maria",
""
],
[
"Paprzycki",
"Marcin",
""
]
] |
new_dataset
| 0.999215 |
2305.06243
|
Ladislau B\"ol\"oni
|
Samuel Matloob, Partha P. Datta, O. Patrick Kreidl, Ayan Dutta,
Swapnoneel Roy and Ladislau B\"ol\"oni
|
Waterberry Farms: A Novel Benchmark For Informative Path Planning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent developments in robotic and sensor hardware make data collection with
mobile robots (ground or aerial) feasible and affordable to a wide population
of users. The newly emergent applications, such as precision agriculture,
weather damage assessment, or personal home security often do not satisfy the
simplifying assumptions made by previous research: the explored areas have
complex shapes and obstacles, multiple phenomena need to be sensed and
estimated simultaneously and the measured quantities might change during
observations. The future progress of path planning and estimation algorithms
requires a new generation of benchmarks that provide representative
environments and scoring methods that capture the demands of these
applications.
This paper describes the Waterberry Farms benchmark (WBF) that models a
precision agriculture application at a Florida farm growing multiple crop
types. The benchmark captures the dynamic nature of the spread of plant
diseases and variations of soil humidity while the scoring system measures the
performance of a given combination of a movement policy and an information
model estimator. By benchmarking several examples of representative path
planning and estimator algorithms, we demonstrate WBF's ability to provide
insight into their properties and quantify future progress.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 15:24:25 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Matloob",
"Samuel",
""
],
[
"Datta",
"Partha P.",
""
],
[
"Kreidl",
"O. Patrick",
""
],
[
"Dutta",
"Ayan",
""
],
[
"Roy",
"Swapnoneel",
""
],
[
"Bölöni",
"Ladislau",
""
]
] |
new_dataset
| 0.999793 |
2305.06278
|
Can Pu
|
Can Pu, Chuanyu Yang, Jinnian Pu, Radim Tylecek, Robert B. Fisher
|
A Multi-modal Garden Dataset and Hybrid 3D Dense Reconstruction
Framework Based on Panoramic Stereo Images for a Trimming Robot
|
32 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recovering an outdoor environment's surface mesh is vital for an agricultural
robot during task planning and remote visualization. Our proposed solution is
based on a newly-designed panoramic stereo camera along with a hybrid novel
software framework that consists of three fusion modules. The panoramic stereo
camera with a pentagon shape consists of 5 stereo vision camera pairs to stream
synchronized panoramic stereo images for the following three fusion modules. In
the disparity fusion module, rectified stereo images produce the initial
disparity maps using multiple stereo vision algorithms. Then, these initial
disparity maps, along with the intensity images, are input into a disparity
fusion network to produce refined disparity maps. Next, the refined disparity
maps are converted into full-view point clouds or single-view point clouds for
the pose fusion module. The pose fusion module adopts a two-stage
global-coarse-to-local-fine strategy. In the first stage, each pair of
full-view point clouds is registered by a global point cloud matching algorithm
to estimate the transformation for a global pose graph's edge, which
effectively implements loop closure. In the second stage, a local point cloud
matching algorithm is used to match single-view point clouds in different
nodes. Next, we locally refine the poses of all corresponding edges in the
global pose graph using three proposed rules, thus constructing a refined pose
graph. The refined pose graph is optimized to produce a global pose trajectory
for volumetric fusion. In the volumetric fusion module, the global poses of all
the nodes are used to integrate the single-view point clouds into the volume to
produce the mesh of the whole garden. The proposed framework and its three
fusion modules are tested on a real outdoor garden dataset to demonstrate
their superior performance.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 16:15:16 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Pu",
"Can",
""
],
[
"Yang",
"Chuanyu",
""
],
[
"Pu",
"Jinnian",
""
],
[
"Tylecek",
"Radim",
""
],
[
"Fisher",
"Robert B.",
""
]
] |
new_dataset
| 0.999683 |
2305.06315
|
Sarah McGuire
|
Sarah McGuire, Elizabeth Munch, Matthew Hirn
|
NervePool: A Simplicial Pooling Layer
|
22 pages, 9 figures
| null | null | null |
cs.CG cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For deep learning problems on graph-structured data, pooling layers are
important for downsampling, reducing computational cost, and minimizing
overfitting. We define a pooling layer, NervePool, for data structured as
simplicial complexes, which are generalizations of graphs that include
higher-dimensional simplices beyond vertices and edges; this structure allows
for greater flexibility in modeling higher-order relationships. The proposed
simplicial coarsening scheme is built upon partitions of vertices, which allow
us to generate hierarchical representations of simplicial complexes, collapsing
information in a learned fashion. NervePool builds on the learned vertex
cluster assignments and extends to coarsening of higher dimensional simplices
in a deterministic fashion. While in practice, the pooling operations are
computed via a series of matrix operations, the topological motivation is a
set-theoretic construction based on unions of stars of simplices and the nerve
complex.
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 17:05:55 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"McGuire",
"Sarah",
""
],
[
"Munch",
"Elizabeth",
""
],
[
"Hirn",
"Matthew",
""
]
] |
new_dataset
| 0.999429 |
2305.06355
|
Yi Wang
|
KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali
Wang, Limin Wang, Yu Qiao
|
VideoChat: Chat-Centric Video Understanding
|
Technical report
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this study, we initiate an exploration into video understanding by
introducing VideoChat, an end-to-end chat-centric video understanding system.
It integrates video foundation models and large language models via a learnable
neural interface, excelling in spatiotemporal reasoning, event localization,
and causal relationship inference. To instruction-tune this system, we
propose a video-centric instruction dataset, composed of thousands of videos
matched with detailed descriptions and conversations. This dataset emphasizes
spatiotemporal reasoning and causal relationships, providing a valuable asset
for training chat-centric video understanding systems. Preliminary qualitative
experiments reveal our system's potential across a broad spectrum of video
applications and set the standard for future research. Access our code and data
at https://github.com/OpenGVLab/Ask-Anything
|
[
{
"version": "v1",
"created": "Wed, 10 May 2023 17:59:04 GMT"
}
] | 2023-05-11T00:00:00 |
[
[
"Li",
"KunChang",
""
],
[
"He",
"Yinan",
""
],
[
"Wang",
"Yi",
""
],
[
"Li",
"Yizhuo",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Luo",
"Ping",
""
],
[
"Wang",
"Yali",
""
],
[
"Wang",
"Limin",
""
],
[
"Qiao",
"Yu",
""
]
] |
new_dataset
| 0.999528 |
2104.12732
|
Yang Yue
|
Yang Yue, Xiaoran Yu, Xinyi You, Yi Wang, David Redmiles
|
Ideology in Open Source Development
|
To be published in CHASE 2021
| null |
10.1109/CHASE52884.2021.00016
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Open source development, to a great extent, is a type of social movement in
which shared ideologies play critical roles. For participants of open source
development, ideology determines how they make sense of things, shapes their
thoughts, actions, and interactions, enables rich social dynamics in their
projects and communities, and hereby realizes profound impacts at both
individual and organizational levels. While software engineering researchers
have been increasingly recognizing ideology's importance in open source
development, the notion of "ideology" has shown significant ambiguity and
vagueness, resulting in theoretical and empirical confusion. In this
article, we first examine the historical development of ideology's
conceptualization, and its theories in multiple disciplines. Then, we review
the extant software engineering literature related to ideology. We further
argue the imperatives of developing an empirical theory of ideology in open
source development, and propose a research agenda for developing such a theory.
How such a theory could be applied is also discussed.
|
[
{
"version": "v1",
"created": "Mon, 26 Apr 2021 17:23:54 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Yue",
"Yang",
""
],
[
"Yu",
"Xiaoran",
""
],
[
"You",
"Xinyi",
""
],
[
"Wang",
"Yi",
""
],
[
"Redmiles",
"David",
""
]
] |
new_dataset
| 0.987067 |
2111.11331
|
Lachlan McPheat
|
Lachlan McPheat, Hadi Wazni, Mehrnoosh Sadrzadeh
|
Vector Space Semantics for Lambek Calculus with Soft Subexponentials
|
arXiv admin note: substantial text overlap with arXiv:2005.03074,
arXiv:2101.10486
|
Compositionality 5, 2 (2023)
|
10.32408/compositionality-5-2
| null |
cs.LO cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We develop a vector space semantics for Lambek Calculus with Soft
Subexponentials, apply the calculus to construct compositional vector
interpretations for parasitic gap noun phrases and discourse units with
anaphora and ellipsis, and experiment with the constructions in a
distributional sentence similarity task. As opposed to previous work, which
used Lambek Calculus with a Relevant Modality, the calculus used in this paper
uses a bounded version of the modality and is decidable. The vector space
semantics of this new modality allows us to meaningfully define contraction as
projection and provide a linear theory behind what we could previously only
achieve via nonlinear maps.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 16:39:30 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 15:06:23 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"McPheat",
"Lachlan",
""
],
[
"Wazni",
"Hadi",
""
],
[
"Sadrzadeh",
"Mehrnoosh",
""
]
] |
new_dataset
| 0.957586 |
2112.05504
|
Yuanbo Xiangli
|
Yuanbo Xiangli, Linning Xu, Xingang Pan, Nanxuan Zhao, Anyi Rao,
Christian Theobalt, Bo Dai, Dahua Lin
|
BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale
Scene Rendering
|
Accepted to ECCV22; Previous version: CityNeRF: Building NeRF at City
Scale; Project page can be found in https://city-super.github.io/citynerf
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Neural radiance fields (NeRF) have achieved outstanding performance in
modeling 3D objects and controlled scenes, usually under a single scale. In
this work, we focus on multi-scale cases where large changes in imagery are
observed at drastically different scales. This scenario is common in
real-world 3D environments, such as city scenes, with views ranging from
satellite level, which captures the overview of a city, to ground level imagery
showing complex details of an architecture, and can also be commonly identified
in landscape and delicate Minecraft 3D models. The wide span of viewing
positions within these scenes yields multi-scale renderings with very different
levels of detail, which poses great challenges to neural radiance field and
biases it towards compromised results. To address these issues, we introduce
BungeeNeRF, a progressive neural radiance field that achieves level-of-detail
rendering across drastically varied scales. Starting from fitting distant views
with a shallow base block, as training progresses, new blocks are appended to
accommodate the emerging details in the increasingly closer views. The strategy
progressively activates high-frequency channels in NeRF's positional encoding
inputs and successively unfolds more complex details as the training proceeds.
We demonstrate the superiority of BungeeNeRF in modeling diverse multi-scale
scenes with drastically varying views on multiple data sources (city models,
synthetic, and drone captured data) and its support for high-quality rendering
in different levels of detail.
|
[
{
"version": "v1",
"created": "Fri, 10 Dec 2021 13:16:21 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Dec 2021 03:37:49 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Jul 2022 05:03:26 GMT"
},
{
"version": "v4",
"created": "Tue, 9 May 2023 05:48:39 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Xiangli",
"Yuanbo",
""
],
[
"Xu",
"Linning",
""
],
[
"Pan",
"Xingang",
""
],
[
"Zhao",
"Nanxuan",
""
],
[
"Rao",
"Anyi",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Dai",
"Bo",
""
],
[
"Lin",
"Dahua",
""
]
] |
new_dataset
| 0.998415 |
2201.08448
|
Richard Sutcliffe
|
Ephrem A. Retta, Richard Sutcliffe, Eiad Almekhlafi, Yosef K. Enku,
Eyob Alemu, Tigist D. Gemechu, Michael A. Berwo, Mustafa Mhamed, Jun Feng
|
Kinit Classification in Ethiopian Chants, Azmaris and Modern Music: A
New Dataset and CNN Benchmark
|
11 pages, 4 tables, 3 figures
| null |
10.1371/journal.pone.0284560
| null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we create EMIR, the first-ever Music Information Retrieval
dataset for Ethiopian music. EMIR is freely available for research purposes and
contains 600 sample recordings of Orthodox Tewahedo chants, traditional Azmari
songs and contemporary Ethiopian secular music. Each sample is classified by
five expert judges into one of four well-known Ethiopian Kinits, Tizita, Bati,
Ambassel and Anchihoye. Each Kinit uses its own pentatonic scale and also has
its own stylistic characteristics. Thus, Kinit classification needs to combine
scale identification with genre recognition. After describing the dataset, we
present the Ethio Kinits Model (EKM), based on VGG, for classifying the EMIR
clips. In Experiment 1, we investigated whether Filterbank, Mel-spectrogram,
Chroma, or Mel-frequency Cepstral coefficient (MFCC) features work best for
Kinit classification using EKM. MFCC was found to be superior and was therefore
adopted for Experiment 2, where the performance of EKM models using MFCC was
compared using three different audio sample lengths. A 3s length gave the best
results. In Experiment 3, EKM and four existing models were compared on the
EMIR dataset: AlexNet, ResNet50, VGG16 and LSTM. EKM was found to have the best
accuracy (95.00%) as well as the fastest training time. We hope this work will
encourage others to explore Ethiopian music and to experiment with other models
for Kinit classification.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 20:48:07 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Retta",
"Ephrem A.",
""
],
[
"Sutcliffe",
"Richard",
""
],
[
"Almekhlafi",
"Eiad",
""
],
[
"Enku",
"Yosef K.",
""
],
[
"Alemu",
"Eyob",
""
],
[
"Gemechu",
"Tigist D.",
""
],
[
"Berwo",
"Michael A.",
""
],
[
"Mhamed",
"Mustafa",
""
],
[
"Feng",
"Jun",
""
]
] |
new_dataset
| 0.999769 |
2202.02397
|
Yana Nehme
|
Yana Nehm\'e, Johanna Delanoy, Florent Dupont, Jean-Philippe Farrugia,
Patrick Le Callet and Guillaume Lavou\'e
|
Textured Mesh Quality Assessment: Large-Scale Dataset and Deep
Learning-based Quality Metric
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Over the past decade, 3D graphics have become highly detailed to mimic the
real world, exploding their size and complexity. Certain applications and
device constraints necessitate their simplification and/or lossy compression,
which can degrade their visual quality. Thus, to ensure the best Quality of
Experience (QoE), it is important to evaluate the visual quality to accurately
drive the compression and find the right compromise between visual quality and
data size. In this work, we focus on subjective and objective quality
assessment of textured 3D meshes. We first establish a large-scale dataset,
which includes 55 source models quantitatively characterized in terms of
geometric, color, and semantic complexity, and corrupted by combinations of 5
types of compression-based distortions applied on the geometry, texture mapping
and texture image of the meshes. This dataset contains over 343k distorted
stimuli. We propose an approach to select a challenging subset of 3000 stimuli
for which we collected 148929 quality judgments from over 4500 participants in
a large-scale crowdsourced subjective experiment. Leveraging our subject-rated
dataset, a learning-based quality metric for 3D graphics was proposed. Our
metric demonstrates state-of-the-art results on our dataset of textured meshes
and on a dataset of distorted meshes with vertex colors. Finally, we present an
application of our metric and dataset to explore the influence of distortion
interactions and content characteristics on the perceived quality of compressed
textured meshes.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 21:29:43 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Mar 2022 13:05:12 GMT"
},
{
"version": "v3",
"created": "Mon, 8 May 2023 20:01:58 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Nehmé",
"Yana",
""
],
[
"Delanoy",
"Johanna",
""
],
[
"Dupont",
"Florent",
""
],
[
"Farrugia",
"Jean-Philippe",
""
],
[
"Callet",
"Patrick Le",
""
],
[
"Lavoué",
"Guillaume",
""
]
] |
new_dataset
| 0.999146 |
2205.12590
|
Rilwan Adewoyin
|
Rilwan A. Adewoyin, Ritabrata Dutta, Yulan He
|
RSTGen: Imbuing Fine-Grained Interpretable Control into Long-FormText
Generators
|
NAACL 2022
| null |
10.18653/v1/2022.naacl-main.133
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we study the task of improving the cohesion and coherence of
long-form text generated by language models. To this end, we propose RSTGen, a
framework that utilises Rhetorical Structure Theory (RST), a classical language
theory, to control the discourse structure, semantics and topics of generated
text. Firstly, we demonstrate our model's ability to control structural
discourse and semantic features of generated text in open generation
evaluation. Then we experiment on the two challenging long-form text tasks of
argument generation and story generation. Evaluation using automated metrics
and a metric with high correlation to human evaluation shows that our model
performs competitively against existing models, while offering significantly
more controls over generated text than alternative methods.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 09:06:04 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Adewoyin",
"Rilwan A.",
""
],
[
"Dutta",
"Ritabrata",
""
],
[
"He",
"Yulan",
""
]
] |
new_dataset
| 0.997551 |
2206.06119
|
Nikolai Kalischek
|
Nikolai Kalischek, Nico Lang, C\'ecile Renier, Rodrigo Caye Daudt,
Thomas Addoah, William Thompson, Wilma J. Blaser-Hart, Rachael Garrett,
Konrad Schindler, Jan D. Wegner
|
Satellite-based high-resolution maps of cocoa planted area for C\^ote
d'Ivoire and Ghana
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
C\^ote d'Ivoire and Ghana, the world's largest producers of cocoa, account
for two thirds of the global cocoa production. In both countries, cocoa is the
primary perennial crop, providing income to almost two million farmers. Yet
precise maps of cocoa planted area are missing, hindering accurate
quantification of expansion in protected areas, production and yields, and
limiting information available for improved sustainability governance. Here, we
combine cocoa plantation data with publicly available satellite imagery in a
deep learning framework and create high-resolution maps of cocoa plantations
for both countries, validated in situ. Our results suggest that cocoa
cultivation is an underlying driver of over 37% and 13% of forest loss in
protected areas in C\^ote d'Ivoire and Ghana, respectively, and that official
reports substantially underestimate the planted area, up to 40% in Ghana. These
maps serve as a crucial building block to advance understanding of conservation
and economic development in cocoa producing regions.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 12:58:35 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2022 09:37:00 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Oct 2022 06:46:42 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Oct 2022 07:57:34 GMT"
},
{
"version": "v5",
"created": "Tue, 9 May 2023 08:58:11 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Kalischek",
"Nikolai",
""
],
[
"Lang",
"Nico",
""
],
[
"Renier",
"Cécile",
""
],
[
"Daudt",
"Rodrigo Caye",
""
],
[
"Addoah",
"Thomas",
""
],
[
"Thompson",
"William",
""
],
[
"Blaser-Hart",
"Wilma J.",
""
],
[
"Garrett",
"Rachael",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Wegner",
"Jan D.",
""
]
] |
new_dataset
| 0.988639 |
2207.06870
|
Matteo Nardelli
|
Marco Benedetti, Francesco De Sclavis, Marco Favorito, Giuseppe
Galano, Sara Giammusso, Antonio Muci, Matteo Nardelli
|
A PoW-less Bitcoin with Certified Byzantine Consensus
|
This version adds the evaluation section
| null | null |
ART Technical Report CFC.CRYPTO.CS/2022/1
|
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed Ledger Technologies (DLTs), when managed by a few trusted
validators, require most but not all of the machinery available in public DLTs.
In this work, we explore one possible way to profit from this state of affairs.
We devise a combination of a modified Practical Byzantine Fault Tolerant (PBFT)
protocol and a revised Flexible Round-Optimized Schnorr Threshold Signatures
(FROST) scheme, and then we inject the resulting proof-of-authority consensus
algorithm into Bitcoin (chosen for the reliability, openness, and liveliness it
brings in), replacing its PoW machinery. The combined protocol may operate as a
modern, safe foundation for digital payment systems and Central Bank Digital
Currencies (CBDC).
|
[
{
"version": "v1",
"created": "Thu, 14 Jul 2022 12:47:37 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 14:06:33 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Benedetti",
"Marco",
""
],
[
"De Sclavis",
"Francesco",
""
],
[
"Favorito",
"Marco",
""
],
[
"Galano",
"Giuseppe",
""
],
[
"Giammusso",
"Sara",
""
],
[
"Muci",
"Antonio",
""
],
[
"Nardelli",
"Matteo",
""
]
] |
new_dataset
| 0.999582 |
2209.04362
|
Celyn Walters
|
Celyn Walters, Simon Hadfield
|
EDeNN: Event Decay Neural Networks for low latency vision
|
14 pages, 5 figures
| null | null | null |
cs.CV cs.AI cs.LG cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Despite the success of neural networks in computer vision tasks, digital
'neurons' are a very loose approximation of biological neurons. Today's
learning approaches are designed to function on digital devices with digital
data representations such as image frames. In contrast, biological vision
systems are generally much more capable and efficient than state-of-the-art
digital computer vision algorithms. Event cameras are an emerging sensor
technology which imitates biological vision with asynchronously firing pixels,
eschewing the concept of the image frame. To leverage modern learning
techniques, many event-based algorithms are forced to accumulate events back to
image frames, somewhat squandering the advantages of event cameras.
We follow the opposite paradigm and develop a new type of neural network
which operates closer to the original event data stream. We demonstrate
state-of-the-art performance in angular velocity regression and competitive
optical flow estimation, while avoiding difficulties related to training SNNs.
Furthermore, the processing latency of our proposed approach is less than 1/10
that of any other implementation, while continuous inference increases this improvement
by another order of magnitude.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 15:51:39 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 14:22:17 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Walters",
"Celyn",
""
],
[
"Hadfield",
"Simon",
""
]
] |
new_dataset
| 0.999387 |
2211.01224
|
Douglas A. Creager
|
Douglas A. Creager and Hendrik van Antwerpen
|
Stack graphs: Name resolution at scale
|
12 pages, accepted to Eelco Visser Commemorative Symposium 2023
[updated with correct journal DOI]
| null |
10.4230/OASIcs.EVCS.2023.8
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
We present stack graphs, an extension of Visser et al.'s scope graphs
framework. Stack graphs power Precise Code Navigation at GitHub, allowing users
to navigate name binding references both within and across repositories. Like
scope graphs, stack graphs encode the name binding information about a program
in a graph structure, in which paths represent valid name bindings. Resolving a
reference to its definition is then implemented with a simple path-finding
search.
GitHub hosts millions of repositories, containing petabytes of total code,
implemented in hundreds of different programming languages, and receiving
thousands of pushes per minute. To support this scale, we ensure that the graph
construction and path-finding judgments are file-incremental: for each source
file, we create an isolated subgraph without any knowledge of, or visibility
into, any other file in the program. This lets us eliminate the storage and
compute costs of reanalyzing file versions that we have already seen. Since
most commits change a small fraction of the files in a repository, this greatly
amortizes the operational costs of indexing large, frequently changed
repositories over time. To handle type-directed name lookups (which require
"pausing" the current lookup to resolve another name), our name resolution
algorithm maintains a stack of the currently paused (but still pending)
lookups. Stack graphs can be constructed via a purely syntactic analysis of the
program's source code, using a new declarative graph construction language.
This means that we can extract name binding information for every repository
without any per-package configuration, and without having to invoke an
arbitrary, untrusted, package-specific build process.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 16:04:18 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 18:56:33 GMT"
},
{
"version": "v3",
"created": "Tue, 9 May 2023 17:41:06 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Creager",
"Douglas A.",
""
],
[
"van Antwerpen",
"Hendrik",
""
]
] |
new_dataset
| 0.99977 |
2303.09421
|
Ben Wu
|
Ben Wu, Olesya Razuvayevskaya, Freddy Heppell, Jo\~ao A. Leite,
Carolina Scarton, Kalina Bontcheva and Xingyi Song
|
SheffieldVeraAI at SemEval-2023 Task 3: Mono and multilingual approaches
for news genre, topic and persuasion technique classification
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes our approach for SemEval-2023 Task 3: Detecting the
category, the framing, and the persuasion techniques in online news in a
multi-lingual setup. For Subtask 1 (News Genre), we propose an ensemble of
fully trained and adapter mBERT models which was ranked joint-first for German,
and had the highest mean rank of multi-language teams. For Subtask 2 (Framing),
we achieved first place in 3 languages, and the best average rank across all
the languages, by using two separate ensembles: a monolingual
RoBERTa-MUPPETLARGE and an ensemble of XLM-RoBERTaLARGE with adapters and task
adaptive pretraining. For Subtask 3 (Persuasion Techniques), we train a
monolingual RoBERTa-Base model for English and a multilingual mBERT model for
the remaining languages, which achieved top 10 for all languages, including 2nd
for English. For each subtask, we compared monolingual and multilingual
approaches, and considered class imbalance techniques.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 15:54:23 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 09:33:33 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wu",
"Ben",
""
],
[
"Razuvayevskaya",
"Olesya",
""
],
[
"Heppell",
"Freddy",
""
],
[
"Leite",
"João A.",
""
],
[
"Scarton",
"Carolina",
""
],
[
"Bontcheva",
"Kalina",
""
],
[
"Song",
"Xingyi",
""
]
] |
new_dataset
| 0.995823 |
2303.17564
|
Mark Dredze
|
Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze,
Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, Gideon Mann
|
BloombergGPT: A Large Language Model for Finance
|
Updated to include Training Chronicles (Appendix C)
| null | null | null |
cs.LG cs.AI cs.CL q-fin.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of NLP in the realm of financial technology is broad and complex,
with applications ranging from sentiment analysis and named entity recognition
to question answering. Large Language Models (LLMs) have been shown to be
effective on a variety of tasks; however, no LLM specialized for the financial
domain has been reported in the literature. In this work, we present BloombergGPT,
a 50 billion parameter language model that is trained on a wide range of
financial data. We construct a 363 billion token dataset based on Bloomberg's
extensive data sources, perhaps the largest domain-specific dataset yet,
augmented with 345 billion tokens from general purpose datasets. We validate
BloombergGPT on standard LLM benchmarks, open financial benchmarks, and a suite
of internal benchmarks that most accurately reflect our intended usage. Our
mixed dataset training leads to a model that outperforms existing models on
financial tasks by significant margins without sacrificing performance on
general LLM benchmarks. Additionally, we explain our modeling choices, training
process, and evaluation methodology. We release Training Chronicles (Appendix
C) detailing our experience in training BloombergGPT.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 17:30:36 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 16:06:35 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wu",
"Shijie",
""
],
[
"Irsoy",
"Ozan",
""
],
[
"Lu",
"Steven",
""
],
[
"Dabravolski",
"Vadim",
""
],
[
"Dredze",
"Mark",
""
],
[
"Gehrmann",
"Sebastian",
""
],
[
"Kambadur",
"Prabhanjan",
""
],
[
"Rosenberg",
"David",
""
],
[
"Mann",
"Gideon",
""
]
] |
new_dataset
| 0.989597 |
2304.00262
|
Jing Yang
|
Weidong Wang and Jing Yang
|
Two Variants of Bezout Subresultants for Several Univariate Polynomials
| null | null | null | null |
cs.SC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we develop two variants of Bezout subresultant formulas for
several polynomials, i.e., hybrid Bezout subresultant polynomial and
non-homogeneous Bezout subresultant polynomial. Rather than simply extending
the variants of Bezout subresultant formulas developed by Diaz-Toca and
Gonzalez-Vega in 2004 for two polynomials to arbitrary number of polynomials,
we propose a new approach to formulating two variants of the Bezout-type
subresultant polynomials for a set of univariate polynomials. Experimental
results show that the Bezout-type subresultant formulas behave better than
other known formulas when used to compute multi-polynomial subresultants, among
which the non-homogeneous Bezout-type formula shows the best performance.
|
[
{
"version": "v1",
"created": "Sat, 1 Apr 2023 08:38:06 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 00:58:06 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wang",
"Weidong",
""
],
[
"Yang",
"Jing",
""
]
] |
new_dataset
| 0.97949 |
2304.03957
|
Gilda Rech Bansimba
|
Gilda Rech Bansimba, Regis Freguin Babindamana and Basile Guy R.
Bossoto
|
A Continued Fraction-Hyperbola based Attack on RSA cryptosystem
| null | null | null | null |
cs.CR math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present new arithmetical and algebraic results following the
work of Babindamana et al. on hyperbolas and describe in the new results an
approach to attacking a RSA-type modulus based on continued fractions,
independent and not bounded by the size of the private key $d$ nor the public
exponent $e$ compared to Wiener's attack. When successful, this attack is
bounded by $\displaystyle\mathcal{O}\left(
b\log{\alpha_{j4}}\log{(\alpha_{i3}+\alpha_{j3})}\right)$ with $b=10^{y}$,
$\alpha_{i3}+\alpha_{j3}$ a non trivial factor of $n$ and $\alpha_{j4}$ such
that $(n+1)/(n-1)=\alpha_{i4}/\alpha_{j4}$. The primary goal of this attack is
to find a point $\displaystyle X_{\alpha}=\left(-\alpha_{3}, \ \alpha_{3}+1
\right) \in \mathbb{Z}^{2}_{\star}$ that satisfies $\displaystyle\left\langle
X_{\alpha_{3}}, \ P_{3} \right\rangle =0$ from a convergent of
$\displaystyle\frac{\alpha_{i4}}{\alpha_{j4}}+\delta$, with $P_{3}\in
\mathcal{B}_{n}(x, y)_{\mid_{x\geq 4n}}$. We finally present some experimental
examples. We believe these results constitute a new direction in RSA
Cryptanalysis using continued fractions independently of parameters $e$ and
$d$.
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 08:46:19 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 08:30:29 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Bansimba",
"Gilda Rech",
""
],
[
"Babindamana",
"Regis Freguin",
""
],
[
"Bossoto",
"Basile Guy R.",
""
]
] |
new_dataset
| 0.999221 |
2304.04840
|
J Andres Montoya
|
Santiago Flum, J. Andres Montoya
|
NL Is Strictly Contained in P
|
We find an error that must be fixed
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We prove that NL is strictly contained in P. We get this separation as a
corollary of the following result: the set of context-free languages is not
contained in NL. The reader should recall that CFL is contained in DTIME(n^3).
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 19:56:23 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Apr 2023 20:58:04 GMT"
},
{
"version": "v3",
"created": "Tue, 9 May 2023 10:04:43 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Flum",
"Santiago",
""
],
[
"Montoya",
"J. Andres",
""
]
] |
new_dataset
| 0.999566 |
2304.10268
|
Quancheng Wang
|
Quancheng Wang, Xige Zhang, Han Wang, Yuzhe Gu, Ming Tang
|
BackCache: Mitigating Contention-Based Cache Timing Attacks by Hiding
Cache Line Evictions
|
15 pages, 11 figures
| null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Caches are used to reduce the speed differential between the CPU and memory
to improve the performance of modern processors. However, attackers can use
contention-based cache timing attacks to steal sensitive information from
victim processes through carefully designed cache eviction sets. And L1 data
cache attacks are widely exploited and pose a significant privacy and
confidentiality threat. Existing hardware-based countermeasures mainly focus on
cache partitioning, randomization, and cache line flushing, which unfortunately
either incur high overhead or can be circumvented by sophisticated attacks. In
this paper, we propose a novel hardware-software co-design called BackCache
with the idea of always achieving cache hits instead of cache misses to
mitigate contention-based cache timing attacks on the L1 data cache. BackCache
places the evicted cache lines from the L1 data cache into a fully-associative
backup cache to hide the evictions. To improve the security of BackCache, we
introduce a randomly used replacement policy (RURP) and a dynamic backup cache
resizing mechanism. We also present a theoretical security analysis to
demonstrate the effectiveness of BackCache. Our evaluation on the gem5
simulator shows that BackCache can degrade the performance by 1.33%, 7.34%, and
7.59% for OS kernel, single-thread, and multi-thread benchmarks.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 12:47:11 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Apr 2023 05:35:38 GMT"
},
{
"version": "v3",
"created": "Tue, 9 May 2023 02:37:48 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wang",
"Quancheng",
""
],
[
"Zhang",
"Xige",
""
],
[
"Wang",
"Han",
""
],
[
"Gu",
"Yuzhe",
""
],
[
"Tang",
"Ming",
""
]
] |
new_dataset
| 0.973364 |
2304.11584
|
Jiahao Nie
|
Jiahao Nie, Zhiwei He, Yuxiang Yang, Zhengyi Bao, Mingyu Gao, Jing
Zhang
|
OSP2B: One-Stage Point-to-Box Network for 3D Siamese Tracking
|
Accepted to IJCAI'23. Code will be available at
https://github.com/haooozi/OSP2B
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Two-stage point-to-box network acts as a critical role in the recent popular
3D Siamese tracking paradigm, which first generates proposals and then predicts
corresponding proposal-wise scores. However, such a network suffers from
tedious hyper-parameter tuning and task misalignment, limiting the tracking
performance. Towards these concerns, we propose a simple yet effective
one-stage point-to-box network for point cloud-based 3D single object tracking.
It synchronizes 3D proposal generation and center-ness score prediction by a
parallel predictor without tedious hyper-parameters. To guide a task-aligned
score ranking of proposals, a center-aware focal loss is proposed to supervise
the training of the center-ness branch, which enhances the network's
discriminative ability to distinguish proposals of different quality. Besides,
we design a binary target classifier to identify target-relevant points. By
integrating the derived classification scores with the center-ness scores, the
resulting network can effectively suppress interference proposals and further
mitigate task misalignment. Finally, we present a novel one-stage Siamese
tracker OSP2B equipped with the designed network. Extensive experiments on
challenging benchmarks including KITTI and Waymo SOT Dataset show that our
OSP2B achieves leading performance with a considerable real-time speed. Code
will be available at https://github.com/haooozi/OSP2B.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 08:52:36 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 02:27:49 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Nie",
"Jiahao",
""
],
[
"He",
"Zhiwei",
""
],
[
"Yang",
"Yuxiang",
""
],
[
"Bao",
"Zhengyi",
""
],
[
"Gao",
"Mingyu",
""
],
[
"Zhang",
"Jing",
""
]
] |
new_dataset
| 0.999143 |
2305.00984
|
Laszlo Kish
|
Laszlo B. Kish
|
Ternary Instantaneous Noise-based Logic
|
submitted for publication, new reference added, small corrections
| null | null | null |
cs.ET cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the possible representations of three-valued instantaneous noise-based
logic is proposed. The third value is an uncertain bit value, which can be
useful in artificial intelligence applications. There is a fourth value, too,
that can represent a non-existing bit (vacuum-state) that is the same (1
numeric value) for all bits; however, that is a squeezed state common for all
bits. Some logic gates are explored. A ternary Universe has a significant
advantage compared to the standard binary one: its amplitude is never zero
during any clock period. All the known binary logic gates work for the binary
bit values in the same way as earlier therefore the former binary algorithms
can be run in the ternary system with no change and without the problems posed
by zero values of the Universe.
|
[
{
"version": "v1",
"created": "Mon, 1 May 2023 00:02:09 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 12:20:45 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Kish",
"Laszlo B.",
""
]
] |
new_dataset
| 0.996609 |
2305.02444
|
Shixun Wu
|
Shixun Wu, Yujia Zhai, Jiajun Huang, Zizhe Jian, and Zizhong Chen
|
FT-GEMM: A Fault Tolerant High Performance GEMM Implementation on x86
CPUs
|
arXiv admin note: substantial text overlap with arXiv:2104.00897
| null |
10.1145/3588195.3595947
| null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by-sa/4.0/
|
General matrix/matrix multiplication (GEMM) is crucial for scientific
computing and machine learning. However, the increased scale of the computing
platforms raises concerns about hardware and software reliability. In this
poster, we present FT-GEMM, a high-performance GEMM being capable of tolerating
soft errors on-the-fly. We incorporate the fault tolerant functionality at
algorithmic level by fusing the memory-intensive operations into the GEMM
assembly kernels. We design a cache-friendly scheme for parallel FT-GEMM.
Experimental results on Intel Cascade Lake demonstrate that FT-GEMM offers high
reliability and performance -- faster than Intel MKL, OpenBLAS, and BLIS by
3.50\%$\sim$22.14\% for both serial and parallel GEMM, even under hundreds of
errors injected per minute.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 22:08:37 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 02:12:56 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wu",
"Shixun",
""
],
[
"Zhai",
"Yujia",
""
],
[
"Huang",
"Jiajun",
""
],
[
"Jian",
"Zizhe",
""
],
[
"Chen",
"Zizhong",
""
]
] |
new_dataset
| 0.998884 |
2305.03276
|
Nishant Balepur
|
Nishant Balepur, Jie Huang, Kevin Chen-Chuan Chang
|
Expository Text Generation: Imitate, Retrieve, Paraphrase
|
In progress preprint
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Expository documents are vital resources for conveying complex information to
readers. Despite their usefulness, writing expository documents by hand is a
time-consuming and labor-intensive process that requires knowledge of the
domain of interest, careful content planning, and the ability to synthesize
information from multiple sources. To ease these burdens, we introduce the task
of expository text generation, which seeks to automatically generate an
accurate and informative expository document from a knowledge source. We solve
our task by developing IRP, an iterative framework that overcomes the
limitations of language models and separately tackles the steps of content
planning, fact selection, and rephrasing. Through experiments on three diverse
datasets, we demonstrate that IRP produces high-quality expository documents
that accurately inform readers.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 04:26:29 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Balepur",
"Nishant",
""
],
[
"Huang",
"Jie",
""
],
[
"Chang",
"Kevin Chen-Chuan",
""
]
] |
new_dataset
| 0.998973 |
2305.04166
|
Nghia Hieu Nguyen
|
Doanh C. Bui, Nghia Hieu Nguyen, Khang Nguyen
|
UIT-OpenViIC: A Novel Benchmark for Evaluating Image Captioning in
Vietnamese
|
10 pages, 7 figures, submitted to Elsevier
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Image Captioning is one of the vision-language tasks that still interest the
research community worldwide in the 2020s. MS-COCO Caption benchmark is
commonly used to evaluate the performance of advanced captioning models,
although it was published in 2015. Recent captioning models trained on the
MS-COCO Caption dataset only have good performance in language patterns of
English; they do not have such good performance in contexts captured in Vietnam
or fluently caption images in Vietnamese. To contribute to low-resource
research communities such as Vietnam's, we introduce a novel image captioning dataset
in Vietnamese, the Open-domain Vietnamese Image Captioning dataset
(UIT-OpenViIC). The introduced dataset includes complex scenes captured in
Vietnam, manually annotated by Vietnamese speakers under strict rules and
supervision. In this paper, we present in more detail the dataset creation
process. From preliminary analysis, we show that our dataset is challenging to
recent state-of-the-art (SOTA) Transformer-based baselines, which performed
well on the MS COCO dataset. Then, the modest results prove that UIT-OpenViIC
has room to grow, which can be one of the standard benchmarks in Vietnamese for
the research community to evaluate their captioning models. Furthermore, we
present a CAMO approach that effectively enhances the image representation
ability by a multi-level encoder output fusion mechanism, which helps improve
the quality of generated captions compared to previous captioning models.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2023 02:48:47 GMT"
},
{
"version": "v2",
"created": "Tue, 9 May 2023 12:46:06 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Bui",
"Doanh C.",
""
],
[
"Nguyen",
"Nghia Hieu",
""
],
[
"Nguyen",
"Khang",
""
]
] |
new_dataset
| 0.999731 |
2305.04992
|
Tsvetan Yordanov
|
Tsvetan R. Yordanov, Ameen Abu-Hanna, Anita CJ Ravelli, Iacopo
Vagliano
|
Autoencoder-based prediction of ICU clinical codes
|
Extended version of 5-page short paper submitted to AIME23 conference
| null | null | null |
cs.LG cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Availability of diagnostic codes in Electronic Health Records (EHRs) is
crucial for patient care as well as reimbursement purposes. However, entering
them in the EHR is tedious, and some clinical codes may be overlooked. Given an
incomplete list of clinical codes, we investigate the performance of ML
methods on predicting the complete ones, and assess the added predictive value
of including other clinical patient data in this task. We used the MIMIC-III
dataset and frame the task of completing the clinical codes as a recommendation
problem. We consider various autoencoder approaches plus two strong baselines:
item co-occurrence and Singular Value Decomposition (SVD). Inputs are 1) a
record's known clinical codes, 2) the codes plus variables. The
co-occurrence-based approach performed slightly better (F1 score=0.26, Mean
Average Precision [MAP]=0.19) than the SVD (F1=0.24, MAP=0.18). However, the
adversarial autoencoder achieved the best performance when using the codes plus
variables (F1=0.32, MAP=0.25). Adversarial autoencoders performed best in terms
of F1 and were equal to vanilla and denoising autoencoders in terms of MAP.
Using clinical variables in addition to the incomplete codes list, improves the
predictive performance of the models.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 18:56:37 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Yordanov",
"Tsvetan R.",
""
],
[
"Abu-Hanna",
"Ameen",
""
],
[
"Ravelli",
"Anita CJ",
""
],
[
"Vagliano",
"Iacopo",
""
]
] |
new_dataset
| 0.998863 |
2305.05033
|
Alexandros Daglis
|
Albert Cho and Anish Saxena and Moinuddin Qureshi and Alexandros
Daglis
|
A Case for CXL-Centric Server Processors
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
The memory system is a major performance determinant for server processors.
Ever-growing core counts and datasets demand higher bandwidth and capacity as
well as lower latency from the memory system. To keep up with growing demands,
DDR--the dominant processor interface to memory over the past two decades--has
offered higher bandwidth with every generation. However, because each parallel
DDR interface requires a large number of on-chip pins, the processor's memory
bandwidth is ultimately restrained by its pin-count, which is a scarce
resource. With limited bandwidth, multiple memory requests typically contend
for each memory channel, resulting in significant queuing delays that often
overshadow DRAM's service time and degrade performance.
We present CoaXiaL, a server design that overcomes memory bandwidth
limitations by replacing \textit{all} DDR interfaces to the processor with the
more pin-efficient CXL interface. The widespread adoption and industrial
momentum of CXL makes such a transition possible, offering $4\times$ higher
bandwidth per pin compared to DDR at a modest latency overhead. We demonstrate
that, for a broad range of workloads, CXL's latency premium is more than offset
by its higher bandwidth. As CoaXiaL distributes memory requests across more
channels, it drastically reduces queuing delays and thereby both the average
value and variance of memory access latency. Our evaluation with a variety of
workloads shows that CoaXiaL improves the performance of manycore
throughput-oriented servers by $1.52\times$ on average and by up to $3\times$.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 20:21:39 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Cho",
"Albert",
""
],
[
"Saxena",
"Anish",
""
],
[
"Qureshi",
"Moinuddin",
""
],
[
"Daglis",
"Alexandros",
""
]
] |
new_dataset
| 0.978438 |
2305.05057
|
Zehui Zhu
|
Zehui Zhu, Imad L. Al-Qadi
|
Crack Detection of Asphalt Concrete Using Combined Fracture Mechanics
and Digital Image Correlation
| null |
Journal of Transportation Engineering, Part B: Pavements, 149(3),
04023012 (2023)
|
10.1061/JPEODX.PVENG-1249
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Cracking is a common failure mode in asphalt concrete (AC) pavements. Many
tests have been developed to characterize the fracture behavior of AC. Accurate
crack detection during testing is crucial to describe AC fracture behavior.
This paper proposed a framework to detect surface cracks in AC specimens using
two-dimensional digital image correlation (DIC). Two significant drawbacks in
previous research in this field were addressed. First, a multi-seed incremental
reliability-guided DIC was proposed to solve the decorrelation issue due to
large deformation and discontinuities. The method was validated using synthetic
deformed images. A correctly implemented analysis could accurately measure
strains up to 450\%, even with significant discontinuities (cracks) present in
the deformed image. Second, a robust method was developed to detect cracks
based on displacement fields. The proposed method uses critical crack tip
opening displacement ($\delta_c$) to define the onset of cleavage fracture. The
proposed method relies on well-developed fracture mechanics theory. The
proposed threshold $\delta_c$ has a physical meaning and can be easily
determined from DIC measurement. The method was validated using an extended
finite element model. The framework was implemented to measure the crack
propagation rate while conducting the Illinois-flexibility index test on two AC
mixes. The calculated rates could distinguish mixes based on their cracking
potential. The proposed framework could be applied to characterize AC cracking
phenomenon, evaluate its fracture properties, assess asphalt mixture testing
protocols, and develop theoretical models.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 21:28:40 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Zhu",
"Zehui",
""
],
[
"Al-Qadi",
"Imad L.",
""
]
] |
new_dataset
| 0.995592 |
2305.05161
|
Akash Godbole
|
Akash Godbole, Steven A. Grosz, and Anil K. Jain
|
Child Palm-ID: Contactless Palmprint Recognition for Children
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective distribution of nutritional and healthcare aid for children,
particularly infants and toddlers, in some of the least developed and most
impoverished countries of the world, is a major problem due to the lack of
reliable identification documents. Biometric authentication technology has been
investigated to address child recognition in the absence of reliable ID
documents. We present a mobile-based contactless palmprint recognition system,
called Child Palm-ID, which meets the requirements of usability, hygiene, cost,
and accuracy for child recognition. Using a contactless child palmprint
database, Child-PalmDB1, consisting of 19,158 images from 1,020 unique palms
(in the age range of 6 mos. to 48 mos.), we report a TAR=94.11% @ FAR=0.1%. The
proposed Child Palm-ID system is also able to recognize adults, achieving a
TAR=99.4% on the CASIA contactless palmprint database and a TAR=100% on the
COEP contactless adult palmprint database, both @ FAR=0.1%. These accuracies
are competitive with the SOTA provided by COTS systems. Despite these high
accuracies, we show that the TAR for time-separated child-palmprints is only
78.1% @ FAR=0.1%.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 04:08:14 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Godbole",
"Akash",
""
],
[
"Grosz",
"Steven A.",
""
],
[
"Jain",
"Anil K.",
""
]
] |
new_dataset
| 0.999579 |
2305.05176
|
Lingjiao Chen
|
Lingjiao Chen and Matei Zaharia and James Zou
|
FrugalGPT: How to Use Large Language Models While Reducing Cost and
Improving Performance
| null | null | null | null |
cs.LG cs.AI cs.CL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a rapidly growing number of large language models (LLMs) that users
can query for a fee. We review the cost associated with querying popular LLM
APIs, e.g. GPT-4, ChatGPT, J1-Jumbo, and find that these models have
heterogeneous pricing structures, with fees that can differ by two orders of
magnitude. In particular, using LLMs on large collections of queries and text
can be expensive. Motivated by this, we outline and discuss three types of
strategies that users can exploit to reduce the inference cost associated with
using LLMs: 1) prompt adaptation, 2) LLM approximation, and 3) LLM cascade. As
an example, we propose FrugalGPT, a simple yet flexible instantiation of LLM
cascade which learns which combinations of LLMs to use for different queries in
order to reduce cost and improve accuracy. Our experiments show that FrugalGPT
can match the performance of the best individual LLM (e.g. GPT-4) with up to
98% cost reduction or improve the accuracy over GPT-4 by 4% with the same cost.
The ideas and findings presented here lay a foundation for using LLMs
sustainably and efficiently.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 05:11:02 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Chen",
"Lingjiao",
""
],
[
"Zaharia",
"Matei",
""
],
[
"Zou",
"James",
""
]
] |
new_dataset
| 0.972706 |
2305.05179
|
Thomas Burns
|
Thomas F Burns, Tomoki Fukai
|
Simplicial Hopfield networks
|
36 pages, 7 figures, published as a conference paper at ICLR 2023
|
International Conference on Learning Representations 2023
| null | null |
cs.NE cs.AI q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Hopfield networks are artificial neural networks which store memory patterns
on the states of their neurons by choosing recurrent connection weights and
update rules such that the energy landscape of the network forms attractors
around the memories. How many stable, sufficiently-attracting memory patterns
can we store in such a network using $N$ neurons? The answer depends on the
choice of weights and update rule. Inspired by setwise connectivity in biology,
we extend Hopfield networks by adding setwise connections and embedding these
connections in a simplicial complex. Simplicial complexes are higher
dimensional analogues of graphs which naturally represent collections of
pairwise and setwise relationships. We show that our simplicial Hopfield
networks increase memory storage capacity. Surprisingly, even when connections
are limited to a small random subset of equivalent size to an all-pairwise
network, our networks still outperform their pairwise counterparts. Such
scenarios include non-trivial simplicial topology. We also test analogous
modern continuous Hopfield networks, offering a potentially promising avenue
for improving the attention mechanism in Transformer models.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 05:23:04 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Burns",
"Thomas F",
""
],
[
"Fukai",
"Tomoki",
""
]
] |
new_dataset
| 0.996499 |
2305.05183
|
Bo Sun
|
Bo Sun, Baoxin Wang, Yixuan Wang, Wanxiang Che, Dayong Wu, Shijin Wang
and Ting Liu
|
CSED: A Chinese Semantic Error Diagnosis Corpus
|
12 pages. arXiv admin note: text overlap with arXiv:2204.07464
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, much Chinese text error correction work has focused on Chinese
Spelling Check (CSC) and Chinese Grammatical Error Diagnosis (CGED). In
contrast, little attention has been paid to the complicated problem of Chinese
Semantic Error Diagnosis (CSED), which lacks relevant datasets. The study of
semantic errors is important because they are very common and may lead to
syntactic irregularities or even problems of comprehension. To investigate
this, we build the CSED corpus, which includes two datasets: one for the
CSED-Recognition (CSED-R) task and the other for the CSED-Correction (CSED-C)
task. Our annotation guarantees high-quality data through quality assurance
mechanisms. Our experiments show that powerful pre-trained models perform
poorly on this corpus. We also find that the CSED task is challenging, as
evidenced by the fact that even humans receive a low score. This paper proposes
syntax-aware models to specifically adapt to the CSED task. The experimental
results show that the introduction of the syntax-aware approach is meaningful.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 05:33:31 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Sun",
"Bo",
""
],
[
"Wang",
"Baoxin",
""
],
[
"Wang",
"Yixuan",
""
],
[
"Che",
"Wanxiang",
""
],
[
"Wu",
"Dayong",
""
],
[
"Wang",
"Shijin",
""
],
[
"Liu",
"Ting",
""
]
] |
new_dataset
| 0.999039 |
2305.05205
|
Jesse Geneson
|
Jesse Geneson and Shen-Fu Tsai
|
Random processes for generating task-dependency graphs
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate random processes for generating task-dependency graphs of
order $n$ with $m$ edges and a specified number of initial vertices and
terminal vertices. In order to do so, we consider two random processes for
generating task-dependency graphs that can be combined to accomplish this task.
In the $(x, y)$ edge-removal process, we start with a maximally connected
task-dependency graph and remove edges uniformly at random as long as they do
not cause the number of initial vertices to exceed $x$ or the number of
terminal vertices to exceed $y$. In the $(x, y)$ edge-addition process, we
start with an empty task-dependency graph and add edges uniformly at random as
long as they do not cause the number of initial vertices to be less than $x$ or
the number of terminal vertices to be less than $y$. In the $(x, y)$
edge-addition process, we halt if there are exactly $x$ initial vertices and
$y$ terminal vertices. For both processes, we determine the values of $x$ and
$y$ for which the resulting task-dependency graph is guaranteed to have exactly
$x$ initial vertices and $y$ terminal vertices, and we also find the extremal
values for the number of edges in the resulting task-dependency graphs as a
function of $x$, $y$, and the number of vertices. Furthermore, we
asymptotically bound the expected number of edges in the resulting
task-dependency graphs. Finally, we define a random process using only
edge-addition and edge-removal, and we show that with high probability this
random process generates an $(x, y)$ task-dependency graph of order $n$ with
$m$ edges.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 06:56:23 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Geneson",
"Jesse",
""
],
[
"Tsai",
"Shen-Fu",
""
]
] |
new_dataset
| 0.980783 |
2305.05206
|
Lioba Heimbach
|
Andrei Constantinescu, Diana Ghinea, Lioba Heimbach, Zilin Wang, Roger
Wattenhofer
|
A Fair and Resilient Decentralized Clock Network for Transaction
Ordering
| null | null | null | null |
cs.DC cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional blockchain design gives miners or validators full control over
transaction ordering, i.e.,~they can freely choose which transactions to
include or exclude, as well as in which order. While not an issue initially,
the emergence of decentralized finance has introduced new transaction order
dependencies allowing parties in control of the ordering to make a profit by
front-running others' transactions.
In this work, we present the Decentralized Clock Network, a new approach for
achieving fair transaction ordering. Users submit their transactions to the
network's clocks, which run an agreement protocol that provides each
transaction with a timestamp of receipt which is then used to define the
transactions' order. By separating agreement from ordering, our protocol is
efficient and has a simpler design compared to other available solutions.
Moreover, our protocol brings to the blockchain world the paradigm of
asynchronous fallback, where the algorithm operates with stronger fairness
guarantees during periods of synchronous use, switching to an asynchronous mode
only during times of increased network delay.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 06:59:41 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Constantinescu",
"Andrei",
""
],
[
"Ghinea",
"Diana",
""
],
[
"Heimbach",
"Lioba",
""
],
[
"Wang",
"Zilin",
""
],
[
"Wattenhofer",
"Roger",
""
]
] |
new_dataset
| 0.996834 |
2305.05302
|
Eliya Habba
|
Eliya Habba, Renana Keydar, Dan Bareket, Gabriel Stanovsky
|
The Perfect Victim: Computational Analysis of Judicial Attitudes towards
Victims of Sexual Violence
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We develop computational models to analyze court statements in order to
assess judicial attitudes toward victims of sexual violence in the Israeli
court system. The study examines the resonance of "rape myths" in the criminal
justice system's response to sex crimes, in particular in judicial assessment
of victim's credibility. We begin by formulating an ontology for evaluating
judicial attitudes toward victim's credibility, with eight ordinal labels and
binary categorizations. Second, we curate a manually annotated dataset for
judicial assessments of victim's credibility in the Hebrew language, as well as
a model that can extract credibility labels from court cases. The dataset
consists of 855 verdict decision documents in sexual assault cases from
1990-2021, annotated with the help of legal experts and trained law students.
The model uses a combined approach of syntactic and latent structures to find
sentences that convey the judge's attitude towards the victim and classify them
according to the credibility label set. Our ontology, data, and models will be
made available upon request, in the hope they spur future progress in this
important judicial task.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 09:45:44 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Habba",
"Eliya",
""
],
[
"Keydar",
"Renana",
""
],
[
"Bareket",
"Dan",
""
],
[
"Stanovsky",
"Gabriel",
""
]
] |
new_dataset
| 0.998601 |
2305.05303
|
Ilias Dimitriadis
|
Efstratios Voulgaris, Ilias Dimitriadis, Dimitrios P. Giakatos, Athena
Vakali, Athanasios Papakonstantinou, Dimitris Chatzigiannis
|
ENCOVIZ: An open-source, secure and multi-role energy consumption
visualisation platform
|
5 pages, 4 figures
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The need for a more energy efficient future is now more evident than ever and
has led to the continuous growth of sectors with greater potential for energy
savings, such as smart buildings, energy consumption meters, etc. The large
volume of energy related data produced is a huge advantage but, at the same
time, it creates a new problem: the need to structure, organize and efficiently
present this meaningful information. In this context, we present the ENCOVIZ
platform, a multi-role, extensible, secure, energy consumption visualization
platform with built-in analytics. ENCOVIZ has been built in accordance with the
best visualisation practices, on top of open source technologies and includes
(i) multi-role functionalities, (ii) the automated ingestion of energy
consumption data and (iii) proper visualisations and information to support
effective decision making both for energy providers and consumers.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 09:48:09 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Voulgaris",
"Efstratios",
""
],
[
"Dimitriadis",
"Ilias",
""
],
[
"Giakatos",
"Dimitrios P.",
""
],
[
"Vakali",
"Athena",
""
],
[
"Papakonstantinou",
"Athanasios",
""
],
[
"Chatzigiannis",
"Dimitris",
""
]
] |
new_dataset
| 0.999728 |
2305.05317
|
Xia Wu
|
X. Wu, W. Lu, X. P. Qin, X. W. Cao
|
Minimal Linear Codes Constructed from hierarchical posets with two
levels
|
arXiv admin note: text overlap with arXiv:1911.11632,
arXiv:1911.07648
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
J. Y. Hyun, et al. (Des. Codes Cryptogr., vol. 88, pp. 2475-2492, 2020)
constructed some optimal and minimal binary linear codes generated by one or
two order ideals in hierarchical posets of two levels. At the end of their
paper, they left an open problem: it should also be interesting to investigate
the cases of more than two orders in hierarchical posets with two levels or
many levels. In this paper, we use the geometric method to determine the
minimality of linear codes generated by any orders in hierarchical posets with
two levels. We generalize their cases of one or two orders to any orders and
determine the minimality of the linear codes completely.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 10:12:17 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wu",
"X.",
""
],
[
"Lu",
"W.",
""
],
[
"Qin",
"X. P.",
""
],
[
"Cao",
"X. W.",
""
]
] |
new_dataset
| 0.998347 |
2305.05320
|
Xia Wu
|
W. Lu, X. Wu, X. W. Cao, G. J. Luo, X. P. Qin
|
Minimal Linear Codes Constructed from partial spreads
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Partial spreads are important in finite geometry and can be used to construct
linear codes.
From the results in (Designs, Codes and Cryptography 90:1-15, 2022) by Xia
Li, Qin Yue and Deng Tang, we know that if the number of the elements in a
partial spread is "big enough", then the corresponding linear code is minimal.
They used the sufficient condition in (IEEE Trans. Inf. Theory 44(5):
2010-2017, 1998) to prove the minimality of such linear codes. In this paper,
we use the geometric approach to study the minimality of linear codes
constructed from partial spreads in all cases.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 10:12:28 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Lu",
"W.",
""
],
[
"Wu",
"X.",
""
],
[
"Cao",
"X. W.",
""
],
[
"Luo",
"G. J.",
""
],
[
"Qin",
"X. P.",
""
]
] |
new_dataset
| 0.974518 |
2305.05340
|
Luca Mariot
|
Luca Mariot and Federico Mazzone
|
On the Minimum Distance of Subspace Codes Generated by Linear Cellular
Automata
|
14 pages, 1 figure. Submitted to AUTOMATA 2023
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by applications to noncoherent network coding, we study subspace
codes defined by sets of linear cellular automata (CA). As a first remark, we
show that a family of linear CA where the local rules have the same diameter --
and thus the associated polynomials have the same degree -- induces a
Grassmannian code. Then, we prove that the minimum distance of such a code is
determined by the maximum degree occurring among the pairwise greatest common
divisors (GCD) of the polynomials in the family. Finally, we consider the
setting where all such polynomials have the same GCD, and determine the
cardinality of the corresponding Grassmannian code. As a particular case, we
show that if all polynomials in the family are pairwise coprime, the resulting
Grassmannian code has the highest minimum distance possible.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 11:03:03 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Mariot",
"Luca",
""
],
[
"Mazzone",
"Federico",
""
]
] |
new_dataset
| 0.990816 |
2305.05377
|
David Noever
|
David Noever and Matt Ciolino
|
Professional Certification Benchmark Dataset: The First 500 Jobs For
Large Language Models
| null | null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The research creates a professional certification survey to test large
language models and evaluate their employable skills. It compares the
performance of two AI models, GPT-3 and Turbo-GPT3.5, on a benchmark dataset of
1149 professional certifications, emphasizing vocational readiness rather than
academic performance. GPT-3 achieved a passing score (>70% correct) in 39% of
the professional certifications without fine-tuning or exam preparation. The
models demonstrated qualifications in various computer-related fields, such as
cloud and virtualization, business analytics, cybersecurity, network setup and
repair, and data analytics. Turbo-GPT3.5 scored 100% on the valuable Offensive
Security Certified Professional (OSCP) exam. The models also displayed
competence in other professional domains, including nursing, licensed
counseling, pharmacy, and teaching. Turbo-GPT3.5 passed the Financial Industry
Regulatory Authority (FINRA) Series 6 exam with a 70% grade without
preparation. Interestingly, Turbo-GPT3.5 performed well on customer service
tasks, suggesting potential applications in human augmentation for chatbots in
call centers and routine advice services. The models also score well on sensory
and experience-based tests such as wine sommelier, beer taster, emotional
quotient, and body language reader. The OpenAI model improvement from Babbage
to Turbo resulted in a median 60% better-graded performance in less than a few
years. This progress suggests that focusing on the latest model's shortcomings
could lead to a highly performant AI capable of mastering the most demanding
professional certifications. We open-source the benchmark to expand the range
of testable professional skills as the models improve or gain emergent
capabilities.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2023 00:56:58 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Noever",
"David",
""
],
[
"Ciolino",
"Matt",
""
]
] |
new_dataset
| 0.999817 |
2305.05383
|
Shuai Lu
|
Chenxiao Liu, Shuai Lu, Weizhu Chen, Daxin Jiang, Alexey Svyatkovskiy,
Shengyu Fu, Neel Sundaresan and Nan Duan
|
Code Execution with Pre-trained Language Models
|
Accepted to the Findings of ACL 2023
| null | null | null |
cs.PL cs.AI cs.CL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code execution is a fundamental aspect of programming language semantics that
reflects the exact behavior of the code. However, most pre-trained models for
code intelligence ignore the execution trace and only rely on source code and
syntactic structures. In this paper, we investigate how well pre-trained models
can understand and perform code execution. We develop a mutation-based data
augmentation technique to create a large-scale and realistic Python dataset and
task for code execution, which challenges existing models such as Codex. We
then present CodeExecutor, a Transformer model that leverages code execution
pre-training and curriculum learning to enhance its semantic comprehension. We
evaluate CodeExecutor on code execution and show its promising performance and
limitations. We also demonstrate its potential benefits for code intelligence
tasks such as zero-shot code-to-code search and text-to-code generation. Our
analysis provides insights into the learning and generalization abilities of
pre-trained models for code execution.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 10:00:05 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Liu",
"Chenxiao",
""
],
[
"Lu",
"Shuai",
""
],
[
"Chen",
"Weizhu",
""
],
[
"Jiang",
"Daxin",
""
],
[
"Svyatkovskiy",
"Alexey",
""
],
[
"Fu",
"Shengyu",
""
],
[
"Sundaresan",
"Neel",
""
],
[
"Duan",
"Nan",
""
]
] |
new_dataset
| 0.999495 |
2305.05390
|
Jincenzi Wu
|
Jincenzi Wu, Zhuang Chen, Jiawen Deng, Sahand Sabour, Minlie Huang
|
COKE: A Cognitive Knowledge Graph for Machine Theory of Mind
|
Work in progress
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Theory of mind (ToM) refers to humans' ability to understand and infer the
desires, beliefs, and intentions of others. The acquisition of ToM plays a key
role in humans' social cognition and interpersonal relations. Though
indispensable for social intelligence, ToM is still lacking for modern AI and
NLP systems since they cannot access the human mental state and cognitive
process beneath the training corpus. To empower AI systems with the ToM ability
and narrow the gap between them and humans, in this paper, we propose COKE: the
first cognitive knowledge graph for machine theory of mind. Specifically, COKE
formalizes ToM as a collection of 45k+ manually verified cognitive chains that
characterize human mental activities and subsequent behavioral/affective
responses when facing specific social circumstances. Beyond that, we further
generalize COKE using pre-trained language models and build a powerful
cognitive generation model COKE+. Experimental results in both automatic and
human evaluation demonstrate the high quality of COKE and the superior ToM
ability of COKE+.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 12:36:58 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wu",
"Jincenzi",
""
],
[
"Chen",
"Zhuang",
""
],
[
"Deng",
"Jiawen",
""
],
[
"Sabour",
"Sahand",
""
],
[
"Huang",
"Minlie",
""
]
] |
new_dataset
| 0.998513 |
2305.05417
|
Moritz Laupichler
|
Moritz Laupichler and Peter Sanders
|
Fast Many-to-Many Routing for Ridesharing with Multiple Pickup and
Dropoff Locations
|
29 pages, 6 figures, 5 tables Submitted to the European Symposium on
Algorithms (ESA) 2023
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce KaRRi, an improved algorithm for scheduling a fleet of shared
vehicles as it is used by services like UberXShare and Lyft Shared. We speed up
the basic online algorithm that looks for all possible insertions of a new
customer into a set of existing routes, we generalize the objective function,
and efficiently support a large number of possible pick-up and drop-off
locations. This lays an algorithmic foundation for ridesharing systems with
higher vehicle occupancy -- enabling greatly reduced cost and ecological impact
at comparable service quality. We find that our algorithm computes assignments
between vehicles and riders several times faster than a previous
state-of-the-art approach. Further, we observe that allowing meeting points for
vehicles and riders can reduce the operating cost of vehicle fleets by up to
$15\%$ while also reducing passenger wait and trip times.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 13:05:10 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Laupichler",
"Moritz",
""
],
[
"Sanders",
"Peter",
""
]
] |
new_dataset
| 0.997474 |
2305.05421
|
Iris de G\'elis
|
Iris de G\'elis (1 and 2), S\'ebastien Lef\`evre (2) and Thomas
Corpetti (3) ((1) Magellium, (2) Institut de Recherche en Informatique et
Syst\`emes Al\'eatoires IRISA - UMR 6074 - Universit\'e Bretagne Sud, (3)
Littoral - Environnement - T\'el\'ed\'etection - G\'eomatique LETG - UMR 6554
- Universit\'e Rennes 2)
|
DC3DCD: unsupervised learning for multiclass 3D point cloud change
detection
|
This work has been submitted to Elsevier for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a constant evolving world, change detection is of prime importance to keep
updated maps. To better sense areas with complex geometry (urban areas in
particular), considering 3D data appears to be an interesting alternative to
classical 2D images. In this context, 3D point clouds (PCs) obtained by LiDAR
or photogrammetry are very interesting. While recent studies showed the
considerable benefit of using deep learning-based methods to detect and
characterize changes into raw 3D PCs, these studies rely on large annotated
training data to obtain accurate results. Collecting these annotations is
tricky and time-consuming. The availability of unsupervised or weakly
supervised approaches is then of prime interest. In this paper, we propose an
unsupervised method, called DeepCluster 3D Change Detection (DC3DCD), to detect
and categorize multiclass changes at point level. We classify our approach as
unsupervised because it extracts, in a completely unsupervised way, a number of
clusters associated with potential changes. At the end of the process, the user
only has to assign a label to each of these clusters to derive the final change
map. Our method builds upon
the DeepCluster approach, originally designed for image classification, to
handle complex raw 3D PCs and perform the change segmentation task. An
assessment of the method on both simulated and real public datasets is
provided. The proposed method outperforms a fully-supervised traditional
machine learning algorithm and is competitive with fully-supervised deep
learning networks applied to rasterizations of 3D PCs, with a mean IoU over classes of
change of 57.06% and 66.69% for the simulated and the real datasets,
respectively.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 13:13:53 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"de Gélis",
"Iris",
"",
"1 and 2"
],
[
"Lefèvre",
"Sébastien",
""
],
[
"Corpetti",
"Thomas",
""
]
] |
new_dataset
| 0.99755 |
2305.05432
|
Andrea Burns
|
Andrea Burns, Krishna Srinivasan, Joshua Ainslie, Geoff Brown, Bryan
A. Plummer, Kate Saenko, Jianmo Ni, Mandy Guo
|
WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset
|
Accepted at the WikiWorkshop 2023. Data is readily available at
https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md. arXiv
admin note: text overlap with arXiv:2305.03668
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Webpages have been a rich resource for language and vision-language tasks.
Yet only pieces of webpages are kept: image-caption pairs, long text articles,
or raw HTML, never all in one place. As a result, webpage tasks have received
little attention and structured image-text data has been underused. To study multimodal
webpage understanding, we introduce the Wikipedia Webpage 2M (WikiWeb2M) suite;
the first to retain the full set of images, text, and structure data available
in a page. WikiWeb2M can be used for tasks like page description generation,
section summarization, and contextual image captioning.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 13:20:59 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Burns",
"Andrea",
""
],
[
"Srinivasan",
"Krishna",
""
],
[
"Ainslie",
"Joshua",
""
],
[
"Brown",
"Geoff",
""
],
[
"Plummer",
"Bryan A.",
""
],
[
"Saenko",
"Kate",
""
],
[
"Ni",
"Jianmo",
""
],
[
"Guo",
"Mandy",
""
]
] |
new_dataset
| 0.999898 |
2305.05455
|
Shengkai Lin
|
Shengkai Lin, Peirui Cao, Tianyi Huang, Shizhen Zhao, Quan Tian, Qi
Wu, Donghai Han, Xinbing Wang, Chenghu Zhou
|
XMasq: Low-Overhead Container Overlay Network Based on eBPF
| null | null | null | null |
cs.NI cs.OS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recent years have witnessed a widespread adoption of containers in cloud
computing. While containers simplify and accelerate application development,
the existing container network technologies either incur significant overhead,
which hurts performance for distributed applications, or lose flexibility or
universality, which hinders the widespread deployment in production.
We design and implement XMasq, an eBPF-based container overlay network, to
eliminate the extra overhead while keeping flexibility and universality. We
take full advantage of eBPF and design a cache-based network virtualization
mechanism and a redirect-based intra-host data path in XMasq. XMasq closes the
performance gap between overlay networks and host networks. Compared to
standard overlay networks, XMasq improves the TCP throughput by 18% and the
Request-Response transaction rate by 101%; XMasq also reduces the latency of
Memcached by 28.3%, PostgreSQL by 14.6% and Nginx by 29%. Compared to container
native-routing networks, XMasq does not require the underlay network to be
able to forward packets using container IPs. Compared to Slim, which only supports
TCP traffic, XMasq is protocol independent and thus all the applications can
benefit from XMasq. We deploy XMasq as a plugin of Antrea, which is a Container
Network Interface (CNI).
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 10:15:06 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Lin",
"Shengkai",
""
],
[
"Cao",
"Peirui",
""
],
[
"Huang",
"Tianyi",
""
],
[
"Zhao",
"Shizhen",
""
],
[
"Tian",
"Quan",
""
],
[
"Wu",
"Qi",
""
],
[
"Han",
"Donghai",
""
],
[
"Wang",
"Xinbing",
""
],
[
"Zhou",
"Chenghu",
""
]
] |
new_dataset
| 0.990825 |
2305.05456
|
Ravi Tejwani
|
Ravi Tejwani, Chengyuan Ma, Paolo Bonato and H. Harry Asada
|
Language Control in Robotics
| null | null | null | null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
For robots performing assistive tasks for humans, it is crucial to
synchronize their speech with their motions in order to achieve natural and
effective human-robot interaction. When a robot's speech is out of sync with
its motions, it can cause confusion, frustration, and misinterpretation of
the robot's intended meaning. Humans are accustomed to using both verbal and
nonverbal cues to understand and coordinate with each other, and robots that
can align their speech with their actions can tap into this natural mode of
communication. In this research, we propose a language controller for robots to
control the pace, tone, and pauses of their speech along with their motion
along the trajectory. The robot's speed is adjusted using an admittance controller
based on the force input from the user, and the robot's speech speed is
modulated using phase-vocoders.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 06:17:25 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Tejwani",
"Ravi",
""
],
[
"Ma",
"Chengyuan",
""
],
[
"Bonato",
"Paolo",
""
],
[
"Asada",
"H. Harry",
""
]
] |
new_dataset
| 0.986649 |
2305.05486
|
Piotr Rybak
|
Piotr Rybak
|
MAUPQA: Massive Automatically-created Polish Question Answering Dataset
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, open-domain question answering systems have begun to rely heavily
on annotated datasets to train neural passage retrievers. However, manually
annotating such datasets is both difficult and time-consuming, which limits
their availability for less popular languages. In this work, we experiment with
several methods for automatically collecting weakly labeled datasets and show
how they affect the performance of the neural passage retrieval models. As a
result of our work, we publish the MAUPQA dataset, consisting of nearly 400,000
question-passage pairs for Polish, as well as the HerBERT-QA neural retriever.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 14:36:04 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Rybak",
"Piotr",
""
]
] |
new_dataset
| 0.977737 |
2305.05507
|
Saul Youssef
|
Saul Youssef
|
Pure Data Foundation of Mathematics and Computing
|
14 pages
| null | null | null |
cs.DC math.CT math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose an axiomatic foundation of mathematics based on the finite
sequence as the foundational concept, rather than based on logic and set, as in
set theory, or based on type as in dependent type theories. Finite sequences
lead to a concept of pure data, which is used to represent all mathematical
objects. As an axiomatic system, the foundation has only one axiom which
defines what constitutes a valid definition. Using the axiom, an internal
true/false/undecided valued logic and an internal language are defined, making
logic and language-related axioms unnecessary. Valid proof and valid
computation are defined in terms of equality of pure data. An algebra of pure
data leads to a rich theory of spaces and morphisms which play a role similar
to the role of Category Theory in modern Mathematics. As applications, we
explore Mathematical Machine Learning, the consistency of Mathematics and
address paradoxes due to Godel, Berry, Curry and Yablo.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 14:56:36 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Youssef",
"Saul",
""
]
] |
new_dataset
| 0.985846 |
2305.05508
|
Sigrid Dimce
|
Sigrid Dimce, Anatolij Zubow, Alireza Bayesteh, Giuseppe Caire, and
Falko Dressler
|
Practical Channel Splicing using OFDM Waveforms for Joint Communication
and Sensing in the IoT
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Channel splicing is a rather new and very promising concept. It makes it
possible to realize a wideband channel sounder by combining multiple
narrow-band measurements. Among others, channel splicing is a sparse sensing
technique
suggested for use in joint communication and sensing (JCAS), channel
measurements and prediction using cheap hardware that cannot measure wideband
channels directly such as in the internet of things (IoT). This work validates
the practicality of a channel splicing technique by integrating it into an
OFDM-based IEEE 802.11ac system, which we consider representative for many IoT
solutions. Our system allows computing both the channel impulse response (CIR)
and the channel frequency response (CFR). In this paper, we concentrate on the
impact of the number of sub-bands in our study and show that even using only
50% of the overall spectrum leads to very accurate CIR measures. We validate
the system in simulation and confirm the results in an experimental in-door
scenario using software defined radios.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 14:57:12 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Dimce",
"Sigrid",
""
],
[
"Zubow",
"Anatolij",
""
],
[
"Bayesteh",
"Alireza",
""
],
[
"Caire",
"Giuseppe",
""
],
[
"Dressler",
"Falko",
""
]
] |
new_dataset
| 0.991501 |
2305.05552
|
Samuel Lensgraf
|
Samuel Lensgraf, Devin Balkcom, Alberto Quattrini Li
|
Buoyancy enabled autonomous underwater construction with cement blocks
|
Accepted at ICRA 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present the first free-floating autonomous underwater construction system
capable of using active ballasting to transport cement building blocks
efficiently. It is the first free-floating autonomous construction robot to use
a paired set of resources: compressed air for buoyancy and a battery for
thrusters. In construction trials, our system built structures of up to 12
components and weighing up to 100Kg (75Kg in water). Our system achieves this
performance by combining a novel one-degree-of-freedom manipulator, a novel
two-component cement block construction system that corrects errors in
placement, and a simple active ballasting system combined with compliant
placement and grasp behaviors. The passive error correcting components of the
system minimize the required complexity in sensing and control. We also explore
the problem of buoyancy allocation for building structures at scale by defining
a convex program which allocates buoyancy to minimize the predicted energy cost
for transporting blocks.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 15:43:47 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Lensgraf",
"Samuel",
""
],
[
"Balkcom",
"Devin",
""
],
[
"Li",
"Alberto Quattrini",
""
]
] |
new_dataset
| 0.987575 |
2305.05566
|
Adam Michalski
|
Adam Michalski, Filippos Christianos, Stefano V. Albrecht
|
SMAClite: A Lightweight Environment for Multi-Agent Reinforcement
Learning
| null | null | null | null |
cs.LG cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a lack of standard benchmarks for Multi-Agent Reinforcement Learning
(MARL) algorithms. The Starcraft Multi-Agent Challenge (SMAC) has been widely
used in MARL research, but is built on top of a heavy, closed-source computer
game, StarCraft II. Thus, SMAC is computationally expensive and requires
knowledge and the use of proprietary tools specific to the game for any
meaningful alteration or contribution to the environment. We introduce SMAClite
-- a challenge based on SMAC that is both decoupled from Starcraft II and
open-source, along with a framework which makes it possible to create new
content for SMAClite without any special knowledge. We conduct experiments to
show that SMAClite is equivalent to SMAC, by training MARL algorithms on
SMAClite and reproducing SMAC results. We then show that SMAClite outperforms
SMAC in both runtime speed and memory.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 15:55:19 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Michalski",
"Adam",
""
],
[
"Christianos",
"Filippos",
""
],
[
"Albrecht",
"Stefano V.",
""
]
] |
new_dataset
| 0.999513 |
2305.05572
|
Md. Masudur Rahman
|
Md. Masudur Rahman, Toukir Ahammed, Md. Mahbubul Alam Joarder and Kazi
Sakib
|
Does Code Smell Frequency Have a Relationship with Fault-proneness?
|
6 pages, 2 figures, 3 tables; EASE 2023 Conference (Accepted as
poster track): https://doi.org/10.1145/3593434.3593457
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Fault-proneness is an indication of programming errors that decreases
software quality and maintainability. On the contrary, code smell is a symptom
of potential design problems which has impact on fault-proneness. In the
literature, the negative impact of code smells on fault-proneness has been
investigated. However, it is still unclear how the frequency of each code
smell type impacts fault-proneness. To mitigate this research gap, we
present an empirical study to identify whether the frequency of individual code
smell types has a relationship with fault-proneness. More specifically, we
identify 13 code smell types and fault-proneness of the corresponding smelly
classes in the well-known open source systems from Apache and Eclipse
ecosystems. Then we analyse the relationship between their frequencies of
occurrence based on correlation. The results show that Anti Singleton,
Blob and Class Data Should Be Private smell types have strong relationship with
fault-proneness though their frequencies are not very high. On the other hand,
comparatively high frequent code smell types such as Complex Class, Large Class
and Long Parameter List have moderate relationship with fault-proneness. These
findings will assist developers to prioritize code smells while performing
refactoring activities in order to improve software quality.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 17:38:31 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Rahman",
"Md. Masudur",
""
],
[
"Ahammed",
"Toukir",
""
],
[
"Joarder",
"Md. Mahbubul Alam",
""
],
[
"Sakib",
"Kazi",
""
]
] |
new_dataset
| 0.96834 |
2305.05592
|
Ela Liberman Pincu
|
Ela Liberman-Pincu and Tal Oron-Gilad
|
A Robotic Medical Clown (RMC): Forming a Design Space Model
|
Working paper based on the poster presented at ICRA 2023
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Medical clowns help hospitalized children in reducing pain and anxiety
symptoms and increase the level of satisfaction in children's wards.
Unfortunately, there is a shortage of medical clowns around the world.
Furthermore, isolated children cannot enjoy this service. This study explored
the concept of a Robotic Medical Clown (RMC) and its role. We used mixed
methods of elicitation to create a design space model for future robotic
medical clowns. We investigated the needs, perceptions, and preferences of
children and teenagers using four methods: interviewing medical clowns to learn
how they perceive their role and the potential role of an RMC, conducting focus
groups with teenagers, a one-on-one experience of children with a robot, and an
online questionnaire. The concept of RMCs was acceptable to children,
teenagers, and medical clowns. We found that the RMC's appearance affects the
perception of its characters and role. Future work should investigate the
interaction in hospitals.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 16:31:36 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Liberman-Pincu",
"Ela",
""
],
[
"Oron-Gilad",
"Tal",
""
]
] |
new_dataset
| 0.998527 |
2305.05594
|
Yiqun Wang
|
Yiqun Wang, Ivan Skorokhodov, Peter Wonka
|
PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces
|
CVPR 2023; 20 Pages; Project page:
\url{https://github.com/yiqun-wang/PET-NeuS}
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A signed distance function (SDF) parametrized by an MLP is a common
ingredient of neural surface reconstruction. We build on the successful recent
method NeuS to extend it by three new components. The first component is to
borrow the tri-plane representation from EG3D and represent signed distance
fields as a mixture of tri-planes and MLPs instead of representing it with MLPs
only. Using tri-planes leads to a more expressive data structure but will also
introduce noise in the reconstructed surface. The second component is to use a
new type of positional encoding with learnable weights to combat noise in the
reconstruction process. We divide the features in the tri-plane into multiple
frequency scales and modulate them with sin and cos functions of different
frequencies. The third component is to use learnable convolution operations on
the tri-plane features using self-attention convolution to produce features
with different frequency bands. The experiments show that PET-NeuS achieves
high-fidelity surface reconstruction on standard datasets. Following previous
work and using the Chamfer metric as the most important way to measure surface
reconstruction quality, we are able to improve upon the NeuS baseline by 57% on
Nerf-synthetic (0.84 compared to 1.97) and by 15.5% on DTU (0.71 compared to
0.84). The qualitative evaluation reveals how our method can better control the
interference of high-frequency noise. Code available at
\url{https://github.com/yiqun-wang/PET-NeuS}.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 16:35:39 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wang",
"Yiqun",
""
],
[
"Skorokhodov",
"Ivan",
""
],
[
"Wonka",
"Peter",
""
]
] |
new_dataset
| 0.999465 |
2305.05651
|
Mikhail Papkov
|
Mikhail Papkov and Pavel Chizhov
|
SwinIA: Self-Supervised Blind-Spot Image Denoising with Zero
Convolutions
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The essence of self-supervised image denoising is to restore the signal from
the noisy image alone. State-of-the-art solutions for this task rely on the
idea of masking pixels and training a fully-convolutional neural network to
impute them. This most often requires multiple forward passes, information
about the noise model, and intricate regularization functions. In this paper,
we propose a Swin Transformer-based Image Autoencoder (SwinIA), the first
convolution-free architecture for self-supervised denoising. It can be trained
end-to-end with a simple mean squared error loss without masking and does not
require any prior knowledge about clean data or noise distribution. Despite its
simplicity, SwinIA establishes state-of-the-art on several common benchmarks.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 17:49:27 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Papkov",
"Mikhail",
""
],
[
"Chizhov",
"Pavel",
""
]
] |
new_dataset
| 0.980316 |
2305.05658
|
Jimmy Wu
|
Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran
Song, Jeannette Bohg, Szymon Rusinkiewicz, Thomas Funkhouser
|
TidyBot: Personalized Robot Assistance with Large Language Models
|
Project page: https://tidybot.cs.princeton.edu
| null | null | null |
cs.RO cs.AI cs.CL cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a robot to personalize physical assistance effectively, it must learn
user preferences that can be generally reapplied to future scenarios. In this
work, we investigate personalization of household cleanup with robots that can
tidy up rooms by picking up objects and putting them away. A key challenge is
determining the proper place to put each object, as people's preferences can
vary greatly depending on personal taste or cultural background. For instance,
one person may prefer storing shirts in the drawer, while another may prefer
them on the shelf. We aim to build systems that can learn such preferences from
just a handful of examples via prior interactions with a particular person. We
show that robots can combine language-based planning and perception with the
few-shot summarization capabilities of large language models (LLMs) to infer
generalized user preferences that are broadly applicable to future
interactions. This approach enables fast adaptation and achieves 91.2% accuracy
on unseen objects in our benchmark dataset. We also demonstrate our approach on
a real-world mobile manipulator called TidyBot, which successfully puts away
85.0% of objects in real-world test scenarios.
|
[
{
"version": "v1",
"created": "Tue, 9 May 2023 17:52:59 GMT"
}
] | 2023-05-10T00:00:00 |
[
[
"Wu",
"Jimmy",
""
],
[
"Antonova",
"Rika",
""
],
[
"Kan",
"Adam",
""
],
[
"Lepert",
"Marion",
""
],
[
"Zeng",
"Andy",
""
],
[
"Song",
"Shuran",
""
],
[
"Bohg",
"Jeannette",
""
],
[
"Rusinkiewicz",
"Szymon",
""
],
[
"Funkhouser",
"Thomas",
""
]
] |
new_dataset
| 0.974399 |
1706.06932
|
Deepak Garg
|
Abhishek Bichhawat and Vineet Rajani and Jinank Jain and Deepak Garg
and Christian Hammer
|
WebPol: Fine-grained Information Flow Policies for Web Browsers
|
ESORICS '17
| null |
10.1007/978-3-319-66402-6_15
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the standard web browser programming model, third-party scripts included
in an application execute with the same privilege as the application's own
code. This leaves the application's confidential data vulnerable to theft and
leakage by malicious code and inadvertent bugs in the third-party scripts.
Security mechanisms in modern browsers (the same-origin policy, cross-origin
resource sharing and content security policies) are too coarse to suit this
programming model. All these mechanisms (and their extensions) describe whether
or not a script can access certain data, whereas the meaningful requirement is
to allow untrusted scripts access to confidential data that they need and to
prevent the scripts from leaking data on the side. Motivated by this gap, we
propose WebPol, a policy mechanism that allows a website developer to include
fine-grained policies on confidential application data in the familiar syntax
of the JavaScript programming language. The policies can be associated with any
webpage element, and specify what aspects of the element can be accessed by
which third-party domains. A script can access data that the policy allows it
to, but it cannot pass the data (or data derived from it) to other scripts or
remote hosts in contravention of the policy. To specify the policies, we expose
a small set of new native APIs in JavaScript. Our policies can be enforced
using any of the numerous existing proposals for information flow tracking in
web browsers. We have integrated our policies into one such proposal that we
use to evaluate performance overheads and to test our examples.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2017 14:35:45 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2017 04:11:59 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Jun 2017 08:25:56 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Bichhawat",
"Abhishek",
""
],
[
"Rajani",
"Vineet",
""
],
[
"Jain",
"Jinank",
""
],
[
"Garg",
"Deepak",
""
],
[
"Hammer",
"Christian",
""
]
] |
new_dataset
| 0.999216 |
2107.00932
|
Conghao Wong
|
Conghao Wong, Beihao Xia, Qinmu Peng, Wei Yuan and Xinge You
|
MSN: Multi-Style Network for Trajectory Prediction
|
Accepted by IEEE Transactions on Intelligent Transportation Systems
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trajectory prediction aims to forecast agents' possible future locations
considering their observations along with the video context. It is strongly
needed by many autonomous platforms like tracking, detection, robot navigation,
and self-driving cars. Whether it is agents' internal personality factors,
interactive behaviors with the neighborhood, or the influence of surroundings,
they all impact agents' future planning. However, many previous methods model
and predict agents' behaviors with the same strategy or feature distribution,
making it challenging for them to make predictions with sufficient style
differences.
This paper proposes the Multi-Style Network (MSN), which utilizes style
proposal and stylized prediction sub-networks to adaptively provide multi-style
predictions in a novel categorical way. The proposed network
contains a series of style channels, and each channel is bound to a unique and
specific behavior style. We use agents' end-point plannings and their
interaction context as the basis for the behavior classification, so as to
adaptively learn multiple diverse behavior styles through these channels. Then,
we assume that the target agents may plan their future behaviors according to
each of these categorized styles, thus utilizing different style channels to
make predictions with significant style differences in parallel. Experiments
show that the proposed MSN outperforms current state-of-the-art methods up to
10% quantitatively on two widely used datasets, and presents better multi-style
characteristics qualitatively.
|
[
{
"version": "v1",
"created": "Fri, 2 Jul 2021 09:43:59 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Sep 2021 08:57:41 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Nov 2021 08:59:14 GMT"
},
{
"version": "v4",
"created": "Fri, 1 Jul 2022 03:30:05 GMT"
},
{
"version": "v5",
"created": "Mon, 8 May 2023 07:30:35 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Wong",
"Conghao",
""
],
[
"Xia",
"Beihao",
""
],
[
"Peng",
"Qinmu",
""
],
[
"Yuan",
"Wei",
""
],
[
"You",
"Xinge",
""
]
] |
new_dataset
| 0.985534 |
2202.10005
|
El\'ias Javier Garc\'ia Claro
|
E. J. Garc\'ia-Claro and I. S. Guti\'errez
|
On Grid Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Versions of the Hamming and Gilbert-Varshamov bounds for codes in
$\prod_{i=1}^{n}[0,m_{i}-1]$ with respect to the Manhattan distance are
presented. Given an abelian group $G$ isomorphic to $C_{m_{1}}\times \cdots
\times C_{m_{n}}$, the Hamming, Manhattan, and Lee distances are defined in
$G$; a formula for the minimum Hamming distance of codes that are cyclic
subgroups of $G$ is provided, and some lower bounds for the minimum Manhattan
distance of these codes are determined in terms of their minimum Hamming and
Lee distances. Examples illustrating the main results and an application of
these are provided.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 06:04:42 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 04:38:09 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Jul 2022 07:23:58 GMT"
},
{
"version": "v4",
"created": "Sat, 6 May 2023 00:38:52 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"García-Claro",
"E. J.",
""
],
[
"Gutiérrez",
"I. S.",
""
]
] |
new_dataset
| 0.99859 |
2206.05852
|
Ruslan Khalitov
|
Ruslan Khalitov, Tong Yu, Lei Cheng, Zhirong Yang
|
ChordMixer: A Scalable Neural Attention Model for Sequences with
Different Lengths
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Sequential data naturally have different lengths in many domains, with some
very long sequences. As an important modeling tool, neural attention should
capture long-range interaction in such sequences. However, most existing neural
attention models admit only short sequences, or they have to employ chunking or
padding to enforce a constant input length. Here we propose a simple neural
network building block called ChordMixer which can model the attention for long
sequences with variable lengths. Each ChordMixer block consists of a
position-wise rotation layer without learnable parameters and an element-wise
MLP layer. Repeatedly applying such blocks forms an effective network backbone
that mixes the input signals towards the learning targets. We have tested
ChordMixer on the synthetic adding problem, long document classification, and
DNA sequence-based taxonomy classification. The experiment results show that
our method substantially outperforms other neural attention models.
|
[
{
"version": "v1",
"created": "Sun, 12 Jun 2022 22:39:41 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2023 21:58:41 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Khalitov",
"Ruslan",
""
],
[
"Yu",
"Tong",
""
],
[
"Cheng",
"Lei",
""
],
[
"Yang",
"Zhirong",
""
]
] |
new_dataset
| 0.997684 |
2206.11825
|
Weisheng Li
|
Weisheng Li and Lin Huang
|
YOLOSA: Object detection based on 2D local feature superimposed
self-attention
|
This paper is under consideration at Pattern Recognition Letters
| null |
10.1016/j.patrec.2023.03.003
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyzed the network structure of real-time object detection models and
found that the features in the feature concatenation stage are very rich.
Applying an attention module here can effectively improve the detection
accuracy of the model. However, the commonly used attention module or
self-attention module shows poor performance in detection accuracy and
inference efficiency. Therefore, we propose a novel self-attention module,
called 2D local feature superimposed self-attention, for the feature
concatenation stage of the neck network. This self-attention module reflects
global features through local features and local receptive fields. We also
propose and optimize an efficient decoupled head and AB-OTA, and achieve SOTA
results. Average precisions of 49.0% (71FPS, 14ms), 46.1% (85FPS, 11.7ms), and
39.1% (107FPS, 9.3ms) were obtained for large, medium, and small-scale models
built using our proposed improvements. Our models exceeded YOLOv5 by 0.8% --
3.1% in average precision.
|
[
{
"version": "v1",
"created": "Thu, 23 Jun 2022 16:49:21 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 09:10:41 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Li",
"Weisheng",
""
],
[
"Huang",
"Lin",
""
]
] |
new_dataset
| 0.992693 |
2208.08289
|
Zongjie Li
|
Zongjie Li, Chaozheng Wang, Zhibo Liu, Haoxuan Wang, Dong Chen, Shuai
Wang, Cuiyun Gao
|
CCTEST: Testing and Repairing Code Completion Systems
|
13 pages, 10 figures, 5 tables. Accepted by ICSE 2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code completion, a highly valuable topic in the software development domain,
has been increasingly promoted for use by recent advances in large language
models (LLMs). To date, visible LLM-based code completion frameworks such as
GitHub Copilot and GPT are trained using deep learning over vast quantities of
unstructured text and open source code. As the paramount component and the
cornerstone in daily programming tasks, code completion has largely boosted
professionals' efficiency in building real-world software systems. In contrast
to this flourishing market, we find that code completion systems often output
suspicious results, and to date, an automated testing and enhancement framework
for code completion systems is not available. This research proposes CCTEST, a
framework to test and repair code completion systems in blackbox settings.
CCTEST features a set of novel mutation strategies, namely program
structure-correlated (PSC) mutations, to generate mutated code completion
inputs. Then, it detects inconsistent outputs, representing possibly erroneous
cases, from all the completed code cases. Moreover, CCTEST repairs the code
completion outputs by selecting the output that most closely reflects the "average"
appearance of all output cases, as the final output of the code completion
systems. We detected a total of 33,540 inputs (with a true positive rate of
86%) that can trigger erroneous cases from eight popular LLM-based code
completion systems. With repairing, we show that the accuracy of code
completion systems is notably increased by 40% and 67% with respect to BLEU
score and Levenshtein edit similarity.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 13:37:03 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Dec 2022 12:17:28 GMT"
},
{
"version": "v3",
"created": "Mon, 8 May 2023 13:01:08 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Li",
"Zongjie",
""
],
[
"Wang",
"Chaozheng",
""
],
[
"Liu",
"Zhibo",
""
],
[
"Wang",
"Haoxuan",
""
],
[
"Chen",
"Dong",
""
],
[
"Wang",
"Shuai",
""
],
[
"Gao",
"Cuiyun",
""
]
] |
new_dataset
| 0.997765 |
2209.09444
|
Liang Ding
|
Changtong Zan, Keqin Peng, Liang Ding, Baopu Qiu, Boan Liu, Shwai He,
Qingyu Lu, Zheng Zhang, Chuang Liu, Weifeng Liu, Yibing Zhan, Dacheng Tao
|
Vega-MT: The JD Explore Academy Translation System for WMT22
|
WMT 2022 (Among all constrained systems, Vega-MT won 7 champions, 2
runners-up and 1 third place w.r.t sacreBLEU, and won 8 champions and 2
runners-up w.r.t COMET.)
| null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
We describe the JD Explore Academy's submission of the WMT 2022 shared
general translation task. We participated in all high-resource tracks and one
medium-resource track, including Chinese-English, German-English,
Czech-English, Russian-English, and Japanese-English. We push the limit of our
previous work -- bidirectional training for translation by scaling up two main
factors, i.e. language pairs and model sizes, namely the \textbf{Vega-MT}
system. As for language pairs, we scale the "bidirectional" up to the
"multidirectional" settings, covering all participating languages, to exploit
the common knowledge across languages, and transfer them to the downstream
bilingual tasks. As for model sizes, we scale the Transformer-Big up to an
extremely large model with nearly 4.7 billion parameters, to fully enhance
the model capacity for our Vega-MT. Also, we adopt the data augmentation
strategies, e.g. cycle translation for monolingual data, and bidirectional
self-training for bilingual and monolingual data, to comprehensively exploit
the bilingual and monolingual data. To adapt our Vega-MT to the general domain
test set, generalization tuning is designed. Based on the official automatic
scores of constrained systems, in terms of the sacreBLEU shown in Figure-1, we
got the 1st place on {Zh-En (33.5), En-Zh (49.7), De-En (33.7), En-De (37.8),
Cs-En (54.9), En-Cs (41.4) and En-Ru (32.7)}, 2nd place on {Ru-En (45.1) and
Ja-En (25.6)}, and 3rd place on {En-Ja(41.5)}, respectively; W.R.T the COMET,
we got the 1st place on {Zh-En (45.1), En-Zh (61.7), De-En (58.0), En-De
(63.2), Cs-En (74.7), Ru-En (64.9), En-Ru (69.6) and En-Ja (65.1)}, 2nd place
on {En-Cs (95.3) and Ja-En (40.6)}, respectively.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 03:45:24 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Sep 2022 07:52:06 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Oct 2022 11:16:29 GMT"
},
{
"version": "v4",
"created": "Sat, 6 May 2023 08:11:17 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Zan",
"Changtong",
""
],
[
"Peng",
"Keqin",
""
],
[
"Ding",
"Liang",
""
],
[
"Qiu",
"Baopu",
""
],
[
"Liu",
"Boan",
""
],
[
"He",
"Shwai",
""
],
[
"Lu",
"Qingyu",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Liu",
"Chuang",
""
],
[
"Liu",
"Weifeng",
""
],
[
"Zhan",
"Yibing",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.998988 |
2210.00051
|
Jeremy Collins
|
Jeremy A. Collins, Patrick Grady, Charles C. Kemp
|
Force/Torque Sensing for Soft Grippers using an External Camera
|
Accepted for presentation at 2023 IEEE International Conference on
Robotics and Automation (ICRA)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic manipulation can benefit from wrist-mounted force/torque (F/T)
sensors, but conventional F/T sensors can be expensive, difficult to install,
and damaged by high loads. We present Visual Force/Torque Sensing (VFTS), a
method that visually estimates the 6-axis F/T measurement that would be
reported by a conventional F/T sensor. In contrast to approaches that sense
loads using internal cameras placed behind soft exterior surfaces, our approach
uses an external camera with a fisheye lens that observes a soft gripper. VFTS
includes a deep learning model that takes a single RGB image as input and
outputs a 6-axis F/T estimate. We trained the model with sensor data collected
while teleoperating a robot (Stretch RE1 from Hello Robot Inc.) to perform
manipulation tasks. VFTS outperformed F/T estimates based on motor currents,
generalized to a novel home environment, and supported three autonomous tasks
relevant to healthcare: grasping a blanket, pulling a blanket over a manikin,
and cleaning a manikin's limbs. VFTS also performed well with a manually
operated pneumatic gripper. Overall, our results suggest that an external
camera observing a soft gripper can perform useful visual force/torque sensing
for a variety of manipulation tasks.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 19:27:49 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Jan 2023 21:17:55 GMT"
},
{
"version": "v3",
"created": "Mon, 8 May 2023 01:05:36 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Collins",
"Jeremy A.",
""
],
[
"Grady",
"Patrick",
""
],
[
"Kemp",
"Charles C.",
""
]
] |
new_dataset
| 0.980101 |
2210.01627
|
Linus Nwankwo
|
Nwankwo Linus, Fritze Clemens, Konrad Bartsch, Elmar Rueckert
|
ROMR: A ROS-based Open-source Mobile Robot
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Currently, commercially available intelligent transport robots that are
capable of carrying up to 90kg of load can cost \$5,000 or even more. This
makes real-world experimentation prohibitively expensive and limits the
applicability of such systems to everyday home or industrial tasks. Aside from
their high cost, the majority of commercially available platforms are either
closed-source, platform-specific or use difficult-to-customize hardware and
firmware. In this work, we present a low-cost, open-source and modular
alternative, referred to herein as "ROS-based Open-source Mobile Robot
($ROMR$)". $ROMR$ utilizes off-the-shelf (OTS) components, additive
manufacturing technologies, aluminium profiles, and a consumer hoverboard with
high-torque brushless direct current (BLDC) motors. $ROMR$ is fully compatible
with the robot operating system (ROS), has a maximum payload of 90kg, and costs
less than \$1500. Furthermore, $ROMR$ offers a simple yet robust framework for
contextualizing simultaneous localization and mapping (SLAM) algorithms, an
essential prerequisite for autonomous robot navigation. The robustness and
performance of the $ROMR$ were validated through real-world and simulation
experiments. All the design, construction and software files are freely
available online under the GNU GPL v3 licence at
https://doi.org/10.17605/OSF.IO/K83X7. A descriptive video of $ROMR$ can be
found at https://osf.io/ku8ag.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 14:16:10 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 12:29:08 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Linus",
"Nwankwo",
""
],
[
"Clemens",
"Fritze",
""
],
[
"Bartsch",
"Konrad",
""
],
[
"Rueckert",
"Elmar",
""
]
] |
new_dataset
| 0.999743 |
2210.06887
|
Christopher E. Mower
|
Christopher E. Mower, Theodoros Stouraitis, Jo\~ao Moura, Christian
Rauch, Lei Yan, Nazanin Zamani Behabadi, Michael Gienger, Tom Vercauteren,
Christos Bergeles, Sethu Vijayakumar
|
ROS-PyBullet Interface: A Framework for Reliable Contact Simulation and
Human-Robot Interaction
| null | null | null |
https://proceedings.mlr.press/v205/mower23a.html
|
cs.RO cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Reliable contact simulation plays a key role in the development of
(semi-)autonomous robots, especially when dealing with contact-rich
manipulation scenarios, an active robotics research topic. Besides simulation,
components such as sensing, perception, data collection, robot hardware
control, human interfaces, etc. are all key enablers towards applying machine
learning algorithms or model-based approaches in real world systems. However,
there is a lack of software connecting reliable contact simulation with the
larger robotics ecosystem (i.e. ROS, Orocos), for a more seamless application
of novel approaches, found in the literature, to existing robotic hardware. In
this paper, we present the ROS-PyBullet Interface, a framework that provides a
bridge between the reliable contact/impact simulator PyBullet and the Robot
Operating System (ROS). Furthermore, we provide additional utilities for
facilitating Human-Robot Interaction (HRI) in the simulated environment. We
also present several use-cases that highlight the capabilities and usefulness
of our framework. Please check our video, source code, and examples included in
the supplementary material. Our full code base is open source and can be found
at https://github.com/cmower/ros_pybullet_interface.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 10:31:36 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Mower",
"Christopher E.",
""
],
[
"Stouraitis",
"Theodoros",
""
],
[
"Moura",
"João",
""
],
[
"Rauch",
"Christian",
""
],
[
"Yan",
"Lei",
""
],
[
"Behabadi",
"Nazanin Zamani",
""
],
[
"Gienger",
"Michael",
""
],
[
"Vercauteren",
"Tom",
""
],
[
"Bergeles",
"Christos",
""
],
[
"Vijayakumar",
"Sethu",
""
]
] |
new_dataset
| 0.956895 |
2210.09245
|
Haoming Li
|
Haoming Li, Xinzhuo Lin, Yang Zhou, Xiang Li, Yuchi Huo, Jiming Chen
and Qi Ye
|
Contact2Grasp: 3D Grasp Synthesis via Hand-Object Contact Constraint
|
Accepted at IJCAI 2023
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D grasp synthesis generates grasping poses given an input object. Existing
works tackle the problem by learning a direct mapping from objects to the
distributions of grasping poses. However, because the physical contact is
sensitive to small changes in pose, the high-nonlinear mapping between 3D
object representation to valid poses is considerably non-smooth, leading to
poor generation efficiency and restricted generality. To tackle the challenge,
we introduce an intermediate variable for grasp contact areas to constrain the
grasp generation; in other words, we factorize the mapping into two sequential
stages by assuming that grasping poses are fully constrained given contact
maps: 1) we first learn contact map distributions to generate the potential
contact maps for grasps; 2) then learn a mapping from the contact maps to the
grasping poses. Further, we propose a penetration-aware optimization with the
generated contacts as a consistency constraint for grasp refinement. Extensive
validations on two public datasets show that our method outperforms
state-of-the-art methods regarding grasp generation on various metrics.
|
[
{
"version": "v1",
"created": "Mon, 17 Oct 2022 16:39:25 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2023 10:38:52 GMT"
},
{
"version": "v3",
"created": "Sat, 6 May 2023 07:53:13 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Li",
"Haoming",
""
],
[
"Lin",
"Xinzhuo",
""
],
[
"Zhou",
"Yang",
""
],
[
"Li",
"Xiang",
""
],
[
"Huo",
"Yuchi",
""
],
[
"Chen",
"Jiming",
""
],
[
"Ye",
"Qi",
""
]
] |
new_dataset
| 0.999838 |
2210.10842
|
Yuhao Chen
|
Yuhao Chen, Hayden Gunraj, E. Zhixuan Zeng, Robbie Meyer, Maximilian
Gilles, Alexander Wong
|
MMRNet: Improving Reliability for Multimodal Object Detection and
Segmentation for Bin Picking via Multimodal Redundancy
|
Accepted to CVPR TCV Workshop
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, there has been tremendous interest in industry 4.0 infrastructure
to address labor shortages in global supply chains. Deploying artificial
intelligence-enabled robotic bin picking systems in the real world has become
particularly important for reducing stress and physical demands of workers
while increasing speed and efficiency of warehouses. To this end, artificial
intelligence-enabled robotic bin picking systems may be used to automate order
picking, but with the risk of causing expensive damage during an abnormal event
such as sensor failure. As such, reliability becomes a critical factor for
translating artificial intelligence research to real-world applications and
products. In this paper, we propose a reliable object detection and
segmentation system with MultiModal Redundancy (MMRNet) for tackling object
detection and segmentation for robotic bin picking using data from different
modalities. This is the first system that introduces the concept of multimodal
redundancy to address sensor failure issues during deployment. In particular,
we realize the multimodal redundancy framework with a gate fusion module and
dynamic ensemble learning. Finally, we present a new label-free multi-modal
consistency (MC) score that utilizes the output from all modalities to measure
the overall system output reliability and uncertainty. Through experiments, we
demonstrate that in an event of missing modality, our system provides a much
more reliable performance compared to baseline models. We also demonstrate that
our MC score is a more reliable indicator for outputs during inference time
than the model-generated confidence scores, which are often over-confident.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 19:15:07 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 03:05:53 GMT"
},
{
"version": "v3",
"created": "Sun, 7 May 2023 16:04:40 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Chen",
"Yuhao",
""
],
[
"Gunraj",
"Hayden",
""
],
[
"Zeng",
"E. Zhixuan",
""
],
[
"Meyer",
"Robbie",
""
],
[
"Gilles",
"Maximilian",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.996304 |
2211.06625
|
Gianluigi Grandesso
|
Gianluigi Grandesso, Elisa Alboni, Gastone P. Rosati Papini, Patrick
M. Wensing and Andrea Del Prete
|
CACTO: Continuous Actor-Critic with Trajectory Optimization -- Towards
global optimality
|
8 pages, 8 figures. Submitted to IEEE RA-L
|
"CACTO: Continuous Actor-Critic With Trajectory
Optimization---Towards Global Optimality," in IEEE Robotics and Automation
Letters, vol. 8, no. 6, pp. 3318-3325, June 2023
|
10.1109/LRA.2023.3266985
| null |
cs.RO cs.LG math.OC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper presents a novel algorithm for the continuous control of dynamical
systems that combines Trajectory Optimization (TO) and Reinforcement Learning
(RL) in a single framework. The motivations behind this algorithm are the two
main limitations of TO and RL when applied to continuous nonlinear systems to
minimize a non-convex cost function. Specifically, TO can get stuck in poor
local minima when the search is not initialized close to a "good" minimum. On
the other hand, when dealing with continuous state and control spaces, the RL
training process may be excessively long and strongly dependent on the
exploration strategy. Thus, our algorithm learns a "good" control policy via
TO-guided RL policy search that, when used as initial guess provider for TO,
makes the trajectory optimization process less prone to converge to poor local
optima. Our method is validated on several reaching problems featuring
non-convex obstacle avoidance with different dynamical systems, including a car
model with 6D state, and a 3-joint planar manipulator. Our results show the
great capabilities of CACTO in escaping local minima, while being more
computationally efficient than the Deep Deterministic Policy Gradient (DDPG)
and Proximal Policy Optimization (PPO) RL algorithms.
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 10:16:35 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 10:52:32 GMT"
},
{
"version": "v3",
"created": "Mon, 8 May 2023 12:48:25 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Grandesso",
"Gianluigi",
""
],
[
"Alboni",
"Elisa",
""
],
[
"Papini",
"Gastone P. Rosati",
""
],
[
"Wensing",
"Patrick M.",
""
],
[
"Del Prete",
"Andrea",
""
]
] |
new_dataset
| 0.996318 |
2211.10260
|
Selen Gecgel Cetin
|
Selen Gecgel Cetin, Berna Ozbek, Gunes Karabulut Kurt
|
Integrated Space Domain Awareness and Communication System
| null | null | null | null |
cs.CR cs.LG cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Space has been reforming and this evolution brings new threats that, together
with technological developments and malicious intent, can pose a major
challenge. Space domain awareness (SDA), a new conceptual idea, has come to the
forefront. It aims at sensing, detection, identification, and countermeasures by
providing autonomy, intelligence and flexibility against potential threats in
space. In this study, we first present an insightful and clear view of the new
space. Secondly, we propose an integrated SDA and communication (ISDAC) system
for attacker detection. We assume that the attacker has beam-steering antennas
and is capable of varying attack scenarios, such as random attacks on some
receiver antennas. To track random patterns and meet SDA requirements, a
lightweight convolutional neural network architecture is developed. The
proposed ISDAC system shows superior and robust performance under 12 different
attacker configurations with a detection accuracy of over 97.8%.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 14:33:49 GMT"
},
{
"version": "v2",
"created": "Sun, 7 May 2023 17:57:39 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Cetin",
"Selen Gecgel",
""
],
[
"Ozbek",
"Berna",
""
],
[
"Kurt",
"Gunes Karabulut",
""
]
] |
new_dataset
| 0.965235 |
2212.03293
|
Muheng Li
|
Muheng Li, Yueqi Duan, Jie Zhou, Jiwen Lu
|
Diffusion-SDF: Text-to-Shape via Voxelized Diffusion
|
Accepted to CVPR 2023, project page:
https://ttlmh.github.io/DiffusionSDF/
| null | null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rising industrial attention to 3D virtual modeling technology,
generating novel 3D content based on specified conditions (e.g. text) has
become a hot issue. In this paper, we propose a new generative 3D modeling
framework called Diffusion-SDF for the challenging task of text-to-shape
synthesis. Previous approaches lack flexibility in both 3D data representation
and shape generation, thereby failing to generate highly diversified 3D shapes
conforming to the given text descriptions. To address this, we propose a SDF
autoencoder together with the Voxelized Diffusion model to learn and generate
representations for voxelized signed distance fields (SDFs) of 3D shapes.
Specifically, we design a novel UinU-Net architecture that implants a
local-focused inner network inside the standard U-Net architecture, which
enables better reconstruction of patch-independent SDF representations. We
extend our approach to further text-to-shape tasks including text-conditioned
shape completion and manipulation. Experimental results show that Diffusion-SDF
generates both higher quality and more diversified 3D shapes that conform well
to given text descriptions when compared to previous approaches. Code is
available at: https://github.com/ttlmh/Diffusion-SDF
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2022 19:46:47 GMT"
},
{
"version": "v2",
"created": "Sun, 7 May 2023 18:46:50 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Li",
"Muheng",
""
],
[
"Duan",
"Yueqi",
""
],
[
"Zhou",
"Jie",
""
],
[
"Lu",
"Jiwen",
""
]
] |
new_dataset
| 0.978252 |
2212.06782
|
Karthik Murali
|
Therese Biedl and Karthik Murali
|
On computing the vertex connectivity of 1-plane graphs
|
To appear in ICALP 2023
| null | null | null |
cs.CG cs.DS math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
A graph is called 1-plane if it has an embedding in the plane where each edge
is crossed at most once by another edge. A crossing of a 1-plane graph is called
an $\times$-crossing if there are no other edges connecting the endpoints of
the crossing (apart from the crossing pair of edges). In this paper, we show
how to compute the vertex connectivity of a 1-plane graph $G$ without
$\times$-crossings in linear time. To do so, we show that for any two vertices
$u,v$ in a minimum separating set $S$, the distance between $u$ and $v$ in an
auxiliary graph $\Lambda(G)$ (obtained by planarizing $G$ and then inserting
into each face a new vertex adjacent to all vertices of the face) is small. It
hence suffices to search for a minimum separating set in various subgraphs
$\Lambda_i$ of $\Lambda(G)$ with small diameter. Since $\Lambda(G)$ is planar,
the subgraphs $\Lambda_i$ have small treewidth. Each minimum separating set $S$
then gives rise to a partition of $\Lambda_i$ into three vertex sets with
special properties; such a partition can be found via Courcelle's theorem in
linear time.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 17:57:59 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2023 22:06:17 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Biedl",
"Therese",
""
],
[
"Murali",
"Karthik",
""
]
] |
new_dataset
| 0.999439 |
2301.08880
|
Zinuo Li
|
Zinuo Li, Xuhang Chen, Shuqiang Wang, Chi-Man Pun
|
A Large-scale Film Style Dataset for Learning Multi-frequency Driven
Film Enhancement
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Film, a classic image style, is culturally significant to the whole
photographic industry since it marks the birth of photography. However, film
photography is time-consuming and expensive, necessitating a more efficient
method for collecting film-style photographs. Numerous datasets that have
emerged in the field of image enhancement so far are not film-specific. In
order to facilitate film-based image stylization research, we construct
FilmSet, a large-scale and high-quality film style dataset. Our dataset
includes three different film types and more than 5000 in-the-wild high
resolution images. Inspired by the features of FilmSet images, we propose a
novel framework called FilmNet based on Laplacian Pyramid for stylizing images
across frequency bands and achieving film style outcomes. Experiments reveal
that the performance of our model is superior to that of state-of-the-art
techniques.
The link of code and data is \url{https://github.com/CXH-Research/FilmNet}.
|
[
{
"version": "v1",
"created": "Sat, 21 Jan 2023 03:52:35 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 02:56:15 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Li",
"Zinuo",
""
],
[
"Chen",
"Xuhang",
""
],
[
"Wang",
"Shuqiang",
""
],
[
"Pun",
"Chi-Man",
""
]
] |
new_dataset
| 0.999674 |
2301.11301
|
Todd Schmid
|
Tobias Kapp\'e, Todd Schmid, Alexandra Silva
|
A Complete Inference System for Skip-free Guarded Kleene Algebra with
Tests
| null | null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Guarded Kleene Algebra with Tests (GKAT) is a fragment of Kleene Algebra with
Tests (KAT) that was recently introduced to reason efficiently about imperative
programs. In contrast to KAT, GKAT does not have an algebraic axiomatization,
but relies on an analogue of Salomaa's axiomatization of Kleene Algebra. In
this paper, we present an algebraic axiomatization and prove two completeness
results for a large fragment of GKAT consisting of skip-free programs.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 18:39:19 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 15:45:17 GMT"
},
{
"version": "v3",
"created": "Mon, 8 May 2023 15:13:05 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Kappé",
"Tobias",
""
],
[
"Schmid",
"Todd",
""
],
[
"Silva",
"Alexandra",
""
]
] |
new_dataset
| 0.995218 |
2302.01327
|
Manoj Kumar
|
Manoj Kumar, Mostafa Dehghani, Neil Houlsby
|
Dual PatchNorm
|
TMLR 2023 (https://openreview.net/forum?id=jgMqve6Qhw)
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose Dual PatchNorm: two Layer Normalization layers (LayerNorms),
before and after the patch embedding layer in Vision Transformers. We
demonstrate that Dual PatchNorm outperforms the result of exhaustive search for
alternative LayerNorm placement strategies in the Transformer block itself. In
our experiments, incorporating this trivial modification often leads to
improved accuracy over well-tuned Vision Transformers and never hurts.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 18:56:25 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 09:53:56 GMT"
},
{
"version": "v3",
"created": "Mon, 8 May 2023 16:06:13 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Kumar",
"Manoj",
""
],
[
"Dehghani",
"Mostafa",
""
],
[
"Houlsby",
"Neil",
""
]
] |
new_dataset
| 0.998647 |
2303.04222
|
Hannah Rose Kirk Miss
|
Hannah Rose Kirk, Wenjie Yin, Bertie Vidgen, Paul R\"ottger
|
SemEval-2023 Task 10: Explainable Detection of Online Sexism
|
SemEval-2023 Task 10 (ACL 2023)
| null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Online sexism is a widespread and harmful phenomenon. Automated tools can
assist the detection of sexism at scale. Binary detection, however, disregards
the diversity of sexist content, and fails to provide clear explanations for
why something is sexist. To address this issue, we introduce SemEval Task 10 on
the Explainable Detection of Online Sexism (EDOS). We make three main
contributions: i) a novel hierarchical taxonomy of sexist content, which
includes granular vectors of sexism to aid explainability; ii) a new dataset of
20,000 social media comments with fine-grained labels, along with larger
unlabelled datasets for model adaptation; and iii) baseline models as well as
an analysis of the methods, results and errors for participant submissions to
our task.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 20:28:39 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 14:34:49 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Kirk",
"Hannah Rose",
""
],
[
"Yin",
"Wenjie",
""
],
[
"Vidgen",
"Bertie",
""
],
[
"Röttger",
"Paul",
""
]
] |
new_dataset
| 0.992026 |
2303.15708
|
Hanjia Lyu
|
Jinsheng Pan, Weihong Qi, Zichen Wang, Hanjia Lyu, Jiebo Luo
|
Bias or Diversity? Unraveling Fine-Grained Thematic Discrepancy in U.S.
News Headlines
|
Accepted for publication in Proceedings of the Workshop on News Media
and Computational Journalism (MEDIATE), AAAI International Conference on Web
and Social Media (ICWSM), 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a broad consensus that news media outlets incorporate ideological
biases in their news articles. However, prior studies on measuring the
discrepancies among media outlets and further dissecting the origins of
thematic differences suffer from small sample sizes and limited scope and
granularity. In this study, we use a large dataset of 1.8 million news
headlines from major U.S. media outlets spanning from 2014 to 2022 to
thoroughly track and dissect the fine-grained thematic discrepancy in U.S. news
media. We employ multiple correspondence analysis (MCA) to quantify the
fine-grained thematic discrepancy related to four prominent topics - domestic
politics, economic issues, social issues, and foreign affairs in order to
derive a more holistic analysis. Additionally, we compare the most frequent
$n$-grams in media headlines to provide further qualitative insights into our
analysis. Our findings indicate that on domestic politics and social issues,
the discrepancy can be attributed to a certain degree of media bias. Meanwhile,
the discrepancy in reporting foreign affairs is largely attributed to the
diversity in individual journalistic styles. Finally, U.S. media outlets show
consistency and high similarity in their coverage of economic issues.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 03:31:37 GMT"
},
{
"version": "v2",
"created": "Sat, 6 May 2023 03:57:30 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Pan",
"Jinsheng",
""
],
[
"Qi",
"Weihong",
""
],
[
"Wang",
"Zichen",
""
],
[
"Lyu",
"Hanjia",
""
],
[
"Luo",
"Jiebo",
""
]
] |
new_dataset
| 0.995828 |
2304.01238
|
Maxime Labonne
|
Maxime Labonne and Sean Moran
|
Spam-T5: Benchmarking Large Language Models for Few-Shot Email Spam
Detection
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper investigates the effectiveness of large language models (LLMs) in
email spam detection by comparing prominent models from three distinct
families: BERT-like, Sentence Transformers, and Seq2Seq. Additionally, we
examine well-established machine learning techniques for spam detection, such
as Na\"ive Bayes and LightGBM, as baseline methods. We assess the performance
of these models across four public datasets, utilizing different numbers of
training samples (full training set and few-shot settings). Our findings reveal
that, in the majority of cases, LLMs surpass the performance of the popular
baseline techniques, particularly in few-shot scenarios. This adaptability
renders LLMs uniquely suited to spam detection tasks, where labeled samples are
limited in number and models require frequent updates. Additionally, we
introduce Spam-T5, a Flan-T5 model that has been specifically adapted and
fine-tuned for the purpose of detecting email spam. Our results demonstrate
that Spam-T5 surpasses baseline models and other LLMs in the majority of
scenarios, particularly when there are a limited number of training samples
available. Our code is publicly available at
https://github.com/jpmorganchase/emailspamdetection.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 10:27:53 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Apr 2023 13:38:54 GMT"
},
{
"version": "v3",
"created": "Sun, 7 May 2023 10:57:51 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Labonne",
"Maxime",
""
],
[
"Moran",
"Sean",
""
]
] |
new_dataset
| 0.975348 |
2305.00082
|
Jun Kataoka
|
Jun Kataoka and Hyunsoo Yoon
|
AVATAR: Adversarial self-superVised domain Adaptation network for TARget
domain
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an unsupervised domain adaptation (UDA) method for
predicting unlabeled target domain data, specific to complex UDA tasks where
the domain gap is significant. Mainstream UDA models aim to learn from both
domains and improve target discrimination by utilizing labeled source domain
data. However, the performance boost may be limited when the discrepancy
between the source and target domains is large or the target domain contains
outliers. To explicitly address this issue, we propose the Adversarial
self-superVised domain Adaptation network for the TARget domain (AVATAR)
algorithm. It outperforms state-of-the-art UDA models by concurrently reducing
domain discrepancy while enhancing discrimination through domain adversarial
learning, self-supervised learning, and sample selection strategy for the
target domain, all guided by deep clustering. Our proposed model significantly
outperforms state-of-the-art methods on three UDA benchmarks, and extensive
ablation studies and experiments demonstrate the effectiveness of our approach
for addressing complex UDA tasks.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 20:31:56 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 03:35:14 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Kataoka",
"Jun",
""
],
[
"Yoon",
"Hyunsoo",
""
]
] |
new_dataset
| 0.989579 |
2305.01241
|
Hendric Vo{\ss}
|
Hendric Vo{\ss} and Stefan Kopp
|
AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech
Gesture Synthesis
| null | null | null | null |
cs.HC cs.GR cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The generation of realistic and contextually relevant co-speech gestures is a
challenging yet increasingly important task in the creation of multimodal
artificial agents. Prior methods focused on learning a direct correspondence
between co-speech gesture representations and produced motions, which created
seemingly natural but often unconvincing gestures during human assessment. We
present an approach to pre-train partial gesture sequences using a generative
adversarial network with a quantization pipeline. The resulting codebook
vectors serve as both input and output in our framework, forming the basis for
the generation and reconstruction of gestures. By learning the mapping of a
latent space representation as opposed to directly mapping it to a vector
representation, this framework facilitates the generation of highly realistic
and expressive gestures that closely replicate human movement and behavior,
while simultaneously avoiding artifacts in the generation process. We evaluate
our approach by comparing it with established methods for generating co-speech
gestures as well as with existing datasets of human behavior. We also perform
an ablation study to assess our findings. The results show that our approach
outperforms the current state of the art by a clear margin and is partially
indistinguishable from human gesturing. We make our data pipeline and the
generation framework publicly available.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 07:59:38 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 11:59:12 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Voß",
"Hendric",
""
],
[
"Kopp",
"Stefan",
""
]
] |
new_dataset
| 0.998848 |
2305.02679
|
Alon Jacovi
|
Alon Jacovi, Hendrik Schuff, Heike Adel, Ngoc Thang Vu, Yoav Goldberg
|
Neighboring Words Affect Human Interpretation of Saliency Explanations
|
Accepted to Findings of ACL 2023
| null | null | null |
cs.CL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word-level saliency explanations ("heat maps over words") are often used to
communicate feature-attribution in text-based models. Recent studies found that
superficial factors such as word length can distort human interpretation of the
communicated saliency scores. We conduct a user study to investigate how the
marking of a word's neighboring words affects the explainee's perception of the
word's importance in the context of a saliency explanation. We find that
neighboring words have significant effects on the word's importance rating.
Concretely, we identify that the influence changes based on neighboring
direction (left vs. right) and a-priori linguistic and computational measures
of phrases and collocations (vs. unrelated neighboring words). Our results
question whether text-based saliency explanations should continue to be
communicated at word level, and inform future research on alternative saliency
explanation methods.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 09:50:25 GMT"
},
{
"version": "v2",
"created": "Sat, 6 May 2023 12:22:19 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Jacovi",
"Alon",
""
],
[
"Schuff",
"Hendrik",
""
],
[
"Adel",
"Heike",
""
],
[
"Vu",
"Ngoc Thang",
""
],
[
"Goldberg",
"Yoav",
""
]
] |
new_dataset
| 0.967548 |