| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2305.12190 | Zoran Medi\'c | Zoran Medi\'c, Jan \v{S}najder | Paragraph-level Citation Recommendation based on Topic Sentences as Queries | null | null | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by/4.0/ | Citation recommendation (CR) models may help authors find relevant articles at various stages of the paper writing process. Most research has dealt with either global CR, which produces general recommendations suitable for the initial writing stage, or local CR, which produces specific recommendations more fitting for the final writing stages. We propose the task of paragraph-level CR as a middle ground between the two approaches, where the paragraph's topic sentence is taken as input and recommendations for citing within the paragraph are produced at the output. We propose a model for this task, fine-tune it using the quadruplet loss on the dataset of ACL papers, and show improvements over the baselines. | [{"version": "v1", "created": "Sat, 20 May 2023 13:28:22 GMT"}] | 2023-05-23T00:00:00 | [["Medić", "Zoran", ""], ["Šnajder", "Jan", ""]] | new_dataset | 0.998338 |
| 2305.12200 | Yuyue Wang | Yuyue Wang, Huan Xiao, Yihan Wu, Ruihua Song | ComedicSpeech: Text To Speech For Stand-up Comedies in Low-Resource Scenarios | 5 pages, 4 tables, 2 figures | null | null | null | cs.SD cs.AI eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text to Speech (TTS) models can generate natural and high-quality speech, but they are not expressive enough when synthesizing speech with dramatic expressiveness, such as stand-up comedy. Considering that comedians have diverse personal speech styles, including personal prosody, rhythm, and fillers, the task requires real-world datasets and strong speech style modeling capabilities, which brings challenges. In this paper, we construct a new dataset and develop ComedicSpeech, a TTS system tailored for stand-up comedy synthesis in low-resource scenarios. First, we extract a prosody representation with a prosody encoder and condition the TTS model on it in a flexible way. Second, we enhance personal rhythm modeling with a conditional duration predictor. Third, we model personal fillers by introducing comedian-related special tokens. Experiments show that ComedicSpeech achieves better expressiveness than baselines with only ten minutes of training data for each comedian. Audio samples are available at https://xh621.github.io/stand-up-comedy-demo/ | [{"version": "v1", "created": "Sat, 20 May 2023 14:24:45 GMT"}] | 2023-05-23T00:00:00 | [["Wang", "Yuyue", ""], ["Xiao", "Huan", ""], ["Wu", "Yihan", ""], ["Song", "Ruihua", ""]] | new_dataset | 0.999845 |
| 2305.12257 | Ankur Sinha PhD | Ankur Sinha, Satishwar Kedas, Rishu Kumar, Pekka Malo | SEntFiN 1.0: Entity-Aware Sentiment Analysis for Financial News | 32 Pages | null | 10.1002/asi.24634 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Fine-grained financial sentiment analysis on news headlines is a challenging task requiring human-annotated datasets to achieve high performance. Limited studies have tried to address the sentiment extraction task in a setting where multiple entities are present in a news headline. In an effort to further research in this area, we make publicly available SEntFiN 1.0, a human-annotated dataset of 10,753 news headlines with entity-sentiment annotations, of which 2,847 headlines contain multiple entities, often with conflicting sentiments. We augment our dataset with a database of over 1,000 financial entities and their various representations in news media, amounting to over 5,000 phrases. We propose a framework that enables the extraction of entity-relevant sentiments using a feature-based approach rather than an expression-based approach. For sentiment extraction, we evaluate 12 different learning schemes utilizing lexicon-based and pre-trained sentence representations, together with five classification approaches. Our experiments indicate that lexicon-based n-gram ensembles are above par with pre-trained word embedding schemes such as GloVe. Overall, RoBERTa and finBERT (domain-specific BERT) achieve the highest average accuracy of 94.29% and F1-score of 93.27%. Further, using over 210,000 entity-sentiment predictions, we validate the economic effect of sentiments on aggregate market movements over a long duration. | [{"version": "v1", "created": "Sat, 20 May 2023 18:20:39 GMT"}] | 2023-05-23T00:00:00 | [["Sinha", "Ankur", ""], ["Kedas", "Satishwar", ""], ["Kumar", "Rishu", ""], ["Malo", "Pekka", ""]] | new_dataset | 0.99973 |
| 2305.12261 | Zichao Zhang | Zichao Zhang, Melda Yuksel, Halim Yanikomeroglu, Benjamin K. Ng, Chan-Tong Lam | MIMO Asynchronous MAC with Faster-than-Nyquist (FTN) Signaling | null | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Faster-than-Nyquist (FTN) signaling is a nonorthogonal transmission technique, which brings in intentional inter-symbol interference. This way it can significantly enhance spectral efficiency for practical pulse shapes such as the root raised cosine pulses. This paper proposes an achievable rate region for the multiple antenna (MIMO) asynchronous multiple access channel (aMAC) with FTN signaling. The scheme applies waterfilling in the spatial domain and precoding in time. Waterfilling in space provides better power allocation and precoding helps mitigate inter-symbol interference due to asynchronous transmission and FTN. The results show that the gains due to asynchronous transmission and FTN are more emphasized in MIMO aMAC than in single antenna aMAC. Moreover, FTN improves single-user rates, and asynchronous transmission improves the sum-rate, due to better inter-user interference management. | [{"version": "v1", "created": "Sat, 20 May 2023 18:30:40 GMT"}] | 2023-05-23T00:00:00 | [["Zhang", "Zichao", ""], ["Yuksel", "Melda", ""], ["Yanikomeroglu", "Halim", ""], ["Ng", "Benjamin K.", ""], ["Lam", "Chan-Tong", ""]] | new_dataset | 0.993602 |
| 2305.12265 | Tao Long | Tao Long, Dorothy Zhang, Grace Li, Batool Taraif, Samia Menon, Kynnedy Simone Smith, Sitong Wang, Katy Ilonka Gero, Lydia B. Chilton | Tweetorial Hooks: Generative AI Tools to Motivate Science on Social Media | 10 pages, 10 figures. Proceedings of the 14th International Conference on Computational Creativity (ICCC'23) | null | null | null | cs.HC cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Communicating science and technology is essential for the public to understand and engage in a rapidly changing world. Tweetorials are an emerging phenomenon where experts explain STEM topics on social media in creative and engaging ways. However, STEM experts struggle to write an engaging "hook" in the first tweet that captures the reader's attention. We propose methods to use large language models (LLMs) to help users scaffold their process of writing a relatable hook for complex scientific topics. We demonstrate that LLMs can help writers find everyday experiences that are relatable and interesting to the public, avoid jargon, and spark curiosity. Our evaluation shows that the system reduces cognitive load and helps people write better hooks. Lastly, we discuss the importance of interactivity with LLMs to preserve the correctness, effectiveness, and authenticity of the writing. | [{"version": "v1", "created": "Sat, 20 May 2023 18:47:40 GMT"}] | 2023-05-23T00:00:00 | [["Long", "Tao", ""], ["Zhang", "Dorothy", ""], ["Li", "Grace", ""], ["Taraif", "Batool", ""], ["Menon", "Samia", ""], ["Smith", "Kynnedy Simone", ""], ["Wang", "Sitong", ""], ["Gero", "Katy Ilonka", ""], ["Chilton", "Lydia B.", ""]] | new_dataset | 0.986712 |
| 2305.12295 | Liangming Pan | Liangming Pan, Alon Albalak, Xinyi Wang, William Yang Wang | Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning | Technical Report | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic reasoning to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement stage, which utilizes the symbolic solver's error messages to revise symbolic formalizations. We demonstrate Logic-LM's effectiveness on four logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, and LogicalDeduction. Our results show significant improvement compared to LLMs alone, with an average performance boost of 62.6% over standard prompting and 23.5% over chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning. Code and data are publicly available at https://github.com/teacherpeterpan/Logic-LLM. | [{"version": "v1", "created": "Sat, 20 May 2023 22:25:38 GMT"}] | 2023-05-23T00:00:00 | [["Pan", "Liangming", ""], ["Albalak", "Alon", ""], ["Wang", "Xinyi", ""], ["Wang", "William Yang", ""]] | new_dataset | 0.99805 |
| 2305.12301 | Soujanya Poria | Yi Xuan Tan, Navonil Majumder, Soujanya Poria | Sentence Embedder Guided Utterance Encoder (SEGUE) for Spoken Language Understanding | Interspeech 2023 | null | null | null | cs.CL cs.AI cs.SD eess.AS | http://creativecommons.org/licenses/by-sa/4.0/ | The pre-trained speech encoder wav2vec 2.0 performs very well on various spoken language understanding (SLU) tasks. However, on many tasks, it trails behind text encoders with textual input. To improve the understanding capability of SLU encoders, various studies have used knowledge distillation to transfer knowledge from natural language understanding (NLU) encoders. We use a very simple method of distilling from a textual sentence embedder directly into wav2vec 2.0 as pre-training, utilizing paired audio-text datasets. We observed that this method is indeed capable of improving SLU task performance in fine-tuned settings, as well as full-data and few-shot transfer on a frozen encoder. However, the model performs worse on certain tasks, highlighting the strengths and weaknesses of our approach. | [{"version": "v1", "created": "Sat, 20 May 2023 23:55:55 GMT"}] | 2023-05-23T00:00:00 | [["Tan", "Yi Xuan", ""], ["Majumder", "Navonil", ""], ["Poria", "Soujanya", ""]] | new_dataset | 0.953179 |
| 2305.12311 | Ziyi Yang | Ziyi Yang, Mahmoud Khademi, Yichong Xu, Reid Pryzant, Yuwei Fang, Chenguang Zhu, Dongdong Chen, Yao Qian, Mei Gao, Yi-Ling Chen, Robert Gmyr, Naoyuki Kanda, Noel Codella, Bin Xiao, Yu Shi, Lu Yuan, Takuya Yoshioka, Michael Zeng, Xuedong Huang | i-Code V2: An Autoregressive Generation Framework over Vision, Language, and Speech Data | null | null | null | null | cs.CL cs.AI cs.CV cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | The convergence of text, visual, and audio data is a key step towards human-like artificial intelligence; however, the current Vision-Language-Speech landscape is dominated by encoder-only models which lack generative abilities. We propose closing this gap with i-Code V2, the first model capable of generating natural language from any combination of Vision, Language, and Speech data. i-Code V2 is an integrative system that leverages state-of-the-art single-modality encoders, combining their outputs with a new modality-fusing encoder in order to flexibly project combinations of modalities into a shared representational space. Next, language tokens are generated from these representations via an autoregressive decoder. The whole framework is pretrained end-to-end on a large collection of dual- and single-modality datasets using a novel text completion objective that can be generalized across arbitrary combinations of modalities. i-Code V2 matches or outperforms state-of-the-art single- and dual-modality baselines on 7 multimodal tasks, demonstrating the power of generative multimodal pretraining across a diversity of tasks and signals. | [{"version": "v1", "created": "Sun, 21 May 2023 01:25:44 GMT"}] | 2023-05-23T00:00:00 | [["Yang", "Ziyi", ""], ["Khademi", "Mahmoud", ""], ["Xu", "Yichong", ""], ["Pryzant", "Reid", ""], ["Fang", "Yuwei", ""], ["Zhu", "Chenguang", ""], ["Chen", "Dongdong", ""], ["Qian", "Yao", ""], ["Gao", "Mei", ""], ["Chen", "Yi-Ling", ""], ["Gmyr", "Robert", ""], ["Kanda", "Naoyuki", ""], ["Codella", "Noel", ""], ["Xiao", "Bin", ""], ["Shi", "Yu", ""], ["Yuan", "Lu", ""], ["Yoshioka", "Takuya", ""], ["Zeng", "Michael", ""], ["Huang", "Xuedong", ""]] | new_dataset | 0.981544 |
| 2305.12328 | Bosheng Qin | Bosheng Qin, Juncheng Li, Siliang Tang, Tat-Seng Chua, Yueting Zhuang | InstructVid2Vid: Controllable Video Editing with Natural Language Instructions | 21 pages, 9 figures | null | null | null | cs.CV cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an end-to-end diffusion-based method for editing videos with human language instructions, namely $\textbf{InstructVid2Vid}$. Our approach enables the editing of input videos based on natural language instructions without any per-example fine-tuning or inversion. The proposed InstructVid2Vid model combines a pretrained image generation model, Stable Diffusion, with a conditional 3D U-Net architecture to generate a time-dependent sequence of video frames. To obtain the training data, we incorporate the knowledge and expertise of different models, including ChatGPT, BLIP, and Tune-a-Video, to synthesize video-instruction triplets, which is a more cost-efficient alternative to collecting data in real-world scenarios. To improve the consistency between adjacent frames of generated videos, we propose the Frame Difference Loss, which is incorporated during the training process. During inference, we extend the classifier-free guidance to text-video input to guide the generated results, making them more related to both the input video and instruction. Experiments demonstrate that InstructVid2Vid is able to generate high-quality, temporally coherent videos and perform diverse edits, including attribute editing, change of background, and style transfer. These results highlight the versatility and effectiveness of our proposed method. Code is released at $\href{https://github.com/BrightQin/InstructVid2Vid}{InstructVid2Vid}$. | [{"version": "v1", "created": "Sun, 21 May 2023 03:28:13 GMT"}] | 2023-05-23T00:00:00 | [["Qin", "Bosheng", ""], ["Li", "Juncheng", ""], ["Tang", "Siliang", ""], ["Chua", "Tat-Seng", ""], ["Zhuang", "Yueting", ""]] | new_dataset | 0.993015 |
| 2305.12344 | Wahyu Pebrianto | Wahyu Pebrianto, Panca Mudjirahardjo, Sholeh Hadi Pramono, Rahmadwati, Raden Arief Setyawan | YOLOv3 with Spatial Pyramid Pooling for Object Detection with Unmanned Aerial Vehicles | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object detection with Unmanned Aerial Vehicles (UAVs) has attracted much attention in the research field of computer vision. However, it is not easy to accurately detect objects in data obtained from UAVs: because images are captured from very high altitudes, they are dominated by small objects, which are difficult to detect. Motivated by that challenge, we aim to improve the performance of the one-stage detector YOLOv3 by adding a Spatial Pyramid Pooling (SPP) layer at the end of the Darknet-53 backbone to obtain a more efficient feature extraction process in object detection tasks with UAVs. We also conducted an evaluation study of different versions of YOLOv3, including YOLOv3 with SPP, YOLOv3, and YOLOv3-tiny, which we analyzed on the VisDrone2019-Det dataset. Here we show that YOLOv3 with SPP achieves an mAP 0.6% higher than YOLOv3 and 26.6% higher than YOLOv3-tiny at a 640x640 input scale, and is even able to maintain accuracy at different input image scales better than other versions of YOLOv3. These results show that the addition of SPP layers to YOLOv3 can be an efficient solution for improving the performance of object detection methods on data obtained from UAVs. | [{"version": "v1", "created": "Sun, 21 May 2023 04:41:52 GMT"}] | 2023-05-23T00:00:00 | [["Pebrianto", "Wahyu", ""], ["Mudjirahardjo", "Panca", ""], ["Pramono", "Sholeh Hadi", ""], ["Rahmadwati", "", ""], ["Setyawan", "Raden Arief", ""]] | new_dataset | 0.998985 |
| 2305.12359 | B.Sundar Rajan | Navya Saxena, Anjana A. Mahesh, and B. Sundar Rajan | An Optimal Two-Step Decoding at Receivers with Side Information in PSK-Modulated Index Coding | 24 pages and 7 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies noisy index coding problems over single-input single-output broadcast channels. The codewords from a chosen index code of length $N$ are transmitted after $2^N$-PSK modulation over an AWGN channel. In "Index Coded PSK Modulation for prioritized Receivers," the authors showed that when a length-$N$ index code is transmitted as a $2^N$-PSK symbol, the ML decoder at a receiver decodes directly to the message bit rather than following the two-step decoding process of first demodulating the PSK symbol and equivalently the index-coded bits and then doing index-decoding. In this paper, we consider unprioritized receivers and follow the two-step decoding process at the receivers. After estimating the PSK symbol using an ML decoder, at a receiver, there might be more than one decoding strategy, i.e., a linear combination of index-coded bits and different subsets of side information bits, that can be used to estimate the requested message. Thomas et al. in ["Single Uniprior Index Coding With Min Max Probability of Error Over Fading Channels,"] showed that for binary-modulated index code transmissions, minimizing the number of transmissions used to decode a requested message is equivalent to minimizing the probability of error. This paper shows that this is no longer the case while employing multi-level modulations. Further, we consider that the side information available to each receiver is also noisy and derive an expression for the probability that a requested message bit is estimated erroneously at a receiver. We also show that the criterion for choosing a decoding strategy that gives the best probability of error performance at a receiver changes with the signal-to-noise ratio at which the side information is broadcast. | [{"version": "v1", "created": "Sun, 21 May 2023 06:06:37 GMT"}] | 2023-05-23T00:00:00 | [["Saxena", "Navya", ""], ["Mahesh", "Anjana A.", ""], ["Rajan", "B. Sundar", ""]] | new_dataset | 0.996758 |
| 2305.12369 | Yubin Kim | Yubin Kim, Dong Won Lee, Paul Pu Liang, Sharifa Algohwinem, Cynthia Breazeal, Hae Won Park | HIINT: Historical, Intra- and Inter- personal Dynamics Modeling with Cross-person Memory Transformer | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurately modeling affect dynamics, which refers to the changes and fluctuations in emotions and affective displays during human conversations, is crucial for understanding human interactions. By analyzing affect dynamics, we can gain insights into how people communicate, respond to different situations, and form relationships. However, modeling affect dynamics is challenging due to contextual factors, such as the complex and nuanced nature of interpersonal relationships, the situation, and other factors that influence affective displays. To address this challenge, we propose a Cross-person Memory Transformer (CPM-T) framework which is able to explicitly model affective dynamics (intrapersonal and interpersonal influences) by identifying verbal and non-verbal cues, and with a large language model to utilize the pre-trained knowledge and perform verbal reasoning. The CPM-T framework maintains memory modules to store and update the contexts within the conversation window, enabling the model to capture dependencies between earlier and later parts of a conversation. Additionally, our framework employs cross-modal attention to effectively align information from multi-modalities and leverage cross-person attention to align behaviors in multi-party interactions. We evaluate the effectiveness and generalizability of our approach on three publicly available datasets for joint engagement, rapport, and human beliefs prediction tasks. Remarkably, the CPM-T framework outperforms baseline models in average F1-scores by up to 7.3%, 9.3%, and 2.0% respectively. Finally, we demonstrate the importance of each component in the framework via ablation studies with respect to multimodal temporal behavior. | [{"version": "v1", "created": "Sun, 21 May 2023 06:43:35 GMT"}] | 2023-05-23T00:00:00 | [["Kim", "Yubin", ""], ["Lee", "Dong Won", ""], ["Liang", "Paul Pu", ""], ["Algohwinem", "Sharifa", ""], ["Breazeal", "Cynthia", ""], ["Park", "Hae Won", ""]] | new_dataset | 0.955223 |
| 2305.12424 | Tetsuya Kobayashi J | Mengji Zhang, Yusuke Hiki, Akira Funahashi, Tetsuya J. Kobayashi | Mol-PECO: a deep learning model to predict human olfactory perception from molecular structures | 17 pages, 8 figures | null | null | null | cs.LG cs.AI q-bio.BM q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | While visual and auditory information conveyed by wavelength of light and frequency of sound have been decoded, predicting olfactory information encoded by the combination of odorants remains challenging due to the unknown and potentially discontinuous perceptual space of smells and odorants. Herein, we develop a deep learning model called Mol-PECO (Molecular Representation by Positional Encoding of Coulomb Matrix) to predict olfactory perception from molecular structures. Mol-PECO updates the learned atom embedding by directional graph convolutional networks (GCN), which model the Laplacian eigenfunctions as positional encoding, and Coulomb matrix, which encodes atomic coordinates and charges. With a comprehensive dataset of 8,503 molecules, Mol-PECO directly achieves an area-under-the-receiver-operating-characteristic (AUROC) of 0.813 in 118 odor descriptors, superior to the machine learning of molecular fingerprints (AUROC of 0.761) and GCN of adjacency matrix (AUROC of 0.678). The learned embeddings by Mol-PECO also capture a meaningful odor space with global clustering of descriptors and local retrieval of similar odorants. Our work may promote the understanding and decoding of the olfactory sense and mechanisms. | [{"version": "v1", "created": "Sun, 21 May 2023 10:44:02 GMT"}] | 2023-05-23T00:00:00 | [["Zhang", "Mengji", ""], ["Hiki", "Yusuke", ""], ["Funahashi", "Akira", ""], ["Kobayashi", "Tetsuya J.", ""]] | new_dataset | 0.998879 |
| 2305.12431 | Aswathylakshmi P | P Aswathylakshmi and Radha Krishna Ganti | Pilotless Uplink for Massive MIMO Systems | 6 pages, 9 figures, submitted to IEEE Global Communications Conference (Globecom) 2023 | null | null | null | cs.IT eess.SP math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Massive MIMO antennas in cellular systems help support a large number of users in the same time-frequency resource and also provide significant array gain for uplink reception. However, channel estimation in such large antenna systems can be tricky, not only since pilot assignment for multiple users is challenging, but also because the pilot overhead especially for rapidly changing channels can diminish the system throughput quite significantly. A pilotless transceiver where the receiver can perform blind demodulation can solve these issues and boost system throughput by eliminating the need for pilots in channel estimation. In this paper, we propose an iterative matrix decomposition algorithm for the blind demodulation of massive MIMO OFDM signals. This new decomposition technique provides estimates of both the user symbols and the user channel in the frequency domain simultaneously (to a scaling factor) without any pilots. Simulation results demonstrate that the lack of pilots does not affect the error performance of the proposed algorithm when compared to maximal-ratio-combining (MRC) with pilot-based channel estimation across a wide range of signal strengths. | [{"version": "v1", "created": "Sun, 21 May 2023 11:19:45 GMT"}] | 2023-05-23T00:00:00 | [["Aswathylakshmi", "P", ""], ["Ganti", "Radha Krishna", ""]] | new_dataset | 0.959481 |
| 2305.12445 | Detai Xin | Detai Xin, Shinnosuke Takamichi, Hiroshi Saruwatari | JNV Corpus: A Corpus of Japanese Nonverbal Vocalizations with Diverse Phrases and Emotions | 4 pages, 3 figures | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by-sa/4.0/ | We present the JNV (Japanese Nonverbal Vocalizations) corpus, a corpus of Japanese nonverbal vocalizations (NVs) with diverse phrases and emotions. Existing Japanese NV corpora lack phrase or emotion diversity, which makes it difficult to analyze NVs and support downstream tasks like emotion recognition. We first propose a corpus-design method that contains two phases: (1) collecting NV phrases based on crowd-sourcing; (2) recording NVs by stimulating speakers with emotional scenarios. We then collect $420$ audio clips from $4$ speakers that cover $6$ emotions based on the proposed method. Results of comprehensive objective and subjective experiments demonstrate that the collected NVs have high emotion recognizability and authenticity that are comparable to previous corpora of English NVs. Additionally, we analyze the distributions of vowel types in Japanese NVs. To the best of our knowledge, JNV is currently the largest Japanese NV corpus in terms of phrase and emotion diversity. | [{"version": "v1", "created": "Sun, 21 May 2023 12:32:03 GMT"}] | 2023-05-23T00:00:00 | [["Xin", "Detai", ""], ["Takamichi", "Shinnosuke", ""], ["Saruwatari", "Hiroshi", ""]] | new_dataset | 0.999155 |
| 2305.12478 | Xiaoya Li | Jinchuan Cui and Xiaoya Li | The airplane refueling problem is NP-complete and is solvable in polynomial time | 6 pages, 2 figures | null | null | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The airplane refueling problem is a nonlinear combinatorial optimization problem, and its equivalent problem, the $n$-vehicle exploration problem, is proved to be NP-complete (arXiv:2304.03965v1, The $n$-vehicle exploration problem is NP-complete). In Article (arXiv:2210.11634v2, A polynomial-time algorithm to solve the aircraft refueling problem: the sequential search algorithm), we designed the sequential search algorithm for solving large-scale airplane refueling instances, and we proved that the computational complexity increases to polynomial time with an increasing number of airplanes. Thus the airplane refueling problem, as an NP-complete problem, is solvable in polynomial time when its input scale is sufficiently large. | [{"version": "v1", "created": "Sun, 21 May 2023 14:46:27 GMT"}] | 2023-05-23T00:00:00 | [["Cui", "Jinchuan", ""], ["Li", "Xiaoya", ""]] | new_dataset | 0.980098 |
| 2305.12481 | Yang Yu | Yang Yu, Huiwen Jia, Xiaoyun Wang | Compact Lattice Gadget and Its Applications to Hash-and-Sign Signatures | Accepted to Crypto 2023 | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | This work aims to improve the practicality of gadget-based cryptosystems, with a focus on hash-and-sign signatures. To this end, we develop a compact gadget framework in which the used gadget is a square matrix instead of the short and fat one used in previous constructions. To work with this compact gadget, we devise a specialized gadget sampler, called semi-random sampler, to compute the approximate preimage. It first deterministically computes the error and then randomly samples the preimage. We show that for uniformly random targets, the preimage and error distributions are simulatable without knowing the trapdoor. This ensures the security of the signature applications. Compared to the Gaussian-distributed errors in previous algorithms, the deterministic errors have a smaller size, which leads to a substantial gain in security and enables a practically working instantiation. As applications, we present two practically efficient gadget-based signature schemes based on NTRU and Ring-LWE respectively. The NTRU-based scheme offers comparable efficiency to Falcon and Mitaka and a simple implementation without the need to generate the NTRU trapdoor. The LWE-based scheme also achieves a desirable overall performance. It not only greatly outperforms the state-of-the-art LWE-based hash-and-sign signatures, but also has an even smaller size than the LWE-based Fiat-Shamir signature scheme Dilithium. These results fill the long-term gap in practical gadget-based signatures. | [{"version": "v1", "created": "Sun, 21 May 2023 15:13:58 GMT"}] | 2023-05-23T00:00:00 | [["Yu", "Yang", ""], ["Jia", "Huiwen", ""], ["Wang", "Xiaoyun", ""]] | new_dataset | 0.99716 |
| 2305.12506 | Xiaoguang Li | Xiaoguang Li | CNN-based Dendrite Core Detection from Microscopic Images of Directionally Solidified Ni-base Alloys | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dendrite core is the center point of the dendrite. The information of the dendrite core is very helpful for material scientists to analyze the properties of materials. Therefore, detecting the dendrite core is a very important task in the material science field. Meanwhile, because of some special properties of the dendrites, this task is also very challenging. Different from the typical detection problems in the computer vision field, detecting the dendrite core aims to detect a single point location instead of the bounding-box. As a result, the existing regressing bounding-box based detection methods cannot work well on this task because the calculated center point location based on the upper-left and lower-right corners of the bounding-box is usually not precise. In this work, we formulate the dendrite core detection problem as a segmentation task and propose a novel detection method to detect the dendrite core directly. Our whole pipeline contains three steps: Easy Sample Detection (ESD), Hard Sample Detection (HSD), and Hard Sample Refinement (HSR). Specifically, ESD and HSD focus on the easy samples and hard samples of dendrite cores respectively. Both of them employ the same Central Point Detection Network (CPDN) but do not share parameters. To make HSD only focus on the feature of hard samples of dendrite cores, we destroy the structure of the easy samples of dendrites which are detected by ESD and force HSD to learn the feature of hard samples. HSR is a binary classifier which is used to filter out the false positive predictions of HSD. We evaluate our method on the dendrite dataset. Our method outperforms the state-of-the-art baselines on three metrics, i.e., Recall, Precision, and F-score. | [{"version": "v1", "created": "Sun, 21 May 2023 16:51:15 GMT"}] | 2023-05-23T00:00:00 | [["Li", "Xiaoguang", ""]] | new_dataset | 0.999719 |
| 2305.12518 | Shivam Mhaskar | Shivam Mhaskar, Vineet Bhat, Akshay Batheja, Sourabh Deoghare, Paramveer Choudhary, Pushpak Bhattacharyya | VAKTA-SETU: A Speech-to-Speech Machine Translation Service in Select Indic Languages | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this work, we present our deployment-ready Speech-to-Speech Machine Translation (SSMT) system for English-Hindi, English-Marathi, and Hindi-Marathi language pairs. We develop the SSMT system by cascading Automatic Speech Recognition (ASR), Disfluency Correction (DC), Machine Translation (MT), and Text-to-Speech Synthesis (TTS) models. We discuss the challenges faced during the research and development stage and the scalable deployment of the SSMT system as a publicly accessible web service. On the MT part of the pipeline too, we create a Text-to-Text Machine Translation (TTMT) service in all six translation directions involving English, Hindi, and Marathi. To mitigate data scarcity, we develop a LaBSE-based corpus filtering tool to select high-quality parallel sentences from a noisy pseudo-parallel corpus for training the TTMT system. All the data used for training the SSMT and TTMT systems and the best models are being made publicly available. Users of our system are (a) Govt. of India in the context of its new education policy (NEP), (b) tourists who criss-cross the multilingual landscape of India, (c) Indian Judiciary where a leading cause of the pendency of cases (to the order of 10 million as on date) is the translation of case papers, (d) farmers who need weather and price information and so on. We also share the feedback received from various stakeholders when our SSMT and TTMT systems were demonstrated in large public events. | [{"version": "v1", "created": "Sun, 21 May 2023 17:23:54 GMT"}] | 2023-05-23T00:00:00 | [["Mhaskar", "Shivam", ""], ["Bhat", "Vineet", ""], ["Batheja", "Akshay", ""], ["Deoghare", "Sourabh", ""], ["Choudhary", "Paramveer", ""], ["Bhattacharyya", "Pushpak", ""]] | new_dataset | 0.999611 |
| 2305.12520 | Jordi Armengol-Estap\'e | Jordi Armengol-Estap\'e, Jackson Woodruff, Chris Cummins, Michael F.P. O'Boyle | SLaDe: A Portable Small Language Model Decompiler for Optimized Assembler | null | null | null | null | cs.PL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decompilation is a well-studied area with numerous high-quality tools available. These are frequently used for security tasks and to port legacy code. However, they regularly generate difficult-to-read programs and require a large amount of engineering effort to support new programming languages and ISAs. Recent interest in neural approaches has produced portable tools that generate readable code. However, to date, such techniques are usually restricted to synthetic programs without optimization, and no models have evaluated their portability. Furthermore, while the code generated may be more readable, it is usually incorrect. This paper presents SLaDe, a Small Language model Decompiler based on a sequence-to-sequence transformer trained over real-world code. We develop a novel tokenizer and exploit no-dropout training to produce high-quality code. We utilize type-inference to generate programs that are more readable and accurate than standard analytic and recent neural approaches. Unlike standard approaches, SLaDe can infer out-of-context types and, unlike neural approaches, it generates correct code. We evaluate SLaDe on over 4,000 functions from AnghaBench on two ISAs and at two optimization levels. SLaDe is up to 6 times more accurate than Ghidra, a state-of-the-art, industrial-strength decompiler, and up to 4 times more accurate than the large language model ChatGPT, and generates significantly more readable code than both. | [{"version": "v1", "created": "Sun, 21 May 2023 17:31:39 GMT"}] | 2023-05-23T00:00:00 | [["Armengol-Estapé", "Jordi", ""], ["Woodruff", "Jackson", ""], ["Cummins", "Chris", ""], ["O'Boyle", "Michael F. P.", ""]] | new_dataset | 0.997395 |
| 2305.12537 | Larry Liebovitch | Larry S. Liebovitch (1 and 2), William Powers (1), Lin Shi (1), Allegra Chen-Carrel (3), Philippe Loustaunau (4), Peter T. Coleman (2) ((1) Queens College City University of New York, (2) Columbia University, (3) University of San Francisco, (4) Vista Consulting) | Word differences in news media of lower and higher peace countries revealed by natural language processing and machine learning | 21 pages, 4 figures | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Language is both a cause and a consequence of the social processes that lead to conflict or peace. Hate speech can mobilize violence and destruction. What are the characteristics of peace speech that reflect and support the social processes that maintain peace? This study used existing peace indices, machine learning, and on-line, news media sources to identify the words most associated with lower-peace versus higher-peace countries. As each peace index measures different social properties, there is little consensus on the numerical values of these indices. There is however greater consensus with these indices for the countries that are at the extremes of lower-peace and higher-peace. Therefore, a data driven approach was used to find the words most important in distinguishing lower-peace and higher-peace countries. Rather than assuming a theoretical framework that predicts which words are more likely in lower-peace and higher-peace countries, and then searching for those words in news media, in this study, natural language processing and machine learning were used to identify the words that most accurately classified a country as lower-peace or higher-peace. Once the machine learning model was trained on the word frequencies from the extreme lower-peace and higher-peace countries, that model was also used to compute a quantitative peace index for these and other intermediate-peace countries. The model successfully yielded a quantitative peace index for intermediate-peace countries that was in between that of the lower-peace and higher-peace, even though they were not in the training set. This study demonstrates how natural language processing and machine learning can help to generate new quantitative measures of social systems, which in this study, were linguistic differences resulting in a quantitative index of peace for countries at different levels of peacefulness. | [{"version": "v1", "created": "Sun, 21 May 2023 18:43:25 GMT"}] | 2023-05-23T00:00:00 | [["Liebovitch", "Larry S.", "", "1 and 2"], ["Powers", "William", ""], ["Shi", "Lin", ""], ["Chen-Carrel", "Allegra", ""], ["Loustaunau", "Philippe", ""], ["Coleman", "Peter T.", ""]] | new_dataset | 0.983154 |
| 2305.12561 | Roberto Daza | \'Alvaro Becerra, Roberto Daza, Ruth Cobos, Aythami Morales, Mutlu Cukurova, Julian Fierrez | M2LADS: A System for Generating MultiModal Learning Analytics Dashboards in Open Education | Accepted in "Workshop on Open Education Resources (OER) of COMPSAC 2023" | null | null | null | cs.HC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we present a Web-based System called M2LADS, which supports the integration and visualization of multimodal data recorded in learning sessions in a MOOC in the form of Web-based Dashboards. Based on the edBB platform, the multimodal data gathered contains biometric and behavioral signals including electroencephalogram data to measure learners' cognitive attention, heart rate for affective measures, and visual attention from the video recordings. Additionally, learners' static background data and their learning performance measures are tracked using LOGCE and MOOC tracking logs respectively, and both are included in the Web-based System. M2LADS provides opportunities to capture learners' holistic experience during their interactions with the MOOC, which can in turn be used to improve their learning outcomes through feedback visualizations and interventions, as well as to enhance learning analytics models and improve the open content of the MOOC. | [{"version": "v1", "created": "Sun, 21 May 2023 20:22:38 GMT"}] | 2023-05-23T00:00:00 | [["Becerra", "Álvaro", ""], ["Daza", "Roberto", ""], ["Cobos", "Ruth", ""], ["Morales", "Aythami", ""], ["Cukurova", "Mutlu", ""], ["Fierrez", "Julian", ""]] | new_dataset | 0.994411 |
| 2305.12564 | Jin Kim | Jared Wong and Jin Kim | ChatGPT Is More Likely to Be Perceived as Male Than Female | null | null | null | null | cs.HC cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We investigate how people perceive ChatGPT, and, in particular, how they assign human-like attributes such as gender to the chatbot. Across five pre-registered studies (N = 1,552), we find that people are more likely to perceive ChatGPT to be male than female. Specifically, people perceive male gender identity (1) following demonstrations of ChatGPT's core abilities (e.g., providing information or summarizing text), (2) in the absence of such demonstrations, and (3) across different methods of eliciting perceived gender (using various scales and asking to name ChatGPT). Moreover, we find that this seemingly default perception of ChatGPT as male can reverse when ChatGPT's feminine-coded abilities are highlighted (e.g., providing emotional support for a user). | [{"version": "v1", "created": "Sun, 21 May 2023 20:57:12 GMT"}] | 2023-05-23T00:00:00 | [["Wong", "Jared", ""], ["Kim", "Jin", ""]] | new_dataset | 0.997004 |
| 2305.12612 | Luke Gessler | Luke Gessler | PrOnto: Language Model Evaluations for 859 Languages | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Evaluation datasets are critical resources for measuring the quality of pretrained language models. However, due to the high cost of dataset annotation, these resources are scarce for most languages other than English, making it difficult to assess the quality of language models. In this work, we present a new method for evaluation dataset construction which enables any language with a New Testament translation to receive a suite of evaluation datasets suitable for pretrained language model evaluation. The method critically involves aligning verses with those in the New Testament portion of English OntoNotes, and then projecting annotations from English to the target language, with no manual annotation required. We apply this method to 1051 New Testament translations in 859 languages and make them publicly available. Additionally, we conduct experiments which demonstrate the efficacy of our method for creating evaluation tasks which can assess language model quality. | [{"version": "v1", "created": "Mon, 22 May 2023 00:33:52 GMT"}] | 2023-05-23T00:00:00 | [["Gessler", "Luke", ""]] | new_dataset | 0.997136 |
| 2305.12649 | Mingkui Tan | Hongbin Lin, Mingkui Tan, Yifan Zhang, Zhen Qiu, Shuaicheng Niu, Dong Liu, Qing Du and Yanxia Liu | Imbalance-Agnostic Source-Free Domain Adaptation via Avatar Prototype Alignment | arXiv admin note: text overlap with arXiv:2106.15326 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Source-free Unsupervised Domain Adaptation (SF-UDA) aims to adapt a well-trained source model to an unlabeled target domain without access to the source data. One key challenge is the lack of source data during domain adaptation. To handle this, we propose to mine the hidden knowledge of the source model and exploit it to generate source avatar prototypes. To this end, we propose a Contrastive Prototype Generation and Adaptation (CPGA) method. CPGA consists of two stages: Prototype generation and Prototype adaptation. Extensive experiments on three UDA benchmark datasets demonstrate the superiority of CPGA. However, existing SF-UDA studies implicitly assume balanced class distributions for both the source and target domains, which hinders their real-world applications. To address this issue, we study a more practical SF-UDA task, termed imbalance-agnostic SF-UDA, where the class distributions of both the unseen source domain and unlabeled target domain are unknown and could be arbitrarily skewed. This task is much more challenging than vanilla SF-UDA due to the co-occurrence of covariate shifts and unidentified class distribution shifts between the source and target domains. To address this task, we extend CPGA and propose a new Target-aware Contrastive Prototype Generation and Adaptation (T-CPGA) method. Specifically, for better prototype adaptation in the imbalance-agnostic scenario, T-CPGA applies a new pseudo label generation strategy to identify unknown target class distribution and generate accurate pseudo labels, by utilizing the collective intelligence of the source model and an additional contrastive language-image pre-trained model. Meanwhile, we further devise a target label-distribution-aware classifier to adapt the model to the unknown target class distribution. We empirically show that T-CPGA significantly outperforms CPGA and other SF-UDA methods in imbalance-agnostic SF-UDA. | [{"version": "v1", "created": "Mon, 22 May 2023 02:46:34 GMT"}] | 2023-05-23T00:00:00 | [["Lin", "Hongbin", ""], ["Tan", "Mingkui", ""], ["Zhang", "Yifan", ""], ["Qiu", "Zhen", ""], ["Niu", "Shuaicheng", ""], ["Liu", "Dong", ""], ["Du", "Qing", ""], ["Liu", "Yanxia", ""]] | new_dataset | 0.994338 |
| 2305.12655 | Sihem Mesnager | Kwang Ho Kim, Sihem Mesnager, Ye Bong Kim | On the Boomerang Spectrum of Power Permutation $X^{2^{3n}+2^{2n}+2^{n}-1}$ over $\GF{2^{4n}}$ and Extraction of Optimal Uniformity Boomerang Functions | null | null | null | null | cs.IT math.IT math.NT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A substitution box (S-box) in a symmetric primitive is a mapping $F$ that takes $k$ binary inputs and whose image is a binary $m$-tuple for some positive integers $k$ and $m$, which is usually the only nonlinear element of most modern block ciphers. Therefore, employing S-boxes with good cryptographic properties to resist various attacks is significant. For a power permutation $F$ over the finite field $\GF{2^k}$, the multiset of values $\beta_F(1,b)=\#\{x\in \GF{2^k}\mid F^{-1}(F(x)+b)+F^{-1}(F(x+1)+b)=1\}$ for $b\in \GF{2^k}$ is called the boomerang spectrum of $F$. The maximum value in the boomerang spectrum is called the boomerang uniformity. This paper determines the boomerang spectrum of the power permutation $X^{2^{3n}+2^{2n}+2^{n}-1}$ over $\GF{2^{4n}}$. The boomerang uniformity of that power permutation is $3(2^{2n}-2^n)$. However, on a large subset $\{b\in \GF{2^{4n}}\mid \mathbf{Tr}_n^{4n}(b)\neq 0\}$ of $\GF{2^{4n}}$ of cardinality $2^{4n}-2^{3n}$ (where $\mathbf{Tr}_n^{4n}$ is the (relative) trace function from $\GF{2^{4n}}$ to $\GF{2^{n}}$), we prove that the studied function $F$ achieves the optimal boomerang uniformity $2$. It is known that obtaining such functions is a challenging problem. More importantly, the set of $b$'s giving this value is explicitly determined for any value in the boomerang spectrum. | [{"version": "v1", "created": "Mon, 22 May 2023 02:53:55 GMT"}] | 2023-05-23T00:00:00 | [["Kim", "Kwang Ho", ""], ["Mesnager", "Sihem", ""], ["Kim", "Ye Bong", ""]] | new_dataset | 0.97612 |
| 2305.12659 | Zhenghao Zhang | Zhenghao Zhang and Zhichao Wei and Shengfan Zhang and Zuozhuo Dai and Siyu Zhu | UVOSAM: A Mask-free Paradigm for Unsupervised Video Object Segmentation via Segment Anything Model | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Unsupervised video object segmentation has made significant progress in recent years, but the manual annotation of video mask datasets is expensive and limits the diversity of available datasets. The Segment Anything Model (SAM) has introduced a new prompt-driven paradigm for image segmentation, unlocking a range of previously unexplored capabilities. In this paper, we propose a novel paradigm called UVOSAM, which leverages SAM for unsupervised video object segmentation without requiring video mask labels. To address SAM's limitations in instance discovery and identity association, we introduce a video salient object tracking network that automatically generates trajectories for prominent foreground objects. These trajectories then serve as prompts for SAM to produce video masks on a frame-by-frame basis. Our experimental results demonstrate that UVOSAM significantly outperforms current mask-supervised methods. These findings suggest that UVOSAM has the potential to improve unsupervised video object segmentation and reduce the cost of manual annotation. | [{"version": "v1", "created": "Mon, 22 May 2023 03:03:29 GMT"}] | 2023-05-23T00:00:00 | [["Zhang", "Zhenghao", ""], ["Wei", "Zhichao", ""], ["Zhang", "Shengfan", ""], ["Dai", "Zuozhuo", ""], ["Zhu", "Siyu", ""]] | new_dataset | 0.999765 |
| 2305.12669 | Jie Yang | Jie Yang, Chao-Kai Wen, Jing Xu, Hang Que, Haikun Wei, Shi Jin | Angle-based SLAM on 5G mmWave Systems: Design, Implementation, and Measurement | Accepted by the IEEE Internet of Things Journal | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simultaneous localization and mapping (SLAM) is a key technology that provides user equipment (UE) tracking and environment mapping services, enabling the deep integration of sensing and communication. The millimeter-wave (mmWave) communication, with its larger bandwidths and antenna arrays, inherently facilitates more accurate delay and angle measurements than sub-6 GHz communication, thereby providing opportunities for SLAM. However, none of the existing works have realized the SLAM function under the 5G New Radio (NR) standard due to specification and hardware constraints. In this study, we investigate how 5G mmWave communication systems can achieve situational awareness without changing the transceiver architecture and 5G NR standard. We implement 28 GHz mmWave transceivers that deploy OFDM-based 5G NR waveform with 160 MHz channel bandwidth, and we realize beam management following the 5G NR. Furthermore, we develop an efficient successive cancellation-based angle extraction approach to obtain angles of arrival and departure from the reference signal received power measurements. On the basis of angle measurements, we propose an angle-only SLAM algorithm to track UE and map features in the radio environment. Thorough experiments and ray tracing-based computer simulations verify that the proposed angle-based SLAM can achieve sub-meter level localization and mapping accuracy with a single base station and without the requirement of strict time synchronization. Our experiments also reveal many propagation properties critical to the success of SLAM in 5G mmWave communication systems. | [{"version": "v1", "created": "Mon, 22 May 2023 03:17:02 GMT"}] | 2023-05-23T00:00:00 | [["Yang", "Jie", ""], ["Wen", "Chao-Kai", ""], ["Xu", "Jing", ""], ["Que", "Hang", ""], ["Wei", "Haikun", ""], ["Jin", "Shi", ""]] | new_dataset | 0.996021 |
| 2305.12720 | Masanori Hirano | Masanori Hirano, Masahiro Suzuki, Hiroki Sakaji | llm-japanese-dataset v0: Construction of Japanese Chat Dataset for Large Language Models and its Methodology | 12 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study constructed a Japanese chat dataset for tuning large language models (LLMs), which consists of about 8.4 million records. Recently, LLMs have been developed and are gaining popularity. However, high-performing LLMs are usually built mainly for English. There are two ways for those LLMs to support languages other than English: constructing LLMs from scratch or tuning existing models. In both ways, however, datasets are a necessary part. In this study, we focused on supporting Japanese in those LLMs and making a dataset for training or tuning LLMs in Japanese. The dataset we constructed consisted of various tasks, such as translation and knowledge tasks. In our experiment, we tuned an existing LLM using our dataset and evaluated the performance qualitatively. The results suggest that our dataset is possibly beneficial for LLMs. However, we also revealed some difficulties in constructing LLMs in languages other than English. | [{"version": "v1", "created": "Mon, 22 May 2023 04:59:33 GMT"}] | 2023-05-23T00:00:00 | [["Hirano", "Masanori", ""], ["Suzuki", "Masahiro", ""], ["Sakaji", "Hiroki", ""]] | new_dataset | 0.999658 |
| 2305.12725 | Engin Arslan | Tasdiqul Islam and Engin Arslan | Quantum Key Distribution with Minimal Qubit Transmission Based on MultiQubit Greenberger Horne Zeilinger State | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional Quantum Key Distribution (QKD) requires the transmission of multiple qubits, equivalent in number to the length of the key. As quantum networks are still in their infancy, they are expected to have limited capacity, and requiring many qubit transmissions for QKD might limit the effective use of the limited network bandwidth of quantum networks. To address this challenge and enhance the practicality of QKD, we propose a Multi-Qubit Greenberger Horne Zeilinger (GHZ) State-based QKD scheme that requires a small number of qubit transmissions. The proposed method transmits one qubit between endpoints and reuses it for the transmission of multiple classical bits with the help of Quantum nondemolition (QND) measurements. We show that one can transfer L-1 classical bits by generating an L-qubit GHZ state and transferring one qubit to the remote party. We further show that the proposed QKD algorithm can be extended to enable multi-party QKD. It can also support QKD between parties with minimal quantum resources. As a result, the proposed scheme offers a quantum-network-efficient alternative for QKD. | [{"version": "v1", "created": "Mon, 22 May 2023 05:19:09 GMT"}] | 2023-05-23T00:00:00 | [["Islam", "Tasdiqul", ""], ["Arslan", "Engin", ""]] | new_dataset | 0.966034 |
| 2305.12749 | Zihan Wang | Zihan Wang, Tianle Wang, Dheeraj Mekala, Jingbo Shang | A Benchmark on Extremely Weakly Supervised Text Classification: Reconcile Seed Matching and Prompting Approaches | ACL 2023 Findings | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Extremely Weakly Supervised Text Classification (XWS-TC) refers to text classification based on minimal high-level human guidance, such as a few label-indicative seed words or classification instructions. There are two mainstream approaches for XWS-TC that, however, have never been rigorously compared: (1) training classifiers based on pseudo-labels generated by (softly) matching seed words (SEED) and (2) prompting (and calibrating) language models using classification instruction (and raw texts) to decode label words (PROMPT). This paper presents the first XWS-TC benchmark to compare the two approaches on fair grounds, where the datasets, supervisions, and hyperparameter choices are standardized across methods. Our benchmarking results suggest that (1) Both SEED and PROMPT approaches are competitive and there is no clear winner; (2) SEED is empirically more tolerant than PROMPT to human guidance (e.g., seed words, classification instructions, and label words) changes; (3) SEED is empirically more selective than PROMPT to the pre-trained language models; (4) Recent SEED and PROMPT methods have close connections and a clustering post-processing step based on raw in-domain texts is a strong performance booster to both. We hope this benchmark serves as a guideline in selecting XWS-TC methods in different scenarios and stimulate interest in developing guidance- and model-robust XWS-TC methods. We release the repo at https://github.com/ZihanWangKi/x-TC. | [{"version": "v1", "created": "Mon, 22 May 2023 06:18:23 GMT"}] | 2023-05-23T00:00:00 | [["Wang", "Zihan", ""], ["Wang", "Tianle", ""], ["Mekala", "Dheeraj", ""], ["Shang", "Jingbo", ""]] | new_dataset | 0.999068 |
2305.12759
|
Hao Wang
|
Hao Wang, Hirofumi Shimizu, Daisuke Kawahara
|
Kanbun-LM: Reading and Translating Classical Chinese in Japanese Methods
by Language Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies in natural language processing (NLP) have focused on modern
languages and achieved state-of-the-art results in many tasks. Meanwhile,
little attention has been paid to ancient texts and related tasks. Classical
Chinese first came to Japan approximately 2,000 years ago. It was gradually
adapted to a Japanese form called Kanbun-Kundoku (Kanbun) in Japanese reading
and translating methods, which has significantly impacted Japanese literature.
However, compared to the rich resources for ancient texts in mainland China,
Kanbun resources remain scarce in Japan. To solve this problem, we construct
the first Classical-Chinese-to-Kanbun dataset in the world. Furthermore, we
introduce two tasks, character reordering and machine translation, both of
which play a significant role in Kanbun comprehension. We also test the current
language models on these tasks and discuss the best evaluation method by
comparing the results with human scores. We release our code and dataset on
GitHub.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 06:30:02 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Wang",
"Hao",
""
],
[
"Shimizu",
"Hirofumi",
""
],
[
"Kawahara",
"Daisuke",
""
]
] |
new_dataset
| 0.999725 |
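The character-reordering task in the record above invites a rank-correlation metric; the sketch below uses Kendall's tau as one plausible scoring choice (the paper's actual evaluation may differ).

```python
from scipy.stats import kendalltau

def reordering_score(predicted_order, gold_order):
    """Kendall's tau between predicted and gold permutations of character indices.

    tau = 1.0 means the Kanbun reading order was reconstructed exactly.
    """
    tau, _ = kendalltau(predicted_order, gold_order)
    return tau

print(reordering_score([0, 2, 1, 3], [0, 1, 2, 3]))  # ~0.67: one swapped pair
```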
2305.12778
|
Baihua Shi
|
Baihua Shi, Yang Wang, Danqi Li, Wenlong Cai, Jinyong Lin, Shuo Zhang,
Weiping Shi, Shihao Yan, and Feng Shu
|
STAR-RIS-UAV Aided Coordinated Multipoint Cellular System for Multi-user
Networks
|
10 pages, 6 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Different from a conventional reconfigurable intelligent surface (RIS), a
simultaneously transmitting and reflecting RIS (STAR-RIS) can both reflect and
transmit signals to the receiver. In this paper, to serve more ground users
and increase deployment flexibility, we investigate an unmanned aerial vehicle
equipped with a STAR-RIS (STAR-RIS-UAV) to aid wireless communications in
multi-user networks. Energy splitting (ES) and mode switching (MS)
protocols are considered to control the reflection and transmission
coefficients of STAR-RIS elements. To maximize the sum rate of the STAR-RIS-UAV
aided coordinated multipoint cellular system for multi-user networks, the
corresponding beamforming vectors as well as transmitted and reflected
coefficients matrices are optimized. Specifically, instead of adopting the
alternating optimization, we design an iteration method to optimize all
variables for both ES and MS protocols at the same time. Simulation results
reveal that STAR-RIS-UAV aided wireless communication system has a much higher
sum rate than the system with conventional RIS or without RIS. Furthermore, the
proposed structure is more flexible than a fixed STAR-RIS and could greatly
promote the sum rate.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 07:19:34 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Shi",
"Baihua",
""
],
[
"Wang",
"Yang",
""
],
[
"Li",
"Danqi",
""
],
[
"Cai",
"Wenlong",
""
],
[
"Lin",
"Jinyong",
""
],
[
"Zhang",
"Shuo",
""
],
[
"Shi",
"Weiping",
""
],
[
"Yan",
"Shihao",
""
],
[
"Shu",
"Feng",
""
]
] |
new_dataset
| 0.975922 |
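Under the energy-splitting (ES) protocol in the record above, each STAR-RIS element divides its incident energy between reflection and transmission, so the squared amplitudes of the two coefficients sum to one per element. A minimal sketch generating random ES-feasible coefficients, for illustration only:

```python
import numpy as np

def random_es_coefficients(Q: int):
    """Random ES-feasible STAR-RIS coefficients for Q elements.

    Returns complex reflection and transmission vectors whose squared
    amplitudes sum to 1 element-wise (energy conservation per element).
    """
    beta_r = np.random.uniform(0, 1, Q)  # reflection energy fraction
    beta_t = 1.0 - beta_r                # transmission energy fraction
    phase_r = np.exp(1j * np.random.uniform(0, 2 * np.pi, Q))
    phase_t = np.exp(1j * np.random.uniform(0, 2 * np.pi, Q))
    return np.sqrt(beta_r) * phase_r, np.sqrt(beta_t) * phase_t

r, t = random_es_coefficients(4)
assert np.allclose(np.abs(r) ** 2 + np.abs(t) ** 2, 1.0)
```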
2305.12784
|
Jason Kim
|
Hritvik Taneja, Jason Kim, Jie Jeff Xu, Stephan van Schaik, Daniel
Genkin, Yuval Yarom
|
Hot Pixels: Frequency, Power, and Temperature Attacks on GPUs and ARM
SoCs
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The drive to create thinner, lighter, and more energy efficient devices has
resulted in modern SoCs being forced to balance a delicate tradeoff between
power consumption, heat dissipation, and execution speed (i.e., frequency) via
dynamic voltage and frequency scaling (DVFS). While beneficial, these DVFS
mechanisms have also resulted in software-visible
hybrid side-channels, which use software to probe analog properties of
computing devices. Such hybrid attacks are an emerging threat that can bypass
countermeasures for traditional microarchitectural side-channel attacks.
Given the rise in popularity of both Arm SoCs and GPUs, in this paper we
investigate the susceptibility of these devices to information leakage via
power, temperature and frequency, as measured via internal sensors. We
demonstrate that the sensor data observed correlates with both instructions
executed and data processed, allowing us to mount software-visible hybrid
side-channel attacks on these devices.
To demonstrate the real-world impact of this issue, we present
JavaScript-based pixel stealing and history sniffing attacks on Chrome and
Safari, with all side channel countermeasures enabled. Finally, we also show
website fingerprinting attacks, without any elevated privileges.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 07:29:05 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Taneja",
"Hritvik",
""
],
[
"Kim",
"Jason",
""
],
[
"Xu",
"Jie Jeff",
""
],
[
"van Schaik",
"Stephan",
""
],
[
"Genkin",
"Daniel",
""
],
[
"Yarom",
"Yuval",
""
]
] |
new_dataset
| 0.981043 |
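The core statistical step behind such hybrid side channels — checking whether an internal sensor trace co-varies with secret-dependent computation — can be pictured with a plain Pearson correlation. This is a generic illustration with synthetic data, not the authors' attack code.

```python
import numpy as np

def leakage_score(sensor_trace, secret_trace):
    """Pearson correlation between sensor readings (power/temp/frequency)
    and a secret-dependent workload indicator; values far from 0 suggest leakage."""
    return np.corrcoef(sensor_trace, secret_trace)[0, 1]

rng = np.random.default_rng(0)
secret = rng.integers(0, 2, 1000)                  # hypothetical bit being processed
sensor = 0.3 * secret + rng.normal(0, 0.1, 1000)   # noisy, correlated sensor reading
print(leakage_score(sensor, secret))               # clearly nonzero => exploitable signal
```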
2305.12785
|
Hanxing Ding
|
Hanxing Ding, Liang Pang, Zihao Wei, Huawei Shen, Xueqi Cheng,
Tat-Seng Chua
|
MacLaSa: Multi-Aspect Controllable Text Generation via Efficient
Sampling from Compact Latent Space
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-aspect controllable text generation aims to generate fluent sentences
that possess multiple desired attributes simultaneously. Traditional methods
either combine many operators in the decoding stage, often with costly
iteration or search in the discrete text space, or train separate controllers
for each aspect, resulting in a degeneration of text quality due to the
discrepancy between different aspects. To address these limitations, we
introduce a novel approach for multi-aspect control, namely MacLaSa, that
estimates compact latent space for multiple aspects and performs efficient
sampling with a robust sampler based on ordinary differential equations (ODEs).
To eliminate the domain gaps between different aspects, we utilize a
Variational Autoencoder (VAE) network to map text sequences from varying data
sources into close latent representations. The estimated latent space enables
the formulation of joint energy-based models (EBMs) and the plugging in of
arbitrary attribute discriminators to achieve multi-aspect control. Afterwards,
we draw latent vector samples with an ODE-based sampler and feed sampled
examples to the VAE decoder to produce target text sequences. Experimental
results demonstrate that MacLaSa outperforms several strong baselines on
attribute relevance and textual quality while maintaining a high inference
speed.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 07:30:35 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Ding",
"Hanxing",
""
],
[
"Pang",
"Liang",
""
],
[
"Wei",
"Zihao",
""
],
[
"Shen",
"Huawei",
""
],
[
"Cheng",
"Xueqi",
""
],
[
"Chua",
"Tat-Seng",
""
]
] |
new_dataset
| 0.994404 |
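MacLaSa samples latent vectors with an ODE-based sampler; as a simpler stand-in that conveys the same idea — drawing low-energy latents from a joint energy-based model — here is a hypothetical Langevin-dynamics sketch (explicitly not the paper's sampler).

```python
import numpy as np

def langevin_sample(energy_grad, z0, steps=200, step_size=1e-2, rng=None):
    """Noisy gradient descent on an energy function; drifts toward
    low-energy (high-probability) regions of the latent space."""
    rng = rng or np.random.default_rng()
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z += -step_size * energy_grad(z) + np.sqrt(2 * step_size) * rng.normal(size=z.shape)
    return z

# Toy joint energy: two quadratic wells standing in for two attribute discriminators.
grad = lambda z: (z - 1.0) + (z + 0.5)  # gradient of 0.5||z-1||^2 + 0.5||z+0.5||^2
print(langevin_sample(grad, z0=np.zeros(4)))  # hovers near the compromise point 0.25
```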
2305.12798
|
Chi Han
|
Chi Han, Jialiang Xu, Manling Li, Yi Fung, Chenkai Sun, Nan Jiang,
Tarek Abdelzaher, Heng Ji
|
LM-Switch: Lightweight Language Model Conditioning in Word Embedding
Space
|
9 pages, 3 figures
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, large language models (LMs) have achieved remarkable
progress across various natural language processing tasks. As pre-training and
fine-tuning are costly and might negatively impact model performance, it is
desired to efficiently adapt an existing model to different conditions such as
styles, sentiments or narratives, when facing different audiences or scenarios.
However, efficient adaptation of a language model to diverse conditions remains
an open challenge. This work is inspired by the observation that text
conditions are often associated with selection of certain words in a context.
Therefore we introduce LM-Switch, a theoretically grounded, lightweight and
simple method for generative language model conditioning. We begin by
investigating the effect of conditions in Hidden Markov Models (HMMs), and
establish a theoretical connection with language models. Our findings suggest
that condition shifts in HMMs are associated with linear transformations in
word embeddings. LM-Switch is then designed to deploy a learnable linear factor
in the word embedding space for language model conditioning. We show that
LM-Switch can model diverse tasks, and achieves comparable or better
performance compared with state-of-the-art baselines in LM detoxification and
generation control, despite requiring no more than 1% of parameters compared
with baselines and little extra time overhead compared with base LMs. It is
also able to learn from as few as a few sentences or one document. Moreover, a
learned LM-Switch can be transferred to other LMs of different sizes, achieving
a detoxification performance similar to the best baseline. We will make our
code available to the research community following publication.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 07:52:04 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Han",
"Chi",
""
],
[
"Xu",
"Jialiang",
""
],
[
"Li",
"Manling",
""
],
[
"Fung",
"Yi",
""
],
[
"Sun",
"Chenkai",
""
],
[
"Jiang",
"Nan",
""
],
[
"Abdelzaher",
"Tarek",
""
],
[
"Ji",
"Heng",
""
]
] |
new_dataset
| 0.979508 |
2305.12821
|
Youngwoon Lee
|
Minho Heo and Youngwoon Lee and Doohyun Lee and Joseph J. Lim
|
FurnitureBench: Reproducible Real-World Benchmark for Long-Horizon
Complex Manipulation
|
Robotics: Science and Systems (RSS) 2023. Website:
https://clvrai.com/furniture-bench
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL), imitation learning (IL), and task and motion
planning (TAMP) have demonstrated impressive performance across various robotic
manipulation tasks. However, these approaches have been limited to learning
simple behaviors in current real-world manipulation benchmarks, such as pushing
or pick-and-place. To enable more complex, long-horizon behaviors of an
autonomous robot, we propose to focus on real-world furniture assembly, a
complex, long-horizon robot manipulation task that requires addressing many
current robotic manipulation challenges to solve. We present FurnitureBench, a
reproducible real-world furniture assembly benchmark aimed at providing a low
barrier to entry, so that researchers across the
world can reliably test their algorithms and compare them against prior work.
For ease of use, we provide 200+ hours of pre-collected data (5000+
demonstrations), 3D printable furniture models, a robotic environment setup
guide, and systematic task initialization. Furthermore, we provide
FurnitureSim, a fast and realistic simulator of FurnitureBench. We benchmark
the performance of offline RL and IL algorithms on our assembly tasks and
demonstrate the need to improve such algorithms to be able to solve our tasks
in the real world, providing ample opportunities for future research.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 08:29:00 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Heo",
"Minho",
""
],
[
"Lee",
"Youngwoon",
""
],
[
"Lee",
"Doohyun",
""
],
[
"Lim",
"Joseph J.",
""
]
] |
new_dataset
| 0.999139 |
2305.12945
|
Dongfang Li
|
Dongfang Li, Jindi Yu, Baotian Hu, Zhenran Xu and Min Zhang
|
ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist
Examination
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As ChatGPT and GPT-4 spearhead the development of Large Language Models
(LLMs), more researchers are investigating their performance across various
tasks. But more research needs to be done on the interpretability capabilities
of LLMs, that is, the ability to generate reasons after an answer has been
given. Existing explanation datasets are mostly English-language general
knowledge questions, which leads to insufficient thematic and linguistic
diversity. To address the language bias and the lack of medical resources in
rationale-annotated QA datasets, we present ExplainCPE (over 7k instances), a
challenging medical benchmark in Simplified Chinese. We analyzed the errors of
ChatGPT and GPT-4, pointing out the limitations of current LLMs in
understanding text and computational reasoning. During the experiment, we also
found that different LLMs have different preferences for in-context learning.
ExplainCPE presents a significant challenge, but its potential for further
investigation is promising, and it can be used to evaluate the ability of a
model to generate explanations. AI safety and trustworthiness need more
attention, and this work makes the first step to explore the medical
interpretability of LLMs. The dataset is available at
https://github.com/HITsz-TMG/ExplainCPE.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 11:45:42 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Li",
"Dongfang",
""
],
[
"Yu",
"Jindi",
""
],
[
"Hu",
"Baotian",
""
],
[
"Xu",
"Zhenran",
""
],
[
"Zhang",
"Min",
""
]
] |
new_dataset
| 0.999638 |
2305.12952
|
Donatella Darsena
|
Donatella Darsena, Francesco Verde
|
On the capacity of TDMA downlink with a reconfigurable intelligent
surface
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide accurate approximations of the sum-rate capacity of a
time-division multiple access (TDMA) downlink, when a reconfigurable
intelligent surface (RIS) assists the transmission from a single-antenna base
station (BS) to K single-antenna user equipments (UEs). We consider the fading
effects of both the direct (i.e., BS-to-UEs) and reflection (i.e.,
BS-to-RIS-to-UEs) links by developing two approximations: the former is
based on hardening of the reflection channel for large values of the number Q
of meta-atoms at the RIS; the latter relies on the distribution of the sum
of Nakagami variates and does not require channel hardening. Our derivations
show how the sum-rate capacity depends on both K and Q, and establish a
comparison with a TDMA downlink without an RIS.
Numerical results corroborate the accuracy of the proposed approximations and
the validity of the mathematical analysis.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 11:55:00 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Darsena",
"Donatella",
""
],
[
"Verde",
"Francesco",
""
]
] |
new_dataset
| 0.966391 |
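The first approximation above relies on hardening of the reflection channel as Q grows. A quick Monte-Carlo sketch with illustrative parameters shows the relative spread of a sum of Q i.i.d. Nakagami-m amplitudes shrinking roughly as 1/sqrt(Q):

```python
import numpy as np

def nakagami(m, omega, size, rng):
    """Nakagami-m samples via the Gamma distribution: |h| = sqrt(Gamma(m, omega/m))."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

rng = np.random.default_rng(0)
for Q in (4, 16, 64, 256):
    s = nakagami(2.0, 1.0, (10000, Q), rng).sum(axis=1)  # aggregate-gain proxy
    print(Q, s.std() / s.mean())  # relative fluctuation shrinks ~ 1/sqrt(Q)
```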
2305.12955
|
Stefanie Walz
|
Stefanie Walz and Mario Bijelic and Andrea Ramazzina and Amanpreet
Walia and Fahim Mannan and Felix Heide
|
Gated Stereo: Joint Depth Estimation from Gated and Wide-Baseline Active
Stereo Cues
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Gated Stereo, a high-resolution and long-range depth estimation
technique that operates on active gated stereo images. Using active and high
dynamic range passive captures, Gated Stereo exploits multi-view cues alongside
time-of-flight intensity cues from active gating. To this end, we propose a
depth estimation method with a monocular and stereo depth prediction branch
which are combined in a final fusion stage. Each block is supervised through a
combination of supervised and gated self-supervision losses. To facilitate
training and validation, we acquire a long-range synchronized gated stereo
dataset for automotive scenarios. We find that the method achieves an
improvement of more than 50 % in MAE compared to the next best RGB stereo
method, and of 74 % in MAE compared to existing monocular gated methods, for
distances up to 160 m. Our code, models, and datasets are available here.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 12:03:20 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Walz",
"Stefanie",
""
],
[
"Bijelic",
"Mario",
""
],
[
"Ramazzina",
"Andrea",
""
],
[
"Walia",
"Amanpreet",
""
],
[
"Mannan",
"Fahim",
""
],
[
"Heide",
"Felix",
""
]
] |
new_dataset
| 0.996131 |
2305.12971
|
James Stovold
|
James Stovold
|
Neural Cellular Automata Can Respond to Signals
|
Accepted to main track at ALIFE 2023
| null | null | null |
cs.NE cs.AI cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Neural Cellular Automata (NCAs) are a model of morphogenesis, capable of
growing two-dimensional artificial organisms from a single seed cell. In this
paper, we show that NCAs can be trained to respond to signals. Two types of
signal are used: internal (genomically-coded) signals, and external
(environmental) signals. Signals are presented to a single pixel for a single
timestep.
Results show NCAs are able to grow into multiple distinct forms based on
internal signals, and are able to change colour based on external signals.
Overall, these results contribute to the development of NCAs as a model of artificial
morphogenesis, and pave the way for future developments embedding dynamic
behaviour into the NCA model.
Code and target images are available through GitHub:
https://github.com/jstovold/ALIFE2023
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 12:26:46 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Stovold",
"James",
""
]
] |
new_dataset
| 0.995349 |
2305.13019
|
Zihao Zhang
|
Zihao Zhang, Susan L. Epstein, Casey Breen, Sophia Xia, Zhigang Zhu,
Christian Volkmann
|
Robots in the Garden: Artificial Intelligence and Adaptive Landscapes
|
4 figures, 9 pages
|
Journal of Digital Landscape Architecture, 2023
|
10.14627/537740028
| null |
cs.RO cs.AI cs.CV cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces ELUA, the Ecological Laboratory for Urban Agriculture,
a collaboration among landscape architects, architects and computer scientists
who specialize in artificial intelligence, robotics and computer vision. ELUA
has two gantry robots, one indoors and the other outside on the rooftop of a
6-story campus building. Each robot can seed, water, weed, and prune in its
garden. To support responsive landscape research, ELUA also includes sensor
arrays, an AI-powered camera, and an extensive network infrastructure. This
project demonstrates a way to integrate artificial intelligence into an
evolving urban ecosystem, and encourages landscape architects to develop an
adaptive design framework where design becomes a long-term engagement with the
environment.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 13:21:59 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Zhang",
"Zihao",
""
],
[
"Epstein",
"Susan L.",
""
],
[
"Breen",
"Casey",
""
],
[
"Xia",
"Sophia",
""
],
[
"Zhu",
"Zhigang",
""
],
[
"Volkmann",
"Christian",
""
]
] |
new_dataset
| 0.997127 |
2305.13021
|
Ambre Davat
|
Ambre Davat (GIPSA-PCMD,LIG), V\'eronique Auberg\'e (LIG), Gang Feng
(GIPSA-lab)
|
Can we hear physical and social space together through prosody?
| null |
Speech Prosody 2020, May 2020, Tokyo, Japan. pp.715-719
|
10.21437/SpeechProsody.2020-146
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When human listeners try to guess the spatial position of a speech source,
they are influenced by the speaker's production level, regardless of the
intensity level reaching their ears. Because the perception of distance is a
very difficult task, they rely on their own experience, which tells them that a
whispering talker is close to them, and that a shouting talker is far away.
This study aims to test if similar results could be obtained for prosodic
variations produced by a human speaker in an everyday-life environment. It
consists of a localization task, during which blindfolded subjects had to
estimate the incoming voice direction, speaker orientation and distance of a
trained female speaker, who uttered single words following instructions
concerning the intensity and social affect to be performed. This protocol was
implemented in two experiments. First, a complex pretext task was used in order
to distract the subjects from the strange behavior of the speaker. On the
contrary, during the second experiment, the subjects were fully aware of the
prosodic variations, which allowed them to adapt their perception. Results show
the importance of the pretext task, and suggest that the perception of the
speaker's orientation can be influenced by voice intensity.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 13:25:01 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Davat",
"Ambre",
"",
"GIPSA-PCMD,LIG"
],
[
"Aubergé",
"Véronique",
"",
"LIG"
],
[
"Feng",
"Gang",
"",
"GIPSA-lab"
]
] |
new_dataset
| 0.998207 |
2305.13026
|
Wietse de Vries
|
Wietse de Vries, Martijn Wieling and Malvina Nissim
|
DUMB: A Benchmark for Smart Evaluation of Dutch Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce the Dutch Model Benchmark: DUMB. The benchmark includes a
diverse set of datasets for low-, medium- and high-resource tasks. The total
set of eight tasks includes three tasks that were previously not available in
Dutch. Instead of relying on a mean score across tasks, we propose Relative
Error Reduction (RER), which compares the DUMB performance of models to a
strong baseline which can be referred to in the future even when assessing
different sets of models. Through a comparison of 14 pre-trained models (mono-
and multi-lingual, of varying sizes), we assess the internal consistency of the
benchmark tasks, as well as the factors that likely enable high performance.
Our results indicate that current Dutch monolingual models under-perform and
suggest training larger Dutch models with other architectures and pre-training
objectives. At present, the highest performance is achieved by DeBERTaV3
(large), XLM-R (large) and mDeBERTaV3 (base). In addition to highlighting best
strategies for training larger Dutch models, DUMB will foster further research
on Dutch. A public leaderboard is available at https://dumbench.nl.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 13:27:37 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"de Vries",
"Wietse",
""
],
[
"Wieling",
"Martijn",
""
],
[
"Nissim",
"Malvina",
""
]
] |
new_dataset
| 0.991357 |
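The abstract above does not spell out the RER formula; under the standard reading — the fraction of the baseline's error that a model removes — it would look like this (an assumption for illustration, not a quotation from the paper):

```python
def relative_error_reduction(model_score, baseline_score):
    """RER of an accuracy-style score in [0, 1] relative to a strong baseline.

    Positive values mean the model removes part of the baseline's error;
    negative values mean it makes more errors than the baseline.
    """
    baseline_error = 1.0 - baseline_score
    return (baseline_error - (1.0 - model_score)) / baseline_error

print(relative_error_reduction(0.92, 0.90))  # ~0.2: a fifth of the error removed
```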
2305.13086
|
Ruochen Xu
|
Ruochen Xu, Song Wang, Yang Liu, Shuohang Wang, Yichong Xu, Dan Iter,
Chenguang Zhu, Michael Zeng
|
LMGQS: A Large-scale Dataset for Query-focused Summarization
|
work in progress
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Query-focused summarization (QFS) aims to extract or generate a summary of an
input document that directly answers or is relevant to a given query. The lack
of large-scale datasets in the form of documents, queries, and summaries has
hindered model development in this area. In contrast, multiple large-scale
high-quality datasets for generic summarization exist. We hypothesize that
there is a hidden query for each summary sentence in a generic summarization
annotation, and we utilize a large-scale pretrained language model to recover
it. In this way, we convert four generic summarization benchmarks into a new
QFS benchmark dataset, LMGQS, which consists of over 1 million
document-query-summary samples. We thoroughly investigate the properties of our
proposed dataset and establish baselines with state-of-the-art summarization
models. By fine-tuning a language model on LMGQS, we achieve state-of-the-art
zero-shot and supervised performance on multiple existing QFS benchmarks,
demonstrating the high quality and diversity of LMGQS.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 14:53:45 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Xu",
"Ruochen",
""
],
[
"Wang",
"Song",
""
],
[
"Liu",
"Yang",
""
],
[
"Wang",
"Shuohang",
""
],
[
"Xu",
"Yichong",
""
],
[
"Iter",
"Dan",
""
],
[
"Zhu",
"Chenguang",
""
],
[
"Zeng",
"Michael",
""
]
] |
new_dataset
| 0.999726 |
2305.13124
|
Alexander Hoelzemann
|
Alexander Hoelzemann, Julia Lee Romero, Marius Bock, Kristof Van
Laerhoven, Qin Lv
|
Hang-Time HAR: A Benchmark Dataset for Basketball Activity Recognition
using Wrist-worn Inertial Sensors
| null | null | null | null |
cs.LG cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We present a benchmark dataset for evaluating physical human activity
recognition methods from wrist-worn sensors, for the specific setting of
basketball training, drills, and games. Basketball activities lend themselves
well to measurement by wrist-worn inertial sensors, and systems that are able
to detect such sport-relevant activities could be used in applications toward
game analysis, guided training, and personal physical activity tracking. The
dataset was recorded for two teams from separate countries (USA and Germany)
with a total of 24 players who wore an inertial sensor on their wrist, during
both repetitive basketball training sessions and full games. Particular
features of this dataset include an inherent variance through cultural
differences in game rules and styles as the data was recorded in two countries,
as well as different sport skill levels, since the participants were
heterogeneous in terms of prior basketball experience. We illustrate the
dataset's features in several time-series analyses and report on a baseline
classification performance study with two state-of-the-art deep learning
architectures.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 15:25:29 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Hoelzemann",
"Alexander",
""
],
[
"Romero",
"Julia Lee",
""
],
[
"Bock",
"Marius",
""
],
[
"Van Laerhoven",
"Kristof",
""
],
[
"Lv",
"Qin",
""
]
] |
new_dataset
| 0.999867 |
2305.13186
|
Xinyuan Lu
|
Xinyuan Lu, Liangming Pan, Qian Liu, Preslav Nakov, Min-Yen Kan
|
SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim
Verification on Scientific Tables
|
Technical Report
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Scientific fact-checking is crucial for ensuring the accuracy, reliability,
and trustworthiness of scientific claims. However, existing benchmarks are
limited in terms of their claim diversity, reliance on text-based evidence, and
oversimplification of scientific reasoning. To address these gaps, we introduce
SCITAB, a novel dataset comprising 1,225 challenging scientific claims
requiring compositional reasoning with scientific tables. The claims in SCITAB
are derived from actual scientific statements, and the evidence is
presented as tables, closely mirroring real-world fact-checking scenarios. We
establish benchmarks on SCITAB using state-of-the-art models, revealing its
inherent difficulty and highlighting limitations in existing prompting methods.
Our error analysis identifies unique challenges, including ambiguous
expressions and irrelevant claims, suggesting future research directions. The
code and the data are publicly available at
https://github.com/XinyuanLu00/SciTab.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 16:13:50 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Lu",
"Xinyuan",
""
],
[
"Pan",
"Liangming",
""
],
[
"Liu",
"Qian",
""
],
[
"Nakov",
"Preslav",
""
],
[
"Kan",
"Min-Yen",
""
]
] |
new_dataset
| 0.999798 |
2305.13190
|
Daniela Inclezan
|
Daniela Inclezan
|
An ASP Framework for the Refinement of Authorization and Obligation
Policies
|
Paper accepted for presentation at the 39th International Conference
on Logic Programming (ICLP 2023), 16 pages
| null | null | null |
cs.LO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a framework for assisting policy authors in refining
and improving their policies. In particular, we focus on authorization and
obligation policies that can be encoded in Gelfond and Lobo's AOPL language for
policy specification. We propose a framework that detects the statements that
make a policy inconsistent, underspecified, or ambiguous with respect to an
action being executed in a given state. We also give attention to issues that
arise at the intersection of authorization and obligation policies, for
instance when the policy requires an unauthorized action to be executed. The
framework is encoded in Answer Set Programming. Under consideration for
acceptance in TPLP.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 16:23:11 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Inclezan",
"Daniela",
""
]
] |
new_dataset
| 0.98264 |
2305.13194
|
Elizabeth Clark
|
Elizabeth Clark, Shruti Rijhwani, Sebastian Gehrmann, Joshua Maynez,
Roee Aharoni, Vitaly Nikolaev, Thibault Sellam, Aditya Siddhant, Dipanjan
Das, Ankur P. Parikh
|
SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization
Evaluation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Reliable automatic evaluation of summarization systems is challenging due to
the multifaceted and subjective nature of the task. This is especially the case
for languages other than English, where human evaluations are scarce. In this
work, we introduce SEAHORSE, a dataset for multilingual, multifaceted
summarization evaluation. SEAHORSE consists of 96K summaries with human ratings
along 6 quality dimensions: comprehensibility, repetition, grammar,
attribution, main ideas, and conciseness, covering 6 languages, 9 systems and 4
datasets. As a result of its size and scope, SEAHORSE can serve both as a
benchmark to evaluate learnt metrics and as a large-scale resource for
training such metrics. We show that metrics trained with SEAHORSE achieve
strong performance on the out-of-domain meta-evaluation benchmarks TRUE
(Honovich et al., 2022) and mFACE (Aharoni et al., 2022). We make SEAHORSE
publicly available for future research on multilingual and multifaceted
summarization evaluation.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 16:25:07 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Clark",
"Elizabeth",
""
],
[
"Rijhwani",
"Shruti",
""
],
[
"Gehrmann",
"Sebastian",
""
],
[
"Maynez",
"Joshua",
""
],
[
"Aharoni",
"Roee",
""
],
[
"Nikolaev",
"Vitaly",
""
],
[
"Sellam",
"Thibault",
""
],
[
"Siddhant",
"Aditya",
""
],
[
"Das",
"Dipanjan",
""
],
[
"Parikh",
"Ankur P.",
""
]
] |
new_dataset
| 0.999702 |
2305.13256
|
Joongwon Kim
|
Joongwon Kim, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi
|
TaskWeb: Selecting Better Source Tasks for Multi-task NLP
|
22 pages, 16 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent work in NLP has shown promising results in training models on large
numbers of tasks to achieve better generalization. However, it is not
well-understood how tasks are related, and how helpful training tasks can be
chosen for a new task. In this work, we investigate whether knowing task
relationships via pairwise task transfer improves choosing one or more source
tasks that help to learn a new target task. We provide TaskWeb, a large-scale
benchmark of pairwise task transfers for 22 NLP tasks using three different
model types, sizes, and adaptation methods, spanning about 25,000 experiments.
Then, we design a new method TaskShop based on our analysis of TaskWeb.
TaskShop uses TaskWeb to estimate the benefit of using a source task for
learning a new target, and to choose a subset of helpful training tasks for
multi-task learning. Our method improves overall rankings and top-k precision
of source tasks by 12% and 29%, respectively. We also use TaskShop to build
smaller multi-task training sets that improve zero-shot performances across 11
different target tasks by at least 4.3%.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 17:27:57 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Kim",
"Joongwon",
""
],
[
"Asai",
"Akari",
""
],
[
"Ilharco",
"Gabriel",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
new_dataset
| 0.998925 |
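A minimal sketch of the top-k precision metric reported above — how many of a method's top-k predicted source tasks are truly helpful according to pairwise-transfer ground truth. The task names and data are hypothetical.

```python
def topk_precision(predicted_ranking, helpful_tasks, k=5):
    """Fraction of the top-k predicted source tasks that truly help the target."""
    topk = predicted_ranking[:k]
    return sum(t in helpful_tasks for t in topk) / k

ranking = ["nli", "paraphrase", "qa", "ner", "sentiment", "pos"]
truly_helpful = {"nli", "qa", "sentiment"}  # from pairwise transfer experiments
print(topk_precision(ranking, truly_helpful, k=5))  # 0.6
```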
2305.13258
|
David Herron
|
David Herron, Ernesto Jim\'enez-Ruiz, Giacomo Tarroni and Tillman
Weyde
|
NeSy4VRD: A Multifaceted Resource for Neurosymbolic AI Research using
Knowledge Graphs in Visual Relationship Detection
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
NeSy4VRD is a multifaceted resource designed to support the development of
neurosymbolic AI (NeSy) research. NeSy4VRD re-establishes public access to the
images of the VRD dataset and couples them with an extensively revised,
quality-improved version of the VRD visual relationship annotations. Crucially,
NeSy4VRD provides a well-aligned, companion OWL ontology that describes the
dataset domain. It comes with open source infrastructure that provides
comprehensive support for extensibility of the annotations (which, in turn,
facilitates extensibility of the ontology), and open source code for loading
the annotations to/from a knowledge graph. We are contributing NeSy4VRD to the
computer vision, NeSy and Semantic Web communities to help foster more NeSy
research using OWL-based knowledge graphs.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 17:28:25 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Herron",
"David",
""
],
[
"Jiménez-Ruiz",
"Ernesto",
""
],
[
"Tarroni",
"Giacomo",
""
],
[
"Weyde",
"Tillman",
""
]
] |
new_dataset
| 0.996476 |
2305.13272
|
Shashank Sonkar
|
Shashank Sonkar, Lucy Liu, Debshila Basu Mallick, Richard G. Baraniuk
|
CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning
Science Principles
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present a design framework called Conversational Learning with Analytical
Step-by-Step Strategies (CLASS) for developing high-performance Intelligent
Tutoring Systems (ITS). The CLASS framework aims to empower ITS with two
critical capabilities: imparting tutor-like step-by-step guidance and enabling
tutor-like conversations in natural language to effectively engage learners. To
empower ITS with the aforementioned capabilities, the CLASS framework employs
two carefully curated synthetic datasets. The first scaffolding dataset
encompasses a variety of elements, including problems, their corresponding
subproblems, hints, incorrect solutions, and tailored feedback. This dataset
provides ITS with essential problem-solving strategies necessary for guiding
students through each step of the conversation. The second conversational
dataset contains simulated student-tutor conversations that involve the
application of problem-solving strategies learned from the first dataset. In
the second dataset, the tutoring system adheres to a pre-defined response
template, which helps to maintain consistency and structure in ITS's responses
during its interactions. This structured methodology facilitates seamless
integration of user feedback and yields valuable insights into ITS's internal
decision-making process, allowing for continuous refinement and improvement of
the system. We also present a proof-of-concept ITS, referred to as SPOCK,
trained using the CLASS framework with a focus on college level introductory
biology content. A carefully constructed protocol was developed for SPOCK's
preliminary evaluation, examining aspects such as the factual accuracy and
relevance of its responses. Experts in the field of biology offered favorable
remarks, particularly highlighting SPOCK's capability to break down questions
into manageable subproblems and provide step-by-step guidance to students.
|
[
{
"version": "v1",
"created": "Mon, 22 May 2023 17:35:05 GMT"
}
] | 2023-05-23T00:00:00 |
[
[
"Sonkar",
"Shashank",
""
],
[
"Liu",
"Lucy",
""
],
[
"Mallick",
"Debshila Basu",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] |
new_dataset
| 0.987868 |
2104.12462
|
Francesc Llu\'is
|
Francesc Llu\'is, Vasileios Chatziioannou, Alex Hofmann
|
Points2Sound: From mono to binaural audio using 3D point cloud scenes
|
Code, data, and listening examples:
https://github.com/francesclluis/points2sound
|
EURASIP Journal on Audio, Speech, and Music Processing 2022 (1),
1-15
|
10.1186/s13636-022-00265-4
| null |
cs.SD cs.CV eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
For immersive applications, the generation of binaural sound that matches its
visual counterpart is crucial to bring meaningful experiences to people in a
virtual environment. Recent studies have shown the possibility of using neural
networks for synthesizing binaural audio from mono audio by using 2D visual
information as guidance. Extending this approach by guiding the audio with 3D
visual information and operating in the waveform domain may allow for a more
accurate auralization of a virtual audio scene. We propose Points2Sound, a
multi-modal deep learning model which generates a binaural version from mono
audio using 3D point cloud scenes. Specifically, Points2Sound consists of a
vision network and an audio network. The vision network uses 3D sparse
convolutions to extract a visual feature from the point cloud scene. Then, the
visual feature conditions the audio network, which operates in the waveform
domain, to synthesize the binaural version. Results show that 3D visual
information can successfully guide multi-modal deep learning models for the
task of binaural synthesis. We also investigate how 3D point cloud attributes,
learning objectives, different reverberant conditions, and several types of
mono mixture signals affect the binaural audio synthesis performance of
Points2Sound for the different numbers of sound sources present in the scene.
|
[
{
"version": "v1",
"created": "Mon, 26 Apr 2021 10:44:01 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Nov 2021 14:46:58 GMT"
},
{
"version": "v3",
"created": "Fri, 19 May 2023 12:54:02 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Lluís",
"Francesc",
""
],
[
"Chatziioannou",
"Vasileios",
""
],
[
"Hofmann",
"Alex",
""
]
] |
new_dataset
| 0.997919 |
2109.06275
|
Cristian-Paul Bara
|
Cristian-Paul Bara, Sky CH-Wang, Joyce Chai
|
MindCraft: Theory of Mind Modeling for Situated Dialogue in
Collaborative Tasks
| null | null |
10.18653/v1/2021.emnlp-main.85
| null |
cs.AI cs.CL cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
An ideal integration of autonomous agents in a human world implies that they
are able to collaborate on human terms. In particular, theory of mind plays an
important role in maintaining common ground during human collaboration and
communication. To enable theory of mind modeling in situated interactions, we
introduce a fine-grained dataset of collaborative tasks performed by pairs of
human subjects in the 3D virtual blocks world of Minecraft. It provides
information that captures partners' beliefs of the world and of each other as
an interaction unfolds, bringing abundant opportunities to study human
collaborative behaviors in situated language communication. As a first step
towards our goal of developing embodied AI agents able to infer belief states
of collaborative partners in situ, we build and present results on
computational models for several theory of mind tasks.
|
[
{
"version": "v1",
"created": "Mon, 13 Sep 2021 19:26:19 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Bara",
"Cristian-Paul",
""
],
[
"CH-Wang",
"Sky",
""
],
[
"Chai",
"Joyce",
""
]
] |
new_dataset
| 0.974046 |
2109.13751
|
Ulysse Ran\c{c}on
|
Ulysse Ran\c{c}on, Javier Cuadrado-Anibarro, Benoit R. Cottereau and
Timoth\'ee Masquelier
|
StereoSpike: Depth Learning with a Spiking Neural Network
| null | null |
10.1109/ACCESS.2022.3226484
| null |
cs.CV cs.AI cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Depth estimation is an important computer vision task, useful in particular
for navigation in autonomous vehicles, or for object manipulation in robotics.
Here we solved it using an end-to-end neuromorphic approach, combining two
event-based cameras and a Spiking Neural Network (SNN) with a slightly modified
U-Net-like encoder-decoder architecture, that we named StereoSpike. More
specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It
provides a depth ground-truth, which was used to train StereoSpike in a
supervised manner, using surrogate gradient descent. We propose a novel readout
paradigm to obtain a dense analog prediction -- the depth of each pixel -- from
the spikes of the decoder. We demonstrate that this architecture generalizes
very well, even better than its non-spiking counterparts, leading to
state-of-the-art test accuracy. To the best of our knowledge, it is the first
time that such a large-scale regression problem is solved by a fully spiking
network. Finally, we show that low firing rates (<10%) can be obtained via
regularization, with a minimal cost in accuracy. This means that StereoSpike
could be efficiently implemented on neuromorphic chips, opening the door for
low power and real time embedded systems.
|
[
{
"version": "v1",
"created": "Tue, 28 Sep 2021 14:11:36 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Nov 2021 14:01:38 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Nov 2022 12:35:43 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Rançon",
"Ulysse",
""
],
[
"Cuadrado-Anibarro",
"Javier",
""
],
[
"Cottereau",
"Benoit R.",
""
],
[
"Masquelier",
"Timothée",
""
]
] |
new_dataset
| 0.999417 |
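Surrogate gradient descent, which the record above names as the training method, replaces the spike step's zero-almost-everywhere derivative with a smooth stand-in during the backward pass. A minimal NumPy illustration of one common choice (the fast-sigmoid surrogate) — illustrative, not StereoSpike's exact code:

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Heaviside spike: 1 if membrane potential crosses threshold, else 0."""
    return (v >= threshold).astype(float)

def spike_surrogate_grad(v, threshold=1.0, slope=10.0):
    """Fast-sigmoid surrogate derivative, used in the backward pass only."""
    x = slope * (v - threshold)
    return slope / (1.0 + np.abs(x)) ** 2

v = np.linspace(0, 2, 5)
print(spike_forward(v))         # hard 0/1 spikes in the forward pass
print(spike_surrogate_grad(v))  # smooth, peaked at threshold, for backprop
```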
2204.02577
|
Tao Zheng
|
Tao Zheng, Lihong Zhi
|
A Vergleichsstellensatz of Strassen's Type for a Noncommutative
Preordered Semialgebra through the Semialgebra of its Fractions
|
32 pages
| null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Preordered semialgebras and semirings are two kinds of algebraic structures
occurring in real algebraic geometry frequently and usually play important
roles therein. They have many interesting and promising applications in the
fields of real algebraic geometry, probability theory, theoretical computer
science, quantum information theory, \emph{etc.}. In these applications,
Strassen's Vergleichsstellensatz and its generalized versions, which are
analogs of those Positivstellens\"atze in real algebraic geometry, play
important roles. While these Vergleichsstellens\"atze accept only a commutative
setting (for the semirings in question), we prove in this paper a
noncommutative version of one of the generalized Vergleichsstellens\"atze
proposed by Fritz [\emph{Comm. Algebra}, 49 (2) (2021), pp. 482-499]. The most
crucial step in our proof is to define the semialgebra of the fractions of a
noncommutative semialgebra, which generalizes the definitions in the
literature. Our new Vergleichsstellensatz characterizes the relaxed preorder on
a noncommutative semialgebra induced by all monotone homomorphisms to
$\mathbb{R}_+$ by three other equivalent conditions on the semialgebra of its
fractions equipped with the derived preorder, which may result in more
applications in the future.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 04:47:34 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 09:00:52 GMT"
},
{
"version": "v3",
"created": "Fri, 19 May 2023 12:59:42 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Zheng",
"Tao",
""
],
[
"Zhi",
"Lihong",
""
]
] |
new_dataset
| 0.996544 |
2206.03408
|
Kaitao Meng
|
Kaitao Meng, Qingqing Wu, Jie Xu, Wen Chen, Zhiyong Feng, Robert
Schober, and A. Lee Swindlehurst
|
UAV-Enabled Integrated Sensing and Communication: Opportunities and
Challenges
|
9 pages, 6 figures
|
IEEE Wireless Communications, 2023
|
10.1109/MWC.131.2200442
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicle (UAV)-enabled integrated sensing and communication
(ISAC) has attracted growing research interests in the context of
sixth-generation (6G) wireless networks, in which UAVs will be exploited as
aerial wireless platforms to provide better coverage and enhanced sensing and
communication (S&C) services. However, due to the UAVs' size, weight, and power
(SWAP) constraints, controllable mobility, and line-of-sight (LoS) air-ground
channels, UAV-enabled ISAC introduces both new opportunities and challenges.
This article provides an overview of UAV-enabled ISAC, and proposes various
solutions for optimizing the S&C performance. In particular, we first introduce
UAV-enabled joint S&C, and discuss UAV motion control, wireless resource
allocation, and interference management for the cases of single and multiple
UAVs. Then, we present two application scenarios for exploiting the synergy
between S&C, namely sensing-assisted UAV communication and
communication-assisted UAV sensing. Finally, we highlight several interesting
research directions to guide and motivate future work.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 15:59:34 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 06:42:19 GMT"
},
{
"version": "v3",
"created": "Fri, 19 May 2023 13:59:56 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Meng",
"Kaitao",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Xu",
"Jie",
""
],
[
"Chen",
"Wen",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Schober",
"Robert",
""
],
[
"Swindlehurst",
"A. Lee",
""
]
] |
new_dataset
| 0.997291 |
2206.11987
|
Aditya Kulkarni
|
Aditya Kulkarni, Min Kim, Joydeep Bhattacharya, Jayanta Bhattacharya
|
Businesses in high-income zip codes often saw sharper visit reductions
during the COVID-19 pandemic
|
18 pages, 6 figures, 3 tables
| null | null | null |
cs.CY cs.DB cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
As the COVID-19 pandemic unfolded, the mobility patterns of people worldwide
changed drastically. While travel time, costs, and trip convenience have always
influenced mobility, the risk of infection and policy actions such as lockdowns
and stay-at-home orders emerged as new factors to consider in the
location-visitation calculus. We use SafeGraph mobility data from Minnesota,
USA, to demonstrate that businesses (especially those requiring extended indoor
visits) located in affluent zip codes witnessed sharper reductions in visits
(relative to pre-pandemic times) outside of the lockdown periods than their
poorer counterparts. To the extent visits translate into sales, we contend that
post-pandemic recovery efforts should prioritize relief funding, keeping the
losses relating to diminished visits in mind.
|
[
{
"version": "v1",
"created": "Thu, 23 Jun 2022 21:26:33 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 19:29:27 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Kulkarni",
"Aditya",
""
],
[
"Kim",
"Min",
""
],
[
"Bhattacharya",
"Joydeep",
""
],
[
"Bhattacharya",
"Jayanta",
""
]
] |
new_dataset
| 0.998998 |
2208.00512
|
Edgar Martinez-Moro
|
Maryam Bajalan, Edgar Martinez-Moro, Reza Sobhani, Steve Szabo and
Gulsum Gozde Yilmazguc
|
On the structure of repeated-root polycyclic codes over local rings
| null | null | null | null |
cs.IT math.IT math.RA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper provides the Generalized Mattson-Solomon polynomial for
repeated-root polycyclic codes over local rings, which gives an explicit
decomposition of such codes in terms of idempotents and completes the
single-root study. It also states some structural properties of repeated-root polycyclic
codes over finite fields in terms of matrix product codes. Both approaches
provide a description of the $\perp_0$-dual code of a given polycyclic code.
|
[
{
"version": "v1",
"created": "Sun, 31 Jul 2022 20:28:14 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Jan 2023 21:50:48 GMT"
},
{
"version": "v3",
"created": "Fri, 19 May 2023 15:39:55 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Bajalan",
"Maryam",
""
],
[
"Martinez-Moro",
"Edgar",
""
],
[
"Sobhani",
"Reza",
""
],
[
"Szabo",
"Steve",
""
],
[
"Yilmazguc",
"Gulsum Gozde",
""
]
] |
new_dataset
| 0.975493 |
2208.04987
|
Yunpeng Liu
|
Yunpeng Liu, Jonathan Wilder Lavington, Adam Scibior, Frank Wood
|
Vehicle Type Specific Waypoint Generation
| null |
2022 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS)
| null | null |
cs.AI cs.HC cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We develop a generic mechanism for generating vehicle-type specific sequences
of waypoints from a probabilistic foundation model of driving behavior. Many
foundation behavior models are trained on data that does not include vehicle
information, which limits their utility in downstream applications such as
planning. Our novel methodology conditionally specializes such a behavior
predictive model to a vehicle-type by utilizing byproducts of the reinforcement
learning algorithms used to produce vehicle specific controllers. We show how
to compose a vehicle specific value function estimate with a generic
probabilistic behavior model to generate vehicle-type specific waypoint
sequences that are more likely to be physically plausible than their
vehicle-agnostic counterparts.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 18:29:00 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Liu",
"Yunpeng",
""
],
[
"Lavington",
"Jonathan Wilder",
""
],
[
"Scibior",
"Adam",
""
],
[
"Wood",
"Frank",
""
]
] |
new_dataset
| 0.988967 |
2212.03000
|
Yonghui Wu
|
Zehao Yu, Xi Yang, Chong Dang, Prakash Adekkanattu, Braja Gopal Patra,
Yifan Peng, Jyotishman Pathak, Debbie L. Wilson, Ching-Yuan Chang, Wei-Hsuan
Lo-Ciganic, Thomas J. George, William R. Hogan, Yi Guo, Jiang Bian, Yonghui
Wu
|
SODA: A Natural Language Processing Package to Extract Social
Determinants of Health for Cancer Studies
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objective: We aim to develop an open-source natural language processing (NLP)
package, SODA (i.e., SOcial DeterminAnts), with pre-trained transformer models
to extract social determinants of health (SDoH) for cancer patients, examine
the generalizability of SODA to a new disease domain (i.e., opioid use), and
evaluate the extraction rate of SDoH using cancer populations.
Methods: We identified SDoH categories and attributes and developed an SDoH
corpus using clinical notes from a general cancer cohort. We compared four
transformer-based NLP models to extract SDoH, examined the generalizability of
NLP models to a cohort of patients prescribed with opioids, and explored
customization strategies to improve performance. We applied the best NLP model
to extract 19 categories of SDoH from the breast (n=7,971), lung (n=11,804),
and colorectal cancer (n=6,240) cohorts.
Results and Conclusion: We developed a corpus of 629 cancer patients' notes
with annotations of 13,193 SDoH concepts/attributes from 19 categories of SDoH.
The Bidirectional Encoder Representations from Transformers (BERT) model
achieved the best strict/lenient F1 scores of 0.9216 and 0.9441 for SDoH
concept extraction, 0.9617 and 0.9626 for linking attributes to SDoH concepts.
Fine-tuning the NLP models using new annotations from opioid use patients
improved the strict/lenient F1 scores from 0.8172/0.8502 to 0.8312/0.8679. The
extraction rates among 19 categories of SDoH varied greatly, where 10 SDoH
could be extracted from >70% of cancer patients, but 9 SDoH had a low
extraction rate (<70% of cancer patients). The SODA package with pre-trained
transformer models is publicly available at
https://github.com/uf-hobiinformatics-lab/SDoH_SODA.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2022 14:23:38 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 18:39:20 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Yu",
"Zehao",
""
],
[
"Yang",
"Xi",
""
],
[
"Dang",
"Chong",
""
],
[
"Adekkanattu",
"Prakash",
""
],
[
"Patra",
"Braja Gopal",
""
],
[
"Peng",
"Yifan",
""
],
[
"Pathak",
"Jyotishman",
""
],
[
"Wilson",
"Debbie L.",
""
],
[
"Chang",
"Ching-Yuan",
""
],
[
"Lo-Ciganic",
"Wei-Hsuan",
""
],
[
"George",
"Thomas J.",
""
],
[
"Hogan",
"William R.",
""
],
[
"Guo",
"Yi",
""
],
[
"Bian",
"Jiang",
""
],
[
"Wu",
"Yonghui",
""
]
] |
new_dataset
| 0.983174 |
2302.02231
|
Kian Ahrabian
|
Kian Ahrabian, Xinwei Du, Richard Delwin Myloth, Arun Baalaaji Sankar
Ananthan, Jay Pujara
|
PubGraph: A Large-Scale Scientific Knowledge Graph
|
17 Pages, 6 Figures, 9 Tables
| null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Research publications are the primary vehicle for sharing scientific progress
in the form of new discoveries, methods, techniques, and insights.
Unfortunately, the lack of a large-scale, comprehensive, and easy-to-use
resource capturing the myriad relationships between publications, their
authors, and venues presents a barrier to applications for gaining a deeper
understanding of science. In this paper, we present PubGraph, a new resource
for studying scientific progress that takes the form of a large-scale knowledge
graph (KG) with more than 385M entities, 13B main edges, and 1.5B qualifier
edges. PubGraph is comprehensive and unifies data from various sources,
including Wikidata, OpenAlex, and Semantic Scholar, using the Wikidata
ontology. Beyond the metadata available from these sources, PubGraph includes
outputs from auxiliary community detection algorithms and large language
models. To further support studies on reasoning over scientific networks, we
create several large-scale benchmarks extracted from PubGraph for the core task
of knowledge graph completion (KGC). These benchmarks present many challenges
for knowledge graph embedding models, including an adversarial community-based
KGC evaluation setting, zero-shot inductive learning, and large-scale learning.
All of the aforementioned resources are accessible at https://pubgraph.isi.edu/
and released under the CC-BY-SA license. We plan to update PubGraph quarterly
to accommodate the release of new publications.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 20:03:55 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 04:56:47 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Ahrabian",
"Kian",
""
],
[
"Du",
"Xinwei",
""
],
[
"Myloth",
"Richard Delwin",
""
],
[
"Ananthan",
"Arun Baalaaji Sankar",
""
],
[
"Pujara",
"Jay",
""
]
] |
new_dataset
| 0.999451 |
2302.10182
|
Stefan Gaugel
|
Stefan Gaugel, Manfred Reichert
|
PrecTime: A Deep Learning Architecture for Precise Time Series
Segmentation in Industrial Manufacturing Operations
|
Preprint
|
Engineering Applications of Artificial Intelligence, Volume 122,
2023
|
10.1016/j.engappai.2023.106078
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The fourth industrial revolution creates ubiquitous sensor data in production
plants. To generate maximum value out of these data, reliable and precise time
series-based machine learning methods like temporal neural networks are needed.
This paper proposes a novel sequence-to-sequence deep learning architecture for
time series segmentation called PrecTime which tries to combine the concepts
and advantages of sliding window and dense labeling approaches. The
general-purpose architecture is evaluated on a real-world industry dataset
containing the End-of-Line testing sensor data of hydraulic pumps. We are able
to show that PrecTime outperforms five implemented state-of-the-art baseline
networks based on multiple metrics. The achieved segmentation accuracy of
around 96% shows that PrecTime can achieve results close to human intelligence
in operational state segmentation within a testing cycle.
|
[
{
"version": "v1",
"created": "Fri, 27 Jan 2023 12:47:27 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Gaugel",
"Stefan",
""
],
[
"Reichert",
"Manfred",
""
]
] |
new_dataset
| 0.998638 |
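PrecTime combines sliding windows with dense (per-timestep) labeling. As a hypothetical illustration of the input/target pairing that implies — independent of the architecture itself — all names and shapes below are made up:

```python
import numpy as np

def windowed_dense_pairs(signal, labels, window, stride):
    """Slice a multichannel series into windows, keeping a label per timestep.

    signal: (T, C) sensor array; labels: (T,) operational state per timestep.
    Returns (N, window, C) inputs and (N, window) dense targets.
    """
    xs, ys = [], []
    for start in range(0, len(signal) - window + 1, stride):
        xs.append(signal[start:start + window])
        ys.append(labels[start:start + window])
    return np.stack(xs), np.stack(ys)

X, Y = windowed_dense_pairs(np.random.randn(1000, 3), np.random.randint(0, 4, 1000), 128, 64)
print(X.shape, Y.shape)  # (14, 128, 3) (14, 128)
```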
2303.09113
|
Srivatsan Sridhar
|
Lucianna Kiffer, Joachim Neu, Srivatsan Sridhar, Aviv Zohar, David Tse
|
Security of Nakamoto Consensus under Congestion
| null | null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nakamoto consensus (NC) powers major proof-of-work (PoW) and proof-of-stake
(PoS) blockchains such as Bitcoin or Cardano. Given a network of nodes with
certain communication and computation capacities, against what fraction of
adversarial power (the resilience) is Nakamoto consensus secure for a given
block production rate? Prior security analyses of NC used a bounded delay model
which does not capture network congestion resulting from high block production
rates, bursty release of adversarial blocks, and in PoS, spamming due to
equivocations. For PoW, we find a new attack, called teasing attack, that
exploits congestion to increase the time taken to download and verify blocks,
thereby succeeding at lower adversarial power than the private attack which was
deemed to be the worst-case attack in prior analysis. By adopting a bounded
bandwidth model to capture congestion, and through an improved analysis method,
we identify the resilience of PoW NC for a given block production rate. In PoS,
we augment our attack with equivocations to further increase congestion, making
the vanilla PoS NC protocol insecure against any adversarial power except at
very low block production rates. To counter equivocation spamming in PoS, we
present a new NC-style protocol Sanitizing PoS (SaPoS) which achieves the same
resilience as PoW NC.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 07:00:34 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 01:19:17 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Kiffer",
"Lucianna",
""
],
[
"Neu",
"Joachim",
""
],
[
"Sridhar",
"Srivatsan",
""
],
[
"Zohar",
"Aviv",
""
],
[
"Tse",
"David",
""
]
] |
new_dataset
| 0.995116 |
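For intuition about the private attack the authors compare against, here is a toy Monte-Carlo race between honest and adversarial Poisson block production; this idealization ignores delay and bandwidth entirely and is far simpler than the paper's bounded-bandwidth model.

```python
import numpy as np

def private_attack_success(adv_frac, total_rate=1.0, horizon=10000.0, trials=200, rng=None):
    """Fraction of trials where the privately mined adversarial chain ends up
    longer than the honest chain after a long horizon (toy model: two
    independent Poisson processes splitting the total block production rate)."""
    rng = rng or np.random.default_rng(0)
    wins = 0
    for _ in range(trials):
        honest = rng.poisson(total_rate * (1 - adv_frac) * horizon)
        adv = rng.poisson(total_rate * adv_frac * horizon)
        wins += adv > honest
    return wins / trials

for f in (0.3, 0.45, 0.55):
    print(f, private_attack_success(f))  # jumps from ~0 to ~1 across the 50% mark
```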
2303.17343
|
Boya Wang
|
Boya Wang, Wouter Lueks, Justinas Sukaitis, Vincent Graf Narbel,
Carmela Troncoso
|
Not Yet Another Digital ID: Privacy-preserving Humanitarian Aid
Distribution
|
Full version with proofs corresponding to accepted IEEE S&P 2023
conference version
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humanitarian aid-distribution programs help bring physical goods to people in
need. Traditional paper-based solutions to support aid distribution do not
scale to large populations and are hard to secure. Existing digital solutions
solve these issues, at the cost of collecting large amount of personal
information. This lack of privacy can endanger recipients' safety and harm
their dignity. In collaboration with the International Committee of the Red
Cross, we build a safe digital aid-distribution system. We first systematize
the requirements such a system should satisfy. We then propose a decentralized
solution based on the use of tokens that fulfills the needs of humanitarian
organizations. It provides scalability and strong accountability, and, by
design, guarantees the recipients' privacy. We provide two instantiations of
our design, on a smart card and on a smartphone. We formally prove the security
and privacy properties of these solutions, and empirically show that they can
operate at scale.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 12:53:38 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 12:26:25 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Wang",
"Boya",
""
],
[
"Lueks",
"Wouter",
""
],
[
"Sukaitis",
"Justinas",
""
],
[
"Narbel",
"Vincent Graf",
""
],
[
"Troncoso",
"Carmela",
""
]
] |
new_dataset
| 0.984065 |
2305.08592
|
Yu Pei
|
Yu Pei (1), Jeongju Sohn (1), Sarra Habchi (2), Mike Papadakis (1)
((1) University of Luxembourg, (2) Ubisoft)
|
Time-based Repair for Asynchronous Wait Flaky Tests in Web Testing
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Asynchronous waits are one of the most prevalent root causes of flaky tests
and a major time-influential factor of web application testing. To investigate
the characteristics of asynchronous wait flaky tests and their fixes in web
testing, we build a dataset of 49 reproducible flaky tests caused by
asynchronous waits, drawn from 26 open-source projects, along with their corresponding
developer-written fixes. Our study of these flaky tests reveals that in
approximately 63% of them (31 out of 49), developers addressed Asynchronous
Wait flaky tests by adapting the wait time, even for cases where the root
causes lie elsewhere. Based on this finding, we propose TRaf, an automated
time-based repair method for asynchronous wait flaky tests in web applications.
TRaf tackles the flakiness issues by suggesting a proper waiting time for each
asynchronous call in a web application, using code similarity and past change
history. The core insight is that as developers often make similar mistakes
more than once, hints for the efficient wait time exist in the current or past
codebase. Our analysis shows that TRaf can suggest a shorter wait time to
resolve the test flakiness compared to developer-written fixes, reducing the
test execution time by 11.1%. With additional dynamic tuning of the new wait
time, TRaf further reduces the execution time by 20.2%.
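A rough sketch of the core insight follows: hints for a good wait time can be
mined from similar, already-fixed calls. The helper names, the Jaccard token
similarity, and the 0.5 threshold are all hypothetical stand-ins, not TRaf's
actual implementation.

```python
# Illustrative sketch (not TRaf itself): suggest a wait time for an
# asynchronous call by looking up similar calls whose wait times were
# already fixed, using simple token-overlap similarity.
def token_similarity(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)  # Jaccard overlap

def suggest_wait_time(call_snippet: str,
                      history: list[tuple[str, float]],
                      default_ms: float = 1000.0) -> float:
    """history: (snippet, wait_ms) pairs mined from past fixes."""
    scored = [(token_similarity(call_snippet, s), w) for s, w in history]
    best_sim, best_wait = max(scored, default=(0.0, default_ms))
    return best_wait if best_sim > 0.5 else default_ms

history = [("await page.waitForSelector('#cart')", 800.0),
           ("await page.waitForSelector('#login')", 400.0)]
print(suggest_wait_time("await page.waitForSelector('#cart-item')", history))
```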
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 12:17:30 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 17:04:51 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Pei",
"Yu",
"",
"University of Luxembourg"
],
[
"Sohn",
"Jeongju",
"",
"University of Luxembourg"
],
[
"Habchi",
"Sarra",
"",
"Ubisoft"
],
[
"Papadakis",
"Mike",
"",
"University of Luxembourg"
]
] |
new_dataset
| 0.998652 |
2305.08703
|
Ningyu Zhang
|
Hongbin Ye, Honghao Gui, Xin Xu, Huajun Chen, Ningyu Zhang
|
Schema-adaptable Knowledge Graph Construction
|
Work in progress
| null | null | null |
cs.CL cs.AI cs.DB cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional Knowledge Graph Construction (KGC) approaches typically follow
the static information extraction paradigm with a closed set of pre-defined
schema. As a result, such approaches fall short when applied to dynamic
scenarios or domains where new types of knowledge emerge. This
necessitates a system that can handle evolving schema automatically to extract
information for KGC. To address this need, we propose a new task called
schema-adaptable KGC, which aims to continually extract entity, relation, and
event based on a dynamically changing schema graph without re-training. We
first split and convert existing datasets based on three principles to build a
benchmark, i.e., horizontal schema expansion, vertical schema expansion, and
hybrid schema expansion; then investigate the schema-adaptable performance of
several well-known approaches such as Text2Event, TANL, UIE and GPT-3.5. We
further propose a simple yet effective baseline dubbed AdaKGC, which contains
schema-enriched prefix instructor and schema-conditioned dynamic decoding to
better handle evolving schema. Comprehensive experimental results illustrate
that AdaKGC outperforms the baselines but still has room for improvement. We
hope the proposed work can benefit the community. Code and datasets
will be available at https://github.com/zjunlp/AdaKGC.
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 15:06:20 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 08:59:33 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Ye",
"Hongbin",
""
],
[
"Gui",
"Honghao",
""
],
[
"Xu",
"Xin",
""
],
[
"Chen",
"Huajun",
""
],
[
"Zhang",
"Ningyu",
""
]
] |
new_dataset
| 0.989555 |
2305.09846
|
Zihao He
|
Zihao He, Jonathan May, Kristina Lerman
|
CPL-NoViD: Context-Aware Prompt-based Learning for Norm Violation
Detection in Online Communities
| null | null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting norm violations in online communities is critical to maintaining
healthy and safe spaces for online discussions. Existing machine learning
approaches often struggle to adapt to the diverse rules and interpretations
across different communities due to the inherent challenges of fine-tuning
models for such context-specific tasks. In this paper, we introduce
Context-aware Prompt-based Learning for Norm Violation Detection (CPL-NoViD), a
novel method that employs prompt-based learning to detect norm violations
across various types of rules. CPL-NoViD outperforms the baseline by
incorporating context through natural language prompts and demonstrates
improved performance across different rule types. Significantly, it not only
excels in cross-rule-type and cross-community norm violation detection but also
exhibits adaptability in few-shot learning scenarios. Most notably, it
establishes a new state-of-the-art in norm violation detection, surpassing
existing benchmarks. Our work highlights the potential of prompt-based learning
for context-sensitive norm violation detection and paves the way for future
research on more adaptable, context-aware models to better support online
community moderators.
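As a minimal sketch of what a context-aware natural-language prompt could look
like for this task, consider the template below; the exact wording, field
layout, and [MASK] placement used by CPL-NoViD are not given in the abstract,
so this template is a hypothetical stand-in.

```python
# Hypothetical context-aware prompt builder for norm-violation detection:
# the community rule and conversation context are verbalized, and the
# model fills the final [MASK] slot with a yes/no judgment.
def build_prompt(community: str, rule: str,
                 context: list[str], comment: str) -> str:
    thread = "\n".join(f"- {c}" for c in context)
    return (
        f"In the r/{community} community, the rule is: \"{rule}\".\n"
        f"Conversation so far:\n{thread}\n"
        f"Comment: \"{comment}\"\n"
        f"Does this comment violate the rule? Answer yes or no: [MASK]"
    )

prompt = build_prompt(
    community="science",
    rule="No medical advice",
    context=["What dose should I take?", "Ask your doctor."],
    comment="Just take double the dose, it's fine.",
)
print(prompt)
```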
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 23:27:59 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 18:33:01 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"He",
"Zihao",
""
],
[
"May",
"Jonathan",
""
],
[
"Lerman",
"Kristina",
""
]
] |
new_dataset
| 0.994799 |
2305.11067
|
Ze Jin
|
Ze Jin, Zorina Song
|
Generating coherent comic with rich story using ChatGPT and Stable
Diffusion
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Past work demonstrated that using neural networks, we can extend unfinished
music pieces while maintaining the music style of the musician. With recent
advancements in large language models and diffusion models, we are now capable
of generating comics with an interesting storyline while maintaining the art
style of the artist. In this paper, we use ChatGPT to generate storylines and
dialogue and then generate the comic using Stable Diffusion. We introduce a
novel way to evaluate AI-generated stories, and we achieve SOTA performance on
character fidelity and art style by fine-tuning Stable Diffusion using LoRA,
ControlNet, etc.
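A high-level sketch of such a two-stage pipeline is shown below, assuming the
openai Python client (pre-1.0 interface) and the diffusers library; the model
identifiers are placeholders, and the LoRA/ControlNet style fine-tuning the
authors apply is omitted for brevity.

```python
# Sketch of the two-stage pipeline: an LLM drafts panel descriptions,
# then a diffusion model renders each panel. Assumes OPENAI_API_KEY is
# set in the environment and openai < 1.0.
import openai
from diffusers import StableDiffusionPipeline

def draft_panels(premise: str, n_panels: int = 4) -> list[str]:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Write {n_panels} one-sentence comic panel "
                              f"descriptions for: {premise}"}],
    )
    text = resp["choices"][0]["message"]["content"]
    return [line.strip("- ") for line in text.splitlines() if line.strip()]

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
for i, panel in enumerate(draft_panels("a robot learns to paint")):
    image = pipe(panel).images[0]        # one rendered comic panel
    image.save(f"panel_{i}.png")
```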
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 13:11:45 GMT"
},
{
"version": "v2",
"created": "Fri, 19 May 2023 02:04:56 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Jin",
"Ze",
""
],
[
"Song",
"Zorina",
""
]
] |
new_dataset
| 0.952577 |
2305.11196
|
Jingyu Zhao
|
Chu Wu, Jingyu Zhao, Qiaomu Hu, Rui Zeng, Minming Zhang
|
Non-volatile Reconfigurable Digital Optical Diffractive Neural Network
Based on Phase Change Material
| null | null | null | null |
cs.ET eess.SP physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical diffractive neural networks have triggered extensive research thanks to
their low power consumption and high speed in image processing. In this work,
we propose a reconfigurable digital all-optical diffractive neural network
(R-ODNN) structure. The optical neurons are built with Sb2Se3 phase-change
material, making our network reconfigurable, digital, and non-volatile. Using
three digital diffractive layers with 14,400 neurons on each and 10
photodetectors connected to a resistor network, our model achieves 94.46%
accuracy for handwritten digit recognition. We also performed full-vector
simulations and discussed the impact of errors to demonstrate the feasibility
and robustness of the R-ODNN.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 14:04:37 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Wu",
"Chu",
""
],
[
"Zhao",
"Jingyu",
""
],
[
"Hu",
"Qiaomu",
""
],
[
"Zeng",
"Rui",
""
],
[
"Zhang",
"Minming",
""
]
] |
new_dataset
| 0.990985 |
2305.11206
|
Omer Levy
|
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning
Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike
Lewis, Luke Zettlemoyer, Omer Levy
|
LIMA: Less Is More for Alignment
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models are trained in two stages: (1) unsupervised pretraining
from raw text, to learn general-purpose representations, and (2) large-scale
instruction tuning and reinforcement learning, to better align to end tasks and
user preferences. We measure the relative importance of these two stages by
training LIMA, a 65B parameter LLaMa language model fine-tuned with the
standard supervised loss on only 1,000 carefully curated prompts and responses,
without any reinforcement learning or human preference modeling. LIMA
demonstrates remarkably strong performance, learning to follow specific
response formats from only a handful of examples in the training data,
including complex queries that range from planning trip itineraries to
speculating about alternate history. Moreover, the model tends to generalize
well to unseen tasks that did not appear in the training data. In a controlled
human study, responses from LIMA are either equivalent or strictly preferred to
GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard
and 65% versus DaVinci003, which was trained with human feedback. Taken
together, these results strongly suggest that almost all knowledge in large
language models is learned during pretraining, and only limited instruction
tuning data is necessary to teach models to produce high quality output.
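A minimal sketch of this recipe, plain supervised fine-tuning of a pretrained
LM on a small curated prompt-response set with no reinforcement learning,
might look as follows with Hugging Face Transformers; the base checkpoint,
data file, jsonl schema, and hyperparameters are placeholders rather than the
paper's exact setup.

```python
# Sketch: standard supervised LM fine-tuning on ~1,000 curated examples.
# Training a 7B+ model this way needs substantial GPU memory in practice.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
tok.pad_token = tok.eos_token  # LLaMA has no pad token by default
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

def tokenize(ex):  # assumes {"prompt": ..., "response": ...} records
    return tok(ex["prompt"] + "\n" + ex["response"] + tok.eos_token,
               truncation=True, max_length=2048)

data = load_dataset("json", data_files="curated_1k.jsonl")["train"]
data = data.map(tokenize, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lima-sft", num_train_epochs=15,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```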
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 17:45:22 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Zhou",
"Chunting",
""
],
[
"Liu",
"Pengfei",
""
],
[
"Xu",
"Puxin",
""
],
[
"Iyer",
"Srini",
""
],
[
"Sun",
"Jiao",
""
],
[
"Mao",
"Yuning",
""
],
[
"Ma",
"Xuezhe",
""
],
[
"Efrat",
"Avia",
""
],
[
"Yu",
"Ping",
""
],
[
"Yu",
"Lili",
""
],
[
"Zhang",
"Susan",
""
],
[
"Ghosh",
"Gargi",
""
],
[
"Lewis",
"Mike",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Levy",
"Omer",
""
]
] |
new_dataset
| 0.996919 |
2305.11267
|
Dejan Vukobratovic
|
Srdjan Sobot, Milan Lukic, Dusan Bortnik, Vladimir Nikic, Brena Lima,
Marko Beko, Dejan Vukobratovic
|
Two-Tier UAV-based Low Power Wide Area Networks: A Testbed and
Experimentation Study
|
This paper presents an extended version of the solution presented by
a joint University of Novi Sad and Lusofona University team at the IEEE
Vehicular Technology Society UAV Innovation Challenge. The team won the first
prize at both the first and the second (final) competition stage
|
6th Conference on Cloud and Internet of Things, March 20-22, 2023,
Lusofona University (Lisbon, Portugal)
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose, design, deploy and demonstrate a two-tier Low
Power Wide Area Network (LP WAN) system based on Unmanned Aerial Vehicle (UAV)
base stations suitable for dynamic deployment in deep rural environments. The
proposed UAV-based LP WAN network augments the existing macro-cellular LP WAN
network (Tier 1) with an additional layer of mobile base stations (Tier 2)
based also on LP WAN technology. Mobile Tier 2 LP WAN base stations provide
connectivity to static or mobile LP WAN user equipment deployed in the areas
without direct Tier 1 LP WAN network coverage. The proposed two-tier LP WAN
network scenario is suitable for various agricultural, forestry and
environmental applications such as livestock or wild animal monitoring. In this
experimental work, we report the prototype that was successfully deployed and
used in a real-world deep rural environment without Tier 1 LP WAN network
coverage.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 19:20:21 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Sobot",
"Srdjan",
""
],
[
"Lukic",
"Milan",
""
],
[
"Bortnik",
"Dusan",
""
],
[
"Nikic",
"Vladimir",
""
],
[
"Lima",
"Brena",
""
],
[
"Beko",
"Marko",
""
],
[
"Vukobratovic",
"Dejan",
""
]
] |
new_dataset
| 0.997689 |
2305.11293
|
Kalvin Eng
|
Kalvin Eng, Abram Hindle, and Eleni Stroulia
|
Patterns in Docker Compose Multi-Container Orchestration
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software design patterns present general code solutions to common software
design problems. Modern software systems rely heavily on containers for
organizing and orchestrating their constituent service components. Yet, despite
the prevalence of ready-to-use Docker service images that can participate in
multi-container orchestration, developers do not have much guidance on how to
develop their own multi-container Docker orchestrations. Thus in this work, we
curate a dataset of successful projects that employ Docker Compose as an
orchestration tool; then, we engage in qualitative and quantitative analysis of
Docker Compose configurations. The collection of data and analysis enables the
identification and naming of repeating patterns of deployment and orchestration
employed by numerous successful open-source projects, much like software design
patterns. These patterns highlight how software systems are orchestrated in the
wild and can give examples to anybody wishing to develop their container
orchestrations. These contributions also advance empirical research in software
engineering patterns as evidence is provided about how Docker Compose is used.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 20:32:58 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Eng",
"Kalvin",
""
],
[
"Hindle",
"Abram",
""
],
[
"Stroulia",
"Eleni",
""
]
] |
new_dataset
| 0.999673 |
2305.11301
|
Navdeep Kaur
|
Ishaan Singh and Navdeep Kaur and Garima Gaur and Mausam
|
NeuSTIP: A Novel Neuro-Symbolic Model for Link and Time Prediction in
Temporal Knowledge Graphs
|
13 pages, 2 Figures
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
While Knowledge Graph Completion (KGC) on static facts is a mature field,
Temporal Knowledge Graph Completion (TKGC), which incorporates validity time
into static facts, is still in its nascent stage. KGC methods fall into
multiple categories, including embedding-based, rule-based, GNN-based, and
pretrained language model based approaches; however, these dimensions have not
been explored for TKGs. To that end, we propose a novel temporal neuro-symbolic
model, NeuSTIP, that performs link prediction and time interval prediction in a
TKG. NeuSTIP learns temporal rules in the presence of the Allen predicates that
ensure the temporal consistency between neighboring predicates in a given rule.
We further design a unique scoring function that evaluates the confidence of
the candidate answers while performing link prediction and time interval
prediction by utilizing the learned rules. Our empirical evaluation on two time
interval based TKGC datasets suggests that our model outperforms
state-of-the-art models for both link prediction and the time interval
prediction task.
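To make the Allen-predicate machinery concrete, below is a small sketch of
interval predicates of the kind such temporal rules can build on; only a
subset of the 13 Allen relations is shown, and the rule learner and scoring
function are omitted.

```python
# Allen interval predicates: each compares two validity intervals.
from typing import Tuple

Interval = Tuple[int, int]  # (start, end), start <= end

def before(a: Interval, b: Interval) -> bool:
    return a[1] < b[0]

def meets(a: Interval, b: Interval) -> bool:
    return a[1] == b[0]

def overlaps(a: Interval, b: Interval) -> bool:
    return a[0] < b[0] < a[1] < b[1]

def during(a: Interval, b: Interval) -> bool:
    return b[0] < a[0] and a[1] < b[1]

def equals(a: Interval, b: Interval) -> bool:
    return a == b

# A temporal rule can then require, e.g., that a body fact's validity
# interval precedes the head fact's interval:
assert before((1990, 1995), (2000, 2004))
```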
|
[
{
"version": "v1",
"created": "Mon, 15 May 2023 13:46:34 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Singh",
"Ishaan",
""
],
[
"Kaur",
"Navdeep",
""
],
[
"Gaur",
"Garima",
""
],
[
"Mausam",
"",
""
]
] |
new_dataset
| 0.99956 |
2305.11349
|
Amila Silva
|
Amila Silva, Ling Luo, Shanika Karunasekera, Christopher Leckie
|
Unsupervised Domain-agnostic Fake News Detection using Multi-modal Weak
Signals
|
15 pages
| null | null | null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of social media as one of the main platforms for people to
access news has enabled the wide dissemination of fake news. This has motivated
numerous studies on automating fake news detection. Although there have been
limited attempts at unsupervised fake news detection, their performance suffers
due to not exploiting the knowledge from various modalities related to news
records and due to the presence of various latent biases in the existing news
datasets. To address these limitations, this work proposes an effective
framework for unsupervised fake news detection, which first embeds the
knowledge available in four modalities in news records and then proposes a
novel noise-robust self-supervised learning technique to identify the veracity
of news records from the multi-modal embeddings. Also, we propose a novel
technique to construct news datasets minimizing the latent biases in existing
news datasets. Following the proposed approach for dataset construction, we
produce a Large-scale Unlabelled News Dataset consisting of 419,351 news
articles related to COVID-19, dubbed LUND-COVID. We train the proposed
unsupervised framework using LUND-COVID to exploit the potential of large
datasets, and evaluate it using a set of existing labelled datasets. Our
results show that the proposed unsupervised framework largely outperforms
existing unsupervised baselines for different tasks such as multi-modal fake
news detection, fake news early detection and few-shot fake news detection,
while yielding notable improvements for unseen domains during training.
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 23:49:31 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Silva",
"Amila",
""
],
[
"Luo",
"Ling",
""
],
[
"Karunasekera",
"Shanika",
""
],
[
"Leckie",
"Christopher",
""
]
] |
new_dataset
| 0.981665 |
2305.11355
|
Jacob Eisenstein
|
Jacob Eisenstein, Vinodkumar Prabhakaran, Clara Rivera, Dorottya
Demszky, Devyani Sharma
|
MD3: The Multi-Dialect Dataset of Dialogues
|
InterSpeech 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new dataset of conversational speech representing English from
India, Nigeria, and the United States. The Multi-Dialect Dataset of Dialogues
(MD3) strikes a new balance between open-ended conversational speech and
task-oriented dialogue by prompting participants to perform a series of short
information-sharing tasks. This facilitates quantitative cross-dialectal
comparison, while avoiding the imposition of a restrictive task structure that
might inhibit the expression of dialect features. Preliminary analysis of the
dataset reveals significant differences in syntax and in the use of discourse
markers. The dataset, which will be made publicly available with the
publication of this paper, includes more than 20 hours of audio and more than
200,000 orthographically-transcribed tokens.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 00:14:10 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Eisenstein",
"Jacob",
""
],
[
"Prabhakaran",
"Vinodkumar",
""
],
[
"Rivera",
"Clara",
""
],
[
"Demszky",
"Dorottya",
""
],
[
"Sharma",
"Devyani",
""
]
] |
new_dataset
| 0.99986 |
2305.11392
|
Xiameng Qin
|
Mingliang Zhai, Yulin Li, Xiameng Qin, Chen Yi, Qunyi Xie, Chengquan
Zhang, Kun Yao, Yuwei Wu, Yunde Jia
|
Fast-StrucTexT: An Efficient Hourglass Transformer with Modality-guided
Dynamic Token Merge for Document Understanding
|
IJCAI 2023
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers achieve promising performance in document understanding because
of their high effectiveness, but they suffer from quadratic computational
complexity in the sequence length. General-purpose efficient transformers are
difficult to adapt directly to document modeling: they cannot handle the layout
representation of documents (e.g., word, line, and paragraph) at different
granularity levels, and they struggle to achieve a good trade-off
between efficiency and performance. To tackle these concerns, we propose
Fast-StrucTexT, an efficient multi-modal framework based on the StrucTexT
algorithm with an hourglass transformer architecture, for visual document
understanding. Specifically, we design a modality-guided dynamic token merging
block that lets the model learn multi-granularity representations and prunes
redundant tokens. Additionally, we present a multi-modal interaction module
called Symmetry Cross Attention (SCA) to consider multi-modal fusion and
efficiently guide the token merging. The SCA uses one modality's input as the
query to compute cross attention with the other modality in a dual phase.
Extensive experiments on FUNSD, SROIE, and CORD datasets demonstrate that our
model achieves the state-of-the-art performance and almost 1.9X faster
inference time than the state-of-the-art methods.
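A minimal sketch of a symmetric, dual-phase cross-attention block in the
spirit of SCA is given below; the dimensions, head count, and residual fusion
are illustrative assumptions, and the hourglass layout and token-merge
guidance are not reproduced.

```python
# Sketch: visual tokens attend to text tokens and vice versa (dual phase).
import torch
import torch.nn as nn

class SymmetryCrossAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.v2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2v = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis, txt):
        # Phase 1: vision queries text; Phase 2: text queries vision.
        vis_out, _ = self.v2t(query=vis, key=txt, value=txt)
        txt_out, _ = self.t2v(query=txt, key=vis, value=vis)
        return vis + vis_out, txt + txt_out  # residual fusion

sca = SymmetryCrossAttention(dim=256)
vis, txt = torch.randn(2, 100, 256), torch.randn(2, 50, 256)
v, t = sca(vis, txt)
print(v.shape, t.shape)  # (2, 100, 256) and (2, 50, 256)
```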
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 02:42:35 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Zhai",
"Mingliang",
""
],
[
"Li",
"Yulin",
""
],
[
"Qin",
"Xiameng",
""
],
[
"Yi",
"Chen",
""
],
[
"Xie",
"Qunyi",
""
],
[
"Zhang",
"Chengquan",
""
],
[
"Yao",
"Kun",
""
],
[
"Wu",
"Yuwei",
""
],
[
"Jia",
"Yunde",
""
]
] |
new_dataset
| 0.98795 |
2305.11423
|
Adiwena Putra
|
Adiwena Putra, Prasetiyo, Yi Chen, John Kim, Joo-Young Kim
|
Strix: An End-to-End Streaming Architecture with Two-Level Ciphertext
Batching for Fully Homomorphic Encryption with Programmable Bootstrapping
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Homomorphic encryption (HE) enables computations on encrypted data by
concealing information under noise for security. However, the process of
bootstrapping, which resets the noise level in the ciphertext, is
computationally expensive and requires a large bootstrapping key. The TFHE
scheme offers a faster and programmable bootstrapping algorithm called PBS,
crucial for security-focused applications like machine learning. Nevertheless,
the current TFHE scheme lacks support for ciphertext packing, resulting in low
throughput. This work thoroughly analyzes TFHE bootstrapping, identifies the
bottleneck in GPUs caused by the blind rotation fragmentation problem, and
proposes a hardware TFHE accelerator called Strix. Strix introduces a two-level
batching approach to enhance the batch size in PBS, utilizes a specialized
microarchitecture for efficient streaming data processing, and incorporates a
fully-pipelined FFT microarchitecture to improve performance. It achieves
significantly higher throughput than state-of-the-art implementations on both
CPUs and GPUs, outperforming existing TFHE accelerators by a factor of 7.4.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 04:40:04 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Putra",
"Adiwena",
""
],
[
"Prasetiyo",
"",
""
],
[
"Chen",
"Yi",
""
],
[
"Kim",
"John",
""
],
[
"Kim",
"Joo-Young",
""
]
] |
new_dataset
| 0.984648 |
2305.11444
|
Hiroki Ouchi
|
Hiroki Ouchi, Hiroyuki Shindo, Shoko Wakamiya, Yuki Matsuda, Naoya
Inoue, Shohei Higashiyama, Satoshi Nakamura, Taro Watanabe
|
Arukikata Travelogue Dataset
|
The application website for Arukikata Travelogue Dataset:
https://www.nii.ac.jp/dsc/idr/arukikata/
| null | null | null |
cs.CL cs.AI cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have constructed the Arukikata Travelogue Dataset and released it free of
charge for academic research. This dataset is a Japanese text dataset with a
total of over 31 million words, comprising 4,672 Japanese domestic travelogues
and 9,607 overseas travelogues. Before we provided our dataset, there was a
scarcity of widely available travelogue data for research purposes, and each
researcher had to prepare their own data. This hindered the replication of
existing studies and fair comparative analysis of experimental results. Our
dataset enables any researcher to conduct investigations on the same data and
ensures transparency and reproducibility in research. In this paper, we
describe the academic significance, characteristics, and prospects of our
dataset.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 05:53:49 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Ouchi",
"Hiroki",
""
],
[
"Shindo",
"Hiroyuki",
""
],
[
"Wakamiya",
"Shoko",
""
],
[
"Matsuda",
"Yuki",
""
],
[
"Inoue",
"Naoya",
""
],
[
"Higashiyama",
"Shohei",
""
],
[
"Nakamura",
"Satoshi",
""
],
[
"Watanabe",
"Taro",
""
]
] |
new_dataset
| 0.999783 |
2305.11513
|
Aaron Jie
|
Leiping Jie, Hui Zhang
|
When SAM Meets Shadow Detection
|
Technical Report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a promptable generic object segmentation model, segment anything model
(SAM) has recently attracted significant attention, and also demonstrates its
powerful performance. Nevertheless, it still meets its Waterloo when
encountering several tasks, e.g., medical image segmentation, camouflaged
object detection, etc. In this report, we try SAM on an unexplored popular
task: shadow detection. Specifically, we evaluate SAM on four benchmarks with
widely used metrics. The experimental results show that SAM's shadow detection
performance is not satisfactory, especially when compared with elaborate
models. Code is available at
https://github.com/LeipingJie/SAMSh.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 08:26:08 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Jie",
"Leiping",
""
],
[
"Zhang",
"Hui",
""
]
] |
new_dataset
| 0.999411 |
2305.11522
|
Heyuan Li
|
Heyuan Li, Bo Wang, Yu Cheng, Mohan Kankanhalli, Robby T. Tan
|
DSFNet: Dual Space Fusion Network for Occlusion-Robust 3D Dense Face
Alignment
|
Accepted into CVPR'23
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sensitivity to severe occlusion and large view angles limits the usage
scenarios of the existing monocular 3D dense face alignment methods. The
state-of-the-art 3DMM-based method directly regresses the model's
coefficients, underutilizing the low-level 2D spatial and semantic information,
which can actually offer cues for face shape and orientation. In this work, we
demonstrate how modeling 3D facial geometry in image and model space jointly
can solve the occlusion and view angle problems. Instead of predicting the
whole face directly, we regress image space features in the visible facial
region by dense prediction first. Subsequently, we predict our model's
coefficients based on the regressed feature of the visible regions, leveraging
the prior knowledge of whole face geometry from the morphable models to
complete the invisible regions. We further propose a fusion network that
combines the advantages of both the image and model space predictions to
achieve high robustness and accuracy in unconstrained scenarios. Thanks to the
proposed fusion module, our method is robust not only to occlusion and large
pitch and roll view angles, which is the benefit of our image space approach,
but also to noise and large yaw angles, which is the benefit of our model space
method. Comprehensive evaluations demonstrate the superior performance of our
method compared with the state-of-the-art methods. On the 3D dense face
alignment task, we achieve 3.80% NME on the AFLW2000-3D dataset, which
outperforms the state-of-the-art method by 5.5%. Code is available at
https://github.com/lhyfst/DSFNet.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 08:43:37 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Li",
"Heyuan",
""
],
[
"Wang",
"Bo",
""
],
[
"Cheng",
"Yu",
""
],
[
"Kankanhalli",
"Mohan",
""
],
[
"Tan",
"Robby T.",
""
]
] |
new_dataset
| 0.964729 |
2305.11527
|
Ningyu Zhang
|
Honghao Gui, Jintian Zhang, Hongbin Ye, Ningyu Zhang
|
InstructIE: A Chinese Instruction-based Information Extraction Dataset
|
Work in progress
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new Information Extraction (IE) task dubbed Instruction-based
IE, which aims to ask the system to follow specific instructions or guidelines
to extract information. To facilitate research in this area, we construct a
dataset called InstructIE, consisting of 270,000 weakly supervised instances from
Chinese Wikipedia and 1,000 high-quality crowdsourced annotated instances. We
further evaluate the performance of various baseline models on the InstructIE
dataset. The results reveal that although current models exhibit promising
performance, there is still room for improvement. Furthermore, we conduct a
comprehensive case study analysis, underlining the challenges inherent in the
Instruction-based IE task. Code and dataset are available at
https://github.com/zjunlp/DeepKE/tree/main/example/llm.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 08:51:11 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Gui",
"Honghao",
""
],
[
"Zhang",
"Jintian",
""
],
[
"Ye",
"Hongbin",
""
],
[
"Zhang",
"Ningyu",
""
]
] |
new_dataset
| 0.999765 |
2305.11529
|
Hafida Benhidour
|
Hanan S. Murayshid, Hafida Benhidour, Said Kerrache
|
A Sequence-to-Sequence Approach for Arabic Pronoun Resolution
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a sequence-to-sequence learning approach for Arabic
pronoun resolution, which explores the effectiveness of using advanced natural
language processing (NLP) techniques, specifically Bi-LSTM and the BERT
pre-trained Language Model, in solving the pronoun resolution problem in
Arabic. The proposed approach is evaluated on the AnATAr dataset, and its
performance is compared to several baseline models, including traditional
machine learning models and handcrafted feature-based models. Our results
demonstrate that the proposed model outperforms the baseline models, which
include KNN, logistic regression, and SVM, across all metrics. In addition, we
explore the effectiveness of various modifications to the model, including
concatenating the anaphor text beside the paragraph text as input, adding a
mask to focus on candidate scores, and filtering candidates based on gender and
number agreement with the anaphor. Our results show that these modifications
significantly improve the model's performance, achieving up to 81% MRR and
71% F1 score while also demonstrating higher precision, recall, and
accuracy. These findings suggest that the proposed model is an effective
approach to Arabic pronoun resolution and highlight the potential benefits of
leveraging advanced neural NLP models.
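The gender and number agreement filter mentioned above can be pictured with a
small sketch; the feature names and data layout are assumptions for
exposition, not the paper's actual pipeline.

```python
# Sketch: discard candidate antecedents whose morphological features
# clash with the anaphor before scoring the survivors.
def agrees(anaphor: dict, candidate: dict) -> bool:
    return (anaphor["gender"] == candidate["gender"]
            and anaphor["number"] == candidate["number"])

def filter_candidates(anaphor: dict, candidates: list[dict]) -> list[dict]:
    return [c for c in candidates if agrees(anaphor, c)]

anaphor = {"text": "هي", "gender": "f", "number": "sg"}
candidates = [{"text": "الطالبة", "gender": "f", "number": "sg"},
              {"text": "الطلاب", "gender": "m", "number": "pl"}]
print([c["text"] for c in filter_candidates(anaphor, candidates)])
```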
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 08:53:41 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Murayshid",
"Hanan S.",
""
],
[
"Benhidour",
"Hafida",
""
],
[
"Kerrache",
"Said",
""
]
] |
new_dataset
| 0.984608 |
2305.11588
|
Jingbo Zhang
|
Jingbo Zhang, Xiaoyu Li, Ziyu Wan, Can Wang, and Jing Liao
|
Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields
|
Homepage: https://eckertzhang.github.io/Text2NeRF.github.io/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-driven 3D scene generation is widely applicable to video gaming, film
industry, and metaverse applications that have a large demand for 3D scenes.
However, existing text-to-3D generation methods are limited to producing 3D
objects with simple geometries and dreamlike styles that lack realism. In this
work, we present Text2NeRF, which is able to generate a wide range of 3D scenes
with complicated geometric structures and high-fidelity textures purely from a
text prompt. To this end, we adopt NeRF as the 3D representation and leverage a
pre-trained text-to-image diffusion model to constrain the 3D reconstruction of
the NeRF to reflect the scene description. Specifically, we employ the
diffusion model to infer the text-related image as the content prior and use a
monocular depth estimation method to offer the geometric prior. Both content
and geometric priors are utilized to update the NeRF model. To guarantee
textured and geometric consistency between different views, we introduce a
progressive scene inpainting and updating strategy for novel view synthesis of
the scene. Our method requires no additional training data but only a natural
language description of the scene as the input. Extensive experiments
demonstrate that our Text2NeRF outperforms existing methods in producing
photo-realistic, multi-view consistent, and diverse 3D scenes from a variety of
natural language prompts.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 10:58:04 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Zhang",
"Jingbo",
""
],
[
"Li",
"Xiaoyu",
""
],
[
"Wan",
"Ziyu",
""
],
[
"Wang",
"Can",
""
],
[
"Liao",
"Jing",
""
]
] |
new_dataset
| 0.997371 |
2305.11590
|
Ryota Eguchi
|
Ryota Eguchi, Fukuhito Ooshita, Michiko Inoue, S\'ebastien Tixeuil
|
Meeting Times of Non-atomic Random Walks
|
18 pages, 1 figure
| null | null | null |
cs.DC cs.DM math.PR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we revisit the problem of classical \textit{meeting times} of
random walks in graphs. In the process where two tokens (called agents) perform
random walks on an undirected graph, the meeting times are defined as the
expected times until they meet when the two agents are initially located at
different vertices. A key feature of the problem is that, in each discrete
time-clock (called \textit{round}) of the process, the scheduler selects only
one of the two agents, and the agent performs one move of the random walk. In
the adversarial setting, the scheduler adopts a strategy that aims at
\textit{maximizing} the expected time to meet.
In the seminal papers \cite{collisions,israeli1990token,tetali1993simult},
for the random walks of two agents, the notion of \textit{atomicity} is
implicitly considered. That is, each move of agents should complete while the
other agent waits. In this paper, we consider and formalize the meeting time of
\textit{non-atomic} random walks. In the non-atomic random walks, we assume
that in each round, only one agent can move but the move does not necessarily
complete in the next round. In other words, we assume that an agent can move
in a round while the other agent is still moving on an edge. For the non-atomic
random walks with the adversarial schedulers, we give a polynomial upper bound
on the meeting times.
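A Monte Carlo sketch of the non-atomic meeting process on a cycle graph is
given below; note that it uses a uniformly random scheduler and a random
per-move duration, so it only illustrates the process semantics and does not
model the paper's adversarial scheduler.

```python
# Sketch: two agents on an n-cycle; each round the scheduler picks one
# agent, which either starts a new step (taking 1-3 rounds to complete,
# the non-atomic aspect) or continues an in-progress step. Meeting means
# both agents rest at the same vertex.
import random

def meeting_time_cycle(n: int, a: int, b: int, max_rounds: int = 10**6):
    pos, remaining = [a, b], [0, 0]  # rounds left to finish current move
    for t in range(1, max_rounds + 1):
        i = random.randrange(2)          # scheduler picks one agent
        if remaining[i] == 0:            # start a new random-walk step
            remaining[i] = random.randint(1, 3)   # non-atomic duration
            pos[i] = (pos[i] + random.choice([-1, 1])) % n
        else:
            remaining[i] -= 1            # move still in progress
        if pos[0] == pos[1] and remaining == [0, 0]:
            return t
    return None

trials = [t for t in (meeting_time_cycle(20, 0, 10) for _ in range(200))
          if t is not None]
print(sum(trials) / len(trials))  # empirical mean meeting time
```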
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 10:58:35 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Eguchi",
"Ryota",
""
],
[
"Ooshita",
"Fukuhito",
""
],
[
"Inoue",
"Michiko",
""
],
[
"Tixeuil",
"Sébastien",
""
]
] |
new_dataset
| 0.977154 |
2305.11592
|
Piyush Kumar Garg
|
Piyush Kumar Garg, Roshni Chakraborty, Srishti Gupta, and Sourav Kumar
Dandapat
|
IKDSumm: Incorporating Key-phrases into BERT for extractive Disaster
Tweet Summarization
| null | null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Online social media platforms, such as Twitter, are one of the most valuable
sources of information during disaster events. Therefore, humanitarian
organizations, government agencies, and volunteers rely on a summary of this
information, i.e., tweets, for effective disaster management. Although several
supervised and unsupervised approaches for automated tweet summarization exist,
these approaches either require extensive labeled information or do not
incorporate domain-specific knowledge of disasters.
Additionally, the most recent approaches to disaster summarization have
proposed BERT-based models to enhance the summary quality. To improve
performance further, we leverage domain-specific knowledge, without any human
effort, to understand the importance (salience) of a tweet, which further aids
summary creation and improves summary quality. In this
paper, we propose a disaster-specific tweet summarization framework, IKDSumm,
which initially identifies the crucial and important information from each
tweet related to a disaster through key-phrases of that tweet. We identify
these key-phrases by utilizing the domain knowledge (using an existing ontology)
of disasters without any human intervention. Further, we utilize these
key-phrases to automatically generate a summary of the tweets. Therefore, given
tweets related to a disaster, IKDSumm ensures fulfillment of the key
summarization objectives, such as information coverage, relevance, and diversity in
summary without any human intervention. We evaluate the performance of IKDSumm
with 8 state-of-the-art techniques on 12 disaster datasets. The evaluation
results show that IKDSumm outperforms existing techniques by approximately
2-79% in terms of ROUGE-N F1-score.
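A crude sketch of key-phrase-driven extractive selection is shown below:
tweets are greedily picked by how many uncovered key-phrases they contribute,
a rough proxy for coverage and diversity. The ontology-based key-phrase
extraction and weighting in IKDSumm are omitted, and all names here are
illustrative.

```python
# Greedy key-phrase coverage selection over a set of disaster tweets.
def summarize(tweets: list[str], keyphrases: set[str], k: int = 3):
    covered: set[str] = set()
    summary: list[str] = []
    for _ in range(k):
        def gain(t: str) -> int:  # new key-phrases this tweet adds
            hits = {p for p in keyphrases if p in t.lower()}
            return len(hits - covered)
        best = max((t for t in tweets if t not in summary),
                   key=gain, default=None)
        if best is None or gain(best) == 0:
            break  # no tweet adds new information
        summary.append(best)
        covered |= {p for p in keyphrases if p in best.lower()}
    return summary

tweets = ["Flood water rising near the bridge, evacuation ongoing",
          "Evacuation center open at the school",
          "Thoughts and prayers"]
print(summarize(tweets, {"flood", "evacuation", "bridge", "school"}))
```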
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 11:05:55 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Garg",
"Piyush Kumar",
""
],
[
"Chakraborty",
"Roshni",
""
],
[
"Gupta",
"Srishti",
""
],
[
"Dandapat",
"Sourav Kumar",
""
]
] |
new_dataset
| 0.9975 |
2305.11605
|
Tashi Namgyal
|
Tashi Namgyal, Peter Flach, Raul Santos-Rodriguez
|
MIDI-Draw: Sketching to Control Melody Generation
|
Late-Breaking / Demo Session Extended Abstract, ISMIR 2022 Conference
| null | null | null |
cs.SD cs.AI cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We describe a proof-of-principle implementation of a system for drawing
melodies that abstracts away from a note-level input representation via melodic
contours. The aim is to allow users to express their musical intentions without
requiring prior knowledge of how notes fit together melodiously. Current
approaches to controllable melody generation often require users to choose
parameters that are static across a whole sequence, via buttons or sliders. In
contrast, our method allows users to quickly specify how parameters should
change over time by drawing a contour.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 11:31:33 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Namgyal",
"Tashi",
""
],
[
"Flach",
"Peter",
""
],
[
"Santos-Rodriguez",
"Raul",
""
]
] |
new_dataset
| 0.953647 |
2305.11618
|
Amira Guesmi
|
Amira Guesmi, Ruitian Ding, Muhammad Abdullah Hanif, Ihsen Alouani,
Muhammad Shafique
|
DAP: A Dynamic Adversarial Patch for Evading Person Detectors
| null | null | null | null |
cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a novel approach for generating naturalistic
adversarial patches without using GANs. Our proposed approach generates a
Dynamic Adversarial Patch (DAP) that looks naturalistic while maintaining high
attack efficiency and robustness in real-world scenarios. To achieve this, we
redefine the optimization problem by introducing a new objective function,
where a similarity metric is used to construct a similarity loss. This guides
the patch to follow predefined patterns while maximizing the victim model's
loss function. Our technique is based on directly modifying the pixel values in
the patch which gives higher flexibility and larger space to incorporate
multiple transformations compared to the GAN-based techniques. Furthermore,
most clothing-based physical attacks assume static objects and ignore the
possible transformations caused by non-rigid deformation due to changes in a
person's pose. To address this limitation, we incorporate a ``Creases
Transformation'' (CT) block, i.e., a preprocessing block that, together with an
Expectation Over Transformation (EOT) block, generates a large variety of
transformed patches during training to increase the patch's robustness to
different possible real-world distortions (e.g., creases in the
clothing, rotation, re-scaling, random noise, brightness and contrast
variations, etc.). We demonstrate that the presence of different real-world
variations in clothing and object poses (i.e., the above-mentioned distortions)
leads to a drop in the performance of state-of-the-art attacks. For instance,
these techniques can merely achieve 20\% in the physical world and 30.8\% in
the digital world, while our attack provides a superior success rate of up to 65\%
and 84.56\%, respectively when attacking the YOLOv3tiny detector deployed in
smart cameras at the edge.
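One way to picture the patch-update step is the sketch below: an EOT loop
averages the detector loss over random transformations, a similarity term
pulls the patch toward a target pattern, and pixels are updated directly and
clamped to a valid range. `detector_loss` and `random_transform` are
placeholders for components not specified here, and the loss weighting is an
assumption.

```python
# Sketch of one optimization step for a naturalistic adversarial patch.
import torch

def dap_step(patch, target_pattern, detector_loss, random_transform,
             opt, lam: float = 0.1, n_eot: int = 8):
    opt.zero_grad()
    # EOT: average attack strength over random physical transformations.
    attack = torch.stack([detector_loss(random_transform(patch))
                          for _ in range(n_eot)]).mean()
    # Similarity loss keeps the patch close to a predefined pattern.
    similarity = torch.nn.functional.mse_loss(patch, target_pattern)
    loss = -attack + lam * similarity  # maximize attack, stay natural
    loss.backward()
    opt.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # direct pixel update, keep pixels valid
    return loss.item()
```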
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 11:52:42 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Guesmi",
"Amira",
""
],
[
"Ding",
"Ruitian",
""
],
[
"Hanif",
"Muhammad Abdullah",
""
],
[
"Alouani",
"Ihsen",
""
],
[
"Shafique",
"Muhammad",
""
]
] |
new_dataset
| 0.980662 |
2305.11625
|
Valentin Malykh
|
Ivan Sedykh, Dmitry Abulkhanov, Nikita Sorokin, Sergey Nikolenko,
Valentin Malykh
|
Searching by Code: a New SearchBySnippet Dataset and SnippeR Retrieval
Model for Searching by Code Snippets
| null | null | null | null |
cs.CL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code search is an important task that has seen many developments in recent
years. However, previous attempts have mostly considered the problem of
searching for code by a text query. We argue that using a code snippet (and
possibly an associated traceback) as a query and looking for answers with
bugfixing instructions and code samples is a natural use case that is not
covered by existing approaches. Moreover, existing datasets use comments
extracted from code rather than full-text descriptions as text, making them
unsuitable for this use case. We present a new SearchBySnippet dataset
implementing the search-by-code use case based on StackOverflow data; it turns
out that in this setting, existing architectures fall short of the simplest
BM25 baseline even after fine-tuning. We present a new single encoder model
SnippeR that outperforms several strong baselines on the SearchBySnippet
dataset with a result of 0.451 Recall@10; we propose the SearchBySnippet
dataset and SnippeR as a new important benchmark for code search evaluation.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 12:09:30 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Sedykh",
"Ivan",
""
],
[
"Abulkhanov",
"Dmitry",
""
],
[
"Sorokin",
"Nikita",
""
],
[
"Nikolenko",
"Sergey",
""
],
[
"Malykh",
"Valentin",
""
]
] |
new_dataset
| 0.999273 |
2305.11626
|
Valentin Malykh
|
Nikita Sorokin, Dmitry Abulkhanov, Sergey Nikolenko, Valentin Malykh
|
CCT-Code: Cross-Consistency Training for Multilingual Clone Detection
and Code Search
| null | null | null | null |
cs.CL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the clone detection and information retrieval problems for source
code, well-known tasks important for any programming language. Although it is
also an important and interesting problem to find code snippets that operate
identically but are written in different programming languages, to the best of
our knowledge, multilingual clone detection has not been studied in the literature.
In this work, we formulate the multilingual clone detection problem and present
XCD, a new benchmark dataset produced from the CodeForces submissions dataset.
Moreover, we present a novel training procedure, called cross-consistency
training (CCT), that we apply to train language models on source code in
different programming languages. The resulting CCT-LM model, initialized with
GraphCodeBERT and fine-tuned with CCT, achieves new state of the art,
outperforming existing approaches on the POJ-104 clone detection benchmark with
95.67\% MAP and AdvTest code search benchmark with 47.18\% MRR; it also shows
the best results on the newly created multilingual clone detection benchmark
XCD across all programming languages.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 12:09:49 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Sorokin",
"Nikita",
""
],
[
"Abulkhanov",
"Dmitry",
""
],
[
"Nikolenko",
"Sergey",
""
],
[
"Malykh",
"Valentin",
""
]
] |
new_dataset
| 0.987557 |
2305.11674
|
Jai Prakash
|
Jai Prakash, Michele Vignati, and Edoardo Sabbioni
|
Vehicle Teleoperation: Performance Assessment of SRPT Approach Under
State Estimation Errors
|
This work has been submitted to Elsevier for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Vehicle teleoperation has numerous potential applications, including serving
as a backup solution for autonomous vehicles, facilitating remote delivery
services, and enabling hazardous remote operations. However, complex urban
scenarios, limited situational awareness, and network delay increase the
cognitive workload of human operators and degrade teleoperation performance. To
address this, the successive reference pose tracking (SRPT) approach was
introduced in earlier work, which transmits successive reference poses to the
remote vehicle instead of steering commands. The operator generates reference
poses online with the help of a joystick steering and an augmented display,
potentially mitigating the detrimental effects of delays. However, it is not
clear which minimal set of sensors is essential for the SRPT vehicle
teleoperation control loop.
This paper tests the robustness of the SRPT approach in the presence of state
estimation inaccuracies, environmental disturbances, and measurement noises.
The simulation environment, implemented in Simulink, features a 14-dof vehicle
model and incorporates difficult maneuvers such as tight corners, double-lane
changes, and slalom. Environmental disturbances include low adhesion track
regions and strong cross-wind gusts. The results demonstrate that the SRPT
approach, using either estimated or actual states, performs similarly under
various worst-case scenarios, even without a position sensor requirement.
Additionally, the designed state estimator ensures sufficient performance with
just an inertial measurement unit, wheel speed encoder, and steer encoder,
constituting a minimal set of essential sensors for the SRPT vehicle
teleoperation control loop.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 13:42:51 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Prakash",
"Jai",
""
],
[
"Vignati",
"Michele",
""
],
[
"Sabbioni",
"Edoardo",
""
]
] |
new_dataset
| 0.999219 |
2305.11692
|
Long Bai
|
Long Bai, Mobarakol Islam, Lalithkumar Seenivasan, Hongliang Ren
|
Surgical-VQLA: Transformer with Gated Vision-Language Embedding for
Visual Question Localized-Answering in Robotic Surgery
|
To appear in IEEE ICRA 2023. Code and data availability:
https://github.com/longbai1006/Surgical-VQLA
| null | null | null |
cs.CV cs.AI cs.CL cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the availability of computer-aided simulators and recorded videos of
surgical procedures, junior residents still heavily rely on experts to answer
their queries. However, expert surgeons are often overloaded with clinical and
academic workloads and have limited time for answering. For this purpose, we
develop a surgical question-answering system to facilitate robot-assisted
surgical scene and activity understanding from recorded videos. Most of the
existing VQA methods require an object detector and a region-based feature
extractor to extract visual features and fuse them with the embedded text of
the question for answer generation. However, (1) surgical object detection
models are scarce due to small datasets and a lack of bounding box annotations;
(2) the current fusion strategy for heterogeneous modalities like text and image is
naive; (3) localized answering is missing, which is crucial in complex
surgical scenarios. In this paper, we propose Visual Question
Localized-Answering in Robotic Surgery (Surgical-VQLA) to localize the specific
surgical area during the answer prediction. To deal with the fusion of the
heterogeneous modalities, we design gated vision-language embedding (GVLE) to
build input patches for the Language Vision Transformer (LViT) to predict the
answer. To get localization, we add the detection head in parallel with the
prediction head of the LViT. We also integrate GIoU loss to boost localization
performance by preserving the accuracy of the question-answering model. We
annotate two datasets of VQLA by utilizing publicly available surgical videos
from MICCAI challenges EndoVis-17 and 18. Our validation results suggest that
Surgical-VQLA can better understand the surgical scene and localize the
specific area related to the question. GVLE presents an efficient
language-vision embedding technique by showing superior performance over the
existing benchmarks.
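A minimal sketch of gated fusion of visual and text embeddings, in the spirit
of GVLE, is shown below; it assumes equal token counts in both modalities and
omits the LViT wiring, so it is an illustration rather than the paper's
module.

```python
# Sketch: a learned gate decides, per dimension, how much of each
# modality enters the fused embedding.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, vis, txt):  # both: (batch, tokens, dim)
        g = torch.sigmoid(self.gate(torch.cat([vis, txt], dim=-1)))
        return g * vis + (1 - g) * txt

fuse = GatedFusion(dim=128)
out = fuse(torch.randn(2, 16, 128), torch.randn(2, 16, 128))
print(out.shape)  # torch.Size([2, 16, 128])
```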
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 14:13:47 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Bai",
"Long",
""
],
[
"Islam",
"Mobarakol",
""
],
[
"Seenivasan",
"Lalithkumar",
""
],
[
"Ren",
"Hongliang",
""
]
] |
new_dataset
| 0.997736 |
2305.11729
|
Ioanna Diamanti
|
Ioanna Diamanti, Antigoni Tsiami, Petros Koutras and Petros Maragos
|
ViDaS Video Depth-aware Saliency Network
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce ViDaS, a two-stream, fully convolutional Video, Depth-Aware
Saliency network to address the problem of attention modeling ``in-the-wild",
via saliency prediction in videos. Contrary to existing visual saliency
approaches using only RGB frames as input, our network also employs depth as an
additional modality. The network consists of two visual streams, one for the
RGB frames, and one for the depth frames. Both streams follow an
encoder-decoder approach and are fused to obtain a final saliency map. The
network is trained end-to-end and is evaluated in a variety of different
databases with eye-tracking data, containing a wide range of video content.
Although the publicly available datasets do not contain depth, we estimate it
using three different state-of-the-art methods, to enable comparisons and a
deeper insight. Our method outperforms in most cases state-of-the-art models
and our RGB-only variant, which indicates that depth can be beneficial for
accurately estimating saliency in videos displayed on a 2D screen. Depth has
been widely used to assist salient object detection problems, where it has been
proven to be very beneficial. Our problem, though, differs significantly from
salient object detection, since it is not restricted to specific salient
objects, but predicts human attention in a more general aspect. These two
problems not only have different objectives, but also different ground truth
data and evaluation metrics. To the best of our knowledge, this is the first
competitive deep learning video saliency estimation approach that combines both
RGB and Depth features to address the general problem of saliency estimation
``in-the-wild". The code will be publicly released.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 15:04:49 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Diamanti",
"Ioanna",
""
],
[
"Tsiami",
"Antigoni",
""
],
[
"Koutras",
"Petros",
""
],
[
"Maragos",
"Petros",
""
]
] |
new_dataset
| 0.995031 |
2305.11731
|
Mohammad Dehghani
|
Mohammad Dehghani, Heshaam Faili
|
Persian Typographical Error Type Detection using Many-to-Many Deep
Neural Networks on Algorithmically-Generated Misspellings
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Digital technologies have led to an influx of text created daily in a variety
of languages, styles, and formats. Much of the popularity of
spell-checking systems can be attributed to this phenomenon, since they are
crucial for polishing digitally conceived text. In this study, we tackle
Typographical Error Type Detection in Persian, which has been relatively
understudied. In this paper, we present a public dataset named FarsTypo,
containing 3.4 million chronologically ordered and part-of-speech tagged words
of diverse topics and linguistic styles. An algorithm for applying
Persian-specific errors is developed and applied at scale to these
words, forming a parallel dataset of correct and incorrect words. Using
FarsTypo, we establish a firm baseline and compare different methodologies
using various architectures. In addition, we present a novel Many-to-Many Deep
Sequential Neural Network to perform token classification using both word and
character embeddings in combination with bidirectional LSTM layers to detect
typographical errors across 51 classes. We compare our approach with
highly-advanced industrial systems that, unlike this study, have been developed
utilizing a variety of resources. The results of our final method were
competitive in that we achieved an accuracy of 97.62%, a precision of 98.83%, a
recall of 98.61%, and outperformed the rest in terms of speed.
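A minimal sketch of the many-to-many token-classification setup is given
below, using word embeddings and a bidirectional LSTM with one 51-way
error-type prediction per token; the character-embedding branch and all sizes
are simplified assumptions.

```python
# Sketch: BiLSTM tagger emitting one typo-error-type logit per token.
import torch
import torch.nn as nn

class TypoTagger(nn.Module):
    def __init__(self, vocab: int, n_classes: int = 51,
                 emb: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):          # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)                  # error-type logits per token

model = TypoTagger(vocab=50_000)
logits = model(torch.randint(0, 50_000, (4, 32)))
print(logits.shape)  # torch.Size([4, 32, 51])
```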
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 15:05:39 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Dehghani",
"Mohammad",
""
],
[
"Faili",
"Heshaam",
""
]
] |
new_dataset
| 0.997998 |
2305.11746
|
Marta R. Costa-Juss\`a
|
David Dale, Elena Voita, Janice Lam, Prangthip Hansanti, Christophe
Ropers, Elahe Kalbassi, Cynthia Gao, Lo\"ic Barrault, Marta R. Costa-juss\`a
|
HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination
and Omission Detection in Machine Translation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Hallucinations in machine translation are translations that contain
information completely unrelated to the input. Omissions are translations that
do not include some of the input information. While both cases tend to be
catastrophic errors undermining user trust, annotated data with these types of
pathologies is extremely scarce and is limited to a few high-resource
languages. In this work, we release an annotated dataset for the hallucination
and omission phenomena covering 18 translation directions with varying resource
levels and scripts. Our annotation covers different levels of partial and full
hallucinations as well as omissions both at the sentence and at the word level.
Additionally, we revisit previous methods for hallucination and omission
detection, show that conclusions made based on a single language pair largely
do not hold for a large-scale evaluation, and establish new solid baselines.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 15:33:50 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Dale",
"David",
""
],
[
"Voita",
"Elena",
""
],
[
"Lam",
"Janice",
""
],
[
"Hansanti",
"Prangthip",
""
],
[
"Ropers",
"Christophe",
""
],
[
"Kalbassi",
"Elahe",
""
],
[
"Gao",
"Cynthia",
""
],
[
"Barrault",
"Loïc",
""
],
[
"Costa-jussà",
"Marta R.",
""
]
] |
new_dataset
| 0.999612 |
2305.11819
|
Zijian Zhang
|
Zijian Zhang and Linglong Dai
|
Reconfigurable Intelligent Surfaces for 6G: Nine Fundamental Issues and
One Critical Problem
|
To appear in TST as an invited paper. This paper discusses nine
fundamental issues and one critical problem of RISs. Highly related works can
be found at arxiv:2103.15154
| null |
10.26599/TST.2023.9010001
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Thanks to the recent advances in metamaterials, reconfigurable intelligent
surface (RIS) has emerged as a promising technology for future 6G wireless
communications. Benefiting from its high array gain, low cost, and low power
consumption, RISs are expected to greatly enlarge signal coverage, improve
system capacity, and increase energy efficiency. In this article, we
systematically overview the emerging RIS technology with the focus on its key
basics, nine fundamental issues, and one critical problem. Specifically, we
first explain the RIS basics, including its working principles, hardware
structures, and potential benefits for communications. Based on these basics,
nine fundamental issues of RISs, such as ``What are the differences between RISs
and massive MIMO?'' and ``Is RIS really intelligent?'', are explicitly
addressed to elaborate its technical features, distinguish it from existing
technologies, and clarify some misunderstandings in the literature. Then, one
critical problem of RISs is revealed that, due to the ``multiplicative fading''
effect, existing passive RISs can hardly achieve visible performance gains in
many communication scenarios with strong direct links. To address this critical
problem, a potential solution called active RISs is introduced, and its
effectiveness is demonstrated by numerical simulations.
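To make the ``multiplicative fading'' effect concrete, a standard
cascaded-channel sketch follows (the notation and free-space-style scaling are
assumed for illustration, not taken from the article):

```latex
% Cascaded RIS path vs. direct link. With BS-RIS distance d_1 and
% RIS-user distance d_2, the cascaded path loss scales as the PRODUCT
% of the two segment losses:
\[
  \underbrace{L_{\mathrm{RIS}} \propto d_1^{-\alpha}\, d_2^{-\alpha}}_{\text{multiplicative fading}}
  \qquad \text{vs.} \qquad
  L_{\mathrm{direct}} \propto d^{-\alpha}.
\]
% Unless the RIS is very large or sits very close to one endpoint, the
% reflected path is far weaker than a strong direct link; active RISs
% add amplification to offset this product of losses.
```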
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 16:53:25 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Zhang",
"Zijian",
""
],
[
"Dai",
"Linglong",
""
]
] |
new_dataset
| 0.994302 |
2305.11840
|
Akshita Jha
|
Akshita Jha, Aida Davani, Chandan K. Reddy, Shachi Dave, Vinodkumar
Prabhakaran, Sunipa Dev
|
SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage
Leveraging Generative Models
| null | null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Stereotype benchmark datasets are crucial for detecting and mitigating social
stereotypes about groups of people in NLP models. However, existing datasets
are limited in size and coverage, and are largely restricted to stereotypes
prevalent in Western society. This is especially problematic as language
technologies take hold across the globe. To address this gap, we present
SeeGULL, a broad-coverage stereotype dataset built by utilizing the generative
capabilities of large language models such as PaLM and GPT-3, and leveraging a
globally diverse rater pool to validate the prevalence of those stereotypes in
society. SeeGULL is in English, and contains stereotypes about identity groups
spanning 178 countries across 8 different geo-political regions on 6
continents, as well as state-level identities within the US and India. We also
include fine-grained offensiveness scores for different stereotypes and
demonstrate their global disparities. Furthermore, we include comparative
annotations about the same groups by annotators living in the region vs. those
based in North America, and demonstrate that within-region stereotypes about
groups differ from those prevalent in North America. CONTENT WARNING: This
paper contains stereotype examples that may be offensive.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 17:30:19 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Jha",
"Akshita",
""
],
[
"Davani",
"Aida",
""
],
[
"Reddy",
"Chandan K.",
""
],
[
"Dave",
"Shachi",
""
],
[
"Prabhakaran",
"Vinodkumar",
""
],
[
"Dev",
"Sunipa",
""
]
] |
new_dataset
| 0.999731 |
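A minimal sketch of the kind of aggregation such a benchmark enables: comparing how often a candidate (identity, attribute) association is validated as a stereotype by in-region raters versus raters based in North America. The record layout and example rows are assumptions for illustration, not the released SeeGULL schema.

# Illustrative aggregation over stereotype annotations; the tuple layout and
# example rows are assumptions, not the released SeeGULL schema.
from collections import defaultdict

# (identity_group, attribute, rater_region, is_stereotype)
annotations = [
    ("group_A", "attr_X", "in_region", True),
    ("group_A", "attr_X", "in_region", True),
    ("group_A", "attr_X", "north_america", False),
    ("group_A", "attr_X", "north_america", True),
]

counts = defaultdict(lambda: [0, 0])  # (validated, total) per ((group, attr), region)
for group, attr, region, validated in annotations:
    key = ((group, attr), region)
    counts[key][0] += int(validated)
    counts[key][1] += 1

# Prevalence per region; diverging values indicate within-region stereotypes
# that out-of-region raters do not validate.
for (pair, region), (validated, total) in sorted(counts.items()):
    print(pair, region, f"prevalence={validated / total:.2f}")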
2305.11846
|
Ziyi Yang
|
Zineng Tang, Ziyi Yang, Chenguang Zhu, Michael Zeng, Mohit Bansal
|
Any-to-Any Generation via Composable Diffusion
|
Project Page: https://codi-gen.github.io
| null | null | null |
cs.CV cs.CL cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present Composable Diffusion (CoDi), a novel generative model capable of
generating any combination of output modalities, such as language, image,
video, or audio, from any combination of input modalities. Unlike existing
generative AI systems, CoDi can generate multiple modalities in parallel and
its input is not limited to a subset of modalities like text or image. Despite
the absence of training datasets for many combinations of modalities, we
propose to align modalities in both the input and output space. This allows
CoDi to freely condition on any input combination and generate any group of
modalities, even if they are not present in the training data. CoDi employs a
novel composable generation strategy which involves building a shared
multimodal space by bridging alignment in the diffusion process, enabling the
synchronized generation of intertwined modalities, such as temporally aligned
video and audio. Highly customizable and flexible, CoDi achieves strong
joint-modality generation quality, and outperforms or is on par with the
unimodal state-of-the-art for single-modality synthesis. The project page with
demonstrations and code is at https://codi-gen.github.io
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 17:38:32 GMT"
}
] | 2023-05-22T00:00:00 |
[
[
"Tang",
"Zineng",
""
],
[
"Yang",
"Ziyi",
""
],
[
"Zhu",
"Chenguang",
""
],
[
"Zeng",
"Michael",
""
],
[
"Bansal",
"Mohit",
""
]
] |
new_dataset
| 0.956078 |
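A minimal sketch of the general idea behind aligning modality encoders into a shared space with a contrastive objective, so that generation can later condition on any input combination. The stand-in linear encoders, dimensions, and loss below are placeholder assumptions; this is not the CoDi implementation.

# Toy contrastive alignment of two modality encoders into a shared space.
# Encoders and dimensions are placeholder assumptions, not CoDi's architecture.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
text_encoder = torch.nn.Linear(512, 256)    # stand-in text encoder
image_encoder = torch.nn.Linear(1024, 256)  # stand-in image encoder

def clip_style_loss(text_feats, image_feats, temperature=0.07):
    """Symmetric InfoNCE loss pulling paired embeddings together."""
    t = F.normalize(text_feats, dim=-1)
    i = F.normalize(image_feats, dim=-1)
    logits = t @ i.T / temperature
    targets = torch.arange(len(t))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

text_batch = torch.randn(8, 512)    # paired text/image features (toy data)
image_batch = torch.randn(8, 1024)
loss = clip_style_loss(text_encoder(text_batch), image_encoder(image_batch))
loss.backward()  # gradients flow into both encoders, shaping the shared space
print(f"alignment loss: {loss.item():.3f}")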
2204.09397
|
Loris Giulivi
|
Loris Giulivi, Malhar Jere, Loris Rossi, Farinaz Koushanfar, Gabriela
Ciocarlie, Briland Hitaj, Giacomo Boracchi
|
Adversarial Scratches: Deployable Attacks to CNN Classifiers
|
This work is published in Pattern Recognition (Elsevier). This paper
stems from 'Scratch that! An Evolution-based Adversarial Attack against
Neural Networks', for which an arXiv preprint is available at
arXiv:1912.02316. Further studies led to a complete overhaul of the work,
resulting in this paper
|
Pattern Recognition, Volume 133, January 2023, 108985
|
10.1016/j.patcog.2022.108985
| null |
cs.LG cs.CR cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A growing body of work has shown that deep neural networks are susceptible to
adversarial examples. These take the form of small perturbations applied to the
model's input which lead to incorrect predictions. Unfortunately, most of the
literature focuses on visually imperceptible perturbations applied to digital
images, which often are, by design, impossible to deploy on physical targets.
We present Adversarial Scratches: a novel L0 black-box attack, which takes the
form of scratches in images and possesses much greater deployability than other
state-of-the-art attacks. Adversarial Scratches leverage B\'ezier curves to
reduce the dimension of the search space and possibly constrain the attack to a
specific location. We test Adversarial Scratches in several scenarios,
including a publicly available API and images of traffic signs. Results show
that our attack often achieves a higher fooling rate than other deployable
state-of-the-art methods, while requiring significantly fewer queries and
modifying very few pixels.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 11:42:24 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Aug 2022 08:22:01 GMT"
},
{
"version": "v3",
"created": "Thu, 18 May 2023 07:55:02 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Giulivi",
"Loris",
""
],
[
"Jere",
"Malhar",
""
],
[
"Rossi",
"Loris",
""
],
[
"Koushanfar",
"Farinaz",
""
],
[
"Ciocarlie",
"Gabriela",
""
],
[
"Hitaj",
"Briland",
""
],
[
"Boracchi",
"Giacomo",
""
]
] |
new_dataset
| 0.979994 |
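A minimal sketch of the attack's core primitive: rasterizing a quadratic Bézier curve onto an image, so that a black-box search only has to optimize a handful of control-point and color parameters. The evolution-based search loop itself is omitted; the image size, control points, and color are illustrative assumptions.

# Rasterize a quadratic Bezier "scratch" onto an image; the low-dimensional
# parameter vector (3 control points + RGB color) is what a black-box search
# would optimize. Values and image size are illustrative assumptions.
import numpy as np

def draw_scratch(image: np.ndarray, p0, p1, p2, color, steps: int = 200) -> np.ndarray:
    """Return a copy of `image` with a quadratic Bezier curve drawn in `color`."""
    out = image.copy()
    h, w = image.shape[:2]
    for t in np.linspace(0.0, 1.0, steps):
        # Quadratic Bezier: B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2
        point = ((1 - t) ** 2 * np.asarray(p0)
                 + 2 * (1 - t) * t * np.asarray(p1)
                 + t ** 2 * np.asarray(p2))
        x, y = int(round(point[0])), int(round(point[1]))
        if 0 <= x < w and 0 <= y < h:
            out[y, x] = color
    return out

image = np.zeros((64, 64, 3), dtype=np.uint8)
attacked = draw_scratch(image, p0=(5, 60), p1=(32, 0), p2=(60, 55), color=(255, 0, 0))
print(int((attacked != image).any(axis=-1).sum()), "pixels modified")  # very few pixels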
2209.06809
|
Tiago Pimentel
|
Clemente Pasti, Andreas Opedal, Tiago Pimentel, Tim Vieira, Jason
Eisner, Ryan Cotterell
|
On the Intersection of Context-Free and Regular Languages
|
EACL 2023 camera ready version. Our code is available in
https://github.com/rycolab/bar-hillel
| null | null | null |
cs.FL cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Bar-Hillel construction is a classic result in formal language theory. It
shows, by a simple construction, that the intersection of a context-free
language and a regular language is itself context-free. In the construction,
the regular language is specified by a finite-state automaton. However, neither
the original construction (Bar-Hillel et al., 1961) nor its weighted extension
(Nederhof and Satta, 2003) can handle finite-state automata with
$\varepsilon$-arcs. While it is possible to remove $\varepsilon$-arcs from a
finite-state automaton efficiently without modifying the language, such an
operation modifies the automaton's set of paths. We give a construction that
generalizes the Bar-Hillel construction to the case where the input automaton
has $\varepsilon$-arcs, and further prove that our generalized construction
leads to a grammar that encodes the structure of both the input automaton and
grammar while retaining the asymptotic size of the original construction.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 17:49:06 GMT"
},
{
"version": "v2",
"created": "Thu, 18 May 2023 09:57:42 GMT"
}
] | 2023-05-19T00:00:00 |
[
[
"Pasti",
"Clemente",
""
],
[
"Opedal",
"Andreas",
""
],
[
"Pimentel",
"Tiago",
""
],
[
"Vieira",
"Tim",
""
],
[
"Eisner",
"Jason",
""
],
[
"Cotterell",
"Ryan",
""
]
] |
new_dataset
| 0.999442 |
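The classic ($\varepsilon$-free) Bar-Hillel product that this paper generalizes can be written in a few lines: for a grammar in Chomsky normal form and a finite-state automaton, every nonterminal is tripled with a pair of automaton states. The sketch below is the standard unweighted construction without $\varepsilon$-arcs, i.e. the starting point of the paper, not its new construction; start-symbol handling over initial/final states is omitted for brevity.

# Standard (unweighted, epsilon-free) Bar-Hillel product of a CNF grammar and
# an FSA: nonterminals become (state, symbol, state) triples. This is the
# classic construction the paper generalizes, not the paper's new one.
from itertools import product

def bar_hillel(rules, states, arcs):
    """rules: list of binary (A, (B, C)) or lexical (A, (a,)) CNF rules.
    arcs: set of (p, a, q) FSA transitions over terminal symbols."""
    new_rules = []
    for A, rhs in rules:
        if len(rhs) == 2:  # A -> B C  becomes  (p,A,r) -> (p,B,q) (q,C,r)
            B, C = rhs
            for p, q, r in product(states, repeat=3):
                new_rules.append(((p, A, r), ((p, B, q), (q, C, r))))
        else:              # A -> a    becomes  (p,A,q) -> a  for each arc (p,a,q)
            (a,) = rhs
            for p, sym, q in arcs:
                if sym == a:
                    new_rules.append(((p, A, q), (a,)))
    return new_rules

# Tiny example: S -> A B, A -> 'a', B -> 'b'; FSA accepting "ab" via 0-a->1-b->2.
rules = [("S", ("A", "B")), ("A", ("a",)), ("B", ("b",))]
out = bar_hillel(rules, {0, 1, 2}, {(0, "a", 1), (1, "b", 2)})
print(len(out), "product rules")  # 27 binary rules + 2 lexical rules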