id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2205.03860
|
Chunyu Xie
|
Chunyu Xie, Jincheng Li, Heng Cai, Fanjing Kong, Xiaoyu Wu, Jianfei
Song, Henrique Morimitsu, Lin Yao, Dexin Wang, Dawei Leng, Baochang Zhang,
Xiangyang Ji, Yafeng Deng
|
Zero and R2D2: A Large-scale Chinese Cross-modal Benchmark and A
Vision-Language Framework
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-language pre-training (VLP) on large-scale datasets has shown premier
performance on various downstream tasks. In contrast to plenty of available
benchmarks with English corpus, large-scale pre-training datasets and
downstream datasets with Chinese corpus remain largely unexplored. In this
work, we build a large-scale high-quality Chinese cross-modal benchmark named
ZERO for the research community, which contains the currently largest public
pre-training dataset ZERO-Corpus and five human-annotated fine-tuning datasets
for downstream tasks. ZERO-Corpus contains 250 million images paired with 750
million text descriptions, and two of the five fine-tuning datasets are also
currently the largest ones for Chinese cross-modal downstream tasks. Along with
the ZERO benchmark, we also develop a VLP framework with pre-Ranking + Ranking
mechanism, boosted with target-guided Distillation and feature-guided
Distillation (R2D2) for large-scale cross-modal learning. A global contrastive
pre-ranking is first introduced to learn the individual representations of
images and texts. These primitive representations are then fused in a
fine-grained ranking manner via an image-text cross encoder and a text-image
cross encoder. The target-guided distillation and feature-guided distillation
are further proposed to enhance the capability of R2D2. With the ZERO-Corpus
and the R2D2 VLP framework, we achieve state-of-the-art performance on twelve
downstream datasets from five broad categories of tasks including image-text
retrieval, image-text matching, image captioning, text-to-image generation, and
zero-shot image classification. The datasets, models, and codes are available
at https://github.com/yuxie11/R2D2
|
[
{
"version": "v1",
"created": "Sun, 8 May 2022 13:19:23 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jun 2022 13:11:20 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Jun 2022 03:21:04 GMT"
},
{
"version": "v4",
"created": "Mon, 13 Jun 2022 03:09:51 GMT"
},
{
"version": "v5",
"created": "Thu, 17 Nov 2022 10:18:14 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Xie",
"Chunyu",
""
],
[
"Li",
"Jincheng",
""
],
[
"Cai",
"Heng",
""
],
[
"Kong",
"Fanjing",
""
],
[
"Wu",
"Xiaoyu",
""
],
[
"Song",
"Jianfei",
""
],
[
"Morimitsu",
"Henrique",
""
],
[
"Yao",
"Lin",
""
],
[
"Wang",
"Dexin",
""
],
[
"Leng",
"Dawei",
""
],
[
"Zhang",
"Baochang",
""
],
[
"Ji",
"Xiangyang",
""
],
[
"Deng",
"Yafeng",
""
]
] |
new_dataset
| 0.999853 |
2205.13643
|
Davi Colli Tozoni
|
Zizhou Huang, Davi Colli Tozoni, Arvi Gjoka, Zachary Ferguson, Teseo
Schneider, Daniele Panozzo, Denis Zorin
|
Differentiable solver for time-dependent deformation problems with
contact
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a general differentiable solver for time-dependent deformation
problems with contact and friction. Our approach uses a finite element
discretization with a high-order time integrator coupled with the recently
proposed incremental potential contact method for handling contact and friction
forces to solve PDE- and ODE-constrained optimization problems on scenes with a
complex geometry. It supports static and dynamic problems and differentiation
with respect to all physical parameters involved in the physical problem
description, which include shape, material parameters, friction parameters, and
initial conditions. Our analytically derived adjoint formulation is efficient,
with a small overhead (typically less than 10% for nonlinear problems) over the
forward simulation, and shares many similarities with the forward problem,
allowing the reuse of large parts of existing forward simulator code.
We implement our approach on top of the open-source PolyFEM library, and
demonstrate the applicability of our solver to shape design, initial condition
optimization, and material estimation on both simulated results and in physical
validations.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 21:38:02 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2022 15:57:48 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Huang",
"Zizhou",
""
],
[
"Tozoni",
"Davi Colli",
""
],
[
"Gjoka",
"Arvi",
""
],
[
"Ferguson",
"Zachary",
""
],
[
"Schneider",
"Teseo",
""
],
[
"Panozzo",
"Daniele",
""
],
[
"Zorin",
"Denis",
""
]
] |
new_dataset
| 0.993928 |
2208.06787
|
Kim Yu-Ji
|
Kim Jun-Seong, Kim Yu-Ji, Moon Ye-Bin, Tae-Hyun Oh
|
HDR-Plenoxels: Self-Calibrating High Dynamic Range Radiance Fields
|
Accepted at ECCV 2022. [Project page] https://hdr-plenoxels.github.io
[Code] https://github.com/postech-ami/HDR-Plenoxels
| null | null | null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose high dynamic range (HDR) radiance fields, HDR-Plenoxels, that
learn a plenoptic function of 3D HDR radiance fields, geometry information, and
varying camera settings inherent in 2D low dynamic range (LDR) images. Our
voxel-based volume rendering pipeline reconstructs HDR radiance fields with
only multi-view LDR images taken from varying camera settings in an end-to-end
manner and has a fast convergence speed. To deal with various cameras in
real-world scenarios, we introduce a tone mapping module that models the
digital in-camera imaging pipeline (ISP) and disentangles radiometric settings.
Our tone mapping module allows us to render by controlling the radiometric
settings of each novel view. Finally, we build a multi-view dataset with
varying camera conditions, which fits our problem setting. Our experiments show
that HDR-Plenoxels can express detail and high-quality HDR novel views from
only LDR images with various cameras.
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 06:12:22 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2022 13:32:35 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Jun-Seong",
"Kim",
""
],
[
"Yu-Ji",
"Kim",
""
],
[
"Ye-Bin",
"Moon",
""
],
[
"Oh",
"Tae-Hyun",
""
]
] |
new_dataset
| 0.995862 |
2208.07473
|
Yunge Cui
|
Yunge Cui, Xieyuanli Chen, Yinlong Zhang, Jiahua Dong, Qingxiao Wu,
Feng Zhu
|
BoW3D: Bag of Words for Real-Time Loop Closing in 3D LiDAR SLAM
|
Accepted by IEEE Robotics and Automation Letters (RA-L)/ICRA 2023
| null |
10.1109/LRA.2022.3221336
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Loop closing is a fundamental part of simultaneous localization and mapping
(SLAM) for autonomous mobile systems. In the field of visual SLAM, bag of words
(BoW) has achieved great success in loop closure. The BoW features for loop
searching can also be used in the subsequent 6-DoF loop correction. However,
for 3D LiDAR SLAM, the state-of-the-art methods may fail to effectively
recognize the loop in real time, and usually cannot correct the full 6-DoF loop
pose. To address this limitation, we present a novel Bag of Words for real-time
loop closing in 3D LiDAR SLAM, called BoW3D. Our method not only efficiently
recognizes the revisited loop places, but also corrects the full 6-DoF loop
pose in real time. BoW3D builds the bag of words based on the 3D LiDAR feature
LinK3D, which is efficient, pose-invariant and can be used for accurate
point-to-point matching. We furthermore embed our proposed method into a 3D LiDAR
odometry system to evaluate loop closing performance. We test our method on a
public dataset, and compare it against other state-of-the-art algorithms. BoW3D
shows better performance in terms of F1 max and extended precision scores on
most scenarios. It is noticeable that BoW3D takes an average of 48 ms to
recognize and correct the loops on KITTI 00 (includes 4K+ 64-ray LiDAR scans),
when executed on a notebook with an Intel Core i7 @2.2 GHz processor. We
release the implementation of our method here:
https://github.com/YungeCui/BoW3D.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 23:46:17 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2022 02:35:19 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Cui",
"Yunge",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Zhang",
"Yinlong",
""
],
[
"Dong",
"Jiahua",
""
],
[
"Wu",
"Qingxiao",
""
],
[
"Zhu",
"Feng",
""
]
] |
new_dataset
| 0.99701 |
2211.05958
|
Amir Pouran Ben Veyseh
|
Amir Pouran Ben Veyseh, Minh Van Nguyen, Franck Dernoncourt, and Thien
Huu Nguyen
|
MINION: a Large-Scale and Diverse Dataset for Multilingual Event
Detection
|
Accepted at NAACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Event Detection (ED) is the task of identifying and classifying trigger words
of event mentions in text. Despite considerable research efforts in recent
years for English text, the task of ED in other languages has been
significantly less explored. Switching to non-English languages, important
research questions for ED include how well existing ED models perform on
different languages, how challenging ED is in other languages, and how well ED
knowledge and annotation can be transferred across languages. To answer those
questions, it is crucial to obtain multilingual ED datasets that provide
consistent event annotation for multiple languages. There exist some
multilingual ED datasets; however, they tend to cover a handful of languages
and mainly focus on popular ones. Many languages are not covered in existing
multilingual ED datasets. In addition, the current datasets are often small and
not accessible to the public. To overcome those shortcomings, we introduce a
new large-scale multilingual dataset for ED (called MINION) that consistently
annotates events for 8 different languages; 5 of them have not been supported
by existing multilingual datasets. We also perform extensive experiments and
analysis to demonstrate the challenges and transferability of ED across
languages in MINION, which altogether call for more research effort in this area.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 02:09:51 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 23:50:28 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Veyseh",
"Amir Pouran Ben",
""
],
[
"Van Nguyen",
"Minh",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Nguyen",
"Thien Huu",
""
]
] |
new_dataset
| 0.999424 |
2211.06116
|
Jinghua Xu
|
Jinghua Xu, Zarah Weiss
|
How Much Hate with #china? A Preliminary Analysis on China-related
Hateful Tweets Two Years After the Covid Pandemic Began
| null | null | null | null |
cs.CL cs.AI cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following the outbreak of a global pandemic, online content is filled with
hate speech. Donald Trump's "Chinese Virus" tweet shifted the blame for the
spread of the Covid-19 virus to China and the Chinese people, which triggered a
new round of anti-China hate both online and offline. This research intends to
examine China-related hate speech on Twitter during the two years following the
burst of the pandemic (2020 and 2021). Through Twitter's API, in total
2,172,333 tweets hashtagged #china posted during the time were collected. By
employing multiple state-of-the-art pretrained language models for hate speech
detection, we identify a wide range of hate of various types, resulting in an
automatically labeled anti-China hate speech dataset. We identify a hateful
rate in #china tweets of 2.5% in 2020 and 1.9% in 2021. This is well above the
average rate of online hate speech on Twitter at 0.6% identified in Gao et al.,
2017. We further analyzed the longitudinal development of #china tweets and
those identified as hateful in 2020 and 2021 through visualizing the daily
number and hate rate over the two years. Our keyword analysis of hate speech in
#china tweets reveals the most frequently mentioned terms in the hateful #china
tweets, which can be used for further social science studies.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 10:48:00 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Xu",
"Jinghua",
""
],
[
"Weiss",
"Zarah",
""
]
] |
new_dataset
| 0.995711 |
2211.09847
|
Fazlourrahman Balouchzahi
|
H.L. Shashirekha and F. Balouchzahi and M.D. Anusha and G. Sidorov
|
CoLI-Machine Learning Approaches for Code-mixed Language Identification
at the Word Level in Kannada-English Texts
| null | null | null | null |
cs.CL cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The task of automatically identifying a language used in a given text is
called Language Identification (LI). India is a multilingual country and many
Indians especially youths are comfortable with Hindi and English, in addition
to their local languages. Hence, they often use more than one language to post
their comments on social media. Texts containing more than one language are
called "code-mixed texts" and are a good source of input for LI. Languages in
these texts may be mixed at sentence level, word level or even at sub-word
level. LI at word level is a sequence labeling problem where each and every
word in a sentence is tagged with one of the languages in the predefined set of
languages. In order to address word level LI in code-mixed Kannada-English
(Kn-En) texts, this work presents i) the construction of code-mixed Kn-En
dataset called CoLI-Kenglish dataset, ii) code-mixed Kn-En embedding and iii)
learning models using Machine Learning (ML), Deep Learning (DL) and Transfer
Learning (TL) approaches. Code-mixed Kn-En texts are extracted from Kannada
YouTube video comments to construct CoLI-Kenglish dataset and code-mixed Kn-En
embedding. The words in CoLI-Kenglish dataset are grouped into six major
categories, namely, "Kannada", "English", "Mixed-language", "Name", "Location"
and "Other". The learning models, namely, CoLI-vectors and CoLI-ngrams based on
ML, CoLI-BiLSTM based on DL and CoLI-ULMFiT based on TL approaches are built
and evaluated using the CoLI-Kenglish dataset. The performances of the learning
models illustrate the superiority of the CoLI-ngrams model compared to the other
models, with a macro average F1-score of 0.64. However, the results of all the
learning models were quite competitive with each other.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 19:16:56 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Shashirekha",
"H. L.",
""
],
[
"Balouchzahi",
"F.",
""
],
[
"Anusha",
"M. D.",
""
],
[
"Sidorov",
"G.",
""
]
] |
new_dataset
| 0.988647 |
2211.10001
|
Qin Wang
|
Bo Qin, Qin Wang, Qianhong Wu, Sanxi Li, Wenchang Shi, Yingxin Bi,
Wenyi Tang
|
BDTS: A Blockchain-based Data Trading System with Fair Exchange
| null | null | null | null |
cs.CR cs.CY cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
It is hard to achieve \textit{fair exchange} when trading data through
blockchain platforms. The reasons are twofold: firstly, guaranteeing fairness
between sellers and consumers is a challenging task, as deception by any
participating party is risk-free. This leads to the second issue, where
judging the behavior of data executors (such as cloud service providers) among
distrustful parties is impractical in traditional trading protocols. To fill
the gaps, in this paper, we present a \underline{b}lockchain-based
\underline{d}ata \underline{t}rading \underline{s}ystem, named BDTS. The
proposed BDTS implements a fair-exchange protocol in which benign behaviors can
obtain rewards while dishonest behaviors will be punished. Our scheme leverages
the smart contract technique to act as the agency, managing data distribution
and payment execution. The solution requires the seller to provide consumers
with the correct decryption keys for proper execution and encourages a
\textit{rational} data executor to behave faithfully for \textit{maximum}
benefits. We analyze the strategies of consumers, sellers, and dealers based on
the game theory and prove that our game can reach the subgame perfect Nash
equilibrium when each party honestly behaves. Further, we implement our scheme
based on the Hyperledger Fabric platform with a full-functional design.
Evaluations show that our scheme achieves satisfactory efficiency and
feasibility.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 03:01:36 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Qin",
"Bo",
""
],
[
"Wang",
"Qin",
""
],
[
"Wu",
"Qianhong",
""
],
[
"Li",
"Sanxi",
""
],
[
"Shi",
"Wenchang",
""
],
[
"Bi",
"Yingxin",
""
],
[
"Tang",
"Wenyi",
""
]
] |
new_dataset
| 0.996366 |
2211.10018
|
Yew Ken Chia
|
Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si and
Soujanya Poria
|
A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach
|
19 pages, 6 figures, accepted by EMNLP 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Relation extraction has the potential for large-scale knowledge graph
construction, but current methods do not consider the qualifier attributes for
each relation triplet, such as time, quantity or location. The qualifiers form
hyper-relational facts which better capture the rich and complex knowledge
graph structure. For example, the relation triplet (Leonard Parker, Educated
At, Harvard University) can be factually enriched by including the qualifier
(End Time, 1967). Hence, we propose the task of hyper-relational extraction to
extract more specific and complete facts from text. To support the task, we
construct HyperRED, a large-scale and general-purpose dataset. Existing models
cannot perform hyper-relational extraction as it requires a model to consider
the interaction between three entities. Hence, we propose CubeRE, a
cube-filling model inspired by table-filling approaches that explicitly
considers the interaction between relation triplets and qualifiers. To improve
model scalability and reduce negative class imbalance, we further propose a
cube-pruning method. Our experiments show that CubeRE outperforms strong
baselines and reveal possible directions for future research. Our code and data
are available at github.com/declare-lab/HyperRED.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 03:51:28 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Chia",
"Yew Ken",
""
],
[
"Bing",
"Lidong",
""
],
[
"Aljunied",
"Sharifah Mahani",
""
],
[
"Si",
"Luo",
""
],
[
"Poria",
"Soujanya",
""
]
] |
new_dataset
| 0.99962 |
2211.10023
|
Ming-Yuan Yu
|
Ming-Yuan Yu, Ram Vasudevan, Matthew Johnson-Roberson
|
LiSnowNet: Real-time Snow Removal for LiDAR Point Cloud
|
The paper has been accepted for the 2022 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS 2022)
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
LiDARs have been widely adopted in modern self-driving vehicles, providing 3D
information of the scene and surrounding objects. However, adverse weather
conditions still pose significant challenges to LiDARs since point clouds
captured during snowfall can easily be corrupted. The resulting noisy point
clouds degrade downstream tasks such as mapping. Existing works in de-noising
point clouds corrupted by snow are based on nearest-neighbor search, and thus
do not scale well with modern LiDARs which usually capture $100k$ or more
points at 10Hz. In this paper, we introduce an unsupervised de-noising
algorithm, LiSnowNet, running 52$\times$ faster than the state-of-the-art
methods while achieving superior performance in de-noising. Unlike previous
methods, the proposed algorithm is based on a deep convolutional neural network
and can be easily deployed to hardware accelerators such as GPUs. In addition,
we demonstrate how to use the proposed method for mapping even with corrupted
point clouds.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 04:19:05 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Yu",
"Ming-Yuan",
""
],
[
"Vasudevan",
"Ram",
""
],
[
"Johnson-Roberson",
"Matthew",
""
]
] |
new_dataset
| 0.999017 |
2211.10033
|
Vahid Behzadan
|
Bibek Upadhayay and Vahid Behzadan
|
Adversarial Stimuli: Attacking Brain-Computer Interfaces via Perturbed
Sensory Events
| null | null | null | null |
cs.CR cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning models are known to be vulnerable to adversarial
perturbations in the input domain, causing incorrect predictions. Inspired by
this phenomenon, we explore the feasibility of manipulating EEG-based Motor
Imagery (MI) Brain Computer Interfaces (BCIs) via perturbations in sensory
stimuli. Similar to adversarial examples, these \emph{adversarial stimuli} aim
to exploit the limitations of the integrated brain-sensor-processing components
of the BCI system in handling shifts in participants' response to changes in
sensory stimuli. This paper proposes adversarial stimuli as an attack vector
against BCIs, and reports the findings of preliminary experiments on the impact
of visual adversarial stimuli on the integrity of EEG-based MI BCIs. Our
findings suggest that minor adversarial stimuli can significantly deteriorate
the performance of MI BCIs across all participants (p=0.0003). Additionally,
our results indicate that such attacks are more effective in conditions with
induced stress.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 05:20:35 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Upadhayay",
"Bibek",
""
],
[
"Behzadan",
"Vahid",
""
]
] |
new_dataset
| 0.985001 |
2211.10274
|
Hayden Gunraj
|
Hayden Gunraj, Paul Guerrier, Sheldon Fernandez, Alexander Wong
|
SolderNet: Towards Trustworthy Visual Inspection of Solder Joints in
Electronics Manufacturing Using Explainable Artificial Intelligence
|
Accepted by IAAI-23, 7 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In electronics manufacturing, solder joint defects are a common problem
affecting a variety of printed circuit board components. To identify and
correct solder joint defects, the solder joints on a circuit board are
typically inspected manually by trained human inspectors, which is a very
time-consuming and error-prone process. To improve both inspection efficiency
and accuracy, in this work we describe an explainable deep learning-based
visual quality inspection system tailored for visual inspection of solder
joints in electronics manufacturing environments. At the core of this system is
an explainable solder joint defect identification system called SolderNet which
we design and implement with trust and transparency in mind. While several
challenges remain before the full system can be developed and deployed, this
study presents important progress towards trustworthy visual inspection of
solder joints in electronics manufacturing.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 15:02:59 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Gunraj",
"Hayden",
""
],
[
"Guerrier",
"Paul",
""
],
[
"Fernandez",
"Sheldon",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.999248 |
2211.10330
|
Biyang Guo
|
Biyang Guo, Yeyun Gong, Yelong Shen, Songqiao Han, Hailiang Huang, Nan
Duan, Weizhu Chen
|
GENIUS: Sketch-based Language Model Pre-training via Extreme and
Selective Masking for Text Generation and Augmentation
|
21 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce GENIUS: a conditional text generation model using sketches as
input, which can fill in the missing contexts for a given sketch (key
information consisting of textual spans, phrases, or words, concatenated by
mask tokens). GENIUS is pre-trained on a large-scale textual corpus with a
novel reconstruction from sketch objective using an extreme and selective
masking strategy, enabling it to generate diverse and high-quality texts given
sketches. Comparison with other competitive conditional language models (CLMs)
reveals the superiority of GENIUS's text generation quality. We further show
that GENIUS can be used as a strong and ready-to-use data augmentation tool for
various natural language processing (NLP) tasks. Most existing textual data
augmentation methods are either too conservative, by making small changes to
the original text, or too aggressive, by creating entirely new samples. With
GENIUS, we propose GeniusAug, which first extracts the target-aware sketches
from the original training set and then generates new samples based on the
sketches. Empirical experiments on 6 text classification datasets show that
GeniusAug significantly improves the models' performance in both
in-distribution (ID) and out-of-distribution (OOD) settings. We also
demonstrate the effectiveness of GeniusAug on named entity recognition (NER)
and machine reading comprehension (MRC) tasks. (Code and models are publicly
available at https://github.com/microsoft/SCGLab and
https://github.com/beyondguo/genius)
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 16:39:45 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Guo",
"Biyang",
""
],
[
"Gong",
"Yeyun",
""
],
[
"Shen",
"Yelong",
""
],
[
"Han",
"Songqiao",
""
],
[
"Huang",
"Hailiang",
""
],
[
"Duan",
"Nan",
""
],
[
"Chen",
"Weizhu",
""
]
] |
new_dataset
| 0.960205 |
2009.12619
|
Ivan Iudice Ph.D.
|
Donatella Darsena, Giacinto Gelli, Ivan Iudice, Francesco Verde
|
Sensing Technologies for Crowd Management, Adaptation, and Information
Dissemination in Public Transportation Systems: A Review
|
24 pages, 3 figures, 5 tables, accepted for publication in IEEE
Sensors Journal
| null |
10.1109/JSEN.2022.3223297
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Management of crowd information in public transportation (PT) systems is
crucial, both to foster sustainable mobility, by increasing the user's comfort
and satisfaction during normal operation, as well as to cope with emergency
situations, such as pandemic crises, as recently experienced with COVID-19
limitations. This paper presents a taxonomy and review of sensing technologies
based on Internet of Things (IoT) for real-time crowd analysis, which can be
adopted in the different segments of the PT system (buses/trams/trains,
railway/metro stations, and bus/tram stops). To discuss such technologies in a
clear systematic perspective, we introduce a reference architecture for crowd
management, which employs modern information and communication technologies
(ICT) in order to: (i) monitor and predict crowding events; (ii) implement
crowd-aware policies for real-time and adaptive operation control in
intelligent transportation systems (ITSs); (iii) inform in real-time the users
of the crowding status of the PT system, by means of electronic displays
installed inside vehicles or at bus/tram stops/stations, and/or by mobile
transport applications. It is envisioned that the innovative crowd management
functionalities enabled by ICT/IoT sensing technologies can be incrementally
implemented as an add-on to state-of-the-art ITS platforms, which are already
in use by major PT companies operating in urban areas. Moreover, it is argued
that, in this new framework, additional services can be delivered to the
passengers, such as, e.g., on-line ticketing, vehicle access control and
reservation in severely crowded situations, and evolved crowd-aware route
planning.
|
[
{
"version": "v1",
"created": "Sat, 26 Sep 2020 15:25:46 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Oct 2021 14:30:03 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Dec 2021 18:47:14 GMT"
},
{
"version": "v4",
"created": "Mon, 2 May 2022 08:24:37 GMT"
},
{
"version": "v5",
"created": "Mon, 19 Sep 2022 09:41:35 GMT"
},
{
"version": "v6",
"created": "Thu, 17 Nov 2022 11:55:06 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Darsena",
"Donatella",
""
],
[
"Gelli",
"Giacinto",
""
],
[
"Iudice",
"Ivan",
""
],
[
"Verde",
"Francesco",
""
]
] |
new_dataset
| 0.996108 |
2107.07000
|
Neha Thomas
|
Neha Thomas, Farimah Fazlollahi, Jeremy D. Brown, Katherine J.
Kuchenbecker
|
Sensorimotor-inspired Tactile Feedback and Control Improve Consistency
of Prosthesis Manipulation in the Absence of Direct Vision
|
Accepted to IROS 2021
| null |
10.1109/IROS51168.2021.9635885
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The lack of haptically aware upper-limb prostheses forces amputees to rely
largely on visual cues to complete activities of daily living. In contrast,
able-bodied individuals inherently rely on conscious haptic perception and
automatic tactile reflexes to govern volitional actions in situations that do
not allow for constant visual attention. We therefore propose a myoelectric
prosthesis system that reflects these concepts to aid manipulation performance
without direct vision. To implement this design, we built two fabric-based
tactile sensors that measure contact location along the palmar and dorsal sides
of the prosthetic fingers and grasp pressure at the tip of the prosthetic
thumb. Inspired by the natural sensorimotor system, we use the measurements
from these sensors to provide vibrotactile feedback of contact location and
implement a tactile grasp controller that uses automatic reflexes to prevent
over-grasping and object slip. We compare this system to a standard myoelectric
prosthesis in a challenging reach-to-pick-and-place task conducted without
direct vision; 17 able-bodied adults took part in this single-session
between-subjects study. Participants in the tactile group achieved more
consistent high performance compared to participants in the standard group.
These results indicate that the addition of contact-location feedback and
reflex control increases the consistency with which objects can be grasped and
moved without direct vision in upper-limb prosthetics.
|
[
{
"version": "v1",
"created": "Wed, 14 Jul 2021 21:03:53 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Thomas",
"Neha",
""
],
[
"Fazlollahi",
"Farimah",
""
],
[
"Brown",
"Jeremy D.",
""
],
[
"Kuchenbecker",
"Katherine J.",
""
]
] |
new_dataset
| 0.979297 |
2201.07434
|
Ahmed Abdelali
|
Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, and Hassan Sajjad
|
Interpreting Arabic Transformer Models
|
A new version of the paper was uploaded under a different reference:
arXiv:2210.09990
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Arabic is a Semitic language which is widely spoken with many dialects. Given
the success of pre-trained language models, many transformer models trained on
Arabic and its dialects have surfaced. While these models have been compared
with respect to downstream NLP tasks, no evaluation has been carried out to
directly compare the internal representations. We probe how linguistic
information is encoded in Arabic pretrained models, trained on different
varieties of Arabic language. We perform a layer and neuron analysis on the
models using three intrinsic tasks: two morphological tagging tasks based on
MSA (modern standard Arabic) and dialectal POS-tagging and a dialectal
identification task. Our analysis reveals interesting findings such as: i)
word morphology is learned at the lower and middle layers, ii) dialectal
identification necessitates more knowledge and is hence preserved even in the final
layers, iii) despite a large overlap in their vocabulary, the MSA-based models
fail to capture the nuances of Arabic dialects, iv) we found that neurons in
embedding layers are polysemous in nature, while the neurons in middle layers
are exclusive to specific properties.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 06:32:25 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Oct 2022 13:06:13 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Abdelali",
"Ahmed",
""
],
[
"Durrani",
"Nadir",
""
],
[
"Dalvi",
"Fahim",
""
],
[
"Sajjad",
"Hassan",
""
]
] |
new_dataset
| 0.995832 |
2201.11300
|
Shun Zhang
|
Shun Zhang, Tao Zhang, Zhili Chen, N. Xiong
|
Geo-MOEA: A Multi-Objective Evolutionary Algorithm with Geo-obfuscation
for Mobile Crowdsourcing Workers
|
14 pages, 13 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development of mobile Internet and sharing economy brings the
prosperity of Spatial Crowdsourcing (SC). SC applications assign various tasks
according to the reported location information of task requesters and outsourced
workers (such as DiDi, MeiTuan and Uber). However, SC-servers are often
untrustworthy and the exposure of users' locations raises privacy concerns. In
this paper, we design a framework called Geo-MOEA (Multi-Objective Evolutionary
Algorithm with Geo-obfuscation) to protect location privacy of workers involved
on SC platform in mobile networks environment. We propose an adaptive
regionalized obfuscation approach with inference error bounds based on
geo-indistinguishability (a strong notion of differential privacy), which is
suitable for the context of large-scale location data and task allocations.
This enables each worker to report a pseudo-location that is adaptively
generated with a personalized inference error threshold. Moreover, as a popular
computational intelligence method, MOEA is introduced to optimize the trade-off
between SC service availability and privacy protection while ensuring
theoretically the most general condition on protection location sets for larger
search space. Finally, the experimental results on two public datasets show
that our Geo-MOEA approach achieves up to 20% reduction in service quality loss
while guaranteeing differential and geo-distortion location privacy.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 03:37:23 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2022 08:38:34 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Nov 2022 02:43:26 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Zhang",
"Shun",
""
],
[
"Zhang",
"Tao",
""
],
[
"Chen",
"Zhili",
""
],
[
"Xiong",
"N.",
""
]
] |
new_dataset
| 0.996604 |
2203.12345
|
Benjamin Marussig
|
Benjamin Marussig, Ulrich Reif
|
Surface Patches with Rounded Corners
| null |
Computer Aided Geometric Design, Volume 97, August 2022, 102134
|
10.1016/j.cagd.2022.102134
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyze surface patches with a corner that is rounded in the sense that
the partial derivatives at that point are antiparallel. Sufficient conditions
for $G^1$ smoothness are given, which, up to a certain degenerate case, are
also necessary. Further, we investigate curvature integrability and present
examples.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 11:56:23 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2022 08:43:24 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Marussig",
"Benjamin",
""
],
[
"Reif",
"Ulrich",
""
]
] |
new_dataset
| 0.999126 |
2204.13420
|
Yiyang Shen
|
Yiyang Shen, Yongzhen Wang, Mingqiang Wei, Honghua Chen, Haoran Xie,
Gary Cheng, Fu Lee Wang
|
Semi-MoreGAN: A New Semi-supervised Generative Adversarial Network for
Mixture of Rain Removal
|
18 pages
| null |
10.1111/cgf.14690
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rain is one of the most common weather conditions, and it can completely degrade the image
quality and interfere with the performance of many computer vision tasks,
especially under heavy rain conditions. We observe that: (i) rain is a mixture
of rain streaks and rainy haze; (ii) the scene depth determines the intensity
of rain streaks and the transformation into the rainy haze; (iii) most existing
deraining methods are only trained on synthetic rainy images, and hence
generalize poorly to the real-world scenes. Motivated by these observations, we
propose a new SEMI-supervised Mixture Of rain REmoval Generative Adversarial
Network (Semi-MoreGAN), which consists of four key modules: (i) a novel
attentional depth prediction network to provide precise depth estimation; (ii)
a context feature prediction network composed of several well-designed detailed
residual blocks to produce detailed image context features; (iii) a pyramid
depth-guided non-local network to effectively integrate the image context with
the depth information, and produce the final rain-free images; and (iv) a
comprehensive semi-supervised loss function to make the model not limited to
synthetic datasets but generalize smoothly to real-world heavy rainy scenes.
Extensive experiments show clear improvements of our approach over twenty
representative state-of-the-arts on both synthetic and real-world rainy images.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 11:35:26 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 02:56:01 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Shen",
"Yiyang",
""
],
[
"Wang",
"Yongzhen",
""
],
[
"Wei",
"Mingqiang",
""
],
[
"Chen",
"Honghua",
""
],
[
"Xie",
"Haoran",
""
],
[
"Cheng",
"Gary",
""
],
[
"Wang",
"Fu Lee",
""
]
] |
new_dataset
| 0.984092 |
2205.00742
|
Ali Behrouz
|
Ali Behrouz, Farnoosh Hashemi, Laks V.S. Lakshmanan
|
FirmTruss Community Search in Multilayer Networks
|
Accepted to VLDB 2023 (PVLDB 2022)
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In applications such as biological, social, and transportation networks,
interactions between objects span multiple aspects. For accurately modeling
such applications, multilayer networks have been proposed. Community search
allows for personalized community discovery and has a wide range of
applications in large real-world networks. While community search has been
widely explored for single-layer graphs, the problem for multilayer graphs has
just recently attracted attention. Existing community models in multilayer
graphs have several limitations, including disconnectivity, free-rider effect,
resolution limits, and inefficiency. To address these limitations, we study the
problem of community search over large multilayer graphs. We first introduce
FirmTruss, a novel dense structure in multilayer networks, which extends the
notion of truss to multilayer graphs. We show that FirmTrusses possess nice
structural and computational properties and bring many advantages compared to
the existing models. Building on this, we present a new community model based
on FirmTruss, called FTCS, and show that finding an FTCS community is NP-hard.
We propose two efficient 2-approximation algorithms, and show that no
polynomial-time algorithm can have a better approximation guarantee unless P =
NP. We propose an index-based method to further improve the efficiency of the
algorithms. We then consider attributed multilayer networks and propose a new
community model based on network homophily. We show that community search in
attributed multilayer graphs is NP-hard and present an effective and efficient
approximation algorithm. Experimental studies on real-world graphs with
ground-truth communities validate the quality of the solutions we obtain and
the efficiency of the proposed algorithms.
|
[
{
"version": "v1",
"created": "Mon, 2 May 2022 08:48:55 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 05:34:48 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Behrouz",
"Ali",
""
],
[
"Hashemi",
"Farnoosh",
""
],
[
"Lakshmanan",
"Laks V. S.",
""
]
] |
new_dataset
| 0.998061 |
2206.07373
|
Ahmed Abdelali
|
Ahmed Abdelali, Nadir Durrani, Cenk Demiroglu, Fahim Dalvi, Hamdy
Mubarak, Kareem Darwish
|
NatiQ: An End-to-end Text-to-Speech System for Arabic
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
NatiQ is an end-to-end text-to-speech system for Arabic. Our speech synthesizer
uses an encoder-decoder architecture with attention. We used both
tacotron-based models (tacotron-1 and tacotron-2) and the faster transformer
model for generating mel-spectrograms from characters. We concatenated
Tacotron1 with the WaveRNN vocoder, Tacotron2 with the WaveGlow vocoder and
ESPnet transformer with the parallel wavegan vocoder to synthesize waveforms
from the spectrograms. We used in-house speech data for two voices: 1) neutral
male "Hamza"- narrating general content and news, and 2) expressive female
"Amina"- narrating children story books to train our models. Our best systems
achieve an average Mean Opinion Score (MOS) of 4.21 and 4.40 for Amina and
Hamza respectively. The objective evaluation of the systems using word and
character error rate (WER and CER) as well as the response time measured by
real-time factor favored the end-to-end architecture ESPnet. NatiQ demo is
available on-line at https://tts.qcri.org
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 08:28:08 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Nov 2022 22:00:45 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Abdelali",
"Ahmed",
""
],
[
"Durrani",
"Nadir",
""
],
[
"Demiroglu",
"Cenk",
""
],
[
"Dalvi",
"Fahim",
""
],
[
"Mubarak",
"Hamdy",
""
],
[
"Darwish",
"Kareem",
""
]
] |
new_dataset
| 0.999684 |
2211.06119
|
Michael Ying Yang
|
Yuren Cong, Jinhui Yi, Bodo Rosenhahn, Michael Ying Yang
|
SSGVS: Semantic Scene Graph-to-Video Synthesis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a natural extension of the image synthesis task, video synthesis has
attracted a lot of interest recently. Many image synthesis works utilize class
labels or text as guidance. However, neither labels nor text can provide
explicit temporal guidance, such as when an action starts or ends. To overcome
this limitation, we introduce semantic video scene graphs as input for video
synthesis, as they represent the spatial and temporal relationships between
objects in the scene. Since video scene graphs are usually temporally discrete
annotations, we propose a video scene graph (VSG) encoder that not only encodes
the existing video scene graphs but also predicts the graph representations for
unlabeled frames. The VSG encoder is pre-trained with different contrastive
multi-modal losses. A semantic scene graph-to-video synthesis framework
(SSGVS), based on the pre-trained VSG encoder, VQ-VAE, and auto-regressive
Transformer, is proposed to synthesize a video given an initial scene image and
a non-fixed number of semantic scene graphs. We evaluate SSGVS and other
state-of-the-art video synthesis models on the Action Genome dataset and
demonstrate the positive significance of video scene graphs in video synthesis.
The source code will be released.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 11:02:30 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 09:24:59 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Cong",
"Yuren",
""
],
[
"Yi",
"Jinhui",
""
],
[
"Rosenhahn",
"Bodo",
""
],
[
"Yang",
"Michael Ying",
""
]
] |
new_dataset
| 0.997428 |
2211.07022
|
Tanmay Samak
|
Tanmay Vilas Samak, Chinmay Vilas Samak
|
AutoDRIVE Simulator -- Technical Report
|
This work was a part of India Connect @ NTU (IC@N) Research
Internship Program 2020. arXiv admin note: substantial text overlap with
arXiv:2103.10030
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
AutoDRIVE is envisioned to be a comprehensive research platform for scaled
autonomous vehicles. This work is a stepping-stone towards the greater goal of
realizing such a research platform. Particularly, this work proposes a
pseudo-realistic simulator for scaled autonomous vehicles, which is targeted
towards simplicity, modularity and flexibility. The AutoDRIVE Simulator not
only mimics realistic system dynamics but also simulates a comprehensive sensor
suite and realistic actuator response. The simulator also features a
communication bridge in order to interface an externally developed autonomous
driving software stack, which allows users to design and develop their
algorithms conveniently and have them tested on our simulator. Presently, the
bridge is compatible with Robot Operating System (ROS) and can be interfaced
directly with the Python and C++ scripts developed as a part of this project.
The bridge supports local as well as distributed computing.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 21:49:15 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 03:32:28 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Samak",
"Tanmay Vilas",
""
],
[
"Samak",
"Chinmay Vilas",
""
]
] |
new_dataset
| 0.995126 |
2211.07393
|
Isabella Degen
|
Isabella Degen, Zahraa S. Abdallah
|
Temporal patterns in insulin needs for Type 1 diabetes
|
Submitted and accepted for presentation as a poster at the NeurIPS22
Time series for Health workshop, https://timeseriesforhealth.github.io/
| null |
10.48550/arxiv.2211.07393
| null |
cs.LG q-bio.QM stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Type 1 Diabetes (T1D) is a chronic condition where the body produces little
or no insulin, a hormone required for the cells to use blood glucose (BG) for
energy and to regulate BG levels in the body. Finding the right insulin dose
and time remains a complex, challenging and as yet unsolved control task. In
this study, we use the OpenAPS Data Commons dataset, which is an extensive
dataset collected in real-life conditions, to discover temporal patterns in
insulin need driven by well-known factors such as carbohydrates as well as
potentially novel factors. We utilised various time series techniques to spot
such patterns using matrix profile and multi-variate clustering. The better we
understand T1D and the factors impacting insulin needs, the more we can
contribute to building data-driven technology for T1D treatments.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 14:19:50 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 11:09:54 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Degen",
"Isabella",
""
],
[
"Abdallah",
"Zahraa S.",
""
]
] |
new_dataset
| 0.998868 |
2211.08295
|
Paul K. Mandal
|
Paul K. Mandal, Rakeshkumar Mahto
|
An FNet based Auto Encoder for Long Sequence News Story Generation
|
7 pages, 6 figures
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we design an autoencoder based on Google's FNet architecture in
order to generate text from a subset of news stories contained in Google's C4
dataset. We discuss previous attempts and methods to generate text from
autoencoders and non-LLM models. FNet offers multiple efficiency advantages over
BERT-based encoders, training 80% faster on GPUs and 70% faster on TPUs. We then
compare the outputs this autoencoder produces at different epochs. Finally, we
analyze what outputs the encoder produces with different seed text.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 16:48:09 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 13:52:14 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Mandal",
"Paul K.",
""
],
[
"Mahto",
"Rakeshkumar",
""
]
] |
new_dataset
| 0.997415 |
2211.08475
|
Tanmay Samak
|
Tanmay Vilas Samak, Chinmay Vilas Samak
|
AutoDRIVE -- Technical Report
|
This work was a part of 2021 Undergraduate Final Year Project at the
Department of Mechatronics Engineering, SRM Institute of Science and
Technology
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents AutoDRIVE, a comprehensive research and education platform
for implementing and validating intelligent transportation algorithms
pertaining to vehicular autonomy as well as smart city management. It is an
openly accessible platform featuring a 1:14 scale car with realistic drive and
steering actuators, redundant sensing modalities, high-performance
computational resources, and standard vehicular lighting system. Additionally,
the platform also offers a range of modules for rapid design and development of
the infrastructure. The AutoDRIVE platform encompasses Devkit, Simulator and
Testbed, a harmonious trio to develop, simulate and deploy autonomy algorithms.
It is compatible with a variety of software development packages, and supports
single as well as multi-agent paradigms through local and distributed
computing. AutoDRIVE is a product-level implementation, with a vast scope for
commercialization. This versatile platform has numerous applications, and they
are bound to keep increasing as new features are added. This work demonstrates
four such applications including autonomous parking, behavioural cloning,
intersection traversal and smart city management, each exploiting distinct
features of the platform.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 20:01:25 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Nov 2022 03:39:09 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Samak",
"Tanmay Vilas",
""
],
[
"Samak",
"Chinmay Vilas",
""
]
] |
new_dataset
| 0.999294 |
2211.09206
|
Yu Yuan
|
Yu Yuan and Jiaqi Wu and Lindong Wang and Zhongliang Jing and Henry
Leung and Shuyuan Zhu and Han Pan
|
Learning to Kindle the Starlight
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Capturing highly appreciated star field images is extremely challenging due
to light pollution, the requirements of specialized hardware, and the high
level of photographic skills needed. Deep learning-based techniques have
achieved remarkable results in low-light image enhancement (LLIE) but have not
been widely applied to star field image enhancement due to the lack of training
data. To address this problem, we construct the first Star Field Image
Enhancement Benchmark (SFIEB) that contains 355 real-shot and 854
semi-synthetic star field images, all having the corresponding reference
images. Using the presented dataset, we propose the first star field image
enhancement approach, namely StarDiffusion, based on conditional denoising
diffusion probabilistic models (DDPM). We introduce dynamic stochastic
corruptions to the inputs of conditional DDPM to improve the performance and
generalization of the network on our small-scale dataset. Experiments show
promising results of our method, which outperforms state-of-the-art low-light
image enhancement algorithms. The dataset and codes will be open-sourced.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 20:48:46 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Yuan",
"Yu",
""
],
[
"Wu",
"Jiaqi",
""
],
[
"Wang",
"Lindong",
""
],
[
"Jing",
"Zhongliang",
""
],
[
"Leung",
"Henry",
""
],
[
"Zhu",
"Shuyuan",
""
],
[
"Pan",
"Han",
""
]
] |
new_dataset
| 0.961126 |
2211.09245
|
Amanda Sutrisno
|
Amanda Sutrisno and David J. Braun
|
High-energy-density 3D-printed Composite Springs for Lightweight and
Energy-efficient Compliant Robots
|
This work has been submitted to the IEEE International Conference on
Robotics and Automation 2023 for possible publication. Copyright may be
transferred without notice, after which this version may no longer be
accessible
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Springs store mechanical energy similar to batteries storing electrical
energy. However, conventional springs are heavy and store limited amounts of
mechanical energy relative to batteries, i.e., they have low mass-energy-density.
Next-generation 3D printing technology could potentially enable manufacturing
low cost lightweight springs with high energy storage capacity. Here we present
a novel design of a high-energy-density 3D printed torsional spiral spring
using structural optimization. By optimizing the internal structure of the
spring we obtained a 45% increase in the mass energy density, compared to a
torsional spiral spring of uniform thickness. Our result suggests that
optimally designed 3D printed springs could enable robots to recycle more
mechanical energy per unit mass, potentially reducing the energy required to
control robots.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 22:23:02 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Sutrisno",
"Amanda",
""
],
[
"Braun",
"David J.",
""
]
] |
new_dataset
| 0.999589 |
2211.09257
|
Richard Soref
|
Richard Soref, Dusan Gostimirovic
|
An Integrated Optical Circuit Architecture for Inverse-Designed Silicon
Photonic Components
|
8 pages, 14 figures
| null | null | null |
cs.ET physics.optics
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this work, we demonstrate a compact toolkit of inverse-designed
topologically optimized silicon-photonic devices that are arranged in a
plug-and-play fashion to realize many different photonic integrated circuits,
both passive and active, each with a small footprint. The silicon-on-insulator
1550-nm toolkit contains a 2x2 3dB splitter-combiner, a 2x2 waveguide crossover
and a 2x2 all-forward add-drop resonator. The resonator can become a 2x2
electro-optical crossbar switch by means of the thermo-optical effect or
phase-change cladding or free-carrier injection. For each of the ten circuits
demonstrated in this work, the toolkit of photonic devices enables the compact
circuit to achieve low insertion loss and low crosstalk. By adopting the
sophisticated inverse-design approach, the design structure, shape, and sizing
of each individual device can be made more flexible to better suit the
architecture of the greater circuit. For a compact architecture, we present a
unified, parallel waveguide circuit framework into which the devices are
designed to fit seamlessly, thus enabling low-complexity circuit design.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 23:07:23 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Soref",
"Richard",
""
],
[
"Gostimirovic",
"Dusan",
""
]
] |
new_dataset
| 0.994136 |
2211.09267
|
Pei Zhou
|
Pei Zhou, Hyundong Cho, Pegah Jandaghi, Dong-Ho Lee, Bill Yuchen Lin,
Jay Pujara, Xiang Ren
|
Reflect, Not Reflex: Inference-Based Common Ground Improves Dialogue
Response Quality
|
Accepted at EMNLP-2022. 19 pages, 17 figures, 4 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Human communication relies on common ground (CG), the mutual knowledge and
beliefs shared by participants, to produce coherent and interesting
conversations. In this paper, we demonstrate that current response generation
(RG) models produce generic and dull responses in dialogues because they act
reflexively, failing to explicitly model CG, both due to the lack of CG in
training data and the standard RG training procedure. We introduce Reflect, a
dataset that annotates dialogues with explicit CG (materialized as inferences
approximating shared knowledge and beliefs) and solicits 9k diverse
human-generated responses each following one common ground. Using Reflect, we
showcase the limitations of current dialogue data and RG models: less than half
of the responses in current data are rated as high quality (sensible, specific,
and interesting) and models trained using this data have even lower quality,
while most Reflect responses are judged high quality. Next, we analyze whether
CG can help models produce better-quality responses by using Reflect CG to
guide RG models. Surprisingly, we find that simply prompting GPT3 to "think"
about CG generates 30% more quality responses, showing promising benefits to
integrating CG into the RG process.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 23:50:22 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Zhou",
"Pei",
""
],
[
"Cho",
"Hyundong",
""
],
[
"Jandaghi",
"Pegah",
""
],
[
"Lee",
"Dong-Ho",
""
],
[
"Lin",
"Bill Yuchen",
""
],
[
"Pujara",
"Jay",
""
],
[
"Ren",
"Xiang",
""
]
] |
new_dataset
| 0.989757 |
2211.09342
|
Hanan Ronaldo Quispe Condori
|
Hanan Quispe, Jorshinno Sumire, Patricia Condori, Edwin Alvarez and
Harley Vera
|
I see you: A Vehicle-Pedestrian Interaction Dataset from Traffic
Surveillance Cameras
|
paper accepted at LXAI workshop at NeurIPS 2022, github repository
https://github.com/hvzzzz/Vehicle_Trajectory_Dataset
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The development of autonomous vehicles raises new challenges in urban traffic
scenarios where vehicle-pedestrian interactions are frequent, e.g., a vehicle
yields to pedestrians, or a pedestrian slows down when approaching a vehicle.
Over the last few years, several datasets have been developed to model these
interactions. However, available datasets do not cover near-accident scenarios
that our dataset covers. We introduce I see you, a new vehicle-pedestrian
interaction dataset that tackles the lack of trajectory data in near-accident
scenarios using YOLOv5 and camera calibration methods. I see you consists of 170
near-accident occurrences at seven intersections in Cusco, Peru. This new
dataset and pipeline code are available on Github.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 05:03:54 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Quispe",
"Hanan",
""
],
[
"Sumire",
"Jorshinno",
""
],
[
"Condori",
"Patricia",
""
],
[
"Alvarez",
"Edwin",
""
],
[
"Vera",
"Harley",
""
]
] |
new_dataset
| 0.999699 |
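The "I see you" abstract above (2211.09342) extracts trajectories by combining YOLOv5 detections with camera calibration. A minimal sketch of the calibration step, assuming a ground-plane homography estimated from hand-picked correspondences with OpenCV; the point values below are invented purely for illustration:

```python
import numpy as np
import cv2

# Four image points (pixels) and their ground-plane positions (metres);
# the correspondences are made up for illustration.
img_pts = np.array([[100, 700], [1200, 690], [1150, 300], [180, 310]], dtype=np.float64)
world_pts = np.array([[0, 0], [12, 0], [12, 20], [0, 20]], dtype=np.float64)
H, _ = cv2.findHomography(img_pts, world_pts)

def pixels_to_ground(pixel_xy):
    """Map detections (e.g. YOLOv5 box bottom-centres) to ground-plane metres."""
    pts = np.asarray(pixel_xy, dtype=np.float64).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

print(pixels_to_ground([[640, 500]]))
```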
2211.09375
|
Jiaheng Liu
|
Jiaheng Liu, Tong He, Honghui Yang, Rui Su, Jiayi Tian, Junran Wu,
Hongcheng Guo, Ke Xu, Wanli Ouyang
|
3D-QueryIS: A Query-based Framework for 3D Instance Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Previous top-performing methods for 3D instance segmentation often maintain
inter-task dependencies and tend to lack robustness. Besides,
inevitable variations of different datasets make these methods become
particularly sensitive to hyper-parameter values and manifest poor
generalization capability. In this paper, we address the aforementioned
challenges by proposing a novel query-based method, termed as 3D-QueryIS, which
is detector-free, semantic segmentation-free, and cluster-free. Specifically,
we propose to generate representative points in an implicit manner, and use
them together with the initial queries to generate the informative instance
queries. Then, the class and binary instance mask predictions can be produced
by simply applying MLP layers on top of the instance queries and the extracted
point cloud embeddings. Thus, our 3D-QueryIS is free from the accumulated
errors caused by the inter-task dependencies. Extensive experiments on multiple
benchmark datasets demonstrate the effectiveness and efficiency of our proposed
3D-QueryIS method.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 07:04:53 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Liu",
"Jiaheng",
""
],
[
"He",
"Tong",
""
],
[
"Yang",
"Honghui",
""
],
[
"Su",
"Rui",
""
],
[
"Tian",
"Jiayi",
""
],
[
"Wu",
"Junran",
""
],
[
"Guo",
"Hongcheng",
""
],
[
"Xu",
"Ke",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
new_dataset
| 0.993165 |
2211.09385
|
Lee Hyun
|
Lee Hyun, Taehyun Kim, Hyolim Kang, Minjoo Ki, Hyeonchan Hwang, Kwanho
Park, Sharang Han, Seon Joo Kim
|
ComMU: Dataset for Combinatorial Music Generation
|
19 pages, 12 figures
| null | null | null |
cs.SD cs.AI cs.MM eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Commercial adoption of automatic music composition requires the capability of
generating diverse and high-quality music suitable for the desired context
(e.g., music for romantic movies, action games, restaurants, etc.). In this
paper, we introduce combinatorial music generation, a new task to create
varying background music based on given conditions. Combinatorial music
generation creates short samples of music with rich musical metadata, and
combines them to produce a complete piece of music. In addition, we introduce ComMU, the
first symbolic music dataset consisting of short music samples and their
corresponding 12 musical metadata for combinatorial music generation. Notable
properties of ComMU are that (1) the dataset is manually constructed by
professional composers with an objective guideline that induces regularity, and
(2) it has 12 musical metadata that embraces composers' intentions. Our results
show that we can generate diverse high-quality music only with metadata, and
that our unique metadata such as track-role and extended chord quality improves
the capacity of the automatic composition. We highly recommend watching our
video before reading the paper (https://pozalabs.github.io/ComMU).
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 07:25:09 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Hyun",
"Lee",
""
],
[
"Kim",
"Taehyun",
""
],
[
"Kang",
"Hyolim",
""
],
[
"Ki",
"Minjoo",
""
],
[
"Hwang",
"Hyeonchan",
""
],
[
"Park",
"Kwanho",
""
],
[
"Han",
"Sharang",
""
],
[
"Kim",
"Seon Joo",
""
]
] |
new_dataset
| 0.999763 |
2211.09386
|
Zehui Chen
|
Zehui Chen, Zhenyu Li, Shiquan Zhang, Liangji Fang, Qinhong Jiang,
Feng Zhao
|
BEVDistill: Cross-Modal BEV Distillation for Multi-View 3D Object
Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D object detection from multiple image views is a fundamental and
challenging task for visual scene understanding. Owing to its low cost and high
efficiency, multi-view 3D object detection has demonstrated promising
application prospects. However, accurately detecting objects through
perspective views is extremely difficult due to the lack of depth information.
Current approaches tend to adopt heavy backbones for image encoders, making
them inapplicable for real-world deployment. Different from the images, LiDAR
points are superior in providing spatial cues, resulting in highly precise
localization. In this paper, we explore the incorporation of LiDAR-based
detectors for multi-view 3D object detection. Instead of directly training a
depth prediction network, we unify the image and LiDAR features in the
Bird-Eye-View (BEV) space and adaptively transfer knowledge across
non-homogenous representations in a teacher-student paradigm. To this end, we
propose \textbf{BEVDistill}, a cross-modal BEV knowledge distillation (KD)
framework for multi-view 3D object detection. Extensive experiments demonstrate
that the proposed method outperforms current KD approaches on a
highly-competitive baseline, BEVFormer, without introducing any extra cost in
the inference phase. Notably, our best model achieves 59.4 NDS on the nuScenes
test leaderboard, achieving new state-of-the-art in comparison with various
image-based detectors. Code will be available at
https://github.com/zehuichen123/BEVDistill.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 07:26:14 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Chen",
"Zehui",
""
],
[
"Li",
"Zhenyu",
""
],
[
"Zhang",
"Shiquan",
""
],
[
"Fang",
"Liangji",
""
],
[
"Jiang",
"Qinhong",
""
],
[
"Zhao",
"Feng",
""
]
] |
new_dataset
| 0.998667 |
2211.09401
|
Hung-Chieh Fang
|
Hung-Chieh Fang, Kuo-Han Hung, Chao-Wei Huang, Yun-Nung Chen
|
Open-Domain Conversational Question Answering with Historical Answers
|
AACL-IJCNLP 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-domain conversational question answering can be viewed as two tasks:
passage retrieval and conversational question answering, where the former
relies on selecting candidate passages from a large corpus and the latter
requires better understanding of a question with contexts to predict the
answers. This paper proposes ConvADR-QA that leverages historical answers to
boost retrieval performance and further achieves better answering performance.
In our proposed framework, the retrievers use a teacher-student framework to
reduce noises from previous turns. Our experiments on the benchmark dataset,
OR-QuAC, demonstrate that our model outperforms existing baselines in both
extractive and generative reader settings, well justifying the effectiveness of
historical answers for open-domain conversational question answering.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 08:20:57 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Fang",
"Hung-Chieh",
""
],
[
"Hung",
"Kuo-Han",
""
],
[
"Huang",
"Chao-Wei",
""
],
[
"Chen",
"Yun-Nung",
""
]
] |
new_dataset
| 0.988388 |
2211.09407
|
Hyeong-Seok Choi
|
Hyeong-Seok Choi, Jinhyeok Yang, Juheon Lee, Hyeongju Kim
|
NANSY++: Unified Voice Synthesis with Neural Analysis and Synthesis
|
Submitted to ICLR 2023
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Various applications of voice synthesis have been developed independently
despite the fact that they generate "voice" as output in common. In addition,
most of the voice synthesis models still require a large amount of audio data
paired with annotated labels (e.g., text transcription and music score) for
training. To this end, we propose a unified framework of synthesizing and
manipulating voice signals from analysis features, dubbed NANSY++. The backbone
network of NANSY++ is trained in a self-supervised manner that does not require
any annotations paired with audio. After training the backbone network, we
efficiently tackle four voice applications - i.e. voice conversion,
text-to-speech, singing voice synthesis, and voice designing - by partially
modeling the analysis features required for each task. Extensive experiments
show that the proposed framework offers competitive advantages such as
controllability, data efficiency, and fast training convergence, while
providing high quality synthesis. Audio samples: tinyurl.com/8tnsy3uc.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 08:29:57 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Choi",
"Hyeong-Seok",
""
],
[
"Yang",
"Jinhyeok",
""
],
[
"Lee",
"Juheon",
""
],
[
"Kim",
"Hyeongju",
""
]
] |
new_dataset
| 0.97445 |
2211.09469
|
Pengpeng Zeng
|
Pengpeng Zeng, Haonan Zhang, Lianli Gao, Xiangpeng Li, Jin Qian, Heng
Tao Shen
|
Visual Commonsense-aware Representation Network for Video Captioning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating consecutive descriptions for videos, i.e., Video Captioning,
requires taking full advantage of visual representation along with the
generation process. Existing video captioning methods focus on making an
exploration of spatial-temporal representations and their relationships to
produce inferences. However, such methods only exploit the superficial
association contained in the video itself without considering the intrinsic
visual commonsense knowledge that exists in a video dataset, which may hinder
their capability to reason over such knowledge and produce accurate descriptions. To
address this problem, we propose a simple yet effective method, called Visual
Commonsense-aware Representation Network (VCRN), for video captioning.
Specifically, we construct a Video Dictionary, a plug-and-play component,
obtained by clustering all video features from the total dataset into multiple
clustered centers without additional annotation. Each center implicitly
represents a visual commonsense concept in the video domain, which is utilized
in our proposed Visual Concept Selection (VCS) to obtain a video-related
concept feature. Next, a Conceptual Integration Generation (CIG) is proposed to
enhance the caption generation. Extensive experiments on three public video
captioning benchmarks: MSVD, MSR-VTT, and VATEX, demonstrate that our method
reaches state-of-the-art performance, indicating the effectiveness of our
method. In addition, our approach is integrated into an existing method for
video question answering and improves its performance, further showing the
generalization of our method. Source code has been released at
https://github.com/zchoi/VCRN.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 11:27:15 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Zeng",
"Pengpeng",
""
],
[
"Zhang",
"Haonan",
""
],
[
"Gao",
"Lianli",
""
],
[
"Li",
"Xiangpeng",
""
],
[
"Qian",
"Jin",
""
],
[
"Shen",
"Heng Tao",
""
]
] |
new_dataset
| 0.990905 |
2211.09507
|
Peng Wang Dr.
|
Christopher Carr, Shenglin Wang, Peng Wang, Liangxiu Han
|
Attacking Digital Twins of Robotic Systems to Compromise Security and
Safety
|
4 pages, 1 figure
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security and safety are of paramount importance to human-robot interaction,
either for autonomous robots or human-robot collaborative manufacturing. The
intertwined relationship between security and safety has imposed new challenges
on the emerging digital twin systems of various types of robots. To be
specific, an attack on either the cyber-physical system or the digital-twin
system could cause severe consequences for the other. Particularly, an attack
on a digital-twin system that is synchronized with a cyber-physical system
could cause lateral damage to humans and other surrounding facilities. This
paper demonstrates that for Robot Operating System (ROS) driven systems,
attacks such as the person-in-the-middle attack of the digital-twin system
could eventually lead to a collapse of the cyber-physical system, whether it is
an industrial robot or an autonomous mobile robot, causing unexpected
consequences. We also discuss potential solutions to alleviate such attacks.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 13:06:40 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Carr",
"Christopher",
""
],
[
"Wang",
"Shenglin",
""
],
[
"Wang",
"Peng",
""
],
[
"Han",
"Liangxiu",
""
]
] |
new_dataset
| 0.998543 |
2211.09518
|
Yiyang Shen
|
Yiyang Shen, Rongwei Yu, Peng Wu, Haoran Xie, Lina Gong, Jing Qin, and
Mingqiang Wei
|
ImLiDAR: Cross-Sensor Dynamic Message Propagation Network for 3D Object
Detection
|
12 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR and camera, as two different sensors, supply geometric (point clouds)
and semantic (RGB images) information of 3D scenes. However, it is still
challenging for existing methods to fuse data from the two cross sensors,
making them complementary for quality 3D object detection (3OD). We propose
ImLiDAR, a new 3OD paradigm to narrow the cross-sensor discrepancies by
progressively fusing the multi-scale features of camera Images and LiDAR point
clouds. ImLiDAR is able to provide the detection head with cross-sensor yet
robustly fused features. To achieve this, two core designs exist in ImLiDAR.
First, we propose a cross-sensor dynamic message propagation module to combine
the best of the multi-scale image and point features. Second, we raise a direct
set prediction problem that allows designing an effective set-based detector to
tackle the inconsistency of the classification and localization confidences,
and the sensitivity of hand-tuned hyperparameters. Besides, the novel set-based
detector can be detachable and easily integrated into various detection
networks. Comparisons on both the KITTI and SUN-RGBD datasets show clear visual
and numerical improvements of our ImLiDAR over twenty-three state-of-the-art
3OD methods.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 13:31:23 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Shen",
"Yiyang",
""
],
[
"Yu",
"Rongwei",
""
],
[
"Wu",
"Peng",
""
],
[
"Xie",
"Haoran",
""
],
[
"Gong",
"Lina",
""
],
[
"Qin",
"Jing",
""
],
[
"Wei",
"Mingqiang",
""
]
] |
new_dataset
| 0.999635 |
2211.09519
|
Peng Wang Dr.
|
Christopher Carr, Peng Wang, Shenglin Wang
|
A Human-friendly Verbal Communication Platform for Multi-Robot Systems:
Design and Principles
|
7 pages and 7 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While multi-robot systems have been broadly researched and deployed, their
success is built chiefly upon the dependency on network infrastructures,
whether wired or wireless. Aiming at the first steps toward de-coupling the
application of multi-robot systems from the reliance on network
infrastructures, this paper proposes a human-friendly verbal communication
platform for multi-robot systems, following the deliberately designed
principles of being adaptable, transparent, and secure. The platform is network
independent and is therefore capable of functioning in environments that lack
network infrastructure, from underwater to planetary exploration. A
series of experiments were conducted to demonstrate the platform's capability
in multi-robot systems communication and task coordination, showing its
potential in infrastructure-free applications. To benefit the community, we
have made the codes open source at https://github.com/jynxmagic/MSc_AI_project
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 13:31:55 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Carr",
"Christopher",
""
],
[
"Wang",
"Peng",
""
],
[
"Wang",
"Shenglin",
""
]
] |
new_dataset
| 0.958821 |
2211.09620
|
Zhongying Deng
|
Zhongying Deng, Yanqi Chen, Lihao Liu, Shujun Wang, Rihuan Ke,
Carola-Bibiane Schonlieb, Angelica I Aviles-Rivero
|
TrafficCAM: A Versatile Dataset for Traffic Flow Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic flow analysis is revolutionising traffic management. By qualifying
traffic flow data, traffic control bureaus could provide drivers with real-time
alerts, advising the fastest routes and therefore optimising transportation
logistics and reducing congestion. The existing traffic flow datasets have two
major limitations: they feature a limited number of classes, usually limited to
one type of vehicle, and unlabelled data are scarce. In this paper, we
introduce a new benchmark traffic flow image dataset called TrafficCAM. Our
dataset distinguishes itself by two major highlights. Firstly, TrafficCAM
provides both pixel-level and instance-level semantic labelling along with a
large range of types of vehicles and pedestrians. It is composed of a large and
diverse set of video sequences recorded in streets from eight Indian cities
with stationary cameras. Secondly, TrafficCAM aims to establish a new benchmark
for developing fully-supervised tasks, and importantly, semi-supervised
learning techniques. It is the first dataset that provides a vast amount of
unlabelled data, helping to better capture traffic flow qualification under a
low cost annotation requirement. More precisely, our dataset has 4,402 image
frames with semantic and instance annotations along with 59,944 unlabelled
image frames. We validate our new dataset through a large and comprehensive
range of experiments on several state-of-the-art approaches under four
different settings: fully-supervised semantic and instance segmentation, and
semi-supervised semantic and instance segmentation tasks. Our benchmark dataset
will be released.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 16:14:38 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Deng",
"Zhongying",
""
],
[
"Chen",
"Yanqi",
""
],
[
"Liu",
"Lihao",
""
],
[
"Wang",
"Shujun",
""
],
[
"Ke",
"Rihuan",
""
],
[
"Schonlieb",
"Carola-Bibiane",
""
],
[
"Aviles-Rivero",
"Angelica I",
""
]
] |
new_dataset
| 0.999783 |
2211.09716
|
Nuno Guedelha
|
Nuno Guedelha (1), Venus Pasandi (1), Giuseppe L'Erario (1), Silvio
Traversaro (1), Daniele Pucci (1) ((1) Istituto Italiano di Tecnologia,
Genova, Italy)
|
A Flexible MATLAB/Simulink Simulator for Robotic Floating-base Systems
in Contact with the Ground
|
To be published in IEEE-IRC 2022 proceedings, 5 pages with 6 figures,
equal contribution by authors Nuno Guedelha and Venus Pasandi
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physics simulators are widely used in robotics fields, from mechanical design
to dynamic simulation, and controller design. This paper presents an
open-source MATLAB/Simulink simulator for rigid-body articulated systems,
including manipulators and floating-base robots. Thanks to MATLAB/Simulink
features like MATLAB system classes and Simulink function blocks, the presented
simulator combines a programmatic and block-based approach, resulting in a
flexible design in the sense that different parts, including its physics
engine, robot-ground interaction model, and state evolution algorithm are
simply accessible and editable. Moreover, through the use of Simulink dynamic
mask blocks, the proposed simulation framework supports robot models
integrating open-chain and closed-chain kinematics with any desired number of
links interacting with the ground. The simulator can also integrate
second-order actuator dynamics. Furthermore, the simulator benefits from a
one-line installation and an easy-to-use Simulink interface.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 17:49:44 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Guedelha",
"Nuno",
""
],
[
"Pasandi",
"Venus",
""
],
[
"L'Erario",
"Giuseppe",
""
],
[
"Traversaro",
"Silvio",
""
],
[
"Pucci",
"Daniele",
""
]
] |
new_dataset
| 0.999118 |
2211.09731
|
Xin Zhang
|
Xin Zhang, Iv\'an Vall\'es-P\'erez, Andreas Stolcke, Chengzhu Yu,
Jasha Droppo, Olabanji Shonibare, Roberto Barra-Chicote, Venkatesh
Ravichandran
|
Stutter-TTS: Controlled Synthesis and Improved Recognition of Stuttered
Speech
|
8 pages, 3 figures, 2 tables
|
NeurIPS Workshop on SyntheticData4ML, December 2022
| null | null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stuttering is a speech disorder where the natural flow of speech is
interrupted by blocks, repetitions or prolongations of syllables, words and
phrases. The majority of existing automatic speech recognition (ASR) interfaces
perform poorly on utterances with stutter, mainly due to lack of matched
training data. Synthesis of speech with stutter thus presents an opportunity to
improve ASR for this type of speech. We describe Stutter-TTS, an end-to-end
neural text-to-speech model capable of synthesizing diverse types of stuttering
utterances. We develop a simple, yet effective prosody-control strategy whereby
additional tokens are introduced into source text during training to represent
specific stuttering characteristics. By choosing the position of the stutter
tokens, Stutter-TTS allows word-level control of where stuttering occurs in the
synthesized utterance. We are able to synthesize stutter events with high
accuracy (F1-scores between 0.63 and 0.84, depending on stutter type). By
fine-tuning an ASR model on synthetic stuttered speech we are able to reduce
word error by 5.7% relative on stuttered utterances, with only minor (<0.2%
relative) degradation for fluent utterances.
|
[
{
"version": "v1",
"created": "Fri, 4 Nov 2022 23:45:31 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Zhang",
"Xin",
""
],
[
"Vallés-Pérez",
"Iván",
""
],
[
"Stolcke",
"Andreas",
""
],
[
"Yu",
"Chengzhu",
""
],
[
"Droppo",
"Jasha",
""
],
[
"Shonibare",
"Olabanji",
""
],
[
"Barra-Chicote",
"Roberto",
""
],
[
"Ravichandran",
"Venkatesh",
""
]
] |
new_dataset
| 0.99944 |
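The Stutter-TTS abstract above (2211.09731) controls where stuttering occurs by inserting extra tokens into the source text. A minimal sketch of that token-insertion idea, with a hypothetical token inventory rather than the paper's actual symbols:

```python
# Hypothetical control-token inventory; the actual Stutter-TTS token set may differ.
STUTTER_TOKENS = {"repetition": "<rep>", "prolongation": "<pro>", "block": "<blk>"}

def insert_stutter_tokens(text, positions_to_types):
    """Insert a stutter-type token before the words at the given indices,
    giving word-level control over where stuttering occurs in synthesis."""
    words = text.split()
    out = []
    for i, word in enumerate(words):
        if i in positions_to_types:
            out.append(STUTTER_TOKENS[positions_to_types[i]])
        out.append(word)
    return " ".join(out)

print(insert_stutter_tokens("please pass the salt", {0: "repetition", 3: "block"}))
# -> "<rep> please pass the <blk> salt"
```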
2211.09751
|
Nayeeb Rashid
|
Nayeeb Rashid, Swapnil Saha, Mohseu Rashid Subah, Rizwan Ahmed Robin,
Syed Mortuza Hasan Fahim, Shahed Ahmed, Talha Ibn Mahmud
|
Heart Abnormality Detection from Heart Sound Signals using MFCC Feature
and Dual Stream Attention Based Network
| null | null | null | null |
cs.SD cs.AI eess.AS physics.med-ph q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Cardiovascular diseases are one of the leading causes of death in today's
world, and early screening of heart condition plays a crucial role in preventing
them. The heart sound signal is one of the primary indicators of heart condition
and can be used to detect abnormality in the heart. The acquisition of the heart
sound signal is non-invasive, cost-effective and requires minimal equipment.
But currently the detection of heart abnormality from heart sound signal
depends largely on the expertise and experience of the physician. As such an
automatic detection system for heart abnormality detection from heart sound
signal can be a great asset for the people living in underdeveloped areas. In
this paper we propose a novel deep learning based dual stream network with
attention mechanism that uses both the raw heart sound signal and the MFCC
features to detect abnormality in heart condition of a patient. The deep neural
network has a convolutional stream that uses the raw heart sound signal and a
recurrent stream that uses the MFCC features of the signal. The features from
these two streams are merged together using a novel attention network and
passed through the classification network. The model is trained on the largest
publicly available dataset of PCG signal and achieves an accuracy of 87.11,
sensitivity of 82.41, specificity of 91.8 and a MACC of 87.12.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 18:20:46 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Rashid",
"Nayeeb",
""
],
[
"Saha",
"Swapnil",
""
],
[
"Subah",
"Mohseu Rashid",
""
],
[
"Robin",
"Rizwan Ahmed",
""
],
[
"Fahim",
"Syed Mortuza Hasan",
""
],
[
"Ahmed",
"Shahed",
""
],
[
"Mahmud",
"Talha Ibn",
""
]
] |
new_dataset
| 0.979004 |
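The abstract above (2211.09751) describes a dual-stream network over the raw heart sound and its MFCC features. A minimal PyTorch sketch of that layout, assuming librosa for MFCC extraction; the layer sizes and the concatenation-based fusion are illustrative simplifications of the paper's attention merge:

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

class DualStreamNet(nn.Module):
    """Toy dual-stream classifier: a CNN over the raw waveform and a GRU over
    MFCC frames, fused by concatenation (the paper fuses with attention)."""
    def __init__(self, n_mfcc=13, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.rnn = nn.GRU(input_size=n_mfcc, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(16 + hidden, 2)    # normal vs. abnormal

    def forward(self, wave, mfcc):
        c = self.conv(wave).squeeze(-1)           # (B, 16)
        _, h = self.rnn(mfcc)                     # h: (1, B, hidden)
        return self.head(torch.cat([c, h[-1]], dim=1))

sr = 2000
wave = np.random.randn(sr * 5).astype(np.float32)                    # fake 5 s PCG recording
mfcc = librosa.feature.mfcc(y=wave, sr=sr, n_mfcc=13).T.astype(np.float32)

model = DualStreamNet()
logits = model(torch.from_numpy(wave)[None, None, :], torch.from_numpy(mfcc)[None])
print(logits.shape)                               # torch.Size([1, 2])
```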
2211.09770
|
Amaya Dharmasiri
|
Amaya Dharmasiri, Dinithi Dissanayake, Mohamed Afham, Isuru
Dissanayake, Ranga Rodrigo, Kanchana Thilakarathna
|
3DLatNav: Navigating Generative Latent Spaces for Semantic-Aware 3D
Object Manipulation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D generative models have been recently successful in generating realistic 3D
objects in the form of point clouds. However, most models do not offer
controllability to manipulate the shape semantics of component object parts
without extensive semantic attribute labels or other reference point clouds.
Moreover, beyond the ability to perform simple latent vector arithmetic or
interpolations, there is a lack of understanding of how part-level semantics of
3D shapes are encoded in their corresponding generative latent spaces. In this
paper, we propose 3DLatNav; a novel approach to navigating pretrained
generative latent spaces to enable controlled part-level semantic manipulation
of 3D objects. First, we propose a part-level weakly-supervised shape semantics
identification mechanism using latent representations of 3D shapes. Then, we
transfer that knowledge to a pretrained 3D object generative latent space to
unravel disentangled embeddings to represent different shape semantics of
component parts of an object in the form of linear subspaces, despite the
unavailability of part-level labels during the training. Finally, we utilize
those identified subspaces to show that controllable 3D object part
manipulation can be achieved by applying the proposed framework to any
pretrained 3D generative model. With two novel quantitative metrics to evaluate
the consistency and localization accuracy of part-level manipulations, we show
that 3DLatNav outperforms existing unsupervised latent disentanglement methods
in identifying latent directions that encode part-level shape semantics of 3D
objects. With multiple ablation studies and testing on state-of-the-art
generative models, we show that 3DLatNav can implement controlled part-level
semantic manipulations on an input point cloud while preserving other features
and the realistic nature of the object.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 18:47:56 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Dharmasiri",
"Amaya",
""
],
[
"Dissanayake",
"Dinithi",
""
],
[
"Afham",
"Mohamed",
""
],
[
"Dissanayake",
"Isuru",
""
],
[
"Rodrigo",
"Ranga",
""
],
[
"Thilakarathna",
"Kanchana",
""
]
] |
new_dataset
| 0.980072 |
2211.09799
|
Xinyu Zhang
|
Xinyu Zhang, Jiahui Chen, Junkun Yuan, Qiang Chen, Jian Wang, Xiaodi
Wang, Shumin Han, Xiaokang Chen, Jimin Pi, Kun Yao, Junyu Han, Errui Ding,
Jingdong Wang
|
CAE v2: Context Autoencoder with CLIP Target
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Masked image modeling (MIM) learns visual representation by masking and
reconstructing image patches. Applying the reconstruction supervision on the
CLIP representation has been proven effective for MIM. However, it is still
under-explored how CLIP supervision in MIM influences performance. To
investigate strategies for refining the CLIP-targeted MIM, we study two
critical elements in MIM, i.e., the supervision position and the mask ratio,
and reveal two interesting perspectives, relying on our developed simple
pipeline, context autoencoder with CLIP target (CAE v2). Firstly, we observe
that the supervision on visible patches achieves remarkable performance, even
better than that on masked patches, where the latter is the standard format in
the existing MIM methods. Secondly, the optimal mask ratio positively
correlates to the model size. That is to say, the smaller the model, the lower
the mask ratio needs to be. Driven by these two discoveries, our simple and
concise approach CAE v2 achieves superior performance on a series of downstream
tasks. For example, a vanilla ViT-Large model achieves 81.7% and 86.7% top-1
accuracy on linear probing and fine-tuning on ImageNet-1K, and 55.9% mIoU on
semantic segmentation on ADE20K with the pre-training for 300 epochs. We hope
our findings can be helpful guidelines for the pre-training in the MIM area,
especially for the small-scale models.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 18:58:33 GMT"
}
] | 2022-11-18T00:00:00 |
[
[
"Zhang",
"Xinyu",
""
],
[
"Chen",
"Jiahui",
""
],
[
"Yuan",
"Junkun",
""
],
[
"Chen",
"Qiang",
""
],
[
"Wang",
"Jian",
""
],
[
"Wang",
"Xiaodi",
""
],
[
"Han",
"Shumin",
""
],
[
"Chen",
"Xiaokang",
""
],
[
"Pi",
"Jimin",
""
],
[
"Yao",
"Kun",
""
],
[
"Han",
"Junyu",
""
],
[
"Ding",
"Errui",
""
],
[
"Wang",
"Jingdong",
""
]
] |
new_dataset
| 0.993653 |
2009.12293
|
Yuke Zhu
|
Yuke Zhu and Josiah Wong and Ajay Mandlekar and Roberto
Mart\'in-Mart\'in and Abhishek Joshi and Soroush Nasiriany and Yifeng Zhu
|
robosuite: A Modular Simulation Framework and Benchmark for Robot
Learning
|
For more information, please visit https://robosuite.ai
| null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
robosuite is a simulation framework for robot learning powered by the MuJoCo
physics engine. It offers a modular design for creating robotic tasks as well
as a suite of benchmark environments for reproducible research. This paper
discusses the key system modules and the benchmark environments of our new
release robosuite v1.0.
|
[
{
"version": "v1",
"created": "Fri, 25 Sep 2020 15:32:31 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 21:06:04 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Zhu",
"Yuke",
""
],
[
"Wong",
"Josiah",
""
],
[
"Mandlekar",
"Ajay",
""
],
[
"Martín-Martín",
"Roberto",
""
],
[
"Joshi",
"Abhishek",
""
],
[
"Nasiriany",
"Soroush",
""
],
[
"Zhu",
"Yifeng",
""
]
] |
new_dataset
| 0.996378 |
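For the robosuite entry above (2009.12293), a minimal usage sketch following the project's documented `suite.make` entry point; the exact keyword arguments should be checked against the installed version:

```python
import numpy as np
import robosuite as suite

# Create a benchmark environment (keyword names follow the robosuite v1.x docs).
env = suite.make(
    env_name="Lift",
    robots="Panda",
    has_renderer=False,
    has_offscreen_renderer=False,
    use_camera_obs=False,
)

obs = env.reset()
for _ in range(10):
    action = np.random.uniform(-1.0, 1.0, env.action_dim)   # random exploration
    obs, reward, done, info = env.step(action)
env.close()
```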
2104.10249
|
Saba Dadsetan
|
Saba Dadsetan, David Pichler, David Wilson, Naira Hovakimyan, Jennifer
Hobbs
|
Superpixels and Graph Convolutional Neural Networks for Efficient
Detection of Nutrient Deficiency Stress from Aerial Imagery
| null |
2021 IEEE/CVF Conference on Computer Vision and Pattern
Recognition Workshops (CVPRW)
|
10.1109/CVPRW53098.2021.00330
| null |
cs.CV cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Advances in remote sensing technology have led to the capture of massive
amounts of data. Increased image resolution, more frequent revisit times, and
additional spectral channels have created an explosion in the amount of data
that is available to provide analyses and intelligence across domains,
including agriculture. However, the processing of this data comes with a cost
in terms of computation time and money, both of which must be considered when
the goal of an algorithm is to provide real-time intelligence to improve
efficiencies. Specifically, we seek to identify nutrient deficient areas from
remotely sensed data to alert farmers to regions that require attention;
detection of nutrient deficient areas is a key task in precision agriculture as
farmers must quickly respond to struggling areas to protect their harvests.
Past methods have focused on pixel-level classification (i.e. semantic
segmentation) of the field to achieve these tasks, often using deep learning
models with tens-of-millions of parameters. In contrast, we propose a much
lighter graph-based method to perform node-based classification. We first use
Simple Linear Iterative Clustering (SLIC) to produce superpixels across the field.
Then, to perform segmentation across the non-Euclidean domain of superpixels,
we leverage a Graph Convolutional Neural Network (GCN). This model has
4-orders-of-magnitude fewer parameters than a CNN model and trains in a matter
of minutes.
|
[
{
"version": "v1",
"created": "Tue, 20 Apr 2021 21:18:16 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Apr 2021 00:44:11 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Nov 2022 23:27:59 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Dadsetan",
"Saba",
""
],
[
"Pichler",
"David",
""
],
[
"Wilson",
"David",
""
],
[
"Hovakimyan",
"Naira",
""
],
[
"Hobbs",
"Jennifer",
""
]
] |
new_dataset
| 0.989966 |
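The superpixel/GCN abstract above (2104.10249) first runs SLIC and then classifies superpixel nodes with a graph convolutional network. A minimal sketch of that pipeline, using scikit-image for SLIC and a single hand-written graph-convolution step; the features, adjacency construction, and layer sizes are illustrative, not the paper's model:

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic

image = np.random.rand(128, 128, 3)                   # stand-in for an aerial tile
labels = slic(image, n_segments=100, compactness=10, start_label=0)
n = labels.max() + 1

# Node features: mean colour of each superpixel.
feats = np.stack([image[labels == i].mean(axis=0) for i in range(n)]).astype(np.float32)

# Adjacency from superpixels that touch horizontally or vertically.
A = np.zeros((n, n), dtype=np.float32)
for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
    if a != b:
        A[a, b] = A[b, a] = 1.0
for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
    if a != b:
        A[a, b] = A[b, a] = 1.0

# One graph-convolution step: scores = D^-1/2 (A + I) D^-1/2 X W.
A_hat = A + np.eye(n, dtype=np.float32)
d = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = torch.from_numpy(d[:, None] * A_hat * d[None, :])
W = nn.Linear(3, 2)                                   # 2 node classes: deficient / healthy
node_scores = A_norm @ W(torch.from_numpy(feats))
print(node_scores.shape)                              # (num_superpixels, 2)
```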
2107.02168
|
Dongqi Fu
|
Dongqi Fu, Jingrui He
|
DPPIN: A Biological Repository of Dynamic Protein-Protein Interaction
Network Data
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In the big data era, the relationship between entries becomes more and more
complex. Many graph (or network) algorithms have already paid attention to
dynamic networks, which are more suitable than static ones for fitting the
complex real-world scenarios with evolving structures and features. To
contribute to the dynamic network representation learning and mining research,
we provide a new bunch of label-adequate, dynamics-meaningful, and
attribute-sufficient dynamic networks from the health domain. To be specific,
in our proposed repository DPPIN, we have a total of 12 individual dynamic network
datasets at different scales, and each dataset is a dynamic protein-protein
interaction network describing protein-level interactions of yeast cells. We
hope these domain-specific node features, structure evolution patterns, and
node and graph labels could inspire the regularization techniques to increase
the performance of graph machine learning algorithms in a more complex setting.
Also, we link potential applications with our DPPIN by designing various
dynamic graph experiments, where DPPIN could indicate future research
opportunities for some tasks by presenting challenges on state-of-the-art
baseline algorithms. Finally, we identify future directions to improve the
utility of this repository and welcome constructive inputs from the community.
All resources (e.g., data and code) of this work are deployed and publicly
available at https://github.com/DongqiFu/DPPIN.
|
[
{
"version": "v1",
"created": "Mon, 5 Jul 2021 17:52:55 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Sep 2021 03:02:23 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Mar 2022 16:07:34 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Mar 2022 16:52:04 GMT"
},
{
"version": "v5",
"created": "Wed, 16 Nov 2022 06:27:29 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Fu",
"Dongqi",
""
],
[
"He",
"Jingrui",
""
]
] |
new_dataset
| 0.999767 |
2109.13855
|
Ivan P Yamshchikov
|
Alexey Tikhonov and Ivan P. Yamshchikov
|
Actionable Entities Recognition Benchmark for Interactive Fiction
| null | null | null | null |
cs.CL cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new natural language processing task - Actionable
Entities Recognition (AER) - recognition of entities that protagonists could
interact with for further plot development. Though similar to classical Named
Entity Recognition (NER), it has profound differences. In particular, it is
crucial for interactive fiction, where the agent needs to detect entities that
might be useful in the future. We also discuss if AER might be further helpful
for the systems dealing with narrative processing since actionable entities
profoundly impact the causal relationship in a story. We validate the proposed
task on two previously available datasets and present a new benchmark dataset
for the AER task that includes 5550 descriptions with one or more actionable
entities.
|
[
{
"version": "v1",
"created": "Tue, 28 Sep 2021 16:39:59 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Nov 2022 12:35:04 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Nov 2022 10:24:21 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Tikhonov",
"Alexey",
""
],
[
"Yamshchikov",
"Ivan P.",
""
]
] |
new_dataset
| 0.99757 |
2112.04838
|
Julian Speith
|
Julian Speith, Florian Schweins, Maik Ender, Marc Fyrbiak, Alexander
May, Christof Paar
|
How Not to Protect Your IP -- An Industry-Wide Break of IEEE 1735
Implementations
| null | null |
10.1109/SP46214.2022.9833605
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern hardware systems are composed of a variety of third-party Intellectual
Property (IP) cores to implement their overall functionality. Since hardware
design is a globalized process involving various (untrusted) stakeholders, a
secure management of the valuable IP between authors and users is essential to
protect them from unauthorized access and modification. To this end, the widely
adopted IEEE standard 1735-2014 was created to ensure confidentiality and
integrity.
In this paper, we outline structural weaknesses in IEEE 1735 that cannot be
fixed with cryptographic solutions (given the contemporary hardware design
process) and thus render the standard inherently insecure. We practically
demonstrate the weaknesses by recovering the private keys of IEEE 1735
implementations from major Electronic Design Automation (EDA) tool vendors,
namely Intel, Xilinx, Cadence, Siemens, Microsemi, and Lattice, while results
on a seventh case study are withheld. As a consequence, we can decrypt, modify,
and re-encrypt all allegedly protected IP cores designed for the respective
tools, thus leading to an industry-wide break. As part of this analysis, we are
the first to publicly disclose three RSA-based white-box schemes that are used
in real-world products and present cryptanalytical attacks for all of them,
finally resulting in key recovery.
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 11:13:56 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Speith",
"Julian",
""
],
[
"Schweins",
"Florian",
""
],
[
"Ender",
"Maik",
""
],
[
"Fyrbiak",
"Marc",
""
],
[
"May",
"Alexander",
""
],
[
"Paar",
"Christof",
""
]
] |
new_dataset
| 0.96462 |
2202.02170
|
Eva Vanmassenhove
|
Dimitar Shterionov and Eva Vanmassenhove
|
The Ecological Footprint of Neural Machine Translation Systems
|
25 pages, 3 figures, 10 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Over the past decade, deep learning (DL) has led to significant advancements
in various fields of artificial intelligence, including machine translation
(MT). These advancements would not be possible without the ever-growing volumes
of data and the hardware that allows large DL models to be trained efficiently.
Due to the large amount of computing cores as well as dedicated memory,
graphics processing units (GPUs) are a more effective hardware solution for
training and inference with DL models than central processing units (CPUs).
However, the former is very power demanding. The electrical power consumption
has economical as well as ecological implications.
This chapter focuses on the ecological footprint of neural MT systems. It
starts from the power drain during the training of and the inference with
neural MT models and moves towards the environment impact, in terms of carbon
dioxide emissions. Different architectures (RNN and Transformer) and different
GPUs (consumer-grade NVidia 1080Ti and workstation-grade NVidia P100) are
compared. Then, the overall CO2 offload is calculated for Ireland and the
Netherlands. The NMT models and their ecological impact are compared to common
household appliances to draw a more clear picture.
The last part of this chapter analyses quantization, a technique for reducing
the size and complexity of models, as a way to reduce power consumption. As
quantized models can run on CPUs, they present a power-efficient inference
solution without depending on a GPU.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 14:56:41 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Shterionov",
"Dimitar",
""
],
[
"Vanmassenhove",
"Eva",
""
]
] |
new_dataset
| 0.98378 |
2205.01404
|
Subba Reddy Oota
|
Subba Reddy Oota, Jashn Arora, Veeral Agarwal, Mounika Marreddy,
Manish Gupta and Bapi Raju Surampudi
|
Neural Language Taskonomy: Which NLP Tasks are the most Predictive of
fMRI Brain Activity?
|
18 pages, 18 figures
| null |
10.18653/v1/2022.naacl-main.235
| null |
cs.CL cs.AI cs.LG q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Several popular Transformer based language models have been found to be
successful for text-driven brain encoding. However, existing literature
leverages only pretrained text Transformer models and has not explored the
efficacy of task-specific learned Transformer representations. In this work, we
explore transfer learning from representations learned for ten popular natural
language processing tasks (two syntactic and eight semantic) for predicting
brain responses from two diverse datasets: Pereira (subjects reading sentences
from paragraphs) and Narratives (subjects listening to the spoken stories).
Encoding models based on task features are used to predict activity in
different regions across the whole brain. Features from coreference resolution,
NER, and shallow syntax parsing explain greater variance for the reading
activity. On the other hand, for the listening activity, tasks such as
paraphrase generation, summarization, and natural language inference show
better encoding performance. Experiments across all 10 task representations
provide the following cognitive insights: (i) language left hemisphere has
higher predictive brain activity versus language right hemisphere, (ii)
posterior medial cortex, temporo-parieto-occipital junction, dorsal frontal
lobe have higher correlation versus early auditory and auditory association
cortex, (iii) syntactic and semantic tasks display a good predictive
performance across brain regions for reading and listening stimuli resp.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 10:23:08 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Oota",
"Subba Reddy",
""
],
[
"Arora",
"Jashn",
""
],
[
"Agarwal",
"Veeral",
""
],
[
"Marreddy",
"Mounika",
""
],
[
"Gupta",
"Manish",
""
],
[
"Surampudi",
"Bapi Raju",
""
]
] |
new_dataset
| 0.978402 |
2205.02546
|
Milica Petkovic
|
Tijana Devaja, Milica Petkovic, Francisco J. Escribano, Cedomir
Stefanovic, Dejan Vukobratovic
|
Slotted Aloha with Capture for OWC-based IoT: Finite Block-Length
Performance Analysis
|
Submitted
| null | null | null |
cs.NI cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a Slotted ALOHA (SA)-inspired solution for an
indoor optical wireless communication (OWC)-based Internet of Things (IoT)
system. Assuming that the OWC receiver exploits the capture effect, we are
interested in the derivation of error probability of decoding a short-length
data packet originating from a randomly selected OWC IoT transmitter. The
presented OWC system analysis rests on the derivation of the
signal-to-noise-and-interference-ratio (SINR) statistics and usage of finite
block-length (FBL) information theory, from which relevant error probability
and throughput are derived. Using the derived expressions, we obtain numerical
results which are further utilized to characterize the trade-offs between the
system performance and the OWC system setup parameters. The indoor OWC-based
system geometry plays an important role in the system performance; thus, the
presented results can be used as a guideline for the system design to optimize
the performance of the SA-based random access protocol.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 10:16:42 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Nov 2022 15:02:12 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Devaja",
"Tijana",
""
],
[
"Petkovic",
"Milica",
""
],
[
"Escribano",
"Francisco J.",
""
],
[
"Stefanovic",
"Cedomir",
""
],
[
"Vukobratovic",
"Dejan",
""
]
] |
new_dataset
| 0.986567 |
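The slotted ALOHA abstract above (2205.02546) analyses capture at the receiver via SINR statistics. A Monte Carlo sketch of the capture mechanism under a simplified Rayleigh-fading model with made-up parameters; it does not reproduce the paper's finite block-length analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def sa_capture_throughput(n_users=20, p_tx=0.05, snr_db=20.0, sinr_thr_db=6.0, slots=50_000):
    """Fraction of slots in which one packet is captured: the strongest
    (Rayleigh-faded) transmission must exceed the SINR threshold over the
    sum of the interfering packets plus unit noise power."""
    snr = 10 ** (snr_db / 10)
    thr = 10 ** (sinr_thr_db / 10)
    captured = 0
    for _ in range(slots):
        active = rng.random(n_users) < p_tx
        k = active.sum()
        if k == 0:
            continue
        power = snr * rng.exponential(size=k)    # Rayleigh fading -> exponential power
        best = power.argmax()
        sinr = power[best] / (power.sum() - power[best] + 1.0)
        captured += sinr >= thr
    return captured / slots

print(sa_capture_throughput())
```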
2207.14147
|
Tingying He
|
Tingying He, Petra Isenberg, Raimund Dachselt, and Tobias Isenberg
|
BeauVis: A Validated Scale for Measuring the Aesthetic Pleasure of
Visual Representations
| null |
IEEE Transactions on Visualization and Computer Graphics 29(1),
2023
|
10.1109/TVCG.2022.3209390
| null |
cs.HC cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We developed and validated a rating scale to assess the aesthetic pleasure
(or beauty) of a visual data representation: the BeauVis scale. With our work
we offer researchers and practitioners a simple instrument to compare the
visual appearance of different visualizations, unrelated to data or context of
use. Our rating scale can, for example, be used to accompany results from
controlled experiments or be used as informative data points during in-depth
qualitative studies. Given the lack of an aesthetic pleasure scale dedicated to
visualizations, researchers have mostly chosen their own terms to study or
compare the aesthetic pleasure of visualizations. Yet, many terms are possible
and currently no clear guidance on their effectiveness regarding the judgment
of aesthetic pleasure exists. To solve this problem, we engaged in a multi-step
research process to develop the first validated rating scale specifically for
judging the aesthetic pleasure of a visualization (osf.io/fxs76). Our final
BeauVis scale consists of five items, "enjoyable," "likable," "pleasing,"
"nice," and "appealing." Beyond this scale itself, we contribute (a) a
systematic review of the terms used in past research to capture aesthetics, (b)
an investigation with visualization experts who suggested terms to use for
judging the aesthetic pleasure of a visualization, and (c) a confirmatory
survey in which we used our terms to study the aesthetic pleasure of a set of 3
visualizations.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 15:10:09 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Sep 2022 11:01:31 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"He",
"Tingying",
""
],
[
"Isenberg",
"Petra",
""
],
[
"Dachselt",
"Raimund",
""
],
[
"Isenberg",
"Tobias",
""
]
] |
new_dataset
| 0.999559 |
2208.03299
|
Patrick Lewis
|
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio
Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel,
Edouard Grave
|
Atlas: Few-shot Learning with Retrieval Augmented Language Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have shown impressive few-shot results on a wide range
of tasks. However, when knowledge is key for such results, as is the case for
tasks such as question answering and fact checking, massive parameter counts to
store knowledge seem to be needed. Retrieval augmented models are known to
excel at knowledge intensive tasks without the need for as many parameters, but
it is unclear whether they work in few-shot settings. In this work we present
Atlas, a carefully designed and pre-trained retrieval augmented language model
able to learn knowledge intensive tasks with very few training examples. We
perform evaluations on a wide range of tasks, including MMLU, KILT and
NaturalQuestions, and study the impact of the content of the document index,
showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy
on Natural Questions using only 64 examples, outperforming a 540B-parameter
model by 3% despite having 50x fewer parameters.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2022 17:39:22 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 15:01:33 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Nov 2022 16:38:18 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Izacard",
"Gautier",
""
],
[
"Lewis",
"Patrick",
""
],
[
"Lomeli",
"Maria",
""
],
[
"Hosseini",
"Lucas",
""
],
[
"Petroni",
"Fabio",
""
],
[
"Schick",
"Timo",
""
],
[
"Dwivedi-Yu",
"Jane",
""
],
[
"Joulin",
"Armand",
""
],
[
"Riedel",
"Sebastian",
""
],
[
"Grave",
"Edouard",
""
]
] |
new_dataset
| 0.959807 |
2210.11948
|
Mitchell Wortsman
|
Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig
Schmidt, Michael Rabbat, Ari S. Morcos
|
lo-fi: distributed fine-tuning without communication
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When fine-tuning large neural networks, it is common to use multiple nodes
and to communicate gradients at each optimization step. By contrast, we
investigate completely local fine-tuning, which we refer to as lo-fi. During
lo-fi, each node is fine-tuned independently without any communication. Then,
the weights are averaged across nodes at the conclusion of fine-tuning. When
fine-tuning DeiT-base and DeiT-large on ImageNet, this procedure matches
accuracy in-distribution and improves accuracy under distribution shift
compared to the baseline, which observes the same amount of data but
communicates gradients at each step. We also observe that lo-fi matches the
baseline's performance when fine-tuning OPT language models (up to 1.3B
parameters) on Common Crawl. By removing the communication requirement, lo-fi
reduces resource barriers for fine-tuning large models and enables fine-tuning
in settings with prohibitive communication cost.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 20:15:18 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Nov 2022 21:59:57 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Wortsman",
"Mitchell",
""
],
[
"Gururangan",
"Suchin",
""
],
[
"Li",
"Shen",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Schmidt",
"Ludwig",
""
],
[
"Rabbat",
"Michael",
""
],
[
"Morcos",
"Ari S.",
""
]
] |
new_dataset
| 0.991255 |
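The lo-fi abstract above (2210.11948) fine-tunes each node independently and averages the weights at the end. A minimal sketch of that averaging step over PyTorch state dicts, illustrative only and not the authors' implementation:

```python
import copy
import torch

def average_state_dicts(state_dicts):
    """Uniformly average parameters from independently fine-tuned replicas
    (integer buffers such as BatchNorm counters would need special handling)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts], dim=0).mean(dim=0)
    return avg

# Toy check with two independently initialised linear models.
models = [torch.nn.Linear(4, 2) for _ in range(2)]
merged = torch.nn.Linear(4, 2)
merged.load_state_dict(average_state_dicts([m.state_dict() for m in models]))
print(merged.weight)
```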
2210.17367
|
Yuya Yamamoto
|
Yuya Yamamoto, Juhan Nam, Hiroko Terasawa
|
Analysis and Detection of Singing Techniques in Repertoires of J-POP
Solo Singers
|
Accepted at ISMIR 2022, appendix website:
https://yamathcy.github.io/ISMIR2022J-POP/
| null | null | null |
cs.SD cs.DL cs.IR cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we focus on singing techniques within the scope of music
information retrieval research. We investigate how singers use singing
techniques using real-world recordings of famous solo singers in Japanese
popular music songs (J-POP). First, we built a new dataset of singing
techniques. The dataset consists of 168 commercial J-POP songs, and each song
is annotated using various singing techniques with timestamps and vocal pitch
contours. We also present descriptive statistics of singing techniques on the
dataset to clarify what and how often singing techniques appear. We further
explored the difficulty of the automatic detection of singing techniques using
previously proposed machine learning techniques. In the detection, we also
investigate the effectiveness of auxiliary information (i.e., pitch and
distribution of label duration), in addition to providing a baseline. The best
result achieves 40.4% at macro-average F-measure on nine-way multi-class
detection. We provide the annotation of the dataset and its details on the
appendix website.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 14:45:01 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 19:31:27 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Yamamoto",
"Yuya",
""
],
[
"Nam",
"Juhan",
""
],
[
"Terasawa",
"Hiroko",
""
]
] |
new_dataset
| 0.99902 |
2211.03418
|
YuanFu Yang
|
YuanFu Yang, Min Sun
|
QRF: Implicit Neural Representations with Quantum Radiance Fields
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Photorealistic rendering of real-world scenes is a tremendous challenge with
a wide range of applications, including mixed reality (MR), and virtual reality
(VR). Neural networks, which have long been investigated in the context of
solving differential equations, have previously been introduced as implicit
representations for photorealistic rendering. However, realistic rendering
using classic computing is challenging because it requires time-consuming
optical ray marching, and suffers from computational bottlenecks due to the curse of
dimensionality. In this paper, we propose Quantum Radiance Fields (QRF), which
integrate the quantum circuit, quantum activation function, and quantum volume
rendering for implicit scene representation. The results indicate that QRF not
only exploits the advantage of quantum computing, such as high speed, fast
convergence, and high parallelism, but also ensure high quality of volume
rendering.
|
[
{
"version": "v1",
"created": "Mon, 7 Nov 2022 10:23:32 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 16:22:41 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Nov 2022 09:14:12 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Yang",
"YuanFu",
""
],
[
"Sun",
"Min",
""
]
] |
new_dataset
| 0.979764 |
2211.05448
|
Siyao Li
|
Siyao Li and Giuseppe Caire
|
On the Capacity of "Beam-Pointing" Channels with Block Memory and
Feedback: The Binary Case
|
7 pages, 2 figures, this paper has been accepted by the 2022 Asilomar
Conference on Signals, Systems, and Computers
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Millimeter-wave (mmWave) communication is one of the key enablers for 5G
systems as it provides larger system bandwidth and the possibility of packing
numerous antennas in a small form factor for highly directional communication.
In order to materialize the potentially very high beamforming gain, the
transmitter and receiver beams need to be aligned. Practically, the
Angle-of-Departure (AoD) remains almost constant over numerous consecutive time
slots, which presents a state-dependent channel with memory. In addition, the
backscatter signal can be modeled as a (causal) generalized feedback. The
capacity of such channels with memory is generally an open problem in
information theory. Towards solving this difficult problem, we consider a "toy
model", consisting of a binary state-dependent (BSD) channel with in-block
memory (iBM) [1] and one unit-delayed feedback. The capacity of this model
under the peak transmission cost constraint is characterized by an iterative
closed-form expression. We propose a capacity-achieving scheme where the
transmitted signal carries information and meanwhile uniformly and randomly
probes the beams with the help of feedback.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 09:43:09 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Nov 2022 16:46:56 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Li",
"Siyao",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
new_dataset
| 0.998419 |
2211.05976
|
Jiancheng An
|
Jiancheng An, Chao Xu, Qingqing Wu, Derrick Wing Kwan Ng, Marco Di
Renzo, Chau Yuen, and Lajos Hanzo
|
Codebook-Based Solutions for Reconfigurable Intelligent Surfaces and
Their Open Challenges
|
8 pages, 4 figures, 2 tables. Accepted for publication in IEEE
Wireless Communications
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surfaces (RIS) constitute a revolutionary technology to
cost-effectively improve the performance of wireless networks. We first review
the existing framework of channel estimation and passive beamforming (CE & PBF)
in RIS-assisted communication systems. To reduce the excessive pilot signaling
overhead and implementation complexity of the CE & PBF framework, we conceive a
codebook-based framework to strike flexible tradeoffs between communication
performance and signaling overhead. Moreover, we provide useful insights into
the codebook design and learning mechanisms of the RIS reflection pattern.
Finally, we analyze the scalability of the proposed framework by flexibly
adapting the training overhead to the specified quality-of-service requirements
and then elaborate on its appealing advantages over the existing CE & PBF
approaches. It is shown that our novel codebook-based framework can be
beneficially applied to all RIS-assisted scenarios and avoids the curse of
model dependency faced by its existing counterparts, thus constituting a
competitive solution for practical RIS-assisted communication systems.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 02:50:01 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"An",
"Jiancheng",
""
],
[
"Xu",
"Chao",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Di Renzo",
"Marco",
""
],
[
"Yuen",
"Chau",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.993515 |
2211.06474
|
Ann Lee
|
Peng-Jen Chen, Kevin Tran, Yilin Yang, Jingfei Du, Justine Kao, Yu-An
Chung, Paden Tomasello, Paul-Ambroise Duquenne, Holger Schwenk, Hongyu Gong,
Hirofumi Inaguma, Sravya Popuri, Changhan Wang, Juan Pino, Wei-Ning Hsu, Ann
Lee
|
Speech-to-Speech Translation For A Real-world Unwritten Language
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study speech-to-speech translation (S2ST) that translates speech from one
language into another language and focus on building systems to support
languages without standard text writing systems. We use English-Taiwanese
Hokkien as a case study, and present an end-to-end solution from training data
collection, modeling choices to benchmark dataset release. First, we present
efforts on creating human annotated data, automatically mining data from large
unlabeled speech datasets, and adopting pseudo-labeling to produce weakly
supervised data. On the modeling, we take advantage of recent advances in
applying self-supervised discrete representations as target for prediction in
S2ST and show the effectiveness of leveraging additional text supervision from
Mandarin, a language similar to Hokkien, in model training. Finally, we release
an S2ST benchmark set to facilitate future research in this field. The demo can
be found at https://huggingface.co/spaces/facebook/Hokkien_Translation .
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 20:21:38 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Chen",
"Peng-Jen",
""
],
[
"Tran",
"Kevin",
""
],
[
"Yang",
"Yilin",
""
],
[
"Du",
"Jingfei",
""
],
[
"Kao",
"Justine",
""
],
[
"Chung",
"Yu-An",
""
],
[
"Tomasello",
"Paden",
""
],
[
"Duquenne",
"Paul-Ambroise",
""
],
[
"Schwenk",
"Holger",
""
],
[
"Gong",
"Hongyu",
""
],
[
"Inaguma",
"Hirofumi",
""
],
[
"Popuri",
"Sravya",
""
],
[
"Wang",
"Changhan",
""
],
[
"Pino",
"Juan",
""
],
[
"Hsu",
"Wei-Ning",
""
],
[
"Lee",
"Ann",
""
]
] |
new_dataset
| 0.998347 |
2211.08460
|
Laura Nicolas-S\'aenz
|
Laura Nicol\'as-S\'aenz, Agapito Ledezma, Javier Pascau, Arrate
Mu\~noz-Barrutia
|
ABANICCO: A New Color Space for Multi-Label Pixel Classification and
Color Segmentation
|
Working Paper
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In any computer vision task involving color images, a necessary step is
classifying pixels according to color and segmenting the respective areas.
However, the development of methods able to successfully complete this task has
proven challenging, mainly due to the gap between human color perception,
linguistic color terms, and digital representation. In this paper, we propose a
novel method combining geometric analysis of color theory, fuzzy color spaces,
and multi-label systems for the automatic classification of pixels according to
12 standard color categories (Green, Yellow, Light Orange, Deep Orange, Red,
Pink, Purple, Ultramarine, Blue, Teal, Brown, and Neutral). Moreover, we
present a robust, unsupervised, unbiased strategy for color naming based on
statistics and color theory. ABANICCO was tested against the state of the art
in color classification and with the standardized ISCC-NBS color system,
providing accurate classification and a standard, easily understandable
alternative for hue naming recognizable by humans and machines. We expect this
solution to become the base to successfully tackle a myriad of problems in all
fields of computer vision, such as region characterization, histopathology
analysis, fire detection, product quality prediction, object description, and
hyperspectral imaging.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 19:26:51 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Nicolás-Sáenz",
"Laura",
""
],
[
"Ledezma",
"Agapito",
""
],
[
"Pascau",
"Javier",
""
],
[
"Muñoz-Barrutia",
"Arrate",
""
]
] |
new_dataset
| 0.999506 |
2211.08483
|
Nathana\"el Jarrass\'e Dr
|
Alexis Poignant, Nathanael Jarrasse and Guillaume Morel
|
Virtually turning robotic manipulators into worn devices: opening new
horizons for wearable assistive robotics
|
4 pages, 3 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Robotic sensorimotor extensions (supernumerary limbs, prostheses, handheld
tools) are worn devices used to interact with the nearby environment, whether
to assist the capabilities of impaired users or to enhance the dexterity of
industrial operators. Despite numerous mechanical achievements, embedding these
robotic devices remains critical due to their weight and discomfort. To
emancipate from these mechanical constraints, we propose a new hybrid system
using a virtually worn robotic arm in augmented-reality, and a real robotic
manipulator servoed on such a virtual representation. We aim at bringing an
illusion of wearing a robotic system while its weight is fully offloaded, as we
believe this approach could open new horizons for the study of wearable
robotics without any intrinsic impairment of the human movement abilities.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 20:21:37 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Poignant",
"Alexis",
""
],
[
"Jarrasse",
"Nathanael",
""
],
[
"Morel",
"Guillaume",
""
]
] |
new_dataset
| 0.99414 |
2211.08504
|
Sibendu Paul
|
Sibendu Paul, Kunal Rao, Giuseppe Coviello, Murugan Sankaradas, Oliver
Po, Y. Charlie Hu and Srimat Chakradhar
|
APT: Adaptive Perceptual quality based camera Tuning using reinforcement
learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cameras are increasingly being deployed in cities, enterprises and roads
world-wide to enable many applications in public safety, intelligent
transportation, retail, healthcare and manufacturing. Often, after initial
deployment of the cameras, the environmental conditions and the scenes around
these cameras change, and our experiments show that these changes can adversely
impact the accuracy of insights from video analytics. This is because the
camera parameter settings, though optimal at deployment time, are not the best
settings for good-quality video capture as the environmental conditions and
scenes around a camera change during operation. Capturing poor-quality video
adversely affects the accuracy of analytics. To mitigate the loss in accuracy
of insights, we propose a novel, reinforcement-learning based system APT that
dynamically, and remotely (over 5G networks), tunes the camera parameters, to
ensure a high-quality video capture, which mitigates any loss in accuracy of
video analytics. As a result, such tuning restores the accuracy of insights
when environmental conditions or scene content change. APT uses reinforcement
learning, with no-reference perceptual quality estimation as the reward
function. We conducted extensive real-world experiments, where we
simultaneously deployed two cameras side-by-side overlooking an enterprise
parking lot (one camera only has manufacturer-suggested default setting, while
the other camera is dynamically tuned by APT during operation). Our experiments
demonstrated that due to dynamic tuning by APT, the analytics insights are
consistently better at all times of the day: the accuracy of the object detection
video analytics application was improved on average by ~42%. Since our reward
function is independent of any analytics task, APT can be readily used for
different video analytics tasks.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 21:02:48 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Paul",
"Sibendu",
""
],
[
"Rao",
"Kunal",
""
],
[
"Coviello",
"Giuseppe",
""
],
[
"Sankaradas",
"Murugan",
""
],
[
"Po",
"Oliver",
""
],
[
"Hu",
"Y. Charlie",
""
],
[
"Chakradhar",
"Srimat",
""
]
] |
new_dataset
| 0.97076 |
2211.08526
|
Yuanchao Li
|
Yuanchao Li, Catherine Lai, Divesh Lala, Koji Inoue, Tatsuya Kawahara
|
Alzheimer's Dementia Detection through Spontaneous Dialogue with
Proactive Robotic Listeners
|
Accepted for HRI2022 Late-Breaking Report
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the aging of society continues to accelerate, Alzheimer's Disease (AD) has
received more and more attention from not only medical but also other fields,
such as computer science, over the past decade. Since speech is considered one
of the effective ways to diagnose cognitive decline, AD detection from speech
has emerged as a hot topic. Nevertheless, such approaches fail to tackle
several key issues: 1) AD is a complex neurocognitive disorder which means it
is inappropriate to conduct AD detection using utterance information alone
while ignoring dialogue information; 2) Utterances of AD patients contain many
disfluencies that affect speech recognition yet are helpful to diagnosis; 3) AD
patients tend to speak less, causing dialogue breakdown as the disease
progresses. This fact leads to a small number of utterances, which may cause
detection bias. Therefore, in this paper, we propose a novel AD detection
architecture consisting of two major modules: an ensemble AD detector and a
proactive listener. This architecture can be embedded in the dialogue system of
conversational robots for healthcare.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 21:52:41 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Li",
"Yuanchao",
""
],
[
"Lai",
"Catherine",
""
],
[
"Lala",
"Divesh",
""
],
[
"Inoue",
"Koji",
""
],
[
"Kawahara",
"Tatsuya",
""
]
] |
new_dataset
| 0.962507 |
2211.08543
|
Leijie Wu
|
Leijie Wu, Song Guo, Yaohong Ding, Junxiao Wang, Wenchao Xu, Richard
Yida Xu and Jie Zhang
|
Demystify Self-Attention in Vision Transformers from a Semantic
Perspective: Analysis and Application
|
10 pages, 11 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-attention mechanisms, especially multi-head self-attention (MSA), have
achieved great success in many fields such as computer vision and natural
language processing. However, many existing vision transformer (ViT) works
simply inherent transformer designs from NLP to adapt vision tasks, while
ignoring the fundamental difference between ``how MSA works in image and
language settings''. Language naturally contains highly semantic structures
that are directly interpretable by humans. Its basic unit (word) is discrete
without redundant information, which readily supports interpretable studies on
MSA mechanisms of language transformers. In contrast, visual data exhibits a
fundamentally different structure: Its basic unit (pixel) is a natural
low-level representation with significant redundancies in the neighbourhood,
which poses obvious challenges to the interpretability of MSA mechanism in ViT.
In this paper, we introduce a typical image processing technique, i.e.,
scale-invariant feature transforms (SIFTs), which maps low-level
representations into mid-level spaces, and annotates extensive discrete
keypoints with semantically rich information. Next, we construct a weighted
patch interrelation analysis based on SIFT keypoints to capture the attention
patterns hidden in patches with different semantic concentrations.
Interestingly, we find this quantitative analysis is not only an effective
complement to the interpretability of MSA mechanisms in ViT, but can also be
applied to 1) spurious correlation discovery and ``prompting'' during model
inference, and 2) guided model pre-training acceleration. Experimental results
on both applications show significant advantages over baselines, demonstrating
the efficacy of our method.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 15:18:31 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Wu",
"Leijie",
""
],
[
"Guo",
"Song",
""
],
[
"Ding",
"Yaohong",
""
],
[
"Wang",
"Junxiao",
""
],
[
"Xu",
"Wenchao",
""
],
[
"Xu",
"Richard Yida",
""
],
[
"Zhang",
"Jie",
""
]
] |
new_dataset
| 0.950964 |
2211.08545
|
Shuaichen Chang
|
Shuaichen Chang, David Palzer, Jialin Li, Eric Fosler-Lussier,
Ningchuan Xiao
|
MapQA: A Dataset for Question Answering on Choropleth Maps
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Choropleth maps are a common visual representation for region-specific
tabular data and are used in a number of different venues (newspapers,
articles, etc). These maps are human-readable but are often challenging to deal
with when trying to extract data for screen readers, analyses, or other related
tasks. Recent research into Visual-Question Answering (VQA) has studied
question answering on human-generated charts (ChartQA), such as bar, line, and
pie charts. However, little work has paid attention to understanding maps;
general VQA models, and ChartQA models, suffer when asked to perform this task.
To facilitate and encourage research in this area, we present MapQA, a
large-scale dataset of ~800K question-answer pairs over ~60K map images. Our
task tests various levels of map understanding, from surface questions about
map styles to complex questions that require reasoning on the underlying data.
We present the unique challenges of MapQA that frustrate most strong baseline
algorithms designed for ChartQA and general VQA tasks. We also present a novel
algorithm, Visual Multi-Output Data Extraction based QA (V-MODEQA) for MapQA.
V-MODEQA extracts the underlying structured data from a map image with a
multi-output model and then performs reasoning on the extracted data. Our
experimental results show that V-MODEQA has better overall performance and
robustness on MapQA than the state-of-the-art ChartQA and VQA algorithms by
capturing the unique properties in map question answering.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 22:31:38 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Chang",
"Shuaichen",
""
],
[
"Palzer",
"David",
""
],
[
"Li",
"Jialin",
""
],
[
"Fosler-Lussier",
"Eric",
""
],
[
"Xiao",
"Ningchuan",
""
]
] |
new_dataset
| 0.999825 |
2211.08570
|
Shadrokh Samavi
|
Mohammadreza Naderi, Nader Karimi, Ali Emami, Shahram Shirani,
Shadrokh Samavi
|
Dynamic-Pix2Pix: Noise Injected cGAN for Modeling Input and Target
Domain Joint Distributions with Limited Training Data
|
15 pages, 7 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Learning to translate images from a source to a target domain with
applications such as converting simple line drawings to oil paintings has
attracted significant attention. The quality of translated images is directly
related to two crucial issues. First, the consistency of the output
distribution with that of the target is essential. Second, the generated output
should have a high correlation with the input. Conditional Generative
Adversarial Networks, cGANs, are the most common models for translating images.
The performance of a cGAN drops when we use a limited training dataset. In this
work, we increase the Pix2Pix (a form of cGAN) target distribution modeling
ability with the help of dynamic neural network theory. Our model has two
learning cycles. The model learns the correlation between input and ground
truth in the first cycle. Then, the model's architecture is refined in the
second cycle to learn the target distribution from noise input. These processes
are executed in each iteration of the training procedure. Helping the cGAN
learn the target distribution from noise input results in a better model
generalization during the test time and allows the model to fit almost
perfectly to the target domain distribution. As a result, our model surpasses
the Pix2Pix model in segmenting HC18 and Montgomery's chest x-ray images. Both
qualitative results and Dice scores show the superiority of our model. Although our
proposed method does not use thousands of additional data for pretraining, it
produces comparable results for in-domain and out-of-domain generalization compared
to the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 23:25:11 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Naderi",
"Mohammadreza",
""
],
[
"Karimi",
"Nader",
""
],
[
"Emami",
"Ali",
""
],
[
"Shirani",
"Shahram",
""
],
[
"Samavi",
"Shadrokh",
""
]
] |
new_dataset
| 0.996106 |
2211.08585
|
Nader Zare
|
Nader Zare, Omid Amini, Aref Sayareh, Mahtab Sarvmaili, Arad
Firouzkouhi, Saba Ramezani Rad, Stan Matwin, Amilcar Soares
|
Cyrus2D base: Source Code Base for RoboCup 2D Soccer Simulation League
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Soccer Simulation 2D League is one of the major leagues of RoboCup
competitions. In a Soccer Simulation 2D (SS2D) game, two teams of 11 players
and one coach compete against each other. Several base codes have been released
for the RoboCup soccer simulation 2D (RCSS2D) community that have promoted the
application of multi-agent and AI algorithms in this field. In this paper, we
introduce "Cyrus2D Base", which is derived from the base code of the RCSS2D
2021 champion. We merged Gliders2D base V2.6 with the newest version of the
Helios base. We applied several features of Cyrus2021 to improve the
performance and capabilities of this base alongside a Data Extractor to
facilitate the implementation of machine learning in the field. We have tested
this base code in different teams and scenarios, and the obtained results
demonstrate significant improvements in the defensive and offensive strategy of
the team.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 23:57:46 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Zare",
"Nader",
""
],
[
"Amini",
"Omid",
""
],
[
"Sayareh",
"Aref",
""
],
[
"Sarvmaili",
"Mahtab",
""
],
[
"Firouzkouhi",
"Arad",
""
],
[
"Rad",
"Saba Ramezani",
""
],
[
"Matwin",
"Stan",
""
],
[
"Soares",
"Amilcar",
""
]
] |
new_dataset
| 0.99982 |
2211.08626
|
Zhi Yu
|
Zhi Yu (1), Chao Feng (1), Yong Zeng (1 and 3), Teng Li (2 and 3), and
Shi Jin (1) ((1) National Mobile Communications Research Laboratory,
Southeast University, Nanjing, China, (2) State Key Laboratory of Millimeter
Waves, Southeast University, Nanjing, China, (3) Purple Mountain
Laboratories, Nanjing, China)
|
Wireless Communication Using Metal Reflectors: Reflection Modelling and
Experimental Verification
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless communication using fully passive metal reflectors is a promising
technique for coverage expansion, signal enhancement, rank improvement and
blind-zone compensation, thanks to its appealing features including zero energy
consumption, ultra low cost, signaling- and maintenance-free, easy deployment
and full compatibility with existing and future wireless systems. However, a
prevalent understanding for reflection by metal plates is based on Snell's Law,
i.e., a signal can only be received when the observation angle equals the
incident angle, which is valid only when the electrical dimension of the metal
plate is extremely large. In this paper, we rigorously derive a general
reflection model that is applicable to metal reflectors of any size, any
orientation, and any linear polarization. The derived model is given compactly
in terms of the radar cross section (RCS) of the metal plate, as a function of
its physical dimensions and orientation vectors, as well as the wave
polarization and the wave deflection vector, i.e., the change of direction from
the incident wave direction to the observation direction. Furthermore,
experimental results based on actual field measurements are provided to
validate the accuracy of our developed model and demonstrate the great
potential of communications using metal reflectors.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 02:33:40 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Yu",
"Zhi",
"",
"1 and 3"
],
[
"Feng",
"Chao",
"",
"1 and 3"
],
[
"Zeng",
"Yong",
"",
"1 and 3"
],
[
"Li",
"Teng",
"",
"2 and 3"
],
[
"Jin",
"Shi",
""
]
] |
new_dataset
| 0.996701 |
2211.08636
|
Andres S. Chavez Armijos
|
Andres S. Chavez Armijos, Anni Li, Christos G. Cassandras, Yasir K.
Al-Nadawi, Hidekazu Araki, Behdad Chalaki, Ehsan Moradi-Pari, Hossein
Nourkhiz Mahjoub, Vaishnav Tadiparthi
|
Cooperative Energy and Time-Optimal Lane Change Maneuvers with Minimal
Highway Traffic Disruption
|
arXiv admin note: substantial text overlap with arXiv:2203.17102
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
We derive optimal control policies for a Connected Automated Vehicle (CAV)
and cooperating neighboring CAVs to carry out a lane change maneuver consisting
of a longitudinal phase where the CAV properly positions itself relative to the
cooperating neighbors and a lateral phase where it safely changes lanes. In
contrast to prior work on this problem, where the CAV "selfishly" only seeks to
minimize its maneuver time, we seek to ensure that the fast-lane traffic flow
is minimally disrupted (through a properly defined metric). Additionally, when
performing lane-changing maneuvers, we optimally select the cooperating
vehicles from a set of feasible neighboring vehicles and experimentally show
that the highway throughput is improved compared to the baseline case of
human-driven vehicles changing lanes with no cooperation. When feasible
solutions do not exist for a given maximal allowable disruption, we include a
time relaxation method trading off a longer maneuver time with reduced
disruption. Our analysis is also extended to multiple sequential maneuvers.
Simulation results show the effectiveness of our controllers in terms of safety
guarantees and up to 16% and 90% average throughput and maneuver time
improvement, respectively, when compared to maneuvers with no cooperation.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 03:10:21 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Armijos",
"Andres S. Chavez",
""
],
[
"Li",
"Anni",
""
],
[
"Cassandras",
"Christos G.",
""
],
[
"Al-Nadawi",
"Yasir K.",
""
],
[
"Araki",
"Hidekazu",
""
],
[
"Chalaki",
"Behdad",
""
],
[
"Moradi-Pari",
"Ehsan",
""
],
[
"Mahjoub",
"Hossein Nourkhiz",
""
],
[
"Tadiparthi",
"Vaishnav",
""
]
] |
new_dataset
| 0.996345 |
2211.08724
|
Haixiong Li
|
Yanbo Yuan, Hua Zhong, Haixiong Li, Xiao cheng, Linmei Xia
|
PAANet: Visual Perception based Four-stage Framework for Salient Object
Detection using High-order Contrast Operator
| null | null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is believed that the human vision system (HVS) consists of a pre-attentive
process and an attention process when performing salient object detection (SOD).
Based on this fact, we propose a four-stage framework for SOD, in which the
first two stages match the \textbf{P}re-\textbf{A}ttentive process consisting
of general feature extraction (GFE) and feature preprocessing (FP), and the
last two stages correspond to the \textbf{A}ttention process, containing
saliency feature extraction (SFE) and feature aggregation (FA), namely
\textbf{PAANet}. According to the pre-attentive process, the GFE stage applies
the fully-trained backbone and needs no further finetuning for different
datasets. This modification can greatly increase the training speed. The FP
stage plays the role of finetuning but works more efficiently because of its
simpler structure and fewer parameters. Moreover, in the SFE stage we design a
novel contrast operator for saliency feature extraction, which works more
semantically than the traditional convolution operator when
extracting the interactive information between the foreground and its
surroundings. Interestingly, this contrast operator can be cascaded to form a
deeper structure and extract higher-order saliency more effectively for complex
scenes. Comparative experiments with the state-of-the-art methods on 5 datasets
demonstrate the effectiveness of our framework.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 07:28:07 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Yuan",
"Yanbo",
""
],
[
"Zhong",
"Hua",
""
],
[
"Li",
"Haixiong",
""
],
[
"cheng",
"Xiao",
""
],
[
"Xia",
"Linmei",
""
]
] |
new_dataset
| 0.973235 |
2211.08837
|
Emerson Sie
|
Emerson Sie, Deepak Vasisht
|
RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects
in Context
| null | null |
10.1109/ICRA46639.2022.9812072
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless tags are increasingly used to track and identify common items of
interest such as retail goods, food, medicine, clothing, books, documents,
keys, equipment, and more. At the same time, there is a need for labelled
visual data featuring such items for the purpose of training object detection
and recognition models for robots operating in homes, warehouses, stores,
libraries, pharmacies, and so on. In this paper, we ask: can we leverage the
tracking and identification capabilities of such tags as a basis for a
large-scale automatic image annotation system for robotic perception tasks? We
present RF-Annotate, a pipeline for autonomous pixel-wise image annotation
which enables robots to collect labelled visual data of objects of interest as
they encounter them within their environment. Our pipeline uses unmodified
commodity RFID readers and RGB-D cameras, and exploits arbitrary small-scale
motions afforded by mobile robotic platforms to spatially map RFIDs to
corresponding objects in the scene. Our only assumption is that the objects of
interest within the environment are pre-tagged with inexpensive battery-free
RFIDs costing 3-15 cents each. We demonstrate the efficacy of our pipeline on
several RGB-D sequences of tabletop scenes featuring common objects in a
variety of indoor environments.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 11:25:38 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Sie",
"Emerson",
""
],
[
"Vasisht",
"Deepak",
""
]
] |
new_dataset
| 0.999307 |
2211.08893
|
Fabrizio Schiano
|
Fabrizio Schiano, Przemyslaw Mariusz Kornatowski, Leonardo Cencetti,
Dario Floreano
|
Reconfigurable Drone System for Transportation of Parcels With Variable
Mass and Size
| null |
Published in: IEEE Robotics and Automation Letters ( Volume: 7,
Issue: 4, October 2022)
|
10.1109/LRA.2022.3208716
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cargo drones are designed to carry payloads with predefined shape, size,
and/or mass. This lack of flexibility requires a fleet of diverse drones
tailored to specific cargo dimensions. Here we propose a new reconfigurable
drone based on a modular design that adapts to different cargo shapes, sizes,
and mass. We also propose a method for the automatic generation of drone
configurations and suitable parameters for the flight controller. The parcel
becomes the drone's body to which several individual propulsion modules are
attached. We demonstrate the use of the reconfigurable hardware and the
accompanying software by transporting parcels of different mass and sizes,
requiring various numbers and positions of the propulsion modules. The experiments
are conducted indoors (with a motion capture system) and outdoors (with an
RTK-GNSS sensor). The proposed design represents a cheaper and more versatile
alternative to the solutions involving several drones for parcel
transportation.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 13:01:22 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Schiano",
"Fabrizio",
""
],
[
"Kornatowski",
"Przemyslaw Mariusz",
""
],
[
"Cencetti",
"Leonardo",
""
],
[
"Floreano",
"Dario",
""
]
] |
new_dataset
| 0.999767 |
2211.08954
|
Prajwal K R
|
K R Prajwal, Hannah Bull, Liliane Momeni, Samuel Albanie, G\"ul Varol,
Andrew Zisserman
|
Weakly-supervised Fingerspelling Recognition in British Sign Language
Videos
|
Appears in: British Machine Vision Conference 2022 (BMVC 2022)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of this work is to detect and recognize sequences of letters signed
using fingerspelling in British Sign Language (BSL). Previous fingerspelling
recognition methods have not focused on BSL, which has a very different signing
alphabet (e.g., two-handed instead of one-handed) to American Sign Language
(ASL). They also use manual annotations for training. In contrast to previous
methods, our method only uses weak annotations from subtitles for training. We
localize potential instances of fingerspelling using a simple feature
similarity method, then automatically annotate these instances by querying
subtitle words and searching for corresponding mouthing cues from the signer.
We propose a Transformer architecture adapted to this task, with a
multiple-hypothesis CTC loss function to learn from alternative annotation
possibilities. We employ a multi-stage training approach, where we make use of
an initial version of our trained model to extend and enhance our training data
before re-training again to achieve better performance. Through extensive
evaluations, we verify our method for automatic annotation and our model
architecture. Moreover, we provide a human expert annotated test set of 5K
video clips for evaluating BSL fingerspelling recognition methods to support
sign language research.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 15:02:36 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Prajwal",
"K R",
""
],
[
"Bull",
"Hannah",
""
],
[
"Momeni",
"Liliane",
""
],
[
"Albanie",
"Samuel",
""
],
[
"Varol",
"Gül",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
new_dataset
| 0.977644 |
2211.08963
|
Alexander Nolte
|
Jeanette Falk, Alexander Nolte, Daniela Huppenkothen, Marion
Weinzierl, Kiev Gama, Daniel Spikol, Erik Tollerud, Neil Chue Hong, Ines
Kn\"apper, Linda Bailey Hayden
|
The Future of Hackathon Research and Practice
|
20 pages, 3 figures, 1 table
| null | null | null |
cs.HC cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Hackathons are time-bounded collaborative events which have become a global
phenomenon adopted by both researchers and practitioners in a plethora of
contexts. Hackathon events are generally used to accelerate the development of,
for example, scientific results and collaborations, communities, and innovative
prototypes addressing urgent challenges. As hackathons have been adopted into
many different contexts, the events have also been adapted in numerous ways
corresponding to the unique needs and situations of organizers, participants
and other stakeholders. While these interdisciplinary adaptations in general
afford many advantages - such as tailoring the format to specific needs - they
also entail certain challenges, specifically: 1) limited exchange of best
practices, 2) limited exchange of research findings, and 3) larger overarching
questions that require interdisciplinary collaboration are not discovered and
remain unaddressed. We call for interdisciplinary collaborations to address
these challenges. As a first initiative towards this, we performed an
interdisciplinary collaborative analysis in the context of a workshop at the
Lorentz Center, Leiden in December 2021. In this paper, we present the results
of this analysis in terms of six important areas which we envision to
contribute to maturing hackathon research and practice: 1) hackathons for
different purposes, 2) socio-technical event design, 3) scaling up, 4) making
hackathons equitable, 5) studying hackathons, and 6) hackathon goals and how to
reach them. We present these areas in terms of the state of the art and
research proposals and conclude the paper by suggesting next steps needed for
advancing hackathon research and practice.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 15:15:48 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Falk",
"Jeanette",
""
],
[
"Nolte",
"Alexander",
""
],
[
"Huppenkothen",
"Daniela",
""
],
[
"Weinzierl",
"Marion",
""
],
[
"Gama",
"Kiev",
""
],
[
"Spikol",
"Daniel",
""
],
[
"Tollerud",
"Erik",
""
],
[
"Hong",
"Neil Chue",
""
],
[
"Knäpper",
"Ines",
""
],
[
"Hayden",
"Linda Bailey",
""
]
] |
new_dataset
| 0.982202 |
2211.09032
|
Niccol\`o Biondi
|
Niccolo Biondi, Federico Pernici, Matteo Bruni, Daniele Mugnai, and
Alberto Del Bimbo
|
CL2R: Compatible Lifelong Learning Representations
|
Published on ACM TOMM 2022
| null |
10.1145/3564786
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a method to partially mimic natural intelligence
for the problem of lifelong learning representations that are compatible. We
take the perspective of a learning agent that is interested in recognizing
object instances in an open dynamic universe in a way in which any update to
its internal feature representation does not render the features in the gallery
unusable for visual search. We refer to this learning problem as Compatible
Lifelong Learning Representations (CL2R) as it considers compatible
representation learning within the lifelong learning paradigm. We identify
stationarity as the property that the feature representation is required to
hold to achieve compatibility and propose a novel training procedure that
encourages local and global stationarity on the learned representation. Due to
stationarity, the statistical properties of the learned features do not change
over time, making them interoperable with previously learned features.
Extensive experiments on standard benchmark datasets show that our CL2R
training procedure outperforms alternative baselines and state-of-the-art
methods. We also provide novel metrics to specifically evaluate compatible
representation learning under catastrophic forgetting in various sequential
learning tasks. Code at
https://github.com/NiccoBiondi/CompatibleLifelongRepresentation.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 16:41:33 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Biondi",
"Niccolo",
""
],
[
"Pernici",
"Federico",
""
],
[
"Bruni",
"Matteo",
""
],
[
"Mugnai",
"Daniele",
""
],
[
"Del Bimbo",
"Alberto",
""
]
] |
new_dataset
| 0.964375 |
2211.09035
|
Yuejia Xiang
|
Xiang Yuejia, Lv Chuanhao, Liu Qingdazhu, Yang Xiaocui, Liu Bo, Ju
Meizhi
|
A Creative Industry Image Generation Dataset Based on Captions
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With most image generation methods, it is difficult to precisely control the
properties of the generated images, such as structure, scale, shape, etc.,
which limits their large-scale application in creative industries such as
conceptual design and graphic design, and so on. Using the prompt and the
sketch is a practical solution for controllability. Existing datasets lack
either prompts or sketches and are not designed for the creative industry. The
main contributions of our work are as follows. a) This is the first dataset that covers the
4 most important areas of creative industry domains and is labeled with prompts
and sketches. b) We provide multiple reference images in the test set and
fine-grained scores for each reference which are useful for measurement. c) We
apply two state-of-the-art models to our dataset and then find some
shortcomings, such as the prompt being more highly valued than the sketch.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 16:46:49 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Yuejia",
"Xiang",
""
],
[
"Chuanhao",
"Lv",
""
],
[
"Qingdazhu",
"Liu",
""
],
[
"Xiaocui",
"Yang",
""
],
[
"Bo",
"Liu",
""
],
[
"Meizhi",
"Ju",
""
]
] |
new_dataset
| 0.998367 |
2211.09085
|
Robert Stojnic
|
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony
Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, Robert Stojnic
|
Galactica: A Large Language Model for Science
| null | null | null | null |
cs.CL stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Information overload is a major obstacle to scientific progress. The
explosive growth in scientific literature and data has made it ever harder to
discover useful insights in a large mass of information. Today scientific
knowledge is accessed through search engines, but they are unable to organize
scientific knowledge alone. In this paper we introduce Galactica: a large
language model that can store, combine and reason about scientific knowledge.
We train on a large scientific corpus of papers, reference material, knowledge
bases and many other sources. We outperform existing models on a range of
scientific tasks. On technical knowledge probes such as LaTeX equations,
Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also
performs well on reasoning, outperforming Chinchilla on mathematical MMLU by
41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It
also sets a new state-of-the-art on downstream tasks such as PubMedQA and
MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general
corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these
results demonstrate the potential for language models as a new interface for
science. We open source the model for the benefit of the scientific community.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 18:06:33 GMT"
}
] | 2022-11-17T00:00:00 |
[
[
"Taylor",
"Ross",
""
],
[
"Kardas",
"Marcin",
""
],
[
"Cucurull",
"Guillem",
""
],
[
"Scialom",
"Thomas",
""
],
[
"Hartshorn",
"Anthony",
""
],
[
"Saravia",
"Elvis",
""
],
[
"Poulton",
"Andrew",
""
],
[
"Kerkez",
"Viktor",
""
],
[
"Stojnic",
"Robert",
""
]
] |
new_dataset
| 0.997968 |
1910.10376
|
Zahed Rahmati
|
Bardia Hamedmohseni, Zahed Rahmati, Debajyoti Mondal
|
Emanation Graph: A Plane Geometric Spanner with Steiner Points
|
A preliminary version of this work was presented at the 30th Canadian
Conference on Computational Geometry (CCCG) and the 46th International
Conference on Current Trends in Theory and Practice of Computer Science
(SOFSEM)
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An emanation graph of grade $k$ on a set of points is a plane spanner made by
shooting $2^{k+1}$ equally spaced rays from each point, where the shorter rays
stop the longer ones upon collision. The collision points are the Steiner
points of the spanner. Emanation graphs of grade one were studied by Mondal and
Nachmanson in the context of network visualization. They proved that the
spanning ratio of such a graph is bounded by $(2+\sqrt{2})\approx 3.414$. We
improve this upper bound to $\sqrt{10} \approx 3.162$ and show this to be
tight, i.e., there exist emanation graphs with spanning ratio $\sqrt{10}$. We
show that for every fixed $k$, the emanation graphs of grade $k$ are constant
spanners, where the constant factor depends on $k$.
An emanation graph of grade two may have twice the number of edges compared
to grade one graphs. Hence we introduce a heuristic method for simplifying
them.
In particular, we compare simplified emanation graphs against Shewchuk's
constrained Delaunay triangulations on both synthetic and real-life datasets.
Our experimental results reveal that the simplified emanation graphs outperform
constrained Delaunay triangulations in common quality measures (e.g., edge
count, angular resolution, average degree, total edge length) while maintaining
a comparable spanning ratio and Steiner point count.
|
[
{
"version": "v1",
"created": "Wed, 23 Oct 2019 06:08:15 GMT"
},
{
"version": "v2",
"created": "Sat, 15 May 2021 20:39:53 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Nov 2022 03:34:38 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Hamedmohseni",
"Bardia",
""
],
[
"Rahmati",
"Zahed",
""
],
[
"Mondal",
"Debajyoti",
""
]
] |
new_dataset
| 0.996284 |
2011.12427
|
Luiz A. Zanlorensi
|
Luiz A. Zanlorensi and Rayson Laroca and Diego R. Lucio and Lucas R.
Santos and Alceu S. Britto Jr. and David Menotti
|
A New Periocular Dataset Collected by Mobile Devices in Unconstrained
Scenarios
| null |
Scientific Reports, vol. 12, p. 17989, 2022
|
10.1038/s41598-022-22811-y
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, ocular biometrics in unconstrained environments using images
obtained at visible wavelengths has gained researchers' attention,
especially with images captured by mobile devices. Periocular recognition has
been demonstrated to be an alternative when the iris trait is not available due
to occlusions or low image resolution. However, the periocular trait does not
have the high uniqueness presented in the iris trait. Thus, the use of datasets
containing many subjects is essential to assess biometric systems' capacity to
extract discriminating information from the periocular region. Also, to address
the within-class variability caused by lighting and attributes in the
periocular region, it is of paramount importance to use datasets with images of
the same subject captured in distinct sessions. As the datasets available in
the literature do not present all these factors, in this work, we present a new
periocular dataset containing samples from 1,122 subjects, acquired in 3
sessions by 196 different mobile devices. The images were captured under
unconstrained environments with just a single instruction to the participants:
to place their eyes on a region of interest. We also performed an extensive
benchmark with several Convolutional Neural Network (CNN) architectures and
models that have been employed in state-of-the-art approaches based on
Multi-class Classification, Multitask Learning, Pairwise Filters Network, and
Siamese Network. The results achieved in the closed- and open-world protocol,
considering the identification and verification tasks, show that this area
still needs research and development.
|
[
{
"version": "v1",
"created": "Tue, 24 Nov 2020 22:20:37 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 22:34:16 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Zanlorensi",
"Luiz A.",
""
],
[
"Laroca",
"Rayson",
""
],
[
"Lucio",
"Diego R.",
""
],
[
"Santos",
"Lucas R.",
""
],
[
"Britto",
"Alceu S.",
"Jr."
],
[
"Menotti",
"David",
""
]
] |
new_dataset
| 0.999692 |
2104.13097
|
Michael Lampis
|
Michael Lampis
|
Minimum Stable Cut and Treewidth
|
Full version of ICALP 2021 paper
| null | null | null |
cs.CC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A stable or locally-optimal cut of a graph is a cut whose weight cannot be
increased by changing the side of a single vertex. In this paper we study
Minimum Stable Cut, the problem of finding a stable cut of minimum weight.
Since this problem is NP-hard, we study its complexity on graphs of low
treewidth, low degree, or both. We begin by showing that the problem remains
weakly NP-hard on severely restricted trees, so bounding treewidth alone cannot
make it tractable. We match this hardness with a pseudo-polynomial DP algorithm
solving the problem in time $(\Delta\cdot W)^{O(tw)}n^{O(1)}$, where $tw$ is
the treewidth, $\Delta$ the maximum degree, and $W$ the maximum weight. On the
other hand, bounding $\Delta$ is also not enough, as the problem is NP-hard for
unweighted graphs of bounded degree. We therefore parameterize Minimum Stable
Cut by both $tw$ and $\Delta$ and obtain an FPT algorithm running in time
$2^{O(\Delta tw)}(n+\log W)^{O(1)}$. Our main result for the weighted problem
is to provide a reduction showing that both aforementioned algorithms are
essentially optimal, even if we replace treewidth by pathwidth: if there exists
an algorithm running in $(nW)^{o(pw)}$ or $2^{o(\Delta pw)}(n+\log W)^{O(1)}$,
then the ETH is false. Complementing this, we show that we can, however, obtain
an FPT approximation scheme parameterized by treewidth, if we consider
almost-stable solutions, that is, solutions where no single vertex can
unilaterally increase the weight of its incident cut edges by more than a
factor of $(1+\varepsilon)$.
Motivated by these mostly negative results, we consider Unweighted Minimum
Stable Cut. Here our results already imply a much faster exact algorithm
running in time $\Delta^{O(tw)}n^{O(1)}$. We show that this is also probably
essentially optimal: an algorithm running in $n^{o(pw)}$ would contradict the
ETH.
|
[
{
"version": "v1",
"created": "Tue, 27 Apr 2021 10:42:04 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 17:25:46 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Lampis",
"Michael",
""
]
] |
new_dataset
| 0.995262 |
2105.05172
|
Hayato Takahashi
|
Hayato Takahashi
|
The explicit formulae for the distributions of nonoverlapping words and
its applications to statistical tests for pseudo random numbers
| null | null | null | null |
cs.IT cs.DM math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The distributions of the number of occurrences of words (the distributions of
words for short) play key roles in information theory, statistics, probability
theory, ergodic theory, computer science, and DNA analysis.
Bassino et al. 2010 and Regnier et al. 1998 showed generating functions of
the distributions of words for all sample sizes. Robin et al. 1999 presented
generating functions of the distributions for the return time of words and
demonstrated a recurrence formula for these distributions. These generating
functions are rational functions; except for simple cases, it is difficult to
expand them into power series. In this paper, we study finite-dimensional
generating functions of the distributions of nonoverlapping words for each
fixed sample size and demonstrate the explicit formulae for the distributions
of words for the Bernoulli models. Our results are generalized to
nonoverlapping partial words. We study statistical tests that depend on the
number of occurrences of words and the number of block-wise occurrences of
words, respectively. We demonstrate that the power of the test that depends on
the number of occurrences of words is significantly large compared to the other
one. Finally, we apply our results to statistical tests for pseudo random
numbers.
|
[
{
"version": "v1",
"created": "Tue, 11 May 2021 16:27:48 GMT"
},
{
"version": "v2",
"created": "Thu, 27 May 2021 12:23:34 GMT"
},
{
"version": "v3",
"created": "Sat, 29 May 2021 12:11:46 GMT"
},
{
"version": "v4",
"created": "Tue, 15 Nov 2022 14:12:44 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Takahashi",
"Hayato",
""
]
] |
new_dataset
| 0.986417 |
2201.05955
|
Alisa Liu
|
Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi
|
WANLI: Worker and AI Collaboration for Natural Language Inference
Dataset Creation
|
EMNLP Findings camera-ready
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A recurring challenge of crowdsourcing NLP datasets at scale is that human
writers often rely on repetitive patterns when crafting examples, leading to a
lack of linguistic diversity. We introduce a novel approach for dataset
creation based on worker and AI collaboration, which brings together the
generative strength of language models and the evaluative strength of humans.
Starting with an existing dataset, MultiNLI for natural language inference
(NLI), our approach uses dataset cartography to automatically identify examples
that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose
new examples with similar patterns. Machine generated examples are then
automatically filtered, and finally revised and labeled by human crowdworkers.
The resulting dataset, WANLI, consists of 107,885 NLI examples and presents
unique empirical strengths over existing NLI datasets. Remarkably, training a
model on WANLI improves performance on eight out-of-domain test sets we
consider, including by 11% on HANS and 9% on Adversarial NLI, compared to
training on the 4x larger MultiNLI. Moreover, it continues to be more effective
than MultiNLI augmented with other NLI datasets. Our results demonstrate the
promise of leveraging natural language generation techniques and re-imagining
the role of humans in the dataset creation process.
|
[
{
"version": "v1",
"created": "Sun, 16 Jan 2022 03:13:49 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2022 20:12:20 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Jun 2022 02:13:09 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Oct 2022 18:31:44 GMT"
},
{
"version": "v5",
"created": "Tue, 15 Nov 2022 00:42:00 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Liu",
"Alisa",
""
],
[
"Swayamdipta",
"Swabha",
""
],
[
"Smith",
"Noah A.",
""
],
[
"Choi",
"Yejin",
""
]
] |
new_dataset
| 0.999827 |
2203.02798
|
Aleksandros Sobczyk
|
Aleksandros Sobczyk and Efstratios Gallopoulos
|
pylspack: Parallel algorithms and data structures for sketching, column
subset selection, regression and leverage scores
|
To appear in ACM TOMS
| null |
10.1145/3555370
| null |
cs.DS cs.DC cs.MS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present parallel algorithms and data structures for three fundamental
operations in Numerical Linear Algebra: (i) Gaussian and CountSketch random
projections and their combination, (ii) computation of the Gram matrix and
(iii) computation of the squared row norms of the product of two matrices, with
a special focus on "tall-and-skinny" matrices, which arise in many
applications. We provide a detailed analysis of the ubiquitous CountSketch
transform and its combination with Gaussian random projections, accounting for
memory requirements, computational complexity and workload balancing. We also
demonstrate how these results can be applied to column subset selection, least
squares regression and leverage scores computation. These tools have been
implemented in pylspack, a publicly available Python package
(https://github.com/IBM/pylspack) whose core is written in C++ and parallelized
with OpenMP, and which is compatible with standard matrix data structures of
SciPy and NumPy. Extensive numerical experiments indicate that the proposed
algorithms scale well and significantly outperform existing libraries for
tall-and-skinny matrices.
|
[
{
"version": "v1",
"created": "Sat, 5 Mar 2022 18:21:05 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2022 22:09:46 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Sobczyk",
"Aleksandros",
""
],
[
"Gallopoulos",
"Efstratios",
""
]
] |
new_dataset
| 0.992971 |
2203.10232
|
Yifu Qiu
|
Yifu Qiu, Hongyu Li, Yingqi Qu, Ying Chen, Qiaoqiao She, Jing Liu, Hua
Wu, Haifeng Wang
|
DuReader_retrieval: A Large-scale Chinese Benchmark for Passage
Retrieval from Web Search Engine
|
EMNLP 2022, 13 pages
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present DuReader_retrieval, a large-scale Chinese dataset
for passage retrieval. DuReader_retrieval contains more than 90K queries and
over 8M unique passages from a commercial search engine. To alleviate the
shortcomings of other datasets and ensure the quality of our benchmark, we (1)
reduce the false negatives in development and test sets by manually annotating
results pooled from multiple retrievers, and (2) remove the training queries
that are semantically similar to the development and testing queries.
Additionally, we provide two out-of-domain testing sets for cross-domain
evaluation, as well as a set of human-translated queries for cross-lingual
retrieval evaluation. The experiments demonstrate that DuReader_retrieval is
challenging and a number of problems remain unsolved, such as the salient
phrase mismatch and the syntactic mismatch between queries and paragraphs.
These experiments also show that dense retrievers do not generalize well across
domains, and cross-lingual retrieval is essentially challenging.
DuReader_retrieval is publicly available at
https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 03:24:53 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 16:02:51 GMT"
},
{
"version": "v3",
"created": "Wed, 25 May 2022 11:49:41 GMT"
},
{
"version": "v4",
"created": "Tue, 15 Nov 2022 14:42:31 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Qiu",
"Yifu",
""
],
[
"Li",
"Hongyu",
""
],
[
"Qu",
"Yingqi",
""
],
[
"Chen",
"Ying",
""
],
[
"She",
"Qiaoqiao",
""
],
[
"Liu",
"Jing",
""
],
[
"Wu",
"Hua",
""
],
[
"Wang",
"Haifeng",
""
]
] |
new_dataset
| 0.999582 |
2205.11097
|
Lijie Wang
|
Lijie Wang, Yaozong Shen, Shuyuan Peng, Shuai Zhang, Xinyan Xiao, Hao
Liu, Hongxuan Tang, Ying Chen, Hua Wu, Haifeng Wang
|
A Fine-grained Interpretability Evaluation Benchmark for Neural NLP
| null |
CoNLL 2022
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
While there is increasing concern about the interpretability of neural
models, the evaluation of interpretability remains an open problem, due to the
lack of proper evaluation datasets and metrics. In this paper, we present a
novel benchmark to evaluate the interpretability of both neural models and
saliency methods. This benchmark covers three representative NLP tasks:
sentiment analysis, textual similarity and reading comprehension, each provided
with both English and Chinese annotated data. In order to precisely evaluate
the interpretability, we provide token-level rationales that are carefully
annotated to be sufficient, compact and comprehensive. We also design a new
metric, i.e., the consistency between the rationales before and after
perturbations, to uniformly evaluate the interpretability on different types of
tasks. Based on this benchmark, we conduct experiments on three typical models
with three saliency methods, and unveil their strengths and weaknesses in terms
of interpretability. We will release this benchmark at
https://www.luge.ai/#/luge/task/taskDetail?taskId=15 and hope it can facilitate
the research in building trustworthy systems.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 07:37:04 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 02:09:01 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Wang",
"Lijie",
""
],
[
"Shen",
"Yaozong",
""
],
[
"Peng",
"Shuyuan",
""
],
[
"Zhang",
"Shuai",
""
],
[
"Xiao",
"Xinyan",
""
],
[
"Liu",
"Hao",
""
],
[
"Tang",
"Hongxuan",
""
],
[
"Chen",
"Ying",
""
],
[
"Wu",
"Hua",
""
],
[
"Wang",
"Haifeng",
""
]
] |
new_dataset
| 0.991732 |
2206.00288
|
Jan Tobias Muehlberg
|
Jan Tobias Muehlberg
|
Sustaining Security and Safety in ICT: A Quest for Terminology,
Objectives, and Limits
| null |
LIMITS '22: Workshop on Computing within Limits, June 21--22, 2022
|
10.21428/bf6fb269.58c3a89d
| null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Security and safety are intertwined concepts in the world of computing. In
recent years, the terms "sustainable security" and "sustainable safety" came
into fashion and are being used referring to a variety of systems properties
ranging from efficiency to profitability, and sometimes meaning that a product
or service is good for people and planet. This leads to confusing perceptions
of products where customers might expect a sustainable product to be developed
without child labour, while the producer uses the term to signify that their
new product uses marginally less power than the previous generation of that
product. Even in research on sustainably safe and secure ICT, these different
notions of terminology are prevalent. As researchers we often work towards
optimising our subject of study towards one specific sustainability metric -
let's say energy consumption - while being blissfully unaware of, e.g., social
impacts, life-cycle impacts, or rebound effects of such optimisations.
In this paper I dissect the idea of sustainable safety and security, starting
from the questions of what we want to sustain, and for whom we want to sustain
it. I believe that a general "people and planet" answer is inadequate here
because this form of sustainability cannot be the property of a single industry
sector but must be addressed by society as a whole. However, with sufficient
understanding of life-cycle impacts we may very well be able to devise research
and development efforts, and inform decision making processes towards the use
of integrated safety and security solutions that help us to address societal
challenges in the context of the climate and ecological crises, and that are
aligned with concepts such as intersectionality and climate justice. Of course,
these solutions can only be effective if they are embedded in societal and
economic change towards more frugal uses of data and ICT.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 07:46:17 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2022 14:01:19 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Nov 2022 16:35:12 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Muehlberg",
"Jan Tobias",
""
]
] |
new_dataset
| 0.99593 |
2208.13301
|
Sunita Chandrasekaran
|
Thomas Huber, Swaroop Pophale, Nolan Baker, Michael Carr, Nikhil Rao,
Jaydon Reap, Kristina Holsapple, Joshua Hoke Davis, Tobias Burnus, Seyong
Lee, David E. Bernholdt, Sunita Chandrasekaran
|
ECP SOLLVE: Validation and Verification Testsuite Status Update and
Compiler Insight for OpenMP
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The OpenMP language continues to evolve with every new specification release,
as does the need to validate and verify the new features that have been
introduced. With the release of OpenMP 5.0 and OpenMP 5.1, plenty of new target
offload and host-based features have been introduced to the programming model.
While OpenMP continues to grow in maturity, there is an observable growth in
the number of compiler and hardware vendors that support OpenMP. In this
manuscript, we focus on evaluating the conformity and implementation progress
of various compiler vendors such as Cray, IBM, GNU, Clang/LLVM, NVIDIA, Intel
and AMD. We specifically address the 4.5, 5.0, and 5.1 versions of the
specification.
|
[
{
"version": "v1",
"created": "Sun, 28 Aug 2022 22:10:53 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Oct 2022 14:57:32 GMT"
},
{
"version": "v3",
"created": "Tue, 15 Nov 2022 00:05:19 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Huber",
"Thomas",
""
],
[
"Pophale",
"Swaroop",
""
],
[
"Baker",
"Nolan",
""
],
[
"Carr",
"Michael",
""
],
[
"Rao",
"Nikhil",
""
],
[
"Reap",
"Jaydon",
""
],
[
"Holsapple",
"Kristina",
""
],
[
"Davis",
"Joshua Hoke",
""
],
[
"Burnus",
"Tobias",
""
],
[
"Lee",
"Seyong",
""
],
[
"Bernholdt",
"David E.",
""
],
[
"Chandrasekaran",
"Sunita",
""
]
] |
new_dataset
| 0.999737 |
2210.03797
|
Asahi Ushio
|
Asahi Ushio and Leonardo Neves and Vitor Silva and Francesco Barbieri
and Jose Camacho-Collados
|
Named Entity Recognition in Twitter: A Dataset and Analysis on
Short-Term Temporal Shifts
|
AACL 2022 main conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent progress in language model pre-training has led to important
improvements in Named Entity Recognition (NER). Nonetheless, this progress has
been mainly tested in well-formatted documents such as news, Wikipedia, or
scientific articles. The landscape is different in social media, whose noisy and
dynamic nature adds another layer of complexity. In this
paper, we focus on NER in Twitter, one of the largest social media platforms,
and construct a new NER dataset, TweetNER7, which contains seven entity types
annotated over 11,382 tweets from September 2019 to August 2021. The dataset
was constructed by carefully distributing the tweets over time and taking
representative trends as a basis. Along with the dataset, we provide a set of
language model baselines and perform an analysis on the language model
performance on the task, especially analyzing the impact of different time
periods. In particular, we focus on three important temporal aspects in our
analysis: short-term degradation of NER models over time, strategies to
fine-tune a language model over different periods, and self-labeling as an
alternative when recently-labeled data is scarce. TweetNER7 is released publicly
(https://huggingface.co/datasets/tner/tweetner7) along with the models
fine-tuned on it.
|
[
{
"version": "v1",
"created": "Fri, 7 Oct 2022 19:58:47 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 13:58:40 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Ushio",
"Asahi",
""
],
[
"Neves",
"Leonardo",
""
],
[
"Silva",
"Vitor",
""
],
[
"Barbieri",
"Francesco",
""
],
[
"Camacho-Collados",
"Jose",
""
]
] |
new_dataset
| 0.99921 |
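Since the abstract above gives the dataset's Hugging Face location, a minimal loading sketch is shown below. The split and feature names ("tags") are assumptions; check the dataset card for the exact schema, and pass a configuration name to load_dataset if one is required.

```python
# Load TweetNER7 from the Hub and peek at its tag distribution.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("tner/tweetner7")       # URL given in the abstract
print(ds)                                  # lists the available (temporal) splits

split_name = next(iter(ds.keys()))         # take whichever split comes first
tag_counts = Counter(
    tag for example in ds[split_name] for tag in example["tags"]  # "tags" is assumed
)
print(tag_counts.most_common(10))
```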
2210.10362
|
Xuehai He
|
Xuehai He, Diji Yang, Weixi Feng, Tsu-Jui Fu, Arjun Akula, Varun
Jampani, Pradyumna Narayana, Sugato Basu, William Yang Wang, Xin Eric Wang
|
CPL: Counterfactual Prompt Learning for Vision and Language Models
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prompt tuning is a new few-shot transfer learning technique that only tunes
the learnable prompt for pre-trained vision and language models such as CLIP.
However, existing prompt tuning methods tend to learn spurious or entangled
representations, which leads to poor generalization to unseen concepts. Towards
non-spurious and efficient prompt learning from limited examples, this paper
presents a novel \underline{\textbf{C}}ounterfactual
\underline{\textbf{P}}rompt \underline{\textbf{L}}earning (CPL) method for
vision and language models, which simultaneously employs counterfactual
generation and contrastive learning in a joint optimization framework.
Particularly, CPL constructs counterfactuals by identifying the minimal non-spurious
feature change between semantically-similar positive and negative samples that
causes a concept change, and learns more generalizable prompt representations from
both factual and counterfactual examples via contrastive learning. Extensive
experiments demonstrate that CPL can obtain superior few-shot performance on
different vision and language tasks than previous prompt tuning methods on
CLIP. On image classification, we achieve 3.55\% average relative improvement
on unseen classes across seven datasets; on image-text retrieval and visual
question answering, we gain up to 4.09\% and 25.08\% relative improvements
across three few-shot scenarios on unseen test sets respectively.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 08:06:39 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 05:10:22 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Nov 2022 03:51:49 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"He",
"Xuehai",
""
],
[
"Yang",
"Diji",
""
],
[
"Feng",
"Weixi",
""
],
[
"Fu",
"Tsu-Jui",
""
],
[
"Akula",
"Arjun",
""
],
[
"Jampani",
"Varun",
""
],
[
"Narayana",
"Pradyumna",
""
],
[
"Basu",
"Sugato",
""
],
[
"Wang",
"William Yang",
""
],
[
"Wang",
"Xin Eric",
""
]
] |
new_dataset
| 0.99236 |
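To make the counterfactual-plus-contrastive idea in the abstract concrete, here is a toy PyTorch sketch (not the authors' implementation): a counterfactual image feature is built by minimally shifting a positive feature toward a hard negative, and the prompt/text embedding is trained to score the factual feature above its counterfactual. The mixing scheme, dimensions, and loss form are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cpl_style_loss(text_emb, img_pos, img_neg, alpha=0.1, temperature=0.07):
    """Contrastive loss where a counterfactual feature acts as a hard negative.

    text_emb, img_pos, img_neg: (batch, dim) L2-normalised embeddings.
    alpha: how far the counterfactual is pushed toward the negative sample.
    """
    # Counterfactual: minimal shift of the positive feature toward the negative.
    img_cf = F.normalize((1 - alpha) * img_pos + alpha * img_neg, dim=-1)

    pos_logits = (text_emb * img_pos).sum(-1, keepdim=True) / temperature
    cf_logits = (text_emb * img_cf).sum(-1, keepdim=True) / temperature
    logits = torch.cat([pos_logits, cf_logits], dim=-1)   # (batch, 2)

    # The factual image should score higher than its counterfactual.
    labels = torch.zeros(text_emb.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)

# Shape check with random features:
b, d = 8, 512
t = F.normalize(torch.randn(b, d), dim=-1)
p = F.normalize(torch.randn(b, d), dim=-1)
n = F.normalize(torch.randn(b, d), dim=-1)
print(cpl_style_loss(t, p, n).item())
```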
2210.14353
|
Victor Zhong
|
Victor Zhong, Weijia Shi, Wen-tau Yih, Luke Zettlemoyer
|
RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question
Answering
|
The source code and evaluation for RoMQA are at
https://github.com/facebookresearch/romqa
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce RoMQA, the first benchmark for robust, multi-evidence,
multi-answer question answering (QA). RoMQA contains clusters of questions that
are derived from related constraints mined from the Wikidata knowledge graph.
RoMQA evaluates robustness of QA models to varying constraints by measuring
worst-case performance within each question cluster. Compared to prior QA
datasets, RoMQA has more human-written questions that require reasoning over
more evidence text and have, on average, many more correct answers. In
addition, human annotators rate RoMQA questions as more natural or likely to be
asked by people. We evaluate state-of-the-art large language models in
zero-shot, few-shot, and fine-tuning settings, and find that RoMQA is
challenging: zero-shot and few-shot models perform similarly to naive
baselines, while supervised retrieval methods perform well below gold evidence
upper bounds. Moreover, existing models are not robust to variations in
question constraints, but can be made more robust by tuning on clusters of
related questions. Our results show that RoMQA is a challenging benchmark for
large language models, and provides a quantifiable test to build more robust QA
methods.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 21:39:36 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 17:30:07 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Zhong",
"Victor",
""
],
[
"Shi",
"Weijia",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Zettlemoyer",
"Luke",
""
]
] |
new_dataset
| 0.999676 |
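The abstract above evaluates robustness as worst-case performance within each cluster of related questions. A small sketch of that metric is given below; the field names ("cluster_id", "score") and the macro-averaging of per-cluster minima are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean

def worst_case_by_cluster(results):
    """results: iterable of dicts with a cluster id and a per-question score."""
    clusters = defaultdict(list)
    for r in results:
        clusters[r["cluster_id"]].append(r["score"])
    worst = {cid: min(scores) for cid, scores in clusters.items()}
    return mean(worst.values())          # macro-average of per-cluster minima

results = [
    {"cluster_id": "c1", "score": 0.9},
    {"cluster_id": "c1", "score": 0.4},  # one hard variant drags the cluster down
    {"cluster_id": "c2", "score": 0.8},
]
print(worst_case_by_cluster(results))    # 0.6 = mean(0.4, 0.8)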
2211.01633
|
Bj\"orn Koopmann
|
Bj\"orn Koopmann, Stefan Puch, G\"unter Ehmen, Martin Fr\"anzle
|
Cooperative Maneuvers of Highly Automated Vehicles at Urban
Intersections: A Game-theoretic Approach
| null |
Proceedings of the 6th International Conference on Vehicle
Technology and Intelligent Transport Systems (VEHITS), pp. 15-26, 2020
|
10.5220/0009351500150026
| null |
cs.GT cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose an approach for how connected and highly automated
vehicles can perform cooperative maneuvers such as lane changes and left-turns
at urban intersections where they have to deal with human-operated vehicles and
vulnerable road users such as cyclists and pedestrians in so-called mixed
traffic. In order to support cooperative maneuvers the urban intersection is
equipped with an intelligent controller which has access to different sensors
along the intersection to detect and predict the behavior of the traffic
participants involved. Since the intersection controller cannot directly
control all road users and - not least due to the legal situation - driving
decisions must always be made by the vehicle controller itself, we focus on a
decentralized control paradigm. In this context, connected and highly automated
vehicles use some carefully selected game theory concepts to make the best
possible and clear decisions about cooperative maneuvers. The aim is to improve
traffic efficiency while maintaining road safety at the same time. Our first
results obtained with a prototypical implementation of the approach in a
traffic simulation are promising.
|
[
{
"version": "v1",
"created": "Thu, 3 Nov 2022 07:49:51 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Koopmann",
"Björn",
""
],
[
"Puch",
"Stefan",
""
],
[
"Ehmen",
"Günter",
""
],
[
"Fränzle",
"Martin",
""
]
] |
new_dataset
| 0.997982 |
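To give a flavour of the game-theoretic decision making described above, here is a toy two-vehicle example (not the authors' model): each vehicle chooses "go" or "yield", and the pure-strategy Nash equilibria of a small payoff matrix are computed. The payoff numbers are made up purely for illustration.

```python
from itertools import product

ACTIONS = ["go", "yield"]

# payoff[(a1, a2)] = (utility of vehicle 1, utility of vehicle 2)
payoff = {
    ("go", "go"):       (-100, -100),   # collision risk dominates everything
    ("go", "yield"):    (5, 1),
    ("yield", "go"):    (1, 5),
    ("yield", "yield"): (0, 0),         # both wait: safe but inefficient
}

def is_nash(a1, a2):
    u1, u2 = payoff[(a1, a2)]
    best1 = all(payoff[(alt, a2)][0] <= u1 for alt in ACTIONS)
    best2 = all(payoff[(a1, alt)][1] <= u2 for alt in ACTIONS)
    return best1 and best2

equilibria = [(a1, a2) for a1, a2 in product(ACTIONS, ACTIONS) if is_nash(a1, a2)]
print(equilibria)   # [('go', 'yield'), ('yield', 'go')] -> exactly one vehicle proceeds
```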
2211.02579
|
Mohammad Raashid Ansari
|
Jean-Philippe Monteuuis, Jonathan Petit, Mohammad Raashid Ansari, Cong
Chen, Seung Yang
|
V2X Misbehavior in Maneuver Sharing and Coordination Service:
Considerations for Standardization
|
7 pages, 4 figures, 4 tables, IEEE CSCN 2022. arXiv admin note: text
overlap with arXiv:2112.02184
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Connected and Automated Vehicles (CAV) use sensors and wireless communication
to improve road safety and efficiency. However, attackers may target
Vehicle-to-Everything (V2X) communication. Indeed, an attacker may send
authenticated-but-wrong data to report false location information, signal
incorrect events, or announce a bogus object, endangering the safety of other CAVs.
Standardization Development Organizations (SDO) are currently working on
developing security standards against such attacks. Unfortunately, current
standardization efforts do not include misbehavior specifications for advanced
V2X services such as Maneuver Sharing and Coordination Service (MSCS). This
work assesses the security of MSC Messages (MSCM) and proposes inputs for
consideration in existing standards.
|
[
{
"version": "v1",
"created": "Fri, 4 Nov 2022 16:50:21 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 21:26:27 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Monteuuis",
"Jean-Philippe",
""
],
[
"Petit",
"Jonathan",
""
],
[
"Ansari",
"Mohammad Raashid",
""
],
[
"Chen",
"Cong",
""
],
[
"Yang",
"Seung",
""
]
] |
new_dataset
| 0.993466 |
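One family of misbehavior checks relevant to maneuver sharing is plausibility validation of the announced trajectory against the sender's previously reported state. The sketch below is a hedged illustration of that idea only; the message fields, units, and thresholds are assumptions, not part of any standard or of the paper's proposal.

```python
from dataclasses import dataclass

MAX_SPEED_MPS = 70.0     # ~250 km/h, a generous upper bound (assumed)
MAX_ACCEL_MPS2 = 12.0    # assumed

@dataclass
class ReportedState:
    t: float   # seconds
    x: float   # metres (1-D position for simplicity)
    v: float   # metres/second

def is_plausible(prev: ReportedState, new: ReportedState) -> bool:
    """Flag physically impossible jumps in position or speed between reports."""
    dt = new.t - prev.t
    if dt <= 0:
        return False                                # out-of-order timestamp
    implied_speed = abs(new.x - prev.x) / dt
    implied_accel = abs(new.v - prev.v) / dt
    return implied_speed <= MAX_SPEED_MPS and implied_accel <= MAX_ACCEL_MPS2

prev = ReportedState(t=0.0, x=0.0, v=10.0)
spoofed = ReportedState(t=0.1, x=50.0, v=10.0)      # claims a 500 m/s jump
print(is_plausible(prev, spoofed))                   # False -> report as misbehavior
```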
2211.07549
|
Ying Xu
|
Ying Xu, Romane Gauriau, Anna Decker, Jacob Oppenheim
|
Phenotype Detection in Real World Data via Online MixEHR Algorithm
|
Extended Abstract presented at Machine Learning for Health (ML4H)
symposium 2022, November 28th, 2022, New Orleans, United States & Virtual,
http://www.ml4h.cc, 6 pages
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Understanding patterns of diagnoses, medications, procedures, and laboratory
tests from electronic health records (EHRs) and health insurer claims is
important for understanding disease risk and for efficient clinical
development, which often require rules-based curation in collaboration with
clinicians. We extended an unsupervised phenotyping algorithm, mixEHR, to an
online version, allowing us to use it on order-of-magnitude larger datasets
including a large, US-based claims dataset and a rich regional EHR dataset. In
addition to recapitulating previously observed disease groups, we discovered
clinically meaningful disease subtypes and comorbidities. This work scaled up
an effective unsupervised learning method, reinforced existing clinical
knowledge, and is a promising approach for efficient collaboration with
clinicians.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 17:14:39 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 14:19:28 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Xu",
"Ying",
""
],
[
"Gauriau",
"Romane",
""
],
[
"Decker",
"Anna",
""
],
[
"Oppenheim",
"Jacob",
""
]
] |
new_dataset
| 0.997497 |
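The key scaling idea above is fitting the topic model online, i.e. on minibatches rather than the full dataset in memory. mixEHR is a multi-modal topic model, so the sketch below is only an analogy using scikit-learn's online LDA; the synthetic count data, code vocabulary size, and topic count are assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_codes = 500                      # e.g., diagnosis/medication/procedure codes
lda = LatentDirichletAllocation(n_components=20, learning_method="online",
                                random_state=0)

def minibatches(n_batches=10, batch_size=256):
    """Stand-in for streaming patient-by-code count matrices from claims/EHR."""
    for _ in range(n_batches):
        yield rng.poisson(0.05, size=(batch_size, n_codes))

for batch in minibatches():
    lda.partial_fit(batch)         # update topic ("phenotype") estimates incrementally

# Top codes for the first few inferred topics:
top_codes = np.argsort(lda.components_, axis=1)[:, -5:]
print(top_codes[:3])
```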
2211.07709
|
MD Abdullah Al Nasim
|
Md Aminul Haque Palash, Akib Khan, Kawsarul Islam, MD Abdullah Al
Nasim, Ryan Mohammad Bin Shahjahan
|
Incongruity Detection between Bangla News Headline and Body Content
through Graph Neural Network
|
6 figures, 2 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Incongruity between news headlines and the body content is a common method of
deception used to attract readers. Profitable headlines pique readers' interest
and encourage them to visit a specific website. This is usually done by adding
an element of dishonesty, using enticements that do not precisely reflect the
content being delivered. As a result, automatic detection of incongruent news
between headline and body content using language analysis has gained the
research community's attention. However, various solutions are primarily being
developed for English to address this problem, leaving low-resource languages
out of the picture. Bangla is ranked 7th among the top 100 most widely spoken
languages, which motivates us to pay special attention to the Bangla language.
Furthermore, Bangla has a more complex syntactic structure and fewer natural
language processing resources, so it becomes challenging to perform NLP tasks
like incongruity detection and stance detection. To tackle this problem, for
the Bangla language, we offer a graph-based hierarchical dual encoder (BGHDE)
model that learns the content similarity and contradiction between Bangla news
headlines and content paragraphs effectively. The experimental results show
that the proposed Bangla graph-based neural network model achieves above 90%
accuracy on various Bangla news datasets.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 20:57:45 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Palash",
"Md Aminul Haque",
""
],
[
"Khan",
"Akib",
""
],
[
"Islam",
"Kawsarul",
""
],
[
"Nasim",
"MD Abdullah Al",
""
],
[
"Shahjahan",
"Ryan Mohammad Bin",
""
]
] |
new_dataset
| 0.974493 |
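The BGHDE model above is graph-based and well beyond a short snippet, so the sketch below shows only a trivial headline-body consistency baseline for contrast: TF-IDF cosine similarity between headline and body, with the score interpreted as an incongruity signal. The example text, vectorizer settings, and any decision threshold are assumptions, not the paper's method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def incongruity_score(headline: str, body: str) -> float:
    """Higher score means less lexical overlap between headline and body."""
    X = TfidfVectorizer().fit_transform([headline, body])
    return 1.0 - float(cosine_similarity(X)[0, 1])

headline = "Scientists discover a simple trick to double battery life"
body = "The article actually discusses the quarterly earnings of a phone maker."
print(incongruity_score(headline, body))   # close to 1.0 -> likely incongruent
```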
2211.07712
|
Muhammad Nasir Zafar
|
Dr. Omer Beg, Muhammad Nasir Zafar, Waleed Anjum
|
Cloning Ideology and Style using Deep Learning
|
11 pages, 7 figures, 3 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Text generation tasks have attracted researchers' attention in the last few years
because of their large-scale applications. In the past, many researchers focused on
task-based text generation. Our research focuses on text generation based on the
ideology and style of a specific author, including generation on topics that the
author has not written about before. Our trained model requires an input prompt
containing the initial few words of text and produces a few paragraphs of text in
the ideology and style of the author on which the model is trained. Our methodology
is based on a Bi-LSTM model that makes predictions at the character level; during
training, the corpus of a specific author is used along with a ground-truth corpus.
A pre-trained model is used to identify sentences in the ground truth that
contradict the author's corpus, in order to incline our language model toward the
author's ideology. During training, we achieved a perplexity score of 2.23 at the
character level, and the experiments show a perplexity score of around 3 over the
test dataset.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 11:37:19 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Beg",
"Dr. Omer",
""
],
[
"Zafar",
"Muhammad Nasir",
""
],
[
"Anjum",
"Waleed",
""
]
] |
new_dataset
| 0.963094 |
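A minimal character-level Bi-LSTM next-character model in the spirit of the abstract is sketched below. The toy corpus, window size, and layer sizes are illustrative assumptions, and the authors' contradiction-filtering step is not reproduced; the final line shows how character-level perplexity (the quantity reported in the abstract) relates to the cross-entropy loss.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

corpus = "this is a tiny stand-in for an author's corpus. " * 40   # assumed data
chars = sorted(set(corpus))
char2id = {c: i for i, c in enumerate(chars)}
ids = np.array([char2id[c] for c in corpus])

window = 40
X = np.stack([ids[i:i + window] for i in range(len(ids) - window)])
y = ids[window:]                      # next character after each window

model = tf.keras.Sequential([
    layers.Embedding(len(chars), 32),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=1, batch_size=64, verbose=0)

# Character-level perplexity is exp(cross-entropy loss).
loss = model.evaluate(X, y, verbose=0)
print("perplexity:", float(np.exp(loss)))
```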
2211.07737
|
Hira Dhamyal
|
Hira Dhamyal, Benjamin Elizalde, Soham Deshmukh, Huaming Wang, Bhiksha
Raj, Rita Singh
|
Describing emotions with acoustic property prompts for speech emotion
recognition
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Emotions lie on a broad continuum and treating emotions as a discrete number
of classes limits the ability of a model to capture the nuances in the
continuum. The challenge is how to describe the nuances of emotions and how to
enable a model to learn the descriptions. In this work, we devise a method to
automatically create a description (or prompt) for a given audio by computing
acoustic properties, such as pitch, loudness, speech rate, and articulation
rate. We pair a prompt with its corresponding audio using 5 different emotion
datasets. We trained a neural network model using these audio-text pairs. Then,
we evaluate the model using one more dataset. We investigate how the model can
learn to associate the audio with the descriptions, resulting in performance
improvement of Speech Emotion Recognition and Speech Audio Retrieval. We expect
our findings to motivate research describing the broad continuum of emotion.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 20:29:37 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Dhamyal",
"Hira",
""
],
[
"Elizalde",
"Benjamin",
""
],
[
"Deshmukh",
"Soham",
""
],
[
"Wang",
"Huaming",
""
],
[
"Raj",
"Bhiksha",
""
],
[
"Singh",
"Rita",
""
]
] |
new_dataset
| 0.952618 |
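A hedged sketch of the prompt-construction idea in the abstract is given below: a few acoustic properties (pitch, loudness) are computed with librosa and turned into a natural-language description that could be paired with the audio. The thresholds, the omission of speech/articulation rate, and the prompt template are made-up illustrations, not the authors' recipe.

```python
import numpy as np
import librosa

def describe(wav_path: str) -> str:
    y, sr = librosa.load(wav_path, sr=16000)

    # Pitch via pYIN; unvoiced frames are NaN and are ignored in the average.
    f0, voiced_flag, voiced_probs = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    mean_f0 = np.nanmean(f0)

    # Loudness proxy: mean RMS energy in dB.
    rms_db = librosa.amplitude_to_db(librosa.feature.rms(y=y)).mean()

    pitch_word = "high" if mean_f0 > 180 else "low"     # assumed threshold
    loud_word = "loud" if rms_db > -25 else "soft"      # assumed threshold
    return f"this person is speaking in a {pitch_word}-pitched, {loud_word} voice"

# print(describe("utterance.wav"))   # pair the returned prompt with its audio clip
```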
2211.07748
|
Harry Freeman
|
Harry Freeman, Eric Schneider, Chung Hee Kim, Moonyoung Lee, George
Kantor
|
3D Reconstruction-Based Seed Counting of Sorghum Panicles for
Agricultural Inspection
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a method for creating high-quality 3D models of
sorghum panicles for phenotyping in breeding experiments. This is achieved with
a novel reconstruction approach that uses seeds as semantic landmarks in both
2D and 3D. To evaluate the performance, we develop a new metric for assessing
the quality of reconstructed point clouds without having a ground-truth point
cloud. Finally, a counting method is presented where the density of seed
centers in the 3D model allows 2D counts from multiple views to be effectively
combined into a whole-panicle count. We demonstrate that using this method to
estimate seed count and weight for sorghum outperforms count extrapolation from
2D images, an approach used in most state-of-the-art methods for seeds and
grains of comparable size.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 20:51:09 GMT"
}
] | 2022-11-16T00:00:00 |
[
[
"Freeman",
"Harry",
""
],
[
"Schneider",
"Eric",
""
],
[
"Kim",
"Chung Hee",
""
],
[
"Lee",
"Moonyoung",
""
],
[
"Kantor",
"George",
""
]
] |
new_dataset
| 0.970768 |
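The counting idea described above, i.e. using 3D seed centres so that detections from multiple views are not double counted, can be illustrated with a simplified merge step: 3D points that fall within a small radius of each other are treated as the same seed. The merge radius, the input format, and the union-find merging are assumptions, not the paper's reconstruction pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def whole_panicle_count(points_per_view, merge_radius=2.0):
    """points_per_view: list of (N_i, 3) arrays of seed centres in a shared frame."""
    pts = np.vstack(points_per_view)
    tree = cKDTree(pts)
    pairs = tree.query_pairs(r=merge_radius)   # near-duplicate detections

    # Union-find over duplicates so each physical seed is counted once.
    parent = list(range(len(pts)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)
    return len({find(i) for i in range(len(pts))})

view_a = np.array([[0, 0, 0], [10, 0, 0], [20, 0, 0]], dtype=float)
view_b = np.array([[0.5, 0, 0], [10.3, 0, 0]], dtype=float)   # re-detections
print(whole_panicle_count([view_a, view_b]))                   # 3 unique seeds
```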