id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
string (len 9–10) | string (len 2–52, nullable) | string (len 4–6.51k) | string (len 4–246) | string (len 1–523, nullable) | string (len 4–345, nullable) | string (len 11–120, nullable) | string (len 2–243, nullable) | string (len 5–98) | string (9 classes) | string (len 33–3.33k) | list | timestamp[s] | list | string (1 class) | float64 (0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
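The rows below are flattened arXiv metadata records, each carrying a model prediction ("new_dataset") and its probability. A minimal sketch of loading and filtering such a dump, assuming JSON Lines storage and the field names from the header above; the file name is hypothetical:

```python
import json

# Load the metadata dump; "arxiv_predictions.jsonl" is an assumed file name.
rows = []
with open("arxiv_predictions.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

# Keep only confident "new_dataset" predictions, mirroring the probability
# column, whose values in this dump range from 0.95 to 1.
confident = [r for r in rows
             if r.get("prediction") == "new_dataset"
             and r.get("probability", 0.0) >= 0.95]

for r in confident[:5]:
    print(r["id"], "|", r["title"][:60])
```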
2302.06826
|
Wenhao Chai
|
Shidong Cao, Wenhao Chai, Shengyu Hao, Yanting Zhang, Hangyue Chen,
and Gaoang Wang
|
DiffFashion: Reference-based Fashion Design with Structure-aware
Transfer by Diffusion Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-based fashion design with AI techniques has attracted increasing
attention in recent years. We focus on a new fashion design task, where we aim
to transfer a reference appearance image onto a clothing image while preserving
the structure of the clothing image. It is a challenging task since there are
no reference images available for the newly designed output fashion images.
Although diffusion-based image translation or neural style transfer (NST) has
enabled flexible style transfer, it is often difficult to maintain the original
structure of the image realistically during the reverse diffusion, especially
when the referenced appearance image greatly differs from the common clothing
appearance. To tackle this issue, we present a novel diffusion model-based
unsupervised structure-aware transfer method to semantically generate new
clothes from a given clothing image and a reference appearance image.
Specifically, we decouple the foreground clothing using semantic masks
automatically generated from conditioned labels. The mask is further used as
guidance in the denoising process to preserve the structure information.
Moreover, we
use the pre-trained vision Transformer (ViT) for both appearance and structure
guidance. Our experimental results show that the proposed method outperforms
state-of-the-art baseline models, generating more realistic images in the
fashion design task. Code and demo can be found at
https://github.com/Rem105-210/DiffFashion.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 04:45:44 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Cao",
"Shidong",
""
],
[
"Chai",
"Wenhao",
""
],
[
"Hao",
"Shengyu",
""
],
[
"Zhang",
"Yanting",
""
],
[
"Chen",
"Hangyue",
""
],
[
"Wang",
"Gaoang",
""
]
] |
new_dataset
| 0.950305 |
2302.06862
|
Liangwei Yang
|
Jing Ma, Liangwei Yang, Qiong Feng, Weizhi Zhang, Philip S. Yu
|
Graph-based Village Level Poverty Identification
|
5 pages, accepted by TheWebConf 2023
| null |
10.1145/3543507.3583864
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Poverty status identification is the first obstacle to eradicating poverty.
Village-level poverty identification is very challenging due to the arduous
field investigation and insufficient information. The development of the Web
infrastructure and its modeling tools provides fresh approaches to identifying
poor villages. Upon those techniques, we build a village graph for village
poverty status identification. By modeling the village connections as a graph
through the geographic distance, we show the correlation between village
poverty status and its graph topological position and identify two key factors
(Centrality, Homophily Decaying effect) for identifying villages. We further
propose the first graph-based method to identify poor villages. It includes a
global Centrality2Vec module to embed village centrality into the dense vector
and a local graph distance convolution module that captures the decaying
effect. In this paper, we make the first attempt to interpret and identify
village-level poverty from a graph perspective.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 06:58:40 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Ma",
"Jing",
""
],
[
"Yang",
"Liangwei",
""
],
[
"Feng",
"Qiong",
""
],
[
"Zhang",
"Weizhi",
""
],
[
"Yu",
"Philip S.",
""
]
] |
new_dataset
| 0.999065 |
2302.06868
|
Koustava Goswami
|
Koustava Goswami, Lukas Lange, Jun Araki, Heike Adel
|
SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for
Classification in Low-Resource Domains
|
Accepted at EACL 2023 Main Conference
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Prompting pre-trained language models leads to promising results across
natural language processing tasks but is less effective when applied in
low-resource domains, due to the domain gap between the pre-training data and
the downstream task. In this work, we bridge this gap with a novel and
lightweight prompting methodology called SwitchPrompt for the adaptation of
language models trained on datasets from the general domain to diverse
low-resource domains. Using domain-specific keywords with a trainable gated
prompt, SwitchPrompt offers domain-oriented prompting, that is, effective
guidance on the target domains for general-domain language models. Our few-shot
experiments on three text classification benchmarks demonstrate the efficacy of
the general-domain pre-trained language models when used with SwitchPrompt.
They often even outperform their domain-specific counterparts trained with
baseline state-of-the-art prompting methods by up to 10.7% performance increase
in accuracy. This result indicates that SwitchPrompt effectively reduces the
need for domain-specific language model pre-training.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 07:14:08 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Goswami",
"Koustava",
""
],
[
"Lange",
"Lukas",
""
],
[
"Araki",
"Jun",
""
],
[
"Adel",
"Heike",
""
]
] |
new_dataset
| 0.974958 |
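The gated soft prompt described in the SwitchPrompt abstract above can be sketched as follows. The per-token blending rule and tensor shapes are illustrative assumptions; the abstract specifies only that a trainable gate combines domain-specific and general-domain prompting:

```python
import torch
import torch.nn as nn

class GatedSoftPrompt(nn.Module):
    """Illustrative gated soft prompt: a learnable per-token gate blends a
    general-domain prompt with a domain-specific one (assumed design)."""
    def __init__(self, prompt_len: int, hidden: int):
        super().__init__()
        self.general = nn.Parameter(torch.randn(prompt_len, hidden))
        self.domain = nn.Parameter(torch.randn(prompt_len, hidden))
        self.gate = nn.Parameter(torch.zeros(prompt_len, 1))  # gate logits

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate)                     # in (0, 1)
        prompt = g * self.domain + (1.0 - g) * self.general
        batch = token_embeddings.size(0)
        prompt = prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the blended soft prompt to the input embeddings.
        return torch.cat([prompt, token_embeddings], dim=1)
```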
2302.06895
|
Cong Wang
|
Cong Wang, Eric Florin, Hsing-Yin Chang, Jana Thayer, Chun Hong Yoon
|
SpeckleNN: A unified embedding for real-time speckle pattern
classification in X-ray single-particle imaging with limited labeled examples
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With X-ray free-electron lasers (XFELs), it is possible to determine the
three-dimensional structure of noncrystalline nanoscale particles using X-ray
single-particle imaging (SPI) techniques at room temperature. Classifying SPI
scattering patterns, or "speckles", to extract single hits that are needed for
real-time vetoing and three-dimensional reconstruction poses a challenge for
high data rate facilities like European XFEL and LCLS-II-HE. Here, we introduce
SpeckleNN, a unified embedding model for real-time speckle pattern
classification with limited labeled examples that can scale linearly with
dataset size. Trained with twin neural networks, SpeckleNN maps speckle
patterns to a unified embedding vector space, where similarity is measured by
Euclidean distance. We highlight its few-shot classification capability on new
never-seen samples and its robust performance despite only tens of labels per
classification category even in the presence of substantial missing detector
areas. Without the need for excessive manual labeling or even a full detector
image, our classification method offers a great solution for real-time
high-throughput SPI experiments.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 08:29:28 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Wang",
"Cong",
""
],
[
"Florin",
"Eric",
""
],
[
"Chang",
"Hsing-Yin",
""
],
[
"Thayer",
"Jana",
""
],
[
"Yoon",
"Chun Hong",
""
]
] |
new_dataset
| 0.983292 |
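The SpeckleNN abstract above describes a twin-network embedding trained so that Euclidean distance measures speckle-pattern similarity. A minimal sketch of that setup, with a toy CNN encoder and the classic contrastive objective standing in for the paper's exact architecture and loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeckleEmbedder(nn.Module):
    """Toy twin-network encoder: maps a speckle image to a vector whose
    Euclidean distance to other embeddings measures similarity."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, dim),
        )

    def forward(self, x):
        return self.net(x)

def contrastive_loss(za, zb, same_label, margin=1.0):
    # Classic twin-network objective: pull same-class pairs together,
    # push different-class pairs at least `margin` apart.
    d = F.pairwise_distance(za, zb)
    return torch.mean(same_label * d.pow(2)
                      + (1 - same_label) * F.relu(margin - d).pow(2))

emb = SpeckleEmbedder()
za, zb = emb(torch.randn(4, 1, 64, 64)), emb(torch.randn(4, 1, 64, 64))
print(contrastive_loss(za, zb, torch.tensor([1., 0., 1., 0.])).item())
```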
2302.06917
|
Vera Sosnovik
|
Vera Sosnovik, Romaissa Kessi, Maximin Coavoux, Oana Goga
|
On Detecting Policy-Related Political Ads: An Exploratory Analysis of
Meta Ads in 2022 French Election
|
Proceedings of the ACM Web Conference 2023 (WWW '23), May 1--5, 2023,
Austin, TX, USA
| null |
10.1145/3543507.3583875
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online political advertising has become the cornerstone of political
campaigns. The budget spent solely on political advertising in the U.S. has
increased by more than 100% from $700 million during the 2017-2018 U.S.
election cycle to $1.6 billion during the 2020 U.S. presidential elections.
Naturally, the capacity offered by online platforms to micro-target ads with
political content has been worrying lawmakers, journalists, and online
platforms, especially after the 2016 U.S. presidential election, where
Cambridge Analytica targeted voters with political ads congruent with their
personality.
To curb such risks, both online platforms and regulators (through the DSA act
proposed by the European Commission) have agreed that researchers, journalists,
and civil society need to be able to scrutinize the political ads running on
large online platforms. Consequently, online platforms such as Meta and Google
have implemented Ad Libraries that contain information about all political ads
running on their platforms. This is the first step on a long path. Due to the
volume of available data, it is impossible to go through these ads manually,
and we now need automated methods and tools to assist in the scrutiny of
political ads.
In this paper, we focus on political ads that are related to policy.
Understanding which policies politicians or organizations promote and to whom
is essential in determining dishonest representations. This paper proposes
automated methods based on pre-trained models to classify ads in 14 main policy
groups identified by the Comparative Agenda Project (CAP). We discuss several
inherent challenges that arise. Finally, we analyze policy-related ads featured
on Meta platforms during the 2022 French presidential elections period.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 09:04:43 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Sosnovik",
"Vera",
""
],
[
"Kessi",
"Romaissa",
""
],
[
"Coavoux",
"Maximin",
""
],
[
"Goga",
"Oana",
""
]
] |
new_dataset
| 0.99716 |
2302.07036
|
Sairam Sri Vatsavai
|
Sairam Sri Vatsavai, Venkata Sai Praneeth Karempudi, Ishan Thakkar,
Ahmad Salehi, and Todd Hastings
|
SCONNA: A Stochastic Computing Based Optical Accelerator for Ultra-Fast,
Energy-Efficient Inference of Integer-Quantized CNNs
|
To Appear at IPDPS 2023
| null | null | null |
cs.AR cs.AI cs.ET cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The acceleration of a CNN inference task uses convolution operations that are
typically transformed into vector-dot-product (VDP) operations. Several
photonic microring resonator (MRR)-based hardware architectures have been
proposed to accelerate integer-quantized CNNs with remarkably higher throughput
and energy efficiency compared to their electronic counterparts. However, the
existing photonic MRR-based analog accelerators exhibit a very strong trade-off
between the achievable input/weight precision and VDP operation size, which
severely restricts their achievable VDP operation size for the quantized
input/weight precision of 4 bits and higher. The restricted VDP operation size
ultimately suppresses computing throughput to severely diminish the achievable
performance benefits. To address this shortcoming, we for the first time
present a merger of stochastic computing and MRR-based CNN accelerators. To
leverage the innate precision flexibility of stochastic computing, we invent an
MRR-based optical stochastic multiplier (OSM). We employ multiple OSMs in a
cascaded manner using dense wavelength division multiplexing, to forge a novel
Stochastic Computing based Optical Neural Network Accelerator (SCONNA). SCONNA
achieves significantly high throughput and energy efficiency for accelerating
inferences of high-precision quantized CNNs. Our evaluation for the inference
of four modern CNNs at 8-bit input/weight precision indicates that SCONNA
provides improvements of up to 66.5x, 90x, and 91x in frames-per-second (FPS),
FPS/W and FPS/W/mm2, respectively, on average over two photonic MRR-based
analog CNN accelerators from prior work, with Top-1 accuracy drop of only up to
0.4% for large CNNs and up to 1.5% for small CNNs. We developed a
transaction-level, event-driven python-based simulator for the evaluation of
SCONNA and other accelerators (https://github.com/uky-UCAT/SC_ONN_SIM.git).
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 13:35:15 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Vatsavai",
"Sairam Sri",
""
],
[
"Karempudi",
"Venkata Sai Praneeth",
""
],
[
"Thakkar",
"Ishan",
""
],
[
"Salehi",
"Ahmad",
""
],
[
"Hastings",
"Todd",
""
]
] |
new_dataset
| 0.966889 |
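SCONNA builds on stochastic computing, where a value in [0, 1] is encoded as the density of ones in a random bitstream and multiplication reduces to a bitwise AND. A minimal software sketch of that principle; the optical MRR implementation itself is not modeled here:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_bitstream(p: float, n_bits: int) -> np.ndarray:
    """Encode a value in [0, 1] as a unipolar stochastic bitstream."""
    return (rng.random(n_bits) < p).astype(np.uint8)

def sc_multiply(a: float, b: float, n_bits: int = 4096) -> float:
    """Multiply two unipolar values by ANDing their bitstreams.
    Longer streams trade latency for precision, which is the innate
    precision flexibility the abstract refers to."""
    stream = to_bitstream(a, n_bits) & to_bitstream(b, n_bits)
    return stream.mean()

print(sc_multiply(0.75, 0.5))  # ~0.375, up to stochastic noise
```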
2302.07055
|
Fangwen Mu
|
Fangwen Mu, Xiao Chen, Lin Shi, Song Wang, Qing Wang
|
Developer-Intent Driven Code Comment Generation
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing automatic code comment generators mainly focus on producing a
general description of functionality for a given code snippet without
considering developer intentions. However, in real-world practice, comments are
complicated, which often contain information reflecting various intentions of
developers, e.g., functionality summarization, design rationale, implementation
details, code properties, etc. To bridge the gap between automatic code comment
generation and real-world comment practice, we define Developer-Intent Driven
Code Comment Generation, which can generate intent-aware comments for the same
source code with different intents. To tackle this challenging task, we propose
DOME, an approach that utilizes Intent-guided Selective Attention to explicitly
select intent-relevant information from the source code, and produces various
comments reflecting different intents. Our approach is evaluated on two
real-world Java datasets, and the experimental results show that our approach
outperforms the state-of-the-art baselines. A human evaluation also confirms
the significant potential of applying DOME in practical usage, enabling
developers to comment code effectively according to their own needs.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 14:17:55 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Mu",
"Fangwen",
""
],
[
"Chen",
"Xiao",
""
],
[
"Shi",
"Lin",
""
],
[
"Wang",
"Song",
""
],
[
"Wang",
"Qing",
""
]
] |
new_dataset
| 0.995479 |
2302.07104
|
Zahra Azad
|
Zahra Azad, Guowei Yang, Rashmi Agrawal, Daniel Petrisko, Michael
Taylor, Ajay Joshi
|
RISE: RISC-V SoC for En/decryption Acceleration on the Edge for
Homomorphic Encryption
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Today edge devices commonly connect to the cloud to use its storage and
compute capabilities. This leads to security and privacy concerns about user
data. Homomorphic Encryption (HE) is a promising solution to address the data
privacy problem as it allows arbitrarily complex computations on encrypted data
without ever needing to decrypt it. While there has been a lot of work on
accelerating HE computations in the cloud, little attention has been paid to
the message-to-ciphertext and ciphertext-to-message conversion operations on
the edge. In this work, we profile the edge-side conversion operations, and our
analysis shows that, during conversion, the error sampling, encryption, and
decryption operations are the bottlenecks. To overcome these bottlenecks, we
present RISE, an area and energy-efficient RISC-V SoC. RISE leverages an
efficient and lightweight pseudo-random number generator core and combines it
with fast sampling techniques to accelerate the error sampling operations. To
accelerate the encryption and decryption operations, RISE uses scalable,
data-level parallelism to implement the number theoretic transform operation,
the main bottleneck within the encryption and decryption operations. In
addition, RISE saves area by implementing a unified en/decryption datapath, and
efficiently exploits techniques like memory reuse and data reordering to
utilize a minimal amount of on-chip memory. We evaluate RISE using a complete
RTL design containing a RISC-V processor interfaced with our accelerator. Our
analysis reveals that for message-to-ciphertext and ciphertext-to-message
conversion, using RISE leads to solutions that are up to 6191.19X and 2481.44X
more energy-efficient, respectively, than using just the RISC-V processor.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 14:58:46 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Azad",
"Zahra",
""
],
[
"Yang",
"Guowei",
""
],
[
"Agrawal",
"Rashmi",
""
],
[
"Petrisko",
"Daniel",
""
],
[
"Taylor",
"Michael",
""
],
[
"Joshi",
"Ajay",
""
]
] |
new_dataset
| 0.961194 |
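The RISE abstract above identifies the number theoretic transform (NTT) as the main bottleneck inside encryption and decryption. A naive O(n^2) NTT over a toy parameter set (q = 17, n = 8, omega = 2; textbook values, not RISE's) illustrates the operation being accelerated:

```python
# Toy number theoretic transform: a DFT over the integers modulo a prime.
Q, N, OMEGA = 17, 8, 2  # 2 is a primitive 8th root of unity mod 17

def ntt(a):
    """Naive forward NTT: X[k] = sum_j a[j] * omega^(j*k) mod q."""
    return [sum(a[j] * pow(OMEGA, j * k, Q) for j in range(N)) % Q
            for k in range(N)]

def intt(x):
    """Inverse NTT: use omega^-1 and scale by n^-1 (Fermat inverses)."""
    inv_omega = pow(OMEGA, Q - 2, Q)
    inv_n = pow(N, Q - 2, Q)
    return [(inv_n * sum(x[k] * pow(inv_omega, j * k, Q) for k in range(N))) % Q
            for j in range(N)]

a = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(ntt(a)) == a  # round-trip recovers the input
```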
2302.07120
|
Zhangyang Gao
|
Zhangyang Gao, Yuqi Hu, Cheng Tan, Stan Z. Li
|
PrefixMol: Target- and Chemistry-aware Molecule Design via Prefix
Embedding
| null | null | null | null |
cs.AI cs.LG q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Is there a unified model for generating molecules considering different
conditions, such as binding pockets and chemical properties? Although
target-aware generative models have made significant advances in drug design,
they do not consider chemistry conditions and cannot guarantee the desired
chemical properties. Unfortunately, merging the target-aware and chemical-aware
models into a unified model to meet customized requirements may lead to the
problem of negative transfer. Inspired by the success of multi-task learning in
the NLP area, we use prefix embeddings to provide a novel generative model that
considers both the targeted pocket's circumstances and a variety of chemical
properties. All conditional information is represented as learnable features,
which the generative model subsequently employs as a contextual prompt.
Experiments show that our model exhibits good controllability in both single
and multi-conditional molecular generation. The controllability enables us to
outperform previous structure-based drug design methods. More interestingly, we
inspect the attention mechanism and reveal the coupling relationships between
conditions, providing guidance for multi-conditional molecule generation.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 15:27:47 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Gao",
"Zhangyang",
""
],
[
"Hu",
"Yuqi",
""
],
[
"Tan",
"Cheng",
""
],
[
"Li",
"Stan Z.",
""
]
] |
new_dataset
| 0.997486 |
2302.07159
|
Kathleen Fraser
|
Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi
|
A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the
Input is Under-Specified?
|
Appearing in the AAAI 2023 Workshop on Creative AI Across Modalities
| null | null | null |
cs.CY cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As text-to-image systems continue to grow in popularity with the general
public, questions have arisen about bias and diversity in the generated images.
Here, we investigate properties of images generated in response to prompts
which are visually under-specified, but contain salient social attributes
(e.g., 'a portrait of a threatening person' versus 'a portrait of a friendly
person'). Grounding our work in social cognition theory, we find that in many
cases, images contain similar demographic biases to those reported in the
stereotype literature. However, trends are inconsistent across different models
and further investigation is warranted.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 16:11:06 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Fraser",
"Kathleen C.",
""
],
[
"Kiritchenko",
"Svetlana",
""
],
[
"Nejadgholi",
"Isar",
""
]
] |
new_dataset
| 0.975958 |
2302.07168
|
Christian Schulz
|
Ernestine Gro{\ss}mann, Jonas Sauer, Christian Schulz, Patrick Steil
|
Arc-Flags Meet Trip-Based Public Transit Routing
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We present Arc-Flag TB, a journey planning algorithm for public transit
networks which combines Trip-Based Public Transit Routing (TB) with the
Arc-Flags speedup technique. Compared to previous attempts to apply Arc-Flags
to public transit networks, which saw limited success, our approach uses
stronger pruning rules to reduce the search space. Our experiments show that
Arc-Flag TB achieves a speedup of up to two orders of magnitude over TB,
offering query times of less than a millisecond even on large countrywide
networks. Compared to the state-of-the-art speedup technique Trip-Based Public
Transit Routing Using Condensed Search Trees (TB-CST), our algorithm achieves
similar query times but requires significantly less additional memory. Other
state-of-the-art algorithms which achieve even faster query times, e.g., Public
Transit Labeling, require enormous memory usage. In contrast, Arc-Flag TB
offers a tradeoff between query performance and memory usage due to the fact
that the number of regions in the network partition required by our algorithm
is a configurable parameter. We also identify an issue in the transfer
precomputation of TB that affects both TB-CST and Arc-Flag TB, leading to
incorrect answers for some queries. This has not been previously recognized by
the author of TB-CST. We discuss how this issue can be resolved in future work.
Currently, Arc-Flag TB answers 1-6% of queries incorrectly, compared to
over 20% for TB-CST on some networks.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 16:29:51 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Großmann",
"Ernestine",
""
],
[
"Sauer",
"Jonas",
""
],
[
"Schulz",
"Christian",
""
],
[
"Steil",
"Patrick",
""
]
] |
new_dataset
| 0.999148 |
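Arc-Flags, the speedup technique combined with TB above, precomputes one flag per arc and partition region and prunes any arc whose flag for the target's region is unset. A minimal sketch of flag-based pruning inside Dijkstra's algorithm on a plain graph; the paper's integration with trip-based transit routing is more involved:

```python
import heapq

def arc_flag_dijkstra(graph, source, target, region_of):
    """Dijkstra with arc-flag pruning.
    graph: {u: [(v, weight, flags)]}, where flags is the set of region ids
    for which the arc lies on some shortest path into that region."""
    target_region = region_of[target]
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w, flags in graph.get(u, []):
            if target_region not in flags:  # pruned by the arc flag
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

g = {"a": [("b", 2, {1}), ("c", 5, set())], "b": [("c", 1, {1})]}
print(arc_flag_dijkstra(g, "a", "c", {"a": 0, "b": 1, "c": 1}))  # 3
```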
2302.07229
|
Patr\'icia Matsubara
|
Patr\'icia Matsubara, Igor Steinmacher, Bruno Gadelha, and Tayana
Conte
|
Moving on from the software engineers' gambit: an approach to support
the defense of software effort estimates
|
12 pages, 3 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pressure for higher productivity and faster delivery is increasingly
pervading software organizations. This can lead software engineers to act like
chess players playing a gambit -- making sacrifices of their technically sound
estimates, thus submitting their teams to time pressure. In turn, time pressure
can have varied detrimental effects, such as poor product quality and emotional
distress, decreasing productivity, which leads to more time pressure and
delays: a hard-to-stop vicious cycle. This reveals a need for moving on from
the more passive strategy of yielding to pressure to a more active one of
defending software estimates. Therefore, we propose an approach to support
software estimators in acquiring knowledge on how to carry out such defense, by
introducing negotiation principles encapsulated in a set of defense lenses,
presented through a digital simulation. We evaluated the proposed approach
through a controlled experiment with software practitioners from different
companies. We collected data on participants' attitudes, subjective norms,
perceived behavioral control, and intentions to perform the defense of their
estimates in light of the Theory of Planned Behavior. We employed a frequentist
and a bayesian approach to data analysis. Results show improved scores among
experimental group participants after engaging with the digital simulation and
learning about the lenses. They were also more inclined to choose a defense
action when facing pressure scenarios than a control group exposed to questions
to reflect on the reasons and outcomes of pressure over estimates. Qualitative
evidence reveals that practitioners perceived the set of lenses as useful in
their current work environments. Collectively, these results show the
effectiveness of the proposed approach and its perceived relevance for the
industry, despite the low amount of time required to engage with it.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 18:19:15 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Matsubara",
"Patrícia",
""
],
[
"Steinmacher",
"Igor",
""
],
[
"Gadelha",
"Bruno",
""
],
[
"Conte",
"Tayana",
""
]
] |
new_dataset
| 0.984884 |
2302.07232
|
Sandro Pezzelle
|
Lars Buijtelaar, Sandro Pezzelle
|
A Psycholinguistic Analysis of BERT's Representations of Compounds
|
To appear in the Proceedings of EACL 2023 (main conference)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work studies the semantic representations learned by BERT for compounds,
that is, expressions such as sunlight or bodyguard. We build on recent studies
that explore semantic information in Transformers at the word level and test
whether BERT aligns with human semantic intuitions when dealing with
expressions (e.g., sunlight) whose overall meaning depends -- to a varying
extent -- on the semantics of the constituent words (sun, light). We leverage a
dataset that includes human judgments on two psycholinguistic measures of
compound semantic analysis: lexeme meaning dominance (LMD; quantifying the
weight of each constituent toward the compound meaning) and semantic
transparency (ST; evaluating the extent to which the compound meaning is
recoverable from the constituents' semantics). We show that BERT-based measures
moderately align with human intuitions, especially when using contextualized
representations, and that LMD is overall more predictable than ST. Contrary to
the results reported for 'standard' words, higher, more contextualized layers
are the best at representing compound meaning. These findings shed new light on
the abilities of BERT in dealing with fine-grained semantic phenomena.
Moreover, they can provide insights into how speakers represent compounds.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 18:23:15 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Buijtelaar",
"Lars",
""
],
[
"Pezzelle",
"Sandro",
""
]
] |
new_dataset
| 0.998724 |
2302.07245
|
Shiv Ram Dubey
|
Laxman Kumarapu, Shiv Ram Dubey, Snehasis Mukherjee, Parkhi Mohan,
Sree Pragna Vinnakoti, Subhash Karthikeya
|
WSD: Wild Selfie Dataset for Face Recognition in Selfie Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rise of handy smartphones in recent years, capturing selfie images
has become a common trend. Hence, efficient approaches are required for
recognising faces in selfie images. Due to the short distance between the
camera and the face in selfie images, and the different visual effects offered
by selfie apps, face recognition becomes more challenging for existing
approaches. A dataset needs to be developed to encourage the study of face
recognition in selfie images. In order to alleviate this problem and to
facilitate the research on selfie face images, we develop a challenging Wild
Selfie Dataset (WSD) where the images are captured from the selfie cameras of
different smart phones, unlike existing datasets where most of the images are
captured in controlled environment. The WSD dataset contains 45,424 images from
42 individuals (i.e., 24 female and 18 male subjects), which are divided into
40,862 training and 4,562 test images. The average number of images per subject
is 1,082, with the minimum and maximum numbers of images for any subject being
518 and 2,634, respectively. The proposed dataset consists of several challenges,
including but not limited to augmented reality filtering, mirrored images,
occlusion, illumination, scale, expressions, view-point, aspect ratio, blur,
partial faces, rotation, and alignment. We compare the proposed dataset with
existing benchmark datasets in terms of different characteristics. The
complexity of WSD dataset is also observed experimentally, where the
performance of the existing state-of-the-art face recognition methods is poor
on WSD dataset, compared to the existing datasets. Hence, the proposed WSD
dataset opens up new challenges in the area of face recognition and can be
beneficial to the community to study the specific challenges related to selfie
images and develop improved methods for face recognition in selfie images.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 18:43:21 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Kumarapu",
"Laxman",
""
],
[
"Dubey",
"Shiv Ram",
""
],
[
"Mukherjee",
"Snehasis",
""
],
[
"Mohan",
"Parkhi",
""
],
[
"Vinnakoti",
"Sree Pragna",
""
],
[
"Karthikeya",
"Subhash",
""
]
] |
new_dataset
| 0.999775 |
2302.07257
|
Sheng Wang
|
Sheng Wang, Zihao Zhao, Xi Ouyang, Qian Wang, Dinggang Shen
|
ChatCAD: Interactive Computer-Aided Diagnosis on Medical Image using
Large Language Models
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have recently demonstrated their potential in
clinical applications, providing valuable medical knowledge and advice. For
example, a large dialog LLM like ChatGPT has successfully passed part of the US
medical licensing exam. However, LLMs currently have difficulty processing
images, making it challenging to interpret information from medical images,
which are rich in information that supports clinical decisions. On the other
hand, computer-aided diagnosis (CAD) networks for medical images have seen
significant success in the medical field by using advanced deep-learning
algorithms to support clinical decision-making. This paper presents a method
for integrating LLMs into medical-image CAD networks. The proposed framework
uses LLMs to enhance the output of multiple CAD networks, such as diagnosis
networks, lesion segmentation networks, and report generation networks, by
summarizing and reorganizing the information presented in natural language text
format. The goal is to merge the strengths of LLMs' medical domain knowledge
and logical reasoning with the vision understanding capability of existing
medical-image CAD models to create a more user-friendly and understandable
system for patients compared to conventional CAD systems. In the future, LLMs'
medical knowledge can also be used to improve the performance of vision-based
medical-image CAD models.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 18:54:06 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Wang",
"Sheng",
""
],
[
"Zhao",
"Zihao",
""
],
[
"Ouyang",
"Xi",
""
],
[
"Wang",
"Qian",
""
],
[
"Shen",
"Dinggang",
""
]
] |
new_dataset
| 0.96369 |
2105.10087
|
Zhehua Mao
|
Zhehua Mao, Liang Zhao, Shoudong Huang, Yiting Fan, and Alex Pui-Wai
Lee
|
DSR: Direct Simultaneous Registration for Multiple 3D Images
|
10 pages, 3 figures, The 25th International Conference on Medical
Image Computing and Computer Assisted Intervention, MICCAI 2022
|
Medical Image Computing and Computer Assisted Intervention (2022)
|
10.1007/978-3-031-16446-0_10
| null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel algorithm named Direct Simultaneous Registration
(DSR) that registers a collection of 3D images in a simultaneous fashion
without specifying any reference image, feature extraction and matching, or
information loss or reuse. The algorithm optimizes the global poses of local
image frames by maximizing the similarity between a predefined panoramic image
and local images. Although we formulate the problem as a Direct Bundle
Adjustment (DBA) that jointly optimizes the poses of local frames and the
intensities of the panoramic image, by investigating the independence of pose
estimation from the panoramic image in the solving process, DSR is proposed to
solve the poses only and proved to be able to obtain the same optimal poses as
DBA. The proposed method is particularly suitable for the scenarios where
distinct features are not available, such as Transesophageal Echocardiography
(TEE) images. DSR is evaluated by comparing it with four widely used methods
via simulated and in-vivo 3D TEE images. It is shown that the proposed method
outperforms these four methods in terms of accuracy and requires much fewer
computational resources than the state-of-the-art accumulated pairwise
estimates (APE).
|
[
{
"version": "v1",
"created": "Fri, 21 May 2021 01:42:11 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2022 07:01:56 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Mao",
"Zhehua",
""
],
[
"Zhao",
"Liang",
""
],
[
"Huang",
"Shoudong",
""
],
[
"Fan",
"Yiting",
""
],
[
"Lee",
"Alex Pui-Wai",
""
]
] |
new_dataset
| 0.986899 |
2203.04232
|
Yan Xia
|
Yan Xia, Qiangqiang Wu, Wei Li, Antoni B. Chan, Uwe Stilla
|
A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds
|
Accepted by IEEE Transactions on Intelligent Transportation Systems
2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent works on 3D single object tracking treat the task as a target-specific
3D detection task, where an off-the-shelf 3D detector is commonly employed for
the tracking. However, it is non-trivial to perform accurate target-specific
detection since the point cloud of objects in raw LiDAR scans is usually sparse
and incomplete. In this paper, we address this issue by explicitly leveraging
temporal motion cues and propose DMT, a Detector-free Motion-prediction-based
3D Tracking network that completely removes the usage of complicated 3D
detectors and is lighter, faster, and more accurate than previous trackers.
Specifically, the motion prediction module is first introduced to estimate a
potential target center of the current frame in a point-cloud-free manner.
Then, an explicit voting module is proposed to directly regress the 3D box from
the estimated target center. Extensive experiments on KITTI and NuScenes
datasets demonstrate that our DMT can still achieve better performance (~10%
improvement on the NuScenes dataset) and a faster tracking speed (i.e., 72
FPS) than state-of-the-art approaches without applying any complicated 3D
detectors. Our code is released at \url{https://github.com/jimmy-dq/DMT}
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 17:49:07 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Feb 2023 17:40:17 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Xia",
"Yan",
""
],
[
"Wu",
"Qiangqiang",
""
],
[
"Li",
"Wei",
""
],
[
"Chan",
"Antoni B.",
""
],
[
"Stilla",
"Uwe",
""
]
] |
new_dataset
| 0.999408 |
2205.11081
|
Rifat Shahriyar
|
Abhik Bhattacharjee, Tahmid Hasan, Wasi Uddin Ahmad, Rifat Shahriyar
|
BanglaNLG and BanglaT5: Benchmarks and Resources for Evaluating
Low-Resource Natural Language Generation in Bangla
|
Findings of EACL 2023 (camera-ready)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents BanglaNLG, a comprehensive benchmark for evaluating
natural language generation (NLG) models in Bangla, a widely spoken yet
low-resource language. We aggregate six challenging conditional text generation
tasks under the BanglaNLG benchmark, introducing a new dataset on dialogue
generation in the process. Furthermore, using a clean corpus of 27.5 GB of
Bangla data, we pretrain BanglaT5, a sequence-to-sequence Transformer language
model for Bangla. BanglaT5 achieves state-of-the-art performance in all of
these tasks, outperforming several multilingual models by up to 9% absolute
gain and 32% relative gain. We are making the new dialogue dataset and the
BanglaT5 model publicly available at https://github.com/csebuetnlp/BanglaNLG in
the hope of advancing future research on Bangla NLG.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 06:54:56 GMT"
},
{
"version": "v2",
"created": "Tue, 24 May 2022 01:33:34 GMT"
},
{
"version": "v3",
"created": "Sun, 22 Jan 2023 19:08:58 GMT"
},
{
"version": "v4",
"created": "Sun, 12 Feb 2023 04:14:24 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Bhattacharjee",
"Abhik",
""
],
[
"Hasan",
"Tahmid",
""
],
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Shahriyar",
"Rifat",
""
]
] |
new_dataset
| 0.99979 |
2206.04564
|
Zilong Chen
|
Shangbin Feng, Zhaoxuan Tan, Herun Wan, Ningnan Wang, Zilong Chen,
Binchi Zhang, Qinghua Zheng, Wenqian Zhang, Zhenyu Lei, Shujie Yang, Xinshun
Feng, Qingyue Zhang, Hongrui Wang, Yuhan Liu, Yuyang Bai, Heng Wang, Zijian
Cai, Yanbo Wang, Lijing Zheng, Zihan Ma, Jundong Li, Minnan Luo
|
TwiBot-22: Towards Graph-Based Twitter Bot Detection
|
NeurIPS 2022, Datasets and Benchmarks Track
| null | null | null |
cs.SI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Twitter bot detection has become an increasingly important task to combat
misinformation, facilitate social media moderation, and preserve the integrity
of the online discourse. State-of-the-art bot detection methods generally
leverage the graph structure of the Twitter network, and they exhibit promising
performance when confronting novel Twitter bots that traditional methods fail
to detect. However, very few of the existing Twitter bot detection datasets are
graph-based, and even these few graph-based datasets suffer from limited
dataset scale, incomplete graph structure, as well as low annotation quality.
In fact, the lack of a large-scale graph-based Twitter bot detection benchmark
that addresses these issues has seriously hindered the development and
evaluation of novel graph-based bot detection approaches. In this paper, we
propose TwiBot-22, a comprehensive graph-based Twitter bot detection benchmark
that presents the largest dataset to date, provides diversified entities and
relations on the Twitter network, and has considerably better annotation
quality than existing datasets. In addition, we re-implement 35 representative
Twitter bot detection baselines and evaluate them on 9 datasets, including
TwiBot-22, to promote a fair comparison of model performance and a holistic
understanding of research progress. To facilitate further research, we
consolidate all implemented codes and datasets into the TwiBot-22 evaluation
framework, where researchers could consistently evaluate new models and
datasets. The TwiBot-22 Twitter bot detection benchmark and evaluation
framework are publicly available at https://twibot22.github.io/
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 15:23:37 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Jun 2022 09:05:30 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Aug 2022 09:35:29 GMT"
},
{
"version": "v4",
"created": "Mon, 26 Sep 2022 02:01:01 GMT"
},
{
"version": "v5",
"created": "Tue, 11 Oct 2022 01:55:27 GMT"
},
{
"version": "v6",
"created": "Sun, 12 Feb 2023 10:16:29 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Feng",
"Shangbin",
""
],
[
"Tan",
"Zhaoxuan",
""
],
[
"Wan",
"Herun",
""
],
[
"Wang",
"Ningnan",
""
],
[
"Chen",
"Zilong",
""
],
[
"Zhang",
"Binchi",
""
],
[
"Zheng",
"Qinghua",
""
],
[
"Zhang",
"Wenqian",
""
],
[
"Lei",
"Zhenyu",
""
],
[
"Yang",
"Shujie",
""
],
[
"Feng",
"Xinshun",
""
],
[
"Zhang",
"Qingyue",
""
],
[
"Wang",
"Hongrui",
""
],
[
"Liu",
"Yuhan",
""
],
[
"Bai",
"Yuyang",
""
],
[
"Wang",
"Heng",
""
],
[
"Cai",
"Zijian",
""
],
[
"Wang",
"Yanbo",
""
],
[
"Zheng",
"Lijing",
""
],
[
"Ma",
"Zihan",
""
],
[
"Li",
"Jundong",
""
],
[
"Luo",
"Minnan",
""
]
] |
new_dataset
| 0.989859 |
2206.04936
|
Shitao Li
|
Shitao Li, Minjia Shi, Huizhou Liu
|
Several constructions of optimal LCD codes over small finite fields
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear complementary dual (LCD) codes are linear codes which intersect their
dual codes trivially, which have been of interest and extensively studied due
to their practical applications in computational complexity and information
protection. In this paper, we give some methods for constructing LCD codes over
small finite fields by modifying some typical methods for constructing linear
codes. We show that all odd-like binary LCD codes, ternary LCD codes and
quaternary Hermitian LCD codes can be constructed using the modified methods.
Our results improve the known lower bounds on the largest minimum distances of
LCD codes. Furthermore, we give two counterexamples to disprove the conjecture
proposed by Bouyuklieva (Des. Codes Cryptogr. 89(11): 2445-2461, 2021).
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 08:19:27 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Oct 2022 03:57:01 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Feb 2023 04:23:05 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Li",
"Shitao",
""
],
[
"Shi",
"Minjia",
""
],
[
"Liu",
"Huizhou",
""
]
] |
new_dataset
| 0.998533 |
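For binary LCD codes such as those constructed above, Massey's classical criterion states that a linear code with generator matrix G is LCD exactly when G G^T is nonsingular over the field. A small check over GF(2):

```python
import numpy as np

def is_invertible_gf2(m: np.ndarray) -> bool:
    """Gaussian elimination mod 2 to test nonsingularity."""
    m = m.copy() % 2
    n = m.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r, col]), None)
        if pivot is None:
            return False          # no pivot: singular
        m[[col, pivot]] = m[[pivot, col]]
        for r in range(n):
            if r != col and m[r, col]:
                m[r] = (m[r] + m[col]) % 2
    return True

def is_lcd_binary(G: np.ndarray) -> bool:
    # Massey's criterion: C is LCD iff G G^T is invertible over GF(2).
    return is_invertible_gf2((G @ G.T) % 2)

print(is_lcd_binary(np.array([[1, 1, 1]])))  # True: length-3 repetition code
print(is_lcd_binary(np.array([[1, 1]])))     # False: C equals its dual here
```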
2207.04858
|
Jinbin Bai
|
Jinbin Bai, Chunhui Liu, Feiyue Ni, Haofan Wang, Mengying Hu, Xiaofeng
Guo, Lele Cheng
|
LaT: Latent Translation with Cycle-Consistency for Video-Text Retrieval
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video-text retrieval is a class of cross-modal representation learning
problems, where the goal is to select the video which corresponds to the text
query between a given text query and a pool of candidate videos. The
contrastive paradigm of vision-language pretraining has shown promising success
with large-scale datasets and unified transformer architecture, and
demonstrated the power of a joint latent space. Despite this, the intrinsic
divergence between the visual domain and textual domain is still far from being
eliminated, and projecting different modalities into a joint latent space might
result in distortion of the information within a single modality. To
overcome the above issue, we present a novel mechanism for learning the
translation relationship from a source modality space $\mathcal{S}$ to a target
modality space $\mathcal{T}$ without the need for a joint latent space, which
bridges the gap between visual and textual domains. Furthermore, to keep cycle
consistency between translations, we adopt a cycle loss involving both forward
translations from $\mathcal{S}$ to the predicted target space $\mathcal{T'}$,
and backward translations from $\mathcal{T'}$ back to $\mathcal{S}$. Extensive
experiments conducted on MSR-VTT, MSVD, and DiDeMo datasets demonstrate the
superiority and effectiveness of our LaT approach compared with vanilla
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 13:37:32 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 18:00:34 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Bai",
"Jinbin",
""
],
[
"Liu",
"Chunhui",
""
],
[
"Ni",
"Feiyue",
""
],
[
"Wang",
"Haofan",
""
],
[
"Hu",
"Mengying",
""
],
[
"Guo",
"Xiaofeng",
""
],
[
"Cheng",
"Lele",
""
]
] |
new_dataset
| 0.98694 |
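The LaT abstract above replaces a shared joint latent space with translators between the modality spaces plus a cycle loss. A toy sketch with placeholder MLP translators and squared error standing in for the paper's distance functions:

```python
import torch
import torch.nn as nn

dim_s, dim_t = 512, 512  # source (video) and target (text) embedding sizes
F_net = nn.Sequential(nn.Linear(dim_s, 1024), nn.ReLU(), nn.Linear(1024, dim_t))
G_net = nn.Sequential(nn.Linear(dim_t, 1024), nn.ReLU(), nn.Linear(1024, dim_s))

def lat_loss(s, t):
    """Forward translation loss (S -> T') plus cycle loss (T' -> S)."""
    t_pred = F_net(s)
    forward = torch.mean((t_pred - t) ** 2)
    cycle = torch.mean((G_net(t_pred) - s) ** 2)
    return forward + cycle

s = torch.randn(8, dim_s)  # toy video embeddings
t = torch.randn(8, dim_t)  # paired toy text embeddings
print(lat_loss(s, t).item())
```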
2207.12536
|
James Avery
|
James Avery, Mark Runciman, Cristina Fiani, Elena Monfort Sanchez,
Saina Akhond, Zhuang Liu, Kirill Aristovich and George Mylonas
|
Lumen Shape Reconstruction using a Soft Robotic Balloon Catheter and
Electrical Impedance Tomography
|
Published version in IROS 2022 The IEEE/RSJ International Conference
on Intelligent Robots and Systems. Improved Figure 3, discussion and more
concise methods section
| null |
10.1109/IROS47612.2022.9981150
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Incorrectly sized balloon catheters can lead to increased post-surgical
complications, yet even with preoperative imaging, correct selection remains a
challenge. With limited feedback during surgery, it is difficult to verify
correct deployment. We propose the use of integrated impedance measurements and
Electrical Impedance Tomography (EIT) imaging to assess the deformation of the
balloon and determine the size and shape of the surrounding lumen. Previous
work using single impedance measurements, or pressure data and analytical
models, whilst demonstrating high sizing accuracy, have assumed a circular
cross section. Here we extend these methods by adding a multitude of electrodes
to detect elliptical and occluded lumen and obtain EIT images to localise
deformations. Using a 14 Fr (5.3 mm) catheter as an example, numerical
simulations were performed to find the optimal electrode configuration of two
rings of 8 electrodes spaced 10 mm apart. The simulations predicted that the
maximum detectable aspect ratio decreased from 0.9 for a 14mm balloon to 0.5 at
30mm. The sizing and ellipticity detection results were verified
experimentally. A prototype robotic balloon catheter was constructed to
automatically inflate a compliant balloon while simultaneously recording EIT
and pressure data. Data were collected in experiments replicating stenotic
vessels with an elliptical and asymmetrical profile, and the widening of a
lumen during angioplasty. After calibration, the system was able to correctly
localise the occlusion and detect aspect ratios of 0.75. EIT images further
localised the occlusion and visualised the dilation of the lumen during balloon
inflation.
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 21:17:40 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 14:00:21 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Avery",
"James",
""
],
[
"Runciman",
"Mark",
""
],
[
"Fiani",
"Cristina",
""
],
[
"Sanchez",
"Elena Monfort",
""
],
[
"Akhond",
"Saina",
""
],
[
"Liu",
"Zhuang",
""
],
[
"Aristovich",
"Kirill",
""
],
[
"Mylonas",
"George",
""
]
] |
new_dataset
| 0.995244 |
2208.03879
|
Wei Luo
|
Wei Luo, Tongzhi Niu, Lixin Tang, Wenyong Yu, Bin Li
|
Clear Memory-Augmented Auto-Encoder for Surface Defect Detection
|
12 pages
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In surface defect detection, due to the extreme imbalance in the number of
positive and negative samples, positive-samples-based anomaly detection methods
have received more and more attention. Specifically, reconstruction-based
methods are the most popular. However, existing methods either struggle to
repair abnormal foregrounds or fail to reconstruct clear backgrounds.
Therefore, we propose a clear memory-augmented auto-encoder (CMA-AE). First, we
propose a novel clear memory-augmented module (CMAM), which combines the
encoding and memory encoding in a forgetting-and-inputting fashion, thereby
repairing abnormal foregrounds and preserving clear backgrounds. Secondly, a
general artificial anomaly generation algorithm (GAAGA) is proposed to simulate
anomalies that are as realistic and feature-rich as possible. Lastly, we
propose a novel multi-scale feature residual detection method (MSFR) for defect
segmentation, which
makes the defect location more accurate. Extensive comparison experiments
demonstrate that CMA-AE achieves state-of-the-art detection accuracy and shows
great potential in industrial applications.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 02:39:03 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 05:34:59 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Luo",
"Wei",
""
],
[
"Niu",
"Tongzhi",
""
],
[
"Tang",
"Lixin",
""
],
[
"Yu",
"Wenyong",
""
],
[
"Li",
"Bin",
""
]
] |
new_dataset
| 0.99711 |
2209.13097
|
Akhil Padmanabha
|
Akhil Padmanabha, Qin Wang, Daphne Han, Jashkumar Diyora, Kriti
Kacker, Hamza Khalid, Liang-Jung Chen, Carmel Majidi and Zackory Erickson
|
HAT: Head-Worn Assistive Teleoperation of Mobile Manipulators
|
Project Website: https://sites.google.com/view/hat-teleop/home
| null | null | null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile manipulators in the home can provide increased autonomy to individuals
with severe motor impairments, who often cannot complete activities of daily
living (ADLs) without the help of a caregiver. Teleoperation of an assistive
mobile manipulator could enable an individual with motor impairments to
independently perform self-care and household tasks, yet limited motor function
can impede one's ability to interface with a robot. In this work, we present a
unique inertial-based wearable assistive interface, embedded in a familiar
head-worn garment, for individuals with severe motor impairments to teleoperate
and perform physical tasks with a mobile manipulator. We evaluate this wearable
interface with both able-bodied (N = 16) and individuals with motor impairments
(N = 2) for performing ADLs and everyday household tasks. Our results show that
the wearable interface enabled participants to complete physical tasks with low
error rates, high perceived ease of use, and low workload measures. Overall,
this inertial-based wearable serves as a new assistive interface option for
control of mobile manipulators in the home.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 01:09:09 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Feb 2023 15:24:44 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Padmanabha",
"Akhil",
""
],
[
"Wang",
"Qin",
""
],
[
"Han",
"Daphne",
""
],
[
"Diyora",
"Jashkumar",
""
],
[
"Kacker",
"Kriti",
""
],
[
"Khalid",
"Hamza",
""
],
[
"Chen",
"Liang-Jung",
""
],
[
"Majidi",
"Carmel",
""
],
[
"Erickson",
"Zackory",
""
]
] |
new_dataset
| 0.999417 |
2210.04191
|
Steven Y. Feng
|
Steven Y. Feng, Vivek Khetan, Bogdan Sacaleanu, Anatole Gershman,
Eduard Hovy
|
CHARD: Clinical Health-Aware Reasoning Across Dimensions for Text
Generation Models
|
EACL 2023. Code available at https://github.com/styfeng/CHARD
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We motivate and introduce CHARD: Clinical Health-Aware Reasoning across
Dimensions, to investigate the capability of text generation models to act as
implicit clinical knowledge bases and generate free-flow textual explanations
about various health-related conditions across several dimensions. We collect
and present an associated dataset, CHARDat, consisting of explanations about 52
health conditions across three clinical dimensions. We conduct extensive
experiments using BART and T5 along with data augmentation, and perform
automatic, human, and qualitative analyses. We show that while our models can
perform decently, CHARD is very challenging with strong potential for further
exploration.
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 07:16:58 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 03:09:06 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Feng",
"Steven Y.",
""
],
[
"Khetan",
"Vivek",
""
],
[
"Sacaleanu",
"Bogdan",
""
],
[
"Gershman",
"Anatole",
""
],
[
"Hovy",
"Eduard",
""
]
] |
new_dataset
| 0.993535 |
2210.07382
|
Peter Jansen
|
Ruoyao Wang, Peter Jansen, Marc-Alexandre C\^ot\'e, Prithviraj
Ammanabrolu
|
Behavior Cloned Transformers are Neurosymbolic Reasoners
|
Accepted to EACL 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we explore techniques for augmenting interactive agents with
information from symbolic modules, much like humans use tools like calculators
and GPS systems to assist with arithmetic and navigation. We test our agent's
abilities in text games -- challenging benchmarks for evaluating the multi-step
reasoning abilities of game agents in grounded, language-based environments.
Our experimental study indicates that injecting the actions from these symbolic
modules into the action space of a behavior cloned transformer agent increases
performance on four text game benchmarks that test arithmetic, navigation,
sorting, and common sense reasoning by an average of 22%, allowing an agent to
reach the highest possible performance on unseen games. This action injection
technique is easily extended to new agents, environments, and symbolic modules.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 21:54:33 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Feb 2023 15:51:15 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Wang",
"Ruoyao",
""
],
[
"Jansen",
"Peter",
""
],
[
"Côté",
"Marc-Alexandre",
""
],
[
"Ammanabrolu",
"Prithviraj",
""
]
] |
new_dataset
| 0.996915 |
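The action-injection technique described above merges candidate actions proposed by symbolic modules into the agent's action space before the behavior-cloned policy scores them. A toy sketch; the calculator module and its interface are illustrative assumptions:

```python
import re

def calculator_module(observation: str) -> list[str]:
    """Toy symbolic module: detect 'add X and Y' and propose the answer."""
    m = re.search(r"add (\d+) and (\d+)", observation)
    return [f"say {int(m.group(1)) + int(m.group(2))}"] if m else []

def injected_action_space(env_actions: list[str], observation: str) -> list[str]:
    # Symbolic actions join the environment's actions, so the behavior
    # cloned policy can select them like any other action.
    return calculator_module(observation) + env_actions

print(injected_action_space(["look", "go north"], "Task: add 3 and 4."))
# ['say 7', 'look', 'go north']
```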
2210.07587
|
Ranran Haoran Zhang
|
Ranran Haoran Zhang, Aysa Xuemo Fan and Rui Zhang
|
ConEntail: An Entailment-based Framework for Universal Zero and Few Shot
Classification with Supervised Contrastive Pretraining
|
Accepted by EACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A universal classification model aims to generalize to diverse classification
tasks in both zero and few shot settings. A promising way toward universal
classification is to cast heterogeneous data formats into a dataset-agnostic
"meta-task" (e.g., textual entailment, question answering) then pretrain a
model on the combined meta dataset. The existing work is either pretrained on
specific subsets of classification tasks, or pretrained on both classification
and generation data but the model could not fulfill its potential in
universality and reliability. These also leave a massive amount of annotated
data under-exploited. To fill these gaps, we propose ConEntail, a new framework
for universal zero and few shot classification with supervised contrastive
pretraining. Our unified meta-task for classification is based on nested
entailment. It can be interpreted as "Does sentence a entails [sentence b
entails label c]". This formulation enables us to make better use of 57
annotated classification datasets for supervised contrastive pretraining and
universal evaluation. In this way, ConEntail helps the model (1) absorb
knowledge from different datasets, and (2) gain consistent performance gain
with more pretraining data. In experiments, we compare our model with
discriminative and generative models pretrained on the same dataset. The
results confirm that our framework effectively exploits existing annotated data
and consistently outperforms baselines in both zero (9.4% average improvement)
and few shot settings (3.5% average improvement).
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 07:37:27 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Feb 2023 07:12:36 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Zhang",
"Ranran Haoran",
""
],
[
"Fan",
"Aysa Xuemo",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.996725 |
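ConEntail's nested entailment meta-task quoted above ("Does sentence a entails [sentence b entails label c]") turns every classification decision into scoring one query per candidate label. A sketch of that reduction; the scorer is a stand-in for a pretrained entailment model:

```python
def nested_entailment_query(a: str, b: str, label: str) -> str:
    """Build the nested entailment query for one candidate label."""
    return f"Does \"{a}\" entail [\"{b}\" entails label \"{label}\"]?"

def classify(sentence: str, support: str, labels: list[str], scorer) -> str:
    # Score one query per label; predict the label whose query is
    # judged most entailed.
    queries = [nested_entailment_query(sentence, support, c) for c in labels]
    scores = [scorer(q) for q in queries]
    return labels[max(range(len(labels)), key=scores.__getitem__)]

# Toy scorer standing in for a supervised contrastively pretrained model.
toy = lambda q: 1.0 if "great" in q and "positive" in q else 0.0
print(classify("The movie was great", "a review", ["positive", "negative"], toy))
```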
2210.13768
|
Xingting Yao
|
Xingting Yao, Fanrong Li, Zitao Mo, Jian Cheng
|
GLIF: A Unified Gated Leaky Integrate-and-Fire Neuron for Spiking Neural
Networks
|
Accepted at NeurIPS 2022
| null | null | null |
cs.NE cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spiking Neural Networks (SNNs) have been studied over decades to incorporate
their biological plausibility and leverage their promising energy efficiency.
Throughout existing SNNs, the leaky integrate-and-fire (LIF) model is commonly
adopted to formulate the spiking neuron and evolves into numerous variants with
different biological features. However, most LIF-based neurons support only
single biological feature in different neuronal behaviors, limiting their
expressiveness and neuronal dynamic diversity. In this paper, we propose GLIF,
a unified spiking neuron, to fuse different bio-features in different neuronal
behaviors, enlarging the representation space of spiking neurons. In GLIF,
gating factors, which are exploited to determine the proportion of the fused
bio-features, are learnable during training. Combining all learnable
membrane-related parameters, our method can make spiking neurons different and
constantly changing, thus increasing the heterogeneity and adaptivity of
spiking neurons. Extensive experiments on a variety of datasets demonstrate
that our method obtains superior performance compared with other SNNs by simply
changing their neuronal formulations to GLIF. In particular, we train a spiking
ResNet-19 with GLIF and achieve $77.35\%$ top-1 accuracy with six time steps on
CIFAR-100, which has advanced the state-of-the-art. Codes are available at
\url{https://github.com/Ikarosy/Gated-LIF}.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 05:07:48 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Nov 2022 16:24:59 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Nov 2022 15:41:21 GMT"
},
{
"version": "v4",
"created": "Mon, 13 Feb 2023 16:52:10 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Yao",
"Xingting",
""
],
[
"Li",
"Fanrong",
""
],
[
"Mo",
"Zitao",
""
],
[
"Cheng",
"Jian",
""
]
] |
new_dataset
| 0.99816 |
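The GLIF abstract describes learnable gating factors that determine the proportion of fused bio-features in the spiking neuron. A toy single-step update that blends two leak rates and two reset rules with sigmoid gates; the specific features gated by GLIF follow the paper, not this illustration, and training the spike nonlinearity would need a surrogate gradient:

```python
import torch
import torch.nn as nn

class ToyGatedLIF(nn.Module):
    """Illustrative gated LIF neuron: sigmoid gates blend alternative
    neuronal behaviors (here, two leak factors and soft vs. hard reset)."""
    def __init__(self, n: int, threshold: float = 1.0):
        super().__init__()
        self.gate_leak = nn.Parameter(torch.zeros(n))   # gate logits
        self.gate_reset = nn.Parameter(torch.zeros(n))
        self.threshold = threshold

    def forward(self, current, v):
        g_leak = torch.sigmoid(self.gate_leak)
        tau = g_leak * 0.9 + (1 - g_leak) * 0.5   # blended leak factor
        v = tau * v + current                     # integrate input current
        spike = (v >= self.threshold).float()     # fire on threshold
        g_reset = torch.sigmoid(self.gate_reset)
        v_soft = v - spike * self.threshold       # soft reset
        v_hard = v * (1 - spike)                  # hard reset
        return spike, g_reset * v_soft + (1 - g_reset) * v_hard

neuron = ToyGatedLIF(4)
spikes, v = neuron(torch.randn(2, 4), torch.zeros(2, 4))
```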
2210.16875
|
Yuanhao Huang
|
Xinyu Zhang, Yuanhao Huang, Kangyao Huang, Xiaoyu Wang, Dafeng Jin,
Huaping Liu, Jun Li
|
A Multi-modal Deformable Land-air Robot for Complex Environments
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Single locomotion robots often struggle to adapt in highly variable or
uncertain environments, especially in emergencies. In this paper, a multi-modal
deformable robot is introduced that can both fly and drive. Compatibility
issues with multi-modal locomotive fusion for this hybrid land-air robot are
solved using proposed design conceptions, including power settings, energy
selection, and designs of deformable structure. The robot can also
automatically transform between land and air modes during 3D planning and
tracking. Meanwhile, we propose an algorithm for evaluating the performance of
land-air robots. A series of comparisons and experiments were conducted to
demonstrate the robustness and reliability of the proposed structure in complex
field environments.
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 16:38:13 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Nov 2022 07:09:01 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Jan 2023 02:36:25 GMT"
},
{
"version": "v4",
"created": "Sat, 11 Feb 2023 09:15:37 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Zhang",
"Xinyu",
""
],
[
"Huang",
"Yuanhao",
""
],
[
"Huang",
"Kangyao",
""
],
[
"Wang",
"Xiaoyu",
""
],
[
"Jin",
"Dafeng",
""
],
[
"Liu",
"Huaping",
""
],
[
"Li",
"Jun",
""
]
] |
new_dataset
| 0.998635 |
2211.14029
|
Hongbo Li
|
Hongbo Li and Lingjie Duan
|
When Congestion Games Meet Mobile Crowdsourcing: Selective Information
Disclosure
|
Online technical report for our forthcoming AAAI 2023 paper
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In congestion games, users make myopic routing decisions to jam each other,
and the social planner, with full information, designs mechanisms on the
information or payment side to regulate them. However, it is difficult to obtain
time-varying traffic conditions, and emerging crowdsourcing platforms (e.g.,
Waze and Google Maps) provide a convenient way for mobile users travelling on
the paths to learn and share the traffic conditions over time. When congestion
games meet mobile crowdsourcing, it is critical to incentivize selfish users to
change their myopic routing policy and reach the best exploitation-exploration
trade-off. By considering a simple but fundamental parallel routing network
with one deterministic path and multiple stochastic paths for atomic users, we
prove that the myopic routing policy's price of anarchy (PoA) is larger than
$\frac{1}{1-\rho}$, which can be arbitrarily large as discount factor
$\rho\rightarrow1$. To remedy such huge efficiency loss, we propose a selective
information disclosure (SID) mechanism: we only reveal the latest traffic
information to users when they intend to over-explore the stochastic paths,
while hiding such information when they want to under-explore. We prove that
our mechanism reduces PoA to be less than $\frac{1}{1-\frac{\rho}{2}}$. Besides
the worst-case performance, we further examine our mechanism's average-case
performance by using extensive simulations.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 11:03:54 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Feb 2023 10:25:38 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Li",
"Hongbo",
""
],
[
"Duan",
"Lingjie",
""
]
] |
new_dataset
| 0.97757 |
2212.01528
|
arXiv Admin
|
Chao Hu, Liqiang Zhu, Weibing Qiu, Weijie Wu
|
IDMS: Instance Depth for Multi-scale Monocular 3D Object Detection
|
This submission has been withdrawn by arXiv administrators due to
inappropriate text overlap with external sources
|
Journal of Machine Learning Research 2023
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the lack of depth information of images and poor detection accuracy in
monocular 3D object detection, we propose an instance-depth-based multi-scale
monocular 3D object detection method. Firstly, we design a multi-scale
perception module based on dilated convolution to enhance the model's
processing ability for targets of different scales; the depth features
containing multi-scale information are re-refined from both spatial and channel
directions, considering the inconsistency between feature maps of different
scales. Secondly, so as to make the model obtain
better 3D perception, this paper proposes to use the instance depth information
as an auxiliary learning task to enhance the spatial depth feature of the 3D
target and use the sparse instance depth to supervise the auxiliary task.
Finally, by verifying the proposed algorithm on the KITTI test set and
evaluation set, the experimental results show that compared with the baseline
method, the proposed method improves by 5.27\% in AP40 in the car category,
effectively improving the detection performance of the monocular 3D object
detection algorithm.
|
[
{
"version": "v1",
"created": "Sat, 3 Dec 2022 04:02:31 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 16:35:45 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Hu",
"Chao",
""
],
[
"Zhu",
"Liqiang",
""
],
[
"Qiu",
"Weibing",
""
],
[
"Wu",
"Weijie",
""
]
] |
new_dataset
| 0.996513 |
2212.09939
|
Jeffrey Zhao
|
Jeffrey Zhao, Yuan Cao, Raghav Gupta, Harrison Lee, Abhinav Rastogi,
Mingqiu Wang, Hagen Soltau, Izhak Shafran, Yonghui Wu
|
AnyTOD: A Programmable Task-Oriented Dialog System
|
v2, update with Multiwoz, SGD results
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We propose AnyTOD, an end-to-end, zero-shot task-oriented dialog (TOD) system
capable of handling unseen tasks without task-specific training. We view TOD as
a program executed by a language model (LM), where program logic and ontology
are provided by a designer as a schema. To enable generalization to unseen
schemas and programs without prior training, AnyTOD adopts a neuro-symbolic
approach. A neural LM keeps track of events occurring during a conversation and
a symbolic program implementing the dialog policy is executed to recommend next
actions AnyTOD should take. This approach drastically reduces data annotation
and model training requirements, addressing the enduring challenge of rapidly
adapting a TOD system to unseen tasks and domains. We demonstrate
state-of-the-art results on STAR, ABCD and SGD benchmarks. We also demonstrate
strong zero-shot transfer ability in low-resource settings, such as zero-shot
on MultiWOZ. In addition, we release STARv2, an updated version of the STAR
dataset with richer annotations, for benchmarking zero-shot end-to-end TOD
models.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 01:23:01 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 18:26:37 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Zhao",
"Jeffrey",
""
],
[
"Cao",
"Yuan",
""
],
[
"Gupta",
"Raghav",
""
],
[
"Lee",
"Harrison",
""
],
[
"Rastogi",
"Abhinav",
""
],
[
"Wang",
"Mingqiu",
""
],
[
"Soltau",
"Hagen",
""
],
[
"Shafran",
"Izhak",
""
],
[
"Wu",
"Yonghui",
""
]
] |
new_dataset
| 0.98951 |
2302.00965
|
Minghuan Liu
|
Minghuan Liu, Tairan He, Weinan Zhang, Shuicheng Yan, Zhongwen Xu
|
Visual Imitation Learning with Patch Rewards
|
Accepted by ICLR 2023. 18 pages, 14 figures, 2 tables. Codes are
available at https://github.com/sail-sg/PatchAIL
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Visual imitation learning enables reinforcement learning agents to learn to
behave from expert visual demonstrations such as videos or image sequences,
without explicit, well-defined rewards. Previous research either adopted
supervised learning techniques or induced simple and coarse scalar rewards from
pixels, neglecting the dense information contained in the image demonstrations.
In this work, we propose to measure the expertise of various local regions of
image samples, called \textit{patches}, and recover multi-dimensional
\textit{patch rewards} accordingly. Patch reward is a more precise rewarding
characterization that serves as a fine-grained expertise measurement and visual
explainability tool. Specifically, we present Adversarial Imitation Learning
with Patch Rewards (PatchAIL), which employs a patch-based discriminator to
measure the expertise of different local parts from given images and provide
patch rewards. The patch-based knowledge is also used to regularize the
aggregated reward and stabilize the training. We evaluate our method on
DeepMind Control Suite and Atari tasks. The experiment results have
demonstrated that PatchAIL outperforms baseline methods and provides valuable
interpretations for visual demonstrations.
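As a rough illustration of how per-patch discriminator outputs could be turned into a reward, consider the numpy sketch below; the -log(1 - sigmoid) mapping and the mean aggregation are assumptions chosen for exposition, not necessarily PatchAIL's exact reward form:

```python
import numpy as np

def patch_rewards(patch_logits):
    """Map a (H, W) grid of per-patch discriminator logits to rewards.

    Each logit measures how expert-like one local image patch looks; the
    scalar mean is what a downstream RL update would consume, while the
    per-patch grid doubles as a visual explainability map.
    """
    sig = 1.0 / (1.0 + np.exp(-patch_logits))
    per_patch = -np.log(np.clip(1.0 - sig, 1e-8, 1.0))
    return per_patch, float(per_patch.mean())
```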
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 09:13:10 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 16:57:11 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Liu",
"Minghuan",
""
],
[
"He",
"Tairan",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Xu",
"Zhongwen",
""
]
] |
new_dataset
| 0.962732 |
2302.01058
|
Juze Zhang
|
Juze Zhang, Ye Shi, Yuexin Ma, Lan Xu, Jingyi Yu, Jingya Wang
|
IKOL: Inverse kinematics optimization layer for 3D human pose and shape
estimation via Gauss-Newton differentiation
|
Accepted by AAAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an inverse kinematic optimization layer (IKOL) for 3D
human pose and shape estimation that leverages the strength of both
optimization- and regression-based methods within an end-to-end framework. IKOL
involves a nonconvex optimization that establishes an implicit mapping from an
image's 3D keypoints and body shapes to the relative body-part rotations. The
3D keypoints and the body shapes are the inputs and the relative body-part
rotations are the solutions. However, this procedure is implicit and hard to
differentiate. To overcome this issue, we designed a Gauss-Newton
differentiation (GN-Diff) procedure to differentiate IKOL. GN-Diff iteratively
linearizes the nonconvex objective function to obtain Gauss-Newton directions
with closed-form solutions. Then, an automatic differentiation procedure is
directly applied to generate a Jacobian matrix for end-to-end training.
Notably, the GN-Diff procedure works fast because it does not rely on a
time-consuming implicit differentiation procedure. The twist rotation and shape
parameters are learned from the neural networks and, as a result, IKOL has a
much lower computational overhead than most existing optimization-based
methods. Additionally, compared to existing regression-based methods, IKOL
provides a more accurate mesh-image correspondence. This is because it
iteratively reduces the distance between the keypoints and also enhances the
reliability of the pose structures. Extensive experiments demonstrate the
superiority of our proposed framework over a wide range of 3D human pose and
shape estimation methods.
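For readers unfamiliar with the mechanics, here is a generic Gauss-Newton loop in numpy showing the closed-form directions the GN-Diff idea relies on; the residual and Jacobian functions stand in for IKOL's actual keypoints-to-rotations objective and are assumptions of this sketch:

```python
import numpy as np

def gauss_newton(residual_fn, jacobian_fn, theta0, n_iters=10, damping=1e-8):
    """Minimize 0.5 * ||r(theta)||^2 by iterated linearization.

    Each iteration solves a linear least-squares problem in closed form,
    which is cheap to differentiate through with ordinary autodiff, unlike
    implicit differentiation of the full nonconvex problem.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iters):
        r = residual_fn(theta)            # residuals, shape (m,)
        J = jacobian_fn(theta)            # Jacobian, shape (m, n)
        # Closed-form Gauss-Newton direction: (J^T J) d = -J^T r
        d = np.linalg.solve(J.T @ J + damping * np.eye(J.shape[1]), -J.T @ r)
        theta = theta + d
    return theta
```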
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 12:43:29 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Feb 2023 12:54:46 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Zhang",
"Juze",
""
],
[
"Shi",
"Ye",
""
],
[
"Ma",
"Yuexin",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Wang",
"Jingya",
""
]
] |
new_dataset
| 0.985139 |
2302.01451
|
Andrea Gemelli
|
Andrea Gemelli, Emanuele Vivoli, Simone Marinai
|
CTE: A Dataset for Contextualized Table Extraction
| null | null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Relevant information in documents is often summarized in tables, helping the
reader to identify useful facts. Most benchmark datasets support either
document layout analysis or table understanding, but lack in providing data to
apply both tasks in a unified way. We define the task of Contextualized Table
Extraction (CTE), which aims to extract and define the structure of tables
considering the textual context of the document. The dataset comprises 75k
fully annotated pages of scientific papers, including more than 35k tables.
Data are gathered from PubMed Central, merging the information provided by
annotations in the PubTables-1M and PubLayNet datasets. The dataset can support
CTE and adds new classes to the original ones. The generated annotations can be
used to develop end-to-end pipelines for various tasks, including document
layout analysis, table detection, structure recognition, and functional
analysis. We formally define CTE and evaluation metrics, showing which subtasks
can be tackled, describing advantages, limitations, and future works of this
collection of data. Annotations and code will be accessible a
https://github.com/AILab-UniFI/cte-dataset.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 22:38:23 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 18:22:57 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Gemelli",
"Andrea",
""
],
[
"Vivoli",
"Emanuele",
""
],
[
"Marinai",
"Simone",
""
]
] |
new_dataset
| 0.999878 |
2302.02094
|
Teo Susnjak
|
Paula Maddigan and Teo Susnjak
|
Chat2VIS: Generating Data Visualisations via Natural Language using
ChatGPT, Codex and GPT-3 Large Language Models
|
revision
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The field of data visualisation has long aimed to devise solutions for
generating visualisations directly from natural language text. Research in
Natural Language Interfaces (NLIs) has contributed towards the development of
such techniques. However, the implementation of workable NLIs has always been
challenging due to the inherent ambiguity of natural language, as well as
unclear and poorly written user queries, which pose problems for
existing language models in discerning user intent. Instead of pursuing the
usual path of developing new iterations of language models, this study uniquely
proposes leveraging the advancements in pre-trained large language models
(LLMs) such as ChatGPT and GPT-3 to convert free-form natural language directly
into code for appropriate visualisations. This paper presents a novel system,
Chat2VIS, which takes advantage of the capabilities of LLMs and demonstrates
how, with effective prompt engineering, the complex problem of language
understanding can be solved more efficiently, resulting in simpler and more
accurate end-to-end solutions than prior approaches. Chat2VIS shows that LLMs
together with the proposed prompts offer a reliable approach to rendering
visualisations from natural language queries, even when queries are highly
misspecified and underspecified. This solution also presents a significant
reduction in costs for the development of NLI systems, while attaining greater
visualisation inference abilities compared to traditional NLP approaches that
use hand-crafted grammar rules and tailored models. This study also presents
how LLM prompts can be constructed in a way that preserves data security and
privacy while being generalisable to different datasets. This work compares the
performance of GPT-3, Codex and ChatGPT across a number of case studies and
contrasts the performances with prior studies.
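A minimal sketch of the kind of prompt assembly such a system performs is shown below; the wording, function name, and schema-only disclosure are illustrative assumptions rather than the paper's exact prompts:

```python
def build_nl2vis_prompt(df_name, columns, user_query):
    """Assemble an LLM prompt that turns a natural-language query into plot code.

    Only column names are disclosed, never data values, which is one simple
    way to preserve data privacy while remaining dataset-agnostic.
    """
    schema = ", ".join(f"'{c}'" for c in columns)
    return (
        f"# A pandas DataFrame named {df_name} has columns: {schema}.\n"
        f"# Using matplotlib, write Python code to answer the request:\n"
        f"# {user_query}\n"
        f"import matplotlib.pyplot as plt\n"
    )

# Example usage with a hypothetical dataset
prompt = build_nl2vis_prompt("df", ["year", "gdp", "country"],
                             "Show GDP over time for each country")
```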
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 05:19:31 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Feb 2023 20:52:49 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Maddigan",
"Paula",
""
],
[
"Susnjak",
"Teo",
""
]
] |
new_dataset
| 0.991426 |
2302.04449
|
Yue Wu
|
Yue Wu, Yewen Fan, Paul Pu Liang, Amos Azaria, Yuanzhi Li, Tom M.
Mitchell
|
Read and Reap the Rewards: Learning to Play Atari with the Help of
Instruction Manuals
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
High sample complexity has long been a challenge for RL. On the other hand,
humans learn to perform tasks not only from interaction or demonstrations, but
also by reading unstructured text documents, e.g., instruction manuals.
Instruction manuals and wiki pages are among the most abundant data that could
inform agents of valuable features and policies or task-specific environmental
dynamics and reward structures. Therefore, we hypothesize that the ability to
utilize human-written instruction manuals to assist learning policies for
specific tasks should lead to a more efficient and better-performing agent.
We propose the Read and Reward framework. Read and Reward speeds up RL
algorithms on Atari games by reading manuals released by the Atari game
developers. Our framework consists of a QA Extraction module that extracts and
summarizes relevant information from the manual and a Reasoning module that
evaluates object-agent interactions based on information from the manual.
Auxiliary reward is then provided to a standard A2C RL agent, when interaction
is detected. When assisted by our design, A2C improves on 4 games in the Atari
environment with sparse rewards, and requires 1000x less training frames
compared to the previous SOTA Agent 57 on Skiing, the hardest game in Atari.
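The reward-shaping step can be pictured with a few lines of Python; the valence convention and bonus constant below are illustrative assumptions, not values from the paper:

```python
def shaped_reward(env_reward, interaction_detected, manual_valence, bonus=0.5):
    """Add an auxiliary bonus when an object-agent interaction is detected.

    manual_valence is +1 or -1 depending on whether the Reasoning module
    judged, from text extracted out of the manual, that the interaction
    helps or hurts; the environment reward passes through otherwise.
    """
    if interaction_detected:
        return env_reward + bonus * manual_valence
    return env_reward
```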
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 05:47:03 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Feb 2023 09:56:36 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Wu",
"Yue",
""
],
[
"Fan",
"Yewen",
""
],
[
"Liang",
"Paul Pu",
""
],
[
"Azaria",
"Amos",
""
],
[
"Li",
"Yuanzhi",
""
],
[
"Mitchell",
"Tom M.",
""
]
] |
new_dataset
| 0.995232 |
2302.05486
|
Hao Zhu
|
Longwei Guo, Hao Zhu, Yuanxun Lu, Menghua Wu, Xun Cao
|
RAFaRe: Learning Robust and Accurate Non-parametric 3D Face
Reconstruction from Pseudo 2D&3D Pairs
|
Accepted to AAAI 2023 (Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a robust and accurate non-parametric method for single-view 3D
face reconstruction (SVFR). While tremendous efforts have been devoted to
parametric SVFR, a visible gap still lies between the resulting 3D shape and the
ground truth. We believe there are two major obstacles: 1) the representation
of the parametric model is limited to a certain face database; 2) 2D images and
3D shapes in the fitted datasets are distinctly misaligned. To resolve these
issues, a large-scale pseudo 2D&3D dataset is created by first rendering the
detailed 3D faces, then swapping the face in the wild images with the rendered
face. These pseudo 2D&3D pairs are created from publicly available datasets
which eliminate the gaps between 2D and 3D data while covering diverse
appearances, poses, scenes, and illumination. We further propose a
non-parametric scheme to learn a well-generalized SVFR model from the created
dataset, and the proposed hierarchical signed distance function turns out to be
effective in predicting middle-scale and small-scale 3D facial geometry. Our
model outperforms previous methods on FaceScape-wild/lab and MICC benchmarks
and is well generalized to various appearances, poses, expressions, and
in-the-wild environments. The code is released at
http://github.com/zhuhao-nju/rafare .
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 19:40:26 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Guo",
"Longwei",
""
],
[
"Zhu",
"Hao",
""
],
[
"Lu",
"Yuanxun",
""
],
[
"Wu",
"Menghua",
""
],
[
"Cao",
"Xun",
""
]
] |
new_dataset
| 0.998456 |
2302.05507
|
Nicolas Gontier
|
Nicolas Gontier, Pau Rodriguez, Issam Laradji, David Vazquez,
Christopher Pal
|
Long-Context Language Decision Transformers and Exponential Tilt for
Interactive Text Environments
|
12 pages, 5 figures, 3 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-based game environments are challenging because agents must deal with
long sequences of text, execute compositional actions using text and learn from
sparse rewards. We address these challenges by proposing Long-Context Language
Decision Transformers (LLDTs), a framework that is based on long transformer
language models and decision transformers (DTs). LLDTs extend DTs with 3
components: (1) exponential tilt to guide the agent towards high obtainable
goals, (2) novel goal conditioning methods yielding significantly better
results than the traditional return-to-go (sum of all future rewards), and (3)
a model of future observations. Our ablation results show that predicting
future observations improves agent performance. To the best of our knowledge,
LLDTs are the first to address offline RL with DTs on these challenging games.
Our experiments show that LLDTs achieve the highest scores among many different
types of agents on some of the most challenging Jericho games, such as
Enchanter.
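To illustrate the exponential-tilt component, the sketch below reweights an empirical return distribution toward high returns before sampling a conditioning goal; the temperature kappa and the sampling scheme are assumptions of this sketch, simplified from the paper's conditioning methods:

```python
import numpy as np

def sample_tilted_return(returns, kappa=1.0, rng=np.random.default_rng(0)):
    """Sample a goal return under an exponentially tilted distribution.

    Tilting the empirical distribution by exp(kappa * R) shifts probability
    mass toward high returns that were actually observed in the data, so the
    agent is guided toward ambitious but obtainable goals.
    """
    returns = np.asarray(returns, dtype=float)
    logits = kappa * returns
    p = np.exp(logits - logits.max())   # subtract max for numerical stability
    p /= p.sum()
    return rng.choice(returns, p=p)
```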
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 20:50:58 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Gontier",
"Nicolas",
""
],
[
"Rodriguez",
"Pau",
""
],
[
"Laradji",
"Issam",
""
],
[
"Vazquez",
"David",
""
],
[
"Pal",
"Christopher",
""
]
] |
new_dataset
| 0.993357 |
2302.05536
|
Ze Shi Li
|
Ze Shi Li, Nowshin Nawar Arony, Kezia Devathasan, Daniela Damian
|
"Software is the easy part of Software Engineering" -- Lessons and
Experiences from A Large-Scale, Multi-Team Capstone Course
|
2023 IEEE/ACM 45th International Conference on Software Engineering:
Software Engineering Education and Training (ICSE-SEET)
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Capstone courses in undergraduate software engineering are a critical final
milestone for students. These courses allow students to create a software
solution and demonstrate the knowledge they accumulated in their degrees.
However, a typical capstone project team is small, containing no more than 5
students, and functions independently of other teams. To better reflect
real-world software development and meet industry demands, we introduce in this
paper our novel capstone course. Each student was assigned to a large-scale,
multi-team (i.e., company) of up to 20 students to collaboratively build
software. Students placed in a company gained first-hand experiences with
respect to multi-team coordination, integration, communication, agile, and
teamwork to build a microservices-based project. Furthermore, each company was
required to implement plug-and-play so that their services would be compatible
with another company, thereby sharing common APIs. Through developing the
product in autonomous sub-teams, the students enhanced not only their technical
abilities but also their soft skills such as communication and coordination.
More importantly, experiencing the challenges that arose from the multi-team
project trained students to realize the pitfalls and advantages of
organizational culture. Among many lessons learned from this course experience,
students learned the critical importance of building team trust. We provide
detailed information about our course structure, lessons learned, and propose
recommendations for other universities and programs. Our work concerns
educators interested in launching similar capstone projects so that students in
other institutions can reap the benefits of large-scale, multi-team development.
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 22:33:35 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Li",
"Ze Shi",
""
],
[
"Arony",
"Nowshin Nawar",
""
],
[
"Devathasan",
"Kezia",
""
],
[
"Damian",
"Daniela",
""
]
] |
new_dataset
| 0.997281 |
2302.05550
|
Susik Yoon
|
Susik Yoon, Hou Pong Chan, Jiawei Han
|
PDSum: Prototype-driven Continuous Summarization of Evolving
Multi-document Sets Stream
|
Accepted by WWW'23
| null | null | null |
cs.IR cs.AI cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Summarizing text-rich documents has been long studied in the literature, but
most of the existing efforts have been made to summarize a static and
predefined multi-document set. With the rapid development of online platforms
for generating and distributing text-rich documents, there arises an urgent
need for continuously summarizing dynamically evolving multi-document sets
where the composition of documents and sets is changing over time. This is
especially challenging as the summarization should be not only effective in
incorporating relevant, novel, and distinctive information from each concurrent
multi-document set, but also efficient in serving online applications. In this
work, we propose a new summarization problem, Evolving Multi-Document sets
stream Summarization (EMDS), and introduce a novel unsupervised algorithm, PDSum,
with the idea of prototype-driven continuous summarization. PDSum builds a
lightweight prototype of each multi-document set and exploits it to adapt to
new documents while preserving accumulated knowledge from previous documents.
To update new summaries, the most representative sentences for each
multi-document set are extracted by measuring their similarities to the
prototypes. A thorough evaluation with real multi-document set streams
demonstrates that PDSum outperforms state-of-the-art unsupervised
multi-document summarization algorithms in EMDS in terms of relevance, novelty,
and distinctiveness and is also robust to various evaluation settings.
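The prototype-driven loop can be summarized in a short numpy sketch; the EMA update and cosine scoring below convey the idea under stated assumptions and are not PDSum's exact construction:

```python
import numpy as np

def update_prototype(proto, doc_embs, decay=0.9):
    """EMA-style prototype update for one evolving multi-document set.

    The prototype preserves accumulated knowledge from earlier documents
    while adapting to new ones; decay controls how fast old evidence fades.
    """
    return decay * proto + (1.0 - decay) * doc_embs.mean(axis=0)

def extract_summary(sent_embs, sentences, proto, k=3):
    # Pick the k sentences most similar (cosine) to the set prototype
    sims = sent_embs @ proto / (
        np.linalg.norm(sent_embs, axis=1) * np.linalg.norm(proto) + 1e-12)
    top = sorted(np.argsort(-sims)[:k])
    return [sentences[i] for i in top]
```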
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 23:43:46 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Yoon",
"Susik",
""
],
[
"Chan",
"Hou Pong",
""
],
[
"Han",
"Jiawei",
""
]
] |
new_dataset
| 0.992607 |
2302.05573
|
Bin Liu
|
Bo Li, Xiaolin Wei, Fengwei Chen, Bin Liu
|
3D Colored Shape Reconstruction from a Single RGB Image through
Diffusion
|
9 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel 3D colored shape reconstruction method from a single RGB
image through a diffusion model. Diffusion models have shown great potential
for high-quality 3D shape generation. However, most existing work based on
diffusion models focuses only on geometric shape generation; it can neither
accomplish 3D reconstruction from a single image nor produce 3D geometric
shapes with color information. In this work, we propose to reconstruct a 3D
colored shape from a single RGB image through a novel conditional diffusion
model. The reverse process of the proposed diffusion model consists of three
modules: a shape prediction module, a color prediction module,
and a NeRF-like rendering module. In the shape prediction module, the reference RGB
image is first encoded into a high-level shape feature and then the shape
feature is utilized as a condition to predict the reverse geometric noise in
diffusion model. Then the color of each 3D point updated in the shape
prediction module is predicted by the color prediction module. Finally, a
NeRF-like rendering
module is designed to render the colored point cloud predicted by the former
two modules to 2D image space to guide the training conditioned only on a
reference image. As far as the authors know, the proposed method is the first
diffusion model for 3D colored shape reconstruction from a single RGB image.
Experimental results demonstrate that the proposed method achieves competitive
performance on colored 3D shape reconstruction, and the ablation study
validates the positive role of the color prediction module in improving the
reconstruction quality of 3D geometric point cloud.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 02:15:00 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Li",
"Bo",
""
],
[
"Wei",
"Xiaolin",
""
],
[
"Chen",
"Fengwei",
""
],
[
"Liu",
"Bin",
""
]
] |
new_dataset
| 0.992271 |
2302.05574
|
Junru Lu
|
Junru Lu, Jiazheng Li, Byron C. Wallace, Yulan He, Gabriele Pergola
|
NapSS: Paragraph-level Medical Text Simplification via Narrative
Prompting and Sentence-matching Summarization
|
Findings of EACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accessing medical literature is difficult for laypeople as the content is
written for specialists and contains medical jargon. Automated text
simplification methods offer a potential means to address this issue. In this
work, we propose a summarize-then-simplify two-stage strategy, which we call
NapSS, identifying the relevant content to simplify while ensuring that the
original narrative flow is preserved. In this approach, we first generate
reference summaries via sentence matching between the original and the
simplified abstracts. These summaries are then used to train an extractive
summarizer, learning the most relevant content to be simplified. Then, to
ensure the narrative consistency of the simplified text, we synthesize
auxiliary narrative prompts combining key phrases derived from the syntactical
analyses of the original text. Our model achieves results significantly better
than the seq2seq baseline on an English medical corpus, yielding 3%~4% absolute
improvements in terms of lexical similarity, and providing a further 1.1%
improvement of SARI score when combined with the baseline. We also highlight
shortcomings of existing evaluation methods, and introduce new metrics that
take into account both lexical and high-level semantic similarity. A human
evaluation conducted on a random sample of the test set further establishes the
effectiveness of the proposed approach. Codes and models are released here:
https://github.com/LuJunru/NapSS.
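The sentence-matching step that produces the extractive references can be sketched as follows; SequenceMatcher stands in for whatever similarity measure the paper actually uses, and the threshold is an illustrative assumption:

```python
from difflib import SequenceMatcher

def reference_summary(original_sents, simplified_sents, threshold=0.5):
    """Keep original sentences that align with the simplified abstract.

    The retained subset serves as the extractive reference summary used to
    train the summarizer in the first stage of the pipeline.
    """
    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return [s for s in original_sents
            if max(sim(s, t) for t in simplified_sents) >= threshold]
```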
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 02:20:25 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Lu",
"Junru",
""
],
[
"Li",
"Jiazheng",
""
],
[
"Wallace",
"Byron C.",
""
],
[
"He",
"Yulan",
""
],
[
"Pergola",
"Gabriele",
""
]
] |
new_dataset
| 0.953237 |
2302.05597
|
Xianjun Yang
|
Xianjun Yang, Stephen Wilson, Linda Petzold
|
MatKB: Semantic Search for Polycrystalline Materials Synthesis
Procedures
|
Work in Progress
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a novel approach to knowledge extraction and
retrieval using Natural Language Processing (NLP) techniques for material
science. Our goal is to automatically mine structured knowledge from millions
of research articles in the field of polycrystalline materials and make it
easily accessible to the broader community. The proposed method leverages NLP
techniques such as entity recognition and document classification to extract
relevant information and build an extensive knowledge base, from a collection
of 9.5 Million publications. The resulting knowledge base is integrated into a
search engine, which enables users to search for information about specific
materials, properties, and experiments with greater precision than traditional
search engines like Google. We hope our results can enable material scientists
to quickly locate desired experimental procedures, compare their differences, and
even inspire them to design new experiments. Our website will be available at
Github \footnote{https://github.com/Xianjun-Yang/PcMSP.git} soon.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 04:18:07 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Yang",
"Xianjun",
""
],
[
"Wilson",
"Stephen",
""
],
[
"Petzold",
"Linda",
""
]
] |
new_dataset
| 0.99675 |
2302.05611
|
Shun Wang
|
Shun Wang, Yucheng Li, Chenghua Lin, Lo\"ic Barrault, Frank Guerin
|
Metaphor Detection with Effective Context Denoising
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel RoBERTa-based model, RoPPT, which introduces a
target-oriented parse tree structure in metaphor detection. Compared to
existing models, RoPPT focuses on semantically relevant information and
achieves the state-of-the-art on several main metaphor datasets. We also
compare our approach against several popular denoising and pruning methods,
demonstrating the effectiveness of our approach in context denoising. Our code
and dataset can be found at https://github.com/MajiBear000/RoPPT
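The target-oriented pruning idea can be approximated in a few lines: keep only tokens within k dependency hops of the metaphor candidate. The breadth-first construction below is a sketch under that assumption, not RoPPT's exact tree transformation:

```python
def k_hop_context(edges, target, k=2):
    """Return token indices within k hops of `target` in a dependency tree.

    `edges` is a list of (head, dependent) index pairs; the surviving tokens
    form the denoised context that would be fed to the encoder.
    """
    neighbors = {}
    for h, d in edges:
        neighbors.setdefault(h, set()).add(d)
        neighbors.setdefault(d, set()).add(h)
    kept, frontier = {target}, {target}
    for _ in range(k):
        frontier = {n for tok in frontier
                    for n in neighbors.get(tok, ())} - kept
        kept |= frontier
    return kept
```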
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 05:53:51 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Wang",
"Shun",
""
],
[
"Li",
"Yucheng",
""
],
[
"Lin",
"Chenghua",
""
],
[
"Barrault",
"Loïc",
""
],
[
"Guerin",
"Frank",
""
]
] |
new_dataset
| 0.999704 |
2302.05681
|
Ilan Doron-Arad
|
Ilan Doron-Arad and Ariel Kulik and Hadas Shachnai
|
An EPTAS for Budgeted Matching and Budgeted Matroid Intersection
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We study the budgeted versions of the well known matching and matroid
intersection problems. While both problems admit a polynomial-time
approximation scheme (PTAS) [Berger et al. (Math. Programming, 2011), Chekuri,
Vondrak and Zenklusen (SODA 2011)], it has been an intriguing open question
whether these problems admit a fully polynomial-time approximation scheme
(FPTAS), or even an efficient PTAS
(EPTAS).
In this paper we answer the second part of this question affirmatively, by
presenting an EPTAS for budgeted matching and budgeted matroid intersection. A
main component of our scheme is a novel construction of representative sets for
desired solutions, whose cardinality depends only on $\varepsilon$, the
accuracy parameter. Thus, enumerating over solutions within a representative
set leads to an EPTAS. This crucially distinguishes our algorithms from
previous approaches, which rely on exhaustive enumeration over the solution
set. Our ideas for constructing representative sets may find use in tackling
other budgeted optimization problems, and are thus of independent interest.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 12:28:57 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Doron-Arad",
"Ilan",
""
],
[
"Kulik",
"Ariel",
""
],
[
"Shachnai",
"Hadas",
""
]
] |
new_dataset
| 0.995528 |
2302.05685
|
Xiangjie Yan
|
Xiangjie Yan, Yongpeng Jiang, Guokun Wu, Chen Chen, Gao Huang, and
Xiang Li
|
Multi-Modal Interaction Control of Ultrasound Scanning Robots with Safe
Human Guidance and Contact Recovery
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ultrasound scanning robots enable the automatic imaging of a patient's
internal organs by maintaining close contact between the ultrasound probe and
the patient's body during a scanning procedure. Comprehensive, high-quality
ultrasound scans are essential for providing the patient with an accurate
diagnosis and effective treatment plan. An ultrasound scanning robot usually
works in a doctor-robot co-existing environment, hence both efficiency and
safety during the collaboration should be considered. In this paper, we propose
a novel multi-modal control scheme for ultrasound scanning robots, in which
three interaction modes are integrated into a single control input.
Specifically, the scanning mode drives the robot to track a time-varying
trajectory on the patient's body under the desired impedance model; the
recovery mode allows the robot to actively recontact the body whenever physical
contact between the ultrasound probe and the patient's body is lost; the
human-guided mode renders the robot passive such that the doctor can safely
intervene to manually reposition the probe. The integration of multiple modes
allows the doctor to intervene safely at any time during the task and also
maximizes the robot's autonomous scanning ability. The performance of the robot
is validated on a collaborative scanning task of a carotid artery examination.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 12:44:48 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Yan",
"Xiangjie",
""
],
[
"Jiang",
"Yongpeng",
""
],
[
"Wu",
"Guokun",
""
],
[
"Chen",
"Chen",
""
],
[
"Huang",
"Gao",
""
],
[
"Li",
"Xiang",
""
]
] |
new_dataset
| 0.978145 |
2302.05794
|
Gongbo Liang
|
Gongbo Liang, Jesus Guerrero, Izzat Alsmadi
|
Mutation-Based Adversarial Attacks on Neural Text Detectors
| null | null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Neural text detectors aim to identify the characteristics that distinguish
neural (machine-generated) from human texts. To challenge such detectors,
adversarial attacks can alter the statistical characteristics of the generated
text, making the detection task more and more difficult. Inspired by the
advances of mutation analysis in software development and testing, in this
paper, we propose character- and word-based mutation operators for generating
adversarial samples to attack state-of-the-art natural text detectors. This
falls under white-box adversarial attacks. In such attacks, attackers have
access to the original text and create mutation instances based on this
original text. The ultimate goal is to confuse machine learning models and
classifiers and decrease their prediction accuracy.
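Character- and word-level mutation operators of the kind described can be sketched as below; the mutation rates and the synonym-table interface are illustrative assumptions, not the operators' exact definitions in the paper:

```python
import random

def char_mutate(text, rate=0.02, rng=random.Random(0)):
    """Character-level operator: substitute random letters at a low rate."""
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def word_mutate(text, synonyms, rate=0.1, rng=random.Random(0)):
    """Word-level operator: swap words for synonyms from a supplied table."""
    words = text.split()
    for i, w in enumerate(words):
        if w.lower() in synonyms and rng.random() < rate:
            words[i] = rng.choice(synonyms[w.lower()])
    return " ".join(words)
```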
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 22:08:32 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Liang",
"Gongbo",
""
],
[
"Guerrero",
"Jesus",
""
],
[
"Alsmadi",
"Izzat",
""
]
] |
new_dataset
| 0.998107 |
2302.05795
|
Kevin Desai
|
Kevin Desai and Omeed Ashtiani and Balakrishnan Prabhakaran
|
Assessment HTN (A-HTN) for Automated Task Performance Assessment in 3D
Serious Games
|
8 pages, 5 figures, 1 table
| null | null | null |
cs.HC cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In the recent years, various 3D mixed reality serious games have been
developed for different applications such as physical training, rehabilitation,
and education. Task performance in a serious game is a measurement of how
efficiently and accurately users accomplish the game's objectives. Prior
research includes a graph-based representation of tasks, e.g. Hierarchical Task
Network (HTN), which only models a game's tasks but does not perform
assessment. In this paper, we propose Assessment HTN (A-HTN), which both models
the task efficiently and incorporates assessment logic for game objectives.
Based on how the task performance is evaluated, A-HTN automatically performs:
(a) Task-level Assessment by comparing object manipulations and (b)
Action-level Assessment by comparing motion trajectories. The system can also
categorize the task performance assessment into single-user or multi-user based
on who is being assessed. We showcase the effectiveness of the A-HTN using two
3D VR serious games: a hydrometer experiment and a multi-user chemistry
experiment. The A-HTN experiments show a high correlation between instructor
scores and the system generated scores indicating that the proposed A-HTN
generalizes automatic assessment at par with Subject Matter Experts.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 22:13:16 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Desai",
"Kevin",
""
],
[
"Ashtiani",
"Omeed",
""
],
[
"Prabhakaran",
"Balakrishnan",
""
]
] |
new_dataset
| 0.998417 |
2302.05803
|
Mohammadjavad Ghorbanalivakili
|
Jungwon Kang, Mohammadjavad Ghorbanalivakili, Gunho Sohn, David Beach,
and Veronica Marin
|
TPE-Net: Track Point Extraction and Association Network for Rail Path
Proposal Generation
|
7 pages, 6 figures, and 1 table. Jungwon Kang and Mohammadjavad
Ghorbanalivakili contributed equally.
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One essential feature of an autonomous train is minimizing collision risks
with third-party objects. To estimate the risk, the control system must
identify topological information of all the rail routes ahead on which the
train can possibly move, especially within merging or diverging rails. This
way, the train can figure out the status of potential obstacles with respect to
its route and hence, make a timely decision. Numerous studies have successfully
extracted all rail tracks as a whole within forward-looking images without
considering element instances. Still, some image-based methods have employed
hard-coded prior knowledge of railway geometry on 3D data to associate
left-right rails and generate rail route instances. In contrast, we propose a rail
path extraction pipeline in which left-right rail pixels of each rail route
instance are extracted and associated through a fully convolutional
encoder-decoder architecture called TPE-Net. Two different regression branches
for TPE-Net are proposed to regress the locations of center points of each rail
route, along with their corresponding left-right pixels. Extracted rail pixels
are then spatially clustered to generate topological information of all the
possible train routes (ego-paths), discarding non-ego-path ones. Experimental
results on a challenging, publicly released benchmark show true-positive-pixel
level average precision and recall of 0.9207 and 0.8721, respectively, at about
12 frames per second. Even though our evaluation results are not higher than
the SOTA, the proposed regression pipeline performs remarkably well in extracting
the correspondences by looking once at the image. It generates strong rail
route hypotheses without reliance on camera parameters, 3D data, and
geometrical constraints.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 22:49:06 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Kang",
"Jungwon",
""
],
[
"Ghorbanalivakili",
"Mohammadjavad",
""
],
[
"Sohn",
"Gunho",
""
],
[
"Beach",
"David",
""
],
[
"Marin",
"Veronica",
""
]
] |
new_dataset
| 0.959542 |
2302.05840
|
Ahmed Elhadeedy
|
Ahmed Elhadeedy, Jeremy Daily
|
60 GHz Wi-Fi As A Tractor-Trailer Wireless Harness
|
IEEE CCWC
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reverse driving a truck is a challenging task for human drivers and
self-driving software due to the lack of sensors on the trailer. Self-driving
and conventional trucks have an increasing need to replace the legacy
communication channels between the truck and the trailer to accommodate
bandwidth and latency requirements when more sensors and features are added to
the trailer to support driver-assist or self-driving functions. There is also
a need to automate tractor-trailer hitching and unhitching, which is a complex
process when using wires and connectors for communication
between the truck and the trailer. In this paper, we address using a wireless
harness between the tractor and the trailer based on Wi-Fi, in addition to
discussing using Named Data networking protocol for communication between the
truck and the trailer including handling interest and data packets. A Testbed
is used to evaluate communicating different data types from one device to three
devices over 802.11ac and it indicated a stable communication performance when
Named Data Networking and Data Distribution Service were used. Using a wireless
harness will ease the automation of trailer hitching and unhitching process and
will eliminate the need for communication wires or connectors between the
tractor and the trailers, therefore, reducing the complexity of the process.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 03:14:09 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Elhadeedy",
"Ahmed",
""
],
[
"Daily",
"Jeremy",
""
]
] |
new_dataset
| 0.999801 |
2302.05863
|
Xiaolin Wen
|
Xiaolin Wen, Yong Wang, Xuanwu Yue, Feida Zhu, and Min Zhu
|
NFTDisk: Visual Detection of Wash Trading in NFT Markets
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the growing popularity of Non-Fungible Tokens (NFT), a new type of
digital assets, various fraudulent activities have appeared in NFT markets.
Among them, wash trading has become one of the most common frauds in NFT
markets, which attempts to mislead investors by creating fake trading volumes.
Due to the sophisticated patterns of wash trading, only a subset of them can be
detected by automatic algorithms, and manual inspection is usually required. We
propose NFTDisk, a novel visualization for investors to identify wash trading
activities in NFT markets, where two linked visualization modules are
presented: a radial visualization module with a disk metaphor to overview NFT
transactions and a flow-based visualization module to reveal detailed NFT flows
at multiple levels. We conduct two case studies and an in-depth user interview
with 14 NFT investors to evaluate NFTDisk. The results demonstrate its
effectiveness in exploring wash trading activities in NFT markets.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 06:32:16 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Wen",
"Xiaolin",
""
],
[
"Wang",
"Yong",
""
],
[
"Yue",
"Xuanwu",
""
],
[
"Zhu",
"Feida",
""
],
[
"Zhu",
"Min",
""
]
] |
new_dataset
| 0.999478 |
2302.05929
|
Hanrong Zhang
|
Peng Peng, Hanrong Zhang, Mengxuan Li, Gongzhuang Peng, Hongwei Wang,
Weiming Shen
|
SCLIFD:Supervised Contrastive Knowledge Distillation for Incremental
Fault Diagnosis under Limited Fault Data
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent fault diagnosis has made extraordinary advancements in recent years.
Nonetheless, few works tackle class-incremental learning for fault diagnosis
under limited fault data, i.e., imbalanced and long-tailed fault diagnosis,
which brings about various notable challenges. Initially, it is difficult to
extract discriminative features from limited fault data. Moreover, a
well-trained model must be retrained from scratch to classify the samples from
new classes, thus causing a high computational burden and time consumption.
Furthermore, the model may suffer from catastrophic forgetting when trained
incrementally. Finally, the model decision is biased toward the new classes due
to the class imbalance. The problems can consequently lead to performance
degradation of fault diagnosis models. Accordingly, we introduce a supervised
contrastive knowledge distillation for incremental fault diagnosis under
limited fault data (SCLIFD) framework to address these issues, which extends
the classical incremental classifier and representation learning (iCaRL)
framework from three perspectives. Primarily, we adopt supervised contrastive
knowledge distillation (KD) to enhance its representation learning capability
under limited fault data. Moreover, we propose a novel prioritized exemplar
selection method adaptive herding (AdaHerding) to restrict the increase of the
computational burden, which is also combined with KD to alleviate catastrophic
forgetting. Additionally, we adopt the cosine classifier to mitigate the
adverse impact of class imbalance. We conduct extensive experiments on
simulated and real-world industrial processes under different imbalance ratios.
Experimental results show that our SCLIFD outperforms the existing methods by a
large margin.
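Of the three ingredients, the cosine classifier is the simplest to show in isolation; the sketch below normalizes both features and class weights so logits depend on angles rather than magnitudes, which is what mitigates the bias toward frequent classes. The scale factor is an illustrative hyperparameter:

```python
import numpy as np

def cosine_logits(features, class_weights, scale=16.0):
    """Cosine classifier: scaled cosine similarity between features and weights.

    features: (batch, dim); class_weights: (num_classes, dim).
    Normalization removes the magnitude bias that otherwise favors classes
    with many (typically new) training samples under class imbalance.
    """
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    w = class_weights / (np.linalg.norm(class_weights, axis=1, keepdims=True) + 1e-12)
    return scale * f @ w.T
```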
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 14:50:12 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Peng",
"Peng",
""
],
[
"Zhang",
"Hanrong",
""
],
[
"Li",
"Mengxuan",
""
],
[
"Peng",
"Gongzhuang",
""
],
[
"Wang",
"Hongwei",
""
],
[
"Shen",
"Weiming",
""
]
] |
new_dataset
| 0.99744 |
2302.05937
|
Binhai Zhu
|
Sergey Bereg and Yuya Higashikawa and Naoki Katoh and Manuel Lafond
and Yuki Tokuni and Binhai Zhu
|
The Two-Squirrel Problem and Its Relatives
|
17 pages, 7 figures
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we start with a variation of the star cover problem called the
Two-Squirrel problem. Given a set $P$ of $2n$ points in the plane, and two
sites $c_1$ and $c_2$, compute two $n$-stars $S_1$ and $S_2$ centered at $c_1$
and $c_2$ respectively such that the maximum weight of $S_1$ and $S_2$ is
minimized. This problem is strongly NP-hard by a reduction from Equal-size
Set-Partition with Rationals. Then we consider two variations of the
Two-Squirrel problem, namely the Two-MST and Two-TSP problem, which are both
NP-hard. The NP-hardness for the latter is obvious while the former needs a
non-trivial reduction from Equal-size Set-Partition with Rationals. In terms of
approximation algorithms, for Two-MST and Two-TSP we give factor 3.6402 and
$4+\varepsilon$ approximations respectively. Finally, we also show some
interesting polynomial-time solvable cases for Two-MST.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 15:23:41 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Bereg",
"Sergey",
""
],
[
"Higashikawa",
"Yuya",
""
],
[
"Katoh",
"Naoki",
""
],
[
"Lafond",
"Manuel",
""
],
[
"Tokuni",
"Yuki",
""
],
[
"Zhu",
"Binhai",
""
]
] |
new_dataset
| 0.994417 |
2302.05959
|
Yanheng Li
|
Yanheng Li, Lin Luoying, Xinyan Li, Yaxuan Mao, Ray Lc
|
"Nice to meet you!": Expressing Emotions with Movement Gestures and
Textual Content in Automatic Handwriting Robots
|
HRI 2023 LBR
| null |
10.1145/3568294.3580045
| null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-writing robots have been used in assistive writing and drawing
applications. However, robots do not convey emotional tones in the writing
process due to the lack of behaviors humans typically adopt. To examine how
people interpret designed robotic expressions of emotion through both movements
and textual output, we used a pen-plotting robot to generate texts by
performing human-like behaviors like stop-and-go, speed, and pressure
variation. We examined how people convey emotion in the writing process by
observing how they wrote in different emotional contexts. We then mapped these
human expressions during writing to the handwriting robot and measured how well
other participants understood the robot's affective expression. We found that
textual output was the strongest determinant of participants' ability to
perceive the robot's emotions, whereas parameters of gestural movements of the
robots like speed, fluency, pressure, size, and acceleration could be useful
for understanding the context of the writing expression.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 17:13:25 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Li",
"Yanheng",
""
],
[
"Luoying",
"Lin",
""
],
[
"Li",
"Xinyan",
""
],
[
"Mao",
"Yaxuan",
""
],
[
"Lc",
"Ray",
""
]
] |
new_dataset
| 0.998543 |
2302.05996
|
MohammadHossein Askarihemmat
|
MohammadHossein AskariHemmat, Theo Dupuis, Yoan Fournier, Nizar El
Zarif, Matheus Cavalcante, Matteo Perotti, Frank Gurkaynak, Luca Benini,
Francois Leduc-Primeau, Yvon Savaria, Jean-Pierre David
|
Quark: An Integer RISC-V Vector Processor for Sub-Byte Quantized DNN
Inference
|
5 pages. Accepted for publication in the 56th International Symposium
on Circuits and Systems (ISCAS 2023)
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present Quark, an integer RISC-V vector processor
specifically tailored for sub-byte DNN inference. Quark is implemented in
GlobalFoundries' 22FDX FD-SOI technology. It is designed on top of Ara, an
open-source 64-bit RISC-V vector processor. To accommodate sub-byte DNN
inference, Quark extends Ara by adding specialized vector instructions to
perform sub-byte quantized operations. We also remove the floating-point unit
from Quark's lanes and use the CVA6 RISC-V scalar core for the re-scaling
operations that are required in quantized neural network inference. This makes
each lane of Quark 2 times smaller and 1.9 times more power efficient compared
to those of Ara. In this paper, we show that Quark can run quantized models
at sub-byte precision. Notably we show that for 1-bit and 2-bit quantized
models, Quark can accelerate computation of Conv2d over various ranges of
inputs and kernel sizes.
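The storage layout sub-byte inference depends on is easy to demonstrate in software; the numpy sketch below packs 2-bit weights four per byte, LSB first. The bit ordering is an assumption for illustration and says nothing about Quark's actual hardware encoding:

```python
import numpy as np

def pack_2bit(values):
    """Pack 2-bit values (0..3) four per byte, least-significant bits first."""
    v = np.asarray(values, dtype=np.uint8).reshape(-1, 4)
    assert v.max(initial=0) <= 3, "values must fit in 2 bits"
    return (v[:, 0] | (v[:, 1] << 2) | (v[:, 2] << 4) | (v[:, 3] << 6)).astype(np.uint8)

def unpack_2bit(packed):
    """Inverse of pack_2bit: recover the flat array of 2-bit values."""
    p = np.asarray(packed, dtype=np.uint8)
    return np.stack([(p >> s) & 0b11 for s in (0, 2, 4, 6)], axis=1).reshape(-1)
```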
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 20:45:07 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"AskariHemmat",
"MohammadHossein",
""
],
[
"Dupuis",
"Theo",
""
],
[
"Fournier",
"Yoan",
""
],
[
"Zarif",
"Nizar El",
""
],
[
"Cavalcante",
"Matheus",
""
],
[
"Perotti",
"Matteo",
""
],
[
"Gurkaynak",
"Frank",
""
],
[
"Benini",
"Luca",
""
],
[
"Leduc-Primeau",
"Francois",
""
],
[
"Savaria",
"Yvon",
""
],
[
"David",
"Jean-Pierre",
""
]
] |
new_dataset
| 0.993367 |
2302.06008
|
Ren\'e Peinl
|
Johannes Wirth, Ren\'e Peinl
|
ASR Bundestag: A Large-Scale political debate dataset in German
|
13 pages, 2 tables, 4 figures
| null | null | null |
cs.CL cs.AI cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present ASR Bundestag, a dataset for automatic speech recognition in
German, consisting of 610 hours of aligned audio-transcript pairs for
supervised training as well as 1,038 hours of unlabeled audio snippets for
self-supervised learning, based on raw audio data and transcriptions from
plenary sessions and committee meetings of the German parliament. In addition,
we discuss utilized approaches for the automated creation of speech datasets
and assess the quality of the resulting dataset based on evaluations and
finetuning of a pre-trained state-of-the-art model. We make the dataset
publicly available, including all subsets.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 21:45:18 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Wirth",
"Johannes",
""
],
[
"Peinl",
"René",
""
]
] |
new_dataset
| 0.999779 |
2302.06050
|
Yang Song
|
Yang Song, Junayed Mahmud, Nadeeshan De Silva, Ying Zhou, Oscar
Chaparro, Kevin Moran, Andrian Marcus, Denys Poshyvanyk
|
BURT: A Chatbot for Interactive Bug Reporting
|
Accepted by the Demonstrations Track of the 45th International
Conference on Software Engineering (ICSE'23). arXiv admin note: substantial
text overlap with arXiv:2209.10062
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces BURT, a web-based chatbot for interactive reporting of
Android app bugs. BURT is designed to assist Android app end-users in reporting
high-quality defect information using an interactive interface. BURT guides the
users in reporting essential bug report elements, i.e., the observed behavior,
expected behavior, and the steps to reproduce the bug. It verifies the quality
of the text written by the user and provides instant feedback. In addition,
BURT provides graphical suggestions that the users can choose as alternatives
to textual descriptions. We empirically evaluated BURT, asking end-users to
report bugs from six Android apps. The reporters found that BURT's guidance and
automated suggestions and clarifications are useful and BURT is easy to use.
BURT is an open-source tool, available at
github.com/sea-lab-wm/burt/tree/tool-demo. A video showing the full
capabilities of BURT can be found at https://youtu.be/SyfOXpHYGRo
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 01:52:50 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Song",
"Yang",
""
],
[
"Mahmud",
"Junayed",
""
],
[
"De Silva",
"Nadeeshan",
""
],
[
"Zhou",
"Ying",
""
],
[
"Chaparro",
"Oscar",
""
],
[
"Moran",
"Kevin",
""
],
[
"Marcus",
"Andrian",
""
],
[
"Poshyvanyk",
"Denys",
""
]
] |
new_dataset
| 0.999031 |
2302.06136
|
Varul Srivastava
|
Varul Srivastava, Dr. Sujit Gujar
|
PRAGTHOS: Practical Game Theoretically Secure Proof-of-Work Blockchain
| null | null | null | null |
cs.CR cs.DC cs.GT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Security analysis of blockchain technology is an active domain of research.
There has been both cryptographic and game-theoretic security analysis of
Proof-of-Work (PoW) blockchains. Prominent work includes the cryptographic
security analysis under the Universal Composable framework and Game-theoretic
security analysis using Rational Protocol Design. These security analysis
models rely on strong assumptions that might not hold in practice. In this paper, we
analyze the security of PoW blockchain protocols. We first show how assumptions
made by previous models need not be valid in reality, which attackers can
exploit to launch attacks that these models fail to capture. These include
Difficulty Alternating Attack, under which forking is possible for an adversary
with less than 0.5 mining power, Quick-Fork Attack, a general bound on selfish
mining attack and transaction withholding attack. Following this, we argue why
previous models for security analysis fail to capture these attacks and propose
a more practical framework for security analysis, pRPD. We then propose a
framework to build PoW blockchains, PRAGTHOS, which is secure against the attacks
mentioned above. Finally, we argue that PoW blockchains complying with the
PRAGTHOS framework are secure against a computationally bounded adversary under
certain conditions on the reward scheme.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 06:53:54 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Srivastava",
"Varul",
""
],
[
"Gujar",
"Dr. Sujit",
""
]
] |
new_dataset
| 0.986372 |
2302.06156
|
Xuehan Wang
|
Xuehan Wang, Xu Shi, Jintao Wang and Jian Song
|
On the Doppler Squint Effect in OTFS Systems over Doubly-Dispersive
Channels: Modeling and Evaluation
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extensive work has demonstrated the excellent performance of orthogonal time
frequency space (OTFS) modulation in high-mobility scenarios. Time-variant
wideband channel estimation serves as one of the key compositions of OTFS
receivers since the data detection requires accurate channel state information
(CSI). In practical wideband OTFS systems, the Doppler shift brought by the
high mobility is frequency-dependent, which is referred to as the Doppler
Squint Effect (DSE). Unfortunately, DSE has been ignored in prior estimation
schemes employed in OTFS systems, which leads to severe performance loss in
channel estimation and the consequent data detection. In this paper, we
investigate the DSE of the wideband time-variant channel in the delay-Doppler
domain and concentrate on the characterization of OTFS channel coefficients
considering DSE. The formulation and evaluation of the OTFS input-output
relationship are
provided for both ideal and rectangular waveforms considering DSE. The channel
estimation is therefore formulated as a sparse signal recovery problem and an
orthogonal matching pursuit (OMP)-based scheme is adopted to solve it.
Simulation results confirm the significance of DSE and the performance
superiority compared with traditional channel estimation approaches ignoring
DSE.
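Generic orthogonal matching pursuit, around which such an estimator would be built, is shown below; the DSE-aware delay-Doppler dictionary is the paper's contribution and is not reproduced here, so A, y, and the sparsity level are placeholders:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover sparse x with y ~ A @ x.

    Columns of A would be delay-Doppler dictionary atoms (constructed to
    account for frequency-dependent Doppler) and x the channel coefficients.
    """
    x = np.zeros(A.shape[1], dtype=A.dtype)
    residual, support = y.astype(A.dtype), []
    for _ in range(sparsity):
        # Atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares re-fit restricted to the chosen support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```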
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 07:34:38 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Wang",
"Xuehan",
""
],
[
"Shi",
"Xu",
""
],
[
"Wang",
"Jintao",
""
],
[
"Song",
"Jian",
""
]
] |
new_dataset
| 0.998029 |
2302.06276
|
Jiahui Liu
|
Jiahui Liu, Xingqun Zhan, Cheng Chi, Xin Zhang, and Chuanrun Zhai
|
Robust Extrinsic Self-Calibration of Camera and Solid State LiDAR
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter proposes an extrinsic calibration approach for a pair consisting
of a monocular camera and a prism-spinning solid-state LiDAR. A unique
characteristic of the measured point cloud resulting from the flower-like
scanning pattern is first disclosed: vacant points, a type of outlier lying
between the foreground target and background objects. Unlike existing methods
that use only depth-continuous measurements, we use depth-discontinuous
measurements to retain more valid features and efficiently remove vacant
points. The larger number of detected 3D corners thus contains more robust a
priori information than usual, which, together with the 2D corners detected by
overlapping cameras and constrained by the proposed circularity and
rectangularity rules, produces accurate extrinsic estimates. The algorithm is
evaluated with real field
experiments adopting both qualitative and quantitative performance criteria,
and found to be superior to existing algorithms. The code is available on
GitHub.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 11:32:30 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Liu",
"Jiahui",
""
],
[
"Zhan",
"Xingqun",
""
],
[
"Chi",
"Cheng",
""
],
[
"Zhang",
"Xin",
""
],
[
"Zhai",
"Chuanrun",
""
]
] |
new_dataset
| 0.985205 |
2302.06291
|
Sultan Abughazal
|
Sultan Abu Ghazal, Jean Lahoud and Rao Anwer
|
Surface-biased Multi-Level Context 3D Object Detection
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Object detection in 3D point clouds is a crucial task in a range of computer
vision applications including robotics, autonomous cars, and augmented reality.
This work addresses the object detection task in 3D point clouds using a
highly efficient, surface-biased feature extraction method (wang2022rbgnet)
that also captures contextual cues on multiple levels. We propose a 3D object
detector that extracts accurate feature representations of object candidates
and leverages self-attention on point patches, object candidates, and the
global 3D scene. Self-attention has been shown to be effective in encoding
correlation information in 3D point clouds (xie2020mlcvnet), whereas other 3D
detectors focus on enhancing point cloud feature extraction by selectively
obtaining more meaningful local features (wang2022rbgnet) while contextual
information is overlooked. To this end, the proposed architecture uses
ray-based surface-biased feature extraction and multi-level context encoding
to outperform state-of-the-art 3D object detectors. In this work, 3D detection
experiments are performed on scenes from the ScanNet dataset whereby the
self-attention modules are introduced one after the other to isolate the effect
of self-attention at each level.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 11:50:04 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Ghazal",
"Sultan Abu",
""
],
[
"Lahoud",
"Jean",
""
],
[
"Anwer",
"Rao",
""
]
] |
new_dataset
| 0.998724 |
2302.06298
|
Zeqiang Lai
|
Zeqiang Lai, Ying Fu, Jun Zhang
|
Hyperspectral Image Super Resolution with Real Unaligned RGB Guidance
|
The code and dataset are publicly available at
https://zeqiang-lai.github.io/HSI-RefSR/
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Fusion-based hyperspectral image (HSI) super-resolution has become
increasingly prevalent for its capability to integrate high-frequency spatial
information from the paired high-resolution (HR) RGB reference image. However,
most of the existing methods either heavily rely on the accurate alignment
between low-resolution (LR) HSIs and RGB images, or can only deal with
simulated unaligned RGB images generated by rigid geometric transformations,
which weakens their effectiveness for real scenes. In this paper, we explore
the fusion-based HSI super-resolution with real RGB reference images that have
both rigid and non-rigid misalignments. To properly address the limitations of
existing methods for unaligned reference images, we propose an HSI fusion
network with heterogeneous feature extraction, multi-stage feature alignment,
and attentive feature fusion. Specifically, our network first transforms the
input HSI and RGB images into two sets of multi-scale features with an HSI
encoder and an RGB encoder, respectively. The features of RGB reference images
are then processed by a multi-stage alignment module to explicitly align the
features of RGB reference with the LR HSI. Finally, the aligned features of RGB
reference are further adjusted by an adaptive attention module to focus more on
discriminative regions before sending them to the fusion decoder to generate
the reconstructed HR HSI. Additionally, we collect a real-world HSI fusion
dataset, consisting of paired HSI and unaligned RGB reference, to support the
evaluation of the proposed model for real scenes. Extensive experiments are
conducted on both simulated and our real-world datasets, and the results show that our
method obtains a clear improvement over existing single-image and fusion-based
super-resolution methods on quantitative assessment as well as visual
comparison.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 11:56:45 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Lai",
"Zeqiang",
""
],
[
"Fu",
"Ying",
""
],
[
"Zhang",
"Jun",
""
]
] |
new_dataset
| 0.975131 |
2302.06308
|
Jan Koh\'ut
|
Jan Koh\'ut, Michal Hradi\v{s}
|
Finetuning Is a Surprisingly Effective Domain Adaptation Baseline in
Handwriting Recognition
|
Submitted to ICDAR2023 conference
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many machine learning tasks, a large general dataset and a small
specialized dataset are available. In such situations, various domain
adaptation methods can be used to adapt a general model to the target dataset.
We show that in the case of neural networks trained for handwriting recognition
using CTC, simple finetuning with data augmentation works surprisingly well in
such scenarios and that it is resistant to overfitting even for very small
target domain datasets. We evaluated the behavior of finetuning with respect to
augmentation, training data size, and quality of the pre-trained network, both
in writer-dependent and writer-independent settings. On a large real-world
dataset, finetuning provided an average relative CER improvement of 25% with
16 text lines for new writers and 50% with 256 text lines.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 12:18:58 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Kohút",
"Jan",
""
],
[
"Hradiš",
"Michal",
""
]
] |
new_dataset
| 0.995198 |
2302.06312
|
Perrine Rose SEGUIN
|
P S\'eguin (CRNL), E Maby (CRNL), J Mattout (CRNL)
|
Why BCIs work poorly with the patients who need them the most?
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major objective of Brain-Computer interfaces (BCI) is to restore
communication and control in patients with severe motor impairments, like
people with Locked-in syndrome. These patients are left only with limited eye
and eyelid movements. However, they do not yet benefit from efficient BCI
solutions. Different signals can be used as commands for non-invasive BCIs:
mu and beta rhythm desynchronization, evoked potentials and slow cortical
potentials. Whatever the signal, clinical studies show a dramatic loss of
performance in severely impaired patients compared to healthy subjects.
Interestingly, the control principle is always the same, namely the replacement
of an impossible (overt) movement by a (covert) attentional command. Drawing
from the premotor theory of attention, from neuroimaging findings about the
functional anatomy of spatial attention, from clinical observations and from
recent computational accounts of attention for both action and perception, we
explore the hypothesis that these patients undergo negative plasticity that
extends their impairment from overt to covert attentional processes.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 12:23:46 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Séguin",
"P",
"",
"CRNL"
],
[
"Maby",
"E",
"",
"CRNL"
],
[
"Mattout",
"J",
"",
"CRNL"
]
] |
new_dataset
| 0.953236 |
2302.06355
|
Daniel Hienert
|
Andrea Papenmeier, Dagmar Kern, Daniel Hienert, Alfred Sliwa, Ahmet
Aker, Norbert Fuhr
|
Dataset of Natural Language Queries for E-Commerce
| null |
In CHIIR '21: Proceedings of the 2021 Conference on Human
Information Interaction and Retrieval
|
10.1145/3406522.3446043
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shopping online is becoming more and more frequent in our everyday lives. For
e-commerce search systems, understanding the natural language that comes in
through voice assistants, chatbots, or conversational search is essential to
grasping what the user really wants. However, evaluation datasets with natural
and detailed information needs of product-seekers that could be used for
research do not exist. Due to privacy issues and competitive consequences,
only a few datasets with real user search queries from logs are openly
available.
In this paper, we present a dataset of 3,540 natural language queries in two
domains that describe what users want when searching for a laptop or a jacket
of their choice. The dataset contains annotations of vague terms and key facts
of 1,754 laptop queries. This dataset opens up a range of research
opportunities in the fields of natural language processing and (interactive)
information retrieval for product search.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 13:39:12 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Papenmeier",
"Andrea",
""
],
[
"Kern",
"Dagmar",
""
],
[
"Hienert",
"Daniel",
""
],
[
"Sliwa",
"Alfred",
""
],
[
"Aker",
"Ahmet",
""
],
[
"Fuhr",
"Norbert",
""
]
] |
new_dataset
| 0.999521 |
2302.06368
|
Udugama Vithanage Bavantha Lakshan Udugama
|
B. Udugama
|
Mini bot 3D: A ROS based Gazebo Simulation
|
Report on a scientific study for a robot simulation
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The recent adoption of the Robot Operating System (ROS) as a software
standard in robotics has contributed novel solutions to several problems in
the area. One such problem is known as Simultaneous Localization and Mapping
(SLAM) with autonomous navigation, for which a number of algorithms from
different classes are available as ROS packages ready to be used on any
compatible robot. Many anticipated applications of autonomous mobile robots
require them to navigate in diverse, complex environments without support from
external infrastructure. To perform this on-board navigation, the robot
must make use of the available sensor technologies and fuse the most reliable
data respective to the present environment in an adaptive manner and optimize
the algorithm parameters prior to the actual implementation to reduce the
turnaround time. This paper reviews recent efforts to develop onboard
navigation systems that can seamlessly transition between outdoor and indoor
environments and across different terrains, using the Gazebo simulator with ROS
integration. The methodologies surveyed include SLAM, Odometry and
Localisation. An overview of the state-of-the-art is provided with a focus on
approaches which are adaptive to dynamic sensor uncertainty, dynamic objects
and dynamic scenes. The experiences reported in this work should provide
insight for roboticists seeking an Autonomous SLAM solution for indoor
applications.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 13:56:13 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Udugama",
"B.",
""
]
] |
new_dataset
| 0.994591 |
2302.06414
|
MANUEL DIAZ ZAPATA
|
Manuel Alejandro Diaz-Zapata (CHROMA), David Sierra Gonz\'alez
(CHROMA), \"Ozg\"ur Erkent (CHROMA), Jilles Dibangoye (CHROMA), Christian
Laugier (CHROMA, E-MOTION, Inria)
|
LAPTNet-FPN: Multi-scale LiDAR-aided Projective Transform Network for
Real Time Semantic Grid Prediction
|
2023 IEEE International Conference on Robotics and Automation (ICRA),
IEEE Robotics and Automation Society, May 2023, London, United Kingdom
| null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic grids can be useful representations of the scene around an
autonomous system. By having information about the layout of the space around
itself, a robot can leverage this type of representation for crucial tasks such
as navigation or tracking. By fusing information from multiple sensors,
robustness can be increased and the computational load for the task can be
lowered, achieving real time performance. Our multi-scale LiDAR-Aided
Perspective Transform network uses information available in point clouds to
guide the projection of image features to a top-view representation, resulting
in a relative improvement in the state of the art for semantic grid generation
for human (+8.67%) and movable object (+49.07%) classes in the nuScenes
dataset, as well as achieving results close to the state of the art for the
vehicle, drivable area and walkway classes, while performing inference at 25
FPS.
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 12:34:28 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Diaz-Zapata",
"Manuel Alejandro",
"",
"CHROMA"
],
[
"González",
"David Sierra",
"",
"CHROMA"
],
[
"Erkent",
"Özgür",
"",
"CHROMA"
],
[
"Dibangoye",
"Jilles",
"",
"CHROMA"
],
[
"Laugier",
"Christian",
"",
"CHROMA, E-MOTION, Inria"
]
] |
new_dataset
| 0.997992 |
2302.06506
|
Nicola Cotumaccio
|
Nicola Cotumaccio
|
A Myhill-Nerode Theorem for Generalized Automata
| null | null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The model of generalized automata, introduced by Eilenberg, allows one to
represent a regular language more concisely than conventional automata by
allowing edges to be labeled not only with characters but also with strings.
Hashiguchi proved that the problem of determining the minimum number of states
of a generalized automaton recognizing a given language is decidable [ICALP
1991]. Subsequently, Giammaresi and Montalbano introduced a notion of
determinism for generalized automata [STACS 1995, TCS 1999]. While generalized
deterministic automata retain many properties of conventional deterministic
automata, the uniqueness of a minimal generalized deterministic automaton is
lost. In this paper, we show that the lack of uniqueness can be explained by
introducing a set $ \mathcal{W(A)} $ associated with a generalized automaton $
\mathcal{A} $. The set $ \mathcal{W(A)} $ is always trivially equal to the set
of all prefixes of the language recognized by the automaton, if $ \mathcal{A} $
is a conventional automaton, but this need not be true for generalized
automata. By fixing $ \mathcal{W(A)} $, we are able to derive for the first
time a full Myhill-Nerode theorem for generalized automata, which contains the
classical Myhill-Nerode theorem for conventional automata as a degenerate case.
In the conclusions, we outline the essential role that $ \mathcal{W(A)} $ plays
in graph compression, allowing us to extend the class of regular languages that
can be indexed and compressed.
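For readers unfamiliar with the classical notion being generalized, the sketch
below illustrates the Myhill-Nerode equivalence for conventional DFAs via the
standard table-filling algorithm; it is background material under that
assumption, not the paper's construction for generalized automata, and the
example DFA is hypothetical.
```python
# Background sketch of the classical notion (conventional DFAs only; the
# paper's generalized setting is not modeled): the table-filling algorithm
# marks distinguishable state pairs; unmarked pairs are Nerode-equivalent.
from itertools import combinations

def nerode_classes(states, alphabet, delta, accepting):
    """Partition `states` into Myhill-Nerode equivalence classes."""
    # Pairs split by the empty suffix: exactly one state accepts.
    marked = {frozenset(pq) for pq in combinations(states, 2)
              if (pq[0] in accepting) != (pq[1] in accepting)}
    changed = True
    while changed:               # propagate distinguishability backwards
        changed = False
        for p, q in combinations(states, 2):
            pair = frozenset((p, q))
            if pair not in marked and any(
                    frozenset((delta[p, a], delta[q, a])) in marked
                    for a in alphabet):
                marked.add(pair)
                changed = True
    classes = []                 # group mutually unmarked states
    for s in states:
        for c in classes:
            if frozenset((s, c[0])) not in marked:
                c.append(s)
                break
        else:
            classes.append([s])
    return classes

# Toy DFA over {a, b} accepting an even number of a's; states 2 and 3
# duplicate states 0 and 1, so they collapse into the same classes.
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 0, (1, 'b'): 3,
         (2, 'a'): 3, (2, 'b'): 0, (3, 'a'): 2, (3, 'b'): 1}
print(nerode_classes([0, 1, 2, 3], 'ab', delta, {0, 2}))  # [[0, 2], [1, 3]]
```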
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 16:32:44 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Cotumaccio",
"Nicola",
""
]
] |
new_dataset
| 0.997936 |
2302.06560
|
Anubhav Jangra
|
Yash Verma, Anubhav Jangra, Raghvendra Kumar, Sriparna Saha
|
Large Scale Multi-Lingual Multi-Modal Summarization Dataset
| null | null | null | null |
cs.CL cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Significant developments in techniques such as encoder-decoder models have
enabled us to represent information comprising multiple modalities. This
information can further enhance many downstream tasks in the field of
information retrieval and natural language processing; however, improvements in
multi-modal techniques and their performance evaluation require large-scale
multi-modal data which offers sufficient diversity. Multi-lingual modeling for
a variety of tasks like multi-modal summarization, text generation, and
translation leverages information derived from high-quality multi-lingual
annotated data. In this work, we present the current largest multi-lingual
multi-modal summarization dataset (M3LS), which consists of over a million
instances of document-image pairs along with a professionally annotated
multi-modal summary for each pair. It is derived from news articles published
by the British Broadcasting Corporation (BBC) over a decade and spans 20
languages, targeting diversity across five language roots. It is also the
largest summarization dataset for 13 languages and includes cross-lingual
summarization data for 2 languages. We formally define the multi-lingual
multi-modal summarization task utilizing our dataset and report baseline scores
from various state-of-the-art summarization techniques in a multi-lingual
setting. We also compare it with many similar datasets to analyze the
uniqueness and difficulty of M3LS.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 18:00:23 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Verma",
"Yash",
""
],
[
"Jangra",
"Anubhav",
""
],
[
"Kumar",
"Raghvendra",
""
],
[
"Saha",
"Sriparna",
""
]
] |
new_dataset
| 0.999544 |
2302.06561
|
Baxi Chong
|
Baxi Chong, Tianyu Wang, Daniel Irvine, Velin Kojouharov, Bo Lin,
Howie Choset, Daniel I. Goldman, Grigoriy Blekherman
|
Gait design for limbless obstacle aided locomotion using geometric
mechanics
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Limbless robots have the potential to maneuver through cluttered environments
that conventional robots cannot traverse. As illustrated in their biological
counterparts such as snakes and nematodes, limbless locomotors can benefit from
interactions with obstacles, yet such obstacle-aided locomotion (OAL) requires
properly coordinated high-level self-deformation patterns (gait templates) as
well as low-level body adaptation to environments. Most prior work on OAL
utilized stereotyped traveling-wave gait templates and relied on local body
deformations (e.g., passive body mechanics or decentralized controller
parameter adaptation based on force feedback) for obstacle navigation, while
gait template design for OAL remains less studied. In this paper, we explore
novel gait templates for OAL based on tools derived from geometric mechanics
(GM), which thus far has been limited to homogeneous environments. Here, we
expand the scope of GM to obstacle-rich environments. Specifically, we
establish a model that maps the presence of an obstacle to directional
constraints in optimization. In doing so, we identify novel gait templates
suitable for sparsely and densely distributed obstacle-rich environments
respectively. Open-loop robophysical experiments verify the effectiveness of
our identified OAL gaits in obstacle-rich environments. We posit that when such
OAL gait templates are augmented with appropriate sensing and feedback
controls, limbless locomotors will gain robust function in obstacle rich
environments.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 18:06:06 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Chong",
"Baxi",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Irvine",
"Daniel",
""
],
[
"Kojouharov",
"Velin",
""
],
[
"Lin",
"Bo",
""
],
[
"Choset",
"Howie",
""
],
[
"Goldman",
"Daniel I.",
""
],
[
"Blekherman",
"Grigoriy",
""
]
] |
new_dataset
| 0.965897 |
2302.06568
|
Louis Blankemeier
|
Louis Blankemeier, Arjun Desai, Juan Manuel Zambrano Chaves, Andrew
Wentland, Sally Yao, Eduardo Reis, Malte Jensen, Bhanushree Bahl, Khushboo
Arora, Bhavik N. Patel, Leon Lenchik, Marc Willis, Robert D. Boutin, Akshay
S. Chaudhari
|
Comp2Comp: Open-Source Body Composition Assessment on Computed
Tomography
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Computed tomography (CT) is routinely used in clinical practice to evaluate a
wide variety of medical conditions. While CT scans provide diagnoses, they also
offer the ability to extract quantitative body composition metrics to analyze
tissue volume and quality. Extracting quantitative body composition measures
manually from CT scans is a cumbersome and time-consuming task. Proprietary
software has been developed recently to automate this process, but the
closed-source nature impedes widespread use. There is a growing need for fully
automated body composition software that is more accessible and easier to use,
especially for clinicians and researchers who are not experts in medical image
processing. To this end, we have built Comp2Comp, an open-source Python package
for rapid and automated body composition analysis of CT scans. This package
offers models, post-processing heuristics, body composition metrics, automated
batching, and polychromatic visualizations. Comp2Comp currently computes body
composition measures for bone, skeletal muscle, visceral adipose tissue, and
subcutaneous adipose tissue on CT scans of the abdomen. We have created two
pipelines for this purpose. The first pipeline computes vertebral measures, as
well as muscle and adipose tissue measures, at the T12 - L5 vertebral levels
from abdominal CT scans. The second pipeline computes muscle and adipose tissue
measures on user-specified 2D axial slices. In this guide, we discuss the
architecture of the Comp2Comp pipelines, provide usage instructions, and report
internal and external validation results to measure the quality of
segmentations and body composition measures. Comp2Comp can be found at
https://github.com/StanfordMIMI/Comp2Comp.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 18:11:54 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Blankemeier",
"Louis",
""
],
[
"Desai",
"Arjun",
""
],
[
"Chaves",
"Juan Manuel Zambrano",
""
],
[
"Wentland",
"Andrew",
""
],
[
"Yao",
"Sally",
""
],
[
"Reis",
"Eduardo",
""
],
[
"Jensen",
"Malte",
""
],
[
"Bahl",
"Bhanushree",
""
],
[
"Arora",
"Khushboo",
""
],
[
"Patel",
"Bhavik N.",
""
],
[
"Lenchik",
"Leon",
""
],
[
"Willis",
"Marc",
""
],
[
"Boutin",
"Robert D.",
""
],
[
"Chaudhari",
"Akshay S.",
""
]
] |
new_dataset
| 0.999647 |
2302.06582
|
Mithun Goutham
|
Mithun Goutham, Meghna Menon, Sarah Garrow and Stephanie Stockar
|
A Convex Hull Cheapest Insertion Heuristic for the Non-Euclidean and
Precedence Constrained TSPs
|
Manuscript submitted 4 February 2023 to the IEEE Transactions on
Intelligent Transportation Systems (T-ITS)
| null | null | null |
cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The convex hull cheapest insertion heuristic is known to generate good
solutions to the Euclidean Traveling Salesperson Problem. This paper presents
an adaptation of this heuristic to the non-Euclidean version of the problem and
further extends it to the problem with precedence constraints, also known as
the Sequential Ordering Problem. To test the proposed algorithm, the well-known
TSPLIB benchmark dataset is modified in a replicable manner to create
non-Euclidean instances and precedence constraints. The proposed algorithm is
shown to outperform the commonly used Nearest Neighbor algorithm in 97% of the
cases that do not have precedence constraints. When precedence constraints
exist such that the child nodes are centrally located, the algorithm again
outperforms the Nearest Neighbor algorithm in 98% of the studied instances.
Considering all spatial layouts of precedence constraints, the algorithm
outperforms the Nearest Neighbor heuristic 68% of the time.
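As a point of reference, here is a brief Python sketch of the classical
Euclidean convex hull cheapest insertion heuristic that the paper adapts: seed
the tour with the convex hull, then repeatedly perform the cheapest insertion.
The coordinates are made up, and the paper's non-Euclidean and
precedence-constrained extensions are not shown.
```python
# A sketch of the classical Euclidean heuristic (assumed baseline, not the
# paper's non-Euclidean or precedence-constrained variants): seed the tour
# with the convex hull, then repeatedly apply the cheapest insertion.
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertex indices in CCW order."""
    idx = sorted(range(len(points)), key=lambda i: points[i])

    def half(order):
        h = []
        for i in order:
            while len(h) >= 2:
                (x1, y1), (x2, y2), (x3, y3) = (points[h[-2]],
                                                points[h[-1]], points[i])
                if (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) <= 0:
                    h.pop()      # pop non-left turns
                else:
                    break
            h.append(i)
        return h

    lower, upper = half(idx), half(idx[::-1])
    return lower[:-1] + upper[:-1]

def chci_tour(points):
    d = lambda i, j: math.dist(points[i], points[j])
    tour = convex_hull(points)
    rest = set(range(len(points))) - set(tour)
    while rest:
        # Cheapest insertion: minimize d(i,k) + d(k,j) - d(i,j) over all
        # tour edges (i, j) and all remaining cities k.
        _, e, k = min((d(tour[e], k)
                       + d(k, tour[(e + 1) % len(tour)])
                       - d(tour[e], tour[(e + 1) % len(tour)]), e, k)
                      for e in range(len(tour)) for k in rest)
        tour.insert(e + 1, k)
        rest.remove(k)
    return tour

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2), (3, 2)]
print(chci_tour(pts))            # hull corners first, interior cities inserted
```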
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 13:56:19 GMT"
}
] | 2023-02-14T00:00:00 |
[
[
"Goutham",
"Mithun",
""
],
[
"Menon",
"Meghna",
""
],
[
"Garrow",
"Sarah",
""
],
[
"Stockar",
"Stephanie",
""
]
] |
new_dataset
| 0.981807 |
2102.07362
|
Hanwen Yao
|
Hanwen Yao, Arman Fazeli and Alexander Vardy
|
A Deterministic Algorithm for Computing the Weight Distribution of Polar
Codes
|
Accepted by the IEEE Transactions on Information Theory. Presented in
part at ISIT 2021
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this work, we present a deterministic algorithm for computing the entire
weight distribution of polar codes. As the first step, we derive an efficient
recursive procedure to compute the weight distribution that arises in
successive cancellation decoding of polar codes along any decoding path. This
solves the open problem recently posed by Polyanskaya, Davletshin, and
Polyanskii. Using this recursive procedure, at code length n, we can compute
the weight distribution of any polar cosets in time O(n^2). We show that any
polar code can be represented as a disjoint union of such polar cosets;
moreover, this representation extends to polar codes with dynamically frozen
bits. However, the number of polar cosets in such representation scales
exponentially with a parameter introduced herein, which we call the mixing
factor. To upper bound the complexity of our algorithm for polar codes being
decreasing monomial codes, we study the range of their mixing factors. We prove
that among all decreasing monomial codes with rates at most 1/2, self-dual
Reed-Muller codes have the largest mixing factors. To further reduce the
complexity of our algorithm, we make use of the fact that, as decreasing
monomial codes, polar codes have a large automorphism group. That automorphism
group includes the block lower-triangular affine group (BLTA), which in turn
contains the lower-triangular affine group (LTA). We prove that a subgroup of
LTA acts transitively on certain subsets of decreasing monomial codes, thereby
drastically reducing the number of polar cosets that we need to evaluate. This
complexity reduction makes it possible to compute the weight distribution of
polar codes at length n = 128.
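For a sense of scale, the sketch below shows the brute-force baseline such
algorithms avoid: enumerating all 2^k codewords of a small binary linear code
and tallying Hamming weights, which is exponential in the code dimension. The
generator matrix is the [7,4] Hamming code, used purely as a toy stand-in for
a polar code.
```python
# Brute-force baseline (toy scale only): enumerate all 2^k codewords of a
# small binary linear code and tally Hamming weights. This cost is what the
# paper's coset-based algorithm avoids. G below generates the [7,4] Hamming
# code, a stand-in unrelated to polar codes.
from collections import Counter
from itertools import product

G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def weight_distribution(G):
    k, n = len(G), len(G[0])
    dist = Counter()
    for msg in product((0, 1), repeat=k):            # all 2^k messages
        codeword = [sum(msg[i] * G[i][j] for i in range(k)) % 2
                    for j in range(n)]
        dist[sum(codeword)] += 1
    return dict(sorted(dist.items()))

print(weight_distribution(G))    # {0: 1, 3: 7, 4: 7, 7: 1}
```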
|
[
{
"version": "v1",
"created": "Mon, 15 Feb 2021 06:50:24 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2023 21:18:43 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Yao",
"Hanwen",
""
],
[
"Fazeli",
"Arman",
""
],
[
"Vardy",
"Alexander",
""
]
] |
new_dataset
| 0.999229 |
2205.12644
|
Arie Cattan
|
Shon Otmazgin, Arie Cattan, Yoav Goldberg
|
LingMess: Linguistically Informed Multi Expert Scorers for Coreference
Resolution
|
EACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While coreference resolution typically involves various linguistic
challenges, recent models are based on a single pairwise scorer for all types
of pairs. We present LingMess, a new coreference model that defines different
categories of coreference cases and optimizes multiple pairwise scorers, where
each scorer learns a specific set of linguistic challenges. Our model
substantially improves pairwise scores for most categories and improves
cluster-level performance on OntoNotes and 5 additional datasets. Our model is
available at https://github.com/shon-otmazgin/lingmess-coref
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 10:39:46 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Oct 2022 11:50:21 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Feb 2023 11:09:19 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Otmazgin",
"Shon",
""
],
[
"Cattan",
"Arie",
""
],
[
"Goldberg",
"Yoav",
""
]
] |
new_dataset
| 0.991032 |
2207.07742
|
Jakub Rozlivek
|
Jan Docekal, Jakub Rozlivek, Jiri Matas, and Matej Hoffmann
|
Human keypoint detection for close proximity human-robot interaction
|
8 pages 8 figures
|
IEEE-RAS International Conference on Humanoid Robots (Humanoids
2022)
|
10.1109/Humanoids53995.2022.10000133
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the performance of state-of-the-art human keypoint detectors in the
context of close proximity human-robot interaction. The detection in this
scenario is specific in that only a subset of body parts such as hands and
torso are in the field of view. In particular, (i) we survey existing datasets
with human pose annotation from the perspective of close proximity images and
prepare and make publicly available a new Human in Close Proximity (HiCP)
dataset; (ii) we quantitatively and qualitatively compare state-of-the-art
human whole-body 2D keypoint detection methods (OpenPose, MMPose, AlphaPose,
Detectron2) on this dataset; (iii) since accurate detection of hands and
fingers is critical in applications with handovers, we evaluate the performance
of the MediaPipe hand detector; (iv) we deploy the algorithms on a humanoid
robot with an RGB-D camera on its head and evaluate the performance in 3D human
keypoint detection. A motion capture system is used as reference.
The best performing whole-body keypoint detectors in close proximity were
MMPose and AlphaPose, but both had difficulty with finger detection. Thus, we
propose a combination of MMPose or AlphaPose for the body and MediaPipe for the
hands in a single framework providing the most accurate and robust detection.
We also analyse the failure modes of individual detectors -- for example, to
what extent the absence of the head of the person in the image degrades
performance. Finally, we demonstrate the framework in a scenario where a
humanoid robot interacting with a person uses the detected 3D keypoints for
whole-body avoidance maneuvers.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2022 20:33:29 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2023 19:51:34 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Docekal",
"Jan",
""
],
[
"Rozlivek",
"Jakub",
""
],
[
"Matas",
"Jiri",
""
],
[
"Hoffmann",
"Matej",
""
]
] |
new_dataset
| 0.99771 |
2208.02160
|
David Watkins-Valls
|
David Watkins
|
Scrypt Mining with ASICs
|
Published in 2014
| null |
10.13140/RG.2.2.10976.97287
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cryptocurrencies have garnered a lot of attention from governments and internet
enthusiasts over the past three years. These currencies are celebrated for
their security and speedy transactions in a modern era of digital commerce.
Bitcoin was the first of these currencies to gain a large advantage over
subsequent iterations. Bitcoin was first conceived by Satoshi Nakamoto, who
introduced the concept of a cryptocurrency in his paper titled Bitcoin. It
featured new concepts such as proof of work and transactions that utilized
hash-based encryption. One particular alternative cryptocurrency is known as
Litecoin. Backed by a memory-intensive algorithm known as Scrypt, this
particular coin has been celebrated by many cryptocurrency enthusiasts. Scrypt
expands on Bitcoin's proof-of-work algorithm by increasing the amount of work
it takes to commit a transaction within the Litecoin network. Scrypt forces
more work onto the device performing the algorithm by making frequent memory
requests. This makes it difficult to build specialized hardware for minting
new coins and committing transactions, due to the nature of memory-intensive
applications.
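To ground the discussion, the snippet below calls Python's built-in
hashlib.scrypt to show where the memory-hardness parameters enter. The input
bytes are placeholders, and n=1024, r=1, p=1 are the cost parameters commonly
attributed to Litecoin's proof of work; treat both as illustrative assumptions
rather than a faithful miner.
```python
# Illustrative only: hashlib.scrypt is Python's built-in (OpenSSL-backed)
# binding for the memory-hard KDF described above. Memory use scales as
# 128 * r * n bytes (128 KiB here), which is what frustrates specialized
# hardware.
import hashlib

header = b"hypothetical-80-byte-block-header"   # stand-in, not a real header
digest = hashlib.scrypt(
    header,            # password input
    salt=header,       # scrypt-based coins reportedly reuse the header as salt
    n=1024,            # CPU/memory cost; must be a power of two
    r=1,               # block size multiplier
    p=1,               # parallelization factor
    dklen=32,          # 256-bit output, compared against the difficulty target
)
print(digest.hex())
```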
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 17:09:37 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Watkins",
"David",
""
]
] |
new_dataset
| 0.964938 |
2209.01315
|
Sicheng Wang
|
Sicheng Wang, Eugenio Frias Miranda, and Laura H. Blumenschein
|
The Folded Pneumatic Artificial Muscle (foldPAM): Towards
Programmability and Control via End Geometry
|
Manuscript accepted by IEEE Robotics and Automation Letters,
available on IEEE Xplore
|
in IEEE Robotics and Automation Letters, vol. 8, no. 3, pp.
1383-1390, March 2023
|
10.1109/LRA.2023.3238160
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soft pneumatic actuators have seen applications in many soft robotic systems,
and their pressure-driven nature presents unique challenges and opportunities
for controlling their motion. In this work, we present a new concept: designing
and controlling pneumatic actuators via end geometry. We demonstrate a novel
actuator class, named the folded Pneumatic Artificial Muscle (foldPAM), which
features a thin-filmed air pouch that is symmetrically folded on each side.
Varying the folded portion of the actuator changes the end constraints and,
hence, the force-strain relationships. We investigated this change
experimentally by measuring the force-strain relationship of individual foldPAM
units with various lengths and amounts of folding. In addition to
static-geometry units, an actuated foldPAM device was designed to produce
continuous, on-demand adjustment of the end geometry, enabling closed-loop
position control while maintaining constant pressure. Experiments with the
device indicate that geometry control allows access to different areas on the
force-strain plane and that closed-loop geometry control can achieve errors
within 0.5% of the actuation range.
|
[
{
"version": "v1",
"created": "Sat, 3 Sep 2022 02:53:08 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Feb 2023 19:53:14 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Wang",
"Sicheng",
""
],
[
"Miranda",
"Eugenio Frias",
""
],
[
"Blumenschein",
"Laura H.",
""
]
] |
new_dataset
| 0.997763 |
2209.11451
|
Shuhao Zheng
|
Shuhao Zheng, Yanxi Lin, Yang Yu, Ye Yuan, Yongzheng Jia, Xue Liu
|
FIAT: Fine-grained Information Audit for Trustless Transborder Data Flow
|
10 pages, 6 figures, 1 table
| null | null | null |
cs.IT cs.SY eess.SY math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Auditing the information leakage of latent sensitive features during the
transborder data flow has attracted sufficient attention from global digital
regulators. However, a technical approach for the audit practice is missing,
due to two technical challenges. Firstly, there is a lack of theory
and tools for measuring the information of sensitive latent features in a
dataset. Secondly, the transborder data flow involves multi-stakeholders with
diverse interests, which means the audit must be trustless. Despite the
tremendous efforts in protecting data privacy, an important issue that has long
been neglected is that the transmitted data in data flows can leak other
regulated information that is not explicitly contained in the data, leading to
unnoticed information leakage risks. To unveil such risks in a trustworthy
manner before the actual data transfer, we propose FIAT, a Fine-grained
Information Audit system
for Trustless transborder data flow. In FIAT, we use a learning approach to
quantify the amount of information leakage, while the technologies of
zero-knowledge proof and smart contracts are applied to provide trustworthy and
privacy-preserving auditing results. Experiments show that large information
leakage can boost the predictability of uninvolved information using simple
machine-learning models, revealing the importance of information auditing.
Further performance benchmarking also validates the efficiency and scalability
of the FIAT auditing system.
|
[
{
"version": "v1",
"created": "Fri, 23 Sep 2022 07:25:05 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 01:00:02 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Feb 2023 06:55:08 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Zheng",
"Shuhao",
""
],
[
"Lin",
"Yanxi",
""
],
[
"Yu",
"Yang",
""
],
[
"Yuan",
"Ye",
""
],
[
"Jia",
"Yongzheng",
""
],
[
"Liu",
"Xue",
""
]
] |
new_dataset
| 0.961612 |
2211.03895
|
Yutao Tang
|
Yutao Tang, Benjam\'in B\'ejar, Joey K.-Y. Essoe, Joseph F. McGuire
and Ren\'e Vidal
|
Facial Tic Detection in Untrimmed Videos of Tourette Syndrome Patients
| null |
ICPR2022
|
10.1109/ICPR56361.2022.9956140
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Tourette Syndrome (TS) is a behavior disorder that begins in childhood and is
characterized by the expression of involuntary movements and sounds commonly
referred to as tics. Behavioral therapy is the first-line treatment for
patients with TS, and it helps patients raise awareness about tic occurrence as
well as develop tic inhibition strategies. However, the limited availability of
therapists and the difficulty of in-home follow-up work limit its
effectiveness. An automatic tic detection system that is easy to deploy could
alleviate the difficulties of home-therapy by providing feedback to the
patients while exercising tic awareness. In this work, we propose a novel
architecture (T-Net) for automatic tic detection and classification from
untrimmed videos. T-Net combines temporal detection and segmentation and
operates on features that are interpretable to a clinician. We compare T-Net to
several state-of-the-art systems working on deep features extracted from the
raw videos and T-Net achieves comparable performance in terms of average
precision while relying on interpretable features needed in clinical practice.
|
[
{
"version": "v1",
"created": "Mon, 7 Nov 2022 22:59:58 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Tang",
"Yutao",
""
],
[
"Béjar",
"Benjamín",
""
],
[
"Essoe",
"Joey K. -Y.",
""
],
[
"McGuire",
"Joseph F.",
""
],
[
"Vidal",
"René",
""
]
] |
new_dataset
| 0.999653 |
2212.01458
|
Dennis Rohde
|
Alexander Neuhaus and Dennis Rohde
|
Stabbing balls with line segments and polygonal paths
|
Flawed proof
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of ordered stabbing of $n$ balls (of arbitrary and
possibly different radii, no ball contained in another) in $\mathbb{R}^d$, $d
\geq 3$, with either a directed line segment or a (directed) polygonal curve.
Here, the line segment, respectively polygonal curve, shall visit (intersect)
the given sequence of balls in the order of the sequence. We present a
deterministic algorithm that decides whether there exists a line segment
stabbing the given sequence of balls in order, in time $O(n^{4d-2} \log n)$.
Due to the descriptional complexity of the region containing these line
segments, we cannot extend this algorithm to actually compute one. We
circumvent this hurdle by devising a randomized algorithm for a relaxed variant
of the ordered line segment stabbing problem, which is built upon the central
insights from the aforementioned decision algorithm. We further show that this
algorithm can be plugged into an algorithmic scheme by Guibas et al., yielding
an algorithm for a relaxed variant of the minimum-link ordered stabbing path
problem that achieves approximation factor 2 with respect to the number of
links. We conclude with experimental evaluations of the latter two algorithms,
showing practical applicability.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 21:47:54 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 10:33:55 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Neuhaus",
"Alexander",
""
],
[
"Rohde",
"Dennis",
""
]
] |
new_dataset
| 0.997208 |
2301.00199
|
Thorsten Wi{\ss}mann
|
Frits Vaandrager and Thorsten Wi{\ss}mann
|
Action Codes
| null | null | null | null |
cs.FL cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide a new perspective on the problem how high-level state machine
models with abstract actions can be related to low-level models in which these
actions are refined by sequences of concrete actions. We describe the
connection between high-level and low-level actions using \emph{action codes},
a variation of the prefix codes known from coding theory. For each action code
${\mathcal{R}}$, we introduce a \emph{contraction} operator
$\alpha_{\mathcal{R}}$ that turns a low-level model $\mathcal{M}$ into a
high-level model, and a \emph{refinement} operator $\rho_{\mathcal{R}}$ that
transforms a high-level model $\mathcal{N}$ into a low-level model. We
establish a Galois connection $\rho_{\mathcal{R}}(\mathcal{N}) \sqsubseteq
\mathcal{M} \Leftrightarrow \mathcal{N} \sqsubseteq
\alpha_{\mathcal{R}}(\mathcal{M})$, where $\sqsubseteq$ is the well-known
simulation preorder. For conformance, we typically want to obtain an
overapproximation of model $\mathcal{M}$. To this end, we also introduce a
\emph{concretization} operator $\gamma_{\mathcal{R}}$, which behaves like the
refinement operator but adds arbitrary behavior at intermediate points, giving
us a second Galois connection $\alpha_{\mathcal{R}}(\mathcal{M}) \sqsubseteq
\mathcal{N} \Leftrightarrow \mathcal{M} \sqsubseteq
\gamma_{\mathcal{R}}(\mathcal{N})$. Action codes may be used to construct
adaptors that translate between concrete and abstract actions during learning
and testing of Mealy machines. If Mealy machine $\mathcal{M}$ models a
black-box system then $\alpha_{\mathcal{R}}(\mathcal{M})$ describes the
behavior that can be observed by a learner/tester that interacts with this
system via an adaptor derived from code ${\mathcal{R}}$. Whenever
$\alpha_{\mathcal{R}}(\mathcal{M})$ implements (or conforms to) $\mathcal{N}$,
we may conclude that $\mathcal{M}$ implements (or conforms to)
$\gamma_{{\mathcal{R}}} (\mathcal{N})$.
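A minimal sketch of the prefix-code intuition, under the assumption that an
action code maps each abstract action to a sequence of concrete actions:
because no code word is a prefix of another, an adaptor can contract a
concrete trace greedily and refine an abstract one by concatenation. The code
R and the traces are hypothetical, and the paper's treatment of states and
intermediate behavior is not modeled.
```python
# Prefix-code sketch (hypothetical code R, not taken from the paper): since
# no code word is a prefix of another, greedy matching decodes uniquely.
def contract(trace, code):
    """Decode a concrete action sequence into abstract actions."""
    decode = {tuple(concrete): abstract for abstract, concrete in code.items()}
    out, buf = [], ()
    for action in trace:
        buf += (action,)
        if buf in decode:        # a complete code word has been observed
            out.append(decode[buf])
            buf = ()
    if buf:
        raise ValueError(f"trace ends inside a code word: {buf}")
    return out

def refine(abstract_trace, code):
    """Expand abstract actions into their concrete action sequences."""
    return [a for abstract in abstract_trace for a in code[abstract]]

R = {"LOGIN": ["user", "password"], "QUIT": ["logout"]}
print(contract(["user", "password", "logout"], R))  # ['LOGIN', 'QUIT']
print(refine(["LOGIN", "QUIT"], R))                 # ['user', 'password', 'logout']
```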
|
[
{
"version": "v1",
"created": "Sat, 31 Dec 2022 13:43:15 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 11:15:18 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Vaandrager",
"Frits",
""
],
[
"Wißmann",
"Thorsten",
""
]
] |
new_dataset
| 0.999166 |
2301.05777
|
Asef Islam
|
Asef Islam, Anthony Ronco, Stephen M. Becker, Jeremiah Blackburn,
Johannes C. Schittny, Kyoungmi Kim, Rebecca Stein-Wexler, Anthony S. Wexler
|
Lung airway geometry as an early predictor of autism: A preliminary
machine learning-based study
| null | null | null | null |
cs.LG eess.IV q-bio.TO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The goal of this study is to assess the feasibility of airway geometry as a
biomarker for ASD. Chest CT images of children with a documented diagnosis of
ASD as well as healthy controls were identified retrospectively. 54 scans were
obtained for analysis, including 31 ASD cases and 23 age and sex-matched
controls. A feature selection and classification procedure using principal
component analysis (PCA) and support vector machine (SVM) achieved a peak cross
validation accuracy of nearly 89% using a feature set of 8 airway branching
angles. Sensitivity was 94%, but specificity was only 78%. The results suggest
a measurable difference in airway branchpoint angles between children with ASD
and the control population. Under review at Scientific Reports
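A hedged scikit-learn sketch of the kind of pipeline the abstract names (PCA
feeding an SVM, scored by cross-validation) follows; the synthetic features
merely stand in for the airway branching angles, so nothing here reproduces
the study's data, preprocessing, or exact settings.
```python
# Sketch with synthetic stand-in features (nothing here is the study's data
# or exact configuration): PCA for feature reduction feeding an RBF SVM,
# scored by k-fold cross-validation, mirroring the 31-case / 23-control split.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_cases, n_controls, n_angles = 31, 23, 8      # cohort sizes quoted above
X = np.vstack([rng.normal(95, 5, (n_cases, n_angles)),      # hypothetical
               rng.normal(90, 5, (n_controls, n_angles))])  # branching angles
y = np.array([1] * n_cases + [0] * n_controls)

model = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validation accuracy: {scores.mean():.2f}")
```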
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 22:21:58 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Feb 2023 02:20:49 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Feb 2023 20:58:12 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Islam",
"Asef",
""
],
[
"Ronco",
"Anthony",
""
],
[
"Becker",
"Stephen M.",
""
],
[
"Blackburn",
"Jeremiah",
""
],
[
"Schittny",
"Johannes C.",
""
],
[
"Kim",
"Kyoungmi",
""
],
[
"Stein-Wexler",
"Rebecca",
""
],
[
"Wexler",
"Anthony S.",
""
]
] |
new_dataset
| 0.992857 |
2301.10439
|
Truong Son Hy
|
Cong Dao Tran, Nhut Huy Pham, Anh Nguyen, Truong Son Hy, Tu Vu
|
ViDeBERTa: A powerful pre-trained language model for Vietnamese
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents ViDeBERTa, a new pre-trained monolingual language model
for Vietnamese, with three versions - ViDeBERTa_xsmall, ViDeBERTa_base, and
ViDeBERTa_large, which are pre-trained on a large-scale corpus of high-quality
and diverse Vietnamese texts using DeBERTa architecture. Although many
successful pre-trained language models based on Transformer have been widely
proposed for the English language, there are still few pre-trained models for
Vietnamese, a low-resource language, that achieve good results on downstream
tasks, especially question answering. We fine-tune and evaluate our model on
three important natural language downstream tasks, Part-of-speech tagging,
Named-entity recognition, and Question answering. The empirical results
demonstrate that ViDeBERTa with far fewer parameters surpasses the previous
state-of-the-art models on multiple Vietnamese-specific natural language
understanding tasks. Notably, ViDeBERTa_base with 86M parameters, which is only
about 23% of PhoBERT_large with 370M parameters, still performs as well as or
better than the previous state-of-the-art model. Our ViDeBERTa models
are available at: https://github.com/HySonLab/ViDeBERTa.
|
[
{
"version": "v1",
"created": "Wed, 25 Jan 2023 07:26:54 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 15:55:58 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Tran",
"Cong Dao",
""
],
[
"Pham",
"Nhut Huy",
""
],
[
"Nguyen",
"Anh",
""
],
[
"Hy",
"Truong Son",
""
],
[
"Vu",
"Tu",
""
]
] |
new_dataset
| 0.999443 |
2302.03292
|
Yifei Huang
|
Zecheng Yu, Yifei Huang, Ryosuke Furuta, Takuma Yagi, Yusuke Goutsu,
Yoichi Sato
|
Fine-grained Affordance Annotation for Egocentric Hand-Object
Interaction Videos
|
WACV 2023. Refined version of Workshop article arXiv:2206.05424
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Object affordance is an important concept in hand-object interaction,
providing information on action possibilities based on human motor capacity and
objects' physical properties, thus benefiting tasks such as action anticipation
and robot imitation learning. However, the definitions of affordance in
existing datasets often: 1) mix up affordance with object functionality; 2)
confuse affordance with goal-related actions; and 3) ignore human motor
capacity. This
paper proposes an efficient annotation scheme to address these issues by
combining goal-irrelevant motor actions and grasp types as affordance labels
and introducing the concept of mechanical action to represent the action
possibilities between two objects. We provide new annotations by applying this
scheme to the EPIC-KITCHENS dataset and test our annotation with tasks such as
affordance recognition, hand-object interaction hotspots prediction, and
cross-domain evaluation of affordance. The results show that models trained
with our annotation can distinguish affordance from other concepts, predict
fine-grained interaction possibilities on objects, and generalize across
different domains.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 07:05:00 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 03:03:27 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Yu",
"Zecheng",
""
],
[
"Huang",
"Yifei",
""
],
[
"Furuta",
"Ryosuke",
""
],
[
"Yagi",
"Takuma",
""
],
[
"Goutsu",
"Yusuke",
""
],
[
"Sato",
"Yoichi",
""
]
] |
new_dataset
| 0.984222 |
2302.04455
|
Nikolay Filippov
|
Anna Bykova, Nikolay Filippov, Ivan P. Yamshchikov
|
Rehabilitating Homeless: Dataset and Key Insights
|
Dataset, code and appendix to this article are available at
https://github.com/LEYADEV/homeless
| null | null | null |
cs.LG cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a large anonymized dataset of homelessness alongside
insights into the data-driven rehabilitation of homeless people. The dataset
was gathered by a large nonprofit organization working on rehabilitating the
homeless for twenty years. This is the first dataset that we know of that
contains rich information on thousands of homeless individuals seeking
rehabilitation. We show how data analysis can help to make the rehabilitation
of homeless people more effective and successful. Thus, we hope this paper
alerts the data science community to the problem of homelessness.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 06:21:27 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 15:21:56 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Bykova",
"Anna",
""
],
[
"Filippov",
"Nikolay",
""
],
[
"Yamshchikov",
"Ivan P.",
""
]
] |
new_dataset
| 0.998673 |
2302.04640
|
Jeffrey Shallit
|
Jeffrey Shallit
|
Prefixes of the Fibonacci word
| null | null | null | null |
cs.FL cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
Mignosi, Restivo, and Salemi (1998) proved that for all $\epsilon > 0$ there
exists an integer $N$ such that all prefixes of the Fibonacci word of length
$\geq N$ contain a suffix of exponent $\alpha^2-\epsilon$, where $\alpha =
(1+\sqrt{5})/2$ is the golden ratio. In this note we show how to prove an
explicit version of this theorem with tools from automata theory and logic.
Along the way we gain a better understanding of the repetitive structure of the
Fibonacci word.
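A naive brute-force exploration of this property, assuming the standard
Fibonacci word obtained from S1 = 0, S2 = 01, Sn = Sn-1 Sn-2: for each prefix,
compute the largest exponent length/period over all of its suffixes and
observe that it settles near or above alpha^2 ~ 2.618 for long prefixes. This
is an empirical check, not the automata-theoretic proof the note describes.
```python
# Brute-force check (assumed setup): generate the standard Fibonacci word,
# then for each prefix find the largest exponent len/period over all of its
# suffixes. The theorem above says this value is eventually >= alpha^2 - eps,
# with alpha^2 = (3 + sqrt(5)) / 2 ~ 2.618.
def fibonacci_word(n_iters=15):
    a, b = "0", "01"
    for _ in range(n_iters):
        a, b = b, b + a          # S(n) = S(n-1) S(n-2)
    return b

def max_suffix_exponent(w):
    """Maximum of len(s)/p over suffixes s of w having period p."""
    best, n = 0.0, len(w)
    for p in range(1, n):
        i = n - p - 1            # grow the periodic suffix backwards
        while i >= 0 and w[i] == w[i + p]:
            i -= 1
        length = n - 1 - i       # longest suffix with period p
        if length >= p:          # keep only genuine repetitions
            best = max(best, length / p)
    return best

w = fibonacci_word()
for k in (10, 50, 100, 500, 1000):
    print(k, round(max_suffix_exponent(w[:k]), 4))
```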
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 13:52:47 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 12:20:13 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Shallit",
"Jeffrey",
""
]
] |
new_dataset
| 0.994862 |
2302.04899
|
Dmitry Kazhdan
|
Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro
Barbiero, Mateja Jamnik, Pietro Lio
|
GCI: A (G)raph (C)oncept (I)nterpretation Framework
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Explainable AI (XAI) underwent a recent surge in research on concept
extraction, focusing on extracting human-interpretable concepts from Deep
Neural Networks. An important challenge facing concept extraction approaches is
the difficulty of interpreting and evaluating discovered concepts, especially
for complex tasks such as molecular property prediction. We address this
challenge by presenting GCI: a (G)raph (C)oncept (I)nterpretation framework,
used for quantitatively measuring alignment between concepts discovered from
Graph Neural Networks (GNNs) and their corresponding human interpretations. GCI
encodes concept interpretations as functions, which can be used to
quantitatively measure the alignment between a given interpretation and concept
definition. We demonstrate four applications of GCI: (i) quantitatively
evaluating concept extractors, (ii) measuring alignment between concept
extractors and human interpretations, (iii) measuring the completeness of
interpretations with respect to an end task and (iv) a practical application of
GCI to molecular property prediction, in which we demonstrate how to use
chemical functional groups to explain GNNs trained on molecular property
prediction tasks, and implement interpretations with a 0.76 AUCROC completeness
score.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 19:02:45 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Kazhdan",
"Dmitry",
""
],
[
"Dimanov",
"Botty",
""
],
[
"Magister",
"Lucie Charlotte",
""
],
[
"Barbiero",
"Pietro",
""
],
[
"Jamnik",
"Mateja",
""
],
[
"Lio",
"Pietro",
""
]
] |
new_dataset
| 0.975872 |
2302.04926
|
Linhan Li
|
Linhan Li, ThanhVu Nguyen
|
COOLIO: A Language Support Extension for the Classroom Object Oriented
Language
|
4 pages, 4 figures. This extension is available from
https://marketplace.visualstudio.com/items?itemName=Linhan.cool-language-support
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
COOL is an Object-Oriented programming language used to teach compiler design
in many undergraduate and graduate courses. Because most students are
unfamiliar with the language, and code editors and IDEs often lack support for
COOL, writing code and test programs in COOL is a burden for students, causing
them not to fully understand many important and advanced features of the
language and compiler. In this tool paper, we describe COOLIO, an extension to
support COOL in the popular VSCode IDE. COOLIO provides (i) syntax
highlighting support for the COOL language through lexing and parsing, (ii)
semantics-aware autocompletion features that help students write less code and
reduce the burden of having to remember unfamiliar COOL grammar and syntax,
and (iii) relevant feedback from the underlying COOL interpreter/compiler
(e.g., error messages, typing information) to the students through the VSCode
editor to aid debugging. We believe that COOLIO will help students enjoy
writing COOL
programs and consequently learn and appreciate more advanced compiler concepts.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 20:43:41 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Li",
"Linhan",
""
],
[
"Nguyen",
"ThanhVu",
""
]
] |
new_dataset
| 0.999763 |
2302.04936
|
Raymond Leung
|
Lloyd Windrim, Arman Melkumyan, Richard J. Murphy, Anna Chlingaryan,
Raymond Leung
|
Unsupervised ore/waste classification on open-cut mine faces using
close-range hyperspectral data
|
Manuscript has been accepted for publication in Geoscience Frontiers.
Keywords: Hyperspectral imaging, remote sensing, mineral mapping, machine
learning, convolutional neural networks, transfer learning, data
augmentation, illumination invariance
|
Geoscience Frontiers 14 (2023) 101562
|
10.1016/j.gsf.2023.101562
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The remote mapping of minerals and discrimination of ore and waste on
surfaces are important tasks for geological applications such as those in
mining. Such tasks have become possible using ground-based, close-range
hyperspectral sensors which can remotely measure the reflectance properties of
the environment with high spatial and spectral resolution. However, autonomous
mapping of mineral spectra measured on an open-cut mine face remains a
challenging problem due to the subtleness of differences in spectral absorption
features between mineral and rock classes as well as variability in the
illumination of the scene. An additional layer of difficulty arises when there
is no annotated data available to train a supervised learning algorithm. A
pipeline for unsupervised mapping of spectra on a mine face is proposed which
draws from several recent advances in the hyperspectral machine learning
literature. The proposed pipeline brings together unsupervised and
self-supervised algorithms in a unified system to map minerals on a mine face
without the need for human-annotated training data. The pipeline is evaluated
with a hyperspectral image dataset of an open-cut mine face comprising mineral
ore martite and non-mineralised shale. The combined system is shown to produce
a superior map to its constituent algorithms, and the consistency of its
mapping capability is demonstrated using data acquired at two different times
of day.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 21:03:03 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Windrim",
"Lloyd",
""
],
[
"Melkumyan",
"Arman",
""
],
[
"Murphy",
"Richard J.",
""
],
[
"Chlingaryan",
"Anna",
""
],
[
"Leung",
"Raymond",
""
]
] |
new_dataset
| 0.996485 |
2302.04965
|
Lining Yao
|
Qiuyu Lu, Lydia Yang, Aditi Maheshwari, Hengrong Ni, Tianyu Yu,
Jianzhe Gu, Advait Wadhwani, Andreea Danielescu, Lining Yao
|
Guttation Monitor: Wearable Guttation Sensor for Plant Condition
Monitoring and Diagnosis
|
15 pages, 13 figures
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Plant life plays a critical role in the ecosystem. However, it is difficult
for humans to perceive plants' reactions because the biopotential and
biochemical responses are invisible to humans. Guttation droplets contain
various chemicals which can reflect plant physiology and environmental
conditions in real-time. Traditionally, these droplets are collected manually
and analyzed in the lab with expensive instruments. Here, we introduce the
Guttation Monitor, an on-site and low-cost monitoring technology for guttation
droplets. It consists of three parts: 1) a paper-based microfluidic chip that
can collect guttation droplets and perform colorimetric detection of six
chemicals, 2) a self-contained and solar-powered camera module that can capture
the result from the chip, and 3) an end-user app that can interpret the result.
We discuss this technology's design and implementation, conduct evaluations on
tomato plants, conduct interviews, and envision how such a technology could
enhance the human-plant relationship in four dimensions.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 22:50:21 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Lu",
"Qiuyu",
""
],
[
"Yang",
"Lydia",
""
],
[
"Maheshwari",
"Aditi",
""
],
[
"Ni",
"Hengrong",
""
],
[
"Yu",
"Tianyu",
""
],
[
"Gu",
"Jianzhe",
""
],
[
"Wadhwani",
"Advait",
""
],
[
"Danielescu",
"Andreea",
""
],
[
"Yao",
"Lining",
""
]
] |
new_dataset
| 0.99863 |
2302.05001
|
Mao Yang
|
Mao Yang, Zhongjiang Yan, Bo Li, Qingkun Li, Chenkai Liang,
Narengerile, Tony Xiao Han
|
Sensing Assisted Communication for the Next Generation mmWave WLAN:
System Simulation Perspective
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the proliferation of wireless demands, the wireless local area network
(WLAN) has become one of the most important wireless networks. Network
intelligence is promising for the next generation of wireless networks and has
attracted much attention. Sensing is one efficient enabler for achieving
network intelligence, since sensing can obtain diverse and valuable
non-communication information. Thus, integrating sensing and communications
(ISAC) is a promising technology for future wireless networks. Sensing
assisted communication (SAC) is an important branch of ISAC, but few related
works focus on a systematic and comprehensive analysis of SAC in WLANs. This
article is the first work to systematically analyze SAC in the next-generation
WLAN from the system simulation perspective. We analyze the scenarios and
advantages of SAC. Then, from the system simulation perspective, several
sources of performance gain brought by SAC are identified, i.e., beam
link failure, protocol overhead, and intra-physical layer protocol data unit
(intra-PPDU) performance decrease, while several important influencing factors
are described in detail. Performance evaluation is deeply analyzed and the
performance gain of the SAC in both living room and street canyon scenarios are
verified by system simulation. Finally, we provide our insights on the future
directions of SAC for the next generation WLAN.
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 01:04:25 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Yang",
"Mao",
""
],
[
"Yan",
"Zhongjiang",
""
],
[
"Li",
"Bo",
""
],
[
"Li",
"Qingkun",
""
],
[
"Liang",
"Chenkai",
""
],
[
"Narengerile",
"",
""
],
[
"Han",
"Tony Xiao",
""
]
] |
new_dataset
| 0.998118 |
2302.05002
|
Elias Neuman-Donihue
|
Elias Neuman-Donihue, Michael Jarvis, Yuhao Zhu
|
FastPoints: A State-of-the-Art Point Cloud Renderer for Unity
|
8 pages, 7 figures
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we introduce FastPoints, a state-of-the-art point cloud
renderer for the Unity game development platform. Our program supports standard
unprocessed point cloud formats with non-programmatic, drag-and-drop support,
and creates an out-of-core data structure for large clouds without requiring an
explicit preprocessing step; instead, the software renders a decimated point
cloud immediately and constructs a shallow octree online, during which time the
Unity editor remains fully interactive.
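As a toy illustration of the immediate-preview idea, the sketch below uniformly
subsamples a cloud for instant display while the octree builds in the
background; the function name, target size, and the use of Python/NumPy
(FastPoints itself targets Unity) are all assumptions, not the tool's actual
implementation:

```python
# Hedged sketch: uniform random decimation for an instant point cloud preview.
import numpy as np

def decimate(points: np.ndarray, target: int = 1_000_000) -> np.ndarray:
    """Return at most `target` points chosen uniformly at random, so a
    preview can be drawn immediately while an octree is built online."""
    if len(points) <= target:
        return points
    idx = np.random.choice(len(points), size=target, replace=False)
    return points[idx]
```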
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 01:16:16 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Neuman-Donihue",
"Elias",
""
],
[
"Jarvis",
"Michael",
""
],
[
"Zhu",
"Yuhao",
""
]
] |
new_dataset
| 0.997009 |
2302.05061
|
Zhen Wang
|
Zhen Wang, Peide Zhu, Jie Yang
|
ControversialQA: Exploring Controversy in Question Answering
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Controversy is widespread online. Previous studies mainly define controversy
based on vague assumptions about its relation to sentiment, such as hate speech
and offensive words. This paper introduces the first question-answering dataset
that defines content controversy by user perception, i.e., votes from a large
number of users. It contains nearly 10K questions, and each question has a best answer
and a most controversial answer. Experimental results reveal that controversy
detection in question answering is essential and challenging, and there is no
strong correlation between controversy and sentiment tasks.
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 05:39:29 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Wang",
"Zhen",
""
],
[
"Zhu",
"Peide",
""
],
[
"Yang",
"Jie",
""
]
] |
new_dataset
| 0.997175 |
2302.05097
|
Ben Chen
|
Ben Chen, Caihua Xiong, Qi Zhang
|
CCDN: Checkerboard Corner Detection Network for Robust Camera
Calibration
|
ICIRA 2018 oral. 11 pages, 4 figures, 2 tables
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
To improve the robustness of checkerboard corner detection for images of poor
quality, such as those with lens distortion, extreme poses, and noise, we
propose a novel detection algorithm that maintains high accuracy across
multiple scenarios without any prior knowledge of the checkerboard pattern.
The overall algorithm comprises a checkerboard corner detection network and
several post-processing techniques. The network model is a fully convolutional
network with improvements to the loss function and learning rate; it can handle
images of arbitrary size and produces a correspondingly sized output with a
corner score for each pixel via efficient inference and learning. In addition,
to remove false positives, we employ three post-processing techniques:
thresholding relative to the maximum response, non-maximum suppression, and
clustering. Evaluations on two different datasets show superior robustness,
accuracy, and wide applicability in quantitative comparisons with
state-of-the-art methods such as MATE, ChESS, ROCHADE, and OCamCalib.
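An illustrative sketch of the three post-processing steps named above; all
parameter values and the SciPy-based clustering are assumptions, not the
paper's exact procedure:

```python
# Hedged sketch: threshold relative to the maximum response, non-maximum
# suppression, and clustering applied to a per-pixel corner score map.
import numpy as np
from scipy.ndimage import maximum_filter, label, center_of_mass

def extract_corners(score_map: np.ndarray, rel_thresh: float = 0.3,
                    nms_size: int = 5) -> np.ndarray:
    # 1) keep only responses above a fraction of the maximum response
    mask = score_map >= rel_thresh * score_map.max()
    # 2) non-maximum suppression within a local window
    local_max = score_map == maximum_filter(score_map, size=nms_size)
    peaks = mask & local_max
    # 3) cluster adjacent surviving pixels and take each cluster's centroid
    labels, n = label(peaks)
    return np.array(center_of_mass(peaks, labels, range(1, n + 1)))
```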
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 07:47:44 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Chen",
"Ben",
""
],
[
"Xiong",
"Caihua",
""
],
[
"Zhang",
"Qi",
""
]
] |
new_dataset
| 0.997233 |
2302.05179
|
Andrea Brunello
|
Andrea Bernardini, Andrea Brunello, Gian Luigi Gigli, Angelo
Montanari, Nicola Saccomanno
|
AIOSA: An approach to the automatic identification of obstructive sleep
apnea events based on deep learning
|
Final article published on Artificial Intelligence in Medicine
Journal
|
Artificial Intelligence in Medicine, Volume 118, 2021
|
10.1016/j.artmed.2021.102133
| null |
cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Obstructive Sleep Apnea Syndrome (OSAS) is the most common sleep-related
breathing disorder. It is caused by an increased upper airway resistance during
sleep, which determines episodes of partial or complete interruption of
airflow. The detection and treatment of OSAS is particularly important in
stroke patients, because the presence of severe OSAS is associated with higher
mortality, worse neurological deficits, worse functional outcome after
rehabilitation, and a higher likelihood of uncontrolled hypertension. The gold
standard test for diagnosing OSAS is polysomnography (PSG). Unfortunately,
performing a PSG in an electrically hostile environment, like a stroke unit, on
neurologically impaired patients is a difficult task; moreover, the number of
strokes per day exceeds the availability of polysomnographs and dedicated
healthcare professionals. Thus, a simple and automated recognition system to
identify OSAS among acute stroke patients, relying on routinely recorded vital
signs, is desirable. The majority of the work done so far focuses on data
recorded in ideal conditions and highly selected patients, and thus it is
hardly exploitable in real-life settings, where it would be of actual use. In
this paper, we propose a convolutional deep learning architecture able to
reduce the temporal resolution of raw waveform data, like physiological
signals, extracting key features that can be used for further processing. We
exploit models based on such an architecture to detect OSAS events in stroke
unit recordings obtained from the monitoring of unselected patients. Unlike
existing approaches, annotations are performed at one-second granularity,
allowing physicians to better interpret the model outcome. Results are
considered to be satisfactory by the domain experts. Moreover, based on a
widely-used benchmark, we show that the proposed approach outperforms current
state-of-the-art solutions.
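A hedged sketch of the core architectural idea: strided 1D convolutions that
progressively reduce the temporal resolution of raw waveform input before a
per-step classification head. All layer sizes and channel counts are
illustrative assumptions, not the paper's architecture:

```python
# Hedged sketch: a temporal-resolution-reducing encoder for raw vital signs.
import torch.nn as nn

class WaveformEncoder(nn.Module):
    def __init__(self, in_channels: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # each strided conv shrinks the temporal axis by its stride
            nn.Conv1d(in_channels, hidden, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, stride=4, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Conv1d(hidden, 2, kernel_size=1)  # apnea / no apnea

    def forward(self, x):  # x: (batch, channels, samples)
        return self.head(self.net(x))  # (batch, 2, reduced_time_steps)
```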
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 11:21:47 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Bernardini",
"Andrea",
""
],
[
"Brunello",
"Andrea",
""
],
[
"Gigli",
"Gian Luigi",
""
],
[
"Montanari",
"Angelo",
""
],
[
"Saccomanno",
"Nicola",
""
]
] |
new_dataset
| 0.996297 |
2302.05201
|
Cheng Wen
|
Cheng Wen, Jianzhi Long, Baosheng Yu, Dacheng Tao
|
PointWavelet: Learning in Spectral Domain for 3D Point Cloud Analysis
|
10 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With recent success of deep learning in 2D visual recognition, deep
learning-based 3D point cloud analysis has received increasing attention from
the community, especially due to the rapid development of autonomous driving
technologies. However, most existing methods directly learn point features in
the spatial domain, leaving the local structures in the spectral domain poorly
investigated. In this paper, we introduce a new method, PointWavelet, to
explore local graphs in the spectral domain via a learnable graph wavelet
transform. Specifically, we first introduce the graph wavelet transform to form
multi-scale spectral graph convolution to learn effective local structural
representations. To avoid the time-consuming spectral decomposition, we then
devise a learnable graph wavelet transform, which significantly accelerates the
overall training process. Extensive experiments on four popular point cloud
datasets, ModelNet40, ScanObjectNN, ShapeNet-Part, and S3DIS, demonstrate the
effectiveness of the proposed method on point cloud classification and
segmentation.
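A hedged sketch of the eigendecomposition-free idea: a spectral graph
convolution whose wavelet-like filter is a learnable polynomial in the scaled
graph Laplacian, so only matrix-vector products are needed. Class and
parameter names are illustrative assumptions, not the paper's exact
PointWavelet layer:

```python
# Hedged sketch: Chebyshev-polynomial approximation of a learnable spectral
# filter, avoiding explicit eigendecomposition of the graph Laplacian.
import torch
import torch.nn as nn

class ChebWaveletConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, order: int = 3):
        super().__init__()
        self.order = order
        # one learnable weight matrix per polynomial order ~ learnable filter
        self.weights = nn.Parameter(torch.randn(order + 1, in_dim, out_dim) * 0.01)

    def forward(self, x: torch.Tensor, lap: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) point features; lap: (N, N) scaled graph Laplacian
        tk_prev, tk = x, lap @ x          # T0 = x, T1 = L x
        out = tk_prev @ self.weights[0] + tk @ self.weights[1]
        for k in range(2, self.order + 1):  # Chebyshev recurrence
            tk_next = 2 * (lap @ tk) - tk_prev
            out = out + tk_next @ self.weights[k]
            tk_prev, tk = tk, tk_next
        return out
```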
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 12:07:26 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Wen",
"Cheng",
""
],
[
"Long",
"Jianzhi",
""
],
[
"Yu",
"Baosheng",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.999556 |
2302.05211
|
Alberto Pepe
|
Alberto Pepe, Joan Lasenby
|
CGA-PoseNet: Camera Pose Regression via a 1D-Up Approach to Conformal
Geometric Algebra
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce CGA-PoseNet, which uses the 1D-Up approach to Conformal
Geometric Algebra (CGA) to represent rotations and translations with a single
mathematical object, the motor, for camera pose regression. We do so starting
from PoseNet, which successfully predicts camera poses from small datasets of
RGB frames. State-of-the-art methods, however, require expensive tuning to
balance the orientational and translational components of the camera pose. This
is usually done through complex, ad-hoc loss functions to be minimized, and in
some cases also requires 3D points as well as images. Our approach has the
advantage of unifying the camera position and orientation through the motor.
Consequently, the network searches for a single object which lives in a
well-behaved 4D space with a Euclidean signature. This means that we can
address the case of image-only datasets and work efficiently with a simple loss
function, namely the mean squared error (MSE) between the predicted and ground
truth motors. We show that it is possible to achieve high-accuracy camera pose
regression with a significantly simpler problem formulation. This 1D-Up
approach to CGA can be employed to overcome the dichotomy between translational
and orientational components in camera pose regression in a compact and elegant
way.
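A minimal sketch of the loss formulation described above, assuming the
predicted motor is flattened into a fixed-length coefficient vector (the exact
basis and dimensionality depend on the paper's 1D-Up CGA construction):

```python
# Hedged sketch: a single MSE term on motor coefficients replaces the usual
# weighted sum of separate translation and rotation losses.
import torch
import torch.nn.functional as F

def motor_loss(pred_motor: torch.Tensor, gt_motor: torch.Tensor) -> torch.Tensor:
    # pred_motor, gt_motor: (batch, motor_dim) flattened motor coefficients
    return F.mse_loss(pred_motor, gt_motor)
```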
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 12:27:48 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Pepe",
"Alberto",
""
],
[
"Lasenby",
"Joan",
""
]
] |
new_dataset
| 0.95393 |
2302.05311
|
Douglas Stebila
|
Carlos Aguilar-Melchor and Thomas Bailleux and Jason Goertzen and
David Joseph and Douglas Stebila
|
TurboTLS: TLS connection establishment with 1 less round trip
| null | null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
We show how to establish TLS connections using one less round trip. In our
approach, which we call TurboTLS, the initial client-to-server and
server-to-client flows of the TLS handshake are sent over UDP rather than TCP.
At the same time, in the same flights, the three-way TCP handshake is carried
out. Once the TCP connection is established, the client and server can complete
the final flight of the TLS handshake over the TCP connection and continue
using it for application data. No changes are made to the contents of the TLS
handshake protocol, only its delivery mechanism. We avoid problems with UDP
fragmentation by using request-based fragmentation, in which the client sends
in advance enough UDP requests to provide sufficient room for the server to fit
its response with one response packet per request packet. Clients can detect
which servers support this without an additional round trip, if the server
advertises its support in a DNS HTTPS resource record. Experiments using our
software implementation show substantial latency improvements. On reliable
connections, we effectively eliminate a round trip without any noticeable cost.
To ensure adequate performance on unreliable connections, we use lightweight
packet ordering and buffering; we can have a client wait a very small time to
receive a potentially lost packet (e.g., a fraction of the RTT observed for the
first fragment) before falling back to TCP without any further delay, since the
TCP connection was already in the process of being established. This approach
offers substantial performance improvements with low complexity, even in
heterogeneous network environments with poorly configured middleboxes.
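A minimal sketch of the request-based fragmentation idea, assuming a
hypothetical one-byte framing and two-byte sequence numbers; the real TurboTLS
wire format, and its interleaving with the concurrent TCP handshake, are not
reproduced here:

```python
# Hedged sketch: client sends enough UDP "slot" requests that the server can
# return one response fragment per request packet, then reassembles in order.
import socket

SERVER = ("198.51.100.7", 443)   # hypothetical server address
NUM_REQUESTS = 8                 # client's guess at the response's size in packets
MTU_PAYLOAD = 1200               # conservative per-datagram payload size

def send_fragmented_hello(client_hello: bytes) -> list[bytes]:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.05)  # small wait before falling back to the TCP path
    sock.sendto(b"\x01" + client_hello, SERVER)              # real request
    for i in range(1, NUM_REQUESTS):
        sock.sendto(b"\x00" + i.to_bytes(2, "big"), SERVER)  # empty slot
    fragments = {}
    try:
        while len(fragments) < NUM_REQUESTS:
            data, _ = sock.recvfrom(MTU_PAYLOAD + 64)
            seq = int.from_bytes(data[:2], "big")  # lightweight ordering
            fragments[seq] = data[2:]
    except socket.timeout:
        pass  # missing fragments: fall back to the already-opening TCP connection
    return [fragments[k] for k in sorted(fragments)]
```

The short timeout mirrors the fallback behaviour described above: because the
TCP connection is being established in parallel, a lost UDP fragment costs only
a fraction of an RTT before the handshake continues over TCP.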
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 15:16:16 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Aguilar-Melchor",
"Carlos",
""
],
[
"Bailleux",
"Thomas",
""
],
[
"Goertzen",
"Jason",
""
],
[
"Joseph",
"David",
""
],
[
"Stebila",
"Douglas",
""
]
] |
new_dataset
| 0.998209 |
2302.05330
|
Nitin Kamra
|
Weichao Mao, Ruta Desai, Michael Louis Iuzzolino, Nitin Kamra
|
Action Dynamics Task Graphs for Learning Plannable Representations of
Procedural Tasks
|
AAAI 2023 Workshop on User-Centric Artificial Intelligence for
Assistance in At-Home Tasks
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Given video demonstrations and paired narrations of an at-home procedural
task such as changing a tire, we present an approach to extract the underlying
task structure -- relevant actions and their temporal dependencies -- via
action-centric task graphs. Learnt structured representations from our method,
Action Dynamics Task Graphs (ADTG), can then be used for understanding such
tasks in unseen videos of humans performing them. Furthermore, ADTG can enable
providing user-centric guidance to humans in these tasks, either for performing
them better or for learning new tasks. Specifically, we show how ADTG can be
used for: (1) tracking an ongoing task, (2) recommending next actions, and (3)
planning a sequence of actions to accomplish a procedural task. We compare
against the state-of-the-art Neural Task Graph method and demonstrate substantial
gains on 18 procedural tasks from the CrossTask dataset, including 30.1%
improvement in task tracking accuracy and 20.3% accuracy gain in next action
prediction.
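To make the recommendation use concrete, here is a hedged toy sketch of
next-action recommendation from an action-centric task graph; the
prerequisite-set representation and the tire-changing example are assumptions,
not the paper's exact ADTG encoding:

```python
# Hedged sketch: propose actions whose temporal dependencies are satisfied.
def recommend_next(task_graph: dict[str, set[str]], done: set[str]) -> list[str]:
    """task_graph maps each action to the set of its prerequisite actions."""
    return [a for a, prereqs in task_graph.items()
            if a not in done and prereqs <= done]

# usage: changing a tire (toy example)
graph = {
    "loosen lug nuts": set(),
    "jack up car": {"loosen lug nuts"},
    "remove wheel": {"jack up car"},
    "mount spare": {"remove wheel"},
}
print(recommend_next(graph, {"loosen lug nuts"}))  # -> ['jack up car']
```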
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 21:44:37 GMT"
}
] | 2023-02-13T00:00:00 |
[
[
"Mao",
"Weichao",
""
],
[
"Desai",
"Ruta",
""
],
[
"Iuzzolino",
"Michael Louis",
""
],
[
"Kamra",
"Nitin",
""
]
] |
new_dataset
| 0.983601 |