id (stringlengths 9–10) | submitter (stringlengths 2–52, nullable) | authors (stringlengths 4–6.51k) | title (stringlengths 4–246) | comments (stringlengths 1–523, nullable) | journal-ref (stringlengths 4–345, nullable) | doi (stringlengths 11–120, nullable) | report-no (stringlengths 2–243, nullable) | categories (stringlengths 5–98) | license (stringclasses, 9 values) | abstract (stringlengths 33–3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (stringclasses, 1 value) | probability (float64, 0.95–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.06316
|
Kotaro Funakoshi
|
Kotaro Funakoshi
|
Non-Axiomatic Term Logic: A Computational Theory of Cognitive Symbolic
Reasoning
| null | null |
10.1527/tjsai.37-6_C-M11
| null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents Non-Axiomatic Term Logic (NATL) as a theoretical
computational framework of humanlike symbolic reasoning in artificial
intelligence. NATL unites a discrete syntactic system inspired by Aristotle's
term logic and a continuous semantic system based on the modern idea of
distributed representations, or embeddings. This paper positions the proposed
approach in the phylogeny and literature of logic, and explains the
framework. As NATL is still only a theory and requires much further
elaboration before it can be implemented, no quantitative evaluation is
presented. Instead, we discuss qualitative analyses of arguments using NATL,
some applications to possible cognitive science/robotics-related research, and
remaining issues toward a machine implementation.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 15:31:35 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Funakoshi",
"Kotaro",
""
]
] |
new_dataset
| 0.988089 |
2210.16107
|
Xiaomin Lin
|
Xiaomin Lin, Cheng Liu, Allen Pattillo, Miao Yu, Yiannis Aloimonous
|
SeaDroneSim: Simulation of Aerial Images for Detection of Objects Above
Water
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Unmanned Aerial Vehicles (UAVs) are known for their fast and versatile
applicability. As UAVs grow in availability and applications, they have become
vital in providing technological support for search-and-rescue (SAR)
operations in marine environments. UAVs can be equipped with high-resolution
cameras and GPUs to provide effective and efficient aid to emergency rescue
operations. With modern computer vision algorithms, we can detect objects to
aid such rescue missions. However, these modern computer vision algorithms are
dependent on large amounts of training data from UAVs, which are
time-consuming and labor-intensive to collect in maritime environments. To
this end, we present a new benchmark suite, SeaDroneSim, that can be used to
create photo-realistic aerial image datasets with ground-truth segmentation
masks for any given object. Utilizing only the synthetic data generated by
SeaDroneSim, we obtain 71 mAP on real aerial images for detecting a BlueROV as
a feasibility study. This result from the new simulation suite also serves as
a baseline for the detection of the BlueROV.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 21:50:50 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 17:37:55 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Nov 2022 22:54:06 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Lin",
"Xiaomin",
""
],
[
"Liu",
"Cheng",
""
],
[
"Pattillo",
"Allen",
""
],
[
"Yu",
"Miao",
""
],
[
"Aloimonous",
"Yiannis",
""
]
] |
new_dataset
| 0.999504 |
2211.00539
|
Eric Hambro
|
Eric Hambro, Roberta Raileanu, Danielle Rothermel, Vegard Mella, Tim
Rockt\"aschel, Heinrich K\"uttler, Naila Murray
|
Dungeons and Data: A Large-Scale NetHack Dataset
|
9 pages, to be published in the Proceedings of the 36th Conference on
Neural Information Processing Systems (NeurIPS 2022) Track on Datasets and
Benchmarks
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent breakthroughs in the development of agents to solve challenging
sequential decision making problems such as Go, StarCraft, or DOTA, have relied
on both simulated environments and large-scale datasets. However, progress on
this research has been hindered by the scarcity of open-sourced datasets and
the prohibitive computational cost to work with them. Here we present the
NetHack Learning Dataset (NLD), a large and highly-scalable dataset of
trajectories from the popular game of NetHack, which is both extremely
challenging for current methods and very fast to run. NLD consists of three
parts: 10 billion state transitions from 1.5 million human trajectories
collected on the NAO public NetHack server from 2009 to 2020; 3 billion
state-action-score transitions from 100,000 trajectories collected from the
symbolic-bot winner of the NetHack Challenge 2021; and accompanying code for
users to record, load, and stream any collection of such trajectories in a
highly compressed form. We evaluate a wide range of existing algorithms
including online and offline RL, as well as learning from demonstrations,
showing that significant research advances are needed to fully leverage
large-scale datasets for challenging sequential decision making tasks.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 15:43:29 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 15:46:42 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Hambro",
"Eric",
""
],
[
"Raileanu",
"Roberta",
""
],
[
"Rothermel",
"Danielle",
""
],
[
"Mella",
"Vegard",
""
],
[
"Rocktäschel",
"Tim",
""
],
[
"Küttler",
"Heinrich",
""
],
[
"Murray",
"Naila",
""
]
] |
new_dataset
| 0.999342 |
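The NetHack Learning Dataset record above ships accompanying code for
recording, loading, and streaming compressed trajectories. As a rough
illustration of how such a corpus is typically consumed for offline RL, here
is a minimal sketch; `TrajectoryStore`, its file layout, and the array names
are hypothetical stand-ins, not the actual NLD API.

```python
import glob
import numpy as np

class TrajectoryStore:
    """Hypothetical reader for compressed (state, action, score) trajectories."""

    def __init__(self, pattern):
        self.files = sorted(glob.glob(pattern))  # one .npz archive per trajectory

    def stream(self, batch_size=256):
        """Yield minibatches of transitions across all stored trajectories."""
        for path in self.files:
            data = np.load(path)  # assumed arrays: 'states', 'actions', 'scores'
            n = len(data["actions"])
            for i in range(0, n, batch_size):
                yield (data["states"][i:i + batch_size],
                       data["actions"][i:i + batch_size],
                       data["scores"][i:i + batch_size])

# Usage: behavioral cloning would fit a policy on (states, actions) batches.
for states, actions, scores in TrajectoryStore("nld/*.npz").stream():
    pass  # update the policy here
```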
2211.05257
|
Tetsuyou Watanabe
|
Toshihiro Nishimura, Tsubasa Muryoe, Yoshitatsu Asama, Hiroki Ikeuchi,
Ryo Toshima, and Tetsuyou Watanabe
|
Single-Fingered Reconfigurable Robotic Gripper With a Folding Mechanism
for Narrow Working Spaces
|
This study was presented at IROS 2022
|
IEEE Robotics and Automation Letters, Vol.7, No.4 (2022)
10192-10199
|
10.1109/LRA.2022.3192653
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This letter proposes a novel single-fingered reconfigurable robotic gripper
for grasping objects in narrow working spaces. The finger of the developed
gripper realizes two configurations, namely, the insertion and grasping modes,
using only a single motor. In the insertion mode, the finger assumes a thin
shape such that it can insert its tip into a narrow space. The grasping mode of
the finger is activated through a folding mechanism. Mode switching can be
achieved in two ways: switching the mode actively by a motor, or combining
passive rotation of the fingertip through contact with the support surface and
active motorized construction of the claw. The latter approach is effective
when it is unclear how much finger insertion is required for a specific task.
The structure provides a simple control scheme. The performance of the proposed
robotic gripper design and control methodology was experimentally evaluated.
The minimum width of the insertion space required to grasp an object is 4 mm
(1 mm when using a strategy).
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 23:27:28 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 02:11:48 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Nishimura",
"Toshihiro",
""
],
[
"Muryoe",
"Tsubasa",
""
],
[
"Asama",
"Yoshitatsu",
""
],
[
"Ikeuchi",
"Hiroki",
""
],
[
"Toshima",
"Ryo",
""
],
[
"Watanabe",
"Tetsuyou",
""
]
] |
new_dataset
| 0.999171 |
2211.11070
|
Hongrui Jin
|
Hongrui Jin
|
Who Tracks Who? A Surveillance Capitalist Examination of Commercial
Bluetooth Tracking Networks
|
14 pages
| null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Object- and person-tracking networks powered by Bluetooth and mobile devices
have become increasingly popular for public-safety purposes and individual
concerns. This essay examines popular commercial tracking networks and their
campaigns from Apple, Samsung, and Tile with reference to surveillance
capitalism and digital privacy, uncovering the hidden assets commodified
through these networks and their potential to turn users into unregulated
digital labour while leaving individual privacy at risk.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 20:15:12 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 10:28:56 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Jin",
"Hongrui",
""
]
] |
new_dataset
| 0.995173 |
2211.11187
|
Raviraj Joshi
|
Ananya Joshi, Aditi Kajale, Janhavi Gadre, Samruddhi Deode, Raviraj
Joshi
|
L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking
BERT Sentence Representations for Hindi and Marathi
|
Accepted at Computing Conference 2023
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sentence representation from vanilla BERT models does not work well on
sentence similarity tasks. Sentence-BERT models specifically trained on STS or
NLI datasets are shown to provide state-of-the-art performance. However,
building these models for low-resource languages is not straightforward due to
the lack of these specialized datasets. This work focuses on two low-resource
Indian languages, Hindi and Marathi. We train sentence-BERT models for these
languages using synthetic NLI and STS datasets prepared using machine
translation. We show that the strategy of NLI pre-training followed by STSb
fine-tuning is effective in generating high-performance sentence-similarity
models for Hindi and Marathi. The vanilla BERT models trained using this simple
strategy outperform the multilingual LaBSE trained using a complex training
strategy. These models are evaluated on downstream text classification and
similarity tasks. We evaluate them on real text classification datasets to
show that embeddings obtained from synthetic-data training generalize to real
datasets as well, and thus represent an effective training strategy for
low-resource languages. We also provide a comparative analysis of sentence
embeddings from FastText models, multilingual BERT models (mBERT, IndicBERT,
xlm-RoBERTa, MuRIL), multilingual sentence embedding models (LASER, LaBSE), and
monolingual BERT models based on L3Cube-MahaBERT and HindBERT. We release
L3Cube-MahaSBERT and HindSBERT, the state-of-the-art sentence-BERT models for
Marathi and Hindi respectively. Our work also serves as a guide to building
low-resource sentence embedding models.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 05:15:48 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 05:38:55 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Joshi",
"Ananya",
""
],
[
"Kajale",
"Aditi",
""
],
[
"Gadre",
"Janhavi",
""
],
[
"Deode",
"Samruddhi",
""
],
[
"Joshi",
"Raviraj",
""
]
] |
new_dataset
| 0.99965 |
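The L3Cube record above describes NLI pre-training followed by STSb
fine-tuning. A minimal sketch of that recipe with the sentence-transformers
library might look as follows; the base model name and the two one-example
datasets are placeholders, since the paper trains on machine-translated
NLI/STS corpora.

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Placeholder base model; the paper builds on monolingual MahaBERT/HindBERT.
model = SentenceTransformer("l3cube-pune/marathi-bert-v2")

# Step 1: NLI pre-training on (premise, entailed hypothesis) pairs.
nli_data = [InputExample(texts=["premise in Marathi", "entailed hypothesis"])]
nli_loader = DataLoader(nli_data, batch_size=16, shuffle=True)
model.fit(train_objectives=[(nli_loader,
                             losses.MultipleNegativesRankingLoss(model))],
          epochs=1)

# Step 2: STSb fine-tuning on sentence pairs with similarity labels in [0, 1].
sts_data = [InputExample(texts=["sentence a", "sentence b"], label=0.8)]
sts_loader = DataLoader(sts_data, batch_size=16, shuffle=True)
model.fit(train_objectives=[(sts_loader, losses.CosineSimilarityLoss(model))],
          epochs=1)
```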
2211.11658
|
Zuo Ye
|
Zuo Ye and Ohad Elishco
|
Binary $t_1$-Deletion-$t_2$-Insertion-Burst Correcting Codes and Codes
Correcting a Burst of Deletions
|
Results are covered by others' work
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We first give a construction of binary $t_1$-deletion-$t_2$-insertion-burst
correcting codes with redundancy at most $\log(n)+(t_1-t_2-1)\log\log(n)+O(1)$,
where $t_1\ge 2t_2$. Then we give an improved construction of binary codes
capable of correcting a burst of $4$ non-consecutive deletions, whose
redundancy is reduced from $7\log(n)+2\log\log(n)+O(1)$ to
$4\log(n)+6\log\log(n)+O(1)$. Lastly, by connecting non-binary
$b$-burst-deletion correcting codes with binary
$2b$-deletion-$b$-insertion-burst correcting codes, we give a new construction
of non-binary $b$-burst-deletion correcting codes with redundancy at most
$\log(n)+(b-1)\log\log(n)+O(1)$. This construction is different from previous
results.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 17:18:58 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2022 18:44:01 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Ye",
"Zuo",
""
],
[
"Elishco",
"Ohad",
""
]
] |
new_dataset
| 0.999406 |
2211.11811
|
Yann De Mont-Marin
|
Yann de Mont-Marin (WILLOW, DI-ENS), Jean Ponce (DI-ENS), Jean-Paul
Laumond (WILLOW, DI-ENS)
|
A minimum swept-volume metric structure for configuration space
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Borrowing elementary ideas from solid mechanics and differential geometry,
this presentation shows that the volume swept by a regular solid undergoing a
wide class of volume-preserving deformations induces a rather natural metric
structure with well-defined and computable geodesics on its configuration
space. This general result applies to concrete classes of articulated objects
such as robot manipulators, and we demonstrate as a proof of concept the
computation of geodesic paths for a free flying rod and planar robotic arms as
well as their use in path planning with many obstacles.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 19:26:44 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"de Mont-Marin",
"Yann",
"",
"WILLOW, DI-ENS"
],
[
"Ponce",
"Jean",
"",
"DI-ENS"
],
[
"Laumond",
"Jean-Paul",
"",
"WILLOW, DI-ENS"
]
] |
new_dataset
| 0.972392 |
2211.11839
|
Lily Chung
|
Lily Chung and Erik D. Demaine
|
Celeste is PSPACE-hard
|
15 pages, 13 figures. Presented at 23rd Thailand-Japan Conference on
Discrete and Computational Geometry, Graphs, and Games
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the complexity of the platform video game Celeste. We prove
that navigating Celeste is PSPACE-hard in five different ways, corresponding to
different subsets of the game mechanics. In particular, we prove that the game
is PSPACE-hard even without player input.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 20:23:42 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Chung",
"Lily",
""
],
[
"Demaine",
"Erik D.",
""
]
] |
new_dataset
| 0.995839 |
2211.11843
|
Kevin Dai
|
Kevin Dai, Ravesh Sukhnandan, Michael Bennington, Karen Whirley, Ryan
Bao, Lu Li, Jeffrey P. Gill, Hillel J. Chiel, and Victoria A. Webster-Wood
|
SLUGBOT, an Aplysia-inspired Robotic Grasper for Studying Control
|
Submitted and accepted to Living Machines 2022 conference
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Living systems can use a single periphery to perform a variety of tasks and
adapt to a dynamic environment. This multifunctionality is achieved through the
use of neural circuitry that adaptively controls the reconfigurable
musculature. Current robotic systems struggle to flexibly adapt to unstructured
environments. Through mimicry of the neuromechanical coupling seen in living
organisms, robotic systems could potentially achieve greater autonomy. The
tractable neuromechanics of the sea slug $\textit{Aplysia californica's}$
feeding apparatus, or buccal mass, make it an ideal candidate for applying
neuromechanical principles to the control of a soft robot. In this work, a
robotic grasper was designed to mimic specific morphological features of the
$\textit{Aplysia}$ feeding apparatus. These include the use of soft actuators
akin to biological muscle, a deformable grasping surface, and a similar
muscular architecture. A previously developed Boolean neural controller was
then adapted for the control of this soft robotic system. The robot was capable
of qualitatively replicating swallowing behavior by cyclically ingesting a
plastic tube. The robot's normalized translational and rotational odontophore
kinematics followed profiles observed $\textit{in vivo}$ despite
morphological differences. This brings $\textit{Aplysia}$-inspired control
$\textit{in roboto}$ one step closer to multifunctional neural control schema
$\textit{in vivo}$ and $\textit{in silico}$. Future additions may improve
SLUGBOT's viability as a neuromechanical research platform.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 20:32:42 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Dai",
"Kevin",
""
],
[
"Sukhnandan",
"Ravesh",
""
],
[
"Bennington",
"Michael",
""
],
[
"Whirley",
"Karen",
""
],
[
"Bao",
"Ryan",
""
],
[
"Li",
"Lu",
""
],
[
"Gill",
"Jeffrey P.",
""
],
[
"Chiel",
"Hillel J.",
""
],
[
"Webster-Wood",
"Victoria A.",
""
]
] |
new_dataset
| 0.997834 |
2211.11867
|
Phillip Lane
|
Phillip Allen Lane, Jessica Lobrano
|
The AMD Rome Memory Barrier
|
Very, very early draft for IEEE SoutheastCon 2017, 9 pages (need to
get down to 8), 6 figures, 7 tables
| null | null | null |
cs.AR cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid growth of AMD as a competitor in the CPU industry, it is
imperative that high-performance and architectural engineers analyze new AMD
CPUs. By understanding new and unfamiliar architectures, engineers are able to
adapt their algorithms to fully utilize new hardware. Furthermore, engineers
are able to anticipate the limitations of an architecture and determine when an
alternate platform is desirable for a particular workload. This paper presents
results showing that the AMD "Rome" architecture's performance suffers once an
application's memory bandwidth exceeds 37.5 GiB/s for integer-heavy
applications, or 100 GiB/s for floating-point-heavy workloads. Strong positive
correlations between memory bandwidth and CPI are presented, as well as strong
positive correlations between increased memory load and time-to-completion of
benchmarks from the SPEC CPU2017 benchmark suites.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 21:41:57 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Lane",
"Phillip Allen",
""
],
[
"Lobrano",
"Jessica",
""
]
] |
new_dataset
| 0.995007 |
2211.11870
|
Fengyi Shen
|
Fengyi Shen, Zador Pataki, Akhil Gurram, Ziyuan Liu, He Wang, Alois
Knoll
|
LoopDA: Constructing Self-loops to Adapt Nighttime Semantic Segmentation
|
Accepted to WACV2023
|
2023 IEEE/CVF Winter Conference on Applications of Computer Vision
(WACV)
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Due to the lack of training labels and the difficulty of annotating, dealing
with adverse driving conditions such as nighttime has posed a huge challenge to
the perception system of autonomous vehicles. Therefore, adapting knowledge
from a labelled daytime domain to an unlabelled nighttime domain has been
widely researched. In addition to labelled daytime datasets, existing nighttime
datasets usually provide nighttime images with corresponding daytime reference
images captured at nearby locations for reference. The key challenge is to
minimize the performance gap between the two domains. In this paper, we propose
LoopDA for domain-adaptive nighttime semantic segmentation. It consists of
self-loops that reconstruct the input data from predicted semantic maps by
rendering them into the encoded features. In a warm-up
training stage, the self-loops comprise an inner loop and an outer loop,
which are responsible for intra-domain refinement and inter-domain alignment,
respectively. To reduce the impact of day-night pose shifts, in the later
self-training stage, we propose a co-teaching pipeline that involves an offline
pseudo-supervision signal and an online reference-guided signal `DNA'
(Day-Night Agreement), bringing substantial benefits to enhance nighttime
segmentation. Our model outperforms prior methods on Dark Zurich and Nighttime
Driving datasets for semantic segmentation. Code and pretrained models are
available at https://github.com/fy-vision/LoopDA.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 21:46:05 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Shen",
"Fengyi",
""
],
[
"Pataki",
"Zador",
""
],
[
"Gurram",
"Akhil",
""
],
[
"Liu",
"Ziyuan",
""
],
[
"Wang",
"He",
""
],
[
"Knoll",
"Alois",
""
]
] |
new_dataset
| 0.997958 |
2211.11883
|
Aditi Agrawal
|
Aditi Agrawal, Archit Jain, Benjamin Reed
|
CodEval: Improving Student Success In Programming Assignments
| null |
EDULEARN 2022 Proceedings, pp. 7546-7554
|
10.21125/edulearn.2022.1767
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CodEval is a code evaluation tool that integrates with the Canvas Learning
Management System to automatically evaluate students' work within a few
minutes of submission. This early feedback allows students to catch and
correct problems in their submissions before they are graded and
gives them a clear idea of the quality of their work. CodEval handles the
tedious aspects of grading, such as compiling and running tests, leaving
graders more time to spend on the qualitative aspects of grading.
  Before using CodEval, instructors would not have a clear view of
students' comprehension of the concepts evaluated by an assignment until after
the due date. CodEval helps instructors identify and address the gaps in
students' understanding and thus helps more students successfully complete the
assignment.
  We implemented CodEval in Python using the public Canvas API. Any
instructor or grader for a Canvas course can use CodEval to automatically
evaluate submissions for programming assignments. We developed a syntax to
express submission requirements such as compilation parameters, inputs,
outputs, command-line arguments, timeouts, exit codes, functions used, files
generated, output validators, and more. We have made CodEval open source.
CodEval is an easy-to-use tool for students, graders, and instructors, and it
seamlessly integrates with Canvas. We share our experience using CodEval in
two classes with a total of 90 students and multiple coding assignments.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 22:16:52 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Agrawal",
"Aditi",
""
],
[
"Jain",
"Archit",
""
],
[
"Reed",
"Benjamin",
""
]
] |
new_dataset
| 0.977106 |
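CodEval's own requirement syntax is not shown in the record above, but the
Canvas side of such a tool can be illustrated. The sketch below polls
submissions with the community canvasapi package and posts test output back as
a comment; the course/assignment IDs, the grader command, and the attachment
handling are illustrative assumptions, not CodEval's actual implementation.

```python
import subprocess
from canvasapi import Canvas

canvas = Canvas("https://canvas.example.edu", "ACCESS_TOKEN")  # placeholders
course = canvas.get_course(12345)            # hypothetical course ID
assignment = course.get_assignment(678)      # hypothetical assignment ID

for submission in assignment.get_submissions():
    # Attachments may require an include flag depending on the API call.
    for attachment in getattr(submission, "attachments", []):
        attachment.download(attachment.filename)  # fetch the student's file
        result = subprocess.run(["./run_tests.sh", attachment.filename],
                                capture_output=True, text=True, timeout=60)
        feedback = "All tests passed." if result.returncode == 0 else result.stdout
        # Early feedback goes back as a submission comment, not a grade.
        submission.edit(comment={"text_comment": feedback})
```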
2211.11890
|
Tianjun Zhang
|
Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, Joseph E.
Gonzalez
|
TEMPERA: Test-Time Prompting via Reinforcement Learning
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Careful prompt design is critical to the use of large language models in
zero-shot or few-shot learning. As a consequence, there is a growing interest
in automated methods to design optimal prompts. In this work, we propose
Test-time Prompt Editing using Reinforcement learning (TEMPERA). In contrast to
prior prompt generation methods, TEMPERA can efficiently leverage prior
knowledge, is adaptive to different queries and provides an interpretable
prompt for every query. To achieve this, we design a novel action space that
allows flexible editing of the initial prompts covering a wide set of
commonly-used components like instructions, few-shot exemplars, and
verbalizers. The proposed method achieves significant gains compared with
recent SoTA approaches like prompt tuning, AutoPrompt, and RLPrompt, across a
variety of tasks including sentiment analysis, topic classification, natural
language inference, and reading comprehension. Our method achieves a 5.33x
average improvement in sample efficiency compared to traditional
fine-tuning methods.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 22:38:20 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Zhang",
"Tianjun",
""
],
[
"Wang",
"Xuezhi",
""
],
[
"Zhou",
"Denny",
""
],
[
"Schuurmans",
"Dale",
""
],
[
"Gonzalez",
"Joseph E.",
""
]
] |
new_dataset
| 0.999646 |
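TEMPERA's key ingredient is a discrete action space that edits instructions,
exemplars, and verbalizers at test time. The toy sketch below shows the shape
of such an edit space, with a greedy search and a length-based score standing
in for the learned RL policy and the LM-based reward; none of this is the
paper's actual method.

```python
import itertools

INSTRUCTIONS = ["Classify the sentiment:", "Is this review positive or negative?"]
EXEMPLARS = ["great movie -> positive", "dull plot -> negative",
             "loved it -> positive"]

def build_prompt(instruction, exemplars, query):
    return "\n".join([instruction, *exemplars, query])

def reward(prompt):
    # Stand-in for the paper's reward (task performance measured with an LM).
    return -abs(len(prompt) - 150)

def edit_actions(instruction, exemplars):
    """A small discrete edit space: reorder exemplars or swap the instruction."""
    for i, j in itertools.combinations(range(len(exemplars)), 2):
        e = exemplars[:]
        e[i], e[j] = e[j], e[i]
        yield instruction, e
    for ins in INSTRUCTIONS:
        if ins != instruction:
            yield ins, exemplars

def edit_prompt(query, steps=5):
    instruction, exemplars = INSTRUCTIONS[0], list(EXEMPLARS)
    best = reward(build_prompt(instruction, exemplars, query))
    for _ in range(steps):  # greedy search standing in for the learned policy
        scored = [(reward(build_prompt(i, e, query)), i, e)
                  for i, e in edit_actions(instruction, exemplars)]
        top, i, e = max(scored, key=lambda t: t[0])
        if top <= best:
            break
        best, instruction, exemplars = top, i, e
    return build_prompt(instruction, exemplars, query)

print(edit_prompt("the acting was superb ->"))
```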
2211.11931
|
Alakh Aggarwal
|
Alakh Aggarwal and Jikai Wang and Steven Hogue and Saifeng Ni and
Madhukar Budagavi and Xiaohu Guo
|
Layered-Garment Net: Generating Multiple Implicit Garment Layers from a
Single Image
|
16th Asian Conference on Computer Vision (ACCV2022)
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research has focused on generating human models and garments
from 2D images. However, state-of-the-art work focuses either on only
a single garment layer on a human model or on generating multiple
garment layers without any guarantee of an intersection-free geometric
relationship between them. In reality, people wear multiple layers of garments
in their daily life, where an inner garment layer can be partially covered
by an outer one. In this paper, we address this multi-layer modeling
problem and propose the Layered-Garment Net (LGN), which is capable of generating
intersection-free multiple layers of garments defined by implicit function
fields over the body surface, given the person's near front-view image. With a
special design of garment indication fields (GIF), we can enforce an implicit
covering relationship between the signed distance fields (SDF) of different
layers to avoid self-intersections among different garment surfaces and the
human body. Experiments demonstrate the strength of our proposed LGN framework
in generating multi-layer garments as compared to state-of-the-art methods. To
the best of our knowledge, LGN is the first research work to generate
intersection-free multiple layers of garments on the human body from a single
image.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 00:55:42 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Aggarwal",
"Alakh",
""
],
[
"Wang",
"Jikai",
""
],
[
"Hogue",
"Steven",
""
],
[
"Ni",
"Saifeng",
""
],
[
"Budagavi",
"Madhukar",
""
],
[
"Guo",
"Xiaohu",
""
]
] |
new_dataset
| 0.950918 |
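LGN's central constraint is an implicit covering relationship between layer
SDFs, gated by garment indication fields (GIF). A toy penalty expressing
"where the inner layer is marked as covered, it must not poke through the
outer layer" could look like the following; the formulation is our reading of
the abstract, not the paper's exact loss.

```python
import torch

def covering_penalty(sdf_inner, sdf_outer, gif_covered):
    """Penalize covered points where the inner layer escapes the outer layer.
    sdf_*: signed distances at query points (negative = inside the surface).
    gif_covered: 1 where the indication field marks the inner layer as covered."""
    # Inner-inside-outer holds when sdf_outer <= sdf_inner at covered points.
    violation = torch.relu(sdf_outer - sdf_inner)
    return (gif_covered * violation).mean()

sdf_inner = torch.tensor([-0.02, 0.01, -0.05])
sdf_outer = torch.tensor([-0.01, 0.03, -0.06])  # first two points violate covering
gif = torch.tensor([1.0, 1.0, 0.0])
print(covering_penalty(sdf_inner, sdf_outer, gif))
```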
2211.12000
|
Injy Hamed
|
Injy Hamed, Nizar Habash, Slim Abdennadher, Ngoc Thang Vu
|
ArzEn-ST: A Three-way Speech Translation Corpus for Code-Switched
Egyptian Arabic - English
|
Accepted to the Seventh Arabic Natural Language Processing Workshop
(WANLP 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present our work on collecting ArzEn-ST, a code-switched Egyptian Arabic -
English Speech Translation Corpus. This corpus is an extension of the ArzEn
speech corpus, which was collected through informal interviews with bilingual
speakers. In this work, we collect translations in both directions, monolingual
Egyptian Arabic and monolingual English, forming a three-way speech translation
corpus. We make the translation guidelines and corpus publicly available. We
also report results for baseline systems for machine translation and speech
translation tasks. We believe this is a valuable resource that can motivate and
facilitate further research studying the code-switching phenomenon from a
linguistic perspective and can be used to train and evaluate NLP systems.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 04:37:14 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Hamed",
"Injy",
""
],
[
"Habash",
"Nizar",
""
],
[
"Abdennadher",
"Slim",
""
],
[
"Vu",
"Ngoc Thang",
""
]
] |
new_dataset
| 0.99944 |
2211.12021
|
Hansi Liu
|
Hansi Liu, Kristin Dana, Marco Gruteser, Hongsheng Lu
|
ViFi-Loc: Multi-modal Pedestrian Localization using GAN with
Camera-Phone Correspondences
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In Smart City and Vehicle-to-Everything (V2X) systems, acquiring pedestrians'
accurate locations is crucial to traffic safety. Current systems adopt cameras
and wireless sensors to detect and estimate people's locations via sensor
fusion. Standard fusion algorithms, however, become inapplicable when
multi-modal data are not associated: for example, when a pedestrian is out of
the camera's field of view, or when data from the camera modality are missing.
To address this
challenge and produce more accurate location estimations for pedestrians, we
propose a Generative Adversarial Network (GAN) architecture. During training,
it learns the underlying linkage between pedestrians' camera-phone data
correspondences. During inference, it generates refined position estimations
based only on pedestrians' phone data that consists of GPS, IMU and FTM.
Results show that our GAN produces 3D coordinates with 1 to 2 m localization
error across five different outdoor scenes. We further show that the proposed
model supports self-learning: the generated coordinates can be associated with
pedestrians' bounding-box coordinates to obtain additional camera-phone data
correspondences, allowing automatic data collection during inference. After
fine-tuning on the expanded dataset, localization accuracy improves by up to
26%.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 05:27:38 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Liu",
"Hansi",
""
],
[
"Dana",
"Kristin",
""
],
[
"Gruteser",
"Marco",
""
],
[
"Lu",
"Hongsheng",
""
]
] |
new_dataset
| 0.997879 |
2211.12033
|
Boya Du
|
Boya Du, Shaochuan Lin, Jiong Gao, Xiyu Ji, Mengya Wang, Taotao Zhou,
Hengxu He, Jia Jia, Ning Hu
|
BASM: A Bottom-up Adaptive Spatiotemporal Model for Online Food Ordering
Service
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online Food Ordering Service (OFOS) is a popular location-based service that
helps people order what they want. Compared with traditional e-commerce
recommendation systems, users' interests may be diverse under different
spatiotemporal contexts, leading to varied spatiotemporal data distributions
that limit the fitting capacity of a single model. However, numerous current
works simply mix all samples to train one set of model parameters, which makes
it difficult to capture the diversity across spatiotemporal contexts.
Therefore, we address this challenge by proposing a Bottom-up Adaptive
Spatiotemporal Model (BASM) that adaptively fits the spatiotemporal data
distribution, further improving the fitting capability of the model.
Specifically, a spatiotemporal-aware embedding layer performs weight adaptation
at field granularity in the feature embedding, so that spatiotemporal contexts
are perceived dynamically. Meanwhile, we propose a
spatiotemporal semantic transformation layer to explicitly convert the
concatenated raw-semantic input to spatiotemporal semantics, which can
further enhance the semantic representation under different spatiotemporal
contexts. Furthermore, we introduce a novel spatiotemporal adaptive bias tower
to capture diverse spatiotemporal biases, reducing the difficulty of modeling
spatiotemporal distinctions. To further verify the effectiveness of BASM, we
also propose two new metrics, Time-period-wise AUC (TAUC) and City-wise
AUC (CAUC). Extensive offline evaluations on public and industrial datasets
demonstrate the effectiveness of our proposed model. An online
A/B experiment further illustrates the practicality of the model in online
service. The proposed method has been deployed on Ele.me, a major
online food ordering platform in China, serving more than 100 million online
users.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 06:08:57 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Du",
"Boya",
""
],
[
"Lin",
"Shaochuan",
""
],
[
"Gao",
"Jiong",
""
],
[
"Ji",
"Xiyu",
""
],
[
"Wang",
"Mengya",
""
],
[
"Zhou",
"Taotao",
""
],
[
"He",
"Hengxu",
""
],
[
"Jia",
"Jia",
""
],
[
"Hu",
"Ning",
""
]
] |
new_dataset
| 0.961082 |
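The BASM record introduces Time-period-wise AUC (TAUC) and City-wise AUC
(CAUC). A plausible reading of these metrics, ROC-AUC computed per group and
then averaged, can be sketched as follows; the exact grouping and weighting in
the paper may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def groupwise_auc(y_true, y_score, groups):
    """Average ROC-AUC over groups (time periods for TAUC, cities for CAUC)."""
    aucs = []
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:
            continue  # AUC is undefined when a group has only one class
        aucs.append(roc_auc_score(y_true[mask], y_score[mask]))
    return float(np.mean(aucs))

y = np.array([0, 1, 1, 0, 1, 0])
p = np.array([0.2, 0.7, 0.6, 0.4, 0.9, 0.1])
period = np.array([0, 0, 0, 1, 1, 1])  # e.g., lunch vs. dinner ordering hours
print("TAUC:", groupwise_auc(y, p, period))
```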
2211.12038
|
Liu Yichen
|
Shengnan Liang, Yichen Liu, Shangzhe Wu, Yu-Wing Tai, Chi-Keung Tang
|
ONeRF: Unsupervised 3D Object Segmentation from Multiple Views
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present ONeRF, a method that automatically segments and reconstructs
object instances in 3D from multi-view RGB images without any additional manual
annotations. The segmented 3D objects are represented using separate Neural
Radiance Fields (NeRFs) which allow for various 3D scene editing and novel view
rendering. At the core of our method is an unsupervised approach using the
iterative Expectation-Maximization algorithm, which effectively aggregates 2D
visual features and the corresponding 3D cues from multi-views for joint 3D
object segmentation and reconstruction. Unlike existing approaches that can
only handle simple objects, our method produces segmented full 3D NeRFs of
individual objects with complex shapes, topologies, and appearance. The
segmented ONeRFs enable a range of 3D scene editing operations, such as object
transformation, insertion, and deletion.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 06:19:37 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Liang",
"Shengnan",
""
],
[
"Liu",
"Yichen",
""
],
[
"Wu",
"Shangzhe",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
]
] |
new_dataset
| 0.995551 |
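ONeRF's unsupervised core is an iterative Expectation-Maximization procedure
that aggregates multi-view cues into object instances. The toy EM below, a
soft k-means over per-pixel feature vectors, illustrates only the alternation;
the actual method clusters combined 2D visual features and 3D cues before
fitting per-object NeRFs.

```python
import numpy as np

def em_segment(features, k=3, iters=20, temp=0.1):
    """Soft EM over per-pixel features (stand-in for ONeRF's 2D/3D cues)."""
    rng = np.random.default_rng(0)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # E-step: softly assign each feature to an object slot by similarity.
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        resp = np.exp(-d / temp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: recompute each slot as the responsibility-weighted mean.
        centers = (resp.T @ features) / resp.sum(axis=0)[:, None]
    return resp.argmax(axis=1)  # per-pixel object labels

rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(c, 1.0, size=(100, 8)) for c in (0.0, 4.0, 8.0)])
labels = em_segment(feats)
```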
2211.12081
|
Ran Gu
|
Ran Gu, Guotai Wang, Jiangshan Lu, Jingyang Zhang, Wenhui Lei, Yinan
Chen, Wenjun Liao, Shichuan Zhang, Kang Li, Dimitris N. Metaxas, Shaoting
Zhang
|
CDDSA: Contrastive Domain Disentanglement and Style Augmentation for
Generalizable Medical Image Segmentation
|
14 pages, 8 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generalization to previously unseen images with potential domain shifts and
different styles is essential for clinically applicable medical image
segmentation, and the ability to disentangle domain-specific and
domain-invariant features is key for achieving Domain Generalization (DG).
However, existing DG methods can hardly achieve effective disentanglement to
get high generalizability. To deal with this problem, we propose an efficient
Contrastive Domain Disentanglement and Style Augmentation (CDDSA) framework for
generalizable medical image segmentation. First, a disentangle network is
proposed to decompose an image into a domain-invariant anatomical
representation and a domain-specific style code, where the former is sent to a
segmentation model that is not affected by the domain shift, and the
disentangle network is regularized by a decoder that combines the anatomical
and style codes to reconstruct the input image. Second, to achieve better
disentanglement, a contrastive loss is proposed to encourage the style codes
from the same domain and different domains to be compact and divergent,
respectively. Third, to further improve generalizability, we propose a style
augmentation method based on the disentanglement representation to synthesize
images in various unseen styles with shared anatomical structures. Our method
was validated on a public multi-site fundus image dataset for optic cup and
disc segmentation and an in-house multi-site Nasopharyngeal Carcinoma Magnetic
Resonance Image (NPC-MRI) dataset for nasopharynx Gross Tumor Volume (GTVnx)
segmentation. Experimental results showed that the proposed CDDSA achieved
remarkable generalizability across different domains, and it outperformed
several state-of-the-art methods in domain-generalizable segmentation.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 08:25:35 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Gu",
"Ran",
""
],
[
"Wang",
"Guotai",
""
],
[
"Lu",
"Jiangshan",
""
],
[
"Zhang",
"Jingyang",
""
],
[
"Lei",
"Wenhui",
""
],
[
"Chen",
"Yinan",
""
],
[
"Liao",
"Wenjun",
""
],
[
"Zhang",
"Shichuan",
""
],
[
"Li",
"Kang",
""
],
[
"Metaxas",
"Dimitris N.",
""
],
[
"Zhang",
"Shaoting",
""
]
] |
new_dataset
| 0.981177 |
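CDDSA's contrastive loss makes style codes compact within a domain and
divergent across domains. A minimal PyTorch sketch of one such loss follows;
the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def style_contrastive_loss(style_codes, domains, margin=1.0):
    """Pull same-domain style codes together, push different domains apart."""
    z = F.normalize(style_codes, dim=1)
    dist = torch.cdist(z, z)                     # pairwise distances
    same = domains[:, None] == domains[None, :]  # same-domain mask
    eye = torch.eye(len(z), dtype=torch.bool)
    pull = dist[same & ~eye].mean()              # intra-domain: compact
    push = F.relu(margin - dist[~same]).mean()   # inter-domain: divergent
    return pull + push

codes = torch.randn(8, 16, requires_grad=True)
doms = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
style_contrastive_loss(codes, doms).backward()
```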
2211.12087
|
Wei Sun
|
Wei Sun, Tingjun Chen, and Neil Gong
|
SoK: Inference Attacks and Defenses in Human-Centered Wireless Sensing
| null | null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Human-centered wireless sensing aims to understand the fine-grained
environment and activities of a human using the diverse wireless signals around
her. The wireless sensing community has demonstrated the superiority of such
techniques in many applications such as smart homes, human-computer
interactions, and smart cities. Like many other technologies, wireless sensing
is also a double-edged sword. While the sensed information about a human can be
used for many good purposes such as enhancing life quality, an adversary can
also abuse it to steal private information about the human (e.g., location,
living habits, and behavioral biometric characteristics). However, the
literature lacks a systematic understanding of the privacy vulnerabilities of
wireless sensing and the defenses against them.
In this work, we aim to bridge this gap. First, we propose a framework to
systematize wireless sensing-based inference attacks. Our framework consists of
three key steps: deploying a sniffing device, sniffing wireless signals, and
inferring private information. Our framework can be used to guide the design of
new inference attacks since different attacks can instantiate these three steps
differently. Second, we propose a defense-in-depth framework to systematize
defenses against such inference attacks. The prevention component of our
framework aims to prevent inference attacks via obfuscating the wireless
signals around a human, while the detection component aims to detect and
respond to attacks. Third, based on our attack and defense frameworks, we
identify gaps in the existing literature and discuss future research
directions.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 08:36:56 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Sun",
"Wei",
""
],
[
"Chen",
"Tingjun",
""
],
[
"Gong",
"Neil",
""
]
] |
new_dataset
| 0.998555 |
2211.12124
|
Ma\"el Houbre
|
Mael Houbre, Florian Boudin and Beatrice Daille
|
A Large-Scale Dataset for Biomedical Keyphrase Generation
|
Accepted at the 13th International Workshop on Health Text Mining and
Information Analysis (LOUHI 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Keyphrase generation is the task of generating a set of words or
phrases that highlight the main topics of a document. There are few datasets
for keyphrase generation in the biomedical domain, and they are not large
enough for training generative models. In this paper, we
introduce kp-biomed, the first large-scale biomedical keyphrase generation
dataset, with more than 5M documents collected from PubMed abstracts. We train
and release several generative models and conduct a series of experiments
showing that using large-scale datasets significantly improves performance for
present and absent keyphrase generation. The dataset is available under a
CC-BY-NC v4.0 license at https://huggingface.co/datasets/taln-ls2n/kpbiomed.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 09:53:23 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Houbre",
"Mael",
""
],
[
"Boudin",
"Florian",
""
],
[
"Daille",
"Beatrice",
""
]
] |
new_dataset
| 0.999838 |
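The kp-biomed record gives its Hugging Face Hub location, so loading it should
look roughly like this; the configuration and split names are assumptions
about the dataset card, and the schema should be inspected before relying on
field names.

```python
from datasets import load_dataset

# Hub path from the abstract; the "small" config and "train" split are assumed.
ds = load_dataset("taln-ls2n/kpbiomed", "small", split="train")
print(ds[0].keys())  # inspect the actual fields (e.g., abstract, keyphrases)
```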
2211.12142
|
Bernd Bohnet
|
Bernd Bohnet, Chris Alberti, Michael Collins
|
Coreference Resolution through a seq2seq Transition-Based System
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Most recent coreference resolution systems use search algorithms over
possible spans to identify mentions and resolve coreference. We instead present
a coreference resolution system that uses a text-to-text (seq2seq) paradigm to
predict mentions and links jointly. We implement the coreference system as a
transition system and use multilingual T5 as an underlying language model. We
obtain state-of-the-art accuracy on the CoNLL-2012 datasets with 83.3 F1-score
for English (a 2.3 higher F1-score than previous work (Dobrovolskii, 2021))
using only CoNLL data for training, 68.5 F1-score for Arabic (+4.1 higher than
previous work) and 74.3 F1-score for Chinese (+5.3). In addition, we use the
SemEval-2010 data sets for experiments in the zero-shot setting, a few-shot
setting, and supervised setting using all available training data. We get
substantially higher zero-shot F1-scores for 3 out of 4 languages than previous
approaches and significantly exceed previous supervised state-of-the-art
results for all five tested languages.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 10:17:50 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Bohnet",
"Bernd",
""
],
[
"Alberti",
"Chris",
""
],
[
"Collins",
"Michael",
""
]
] |
new_dataset
| 0.999802 |
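The coreference record above predicts mentions and links as a sequence of
transitions. As a purely illustrative stand-in for how predicted transition
actions could be decoded into clusters (the paper's transition inventory and
text format are not reproduced here):

```python
def apply_transitions(tokens, actions):
    """Toy actions: 'MENTION start end' records a span,
    'LINK i j' places mentions i and j in the same cluster."""
    mentions, clusters = [], []
    for act in actions:
        op, a, b = act.split()
        a, b = int(a), int(b)
        if op == "MENTION":
            mentions.append(" ".join(tokens[a:b]))
        elif op == "LINK":
            for cluster in clusters:
                if mentions[a] in cluster or mentions[b] in cluster:
                    cluster.update({mentions[a], mentions[b]})
                    break
            else:
                clusters.append({mentions[a], mentions[b]})
    return clusters

tokens = "Mary saw her dog . She smiled .".split()
actions = ["MENTION 0 1", "MENTION 2 3", "MENTION 5 6", "LINK 0 1", "LINK 0 2"]
print(apply_transitions(tokens, actions))  # [{'Mary', 'her', 'She'}]
```

In the paper the actions themselves are emitted as text by a multilingual T5;
here they are supplied by hand.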
2211.12203
|
Sukanya Pandey
|
Matthew Johnson, Barnaby Martin, Siani Smith, Sukanya Pandey, Daniel
Paulusma, Erik Jan van Leeuwen
|
Edge Multiway Cut and Node Multiway Cut are NP-complete on subcubic
graphs
| null | null | null | null |
cs.CC cs.DM cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We show that Edge Multiway Cut (also called Multiterminal Cut) and Node
Multiway Cut are NP-complete on graphs of maximum degree $3$ (also known as
subcubic graphs). This improves on a previous degree bound of $11$. Our
NP-completeness result holds even for subcubic graphs that are planar.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 11:57:41 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Johnson",
"Matthew",
""
],
[
"Martin",
"Barnaby",
""
],
[
"Smith",
"Siani",
""
],
[
"Pandey",
"Sukanya",
""
],
[
"Paulusma",
"Daniel",
""
],
[
"van Leeuwen",
"Erik Jan",
""
]
] |
new_dataset
| 0.981328 |
2211.12223
|
Hassan Hussein
|
Hassan Hussein, Allard Oelen, Oliver Karras, S\"oren Auer
|
KGMM -- A Maturity Model for Scholarly Knowledge Graphs based on
Intertwined Human-Machine Collaboration
|
Accepted as a full paper at the ICADL 2022: International Conference
on Asian Digital Libraries 2022
| null | null | null |
cs.DL cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Knowledge Graphs (KGs) have gained increasing importance in science, business,
and society in recent years. However, most knowledge graphs were either
extracted or compiled from existing sources. There are only relatively few
examples where knowledge graphs were genuinely created by an intertwined
human-machine collaboration. Also, since the quality of data and knowledge
graphs is of paramount importance, a number of data quality assessment models
have been proposed. However, they do not take the specific aspects of
intertwined human-machine curated knowledge graphs into account. In this work,
we propose a graded maturity model for scholarly knowledge graphs (KGMM), which
specifically focuses on aspects related to the joint, evolutionary curation of
knowledge graphs for digital libraries. Our model comprises 5 maturity stages
with 20 quality measures. We demonstrate the implementation of our model in a
large-scale scholarly knowledge graph curation effort.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 12:29:08 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Hussein",
"Hassan",
""
],
[
"Oelen",
"Allard",
""
],
[
"Karras",
"Oliver",
""
],
[
"Auer",
"Sören",
""
]
] |
new_dataset
| 0.979835 |
2211.12227
|
EPTCS
|
Bruno Blanchet (Inria, Paris, France)
|
The Security Protocol Verifier ProVerif and its Horn Clause Resolution
Algorithm
|
In Proceedings HCVS/VPT 2022, arXiv:2211.10675
|
EPTCS 373, 2022, pp. 14-22
|
10.4204/EPTCS.373.2
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
ProVerif is a widely used security protocol verifier. Internally, ProVerif
uses an abstract representation of the protocol by Horn clauses and a
resolution algorithm on these clauses, in order to prove security properties of
the protocol or to find attacks. In this paper, we present an overview of
ProVerif and discuss some specificities of its resolution algorithm, related to
the particular application domain and the particular clauses that ProVerif
generates. This paper is a short summary that gives pointers to publications on
ProVerif in which the reader will find more details.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 12:35:04 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Blanchet",
"Bruno",
"",
"Inria, Paris, France"
]
] |
new_dataset
| 0.963413 |
2211.12231
|
EPTCS
|
Emanuele De Angelis (IASI-CNR, Italy), Hari Govind V K (University of
Waterloo, Canada)
|
CHC-COMP 2022: Competition Report
|
In Proceedings HCVS/VPT 2022, arXiv:2211.10675. arXiv admin note:
text overlap with arXiv:2109.04635, arXiv:2008.02939 by other authors
|
EPTCS 373, 2022, pp. 44-62
|
10.4204/EPTCS.373.5
| null |
cs.LO cs.SC
|
http://creativecommons.org/licenses/by/4.0/
|
CHC-COMP 2022 is the fifth edition of the competition of solvers for
Constrained Horn Clauses. The competition was run in March 2022; the results
were presented at the 9th Workshop on Horn Clauses for Verification and
Synthesis held in Munich, Germany, on April 3, 2022. This edition featured six
solvers, and eight tracks consisting of sets of linear and nonlinear clauses
with constraints over linear integer arithmetic, linear real arithmetic,
arrays, and algebraic data types. This report provides an overview of the
organization behind the competition runs: it includes the technical details of
the competition setup as well as presenting the results of the 2022 edition.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 12:35:56 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"De Angelis",
"Emanuele",
"",
"IASI-CNR, Italy"
],
[
"K",
"Hari Govind V",
"",
"University of\n Waterloo, Canada"
]
] |
new_dataset
| 0.999434 |
2211.12238
|
Lan Truong
|
Lan V. Truong, Albert Guill\'en i F\`abregas
|
Generalized Random Gilbert-Varshamov Codes: Typical Error Exponent and
Concentration Properties
|
60 pages, 2 figures
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We find the exact typical error exponent of constant-composition generalized
random Gilbert-Varshamov (RGV) codes over discrete memoryless channels (DMCs)
with generalized likelihood decoding. We show that the typical error exponent
of the RGV
ensemble is equal to the expurgated error exponent, provided that the RGV
codebook parameters are chosen appropriately. We also prove that the random
coding exponent converges in probability to the typical error exponent, and the
corresponding non-asymptotic concentration rates are derived. Our results show
that the decay rate of the lower tail is exponential while that of the upper
tail is double exponential above the expurgated error exponent. The explicit
dependence of the decay rates on the RGV distance functions is characterized.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 12:47:02 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Truong",
"Lan V.",
""
],
[
"Fàbregas",
"Albert Guillén i",
""
]
] |
new_dataset
| 0.965276 |
2211.12287
|
Hyoil Kim
|
Daulet Kurmantayev, Dohyun Kwun, Hyoil Kim, Sung Whan Yoon
|
RiSi: Spectro-temporal RAN-agnostic Modulation Identification for OFDMA
Signals
|
10 pages, 10 figures
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blind modulation identification is essential for 6G's RAN-agnostic
communications, which identifies the modulation type of an incompatible
wireless signal without any prior knowledge. Current research on blind
modulation identification relies on deep convolutional networks that deal with
a received signal's raw I/Q samples, but these are mostly limited to
single-carrier signal recognition and are thus impractical for identifying
spectro-temporal OFDM/OFDMA signals whose modulation varies with time and
frequency. Therefore, this paper proposes RiSi, a semantic segmentation neural
network designed to work on OFDMA's spectrograms, by replacing vanilla
DeepLabV3+'s 2D convolutions with 'flattened' convolutions to enforce the
time-frequency orthogonality constraint and to achieve the grid-like pattern of
OFDMA's resource blocks, and by introducing three-channel inputs consisting of
I/Q/amplitude. Then, we synthesized a realistic and effective dataset
consisting of OFDMA signals with various channel impairments to train the
proposed network. Moreover, we treated varying communication parameters as
different domains to apply domain generalization methods, to enhance our
model's adaptability to diverse communication environments. Extensive
evaluation shows that RiSi's modulation identification accuracy reaches 86%
averaged over four modulation types (BPSK, QPSK, 16-QAM, 64-QAM), while its
domain generalization performance for unseen data has been also shown to be
reliable.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 14:01:10 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Kurmantayev",
"Daulet",
""
],
[
"Kwun",
"Dohyun",
""
],
[
"Kim",
"Hyoil",
""
],
[
"Yoon",
"Sung Whan",
""
]
] |
new_dataset
| 0.998473 |
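RiSi replaces DeepLabV3+'s 2D convolutions with 'flattened' convolutions to
respect the time-frequency orthogonality of OFDMA resource grids. One way to
realize such a layer, factoring a 2D kernel into separate frequency-axis and
time-axis 1D passes, is sketched below; this is our reading of the idea, not
the authors' code.

```python
import torch
import torch.nn as nn

class FlattenedConv(nn.Module):
    """Factorized conv that mixes along frequency and time axes separately."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.freq = nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1),
                              padding=(k // 2, 0))  # frequency axis only
        self.time = nn.Conv2d(out_ch, out_ch, kernel_size=(1, k),
                              padding=(0, k // 2))  # time axis only
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, freq, time)
        return self.act(self.time(self.act(self.freq(x))))

# Three input channels, I/Q/amplitude, as described in the abstract.
x = torch.randn(2, 3, 64, 128)
print(FlattenedConv(3, 32)(x).shape)  # torch.Size([2, 32, 64, 128])
```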
2211.12371
|
Xiao Han
|
Xiao Han, Peishan Cong, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma
|
LiCamGait: Gait Recognition in the Wild by Using LiDAR and Camera
Multi-modal Visual Sensors
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR can capture accurate depth information in large-scale scenarios without
being affected by lighting conditions, and the captured point cloud contains
gait-related 3D geometric properties and dynamic motion characteristics. We
make the first attempt to leverage LiDAR to remedy the limitations of
view-dependent, light-sensitive cameras for more robust and accurate gait
recognition. In this paper, we propose a LiDAR-camera-based gait recognition
method with an effective multi-modal feature fusion strategy, which fully
exploits advantages of both point clouds and images. In particular, we propose
a new in-the-wild gait dataset, LiCamGait, involving multi-modal visual data
and diverse 2D/3D representations. Our method achieves state-of-the-art
performance on the new dataset. Code and dataset will be released when this
paper is published.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 16:05:58 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Han",
"Xiao",
""
],
[
"Cong",
"Peishan",
""
],
[
"Xu",
"Lan",
""
],
[
"Wang",
"Jingya",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Ma",
"Yuexin",
""
]
] |
new_dataset
| 0.992518 |
2211.12400
|
Nikolas Lamb
|
Nikolas Lamb, Sean Banerjee, Natasha Kholgade Banerjee
|
DeepJoin: Learning a Joint Occupancy, Signed Distance, and Normal Field
Function for Shape Repair
|
To be published at SIGGRAPH Asia 2022 (Journal)
| null |
10.1145/3550454.3555470
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce DeepJoin, an automated approach to generate high-resolution
repairs for fractured shapes using deep neural networks. Existing approaches to
perform automated shape repair operate exclusively on symmetric objects,
require a complete proxy shape, or predict restoration shapes using
low-resolution voxels which are too coarse for physical repair. We generate a
high-resolution restoration shape by inferring a corresponding complete shape
and a break surface from an input fractured shape. We present a novel implicit
shape representation for fractured shape repair that combines the occupancy
function, signed distance function, and normal field. We demonstrate repairs
using our approach for synthetically fractured objects from ShapeNet, 3D scans
from the Google Scanned Objects dataset, objects in the style of ancient Greek
pottery from the QP Cultural Heritage dataset, and real fractured objects. We
outperform three baseline approaches in terms of chamfer distance and normal
consistency. Unlike existing approaches and restorations using subtraction,
DeepJoin restorations do not exhibit surface artifacts and join closely to the
fractured region of the fractured shape. Our code is available at:
https://github.com/Terascale-All-sensing-Research-Studio/DeepJoin.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 16:44:57 GMT"
}
] | 2022-11-23T00:00:00 |
[
[
"Lamb",
"Nikolas",
""
],
[
"Banerjee",
"Sean",
""
],
[
"Banerjee",
"Natasha Kholgade",
""
]
] |
new_dataset
| 0.991407 |
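DeepJoin's implicit representation combines an occupancy function, a signed
distance function, and a normal field. A compact sketch of a network with
three such heads over a shared trunk follows; the layer sizes and conditioning
scheme are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointImplicitField(nn.Module):
    """Shared trunk with occupancy, SDF, and normal heads for query points."""

    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.occ = nn.Linear(hidden, 1)  # occupancy logit
        self.sdf = nn.Linear(hidden, 1)  # signed distance
        self.nrm = nn.Linear(hidden, 3)  # surface normal direction

    def forward(self, latent, xyz):
        h = self.trunk(torch.cat([latent, xyz], dim=-1))
        return (torch.sigmoid(self.occ(h)), self.sdf(h),
                F.normalize(self.nrm(h), dim=-1))

model = JointImplicitField()
occ, sdf, normal = model(torch.randn(4, 256), torch.randn(4, 3))
```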
1706.08609
|
Artemy Kolchinsky
|
Artemy Kolchinsky, Nakul Dhande, Kengjeun Park, Yong-Yeol Ahn
|
The Minor Fall, the Major Lift: Inferring Emotional Valence of Musical
Chords through Lyrics
|
Royal Society Open Science, 2017
|
Royal Society Open Science, 2017
|
10.1098/rsos.150081
| null |
cs.CL cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the association between musical chords and lyrics by analyzing
a large dataset of user-contributed guitar tablatures. Motivated by the idea
that the emotional content of chords is reflected in the words used in
corresponding lyrics, we analyze associations between lyrics and chord
categories. We also examine the usage patterns of chords and lyrics in
different musical genres, historical eras, and geographical regions. Our
overall results confirm a previously known association between Major chords
and positive valence. We also report a wide variation in this association
across regions, genres, and eras. Our results suggest the possible existence
of different emotional associations for other types of chords.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2017 21:34:29 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Dec 2017 05:07:37 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Kolchinsky",
"Artemy",
""
],
[
"Dhande",
"Nakul",
""
],
[
"Park",
"Kengjeun",
""
],
[
"Ahn",
"Yong-Yeol",
""
]
] |
new_dataset
| 0.999798 |
2201.12513
|
Ama\c{c} Herda\u{g}delen
|
Ama\c{c} Herda\u{g}delen, Lada Adamic, Bogdan State
|
The Geography of Facebook Groups in the United States
|
To be presented at AAAI ICWSM '23. Replication data is available at
https://doi.org/10.7910/DVN/OYQVEP
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We use exploratory factor analysis to investigate the online persistence of
known community-level patterns of social capital variance in the U.S. context.
Our analysis focuses on Facebook groups, specifically those that tend to
connect users in the same local area. We investigate the relationship between
established, localized measures of social capital at the county level and
patterns of participation in Facebook groups in the same areas. We identify
four main factors that distinguish Facebook group engagement by county. The
first captures small, private groups, dense with friendship connections. The
second captures very local and small groups. The third captures non-local,
large, public groups, with more age mixing. The fourth captures partially local
groups of medium to large size. The first and third factor correlate with
community level social capital measures, while the second and fourth do not.
Together and individually, the factors are predictive of offline social capital
measures, even controlling for various demographic attributes of the counties.
Our analysis reveals striking patterns of correlation between established
measures of social capital and patterns of online interaction in local Facebook
groups. To our knowledge this is the first systematic test of the association
between offline regional social capital and patterns of online community
engagement in the same regions.
|
[
{
"version": "v1",
"created": "Sat, 29 Jan 2022 06:42:38 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 15:32:04 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Herdağdelen",
"Amaç",
""
],
[
"Adamic",
"Lada",
""
],
[
"State",
"Bogdan",
""
]
] |
new_dataset
| 0.980054 |
2204.01089
|
Lingyu Lu
|
Lingyun Lu and Bang Wang and Zizhuo Zhang and Shenghao Liu and Han Xu
|
VRKG4Rec: Virtual Relational Knowledge Graphs for Recommendation
| null | null |
10.1145/3539597.3570482
| null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Incorporating knowledge graph as side information has become a new trend in
recommendation systems. Recent studies regard items as entities of a knowledge
graph and leverage graph neural networks to assist item encoding, yet by
considering each relation type individually. However, there are often too many
relation types, and some relation types involve too few entities. We argue
that it is neither efficient nor effective to use every relation type for item
encoding. In this paper, we propose VRKG4Rec (Virtual Relational
Knowledge Graphs for Recommendation), a model which explicitly distinguishes
the influence of different relations on item representation learning. We first
construct virtual relational knowledge graphs (VRKGs) by an unsupervised
learning scheme.
We also design a local weighted smoothing (LWS) mechanism for encoding nodes,
which iteratively updates a node embedding depending only on its own embedding
and those of its neighbors, involving no additional training parameters. We
also employ the LWS mechanism on a user-item bipartite graph for user
representation learning, which utilizes item encodings with relational
knowledge to help train user representations. Experiment results on two
public datasets validate that our VRKG4Rec model outperforms the
state-of-the-art methods. The implementations are available at
https://github.com/lulu0913/VRKG4Rec.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 15:14:20 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 12:28:36 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Nov 2022 08:02:52 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Lu",
"Lingyun",
""
],
[
"Wang",
"Bang",
""
],
[
"Zhang",
"Zizhuo",
""
],
[
"Liu",
"Shenghao",
""
],
[
"Xu",
"Han",
""
]
] |
new_dataset
| 0.996762 |
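The LWS mechanism described above updates a node embedding from its own
embedding and those of its neighbors, with no trainable parameters. A
parameter-free smoothing iteration consistent with that description is
sketched below; the similarity-based weighting is our assumption.

```python
import numpy as np

def local_weighted_smoothing(emb, adj, iters=2, keep=0.5):
    """Update each node from itself and its neighbors; no trainable parameters."""
    for _ in range(iters):
        new = emb.copy()
        for v in range(len(emb)):
            nbrs = np.nonzero(adj[v])[0]
            if len(nbrs) == 0:
                continue
            w = emb[nbrs] @ emb[v]  # similarity-based weights (assumed)
            w = np.exp(w - w.max())
            w /= w.sum()
            new[v] = keep * emb[v] + (1 - keep) * (w @ emb[nbrs])
        emb = new
    return emb

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
emb = local_weighted_smoothing(np.random.randn(3, 8), adj)
```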
2204.04090
|
Yu-Rong Zhang
|
Yu-Rong Zhang, Ruei-Yang Su, Sheng Yen Chou, Shan-Hung Wu
|
Single-level Adversarial Data Synthesis based on Neural Tangent Kernels
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative adversarial networks (GANs) have achieved impressive
performance in data synthesis and have driven the development of many
applications. However, GANs are known to be hard to train due to their bilevel
objective, which leads to convergence problems, mode collapse, and vanishing
gradients. In this paper, we propose a new generative model called the
generative adversarial NTK (GA-NTK) that has a single-level objective. The
GA-NTK keeps the spirit of adversarial learning (which helps generate plausible
data) while avoiding the training difficulties of GANs. This is done by
modeling the discriminator as a Gaussian process with a neural tangent kernel
(NTK-GP) whose training dynamics can be completely described by a closed-form
formula. We analyze the convergence behavior of GA-NTK trained by gradient
descent and give some sufficient conditions for convergence. We also conduct
extensive experiments to study the advantages and limitations of GA-NTK and
propose some techniques that make GA-NTK more practical.
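A rough numpy sketch of the single-level idea, with a generic RBF kernel
standing in for the NTK (the actual GA-NTK uses the neural tangent kernel and
its closed-form training dynamics):

    import numpy as np

    def rbf_kernel(a, b, gamma=1.0):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def discriminator_scores(x_gen, x_real, lam=1e-3):
        # Closed-form kernel ridge "discriminator": real data labeled +1,
        # generated data labeled -1. No inner optimization loop is needed,
        # which is what makes the overall objective single-level.
        X = np.vstack([x_real, x_gen])
        y = np.concatenate([np.ones(len(x_real)), -np.ones(len(x_gen))])
        K = rbf_kernel(X, X)
        alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        return rbf_kernel(x_gen, X) @ alpha  # generator ascends these scores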
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 14:17:46 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2022 06:03:06 GMT"
},
{
"version": "v3",
"created": "Sat, 21 May 2022 10:23:20 GMT"
},
{
"version": "v4",
"created": "Sun, 31 Jul 2022 07:16:05 GMT"
},
{
"version": "v5",
"created": "Thu, 1 Sep 2022 12:53:50 GMT"
},
{
"version": "v6",
"created": "Tue, 18 Oct 2022 14:21:02 GMT"
},
{
"version": "v7",
"created": "Sun, 20 Nov 2022 12:39:09 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Zhang",
"Yu-Rong",
""
],
[
"Su",
"Ruei-Yang",
""
],
[
"Chou",
"Sheng Yen",
""
],
[
"Wu",
"Shan-Hung",
""
]
] |
new_dataset
| 0.996597 |
2205.10098
|
Kazuhiko Kawamoto
|
Takuto Otomo, Hiroshi Kera, Kazuhiko Kawamoto
|
Adversarial joint attacks on legged robots
|
6 pages, 8 figures
| null |
10.1109/SMC53654.2022.9945546
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address adversarial attacks on the actuators at the joints of legged
robots trained by deep reinforcement learning. The vulnerability to the joint
attacks can significantly impact the safety and robustness of legged robots. In
this study, we demonstrate that the adversarial perturbations to the torque
control signals of the actuators can significantly reduce the rewards and cause
walking instability in robots. To find the adversarial torque perturbations, we
develop black-box adversarial attacks, where the adversary cannot access the
neural networks trained by deep reinforcement learning. The black-box attack
can be applied to legged robots regardless of the architecture and algorithms
of deep reinforcement learning. We employ three search methods for the
black-box adversarial attacks: random search, differential evolution, and
numerical gradient descent methods. In experiments with the quadruped robot
Ant-v2 and the bipedal robot Humanoid-v2, in OpenAI Gym environments, we find
that differential evolution can efficiently find the strongest torque
perturbations among the three methods. In addition, we find that the quadruped
robot Ant-v2 is vulnerable to the adversarial perturbations, whereas the
bipedal robot Humanoid-v2 is robust to them. Consequently, the
joint attacks can be used for proactive diagnosis of robot walking instability.
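A minimal sketch of the random-search variant, assuming a hypothetical
env_rollout callback that runs one episode with a fixed torque perturbation
added at every step and returns the total reward:

    import numpy as np

    def random_search_attack(env_rollout, action_dim, eps=0.1,
                             n_trials=200, seed=0):
        # The strongest attack is the perturbation that drives the episode
        # reward lowest; differential evolution would refine candidates
        # instead of sampling them independently as done here.
        rng = np.random.default_rng(seed)
        best_delta, best_reward = None, np.inf
        for _ in range(n_trials):
            delta = rng.uniform(-eps, eps, size=action_dim)
            reward = env_rollout(delta)
            if reward < best_reward:
                best_reward, best_delta = reward, delta
        return best_delta, best_reward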
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 11:30:23 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Otomo",
"Takuto",
""
],
[
"Kera",
"Hiroshi",
""
],
[
"Kawamoto",
"Kazuhiko",
""
]
] |
new_dataset
| 0.986632 |
2205.10187
|
Kazuhiko Kawamoto
|
Takaaki Azakami, Hiroshi Kera, Kazuhiko Kawamoto
|
Adversarial Body Shape Search for Legged Robots
|
6 pages, 7 figures
| null |
10.1109/SMC53654.2022.9945257
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an evolutionary computation method for an adversarial attack on
the length and thickness of parts of legged robots by deep reinforcement
learning. This attack changes the robot body shape and interferes with
walking; we call the attacked body shape the adversarial body shape. The
evolutionary computation method searches for the adversarial body shape by
minimizing the expected cumulative reward earned through walking simulation.
To evaluate the effectiveness of the proposed method, we perform experiments
with three legged robots, Walker2d, Ant-v2, and Humanoid-v2, in OpenAI Gym.
The experimental
results reveal that Walker2d and Ant-v2 are more vulnerable to the attack on
the length than the thickness of the body parts, whereas Humanoid-v2 is
vulnerable to attacks on both the length and the thickness. We further
identify that the adversarial body shapes break left-right symmetry or shift
the center of gravity of the legged robots. Finding adversarial body shapes
can be used to proactively diagnose the vulnerability of legged robot walking.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 13:55:47 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Azakami",
"Takaaki",
""
],
[
"Kera",
"Hiroshi",
""
],
[
"Kawamoto",
"Kazuhiko",
""
]
] |
new_dataset
| 0.992749 |
2205.12870
|
Bowen Shi
|
Bowen Shi and Diane Brentari and Greg Shakhnarovich and Karen Livescu
|
Open-Domain Sign Language Translation Learned from Online Video
|
EMNLP 2022
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing work on sign language translation - that is, translation from sign
language videos into sentences in a written language - has focused mainly on
(1) data collected in a controlled environment or (2) data in a specific
domain, which limits the applicability to real-world settings. In this paper,
we introduce OpenASL, a large-scale American Sign Language (ASL) - English
dataset collected from online video sites (e.g., YouTube). OpenASL contains 288
hours of ASL videos in multiple domains from over 200 signers and is the
largest publicly available ASL translation dataset to date. To tackle the
challenges of sign language translation in realistic settings and without
glosses, we propose a set of techniques including sign search as a pretext task
for pre-training and fusion of mouthing and handshape features. The proposed
techniques produce consistent and large improvements in translation quality,
over baseline models based on prior work. Our data and code are publicly
available at https://github.com/chevalierNoir/OpenASL
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 15:43:31 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 16:06:02 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Shi",
"Bowen",
""
],
[
"Brentari",
"Diane",
""
],
[
"Shakhnarovich",
"Greg",
""
],
[
"Livescu",
"Karen",
""
]
] |
new_dataset
| 0.999845 |
2206.05514
|
Wei Li
|
Wei Li, Qiming Zhang, Jing Zhang, Zhen Huang, Xinmei Tian, Dacheng Tao
|
Toward Real-world Single Image Deraining: A New Benchmark and Beyond
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Single image deraining (SID) in real scenarios has attracted increasing
attention in recent years. Due to the difficulty of obtaining real-world
rainy/clean
image pairs, previous real datasets suffer from low-resolution images,
homogeneous rain streaks, limited background variation, and even misalignment
of image pairs, resulting in incomprehensive evaluation of SID methods. To
address these issues, we establish a new high-quality dataset named
RealRain-1k, consisting of 1,120 high-resolution paired clean and rainy
images with low- and high-density rain streaks, respectively. Images in
RealRain-1k are automatically generated from a large number of real-world rainy
video clips through a simple yet effective rain density-controllable filtering
method, and have good properties of high image resolution, background
diversity, rain streaks variety, and strict spatial alignment. RealRain-1k also
provides abundant rain streak layers as a byproduct, enabling us to build a
large-scale synthetic dataset named SynRain-13k by pasting the rain streak
layers on abundant natural images. Based on them and existing datasets, we
benchmark more than 10 representative SID methods on three tracks: (1) fully
supervised learning on RealRain-1k, (2) domain generalization to real datasets,
and (3) syn-to-real transfer learning. The experimental results (1) show the
differences among representative methods in image restoration performance and
model complexity, (2) validate the significance of the proposed datasets for
model generalization, and (3) provide useful insights into the superiority of
learning from diverse domains and shed light on future research on real-world
SID.
The datasets will be released at https://github.com/hiker-lw/RealRain-1k
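A toy sketch of pasting a rain streak layer onto a clean image; the exact
blending used to build SynRain-13k is not specified here, so the screen-style
compositing below is an assumption:

    import numpy as np

    def paste_rain_layer(clean, rain_layer, alpha=1.0):
        # clean, rain_layer: float arrays in [0, 1] with the same shape.
        rainy = clean + alpha * rain_layer * (1.0 - clean)
        return np.clip(rainy, 0.0, 1.0)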
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 12:26:59 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 13:11:27 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Li",
"Wei",
""
],
[
"Zhang",
"Qiming",
""
],
[
"Zhang",
"Jing",
""
],
[
"Huang",
"Zhen",
""
],
[
"Tian",
"Xinmei",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.998882 |
2206.10885
|
Daniele Baieri
|
Stefano Esposito, Daniele Baieri, Stefan Zellmann, Andr\'e Hinkenjann,
Emanuele Rodol\`a
|
KiloNeuS: A Versatile Neural Implicit Surface Representation for
Real-Time Rendering
|
9 pages, 8 figures
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
NeRF-based techniques fit wide and deep multi-layer perceptrons (MLPs) to a
continuous radiance field that can be rendered from any unseen viewpoint.
However, the lack of surface and normal definitions and the high rendering
times limit their usage in typical computer graphics applications. Such
limitations
have recently been overcome separately, but solving them together remains an
open problem. We present KiloNeuS, a neural representation reconstructing an
implicit surface represented as a signed distance function (SDF) from
multi-view images and enabling real-time rendering by partitioning the space
into thousands of tiny MLPs that are fast to evaluate. As we learn the
implicit surface locally using independent models, obtaining a globally
coherent geometry is non-trivial and needs to be addressed during training. We
evaluate rendering
performance on a GPU-accelerated ray-caster with in-shader neural network
inference, resulting in an average of 46 FPS at high resolution, proving a
satisfying tradeoff between storage costs and rendering quality. In fact, our
evaluation for rendering quality and surface recovery shows that KiloNeuS
outperforms its single-MLP counterpart. Finally, to exhibit the versatility of
KiloNeuS, we integrate it into an interactive path-tracer taking full advantage
of its surface normals. We consider our work a crucial first step toward
real-time rendering of implicit neural representations under global
illumination.
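A simplified sketch of the space-partitioned query step, where each grid cell
owns a tiny network (here an arbitrary callable) and points are routed to the
cell that contains them; the real renderer does this in-shader:

    import numpy as np

    def query_sdf(points, cell_mlps, bbox_min, bbox_max, res):
        # points: (n, 3); cell_mlps: res x res x res nested lists of
        # callables, each mapping a 3D point to a local signed distance.
        t = (points - bbox_min) / (bbox_max - bbox_min)
        idx = np.clip((t * res).astype(int), 0, res - 1)
        return np.array([cell_mlps[i][j][k](p)
                         for (i, j, k), p in zip(idx, points)])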
|
[
{
"version": "v1",
"created": "Wed, 22 Jun 2022 07:33:26 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 10:13:29 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Esposito",
"Stefano",
""
],
[
"Baieri",
"Daniele",
""
],
[
"Zellmann",
"Stefan",
""
],
[
"Hinkenjann",
"André",
""
],
[
"Rodolà",
"Emanuele",
""
]
] |
new_dataset
| 0.99771 |
2206.14390
|
Xiaodong Gu
|
Zhaowei Zhang, Hongyu Zhang, Beijun Shen, Xiaodong Gu
|
Diet Code Is Healthy: Simplifying Programs for Pre-trained Models of
Code
|
Accepted to be published in ESEC/FSE 2022
| null |
10.1145/3540250.3549094
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-trained code representation models such as CodeBERT have demonstrated
superior performance in a variety of software engineering tasks, yet they are
often computationally heavy, with complexity that grows quadratically with the
length of the input sequence.
Our empirical analysis of CodeBERT's attention reveals that CodeBERT pays more
attention to certain types of tokens and statements such as keywords and
data-relevant statements. Based on these findings, we propose DietCode, which
aims at lightweight leverage of large pre-trained models for source code.
DietCode simplifies the input program of CodeBERT with three strategies,
namely, word dropout, frequency filtering, and an attention-based strategy
which selects statements and tokens that receive the most attention weights
during pre-training. Hence, it gives a substantial reduction in the
computational cost without hampering the model performance. Experimental
results on two downstream tasks show that DietCodeBERT provides comparable
results to CodeBERT with 40% less computational cost in fine-tuning and
testing.
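A token-level sketch of the attention-based pruning strategy (DietCode also
drops words, filters by frequency, and selects whole statements; those steps
are omitted here):

    def prune_by_attention(tokens, attn_weights, keep_ratio=0.6):
        # tokens: input tokens; attn_weights: one attention score per
        # token, e.g., the attention mass it received during pre-training.
        k = max(1, int(len(tokens) * keep_ratio))
        keep = sorted(range(len(tokens)),
                      key=lambda i: attn_weights[i], reverse=True)[:k]
        keep.sort()  # preserve the original token order
        return [tokens[i] for i in keep]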
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 04:04:38 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2022 09:23:19 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Aug 2022 12:14:22 GMT"
},
{
"version": "v4",
"created": "Tue, 20 Sep 2022 08:18:43 GMT"
},
{
"version": "v5",
"created": "Mon, 21 Nov 2022 13:31:39 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Zhang",
"Zhaowei",
""
],
[
"Zhang",
"Hongyu",
""
],
[
"Shen",
"Beijun",
""
],
[
"Gu",
"Xiaodong",
""
]
] |
new_dataset
| 0.995908 |
2207.05817
|
Raymond Li
|
Raymond Li, Ilya Valmianski, Li Deng, Xavier Amatriain, Anitha Kannan
|
OSLAT: Open Set Label Attention Transformer for Medical Entity Retrieval
and Span Extraction
|
18 pages, 2 figures, Camera-Ready for ML4H 2022 (Proceedings Track)
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Medical entity span extraction and linking are critical steps for many
healthcare NLP tasks. Most existing entity extraction methods either have a
fixed vocabulary of medical entities or require span annotations. In this
paper, we propose a method for linking an open set of entities that does not
require any span annotations. Our method, Open Set Label Attention Transformer
(OSLAT), uses the label-attention mechanism to learn candidate-entity
contextualized text representations. We find that OSLAT can not only link
entities but is also able to implicitly learn spans associated with entities.
We evaluate OSLAT on two tasks: (1) span extraction trained without explicit
span annotations, and (2) entity linking trained without span-level annotation.
We test the generalizability of our method by training two separate models on
two datasets with low entity overlap and comparing cross-dataset performance.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2022 20:22:55 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Nov 2022 19:31:43 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Li",
"Raymond",
""
],
[
"Valmianski",
"Ilya",
""
],
[
"Deng",
"Li",
""
],
[
"Amatriain",
"Xavier",
""
],
[
"Kannan",
"Anitha",
""
]
] |
new_dataset
| 0.950934 |
2210.01032
|
Weihua Zhou
|
Xuewei Cao, Joyce H Keyak, Sigurdur Sigurdsson, Chen Zhao, Weihua
Zhou, Anqi Liu, Thomas Lang, Hong-Wen Deng, Vilmundur Gudnason, Qiuying Sha
|
A New Hip Fracture Risk Index Derived from FEA-Computed Proximal Femur
Fracture Loads and Energies-to-Failure
|
27 pages, 4 figures
| null | null | null |
cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Hip fracture risk assessment is an important but challenging task.
Quantitative CT-based patient specific finite element analysis (FEA) computes
the force (fracture load) to break the proximal femur in a particular loading
condition. It provides different structural information about the proximal
femur that can influence a subject's overall fracture risk. To obtain a more
robust measure of fracture risk, we used principal component analysis (PCA) to
develop a global FEA computed fracture risk index that incorporates the
FEA-computed yield and ultimate failure loads and energies to failure in four
loading conditions (single-limb stance and impact from a fall onto the
posterior, posterolateral, and lateral aspects of the greater trochanter) of
110 hip fracture subjects and 235 age- and sex-matched control subjects from the
AGES-Reykjavik study. We found that the first PC (PC1) of the FE parameters was
the only significant predictor of hip fracture. Using a logistic regression
model, we determined if prediction performance for hip fracture using PC1
differed from that using FE parameters combined by stratified random resampling
with respect to hip fracture status. The results showed that the average area
under the receiver operating characteristic curve (AUC) using PC1 was always
higher than that using all FE parameters combined in the male subjects. The
AUC of PC1 and the AUC of the FE parameters combined were not significantly
different in the female subjects or in all subjects.
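A schematic scikit-learn sketch of deriving the index and using it as a
predictor; the random arrays below are stand-ins for the 16 FE parameters
(yield/ultimate loads and energies-to-failure in four loading conditions) and
the fracture labels:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(345, 16))      # stand-in FE parameters
    y = rng.integers(0, 2, size=345)    # stand-in fracture status

    Z = StandardScaler().fit_transform(X)
    pc1 = PCA(n_components=1).fit_transform(Z)  # global risk index
    clf = LogisticRegression().fit(pc1, y)      # hip fracture predictor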
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 15:46:06 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 00:32:41 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Cao",
"Xuewei",
""
],
[
"Keyak",
"Joyce H",
""
],
[
"Sigurdsson",
"Sigurdur",
""
],
[
"Zhao",
"Chen",
""
],
[
"Zhou",
"Weihua",
""
],
[
"Liu",
"Anqi",
""
],
[
"Lang",
"Thomas",
""
],
[
"Deng",
"Hong-Wen",
""
],
[
"Gudnason",
"Vilmundur",
""
],
[
"Sha",
"Qiuying",
""
]
] |
new_dataset
| 0.999654 |
2210.06134
|
Hua Xuan Qin
|
Hua Xuan Qin, Yuyang Wang, Pan Hui
|
Identity, Crimes, and Law Enforcement in the Metaverse
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
With the boom in metaverse-related projects in major areas of the public's
life, the safety of users has become a pressing concern. We believe that an
international legal framework should be established to promote collaboration
among nations, facilitate crime investigation, and support democratic
governance. In this paper, we discuss the legal concerns of identity, crimes
that could occur based on incidents in existing virtual worlds, and challenges
to unified law enforcement in the metaverse.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 12:45:31 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 11:08:19 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Qin",
"Hua Xuan",
""
],
[
"Wang",
"Yuyang",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.990916 |
2210.09049
|
Jianing Wang
|
Jianing Wang, Chengcheng Han, Chengyu Wang, Chuanqi Tan, Minghui Qiu,
Songfang Huang, Jun Huang, Ming Gao
|
SpanProto: A Two-stage Span-based Prototypical Network for Few-shot
Named Entity Recognition
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Few-shot Named Entity Recognition (NER) aims to identify named entities with
very little annotated data. Previous methods solve this problem based on
token-wise classification, which ignores entity boundary information, so
performance is inevitably affected by the massive number of non-entity tokens.
To this end, we propose a novel span-based prototypical network (SpanProto)
that
tackles few-shot NER via a two-stage approach, including span extraction and
mention classification. In the span extraction stage, we transform the
sequential tags into a global boundary matrix, enabling the model to focus on
the explicit boundary information. For mention classification, we leverage
prototypical learning to capture the semantic representations for each labeled
span and make the model better adapt to novel-class entities. To further
improve the model performance, we split out the false positives generated by
the span extractor but not labeled in the current episode set, and then present
a margin-based loss to separate them from each prototype region. Experiments
over multiple benchmarks demonstrate that our model outperforms strong
baselines by a large margin.
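A minimal PyTorch sketch of the mention-classification stage: class prototypes
are the mean embeddings of labeled support spans, and query spans go to the
nearest prototype (the boundary-matrix span extractor and the margin-based
loss are omitted):

    import torch

    def prototype_classify(query_embs, support_embs, support_labels,
                           n_classes):
        # query_embs: (q, d); support_embs: (s, d); support_labels: (s,)
        # ints. Assumes every class has at least one labeled support span.
        protos = torch.stack([support_embs[support_labels == c].mean(dim=0)
                              for c in range(n_classes)])
        dists = torch.cdist(query_embs, protos)  # (q, n_classes)
        return dists.argmin(dim=1)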
|
[
{
"version": "v1",
"created": "Mon, 17 Oct 2022 12:59:33 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 07:20:55 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Wang",
"Jianing",
""
],
[
"Han",
"Chengcheng",
""
],
[
"Wang",
"Chengyu",
""
],
[
"Tan",
"Chuanqi",
""
],
[
"Qiu",
"Minghui",
""
],
[
"Huang",
"Songfang",
""
],
[
"Huang",
"Jun",
""
],
[
"Gao",
"Ming",
""
]
] |
new_dataset
| 0.999258 |
2210.11060
|
Haomin Fu
|
Haomin Fu, Yeqin Zhang, Haiyang Yu, Jian Sun, Fei Huang, Luo Si,
Yongbin Li, Cam-Tu Nguyen
|
Doc2Bot: Accessing Heterogeneous Documents via Conversational Bots
|
17 pages, 14 figures. Accepted by Findings of EMNLP 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces Doc2Bot, a novel dataset for building machines that
help users seek information via conversations. This is of particular interest
for companies and organizations that own a large number of manuals or
instruction books. Despite its potential, the nature of our task poses several
challenges: (1) documents contain various structures that hinder the ability of
machines to comprehend them, and (2) user information needs are often
underspecified. Compared to prior datasets that either focus on a single
structural type or overlook the role of questioning to uncover user needs, the
Doc2Bot dataset is developed to target such challenges systematically. Our
dataset contains over 100,000 turns based on Chinese documents from five
domains, larger than any prior document-grounded dialog dataset for information
seeking. We propose three tasks in Doc2Bot: (1) dialog state tracking to track
user intentions, (2) dialog policy learning to plan system actions and
contents, and (3) response generation which generates responses based on the
outputs of the dialog policy. Baseline methods based on the latest deep
learning models are presented, indicating that our proposed tasks are
challenging and worthy of further research.
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 07:33:05 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Oct 2022 13:43:42 GMT"
},
{
"version": "v3",
"created": "Sun, 20 Nov 2022 04:35:46 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Fu",
"Haomin",
""
],
[
"Zhang",
"Yeqin",
""
],
[
"Yu",
"Haiyang",
""
],
[
"Sun",
"Jian",
""
],
[
"Huang",
"Fei",
""
],
[
"Si",
"Luo",
""
],
[
"Li",
"Yongbin",
""
],
[
"Nguyen",
"Cam-Tu",
""
]
] |
new_dataset
| 0.978589 |
2210.11643
|
Erin Taylor
|
Shao-Heng Ko, Erin Taylor, Pankaj K. Agarwal, Kamesh Munagala
|
All Politics is Local: Redistricting via Local Fairness
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose to use the concept of local fairness for auditing
and ranking redistricting plans. Given a redistricting plan, a deviating group
is a population-balanced contiguous region in which a majority of individuals
are of the same interest and in the minority of their respective districts;
such a set of individuals has a justified complaint about how the
redistricting plan was drawn. A redistricting plan with no deviating groups is
called locally
fair. We show that the problem of auditing a given plan for local fairness is
NP-complete. We present an MCMC approach for auditing as well as ranking
redistricting plans. We also present a dynamic programming based algorithm for
the auditing problem that we use to demonstrate the efficacy of our MCMC
approach. Using these tools, we test local fairness on real-world election
data, showing that it is indeed possible to find plans that are almost or
exactly locally fair. Further, we show that such plans can be generated while
sacrificing very little in terms of compactness and existing fairness measures
such as competitiveness of the districts or seat shares of the plans.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 00:01:29 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 17:08:23 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Ko",
"Shao-Heng",
""
],
[
"Taylor",
"Erin",
""
],
[
"Agarwal",
"Pankaj K.",
""
],
[
"Munagala",
"Kamesh",
""
]
] |
new_dataset
| 0.998133 |
2210.16065
|
Puyu Yang
|
Puyu Yang and Giovanni Colavizza
|
Polarization and reliability of news sources in Wikipedia
|
15pages, 10 figures
| null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
Wikipedia is the largest online encyclopedia: its open contribution policy
allows everyone to edit and share their knowledge. A challenge of radical
openness is that it facilitates the introduction of biased content or
perspectives into Wikipedia. Wikipedia relies on numerous external sources
such as journal articles, books, news media, and more. News media sources, in
particular, account for nearly a third of all citations from Wikipedia.
However, despite their importance for providing up-to-date and factual
content, there is still a limited understanding of which news media sources
are cited from Wikipedia.
Relying on a large-scale open dataset of nearly 30M citations from English
Wikipedia, we find a moderate yet systematic liberal polarization in the
selection of news media sources. We also show that this effect is not mitigated
by controlling for news media factual reliability. Our results contribute to
Wikipedia's knowledge integrity agenda in suggesting that a systematic effort
would help to better map potential biases in Wikipedia and find means to
strengthen its neutral point of view policy.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 11:18:31 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 08:25:26 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Yang",
"Puyu",
""
],
[
"Colavizza",
"Giovanni",
""
]
] |
new_dataset
| 0.991811 |
2211.01827
|
Ioannis Mavromatis Dr
|
Ioannis Mavromatis and Aftab Khan
|
Demo: LE3D: A Privacy-preserving Lightweight Data Drift Detection
Framework
|
IEEE CCNC 2023, Las Vegas, USA
| null | null | null |
cs.LG cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents LE3D; a novel data drift detection framework for
preserving data integrity and confidentiality. LE3D is a generalisable platform
for evaluating novel drift detection mechanisms within the Internet of Things
(IoT) sensor deployments. Our framework operates in a distributed manner,
preserving data privacy while still being adaptable to new sensors with minimal
online reconfiguration. Our framework currently supports multiple drift
estimators for time-series IoT data and can easily be extended to accommodate
new data types and drift detection mechanisms. This demo will illustrate the
functionality of LE3D under a real-world-like scenario.
|
[
{
"version": "v1",
"created": "Thu, 3 Nov 2022 14:10:03 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2022 19:00:54 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Mavromatis",
"Ioannis",
""
],
[
"Khan",
"Aftab",
""
]
] |
new_dataset
| 0.991869 |
2211.05371
|
Jaechul Roh
|
Jaechul Roh, Minhao Cheng, Yajun Fang
|
MSDT: Masked Language Model Scoring Defense in Text Domain
|
5 pages, 1 figure, 4 tables, accepted as a conference paper at IEEE
UV 2022, Boston, USA
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained language models allowed us to process downstream tasks with the
help of fine-tuning, which aids the model to achieve fairly high accuracy in
various Natural Language Processing (NLP) tasks. Such easily downloaded
language models from various websites have empowered public users as well as
major institutions, giving momentum to their real-life application. However,
it was recently shown that models become extremely vulnerable when they are
backdoor-attacked with trigger-inserted poisoned datasets by malicious
users. The attackers then redistribute the victim models to the public to
attract other users to use them, where the models tend to misclassify when
certain triggers are detected within the training sample. In this paper, we
will introduce a novel textual backdoor defense method, named MSDT, which
outperforms existing defensive algorithms on specific
datasets. The experimental results illustrate that our method can be effective
and constructive in terms of defending against backdoor attack in text domain.
Code is available at https://github.com/jcroh0508/MSDT.
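A sketch of the masked-LM scoring idea: positions whose original token is
highly unlikely under a masked language model are candidate trigger tokens.
The model choice and the use of per-token log-probabilities here are
illustrative assumptions, not necessarily the paper's exact setup:

    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

    def token_mlm_scores(sentence):
        ids = tok(sentence, return_tensors="pt")["input_ids"][0]
        scores = []
        for i in range(1, len(ids) - 1):        # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tok.mask_token_id
            with torch.no_grad():
                logits = mlm(masked.unsqueeze(0)).logits[0, i]
            logp = torch.log_softmax(logits, dim=-1)[ids[i]]
            scores.append(logp.item())
        return scores  # low scores flag suspicious (possibly trigger) tokens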
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 06:46:47 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Roh",
"Jaechul",
""
],
[
"Cheng",
"Minhao",
""
],
[
"Fang",
"Yajun",
""
]
] |
new_dataset
| 0.998925 |
2211.05995
|
Feiqi Cao
|
Yuanzhe Jia, Weixuan Wu, Feiqi Cao, Soyeon Caren Han
|
In-game Toxic Language Detection: Shared Task and Attention Residuals
|
Accepted at AAAI 2023 Poster
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In-game toxic language has become a pressing issue in the gaming industry and
community. Several online game toxicity analysis frameworks and models have
been proposed. However, it is still challenging to detect toxicity due to the
nature of in-game chat, which is extremely short. In this paper, we
describe how the in-game toxic language shared task has been established using
the real-world in-game chat data. In addition, we propose and introduce the
model/framework for toxic language token tagging (slot filling) from the
in-game chat. The data and code will be released.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 04:33:45 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 04:20:18 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Nov 2022 12:55:48 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Jia",
"Yuanzhe",
""
],
[
"Wu",
"Weixuan",
""
],
[
"Cao",
"Feiqi",
""
],
[
"Han",
"Soyeon Caren",
""
]
] |
new_dataset
| 0.978577 |
2211.06679
|
Guang Liu
|
Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang,
Ledell Wu
|
AltCLIP: Altering the Language Encoder in CLIP for Extended Language
Capabilities
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a conceptually simple and effective method to train
a strong bilingual/multilingual multimodal representation model. Starting from
the pre-trained multimodal representation model CLIP released by OpenAI, we
replaced its text encoder with the pre-trained multilingual text encoder
XLM-R, and aligned language and image representations by a two-stage training
schema consisting of teacher learning and contrastive learning. We validate
our method through evaluations on a wide range of tasks. We set new
state-of-the-art performance on a range of tasks including ImageNet-CN,
Flickr30k-CN, COCO-CN, and XTD. Further, we obtain performance very close to
that of CLIP on almost all tasks, suggesting that one can simply alter the
text encoder in CLIP for extended capabilities such as multilingual
understanding. Our
models and code are available at https://github.com/FlagAI-Open/FlagAI.
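A minimal sketch of the stage-one teacher-learning step, assuming an MSE loss
between the student's sentence embedding and the frozen CLIP text embedding of
the parallel English sentence (the exact distillation loss is an assumption
here):

    import torch.nn.functional as F

    def teacher_distill_loss(student_emb, teacher_emb):
        # student_emb: XLM-R-based text encoder output for a (possibly
        # non-English) sentence; teacher_emb: frozen CLIP text embedding
        # of the parallel English sentence.
        return F.mse_loss(student_emb, teacher_emb.detach())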
|
[
{
"version": "v1",
"created": "Sat, 12 Nov 2022 14:48:55 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 15:39:52 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Chen",
"Zhongzhi",
""
],
[
"Liu",
"Guang",
""
],
[
"Zhang",
"Bo-Wen",
""
],
[
"Ye",
"Fulong",
""
],
[
"Yang",
"Qinghong",
""
],
[
"Wu",
"Ledell",
""
]
] |
new_dataset
| 0.99251 |
2211.10470
|
Alessio Xompero
|
Xavier Weber, Alessio Xompero, Andrea Cavallaro
|
A mixed-reality dataset for category-level 6D pose and size estimation
of hand-occluded containers
|
5 pages, 4 figures, 1 table. Submitted to IEEE ICASSP 2023. Webpage
at https://corsmal.eecs.qmul.ac.uk/pose.html
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Estimating the 6D pose and size of household containers is challenging due to
large intra-class variations in the object properties, such as shape, size,
appearance, and transparency. The task is made more difficult when these
objects are held and manipulated by a person due to varying degrees of hand
occlusions caused by the type of grasps and by the viewpoint of the camera
observing the person holding the object. In this paper, we present a
mixed-reality dataset of hand-occluded containers for category-level 6D object
pose and size estimation. The dataset consists of 138,240 images of rendered
hands and forearms holding 48 synthetic objects, split into 3 grasp categories
over 30 real backgrounds. We re-train and test an existing model for 6D object
pose estimation on our mixed-reality dataset. We discuss the impact of the use
of this dataset in improving the task of 6D pose and size estimation.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 19:14:52 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Weber",
"Xavier",
""
],
[
"Xompero",
"Alessio",
""
],
[
"Cavallaro",
"Andrea",
""
]
] |
new_dataset
| 0.999736 |
2211.10480
|
Yunjin Wang
|
Yunjin Wang, Chia-Hao Chang, Anand Sivasubramaniam, Niranjan
Soundararajan
|
ACIC: Admission-Controlled Instruction Cache
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The front-end bottleneck in datacenter workloads has come under increased
scrutiny, with the growing code footprint, involvement of numerous libraries
and OS services, and the unpredictability in the instruction stream. Our
examination of these workloads points to burstiness in accesses to instruction
blocks, which has also been observed in data accesses. Such burstiness is
largely due to spatial and short-duration temporal localities, that LRU fails
to recognize and optimize for, when a single cache caters to both forms of
locality. Instead, we incorporate a small i-Filter as in previous works to
separate spatial from temporal accesses. However, a simple separation does not
suffice, and we additionally need to predict whether the block will continue to
have temporal locality, after the burst of spatial locality. This combination
of i-Filter and temporal locality predictor constitutes our
Admission-Controlled Instruction Cache (ACIC). ACIC outperforms a number of
state-of-the-art pollution reduction techniques (replacement algorithms,
bypassing mechanisms, victim caches), providing an average speedup of 1.0223
over a baseline LRU-based conventional i-cache (bridging over half of the gap
between LRU and OPT) across several datacenter workloads.
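A toy Python sketch of the admission idea (the real design is hardware; the
FIFO eviction and the fixed reuse-count predictor below are simplifying
assumptions, not the paper's exact mechanisms):

    from collections import deque

    class ACICSketch:
        # A small FIFO i-Filter absorbs spatial bursts, and a block enters
        # the main cache only when a crude reuse counter predicts temporal
        # locality beyond the burst.
        def __init__(self, filter_size=8, cache_size=512):
            self.i_filter = deque(maxlen=filter_size)
            self.cache = deque(maxlen=cache_size)
            self.reuse = {}

        def access(self, block):
            hit = block in self.cache or block in self.i_filter
            self.reuse[block] = self.reuse.get(block, 0) + 1
            if not hit:
                self.i_filter.append(block)
                if self.reuse[block] >= 2:  # predicted temporal locality
                    self.cache.append(block)
            return hit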
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 19:31:48 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Wang",
"Yunjin",
""
],
[
"Chang",
"Chia-Hao",
""
],
[
"Sivasubramaniam",
"Anand",
""
],
[
"Soundararajan",
"Niranjan",
""
]
] |
new_dataset
| 0.971736 |
2211.10567
|
Yao Zhang
|
Yao Zhang, Haokun Chen, Ahmed Frikha, Yezi Yang, Denis Krompass,
Gengyuan Zhang, Jindong Gu, Volker Tresp
|
CL-CrossVQA: A Continual Learning Benchmark for Cross-Domain Visual
Question Answering
|
10 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Question Answering (VQA) is a multi-discipline research task. To
produce the right answer, it requires an understanding of the visual content of
images, the natural language questions, as well as commonsense reasoning over
the information contained in the image and world knowledge. Recently,
large-scale Vision-and-Language Pre-trained Models (VLPMs) have been the
mainstream approach to VQA tasks due to their superior performance. The
standard practice is to fine-tune large-scale VLPMs pre-trained on huge
general-domain datasets using the domain-specific VQA datasets. However, in
reality, the application domain can change over time, necessitating VLPMs to
continually learn and adapt to new domains without forgetting previously
acquired knowledge. Most existing continual learning (CL) research concentrates
on unimodal tasks, whereas a more practical application scenario, i.e., CL on
cross-domain VQA, has not been studied. Motivated by this, we introduce
CL-CrossVQA, a rigorous Continual Learning benchmark for Cross-domain Visual
Question Answering, through which we conduct extensive experiments on 4 VLPMs,
4 CL approaches, and 5 VQA datasets from different domains. In addition, by
probing the forgetting phenomenon of the intermediate layers, we provide
insights into how model architecture affects CL performance, why CL approaches
can help mitigate forgetting in VLPMs to some extent, and how to design CL
approaches suitable for VLPMs in this challenging continual learning
environment. To facilitate future work on CL for cross-domain VQA, we will
release our datasets and code.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 02:43:30 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Zhang",
"Yao",
""
],
[
"Chen",
"Haokun",
""
],
[
"Frikha",
"Ahmed",
""
],
[
"Yang",
"Yezi",
""
],
[
"Krompass",
"Denis",
""
],
[
"Zhang",
"Gengyuan",
""
],
[
"Gu",
"Jindong",
""
],
[
"Tresp",
"Volker",
""
]
] |
new_dataset
| 0.99217 |
2211.10649
|
Xiaoliang Lei
|
Hao Mei, Xiaoliang Lei, Longchao Da, Bin Shi, Hua Wei
|
LibSignal: An Open Library for Traffic Signal Control
|
11 pages + 6 pages appendix. Accepted by NeurIPS 2022 Workshop:
Reinforcement Learning for Real Life. Website:
https://darl-libsignal.github.io/
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a library for cross-simulator comparison of
reinforcement learning models in traffic signal control tasks. This library is
developed to implement recent state-of-the-art reinforcement learning models
with extensible interfaces and unified cross-simulator evaluation metrics. It
supports commonly-used simulators in traffic signal control tasks, including
Simulation of Urban MObility(SUMO) and CityFlow, and multiple benchmark
datasets for fair comparisons. We conducted experiments to validate our
implementation of the models and to calibrate the simulators so that the
experiments from one simulator could be referential to the other. Based on the
validated models and calibrated environments, this paper compares and reports
the performance of current state-of-the-art RL algorithms across different
datasets and simulators. This is the first time that these methods have been
compared fairly under the same datasets with different simulators.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 10:21:50 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Mei",
"Hao",
""
],
[
"Lei",
"Xiaoliang",
""
],
[
"Da",
"Longchao",
""
],
[
"Shi",
"Bin",
""
],
[
"Wei",
"Hua",
""
]
] |
new_dataset
| 0.999733 |
2211.10661
|
Jaikai Wang
|
Jiakai Wang, Zhendong Chen, Zixin Yin, Qinghong Yang, Xianglong Liu
|
Phonemic Adversarial Attack against Audio Recognition in Real World
| null | null | null | null |
cs.SD cs.CR eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, adversarial attacks for audio recognition have attracted much
attention. However, most of the existing studies mainly rely on the
coarse-grain audio features at the instance level to generate adversarial
noises, which leads to expensive generation time costs and weak universal
attacking ability. Motivated by the observation that all speech audio consists
of fundamental phonemes, this paper proposes a phonemic adversarial attack
(PAT) paradigm, which attacks the fine-grained audio features at the phoneme
level commonly shared across audio instances to generate phonemic adversarial
noises, enjoying more general attacking ability with fast generation speed.
Specifically, to accelerate generation, a phoneme-density-balanced sampling
strategy is introduced to sample fewer audio instances that are nonetheless
rich in phonemic features as the training data, via estimating the phoneme
density, which substantially alleviates the heavy dependency on a large
training dataset. Moreover, to promote universal attacking ability, the
phonemic noise is optimized in an asynchronous way with a sliding window, which
enhances the phoneme diversity and thus well captures the critical fundamental
phonemic patterns. By conducting extensive experiments, we comprehensively
investigate the proposed PAT framework and demonstrate that it outperforms the
SOTA baselines by large margins (i.e., at least an 11X speedup and a 78% attacking
ability improvement).
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 11:01:21 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Wang",
"Jiakai",
""
],
[
"Chen",
"Zhendong",
""
],
[
"Yin",
"Zixin",
""
],
[
"Yang",
"Qinghong",
""
],
[
"Liu",
"Xianglong",
""
]
] |
new_dataset
| 0.999401 |
2211.10666
|
Rongjie Huang
|
Chenye Cui, Yi Ren, Jinglin Liu, Rongjie Huang, Zhou Zhao
|
VarietySound: Timbre-Controllable Video to Sound Generation via
Unsupervised Information Disentanglement
| null | null | null | null |
cs.MM cs.CV cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video to sound generation aims to generate realistic and natural sound given
a video input. However, previous video-to-sound generation methods can only
generate a random or average timbre, without any control over or
specialization of the generated sound timbre, so users sometimes cannot obtain
the desired timbre with these methods. In this paper, we pose the
task of generating sound with a specific timbre given a video input and a
reference audio sample. To solve this task, we disentangle each target sound
audio into three components: temporal information, acoustic information, and
background information. We first use three encoders to encode these components
respectively: 1) a temporal encoder to encode temporal information, which is
fed with video frames since the input video shares the same temporal
information as the original audio; 2) an acoustic encoder to encode timbre
information, which takes the original audio as input and discards its temporal
information by a temporal-corrupting operation; and 3) a background encoder to
encode the residual or background sound, which uses the background part of the
original audio as input. To make the generated result achieve better quality
and temporal alignment, we also adopt a mel discriminator and a temporal
discriminator for the adversarial training. Our experimental results on the VAS
dataset demonstrate that our method can generate high-quality audio samples
with good synchronization with events in video and high timbre similarity with
the reference audio.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 11:12:01 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Cui",
"Chenye",
""
],
[
"Ren",
"Yi",
""
],
[
"Liu",
"Jinglin",
""
],
[
"Huang",
"Rongjie",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.995681 |
2211.10701
|
Zhongnian Li
|
Zhongnian Li, Jian Zhang, Mengting Xu, Xinzheng Xu, Daoqiang Zhang
|
Complementary Labels Learning with Augmented Classes
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Complementary Labels Learning (CLL) arises in many real-world tasks such as
private questions classification and online learning, which aims to alleviate
the annotation cost compared with standard supervised learning. Unfortunately,
most previous CLL algorithms assumed a stable environment rather than open and
dynamic scenarios, where data from augmented classes unseen in the training
process might emerge in the testing phase. In this paper, we
propose a novel problem setting called Complementary Labels Learning with
Augmented Classes (CLLAC), which brings the challenge that classifiers trained
by complementary labels should not only be able to classify the instances from
observed classes accurately, but also recognize the instance from the Augmented
Classes in the testing phase. Specifically, by using unlabeled data, we propose
an unbiased estimator of classification risk for CLLAC, which is guaranteed to
be provably consistent. Moreover, we provide a generalization error bound for
the proposed method, which shows that the optimal parametric convergence rate
is achieved for the estimation error. Finally, experimental results on several
benchmark datasets verify the effectiveness of the proposed method.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 13:55:27 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Li",
"Zhongnian",
""
],
[
"Zhang",
"Jian",
""
],
[
"Xu",
"Mengting",
""
],
[
"Xu",
"Xinzheng",
""
],
[
"Zhang",
"Daoqiang",
""
]
] |
new_dataset
| 0.958975 |
2211.10707
|
TaeYoung Kang
|
TaeYoung Kang, Hanbin Lee
|
Suffering from Vaccines or from Government? : Partisan Bias in COVID-19
Vaccine Adverse Events Coverage
|
5 pages, 5 figures, 2 tables
| null | null | null |
cs.CL cs.CY cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vaccine adverse events have been presumed to be a relatively objective
measure that is immune to political polarization. The real-world data, however,
shows the correlation between presidential disapproval ratings and the
subjective severity of adverse events. This paper investigates the partisan
bias in COVID vaccine adverse events coverage with language models that can
classify the topic of vaccine-related articles and the political disposition of
news comments. Based on 90K news articles from 52 major newspaper companies, we
found that conservative media are inclined to report adverse events more
frequently than their liberal counterparts, while the coverage itself was
statistically uncorrelated with the severity of real-world adverse events.
Users who support the conservative opposition party were more likely to write
the popular comments in 2.3K randomly sampled articles on news platforms. This
research implies that partisanship can still play a significant role in
forming public opinion on the COVID vaccine even after the majority of the
population has been vaccinated.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 14:17:07 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Kang",
"TaeYoung",
""
],
[
"Lee",
"Hanbin",
""
]
] |
new_dataset
| 0.996208 |
2211.10715
|
Sen He
|
Sen He, Yi-Zhe Song, Tao Xiang
|
Single Stage Multi-Pose Virtual Try-On
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Multi-pose virtual try-on (MPVTON) aims to fit a target garment onto a person
at a target pose. Compared to traditional virtual try-on (VTON) that fits the
garment but keeps the pose unchanged, MPVTON provides a better try-on
experience, but is also more challenging due to the dual garment and pose
editing objectives. Existing MPVTON methods adopt a pipeline comprising three
disjoint modules including a target semantic layout prediction module, a coarse
try-on image generator and a refinement try-on image generator. These models
are trained separately, leading to sub-optimal model training and
unsatisfactory results. In this paper, we propose a novel single stage model
for MPVTON. Key to our model is a parallel flow estimation module that predicts
the flow fields for both person and garment images conditioned on the target
pose. The predicted flows are subsequently used to warp the appearance feature
maps of the person and the garment images to construct a style map. The map is
then used to modulate the target pose's feature map for target try-on image
generation. With the parallel flow estimation design, our model can be trained
end-to-end in a single stage and is more computationally efficient, resulting
in new SOTA performance on existing MPVTON benchmarks. We further introduce
multi-task training and demonstrate that our model can also be applied for
traditional VTON and pose transfer tasks and achieve comparable performance to
SOTA specialized models on both tasks.
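A PyTorch sketch of the flow-based warping step shared by the person and
garment branches; the grid construction details below are assumptions, and
only the warping operator itself is standard:

    import torch
    import torch.nn.functional as F

    def warp_with_flow(feat, flow):
        # feat: (B, C, H, W) appearance features; flow: (B, H, W, 2) pixel
        # offsets predicted by the flow estimator. Builds a sampling grid
        # and warps the features toward the target pose.
        B, _, H, W = feat.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W),
                                indexing="ij")
        base = torch.stack([xs, ys], dim=-1).float().expand(B, H, W, 2)
        grid = base + flow
        grid = torch.stack([2 * grid[..., 0] / (W - 1) - 1,  # x to [-1, 1]
                            2 * grid[..., 1] / (H - 1) - 1], # y to [-1, 1]
                           dim=-1)
        return F.grid_sample(feat, grid, align_corners=True)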
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 15:02:11 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"He",
"Sen",
""
],
[
"Song",
"Yi-Zhe",
""
],
[
"Xiang",
"Tao",
""
]
] |
new_dataset
| 0.999435 |
2211.10724
|
Youwei Huang
|
Youwei Huang, Tao Zhang, Sen Fang, Youshuai Tan
|
Deep Smart Contract Intent Detection
|
12 pages, 9 figures, conference
| null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, security activities in smart contracts concentrate on vulnerability
detection. Despite early success, we find that developers' intent in writing
smart contracts is a more noteworthy security concern, because smart contracts
with malicious intent have caused significant financial losses for users.
Unfortunately, current approaches to identify the aforementioned malicious
smart contracts rely on smart contract security audits, which entail huge
manpower consumption and financial expenditure. To resolve this issue, we
propose a novel deep learning-based approach, SmartIntentNN, to conduct
automated smart contract intent detection. SmartIntentNN consists of three
primary parts: a pre-trained sentence encoder to generate the contextual
representations of smart contracts, a K-means clustering method to highlight
intent-related representations, and a bidirectional LSTM (long short-term
memory) based multi-label classification network to predict the intents in smart
contracts. To evaluate the performance of SmartIntentNN, we collect more than
40,000 real smart contracts and perform a series of comparison experiments with
our selected baseline approaches. The experimental results demonstrate that
SmartIntentNN outperforms all baselines by up to 0.8212 in terms of the
f1-score metric.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 15:40:26 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Huang",
"Youwei",
""
],
[
"Zhang",
"Tao",
""
],
[
"Fang",
"Sen",
""
],
[
"Tan",
"Youshuai",
""
]
] |
new_dataset
| 0.994342 |
2211.10739
|
Chang Liu
|
Chang Liu, Yuwen Yang, Yue Ding, Hongtao Lu
|
EDEN: A Plug-in Equivariant Distance Encoding to Beyond the 1-WL Test
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The message-passing scheme is the core of graph representation learning.
While most existing message-passing graph neural networks (MPNNs) are
permutation-invariant in graph-level representation learning and
permutation-equivariant in node- and edge-level representation learning, their
expressive power is commonly limited by the 1-Weisfeiler-Lehman (1-WL) graph
isomorphism test. Recently proposed expressive graph neural networks (GNNs)
with specially designed complex message-passing mechanisms are not practical.
To bridge the gap, we propose a plug-in Equivariant Distance ENcoding (EDEN)
for MPNNs. EDEN is derived from a series of interpretable transformations on
the graph's distance matrix. We theoretically prove that EDEN is
permutation-equivariant for graph representation learning at all levels, and
we empirically illustrate that EDEN's expressive power can reach up to the 3-WL
test. Extensive experiments on real-world datasets show that combining EDEN
with conventional GNNs surpasses recent advanced GNNs.
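A simplified sketch of a distance-matrix-based encoding (EDEN's actual
transformations of the distance matrix are more involved; plain shortest-path
bucketing is used here for illustration):

    import numpy as np
    from scipy.sparse.csgraph import shortest_path

    def distance_encoding(adj, max_dist=3):
        # One-hot bucketed shortest-path distances between all node pairs;
        # relabeling the nodes permutes the rows and columns of D in the
        # same way, which is the source of permutation equivariance.
        D = shortest_path(adj, unweighted=True)
        D = np.nan_to_num(D, posinf=max_dist + 1)
        D = np.clip(D, 0, max_dist + 1).astype(int)
        return np.eye(max_dist + 2)[D]  # shape (n, n, max_dist + 2)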
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 16:36:28 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Liu",
"Chang",
""
],
[
"Yang",
"Yuwen",
""
],
[
"Ding",
"Yue",
""
],
[
"Lu",
"Hongtao",
""
]
] |
new_dataset
| 0.992713 |
2211.10763
|
Heng Fan
|
Libo Zhang, Lutao Jiang, Ruyi Ji, Heng Fan
|
PIDray: A Large-scale X-ray Benchmark for Real-World Prohibited Item
Detection
|
Tech. report. arXiv admin note: text overlap with arXiv:2108.07020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic security inspection relying on computer vision technology is a
challenging task in real-world scenarios due to many factors, such as
intra-class variance, class imbalance, and occlusion. Most previous methods
rarely touch the cases where the prohibited items are deliberately hidden in
messy objects because of the scarcity of large-scale datasets, hindering their
applications. To address this issue and facilitate related research, we present
a large-scale dataset, named PIDray, which covers various cases in real-world
scenarios for prohibited item detection, especially for deliberately hidden
items. Specifically, PIDray collects 124,486 X-ray images for 12 categories of
prohibited items, and each image is manually annotated with careful
inspection, which makes it, to the best of our knowledge, the largest
prohibited item detection dataset to date. Meanwhile, we propose a general
divide-and-conquer pipeline to
develop baseline algorithms on PIDray. Specifically, we adopt the tree-like
structure to suppress the influence of the long-tailed issue in the PIDray
dataset, where the first coarse-grained node is tasked with binary
classification to alleviate the influence of the head categories, while the
subsequent fine-grained nodes are dedicated to the specific tasks of the tail
categories. Based on this simple yet effective scheme, we offer strong
task-specific baselines across object detection, instance segmentation, and
multi-label classification tasks and verify the generalization ability on
common datasets (e.g., COCO and PASCAL VOC). Extensive experiments on PIDray
demonstrate that the proposed method performs favorably against current
state-of-the-art methods, especially for deliberately hidden items. Our
benchmark and codes will be released at https://github.com/lutao2021/PIDray.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 18:31:34 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Zhang",
"Libo",
""
],
[
"Jiang",
"Lutao",
""
],
[
"Ji",
"Ruyi",
""
],
[
"Fan",
"Heng",
""
]
] |
new_dataset
| 0.99987 |
2211.10780
|
Youssef Mohamed
|
Youssef Mohamed, Mohamed Abdelfattah, Shyma Alhuwaider, Feifan Li,
Xiangliang Zhang, Kenneth Ward Church, Mohamed Elhoseiny
|
ArtELingo: A Million Emotion Annotations of WikiArt with Emphasis on
Diversity over Language and Culture
|
9 pages, Accepted at EMNLP 22, for more details see
https://www.artelingo.org/
| null | null | null |
cs.CL cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces ArtELingo, a new benchmark and dataset, designed to
encourage work on diversity across languages and cultures. Following ArtEmis, a
collection of 80k artworks from WikiArt with 0.45M emotion labels and
English-only captions, ArtELingo adds another 0.79M annotations in Arabic and
Chinese, plus 4.8K in Spanish to evaluate "cultural-transfer" performance. More
than 51K artworks have 5 annotations or more in 3 languages. This diversity
makes it possible to study similarities and differences across languages and
cultures. Further, we investigate captioning tasks, and find diversity improves
the performance of baseline models. ArtELingo is publicly available at
https://www.artelingo.org/ with standard splits and baseline models. We hope
our work will help ease future research on multilinguality and culturally-aware
AI.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 19:34:18 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Mohamed",
"Youssef",
""
],
[
"Abdelfattah",
"Mohamed",
""
],
[
"Alhuwaider",
"Shyma",
""
],
[
"Li",
"Feifan",
""
],
[
"Zhang",
"Xiangliang",
""
],
[
"Church",
"Kenneth Ward",
""
],
[
"Elhoseiny",
"Mohamed",
""
]
] |
new_dataset
| 0.999353 |
2211.10806
|
Constantinos Patsakis
|
Alexandros Zacharis and Constantinos Patsakis
|
AiCEF: An AI-assisted Cyber Exercise Content Generation Framework Using
Named Entity Recognition
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content generation that is both relevant and up to date with the current
threats of the target audience is a critical element in the success of any
Cyber Security Exercise (CSE). Through this work, we explore the results of
applying machine learning techniques to unstructured information sources to
generate structured CSE content. The corpus of our work is a large dataset of
publicly available cyber security articles that have been used to predict
future threats and to form the skeleton for new exercise scenarios. Machine
learning techniques, like named entity recognition (NER) and topic extraction,
have been utilised to structure the information based on a novel ontology we
developed, named Cyber Exercise Scenario Ontology (CESO). Moreover, we used
clustering with outliers to classify the generated extracted data into objects
of our ontology. Graph comparison methodologies were used to match generated
scenario fragments to known threat actors' tactics and help enrich the proposed
scenario accordingly with the help of synthetic text generators. CESO has also
been chosen as the prominent way to express both fragments and the final
proposed scenario content by our AI-assisted Cyber Exercise Framework (AiCEF).
Our methodology was put to the test by providing a set of generated scenarios
for evaluation to a group of experts, to be used as part of a real-world
awareness tabletop exercise.
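As an illustration, a generic spaCy NER pass over an article; the framework's
actual NER model is tailored to the cyber domain and maps entities into CESO
objects, which is not shown here:

    import spacy

    nlp = spacy.load("en_core_web_sm")  # generic English model, a stand-in

    def extract_entities(article_text):
        doc = nlp(article_text)
        return [(ent.text, ent.label_) for ent in doc.ents]

    print(extract_entities("APT29 targeted European ministries in March."))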
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 21:42:12 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Zacharis",
"Alexandros",
""
],
[
"Patsakis",
"Constantinos",
""
]
] |
new_dataset
| 0.99858 |
2211.10894
|
\.Ismail Emir Y\"uksel
|
\.Ismail Emir Y\"uksel, Ataberk Olgun, Behzad Salami, F. Nisa
Bostanc{\i}, Yahya Can Tu\u{g}rul, A. Giray Ya\u{g}l{\i}k\c{c}{\i}, Nika
Mansouri Ghiasi, Onur Mutlu, O\u{g}uz Ergin
|
TuRaN: True Random Number Generation Using Supply Voltage Underscaling
in SRAMs
| null | null | null | null |
cs.AR cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Prior works propose SRAM-based TRNGs that extract entropy from SRAM arrays.
SRAM arrays are widely used in the majority of specialized or general-purpose
chips to store data inside the chip during computation. Thus, SRAM-based TRNGs
present a low-cost alternative to dedicated hardware TRNGs.
However, existing SRAM-based TRNGs suffer from 1) low TRNG throughput, 2) high
energy consumption, 3) high TRNG latency, and 4) the inability to generate true
random numbers continuously, which limits the application space of SRAM-based
TRNGs. Our goal in this paper is to design an SRAM-based TRNG that overcomes
these four key limitations and thus, extends the application space of
SRAM-based TRNGs. To this end, we propose TuRaN, a new high-throughput,
energy-efficient, and low-latency SRAM-based TRNG that can sustain continuous
operation. TuRaN leverages the key observation that accessing SRAM cells
results in random access failures when the supply voltage is reduced below the
manufacturer-recommended supply voltage. TuRaN generates random numbers at high
throughput by repeatedly accessing SRAM cells with reduced supply voltage and
post-processing the resulting random faults using the SHA-256 hash function. To
demonstrate the feasibility of TuRaN, we conduct SPICE simulations on different
process nodes and analyze the potential of access failure for use as an entropy
source. We verify and support our simulation results by conducting real-world
experiments on two commercial off-the-shelf FPGA boards. We evaluate the
quality of the random numbers generated by TuRaN using the widely-adopted NIST
standard randomness tests and observe that TuRaN passes all tests. TuRaN
generates true random numbers with (i) an average (maximum) throughput of
1.6Gbps (1.812Gbps), (ii) 0.11nJ/bit energy consumption, and (iii) 278.46us
latency.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 07:45:07 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Yüksel",
"İsmail Emir",
""
],
[
"Olgun",
"Ataberk",
""
],
[
"Salami",
"Behzad",
""
],
[
"Bostancı",
"F. Nisa",
""
],
[
"Tuğrul",
"Yahya Can",
""
],
[
"Yağlıkçı",
"A. Giray",
""
],
[
"Ghiasi",
"Nika Mansouri",
""
],
[
"Mutlu",
"Onur",
""
],
[
"Ergin",
"Oğuz",
""
]
] |
new_dataset
| 0.985153 |
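The TuRaN abstract above describes repeatedly reading under-volted SRAM cells and post-processing the resulting random access failures with SHA-256. Below is a minimal Python sketch of that whitening step; the 512-bit block size and the software stand-in for hardware reads are illustrative assumptions, not details from the paper.

```python
import hashlib
import random

def whiten_sram_failures(raw_bits, block_bits=512):
    """Condition raw SRAM access-failure bits into random bytes via SHA-256.

    raw_bits: list of 0/1 samples read back from SRAM cells accessed at
    reduced supply voltage (a Python list stands in for hardware reads).
    Each 512-bit block of raw entropy is hashed down to 256 output bits.
    """
    out = bytearray()
    for i in range(0, len(raw_bits) - block_bits + 1, block_bits):
        block = raw_bits[i:i + block_bits]
        # Pack the bit list into bytes before hashing.
        packed = int("".join(map(str, block)), 2).to_bytes(block_bits // 8, "big")
        out += hashlib.sha256(packed).digest()
    return bytes(out)

# Toy usage: pseudo-random bits stand in for real access failures.
raw = [random.getrandbits(1) for _ in range(2048)]
print(whiten_sram_failures(raw).hex()[:32])
```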
2211.10923
|
Ruohan Meng
|
Ruohan Meng, Zhili Zhou, Qi Cui, Kwok-Yan Lam, Alex Kot
|
Traceable and Authenticable Image Tagging for Fake News Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To prevent fake news images from misleading the public, it is desirable not
only to verify the authenticity of news images but also to trace the source of
fake news, so as to provide a complete forensic chain for reliable fake news
detection. To simultaneously achieve the goals of authenticity verification and
source tracing, we propose a traceable and authenticable image tagging approach
that is based on a design of Decoupled Invertible Neural Network (DINN). The
designed DINN can simultaneously embed the dual-tags, \textit{i.e.},
authenticable tag and traceable tag, into each news image before publishing,
and then separately extract them for authenticity verification and source
tracing. Moreover, to improve the accuracy of dual-tags extraction, we design a
parallel Feature Aware Projection Model (FAPM) to help the DINN preserve
essential tag information. In addition, we define a Distance Metric-Guided
Module (DMGM) that learns asymmetric one-class representations to enable the
dual-tags to achieve different robustness performances under malicious
manipulations. Extensive experiments, on diverse datasets and unseen
manipulations, demonstrate that the proposed tagging approach achieves
excellent performance in the aspects of both authenticity verification and
source tracing for reliable fake news detection and outperforms the prior
works.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 09:42:27 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Meng",
"Ruohan",
""
],
[
"Zhou",
"Zhili",
""
],
[
"Cui",
"Qi",
""
],
[
"Lam",
"Kwok-Yan",
""
],
[
"Kot",
"Alex",
""
]
] |
new_dataset
| 0.982175 |
2211.10927
|
Jiahao Nie
|
Jiahao Nie, Zhiwei He, Yuxiang Yang, Mingyu Gao, Jing Zhang
|
GLT-T: Global-Local Transformer Voting for 3D Single Object Tracking in
Point Clouds
|
Accepted to AAAI 2023. The source code and models will be available
at https://github.com/haooozi/GLT-T
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current 3D single object tracking methods are typically based on VoteNet, a
3D region proposal network. Despite the success, using a single seed point
feature as the cue for offset learning in VoteNet prevents high-quality 3D
proposals from being generated. Moreover, seed points with different importance
are treated equally in the voting process, aggravating this defect. To address
these issues, we propose a novel global-local transformer voting scheme to
provide more informative cues and guide the model to pay more attention to
potential seed points, promoting the generation of high-quality 3D proposals.
Technically, a global-local transformer (GLT) module is employed to integrate
object- and patch-aware prior into seed point features to effectively form
strong feature representation for geometric positions of the seed points, thus
providing more robust and accurate cues for offset learning. Subsequently, a
simple yet effective training strategy is designed to train the GLT module. We
develop an importance prediction branch to learn the potential importance of
the seed points and treat the output weights vector as a training constraint
term. By incorporating the above components together, we exhibit a superior
tracking method GLT-T. Extensive experiments on challenging KITTI and NuScenes
benchmarks demonstrate that GLT-T achieves state-of-the-art performance in the
3D single object tracking task. Besides, further ablation studies show the
advantages of the proposed global-local transformer voting scheme over the
original VoteNet. Code and models will be available at
https://github.com/haooozi/GLT-T.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 09:53:24 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Nie",
"Jiahao",
""
],
[
"He",
"Zhiwei",
""
],
[
"Yang",
"Yuxiang",
""
],
[
"Gao",
"Mingyu",
""
],
[
"Zhang",
"Jing",
""
]
] |
new_dataset
| 0.995494 |
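The GLT-T abstract above argues that seed points with different importance should not vote equally. The toy sketch below isolates the idea of importance-weighted vote aggregation; it is a stand-in for intuition only, not the paper's transformer-based architecture.

```python
import numpy as np

def weighted_vote_center(seeds, offsets, weights):
    """Aggregate per-seed votes into a single 3D proposal center.

    seeds:   (N, 3) seed point coordinates
    offsets: (N, 3) per-seed offsets regressed toward the object center
    weights: (N,)   predicted importance of each seed (non-negative)
    """
    votes = seeds + offsets                   # each seed votes for a center
    w = weights / (weights.sum() + 1e-8)      # normalize importance weights
    return (w[:, None] * votes).sum(axis=0)   # importance-weighted mean

seeds = np.random.randn(128, 3)
offsets = 0.1 * np.random.randn(128, 3)
weights = np.random.rand(128)
print(weighted_vote_center(seeds, offsets, weights))
```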
2211.10966
|
Shashank Singh
|
Karan Uppal, Jaeah Kim, Shashank Singh
|
Decoding Attention from Gaze: A Benchmark Dataset and End-to-End Models
|
To be published in Proceedings of the NeurIPS 2022 Gaze Meets ML
Workshop
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Eye-tracking has potential to provide rich behavioral data about human
cognition in ecologically valid environments. However, analyzing this rich data
is often challenging. Most automated analyses are specific to simplistic
artificial visual stimuli with well-separated, static regions of interest,
while most analyses in the context of complex visual stimuli, such as most
natural scenes, rely on laborious and time-consuming manual annotation. This
paper studies using computer vision tools for "attention decoding", the task of
assessing the locus of a participant's overt visual attention over time. We
provide a publicly available Multiple Object Eye-Tracking (MOET) dataset,
consisting of gaze data from participants tracking specific objects, annotated
with labels and bounding boxes, in crowded real-world videos, for training and
evaluating attention decoding algorithms. We also propose two end-to-end deep
learning models for attention decoding and compare these to state-of-the-art
heuristic methods.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 12:24:57 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Uppal",
"Karan",
""
],
[
"Kim",
"Jaeah",
""
],
[
"Singh",
"Shashank",
""
]
] |
new_dataset
| 0.996424 |
2211.10986
|
Zengzhi Wang
|
Zengzhi Wang, Rui Xia, Jianfei Yu
|
UnifiedABSA: A Unified ABSA Framework Based on Multi-task Instruction
Tuning
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aspect-Based Sentiment Analysis (ABSA) aims to provide fine-grained
aspect-level sentiment information. There are many ABSA tasks, and the current
dominant paradigm is to train task-specific models for each task. However,
application scenarios of ABSA tasks are often diverse. This solution usually
requires a large amount of labeled data from each task to perform excellently.
These dedicated models are separately trained and separately predicted,
ignoring the relationship between tasks. To tackle these issues, we present
UnifiedABSA, a general-purpose ABSA framework based on multi-task instruction
tuning, which can uniformly model various tasks and capture the inter-task
dependency with multi-task learning. Extensive experiments on two benchmark
datasets show that UnifiedABSA can significantly outperform dedicated models on
11 ABSA tasks and show its superiority in terms of data efficiency.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 14:21:09 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Wang",
"Zengzhi",
""
],
[
"Xia",
"Rui",
""
],
[
"Yu",
"Jianfei",
""
]
] |
new_dataset
| 0.999301 |
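Multi-task instruction tuning, as in UnifiedABSA above, casts every ABSA task into a shared text-to-text format so a single model can serve them all. A minimal sketch of such task-to-instruction rendering follows; the template wording is hypothetical, not the paper's actual prompts.

```python
def to_instruction(task: str, sentence: str) -> str:
    """Render one ABSA sample as an instruction-style input string.

    The templates below are illustrative, not the ones used by UnifiedABSA.
    """
    templates = {
        "aspect_extraction":
            "Task: extract all aspect terms.\nSentence: {s}\nAspects:",
        "aspect_sentiment":
            "Task: classify the sentiment of each aspect.\nSentence: {s}\nAnswer:",
        "pair_extraction":
            "Task: extract (aspect, sentiment) pairs.\nSentence: {s}\nPairs:",
    }
    return templates[task].format(s=sentence)

print(to_instruction("pair_extraction",
                     "The battery life is great but the screen is dim."))
```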
2211.11001
|
Tung Cao Hoang
|
Giang Hoang, Tuan Nguyen Dinh, Tung Cao Hoang, Son Le Duy, Keisuke
Hihara, Yumeka Utada, Akihiko Torii, Naoki Izumi, Long Tran Quoc
|
F2SD: A dataset for end-to-end group detection algorithms
|
Accepted at ICMV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The lack of large-scale datasets has been impeding the advance of deep
learning approaches to the problem of F-formation detection. Moreover, most
research works on this problem rely on input sensor signals of object location
and orientation rather than image signals. To address this, we develop a new,
large-scale dataset of simulated images for F-formation detection, called
F-formation Simulation Dataset (F2SD). F2SD contains nearly 60,000 images
simulated from GTA-5, with bounding boxes and orientation information on
images, making it useful for a wide variety of modelling approaches. It is also
closer to practical scenarios, where three-dimensional location and orientation
information are costly to record. It is challenging to construct such a
large-scale simulated dataset while keeping it realistic. Furthermore, the
available research relies on conventional methods that do not detect groups
directly from the image. In this work, we propose (1) a
large-scale simulation dataset F2SD and a pipeline for F-formation simulation,
(2) a first-ever end-to-end baseline model for the task, and experiments on our
simulation dataset.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 15:42:22 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Hoang",
"Giang",
""
],
[
"Dinh",
"Tuan Nguyen",
""
],
[
"Hoang",
"Tung Cao",
""
],
[
"Duy",
"Son Le",
""
],
[
"Hihara",
"Keisuke",
""
],
[
"Utada",
"Yumeka",
""
],
[
"Torii",
"Akihiko",
""
],
[
"Izumi",
"Naoki",
""
],
[
"Quoc",
"Long Tran",
""
]
] |
new_dataset
| 0.999851 |
2211.11113
|
Xinyi Zhou
|
Xinyi Zhou, Reza Zafarani, Emilio Ferrara
|
From Fake News to #FakeNews: Mining Direct and Indirect Relationships
among Hashtags for Fake News Detection
| null | null | null | null |
cs.SI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The COVID-19 pandemic has gained worldwide attention and allowed fake news,
such as ``COVID-19 is the flu,'' to spread quickly and widely on social media.
Combating this coronavirus infodemic demands effective methods to detect fake
news. To this end, we propose a method to infer news credibility from hashtags
involved in news dissemination on social media, motivated by the tight
connection between hashtags and news credibility observed in our empirical
analyses. We first introduce a new graph that captures all (direct and
\textit{indirect}) relationships among hashtags. Then, a language-independent
semi-supervised algorithm is developed to predict fake news based on this
constructed graph. This study is the first to investigate the indirect relationships
among hashtags; the proposed approach can be extended to any homogeneous graph
to capture a comprehensive relationship among nodes. Language independence
opens the proposed method to multilingual fake news detection. Experiments
conducted on two real-world datasets demonstrate the effectiveness of our
approach in identifying fake news, especially at an \textit{early} stage of
propagation.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 22:53:12 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Zhou",
"Xinyi",
""
],
[
"Zafarani",
"Reza",
""
],
[
"Ferrara",
"Emilio",
""
]
] |
new_dataset
| 0.99759 |
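The abstract above describes a language-independent semi-supervised algorithm over a hashtag relationship graph but does not spell it out. The sketch below uses generic weighted label propagation as a stand-in; it is language-independent in the same sense, since it touches only graph structure. The example hashtags and weights are made up.

```python
import networkx as nx

def propagate_credibility(g: nx.Graph, seeds: dict, iters: int = 20) -> dict:
    """Semi-supervised label propagation over a hashtag graph.

    g:     hashtag graph; edge weight = strength of (direct or indirect) relation
    seeds: {hashtag: score} with known credibility, e.g. 1.0 fake / 0.0 credible
    Returns a credibility score for every hashtag in the graph.
    """
    scores = {n: seeds.get(n, 0.5) for n in g}   # unknown nodes start neutral
    for _ in range(iters):
        nxt = {}
        for n in g:
            if n in seeds:                        # clamp labeled nodes
                nxt[n] = seeds[n]
                continue
            nbrs = list(g[n])
            if not nbrs:
                nxt[n] = scores[n]
                continue
            wsum = sum(g[n][m].get("weight", 1.0) for m in nbrs)
            nxt[n] = sum(g[n][m].get("weight", 1.0) * scores[m]
                         for m in nbrs) / wsum
        scores = nxt
    return scores

g = nx.Graph()
g.add_edge("#covid19", "#plandemic", weight=2.0)
g.add_edge("#covid19", "#vaccine", weight=1.0)
print(propagate_credibility(g, {"#plandemic": 1.0, "#vaccine": 0.0}))
```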
2211.11155
|
Likai Wang
|
Likai Wang, Ruize Han, Wei Feng, Song Wang
|
From Indoor To Outdoor: Unsupervised Domain Adaptive Gait Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Gait recognition is an important AI task, which has progressed rapidly
with the development of deep learning. However, existing learning-based gait
recognition methods mainly focus on a single domain, especially the
constrained laboratory environment. In this paper, we study a new problem of
unsupervised domain adaptive gait recognition (UDA-GR), that learns a gait
identifier with supervised labels from the indoor scenes (source domain), and
is applied to the outdoor wild scenes (target domain). For this purpose, we
develop an uncertainty estimation and regularization based UDA-GR method.
Specifically, we investigate the characteristics of gaits in the indoor and
outdoor scenes to estimate the gait sample uncertainty, which is used in
the unsupervised fine-tuning on the target domain to alleviate the noise of
the pseudo labels. We also establish a new benchmark for the proposed problem,
experimental results on which show the effectiveness of the proposed method. We
will release the benchmark and source code of this work to the public.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 02:47:29 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Wang",
"Likai",
""
],
[
"Han",
"Ruize",
""
],
[
"Feng",
"Wei",
""
],
[
"Wang",
"Song",
""
]
] |
new_dataset
| 0.953801 |
2211.11165
|
Likai Wang
|
Likai Wang, Xiangqun Zhang, Ruize Han, Jialin Yang, Xiaoyu Li, Wei
Feng, Song Wang
|
A Benchmark of Video-Based Clothes-Changing Person Re-Identification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Person re-identification (Re-ID) is a classical computer vision task and has
achieved great progress so far. Recently, long-term Re-ID with clothes-changing
has attracted increasing attention. However, existing methods mainly focus on
the image-based setting, where richer temporal information is overlooked. In this
paper, we focus on the relatively new yet practical problem of clothes-changing
video-based person re-identification (CCVReID), which is less studied. We
systematically study this problem by simultaneously considering the challenge
of the clothes inconsistency issue and the temporal information contained in
the video sequence for the person Re-ID problem. Based on this, we develop a
two-branch confidence-aware re-ranking framework for handling the CCVReID
problem. The proposed framework integrates two branches that consider both the
classical appearance features and cloth-free gait features through a
confidence-guided re-ranking strategy. This provides a baseline method
for further studies. Also, we build two new benchmark datasets for CCVReID
problem, including a large-scale synthetic video dataset and a real-world one,
both containing human sequences with various clothing changes. We will release
the benchmark and code of this work to the public.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 03:38:18 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Wang",
"Likai",
""
],
[
"Zhang",
"Xiangqun",
""
],
[
"Han",
"Ruize",
""
],
[
"Yang",
"Jialin",
""
],
[
"Li",
"Xiaoyu",
""
],
[
"Feng",
"Wei",
""
],
[
"Wang",
"Song",
""
]
] |
new_dataset
| 0.990711 |
2211.11185
|
Yiqin Wang
|
Yiqin Wang, Yuanbo Li, Yi Chen, Ziming Yu, Chong Han
|
Terahertz Channel Measurement and Analysis on a University Campus Street
|
6 pages, 15 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Owing to its abundant bandwidth, the Terahertz (0.1-10 THz) band is a
promising spectrum to support sixth-generation (6G) and beyond communications.
As the foundation of channel studies in this spectrum, channel measurement
campaigns are ongoing to cover representative 6G communication scenarios and
promising THz frequency bands. In this paper, a wideband channel measurement in an L-shaped
university campus street is conducted at 306-321 GHz and 356-371 GHz. In
particular, ten line-of-sight (LoS) and eight non-line-of-sight (NLoS) points
are measured at the two frequency bands, respectively. In total, 6480 channel
impulse responses (CIRs) are obtained from the measurement, based on which
multi-path propagation in the L-shaped roadway in the THz band is elaborated to
identify major scatterers of walls, vehicles, etc. in the environment and their
impact on multi-path components (MPCs). Furthermore, outdoor THz channel
characteristics in the two frequency bands are analyzed, including path losses,
shadow fading, cluster parameters, delay spread and angular spread. In contrast
with the counterparts in the similar outdoor scenario at lower frequencies, the
results verify the sparsity of MPCs at THz frequencies and indicate smaller
power spreads in both temporal and spatial domains in the THz band.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 05:04:15 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Wang",
"Yiqin",
""
],
[
"Li",
"Yuanbo",
""
],
[
"Chen",
"Yi",
""
],
[
"Yu",
"Ziming",
""
],
[
"Han",
"Chong",
""
]
] |
new_dataset
| 0.998548 |
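Delay spread is one of the channel characteristics analyzed above. The standard RMS delay spread computation over a power delay profile is sketched below; the example delays and powers are made up, not measurement data from the paper.

```python
import numpy as np

def rms_delay_spread(delays_ns: np.ndarray, powers_lin: np.ndarray) -> float:
    """RMS delay spread of a power delay profile.

    delays_ns:  multi-path component (MPC) delays in nanoseconds
    powers_lin: MPC powers in linear scale (not dB)
    """
    p = powers_lin / powers_lin.sum()
    mean_tau = (p * delays_ns).sum()          # power-weighted mean delay
    return float(np.sqrt((p * (delays_ns - mean_tau) ** 2).sum()))

delays = np.array([0.0, 15.0, 40.0, 90.0])    # hypothetical MPC delays
powers = 10 ** (np.array([0.0, -6.0, -12.0, -20.0]) / 10)  # dB -> linear
print(f"RMS delay spread: {rms_delay_spread(delays, powers):.2f} ns")
```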
2211.11225
|
Nicolas Jonason
|
Nicolas Jonason, Bob L.T. Sturm
|
TimbreCLIP: Connecting Timbre to Text and Images
|
Submitted to AAAI workshop on creative AI across modalities
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present work in progress on TimbreCLIP, an audio-text cross-modal
embedding trained on single instrument notes. We evaluate the models with a
cross-modal retrieval task on synth patches. Finally, we demonstrate the
application of TimbreCLIP on two tasks: text-driven audio equalization and
timbre to image generation.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 07:40:01 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Jonason",
"Nicolas",
""
],
[
"Sturm",
"Bob L. T.",
""
]
] |
new_dataset
| 0.999318 |
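CLIP-style cross-modal embeddings such as TimbreCLIP above are typically trained with a symmetric InfoNCE objective over paired samples. The sketch below shows that loss for paired audio/text embeddings; whether TimbreCLIP uses exactly this formulation is an assumption.

```python
import numpy as np

def clip_loss(audio_emb: np.ndarray, text_emb: np.ndarray,
              temp: float = 0.07) -> float:
    """Symmetric InfoNCE loss used by CLIP-style cross-modal training.

    audio_emb, text_emb: (B, D) L2-normalized embeddings of paired
    instrument notes and their text descriptions.
    """
    logits = audio_emb @ text_emb.T / temp    # (B, B) similarity matrix
    labels = np.arange(len(logits))           # i-th audio matches i-th text

    def ce(mat):                              # row-wise cross-entropy
        mat = mat - mat.max(axis=1, keepdims=True)
        logp = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()   # log-prob of matched pairs

    return 0.5 * (ce(logits) + ce(logits.T))

a = np.random.randn(8, 64); a /= np.linalg.norm(a, axis=1, keepdims=True)
t = np.random.randn(8, 64); t /= np.linalg.norm(t, axis=1, keepdims=True)
print(clip_loss(a, t))
```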
2211.11304
|
Ting Han
|
Ting Han, Kunhao Pan, Xinyu Chen, Dingjie Song, Yuchen Fan, Xinyu Gao,
Ruyi Gan, Jiaxing Zhang
|
TCBERT: A Technical Report for Chinese Topic Classification BERT
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bidirectional Encoder Representations from Transformers or
BERT~\cite{devlin-etal-2019-bert} has been one of the base models for various
NLP tasks due to its remarkable performance. Variants customized for different
languages and tasks are proposed to further improve the performance. In this
work, we investigate supervised continued
pre-training~\cite{gururangan-etal-2020-dont} on BERT for Chinese topic
classification task. Specifically, we incorporate prompt-based learning and
contrastive learning into the pre-training. To adapt to the task of Chinese
topic classification, we collect around 2.1M Chinese samples spanning various
topics. The pre-trained Chinese Topic Classification BERTs (TCBERTs) with
different parameter sizes are open-sourced at
\url{https://huggingface.co/IDEA-CCNL}.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 09:45:15 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Han",
"Ting",
""
],
[
"Pan",
"Kunhao",
""
],
[
"Chen",
"Xinyu",
""
],
[
"Song",
"Dingjie",
""
],
[
"Fan",
"Yuchen",
""
],
[
"Gao",
"Xinyu",
""
],
[
"Gan",
"Ruyi",
""
],
[
"Zhang",
"Jiaxing",
""
]
] |
new_dataset
| 0.992722 |
2211.11331
|
Floran de Putter
|
Maarten Molendijk, Floran de Putter, Manil Gomony, Pekka
J\"a\"askel\"ainen and Henk Corporaal
|
BrainTTA: A 35 fJ/op Compiler Programmable Mixed-Precision
Transport-Triggered NN SoC
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, accelerators for extremely quantized deep neural network (DNN)
inference with operand widths as low as 1-bit have gained popularity due to
their ability to largely cut down energy cost per inference. In this paper, a
flexible SoC with mixed-precision support is presented. Contrary to the current
trend of fixed-datapath accelerators, this architecture makes use of a flexible
datapath based on a Transport-Triggered Architecture (TTA). The architecture is
fully programmable using C. The accelerator has a peak energy efficiency of
35/67/405 fJ/op (binary, ternary, and 8-bit precision) and a throughput of
614/307/77 GOPS, which is unprecedented for a programmable architecture.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 10:33:13 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Molendijk",
"Maarten",
""
],
[
"de Putter",
"Floran",
""
],
[
"Gomony",
"Manil",
""
],
[
"Jääskeläinen",
"Pekka",
""
],
[
"Corporaal",
"Henk",
""
]
] |
new_dataset
| 0.986686 |
2211.11354
|
Simon Bultmann
|
Julian Hau, Simon Bultmann, Sven Behnke
|
Object-level 3D Semantic Mapping using a Network of Smart Edge Sensors
|
9 pages, 12 figures, 6th IEEE International Conference on Robotic
Computing (IRC), Naples, Italy, December 2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous robots that interact with their environment require a detailed
semantic scene model. For this, volumetric semantic maps are frequently used.
The scene understanding can further be improved by including object-level
information in the map. In this work, we extend a multi-view 3D semantic
mapping system consisting of a network of distributed smart edge sensors with
object-level information, to enable downstream tasks that need object-level
input. Objects are represented in the map via their 3D mesh model or as an
object-centric volumetric sub-map that can model arbitrary object geometry when
no detailed 3D model is available. We propose a keypoint-based approach to
estimate object poses via PnP and refinement via ICP alignment of the 3D object
model with the observed point cloud segments. Object instances are tracked to
integrate observations over time and to be robust against temporary occlusions.
Our method is evaluated on the public Behave dataset, where it shows pose
estimation accuracy within a few centimeters, and in real-world experiments
with the sensor network in a challenging lab environment, where multiple
chairs and a table are tracked through the scene online and in real time, even
under high occlusions.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 11:13:08 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Hau",
"Julian",
""
],
[
"Bultmann",
"Simon",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.971297 |
2211.11362
|
Deeksha Arya
|
Deeksha Arya (1), Hiroya Maeda (2), Sanjay Kumar Ghosh (3), Durga
Toshniwal (3), Hiroshi Omata (1), Takehiro Kashiyama (4), Yoshihide Sekimoto
(1) ((1) The University of Tokyo, Japan, (2) UrbanX Technologies, Inc.,
Tokyo, Japan (3) Indian Institute of Technology Roorkee, India, (4) Osaka
University of Economics, Japan)
|
Crowdsensing-based Road Damage Detection Challenge (CRDDC-2022)
|
9 pages 2 figures 5 tables. arXiv admin note: text overlap with
arXiv:2011.08740
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper summarizes the Crowdsensing-based Road Damage Detection Challenge
(CRDDC), a Big Data Cup organized as a part of the IEEE International
Conference on Big Data'2022. The Big Data Cup challenges involve a released
dataset and a well-defined problem with clear evaluation metrics. The
challenges run on a data competition platform that maintains a real-time online
evaluation system for the participants. In the presented case, the data
constitute 47,420 road images collected from India, Japan, the Czech Republic,
Norway, the United States, and China to propose methods for automatically
detecting road damages in these countries. More than 60 teams from 19 countries
registered for this competition. The submitted solutions were evaluated using
five leaderboards based on performance for unseen test images from the
aforementioned six countries. This paper encapsulates the top 11 solutions
proposed by these teams. The best-performing model utilizes ensemble learning
based on YOLO and Faster-RCNN series models to yield an F1 score of 76% for
test data combined from all 6 countries. The paper concludes with a comparison
of current and past challenges and provides direction for the future.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 11:29:21 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Arya",
"Deeksha",
""
],
[
"Maeda",
"Hiroya",
""
],
[
"Ghosh",
"Sanjay Kumar",
""
],
[
"Toshniwal",
"Durga",
""
],
[
"Omata",
"Hiroshi",
""
],
[
"Kashiyama",
"Takehiro",
""
],
[
"Sekimoto",
"Yoshihide",
""
]
] |
new_dataset
| 0.998182 |
2211.11444
|
Peter Hillmann
|
Erik Heiland, Peter Hillmann
|
(B)LOCKBOX -- Secure Software Architecture with Blockchain Verification
| null | null | null | null |
cs.CR cs.DC cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
According to experts, one third of all IT vulnerabilities today are due to
inadequate software verification. Internal program processes are not
sufficiently secured against manipulation by attackers, especially if access
has been gained. There is a lack of internal control instances that can monitor
and control program flows. Especially when a software vulnerability becomes
known, quick action is required, whereby the consequences for an individual
application are often not foreseeable. With our approach (B)LOCKBOX, software
building blocks act as verified entities within a transaction-based blockchain
network. Source code, binaries, and application execution become supervised.
Unwanted interference and manipulation are prevented by the integrity of the
distributed system.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 13:33:05 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Heiland",
"Erik",
""
],
[
"Hillmann",
"Peter",
""
]
] |
new_dataset
| 0.981816 |
2211.11479
|
Charilaos Papaioannou
|
Charilaos Papaioannou, Ioannis Valiantzas, Theodoros Giannakopoulos,
Maximos Kaliakatsos-Papakostas, Alexandros Potamianos
|
A Dataset for Greek Traditional and Folk Music: Lyra
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Studying under-represented music traditions under the MIR scope is crucial,
not only for developing novel analysis tools, but also for unveiling musical
functions that might prove useful in studying world musics. This paper presents
a dataset for Greek Traditional and Folk music that includes 1570 pieces,
summing to around 80 hours of data. The dataset incorporates YouTube
timestamped links for retrieving audio and video, along with rich metadata
information with regards to instrumentation, geography and genre, among others.
The content has been collected from a Greek documentary series that is
available online, where academics present music traditions of Greece with live
music and dance performance during the show, along with discussions about
social, cultural and musicological aspects of the presented music. Therefore,
this procedure has resulted in a significant wealth of descriptions regarding a
variety of aspects, such as musical genre, places of origin and musical
instruments. In addition, the audio recordings were performed under strict
production-level specifications, in terms of recording equipment, leading to
very clean and homogeneous audio content. In this work, apart from presenting
the dataset in detail, we propose a baseline deep-learning classification
approach to recognize the involved musicological attributes. The dataset, the
baseline classification methods and the models are provided in public
repositories. Future directions for further refining the dataset are also
discussed.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 14:15:43 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Papaioannou",
"Charilaos",
""
],
[
"Valiantzas",
"Ioannis",
""
],
[
"Giannakopoulos",
"Theodoros",
""
],
[
"Kaliakatsos-Papakostas",
"Maximos",
""
],
[
"Potamianos",
"Alexandros",
""
]
] |
new_dataset
| 0.999871 |
2211.11501
|
Yiming Wang
|
Yuhang Lai and Chengxi Li and Yiming Wang and Tianyi Zhang and Ruiqi
Zhong and Luke Zettlemoyer and Scott Wen-tau Yih and Daniel Fried and Sida
Wang and Tao Yu
|
DS-1000: A Natural and Reliable Benchmark for Data Science Code
Generation
| null | null | null | null |
cs.SE cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce DS-1000, a code generation benchmark with a thousand data
science problems spanning seven Python libraries, such as NumPy and Pandas.
Compared to prior works, DS-1000 incorporates three core features. First, our
problems reflect diverse, realistic, and practical use cases since we collected
them from StackOverflow. Second, our automatic evaluation is highly specific
(reliable) -- across all Codex-002-predicted solutions that our evaluation
accepts, only 1.8% of them are incorrect; we achieve this with multi-criteria
metrics, checking both functional correctness by running test cases and
surface-form constraints by restricting API usages or keywords. Finally, we
proactively defend against memorization by slightly modifying our problems to
be different from the original StackOverflow source; consequently, models
cannot answer them correctly by memorizing the solutions from pre-training. The
current best public system (Codex-002) achieves 43.3% accuracy, leaving ample
room for improvement. We release our benchmark at
https://ds1000-code-gen.github.io.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 17:20:27 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Lai",
"Yuhang",
""
],
[
"Li",
"Chengxi",
""
],
[
"Wang",
"Yiming",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Zhong",
"Ruiqi",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Yih",
"Scott Wen-tau",
""
],
[
"Fried",
"Daniel",
""
],
[
"Wang",
"Sida",
""
],
[
"Yu",
"Tao",
""
]
] |
new_dataset
| 0.997788 |
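DS-1000's evaluation above combines functional correctness (running test cases) with surface-form constraints (restricting API usages or keywords). A minimal harness in that spirit follows; the constraint lists and test snippet are placeholders, not the benchmark's actual checks.

```python
def evaluate_solution(code: str, test: str,
                      banned: tuple = (), required: tuple = ()) -> bool:
    """Multi-criteria check in the spirit of DS-1000's evaluation.

    1. Surface-form constraints: reject solutions using banned keywords
       or missing required ones.
    2. Functional correctness: run the candidate together with test code.
    """
    if any(tok in code for tok in banned):
        return False
    if any(tok not in code for tok in required):
        return False
    env: dict = {}
    try:
        exec(code, env)    # run candidate solution
        exec(test, env)    # assertions raise on failure
    except Exception:
        return False
    return True

candidate = "import numpy as np\nresult = np.cumsum(np.array([1, 2, 3]))"
test = "assert result.tolist() == [1, 3, 6]"
print(evaluate_solution(candidate, test, banned=("for ",)))  # True: vectorized
```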
2211.11514
|
Shishuai Hu
|
Shishuai Hu, Zehui Liao, Yong Xia
|
ProSFDA: Prompt Learning based Source-free Domain Adaptation for Medical
Image Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The domain discrepancy that exists between medical images acquired in different
situations poses a major hurdle in deploying pre-trained medical image
segmentation models for clinical use. Since it is often infeasible to distribute
training data with the pre-trained model due to the huge data size and privacy
concerns, source-free unsupervised domain adaptation (SFDA) has recently been
increasingly studied based on either pseudo labels or prior knowledge. However,
the image features and probability maps used by pseudo label-based SFDA and the
consistent prior assumption and the prior prediction network used by
prior-guided SFDA may become less reliable when the domain discrepancy is
large. In this paper, we propose a \textbf{Pro}mpt learning based \textbf{SFDA}
(\textbf{ProSFDA}) method for medical image segmentation, which aims to improve
the quality of domain adaptation by explicitly minimizing the domain discrepancy.
Specifically, in the prompt learning stage, we estimate source-domain images
via adding a domain-aware prompt to target-domain images, then optimize the
prompt via minimizing the statistic alignment loss, and thereby prompt the
source model to generate reliable predictions on (altered) target-domain
images. In the feature alignment stage, we also align the features of
target-domain images and their styles-augmented counterparts to optimize the
source model, and hence push the model to extract compact features. We evaluate
our ProSFDA on two multi-domain medical image segmentation benchmarks. Our
results indicate that the proposed ProSFDA outperforms substantially other SFDA
methods and is even comparable to UDA methods. Code will be available at
\url{https://github.com/ShishuaiHu/ProSFDA}.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 14:57:04 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Hu",
"Shishuai",
""
],
[
"Liao",
"Zehui",
""
],
[
"Xia",
"Yong",
""
]
] |
new_dataset
| 0.996606 |
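The prompt learning stage above optimizes a statistic alignment loss so that prompted target images look source-like to the frozen model. A generic sketch of aligning target feature statistics with stored source statistics (e.g., batch-norm buffers) follows; the exact formulation in ProSFDA may differ.

```python
import numpy as np

def statistic_alignment_loss(feat: np.ndarray, src_mean: np.ndarray,
                             src_var: np.ndarray) -> float:
    """Align feature statistics of prompted target images with the
    source statistics stored in the pre-trained model.

    feat: (N, C) features of target images after adding the learned prompt
    src_mean, src_var: (C,) per-channel source-domain statistics
    """
    mu = feat.mean(axis=0)
    var = feat.var(axis=0)
    return float(np.abs(mu - src_mean).mean() + np.abs(var - src_var).mean())

feat = np.random.randn(32, 16) * 1.5 + 0.3    # target features, shifted
print(statistic_alignment_loss(feat, np.zeros(16), np.ones(16)))
```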
2211.11521
|
Philippe Suignard
|
Suignard Philippe (EDF R&D ICAME)
|
Un discours et un public "Gilets Jaunes" au coeur du Grand D\'ebat
National? Combinaison des approches IA et textom\'etriques pour l'analyse de
discours des plateformes "Grand D\'ebat National" et "Vrai d\'ebat"
|
in French language. JADT 2020, Jun 2020, Toulouse, France
| null | null | null |
cs.CY cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this contribution, we propose to analyze the statements coming from two
''civic tech'' platforms-the governmental platform, ''Grand D{\'e}bat
National'' and, its political and algorithmic response proposed by a Yellow
Vest collective, ''Vrai D{\'e}bat''-, by confronting two families of algorithms
dedicated to text analysis. We propose to implement, on the one hand, proven
approaches in textual data analysis (Reinert/Iramuteq Method) which have
recently shown their interest in the analysis of very large corpora and, on the
other hand, new methods resulting from the crossroads of the computer worlds,
artificial intelligence and automatic language processing. We will examine the
methodological solutions for qualifying the social properties of speakers about
whom we have little direct information. Finally, we will attempt to present
some research questions at the crossroads of the political sociology of public
opinion and data science, which such a confrontation opens up.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 18:51:46 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Philippe",
"Suignard",
"",
"EDF R&D ICAME"
]
] |
new_dataset
| 0.986515 |
2211.11530
|
Zhongxiang Zhou
|
Zhongxiang Zhou, Yifei Yang, Yue Wang, Rong Xiong
|
Open-Set Object Detection Using Classification-free Object Proposal and
Instance-level Contrastive Learning with Appendix
|
Submit to IEEE Robotics and Automation Letters
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting both known and unknown objects is a fundamental skill for robot
manipulation in unstructured environments. Open-set object detection (OSOD) is
a promising direction to handle the problem consisting of two subtasks: objects
and background separation, and open-set object classification. In this paper,
we present Openset RCNN to address the challenging OSOD. To disambiguate
unknown objects and background in the first subtask, we propose to use
classification-free region proposal network (CF-RPN) which estimates the
objectness score of each region purely using cues from object's location and
shape preventing overfitting to the training categories. To identify unknown
objects in the second subtask, we propose to represent them using the
complementary region of known categories in a latent space which is
accomplished by a prototype learning network (PLN). PLN performs instance-level
contrastive learning to encode proposals to a latent space and builds a compact
region centered on a prototype for each known category. Further, we note
that the detection performance on unknown objects cannot be evaluated without
bias when commonly used object detection datasets are not
fully annotated. Thus, a new benchmark is introduced by reorganizing
GraspNet-1billion, a robotic grasp pose detection dataset with complete
annotation. Extensive experiments demonstrate the merits of our method. We
finally show that our Openset RCNN can endow the robot with an open-set
perception ability to support robotic rearrangement tasks in cluttered
environments. More details can be found in
https://sites.google.com/view/openest-rcnn/
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 15:00:04 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Zhou",
"Zhongxiang",
""
],
[
"Yang",
"Yifei",
""
],
[
"Wang",
"Yue",
""
],
[
"Xiong",
"Rong",
""
]
] |
new_dataset
| 0.997274 |
2211.11559
|
Tanmay Gupta
|
Tanmay Gupta and Aniruddha Kembhavi
|
Visual Programming: Compositional visual reasoning without training
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present VISPROG, a neuro-symbolic approach to solving complex and
compositional visual tasks given natural language instructions. VISPROG avoids
the need for any task-specific training. Instead, it uses the in-context
learning ability of large language models to generate python-like modular
programs, which are then executed to get both the solution and a comprehensive
and interpretable rationale. Each line of the generated program may invoke one
of several off-the-shelf computer vision models, image processing routines, or
python functions to produce intermediate outputs that may be consumed by
subsequent parts of the program. We demonstrate the flexibility of VISPROG on 4
diverse tasks - compositional visual question answering, zero-shot reasoning on
image pairs, factual knowledge object tagging, and language-guided image
editing. We believe neuro-symbolic approaches like VISPROG are an exciting
avenue to easily and effectively expand the scope of AI systems to serve the
long tail of complex tasks that people may wish to perform.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 18:50:09 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Gupta",
"Tanmay",
""
],
[
"Kembhavi",
"Aniruddha",
""
]
] |
new_dataset
| 0.969359 |
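VISPROG above executes LLM-generated python-like modular programs line by line, with each line invoking an off-the-shelf model or routine. The toy interpreter below illustrates the dispatch pattern; the module names, line syntax, and string stand-ins are invented for illustration and are not VISPROG's actual API.

```python
def run_program(program: str, registers: dict) -> dict:
    """Execute a python-like modular program line by line.

    Each line has the form OUT=MODULE(arg=REG, ...); each MODULE would be
    an off-the-shelf vision model or routine (toy stand-ins here).
    """
    modules = {
        "CLASSIFY": lambda image: f"label({image})",
        "REPLACE":  lambda image, obj: f"{image} with {obj} replaced",
    }
    for line in program.strip().splitlines():
        out, call = line.split("=", 1)
        name, argstr = call.rstrip(")").split("(", 1)
        kwargs = {}
        for pair in argstr.split(","):
            k, v = pair.split("=")
            # arguments refer to earlier outputs (registers) or literals
            kwargs[k.strip()] = registers.get(v.strip(), v.strip())
        registers[out.strip()] = modules[name.strip()](**kwargs)
    return registers

prog = "LBL=CLASSIFY(image=IMG)\nOUT=REPLACE(image=IMG, obj=LBL)"
print(run_program(prog, {"IMG": "photo.jpg"}))
```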
2211.11617
|
Yinpei Dai
|
Yinpei Dai, Wanwei He, Bowen Li, Yuchuan Wu, Zheng Cao, Zhongqi An,
Jian Sun, Yongbin Li
|
CGoDial: A Large-Scale Benchmark for Chinese Goal-oriented Dialog
Evaluation
|
EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Practical dialog systems need to deal with various knowledge sources, noisy
user expressions, and the shortage of annotated data. To better solve the above
problems, we propose CGoDial, a new challenging and comprehensive Chinese
benchmark for multi-domain Goal-oriented Dialog evaluation. It contains 96,763
dialog sessions and 574,949 dialog turns in total, covering three datasets with
different knowledge sources: 1) a slot-based dialog (SBD) dataset with
table-formed knowledge, 2) a flow-based dialog (FBD) dataset with tree-formed
knowledge, and 3) a retrieval-based dialog (RBD) dataset with candidate-formed
knowledge. To bridge the gap between academic benchmarks and spoken dialog
scenarios, we either collect data from real conversations or add spoken
features to existing datasets via crowd-sourcing. The proposed experimental
settings include the combinations of training with either the entire training
set or a few-shot training set, and testing with either the standard test set
or a hard test subset, which can assess model capabilities in terms of general
prediction, fast adaptability and reliable robustness.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 16:21:41 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Dai",
"Yinpei",
""
],
[
"He",
"Wanwei",
""
],
[
"Li",
"Bowen",
""
],
[
"Wu",
"Yuchuan",
""
],
[
"Cao",
"Zheng",
""
],
[
"An",
"Zhongqi",
""
],
[
"Sun",
"Jian",
""
],
[
"Li",
"Yongbin",
""
]
] |
new_dataset
| 0.999818 |
2211.11647
|
Sandro Magalh\~aes
|
Sandro Costa Magalh\~aes and Filipe Neves Santos and Pedro Machado and
Ant\'onio Paulo Moreira and Jorge Dias
|
Benchmarking Edge Computing Devices for Grape Bunches and Trunks
Detection using Accelerated Object Detection Single Shot MultiBox Deep
Learning Models
| null |
EAAI, 117, 105604 (2022)
|
10.1016/j.engappai.2022.105604
| null |
cs.CV cs.AR cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Purpose: Visual perception enables robots to perceive the environment. Visual
data is processed using computer vision algorithms that are usually
time-expensive and require powerful devices to process the visual data in
real-time, which is unfeasible for open-field robots with limited energy. This
work benchmarks the performance of different heterogeneous platforms for object
detection in real-time. This research benchmarks three architectures: embedded
GPU -- Graphical Processing Units (such as NVIDIA Jetson Nano 2 GB and 4 GB,
and NVIDIA Jetson TX2), TPU -- Tensor Processing Unit (such as Coral Dev Board
TPU), and DPU -- Deep Learning Processor Unit (such as in AMD-Xilinx ZCU104
Development Board, and AMD-Xilinx Kria KV260 Starter Kit). Method: The authors
used the RetinaNet ResNet-50 fine-tuned using the natural VineSet dataset.
Afterwards, the trained model was converted and compiled into target-specific
hardware formats to improve execution efficiency. Conclusions and Results: The
platforms were assessed in terms of performance of the evaluation metrics and
efficiency (time of inference). Graphical Processing Units (GPUs) were the
slowest devices, running at 3 FPS to 5 FPS, and Field Programmable Gate Arrays
(FPGAs) were the fastest devices, running at 14 FPS to 25 FPS. The efficiency
of the Tensor Processing Unit (TPU) is comparable to that of the NVIDIA Jetson
TX2. The TPU and GPU are the most power-efficient, consuming about 5 W. The
performance differences in the evaluation metrics across devices are
negligible: all achieve an F1 of about 70% and a mean Average Precision (mAP)
of about 60%.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 17:02:33 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Magalhães",
"Sandro Costa",
""
],
[
"Santos",
"Filipe Neves",
""
],
[
"Machado",
"Pedro",
""
],
[
"Moreira",
"António Paulo",
""
],
[
"Dias",
"Jorge",
""
]
] |
new_dataset
| 0.996471 |
2211.11687
|
Zihao Wang
|
Zihao Wang, Yingyu Yang, Maxime Sermesant, Herve Delingette
|
Unsupervised Echocardiography Registration through Patch-based MLPs and
Transformers
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Image registration is an essential but challenging task in medical image
computing, especially for echocardiography, where the anatomical structures are
relatively noisy compared to other imaging modalities. Traditional
(non-learning) registration approaches rely on the iterative optimization of a
similarity metric which is usually costly in time complexity. In recent years,
convolutional neural network (CNN) based image registration methods have shown
good effectiveness. In the meantime, recent studies show that the
attention-based model (e.g., Transformer) can bring superior performance in
pattern recognition tasks. In contrast, whether the superior performance of the
Transformer comes from the long-winded architecture or is attributed to the use
of patches for dividing the inputs is unclear yet. This work introduces three
patch-based frameworks for image registration using MLPs and transformers. We
provide experiments on 2D-echocardiography registration to answer the former
question partially and provide a benchmark solution. Our results on a large
public 2D echocardiography dataset show that the patch-based MLP/Transformer
model can be effectively used for unsupervised echocardiography registration.
They demonstrate comparable and even better registration performance than a
popular CNN registration model. In particular, patch-based models better
preserve volume changes in terms of Jacobian determinants, thus generating
robust registration fields with less unrealistic deformation. Our results
demonstrate that patch-based learning methods, whether with attention or not,
can perform high-performance unsupervised registration tasks with adequate time
and space complexity. Our codes are available at
https://gitlab.inria.fr/epione/mlp\_transformer\_registration
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 17:59:04 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Wang",
"Zihao",
""
],
[
"Yang",
"Yingyu",
""
],
[
"Sermesant",
"Maxime",
""
],
[
"Delingette",
"Herve",
""
]
] |
new_dataset
| 0.992907 |
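The abstract above assesses deformation plausibility via Jacobian determinants, where non-positive values indicate folding, i.e. unrealistic deformation. A small numpy sketch of that check for a 2D displacement field follows; the random field is illustrative only.

```python
import numpy as np

def jacobian_determinant_2d(disp: np.ndarray) -> np.ndarray:
    """Jacobian determinant of a 2D deformation phi(x) = x + u(x).

    disp: (H, W, 2) displacement field u. Non-positive determinants mark
    folding, i.e. locally unrealistic deformation.
    """
    du0_d0, du0_d1 = np.gradient(disp[..., 0])   # d u_0 / d(row, col)
    du1_d0, du1_d1 = np.gradient(disp[..., 1])   # d u_1 / d(row, col)
    # Jacobian of phi = I + grad(u); 2x2 determinant per pixel
    return (1 + du0_d0) * (1 + du1_d1) - du0_d1 * du1_d0

disp = 0.05 * np.random.randn(64, 64, 2)         # small random field
det = jacobian_determinant_2d(disp)
print("folding pixels:", int((det <= 0).sum()))
```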
2211.11692
|
Tien Thanh Le Mr
|
Tien Thanh Le, Yusheng Ji, John C.S Lui
|
TinyQMIX: Distributed Access Control for mMTC via Multi-agent
Reinforcement Learning
|
6 pages, 4 figures, presented at VTC Fall 2022
| null | null | null |
cs.NI cs.LG cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed access control is a crucial component for massive machine type
communication (mMTC). In this communication scenario, centralized resource
allocation is not scalable because resource configurations have to be sent
frequently from the base station to a massive number of devices. We investigate
distributed reinforcement learning for resource selection without relying on
centralized control. Another important feature of mMTC is the sporadic and
dynamic change of traffic. Existing studies on distributed access control
assume that the traffic load is static or can only gradually adapt to
dynamic traffic. We minimize the adaptation period by training TinyQMIX, which
is a lightweight multi-agent deep reinforcement learning model, to learn a
distributed wireless resource selection policy under various traffic patterns
before deployment. Therefore, the trained agents are able to quickly adapt to
dynamic traffic and provide low access delay. Numerical results are presented
to support our claims.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 18:09:10 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Le",
"Tien Thanh",
""
],
[
"Ji",
"Yusheng",
""
],
[
"Lui",
"John C. S",
""
]
] |
new_dataset
| 0.971273 |
2211.11724
|
Noah Bergam
|
Noah Bergam, Emily Allaway, and Kathleen McKeown
|
Legal and Political Stance Detection of SCOTUS Language
|
Natural Legal Language Processing Workshop at EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We analyze publicly available US Supreme Court documents using automated
stance detection. In the first phase of our work, we investigate the extent to
which the Court's public-facing language is political. We propose and calculate
two distinct ideology metrics of SCOTUS justices using oral argument
transcripts. We then compare these language-based metrics to existing social
scientific measures of the ideology of the Supreme Court and the public.
Through this cross-disciplinary analysis, we find that justices who are more
responsive to public opinion tend to express their ideology during oral
arguments. This observation provides a new kind of evidence in favor of the
attitudinal change hypothesis of Supreme Court justice behavior. As a natural
extension of this political stance detection, we propose the more specialized
task of legal stance detection with our new dataset SC-stance, which matches
written opinions to legal questions. We find competitive performance on this
dataset using language adapters trained on legal documents.
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 18:45:57 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Bergam",
"Noah",
""
],
[
"Allaway",
"Emily",
""
],
[
"McKeown",
"Kathleen",
""
]
] |
new_dataset
| 0.994499 |
2211.11742
|
Yu Zeng
|
Yu Zeng, Zhe Lin, Jianming Zhang, Qing Liu, John Collomosse, Jason
Kuen, Vishal M. Patel
|
SceneComposer: Any-Level Semantic Image Synthesis
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new framework for conditional image synthesis from semantic
layouts of any precision levels, ranging from pure text to a 2D semantic canvas
with precise shapes. More specifically, the input layout consists of one or
more semantic regions with free-form text descriptions and adjustable precision
levels, which can be set based on the desired controllability. The framework
naturally reduces to text-to-image (T2I) at the lowest level with no shape
information, and it becomes segmentation-to-image (S2I) at the highest level.
By supporting the levels in-between, our framework is flexible in assisting
users of different drawing expertise and at different stages of their creative
workflow. We introduce several novel techniques to address the challenges
coming with this new setup, including a pipeline for collecting training data;
a precision-encoded mask pyramid and a text feature map representation to
jointly encode precision level, semantics, and composition information; and a
multi-scale guided diffusion model to synthesize images. To evaluate the
proposed method, we collect a test dataset containing user-drawn layouts with
diverse scenes and styles. Experimental results show that the proposed method
can generate high-quality images following the layout at given precision, and
compares favorably against existing methods. Project page:
\url{https://zengxianyu.github.io/scenec/}
|
[
{
"version": "v1",
"created": "Mon, 21 Nov 2022 18:59:05 GMT"
}
] | 2022-11-22T00:00:00 |
[
[
"Zeng",
"Yu",
""
],
[
"Lin",
"Zhe",
""
],
[
"Zhang",
"Jianming",
""
],
[
"Liu",
"Qing",
""
],
[
"Collomosse",
"John",
""
],
[
"Kuen",
"Jason",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
new_dataset
| 0.999677 |
2202.01508
|
Kathrin Garb
|
Kathrin Garb, Marvin Xhemrishi, Ludwig K\"urzinger and Christoph
Frisch
|
The Wiretap Channel for Capacitive PUF-Based Security Enclosures
| null |
IACR Transactions on Cryptographic Hardware and Embedded Systems,
2022(3), 165--191 (2022)
|
10.46586/tches.v2022.i3.165-191
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to protect devices from physical manipulations, protective security
enclosures were developed. However, these battery-backed solutions come with a
reduced lifetime, and have to be actively and continuously monitored. In order
to overcome these drawbacks, batteryless capacitive enclosures based on
Physical Unclonable Functions (PUFs) have been developed that generate a
key-encryption-key (KEK) for decryption of the key chain. In order to reproduce
the PUF-key reliably and to compensate for the effects of noise and environmental
influences, the key generation includes error correction codes. However,
drilling attacks that aim at partially destroying the enclosure also alter the
PUF-response and are subjected to the same error correction procedures.
Correcting attack effects, however, is highly undesirable as it would destroy
the security concept of the enclosure. In general, designing error correction
codes such that they provide tamper-sensitivity to attacks, while still
correcting noise and environmental effects is a challenging task. We tackle
this problem by first analyzing the behavior of the PUF-response under external
influences and different post-processing parameters. From this, we derive a
system model of the PUF-based enclosure, and construct a wiretap channel
implementation from q-ary polar codes. We verify the obtained error correction
scheme in a Monte Carlo simulation and demonstrate that our wiretap channel
implementation achieves a physical layer security of 100 bits for 306 bits of
entropy for the PUF-secret. Through this, we further develop capacitive
PUF-based security enclosures and bring them one step closer to their
commercial deployment.
|
[
{
"version": "v1",
"created": "Thu, 3 Feb 2022 10:39:00 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2022 00:16:25 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Garb",
"Kathrin",
""
],
[
"Xhemrishi",
"Marvin",
""
],
[
"Kürzinger",
"Ludwig",
""
],
[
"Frisch",
"Christoph",
""
]
] |
new_dataset
| 0.997379 |
2202.12788
|
Sumit Mishra
|
Sumit Mishra, Praveen Kumar Rajendran, Luiz Felipe Vecchietti, and
Dongsoo Har
|
Sensing accident-prone features in urban scenes for proactive driving
and accident prevention
|
(13 pages, 9 figures, 6 tables, under review in IEEE Transactions on
Intelligent Transportation Systems)
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In urban cities, visual information on and along roadways is likely to
distract drivers and lead to missing traffic signs and other accident-prone
(AP) features. To avoid accidents due to missing these visual cues, this paper
proposes a visual notification of AP-features to drivers based on real-time
images obtained via dashcam. For this purpose, Google Street View images around
accident hotspots (areas of dense accident occurrence) identified by a
real-accident dataset are used to train a novel attention module to classify a
given urban scene into an accident hotspot or a non-hotspot (area of sparse
accident occurrence). The proposed module leverages channel, point, and
spatial-wise attention learning on top of different CNN backbones. This leads
to better classification results and more certain AP-features with better
contextual knowledge when compared with CNN backbones alone. Our proposed
module achieves up to 92% classification accuracy. The capability of detecting
AP-features by the proposed model was analyzed by a comparative study of three
different class activation map (CAM) methods, which were used to inspect
specific AP-features causing the classification decision. Outputs of CAM
methods were processed by an image processing pipeline to extract only the
AP-features that are explainable to drivers and notified using a visual
notification system. A range of experiments was performed to demonstrate the
efficacy of the system and its AP-features. Ablating the AP-features, which
occupy 9.61% of the total area in each image on average, increased the chance
of a given area to be classified as a non-hotspot by up to 21.8%.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 16:05:53 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2022 13:54:28 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Mishra",
"Sumit",
""
],
[
"Rajendran",
"Praveen Kumar",
""
],
[
"Vecchietti",
"Luiz Felipe",
""
],
[
"Har",
"Dongsoo",
""
]
] |
new_dataset
| 0.999829 |
2203.03054
|
Fahad Alhabardi
|
Fahad F. Alhabardi, Arnold Beckmann, Bogdan Lazar and Anton Setzer
|
Verification of Bitcoin Script in Agda using Weakest Preconditions for
Access Control
|
27 pages
| null |
10.4230/LIPIcs.TYPES.2021.1
| null |
cs.SC cs.CR cs.LO cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper contributes to the verification of programs written in Bitcoin's
smart contract language SCRIPT in the interactive theorem prover Agda. It
focuses on the security property of access control for SCRIPT programs that
govern the distribution of Bitcoins. It advocates that weakest preconditions in
the context of Hoare triples are the appropriate notion for verifying access
control. It aims at obtaining human-readable descriptions of weakest
preconditions in order to close the validation gap between user requirements
and formal specification of smart contracts.
As examples for the proposed approach, the paper focuses on two standard
SCRIPT programs that govern the distribution of Bitcoins, Pay to Public Key
Hash (P2PKH) and Pay to Multisig (P2MS). The paper introduces an operational
semantics of the SCRIPT commands used in P2PKH and P2MS, which is formalised in
the Agda proof assistant and reasoned about using Hoare triples. Two
methodologies for obtaining human-readable descriptions of weakest
preconditions are discussed:
(1) a step-by-step approach, which works backwards instruction by instruction
through a script, sometimes grouping several instructions together;
(2) symbolic execution of the code and translation into a nested case
distinction, which allows to read off weakest preconditions as the disjunction
of conjunctions of conditions along accepting paths.
A syntax for equational reasoning with Hoare Triples is defined in order to
formalise those approaches in Agda.
Keywords and phrases: Blockchain; Cryptocurrency; Bitcoin; Agda;
Verification; Hoare logic; Bitcoin script; P2PKH; P2MS; Access control; Weakest
precondition; Predicate transformer semantics; Provable correctness; Symbolic
execution; Smart contracts
|
[
{
"version": "v1",
"created": "Sun, 6 Mar 2022 21:07:28 GMT"
},
{
"version": "v2",
"created": "Sun, 8 May 2022 00:01:21 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Jun 2022 00:54:29 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Alhabardi",
"Fahad F.",
""
],
[
"Beckmann",
"Arnold",
""
],
[
"Lazar",
"Bogdan",
""
],
[
"Setzer",
"Anton",
""
]
] |
new_dataset
| 0.996417 |
2204.11484
|
Prithviraj Pramanik
|
Prithviraj Pramanik, Prasenjit Karmakar, Praveen Kumar Sharma,
Soumyajit Chatterjee, Abhijit Roy, Santanu Mandal, Subrata Nandi, Sandip
Chakraborty, Mousumi Saha and Sujoy Saha
|
AQuaMoHo: Localized Low-Cost Outdoor Air Quality Sensing over a
Thermo-Hygrometer
|
26 Pages, 17 Figures, Journal
| null | null | null |
cs.CY cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient air quality sensing serves as one of the essential services
provided in any recent smart city. Such sensing is mostly facilitated by
sparsely deployed Air Quality Monitoring Stations (AQMSs), which are difficult
to install and maintain; consequently, spatial variation heavily impacts air
quality monitoring for locations far from these pre-deployed public
infrastructures. To mitigate this, in this paper we propose a framework named AQuaMoHo that can
annotate data obtained from a low-cost thermo-hygrometer (as the sole physical
sensing device) with the AQI labels, with the help of additional publicly
crawled spatio-temporal information of that locality. At its core, AQuaMoHo
exploits the temporal patterns from a set of readily available spatial features
using an LSTM-based model and further enhances the overall quality of the
annotation using temporal attention. From a thorough study of two different
cities, we observe that AQuaMoHo can significantly help annotate the air
quality data on a personal scale.
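  As a rough illustration of the architecture sketched above, the following
PyTorch snippet shows an LSTM over per-timestep feature vectors followed by
temporal attention pooling and an AQI-class head. The feature dimension,
sequence length, hidden size, and number of AQI classes are placeholder
assumptions; this is a minimal sketch, not the authors' released model.

import torch
import torch.nn as nn

class AQIAnnotator(nn.Module):
    """Minimal sketch: LSTM over hourly feature vectors, temporal
    attention pooling, then an AQI-class head. Dimensions are placeholders."""
    def __init__(self, n_features=12, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each time step
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                         # x: (batch, time, n_features)
        h, _ = self.lstm(x)                       # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)    # attention weights over time
        ctx = (w * h).sum(dim=1)                  # attention-weighted pooling
        return self.head(ctx)                     # AQI class logits

model = AQIAnnotator()
logits = model(torch.randn(8, 24, 12))            # 8 sites, 24 hourly readings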
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 08:01:22 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2022 08:13:59 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Nov 2022 02:27:31 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Pramanik",
"Prithviraj",
""
],
[
"Karmakar",
"Prasenjit",
""
],
[
"Sharma",
"Praveen Kumar",
""
],
[
"Chatterjee",
"Soumyajit",
""
],
[
"Roy",
"Abhijit",
""
],
[
"Mandal",
"Santanu",
""
],
[
"Nandi",
"Subrata",
""
],
[
"Chakraborty",
"Sandip",
""
],
[
"Saha",
"Mousumi",
""
],
[
"Saha",
"Sujoy",
""
]
] |
new_dataset
| 0.993665 |
2204.12115
|
Ming-Min Zhao
|
Yang Lu, Ming-Min Zhao, Ming Lei, and Min-Jian Zhao
|
Fast Successive-Cancellation Decoding of Polar Codes with Sequence Nodes
|
30 pages, 6 figures, submitted for possible journal publication
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the sequential nature of the successive-cancellation (SC) algorithm,
the decoding of polar codes suffers from significant decoding latencies. Fast
SC decoding is able to speed up the SC decoding process by implementing
parallel decoders at the intermediate levels of the SC decoding tree for some
special nodes with specific information and frozen bit patterns. To further
improve the parallelism of SC decoding, this paper presents a new class of
special nodes composed of a sequence of rate-one or single-parity-check
(SR1/SPC) nodes, which can be easily found, especially in high-rate polar
codes, and which encompasses a wide variety of existing special node types. Then, we
analyse the parity constraints caused by the frozen bits in each descendant
node, such that the decoding performance of the SR1/SPC node can be preserved
once the parity constraints are satisfied. Finally, a generalized fast decoding
algorithm is proposed to decode SR1/SPC nodes efficiently, where the
corresponding parity constraints are taken into consideration. Simulation
results show that the proposed decoding algorithm of the SR1/SPC node can
achieve near-ML performance, and the overall decoding latency can be reduced by
43.8% as compared to the state-of-the-art fast SC decoder.
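  For intuition, the two building blocks of an SR1/SPC node admit the standard
one-shot decoders sketched below; this covers only the classical rate-one and
single-parity-check cases, not the paper's generalized algorithm with its
additional parity constraints.

import numpy as np

def decode_rate1(llr):
    """Rate-1 node: per-bit hard decision (bit = 1 iff its LLR < 0)."""
    return (llr < 0).astype(int)

def decode_spc(llr):
    """SPC node: hard decision, then restore even parity by flipping
    the least reliable bit if the parity check fails."""
    bits = (llr < 0).astype(int)
    if bits.sum() % 2 == 1:                   # parity violated
        bits[np.argmin(np.abs(llr))] ^= 1     # flip the least reliable bit
    return bits

llr = np.array([2.1, -0.3, 1.7, 0.9])
print(decode_spc(llr))                        # [0 1 0 0] fails parity ->
                                              # flip bit 1: [0 0 0 0]

  Because both decoders act on the whole node in one step rather than bit by
bit, sequences of such nodes are what allows fast SC decoding to prune the
lower levels of the decoding tree.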
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 07:20:44 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Oct 2022 06:25:16 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Nov 2022 09:16:42 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Lu",
"Yang",
""
],
[
"Zhao",
"Ming-Min",
""
],
[
"Lei",
"Ming",
""
],
[
"Zhao",
"Min-Jian",
""
]
] |
new_dataset
| 0.989488 |
2205.02222
|
Jiaxun Cui
|
Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, Yuke Zhu
|
COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles
| null |
CVPR 2022
|
10.1109/CVPR52688.2022.01674
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical sensors and learning algorithms for autonomous vehicles have
dramatically advanced in the past few years. Nonetheless, the reliability of
today's autonomous vehicles is hindered by the limited line-of-sight sensing
capability and the brittleness of data-driven methods in handling extreme
situations. With recent developments of telecommunication technologies,
cooperative perception with vehicle-to-vehicle communications has become a
promising paradigm to enhance autonomous driving in dangerous or emergency
situations. We introduce COOPERNAUT, an end-to-end learning model that uses
cross-vehicle perception for vision-based cooperative driving. Our model
encodes LiDAR information into compact point-based representations that can be
transmitted as messages between vehicles via realistic wireless channels. To
evaluate our model, we develop AutoCastSim, a network-augmented driving
simulation framework with example accident-prone scenarios. Our experiments on
AutoCastSim suggest that our cooperative perception driving models lead to a
40% improvement in average success rate over egocentric driving models in these
challenging driving situations, and a bandwidth requirement 5 times smaller
than that of the prior work V2VNet. COOPERNAUT and AutoCastSim are available at
https://ut-austin-rpl.github.io/Coopernaut/.
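  Purely as a schematic of the cooperative-perception pattern described above
(compress LiDAR into a small set of point features, transmit them under a
bandwidth budget, fuse at the receiver), consider the sketch below. The
keypoint selection, feature dimensions, and max-pooling fusion are assumptions
for illustration and do not reproduce COOPERNAUT's actual architecture.

import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Schematic: compress a LiDAR cloud into k transmittable keypoint
    features; all dimensions are placeholders."""
    def __init__(self, k=128, d=64):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(3, d), nn.ReLU(), nn.Linear(d, d))

    def forward(self, pts):                          # pts: (n_points, 3)
        idx = torch.randperm(pts.shape[0])[:self.k]  # crude keypoint pick
        keypts = pts[idx]
        return keypts, self.mlp(keypts)              # positions + features

def fuse(ego_feats, received_feats):
    """Receiver-side fusion: max-pool ego and transmitted features."""
    return torch.cat([ego_feats, received_feats]).max(dim=0).values

enc = PointEncoder()
_, ego = enc(torch.randn(5000, 3))                   # ego vehicle's point cloud
_, msg = enc(torch.randn(5000, 3))                   # message from a nearby car
fused = fuse(ego, msg)                               # (d,) fused descriptor

  Only the k keypoint features cross the (simulated) wireless channel, which
is the source of the bandwidth savings the abstract reports.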
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 17:55:12 GMT"
}
] | 2022-11-21T00:00:00 |
[
[
"Cui",
"Jiaxun",
""
],
[
"Qiu",
"Hang",
""
],
[
"Chen",
"Dian",
""
],
[
"Stone",
"Peter",
""
],
[
"Zhu",
"Yuke",
""
]
] |
new_dataset
| 0.999586 |