Dataset schema:

| column | type | notes |
|---|---|---|
| id | string | 9-10 chars |
| submitter | string | 2-52 chars, nullable (⌀) |
| authors | string | 4-6.51k chars |
| title | string | 4-246 chars |
| comments | string | 1-523 chars, nullable (⌀) |
| journal-ref | string | 4-345 chars, nullable (⌀) |
| doi | string | 11-120 chars, nullable (⌀) |
| report-no | string | 2-243 chars, nullable (⌀) |
| categories | string | 5-98 chars |
| license | string | 9 distinct values |
| abstract | string | 33-3.33k chars |
| versions | list | version history with created timestamps |
| update_date | timestamp[s] | last-update date |
| authors_parsed | list | per-author name parts (family name first) |
| prediction | string | 1 distinct value ("new_dataset") |
| probability | float64 | 0.95-1 |
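A minimal loading sketch for records in this schema, assuming the rows are exported as JSON Lines with the field names above (the file path is hypothetical):

```python
# Minimal sketch: load the records and filter high-confidence predictions.
# Assumes a JSON Lines export with the schema's field names; the path is
# hypothetical.
import pandas as pd

df = pd.read_json("arxiv_predictions.jsonl", lines=True)
hits = df[(df["prediction"] == "new_dataset") & (df["probability"] > 0.99)]
print(hits[["id", "title", "probability"]].head())
```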
2302.09350
|
Yftah Ziser
|
Weixian Waylon Li, Yftah Ziser, Maximin Coavoux and Shay B. Cohen
|
BERT is not The Count: Learning to Match Mathematical Statements with
Proofs
|
Accepted to the Conference of the European Chapter of the Association
for Computational Linguistics (EACL), 2023; 14 pages. arXiv admin note:
substantial text overlap with arXiv:2102.02110
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a task that consists of matching a proof to a given mathematical
statement. The task fits well within current research on Mathematical
Information Retrieval and, more generally, mathematical article analysis
(Mathematical Sciences, 2014). We present a dataset for the task (the MATcH
dataset) consisting of over 180k statement-proof pairs extracted from modern
mathematical research articles. We find this dataset highly representative of
our task, as it consists of relatively new findings useful to mathematicians.
We propose a bilinear similarity model and two decoding methods to match
statements to proofs effectively. While the first decoding method matches a
proof to a statement without being aware of other statements or proofs, the
second method treats the task as a global matching problem. Through a symbol
replacement procedure, we analyze the "insights" that pre-trained language
models have in such mathematical article analysis and show that while these
models perform well on this task, reaching a best mean reciprocal rank of
73.7, they follow a relatively shallow symbolic analysis and matching to
achieve that performance.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 14:48:20 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Li",
"Weixian Waylon",
""
],
[
"Ziser",
"Yftah",
""
],
[
"Coavoux",
"Maximin",
""
],
[
"Cohen",
"Shay B.",
""
]
] |
new_dataset
| 0.99978 |
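The abstract above describes a bilinear similarity model with local and global decoding. A minimal sketch of a bilinear statement-proof scorer (PyTorch; dimensions and names are illustrative, not the authors' implementation):

```python
import torch
import torch.nn as nn

class BilinearMatcher(nn.Module):
    """Scores a statement-proof pair as s^T W p (illustrative sketch)."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, statements: torch.Tensor, proofs: torch.Tensor) -> torch.Tensor:
        # statements: (n, dim), proofs: (m, dim) -> (n, m) score matrix.
        return statements @ self.W @ proofs.T

scores = BilinearMatcher()(torch.randn(4, 768), torch.randn(4, 768))
print(scores.shape)  # torch.Size([4, 4])
```

A local decoder would take the argmax per row independently, while the global variant described in the abstract would treat the whole score matrix as an assignment problem.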
2302.09363
|
Jordi De La Torre
|
Jordi de la Torre
|
Autocodificadores Variacionales (VAE) Fundamentos Teóricos y
Aplicaciones
|
15 pages, in Spanish language, 2 figures, review
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
VAEs are probabilistic graphical models based on neural networks that allow
the coding of input data in a latent space formed by simpler probability
distributions and the reconstruction, based on such latent variables, of the
source data. After training, the reconstruction network, called decoder, is
capable of generating new elements belonging to a close distribution, ideally
equal to the original one. This article has been written in Spanish to
facilitate the arrival of this scientific knowledge to the Spanish-speaking
community.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 15:29:55 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"de la Torre",
"Jordi",
""
]
] |
new_dataset
| 0.951442 |
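For reference, a minimal sketch of the standard Gaussian VAE the abstract summarizes: encode to a mean and log-variance, reparameterize, decode, and train on reconstruction plus KL (sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Standard Gaussian VAE sketch: encode, reparameterize, decode."""
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mu and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def elbo_loss(x_hat, x, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.rand(32, 784)
x_hat, mu, logvar = TinyVAE()(x)
print(elbo_loss(x_hat, x, mu, logvar))
```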
2302.09422
|
Hyoungwook Nam
|
Hyoungwook Nam, Seung Byum Seo
|
Neural Attention Memory
|
Submitted to ICML 2023
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel perspective of the attention mechanism by reinventing it
as a memory architecture for neural networks, namely Neural Attention Memory
(NAM). NAM is a memory structure that is both readable and writable via
differentiable linear algebra operations. We explore three use cases of NAM:
memory-augmented neural network (MANN), few-shot learning, and efficient
long-range attention. First, we design two NAM-based MANNs of Long Short-term
Memory (LSAM) and NAM Turing Machine (NAM-TM) that show better computational
powers in algorithmic zero-shot generalization tasks compared to other
baselines such as differentiable neural computer (DNC). Next, we apply NAM to
the N-way K-shot learning task and show that it is more effective at reducing
false positives compared to the baseline cosine classifier. Finally, we
implement an efficient Transformer with NAM and evaluate it with long-range
arena tasks to show that NAM can be an efficient and effective alternative for
scaled dot-product attention.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 21:19:21 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Nam",
"Hyoungwook",
""
],
[
"Seo",
"Seung Byum",
""
]
] |
new_dataset
| 0.962847 |
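The abstract above describes NAM as memory that is readable and writable via differentiable linear algebra; the exact operations are not given here, so the following is a generic attention-based read and outer-product write sketch, not NAM itself:

```python
import torch
import torch.nn.functional as F

def read(memory, query):
    """Differentiable read: attention-weighted sum of memory rows."""
    weights = F.softmax(memory @ query, dim=0)         # (slots,)
    return weights @ memory                            # (dim,)

def write(memory, key, value, lr=1.0):
    """Differentiable write: outer-product update of addressed slots."""
    weights = F.softmax(memory @ key, dim=0)           # (slots,)
    return memory + lr * torch.outer(weights, value)   # (slots, dim)

M = torch.randn(8, 32)
M = write(M, key=torch.randn(32), value=torch.randn(32))
print(read(M, query=torch.randn(32)).shape)  # torch.Size([32])
```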
2302.09486
|
Wenyang Zhou
|
Wenyang Zhou, Lu Yuan, Shuyu Chen, Lin Gao, Shimin Hu
|
LC-NeRF: Local Controllable Face Generation in Neural Radiance Field
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
3D face generation has achieved high visual quality and 3D consistency thanks
to the development of neural radiance fields (NeRF). Recently, to generate and
edit 3D faces with NeRF representation, some methods are proposed and achieve
good results in decoupling geometry and texture. The latent codes of these
generative models affect the whole face, and hence modifications to these codes
cause the entire face to change. However, users usually edit a local region
when editing faces and do not want other regions to be affected. Since changes
to the latent code affect global generation results, these methods do not allow
for fine-grained control of local facial regions. To improve local
controllability in NeRF-based face editing, we propose LC-NeRF, which is
composed of a Local Region Generators Module and a Spatial-Aware Fusion Module,
allowing for local geometry and texture control of local facial regions.
Qualitative and quantitative evaluations show that our method provides better
local editing than state-of-the-art face editing methods. Our method also
performs well in downstream tasks, such as text-driven facial image editing.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 05:50:08 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Zhou",
"Wenyang",
""
],
[
"Yuan",
"Lu",
""
],
[
"Chen",
"Shuyu",
""
],
[
"Gao",
"Lin",
""
],
[
"Hu",
"Shimin",
""
]
] |
new_dataset
| 0.9966 |
2302.09536
|
Seungmo Kim
|
Dhruba Sunuwar, Seungmo Kim, and Zachary Reyes
|
Is 30 MHz Enough for C-V2X?
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Connected vehicles are no longer a futuristic dream out of science fiction;
they are swiftly taking a bigger part of everyday life. One
of the key technologies actualizing the connected vehicles is
vehicle-to-everything communications (V2X). Nonetheless, the United States
(U.S.) federal government decided to reallocate the spectrum band that used to
be dedicated to V2X uses (namely, the ``5.9 GHz band'') and to leave only 40\%
of the original chunk (i.e., 30 MHz of bandwidth) for V2X. This ignited concern
over whether the 30-MHz spectrum suffices for key V2X safety messages and the
respective applications. We lay out an extensive study of the safety message
types and their latency requirements. Then, we present our simulation results
examining whether they can be supported in the 30-MHz spectrum setup.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 11:07:16 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Sunuwar",
"Dhruba",
""
],
[
"Kim",
"Seungmo",
""
],
[
"Reyes",
"Zachary",
""
]
] |
new_dataset
| 0.993925 |
2302.09606
|
Paul Maria Scheikl
|
Paul Maria Scheikl, Balázs Gyenes, Rayan Younis, Christoph Haas,
Gerhard Neumann, Martin Wagner, Franziska Mathis-Ullrich
|
LapGym -- An Open Source Framework for Reinforcement Learning in
Robot-Assisted Laparoscopic Surgery
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in reinforcement learning (RL) have increased the promise of
introducing cognitive assistance and automation to robot-assisted laparoscopic
surgery (RALS). However, progress in algorithms and methods depends on the
availability of standardized learning environments that represent skills
relevant to RALS. We present LapGym, a framework for building RL environments
for RALS that models the challenges posed by surgical tasks, and sofa_env, a
diverse suite of 12 environments. Motivated by surgical training, these
environments are organized into 4 tracks: Spatial Reasoning, Deformable Object
Manipulation & Grasping, Dissection, and Thread Manipulation. Each environment
is highly parametrizable for increasing difficulty, resulting in a high
performance ceiling for new algorithms. We use Proximal Policy Optimization
(PPO) to establish a baseline for model-free RL algorithms, investigating the
effect of several environment parameters on task difficulty. Finally, we show
that many environments and parameter configurations reflect well-known, open
problems in RL research, allowing researchers to continue exploring these
fundamental problems in a surgical context. We aim to provide a challenging,
standard environment suite for further development of RL for RALS, ultimately
helping to realize the full potential of cognitive surgical robotics. LapGym is
publicly accessible through GitHub (https://github.com/ScheiklP/lap_gym).
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 16:02:25 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Scheikl",
"Paul Maria",
""
],
[
"Gyenes",
"Balázs",
""
],
[
"Younis",
"Rayan",
""
],
[
"Haas",
"Christoph",
""
],
[
"Neumann",
"Gerhard",
""
],
[
"Wagner",
"Martin",
""
],
[
"Mathis-Ullrich",
"Franziska",
""
]
] |
new_dataset
| 0.964107 |
2302.09632
|
Chen Liang
|
Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin and Tuo
Zhao
|
HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained
Transformers
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge distillation has been shown to be a powerful model compression
approach to facilitate the deployment of pre-trained language models in
practice. This paper focuses on task-agnostic distillation. It produces a
compact pre-trained model that can be easily fine-tuned on various tasks with
small computational costs and memory footprints. Despite the practical
benefits, task-agnostic distillation is challenging. Since the teacher model
has a significantly larger capacity and stronger representation power than the
student model, it is very difficult for the student to produce predictions that
match the teacher's over a massive amount of open-domain training data. Such a
large prediction discrepancy often diminishes the benefits of knowledge
distillation. To address this challenge, we propose Homotopic Distillation
(HomoDistil), a novel task-agnostic distillation approach equipped with
iterative pruning. Specifically, we initialize the student model from the
teacher model, and iteratively prune the student's neurons until the target
width is reached. Such an approach maintains a small discrepancy between the
teacher's and student's predictions throughout the distillation process, which
ensures the effectiveness of knowledge transfer. Extensive experiments
demonstrate that HomoDistil achieves significant improvements on existing
baselines.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 17:37:24 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Liang",
"Chen",
""
],
[
"Jiang",
"Haoming",
""
],
[
"Li",
"Zheng",
""
],
[
"Tang",
"Xianfeng",
""
],
[
"Yin",
"Bin",
""
],
[
"Zhao",
"Tuo",
""
]
] |
new_dataset
| 0.969813 |
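HomoDistil builds on standard knowledge distillation; as a reference point, here is a minimal sketch of the usual temperature-scaled distillation objective (Hinton-style; not HomoDistil's full iterative-pruning procedure):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Temperature-scaled KD: KL divergence between softened distributions."""
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)

loss = distillation_loss(torch.randn(8, 100), torch.randn(8, 100))
print(loss)
```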
2302.09655
|
Joohyung Kim
|
Joohyung Kim, Dhruv C Mathur, Kazuki Shin, Sean Taylor
|
PAPRAS: Plug-And-Play Robotic Arm System
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a novel robotic arm system, named PAPRAS (Plug-And-Play
Robotic Arm System). PAPRAS consists of a portable robotic arm(s), docking
mount(s), and software architecture including a control system. By analyzing
the target task spaces at home, the dimensions and configuration of PAPRAS are
determined. PAPRAS's arm is light (less than 6kg) with an optimized 3D-printed
structure, and it has a high payload (3kg) as a human-arm-sized manipulator. A
locking mechanism is embedded in the structure for better portability and the
3D-printed docking mount can be installed easily. PAPRAS's software
architecture is developed on an open-source framework and optimized for
low-latency multiagent-based distributed manipulator control. A process to
create new demonstrations is presented to show PAPRAS's ease of use and
efficiency. In the paper, simulations and hardware experiments are presented in
various demonstrations, including sink-to-dishwasher manipulation, coffee
making, mobile manipulation on a quadruped, and a suit-up demo to validate the
hardware and software design.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 19:02:41 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Kim",
"Joohyung",
""
],
[
"Mathur",
"Dhruv C",
""
],
[
"Shin",
"Kazuki",
""
],
[
"Taylor",
"Sean",
""
]
] |
new_dataset
| 0.999813 |
2302.09657
|
Kaustubh Kulkarni
|
Kaustubh Milind Kulkarni, Rohan S Jamadagni, Jeffrey Aaron Paul,
Sucheth Shenoy
|
Table Tennis Stroke Detection and Recognition Using Ball Trajectory Data
|
9 pages, 5 figures, 6 tables
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, the novel task of detecting and classifying table tennis
strokes solely using the ball trajectory has been explored. A single camera
setup positioned in the umpire's view has been employed to procure a dataset
consisting of six stroke classes executed by four professional table tennis
players. Ball tracking using YOLOv4, a traditional object detection model, and
TrackNetv2, a temporal heatmap-based model, has been implemented on our
dataset and their performances have been benchmarked. A mathematical approach
developed to extract temporal boundaries of strokes using the ball trajectory
data yielded a total of 2023 valid strokes in our dataset, while also detecting
services and missed strokes successfully. The temporal convolutional network we
developed performed stroke recognition on completely unseen data with an
accuracy of 87.155%. Several machine learning and deep learning based model
architectures have been trained for stroke recognition using ball trajectory
input and benchmarked based on their performances. While stroke recognition in
the field of table tennis has been extensively explored based on human action
recognition using video data focused on the player's actions, the use of ball
trajectory data for the same is an unexplored characteristic of the sport.
Hence, the motivation behind the work is to demonstrate that meaningful
inferences such as stroke detection and recognition can be drawn using minimal
input information.
|
[
{
"version": "v1",
"created": "Sun, 19 Feb 2023 19:13:24 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Kulkarni",
"Kaustubh Milind",
""
],
[
"Jamadagni",
"Rohan S",
""
],
[
"Paul",
"Jeffrey Aaron",
""
],
[
"Shenoy",
"Sucheth",
""
]
] |
new_dataset
| 0.99976 |
2302.09790
|
Cai Jialun
|
Jialun Cai, Hong Liu, Runwei Ding, Wenhao Li, Jianbing Wu, Miaoju Ban
|
HTNet: Human Topology Aware Network for 3D Human Pose Estimation
|
ICASSP23 Accepted Paper
| null | null | null |
cs.CV cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
3D human pose estimation errors propagate along the human body topology
and accumulate at the end joints of limbs. Inspired by the backtracking
mechanism in automatic control systems, we design an Intra-Part Constraint
module that utilizes the parent nodes as the reference to build topological
constraints for end joints at the part level. Further considering the hierarchy
of the human topology, joint-level and body-level dependencies are captured via
graph convolutional networks and self-attentions, respectively. Based on these
designs, we propose a novel Human Topology aware Network (HTNet), which adopts
a channel-split progressive strategy to sequentially learn the structural
priors of the human topology from multiple semantic levels: joint, part, and
body. Extensive experiments show that the proposed method improves the
estimation accuracy by 18.7% on the end joints of limbs and achieves
state-of-the-art results on Human3.6M and MPI-INF-3DHP datasets. Code is
available at https://github.com/vefalun/HTNet.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 06:31:29 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Cai",
"Jialun",
""
],
[
"Liu",
"Hong",
""
],
[
"Ding",
"Runwei",
""
],
[
"Li",
"Wenhao",
""
],
[
"Wu",
"Jianbing",
""
],
[
"Ban",
"Miaoju",
""
]
] |
new_dataset
| 0.981323 |
2302.09825
|
Jani Boutellier
|
Masud Fahim, Ilona Söchting, Luca Ferranti, Juho Kannala, Jani
Boutellier
|
TBPos: Dataset for Large-Scale Precision Visual Localization
|
Scandinavian Conference on Image Analysis 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image based localization is a classical computer vision challenge, with
several well-known datasets. Generally, datasets consist of a visual 3D
database that captures the modeled scenery, as well as query images whose 3D
pose is to be discovered. Usually the query images have been acquired with a
camera that differs from the imaging hardware used to collect the 3D database;
consequently, it is hard to acquire accurate ground truth poses between query
images and the 3D database. As the accuracy of visual localization algorithms
constantly improves, precise ground truth becomes increasingly important. This
paper proposes TBPos, a novel large-scale visual dataset for image based
positioning, which provides query images with fully accurate ground truth
poses: both the database images and the query images have been derived from the
same laser scanner data. In the experimental part of the paper, the proposed
dataset is evaluated by means of an image-based localization pipeline.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 08:14:13 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Fahim",
"Masud",
""
],
[
"Söchting",
"Ilona",
""
],
[
"Ferranti",
"Luca",
""
],
[
"Kannala",
"Juho",
""
],
[
"Boutellier",
"Jani",
""
]
] |
new_dataset
| 0.999765 |
2302.09842
|
Ohad Elishco
|
Zuo Ye and Ohad Elishco
|
Codes Over Absorption Channels
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a novel communication channel, called the
absorption channel, inspired by information transmission in neurons. Our
motivation comes from in-vivo nano-machines, emerging medical applications, and
brain-machine interfaces that communicate over the nervous system. Another
motivation comes from viewing our model as a specific deletion channel, which
may provide a new perspective and ideas to study the general deletion channel.
For any given finite alphabet, we give codes that can correct absorption
errors. For the binary alphabet, the problem is relatively trivial and we can
apply binary (multiple-) deletion correcting codes. For single-absorption
error, we prove that the Varshamov-Tenengolts codes can provide a near-optimal
code in our setting. When the alphabet size $q$ is at least $3$, we first
construct a single-absorption correcting code whose redundancy is at most
$3\log_q(n)+O(1)$. Then, based on this code and ideas introduced in
\cite{Gabrys2022IT}, we give a second construction of single-absorption
correcting codes with redundancy $\log_q(n)+12\log_q\log_q(n)+O(1)$, which is
optimal up to an $O\left(\log_q\log_q(n)\right)$ term.
Finally, we apply the syndrome compression technique with pre-coding to
obtain a subcode of the single-absorption correcting code. This subcode can
combat multiple-absorption errors and has low redundancy. For each setup,
efficient encoders and decoders are provided.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 08:57:23 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Ye",
"Zuo",
""
],
[
"Elishco",
"Ohad",
""
]
] |
new_dataset
| 0.999262 |
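For context, the Varshamov-Tenengolts code $VT_a(n)$ mentioned above is the classical binary single-deletion-correcting code; membership is a simple congruence check (the paper works over larger alphabets, where the construction is more involved):

```python
def in_vt_code(bits, a):
    """True iff the binary word lies in the Varshamov-Tenengolts code
    VT_a(n), i.e. sum of i * x_i over i = 1..n is congruent to a mod (n+1)."""
    n = len(bits)
    return sum(i * b for i, b in enumerate(bits, start=1)) % (n + 1) == a

print(in_vt_code([0, 1, 1, 0], a=0))  # 2 + 3 = 5 ≡ 0 (mod 5) -> True
```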
2302.09857
|
Ivan Magrin-Chagnolleau
|
Felipe Ariani (PRISM), Marcelo Caetano (PRISM), Javier Elipe Gimeno
(PRISM), Ivan Magrin-Chagnolleau (PRISM)
|
Computational Creativity: Compose the Music for a Movie using only its
Automatically Extracted Brightness Curve
|
in French language
|
Art et sciences , 2023, 7 (1), pp.12-21
| null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since its conception, the computer has found applications to accompany human
creativity. Today, the debate about computers and creativity involves several
challenges, such as understanding human creativity, modeling the creative
process, and programming the computer to exhibit behavior that appears to be
creative to some extent. In this paper, we are interested in how the computer
can be used as a tool to promote creativity in a musical composition. We
automatically extracted the brightness curve from a silent movie and then used
it to compose a piece of music to accompany the movie. We extracted several
parameters from the brightness curve, and applied compositional rules from
these parameters to write the instrumental music for the film. The final
composition has a synchronicity and aesthetic fit with the film that are
surprising. This compositional process also allowed for a degree of aesthetic
freedom that would otherwise have been impossible.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 09:39:29 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Ariani",
"Felipe",
"",
"PRISM"
],
[
"Caetano",
"Marcelo",
"",
"PRISM"
],
[
"Gimeno",
"Javier Elipe",
"",
"PRISM"
],
[
"Magrin-Chagnolleau",
"Ivan",
"",
"PRISM"
]
] |
new_dataset
| 0.964492 |
2302.09927
|
Guoxin Kang
|
Guoxin Kang, Lei Wang, Simin Chen, and Jianfeng Zhan
|
NHtapDB: Native HTAP Databases
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A native HTAP database (1) provides a near-data machine learning framework to
facilitate generating real-time business insights, where predefined change
thresholds trigger online training and deployment of new models, and (2)
offers a mixed-format store to guarantee the performance of HTAP workloads,
especially hybrid workloads that consist of OLAP queries in-between online
transactions. We make rigorous test plans for the native database with an
enhanced state-of-the-art HTAP benchmark.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 11:46:50 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Kang",
"Guoxin",
""
],
[
"Wang",
"Lei",
""
],
[
"Chen",
"Simin",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.999527 |
2302.09997
|
Daniel Barath
|
Daniel Barath, Dmytro Mishkin, Michal Polic, Wolfgang Förstner, Jiri
Matas
|
A Large Scale Homography Benchmark
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a large-scale dataset of Planes in 3D, Pi3D, of roughly 1000
planes observed in 10 000 images from the 1DSfM dataset, and HEB, a large-scale
homography estimation benchmark leveraging Pi3D. The applications of the Pi3D
dataset are diverse, e.g. training or evaluating monocular depth, surface
normal estimation and image matching algorithms. The HEB dataset consists of
226 260 homographies and includes roughly 4M correspondences. The homographies
link images that often undergo significant viewpoint and illumination changes.
As applications of HEB, we perform a rigorous evaluation of a wide range of
robust estimators and deep learning-based correspondence filtering methods,
establishing the current state-of-the-art in robust homography estimation. We
also evaluate the uncertainty of the SIFT orientations and scales w.r.t. the
ground truth coming from the underlying homographies and provide code for
comparing the uncertainty of custom detectors. The dataset is available at
\url{https://github.com/danini/homography-benchmark}.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 14:18:09 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Barath",
"Daniel",
""
],
[
"Mishkin",
"Dmytro",
""
],
[
"Polic",
"Michal",
""
],
[
"Förstner",
"Wolfgang",
""
],
[
"Matas",
"Jiri",
""
]
] |
new_dataset
| 0.999855 |
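As a small aside on the object of study: a homography is a 3x3 projective map between image planes. A minimal helper for applying one to points (illustrative, not part of the benchmark's code):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # (n, 3)
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide by w

H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])                       # pure translation
print(apply_homography(H, np.array([[10.0, 20.0]])))  # [[15. 17.]]
```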
2302.09998
|
Adrian Holzbock
|
Adrian Holzbock, Nicolai Kern, Christian Waldschmidt, Klaus Dietmayer,
Vasileios Belagiannis
|
Gesture Recognition with Keypoint and Radar Stream Fusion for Automated
Vehicles
|
Accepted for presentation at the 3rd AVVision Workshop at ECCV 2022,
October 23, 2022, Tel Aviv, Israel
|
In Computer Vision-ECCV 2022 Workshops: Tel Aviv, Israel, October
23-27, 2022, Proceedings, Part I (pp. 570-584). Cham: Springer Nature
Switzerland
|
10.1007/978-3-031-25056-9_36
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a joint camera and radar approach to enable autonomous vehicles to
understand and react to human gestures in everyday traffic. Initially, we
process the radar data with a PointNet followed by a spatio-temporal multilayer
perceptron (stMLP). Independently, the human body pose is extracted from the
camera frame and processed with a separate stMLP network. We propose a fusion
neural network for both modalities, including an auxiliary loss for each
modality. In our experiments with a collected dataset, we show the advantages
of gesture recognition with two modalities. Motivated by adverse weather
conditions, we also demonstrate promising performance when one of the sensors
lacks functionality.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 14:18:11 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Holzbock",
"Adrian",
""
],
[
"Kern",
"Nicolai",
""
],
[
"Waldschmidt",
"Christian",
""
],
[
"Dietmayer",
"Klaus",
""
],
[
"Belagiannis",
"Vasileios",
""
]
] |
new_dataset
| 0.994393 |
2302.10082
|
Zhang Xiaoyi
|
Zhang Xiaoyi, Cao Xuefeng, Yu Anzhu, Yu Wenshuai, Li Zhenqi, Quan
Yujun
|
UAVStereo: A Multiple Resolution Dataset for Stereo Matching in UAV
Scenarios
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stereo matching is a fundamental task for 3D scene reconstruction. Recently,
deep learning based methods have proven effective on some benchmark datasets,
such as KITTI and Scene Flow. UAVs (Unmanned Aerial Vehicles) are commonly
utilized for surface observation, and their captured images are frequently used
for detailed 3D reconstruction due to high resolution and low-altitude
acquisition. At present, the mainstream supervised learning network requires a
significant amount of training data with ground-truth labels to learn model
parameters. However, due to the scarcity of UAV stereo matching datasets, the
learning-based network cannot be applied to UAV images. To facilitate further
research, this paper proposes a novel pipeline to generate accurate and dense
disparity maps using detailed meshes reconstructed by UAV images and LiDAR
point clouds. Through the proposed pipeline, this paper constructs a
multi-resolution UAV scenario dataset, called UAVStereo, with over 34k stereo
image pairs covering 3 typical scenes. As far as we know, UAVStereo is the
first stereo matching dataset of UAV low-altitude scenarios. The dataset
includes synthetic and real stereo pairs to enable generalization from the
synthetic domain to the real domain. Furthermore, our UAVStereo dataset
provides multi-resolution and multi-scene image pairs to accommodate a variety
of sensors and environments. In this paper, we evaluate traditional and
state-of-the-art deep learning methods, highlighting their limitations in
addressing challenges in UAV scenarios and offering suggestions for future
research. The dataset is available at
https://github.com/rebecca0011/UAVStereo.git
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 16:45:27 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Xiaoyi",
"Zhang",
""
],
[
"Xuefeng",
"Cao",
""
],
[
"Anzhu",
"Yu",
""
],
[
"Wenshuai",
"Yu",
""
],
[
"Zhenqi",
"Li",
""
],
[
"Yujun",
"Quan",
""
]
] |
new_dataset
| 0.999741 |
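For background, stereo matching estimates per-pixel disparity, which a calibrated rig converts to depth via the standard pinhole relation Z = f·B/d. A one-line sketch with illustrative numbers:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Standard pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# e.g. f = 2000 px, B = 0.3 m, d = 10 px -> 60 m, plausible at UAV altitude
print(disparity_to_depth(10.0, 2000.0, 0.3))
```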
2302.10109
|
Jiatao Gu
|
Jiatao Gu, Alex Trevithick, Kai-En Lin, Josh Susskind, Christian
Theobalt, Lingjie Liu, Ravi Ramamoorthi
|
NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from
3D-aware Diffusion
|
Project page: https://jiataogu.me/nerfdiff/
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Novel view synthesis from a single image requires inferring occluded regions
of objects and scenes whilst simultaneously maintaining semantic and physical
consistency with the input. Existing approaches condition neural radiance
fields (NeRF) on local image features, projecting points to the input image
plane, and aggregating 2D features to perform volume rendering. However, under
severe occlusion, this projection fails to resolve uncertainty, resulting in
blurry renderings that lack details. In this work, we propose NerfDiff, which
addresses this issue by distilling the knowledge of a 3D-aware conditional
diffusion model (CDM) into NeRF through synthesizing and refining a set of
virtual views at test time. We further propose a novel NeRF-guided distillation
algorithm that simultaneously generates 3D consistent virtual views from the
CDM samples, and finetunes the NeRF based on the improved virtual views. Our
approach significantly outperforms existing NeRF-based and geometry-free
approaches on challenging datasets, including ShapeNet, ABO, and Clevr3D.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 17:12:00 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Gu",
"Jiatao",
""
],
[
"Trevithick",
"Alex",
""
],
[
"Lin",
"Kai-En",
""
],
[
"Susskind",
"Josh",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Liu",
"Lingjie",
""
],
[
"Ramamoorthi",
"Ravi",
""
]
] |
new_dataset
| 0.998665 |
1901.05894
|
Shiv Ram Dubey
|
Swalpa Kumar Roy, Suvojit Manna, Shiv Ram Dubey, Bidyut Baran
Chaudhuri
|
LiSHT: Non-Parametric Linearly Scaled Hyperbolic Tangent Activation
Function for Neural Networks
|
Accepted in 7th International Conference on Computer Vision and Image
Processing (CVIP), 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The activation function in a neural network introduces the non-linearity
required to deal with complex tasks. Several activation/non-linearity
functions are developed for deep learning models. However, most of the existing
activation functions suffer due to the dying gradient problem and
non-utilization of the large negative input values. In this paper, we propose a
Linearly Scaled Hyperbolic Tangent (LiSHT) for Neural Networks (NNs) by scaling
the Tanh linearly. The proposed LiSHT is non-parametric and tackles the dying
gradient problem. We perform experiments on benchmark datasets of different
types, such as vector data, image data and natural language data. We observe
superior performance using Multi-layer Perceptron (MLP), Residual Network
(ResNet) and Long-short term memory (LSTM) for data classification, image
classification and tweets classification tasks, respectively. The accuracy on
CIFAR100 dataset using ResNet model with LiSHT is improved by 9.48, 3.40, 3.16,
4.26, and 1.17\% as compared to Tanh, ReLU, PReLU, LReLU, and Swish,
respectively. We also show the qualitative results using loss landscape, weight
distribution and activations maps in support of the proposed activation
function.
|
[
{
"version": "v1",
"created": "Tue, 1 Jan 2019 02:24:06 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Aug 2020 10:51:23 GMT"
},
{
"version": "v3",
"created": "Wed, 25 May 2022 07:03:45 GMT"
},
{
"version": "v4",
"created": "Fri, 17 Feb 2023 01:49:12 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Roy",
"Swalpa Kumar",
""
],
[
"Manna",
"Suvojit",
""
],
[
"Dubey",
"Shiv Ram",
""
],
[
"Chaudhuri",
"Bidyut Baran",
""
]
] |
new_dataset
| 0.998114 |
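The abstract defines LiSHT as the Tanh scaled linearly by its input, i.e. f(x) = x·tanh(x); unlike ReLU it does not zero out large negative inputs. A sketch:

```python
import torch

def lisht(x: torch.Tensor) -> torch.Tensor:
    """LiSHT activation: linearly scaled hyperbolic tangent, f(x) = x * tanh(x)."""
    return x * torch.tanh(x)

x = torch.tensor([-3.0, -1.0, 0.0, 1.0, 3.0])
print(lisht(x))  # non-negative and symmetric; large negatives keep gradient
```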
2008.02275
|
Dan Hendrycks
|
Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and
Jerry Li and Dawn Song and Jacob Steinhardt
|
Aligning AI With Shared Human Values
|
ICLR 2021; the ETHICS dataset is available at
https://github.com/hendrycks/ethics/
| null | null | null |
cs.CY cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show how to assess a language model's knowledge of basic concepts of
morality. We introduce the ETHICS dataset, a new benchmark that spans concepts
in justice, well-being, duties, virtues, and commonsense morality. Models
predict widespread moral judgments about diverse text scenarios. This requires
connecting physical and social world knowledge to value judgements, a
capability that may enable us to steer chatbot outputs or eventually regularize
open-ended reinforcement learning agents. With the ETHICS dataset, we find that
current language models have a promising but incomplete ability to predict
basic human ethical judgements. Our work shows that progress can be made on
machine ethics today, and it provides a steppingstone toward AI that is aligned
with human values.
|
[
{
"version": "v1",
"created": "Wed, 5 Aug 2020 17:59:16 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Sep 2020 06:02:59 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Jan 2021 18:57:47 GMT"
},
{
"version": "v4",
"created": "Thu, 4 Mar 2021 21:47:22 GMT"
},
{
"version": "v5",
"created": "Sat, 24 Jul 2021 04:40:33 GMT"
},
{
"version": "v6",
"created": "Fri, 17 Feb 2023 16:08:22 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Hendrycks",
"Dan",
""
],
[
"Burns",
"Collin",
""
],
[
"Basart",
"Steven",
""
],
[
"Critch",
"Andrew",
""
],
[
"Li",
"Jerry",
""
],
[
"Song",
"Dawn",
""
],
[
"Steinhardt",
"Jacob",
""
]
] |
new_dataset
| 0.951089 |
2110.14795
|
Jiancheng Yang
|
Jiancheng Yang, Rui Shi, Donglai Wei, Zequan Liu, Lin Zhao, Bilian Ke,
Hanspeter Pfister, Bingbing Ni
|
MedMNIST v2 -- A large-scale lightweight benchmark for 2D and 3D
biomedical image classification
|
The data and code are publicly available at https://medmnist.com/.
arXiv admin note: text overlap with arXiv:2010.14925
|
Scientific Data 2023
|
10.1038/s41597-022-01721-8
| null |
cs.CV cs.AI cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce MedMNIST v2, a large-scale MNIST-like dataset collection of
standardized biomedical images, including 12 datasets for 2D and 6 datasets for
3D. All images are pre-processed into a small size of 28x28 (2D) or 28x28x28
(3D) with the corresponding classification labels so that no background
knowledge is required for users. Covering primary data modalities in biomedical
images, MedMNIST v2 is designed to perform classification on lightweight 2D and
3D images with various dataset scales (from 100 to 100,000) and diverse tasks
(binary/multi-class, ordinal regression, and multi-label). The resulting
dataset, consisting of 708,069 2D images and 10,214 3D images in total, could
support numerous research / educational purposes in biomedical image analysis,
computer vision, and machine learning. We benchmark several baseline methods on
MedMNIST v2, including 2D / 3D neural networks and open-source / commercial
AutoML tools. The data and code are publicly available at
https://medmnist.com/.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 22:02:04 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Sep 2022 06:07:53 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Yang",
"Jiancheng",
""
],
[
"Shi",
"Rui",
""
],
[
"Wei",
"Donglai",
""
],
[
"Liu",
"Zequan",
""
],
[
"Zhao",
"Lin",
""
],
[
"Ke",
"Bilian",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Ni",
"Bingbing",
""
]
] |
new_dataset
| 0.999909 |
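A minimal loading sketch using the authors' medmnist package (per the documentation at https://medmnist.com/; treat the exact call signature as an assumption):

```python
# Loading one MedMNIST v2 subset via the authors' package (pip install
# medmnist). Call signature per its docs; details are assumptions here.
from medmnist import INFO, PathMNIST

train = PathMNIST(split="train", download=True)  # 28x28 RGB pathology images
print(len(train), INFO["pathmnist"]["task"])     # dataset size, "multi-class"
img, label = train[0]                            # (PIL image, label array)
```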
2112.12310
|
Xiang Ling Dr.
|
Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, Xiang
Chen, Yaguan Qian, Chunming Wu, Shouling Ji, Tianyue Luo, Jingzheng Wu,
Yanjun Wu
|
Adversarial Attacks against Windows PE Malware Detection: A Survey of
the State-of-the-Art
|
Accepted by ELSEVIER Computers & Security (COSE)
| null |
10.1016/j.cose.2023.103134
| null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Malware has been one of the most damaging threats to computers that span
across multiple operating systems and various file formats. To defend against
ever-increasing and ever-evolving malware, tremendous efforts have been made to
propose a variety of malware detection methods that attempt to effectively and
efficiently detect malware so as to mitigate possible damages as early as
possible. Recent studies have shown that, on the one hand, existing ML and DL
techniques enable superior solutions in detecting newly emerging and previously
unseen malware. However, on the other hand, ML and DL models are inherently
vulnerable to adversarial attacks in the form of adversarial examples. In this
paper, we focus on malware with the file format of portable executable (PE) in
the family of Windows operating systems, namely Windows PE malware, as a
representative case to study the adversarial attack methods in such adversarial
settings. To be specific, we start by first outlining the general learning
framework of Windows PE malware detection based on ML/DL and subsequently
highlighting three unique challenges of performing adversarial attacks in the
context of Windows PE malware. Then, we conduct a comprehensive and systematic
review to categorize the state-of-the-art adversarial attacks against PE
malware detection, as well as corresponding defenses to increase the robustness
of Windows PE malware detection. Finally, we conclude the paper by first
presenting other related attacks against Windows PE malware detection beyond
the adversarial attacks and then shedding light on future research directions
and opportunities. In addition, a curated resource list of adversarial attacks
and defenses for Windows PE malware detection is also available at
https://github.com/ryderling/adversarial-attacks-and-defenses-for-windows-pe-malware-detection.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 02:12:43 GMT"
},
{
"version": "v2",
"created": "Sun, 9 Oct 2022 07:36:55 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Dec 2022 17:26:14 GMT"
},
{
"version": "v4",
"created": "Thu, 16 Feb 2023 06:38:58 GMT"
},
{
"version": "v5",
"created": "Fri, 17 Feb 2023 02:43:36 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Ling",
"Xiang",
""
],
[
"Wu",
"Lingfei",
""
],
[
"Zhang",
"Jiangyu",
""
],
[
"Qu",
"Zhenqing",
""
],
[
"Deng",
"Wei",
""
],
[
"Chen",
"Xiang",
""
],
[
"Qian",
"Yaguan",
""
],
[
"Wu",
"Chunming",
""
],
[
"Ji",
"Shouling",
""
],
[
"Luo",
"Tianyue",
""
],
[
"Wu",
"Jingzheng",
""
],
[
"Wu",
"Yanjun",
""
]
] |
new_dataset
| 0.994885 |
2201.00947
|
Shiv Ram Dubey
|
Bulla Rajesh, Abhishek Kumar Gupta, Ayush Raj, Mohammed Javed, Shiv
Ram Dubey
|
HWRCNet: Handwritten Word Recognition in JPEG Compressed Domain using
CNN-BiLSTM Network
|
Accepted in International Conference on Data Analytics and Learning,
2022
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Handwritten word recognition from document images using deep learning is an
active research area in the field of Document Image Analysis and Recognition.
In the present era of Big data, since more and more documents are being
generated and archived in the compressed form to provide better storage and
transmission efficiencies, the problem of word recognition in the respective
compressed domain without decompression becomes very challenging. The
traditional methods employ decompression and then apply learning algorithms
over them, therefore, novel algorithms are to be designed in order to apply
learning techniques directly in the compressed representations/domains. In this
direction, this research paper proposes a novel HWRCNet model for handwritten
word recognition directly in the compressed domain specifically focusing on
JPEG format. The proposed model combines the Convolutional Neural Network (CNN)
and Bi-Directional Long Short Term Memory (BiLSTM) based Recurrent Neural
Network (RNN). Basically, we train the model using JPEG compressed word images
and observe a very appealing performance with $89.05\%$ word recognition
accuracy and $13.37\%$ character error rate.
|
[
{
"version": "v1",
"created": "Tue, 4 Jan 2022 02:52:56 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Jan 2022 16:01:08 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Feb 2023 06:56:06 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Rajesh",
"Bulla",
""
],
[
"Gupta",
"Abhishek Kumar",
""
],
[
"Raj",
"Ayush",
""
],
[
"Javed",
"Mohammed",
""
],
[
"Dubey",
"Shiv Ram",
""
]
] |
new_dataset
| 0.999032 |
2205.08406
|
Ali Kariminezhad
|
Ravi Kothari, Ali Kariminezhad, Christian Mayr, Haoming Zhang
|
Raw Radar data based Object Detection and Heading estimation using Cross
Attention
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Radar is an inevitable part of the perception sensor set for autonomous
driving functions. It plays a gap-filling role to complement the shortcomings
of other sensors in diverse scenarios and weather conditions. In this paper, we
propose a Deep Neural Network (DNN) based end-to-end object detection and
heading estimation framework using raw radar data. To this end, we approach the
problem in both a Data-centric and model-centric manner. We refine the publicly
available CARRADA dataset and introduce Bivariate norm annotations. Besides,
the baseline model is improved by a transformer-inspired cross-attention fusion,
and center-offset maps are further added to reduce localisation error. Our
proposed model improves the detection mean Average Precision (mAP) by 5%, while
reducing the model complexity by almost 23%. For comprehensive scene
understanding purposes, we extend our model for heading estimation. The
improved ground truth and the proposed model are available on GitHub.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 14:42:13 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 13:51:50 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Kothari",
"Ravi",
""
],
[
"Kariminezhad",
"Ali",
""
],
[
"Mayr",
"Christian",
""
],
[
"Zhang",
"Haoming",
""
]
] |
new_dataset
| 0.998153 |
2205.10012
|
Marija Sakota
|
Marija Sakota, Maxime Peyrard, Robert West
|
Descartes: Generating Short Descriptions of Wikipedia Articles
| null | null |
10.1145/3543507.3583220
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Wikipedia is one of the richest knowledge sources on the Web today. In order
to facilitate navigating, searching, and maintaining its content, Wikipedia's
guidelines state that all articles should be annotated with a so-called short
description indicating the article's topic (e.g., the short description of beer
is "Alcoholic drink made from fermented cereal grains"). Nonetheless, a large
fraction of articles (ranging from 10.2% in Dutch to 99.7% in Kazakh) have no
short description yet, with detrimental effects for millions of Wikipedia
users. Motivated by this problem, we introduce the novel task of automatically
generating short descriptions for Wikipedia articles and propose Descartes, a
multilingual model for tackling it. Descartes integrates three sources of
information to generate an article description in a target language: the text
of the article in all its language versions, the already-existing descriptions
(if any) of the article in other languages, and semantic type information
obtained from a knowledge graph. We evaluate a Descartes model trained for
handling 25 languages simultaneously, showing that it beats baselines
(including a strong translation-based baseline) and performs on par with
monolingual models tailored for specific languages. A human evaluation on three
languages further shows that the quality of Descartes's descriptions is largely
indistinguishable from that of human-written descriptions; e.g., 91.3% of our
English descriptions (vs. 92.1% of human-written descriptions) pass the bar for
inclusion in Wikipedia, suggesting that Descartes is ready for production, with
the potential to support human editors in filling a major gap in today's
Wikipedia across languages.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 08:03:07 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Nov 2022 09:58:37 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Feb 2023 09:26:36 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Sakota",
"Marija",
""
],
[
"Peyrard",
"Maxime",
""
],
[
"West",
"Robert",
""
]
] |
new_dataset
| 0.999764 |
2205.11117
|
Paul Scherer
|
Paul Scherer and Thomas Gaudelet and Alison Pouplin and Alice Del
Vecchio and Suraj M S and Oliver Bolton and Jyothish Soman and Jake P.
Taylor-King and Lindsay Edwards
|
PyRelationAL: a python library for active learning research and
development
|
Updated paper reflecting 1.0.0 release
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In constrained real-world scenarios, where it may be challenging or costly to
generate data, disciplined methods for acquiring informative new data points
are of fundamental importance for the efficient training of machine learning
(ML) models. Active learning (AL) is a sub-field of ML focused on the
development of methods to iteratively and economically acquire data through
strategically querying new data points that are the most useful for a
particular task. Here, we introduce PyRelationAL, an open source library for AL
research. We describe a modular toolkit that is compatible with diverse ML
frameworks (e.g. PyTorch, scikit-learn, TensorFlow, JAX). Furthermore, the
library implements a wide range of published methods and provides API access to
wide-ranging benchmark datasets and AL task configurations based on existing
literature. The library is supplemented by an expansive set of tutorials,
demos, and documentation to help users get started. PyRelationAL is maintained
using modern software engineering practices -- with an inclusive contributor
code of conduct -- to promote long term library quality and utilisation.
PyRelationAL is available under a permissive Apache licence on PyPi and at
https://github.com/RelationRx/pyrelational.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 08:21:21 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 15:45:35 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Scherer",
"Paul",
""
],
[
"Gaudelet",
"Thomas",
""
],
[
"Pouplin",
"Alison",
""
],
[
"Del Vecchio",
"Alice",
""
],
[
"S",
"Suraj M",
""
],
[
"Bolton",
"Oliver",
""
],
[
"Soman",
"Jyothish",
""
],
[
"Taylor-King",
"Jake P.",
""
],
[
"Edwards",
"Lindsay",
""
]
] |
new_dataset
| 0.994914 |
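The abstract describes iteratively and economically acquiring informative data points. As a generic illustration of that loop (least-confidence sampling with scikit-learn on toy data; this is the underlying workflow, not the PyRelationAL API):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic pool-based active learning with least-confidence sampling.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labelled = list(range(20))                       # small seed set
pool = [i for i in range(len(X)) if i not in labelled]

for _ in range(5):                               # five acquisition rounds
    model = LogisticRegression().fit(X[labelled], y[labelled])
    probs = model.predict_proba(X[pool])
    worst = np.argsort(probs.max(axis=1))[:10]   # 10 least-confident points
    for j in sorted(worst, reverse=True):        # pop from the back first
        labelled.append(pool.pop(j))

print(model.score(X, y))                         # accuracy after acquisition
```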
2206.06238
|
Satyajit Ghosh
|
Satyajit Ghosh, Mousumi Dutta, Tanaya Das
|
Indian Legal Text Summarization: A Text Normalisation-based Approach
|
Preprint. Accepted at 2022 IEEE 19th India Council International
Conference (INDICON)
| null |
10.1109/INDICON56171.2022.10039891
| null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In the Indian court system, pending cases have long been a problem. There are
more than 4 crore (40 million) cases outstanding. Manually summarising
hundreds of documents
is a time-consuming and tedious task for legal stakeholders. Many
state-of-the-art models for text summarization have emerged as machine learning
has progressed. Domain-independent models don't do well with legal texts, and
fine-tuning those models for the Indian Legal System is problematic due to a
lack of publicly available datasets. To improve the performance of
domain-independent models, the authors have proposed a methodology for
normalising legal texts in the Indian context. The authors experimented with
two state-of-the-art domain-independent models for legal text summarization,
namely BART and PEGASUS. BART and PEGASUS are put through their paces in terms
of extractive and abstractive summarization to understand the effectiveness of
the text normalisation approach. Summarised texts are evaluated by domain
experts on multiple parameters and using ROUGE metrics. It shows the proposed
text normalisation approach is effective in legal texts with domain-independent
models.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 15:16:50 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2022 10:46:27 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Ghosh",
"Satyajit",
""
],
[
"Dutta",
"Mousumi",
""
],
[
"Das",
"Tanaya",
""
]
] |
new_dataset
| 0.980957 |
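The paper's normalisation rules are not spelled out in this abstract; the sketch below shows the general shape of rule-based normalisation of legal text, with purely hypothetical rules:

```python
import re

# Hypothetical normalisation rules for Indian legal text -- illustrative
# only, not the authors' actual pipeline.
ABBREVIATIONS = {r"\bu/s\b": "under section", r"\bHon'ble\b": "Honourable"}

def normalise(text: str) -> str:
    text = re.sub(r"\s+", " ", text).strip()     # collapse whitespace
    for pattern, expansion in ABBREVIATIONS.items():
        text = re.sub(pattern, expansion, text, flags=re.IGNORECASE)
    return text

print(normalise("The accused  was charged u/s 302 of the IPC."))
```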
2207.07859
|
Jianfei Yang
|
Jianfei Yang, Xinyan Chen, Dazhuo Wang, Han Zou, Chris Xiaoxuan Lu,
Sumei Sun, Lihua Xie
|
SenseFi: A Library and Benchmark on Deep-Learning-Empowered WiFi Human
Sensing
|
A benchmark and model zoo for WiFi CSI Human sensing based on deep
learning methods. Accepted by Patterns, Cell Press
| null | null | null |
cs.LG cs.AI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
WiFi sensing has been evolving rapidly in recent years. Empowered by
propagation models and deep learning methods, many challenging applications are
realized such as WiFi-based human activity recognition and gesture recognition.
However, in contrast to deep learning for visual recognition and natural
language processing, no sufficiently comprehensive public benchmark exists. In
this paper, we review the recent progress on deep learning enabled WiFi
sensing, and then propose a benchmark, SenseFi, to study the effectiveness of
various deep learning models for WiFi sensing. These advanced models are
compared in terms of distinct sensing tasks, WiFi platforms, recognition
accuracy, model size, computational complexity, feature transferability, and
adaptability of unsupervised learning. It is also regarded as a tutorial for
deep learning based WiFi sensing, starting from CSI hardware platform to
sensing algorithms. The extensive experiments provide us with experiences in
deep model design, learning strategy skills and training techniques for
real-world applications. To the best of our knowledge, this is the first
benchmark with an open-source library for deep learning in WiFi sensing
research. The benchmark codes are available at
https://github.com/xyanchen/WiFi-CSI-Sensing-Benchmark.
|
[
{
"version": "v1",
"created": "Sat, 16 Jul 2022 07:23:45 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2022 12:44:36 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Feb 2023 06:11:04 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Yang",
"Jianfei",
""
],
[
"Chen",
"Xinyan",
""
],
[
"Wang",
"Dazhuo",
""
],
[
"Zou",
"Han",
""
],
[
"Lu",
"Chris Xiaoxuan",
""
],
[
"Sun",
"Sumei",
""
],
[
"Xie",
"Lihua",
""
]
] |
new_dataset
| 0.983631 |
2209.02429
|
Omran Alamayreh
|
Omran Alamayreh, Giovanna Maria Dimitri, Jun Wang, Benedetta Tondi,
Mauro Barni
|
Which country is this picture from? New data and methods for DNN-based
country recognition
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recognizing the country where a picture has been taken has many potential
applications, such as identification of fake news and prevention of
disinformation campaigns. Previous works focused on the estimation of the
geo-coordinates where a picture has been taken. Yet, recognizing in which
country an image was taken could be more critical, from a semantic and forensic
point of view, than estimating its spatial coordinates. In the above framework,
this paper provides two contributions. First, we introduce the VIPPGeo dataset,
containing 3.8 million geo-tagged images. Secondly, we used the dataset to
train a model casting the country recognition problem as a classification
problem. The experiments show that our model provides better results than the
current state of the art. Notably, we found that asking the network to identify
the country provides better results than estimating the geo-coordinates and
then tracing them back to the country where the picture was taken.
|
[
{
"version": "v1",
"created": "Fri, 2 Sep 2022 10:56:41 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 15:31:32 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Alamayreh",
"Omran",
""
],
[
"Dimitri",
"Giovanna Maria",
""
],
[
"Wang",
"Jun",
""
],
[
"Tondi",
"Benedetta",
""
],
[
"Barni",
"Mauro",
""
]
] |
new_dataset
| 0.998085 |
2210.05917
|
Junwoo Park
|
Junwoo Park, Youngwoo Cho, Gyuhyeon Sim, Hojoon Lee, Jaegul Choo
|
Enemy Spotted: in-game gun sound dataset for gunshot classification and
localization
|
Accepted at IEEE Conference on Games (GoG) 2022
| null |
10.1109/CoG51982.2022.9893670
| null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recently, deep learning-based methods have drawn considerable attention due to
their simplicity and high performance without domain knowledge in sound
classification
and localization tasks. However, a lack of gun sounds in existing datasets has
been a major obstacle to implementing a support system to spot criminals from
their gunshots by leveraging deep learning models. Since the occurrence of
gunshots is rare and unpredictable, it is impractical to collect gun sounds in
the real world. As an alternative, gun sounds can be obtained from an FPS game
that is designed to mimic real-world warfare. The recent FPS game offers a
realistic environment where we can safely collect gunshot data while simulating
even dangerous situations. By exploiting the advantage of the game environment,
we construct a gunshot dataset, namely BGG, for the firearm classification and
gunshot localization tasks. The BGG dataset consists of 37 different types of
firearms, distances, and directions between the sound source and a receiver. We
carefully verify that the in-game gunshot data has sufficient information to
identify the location and type of gunshots by training several sound
classification and localization baselines on the BGG dataset. Afterward, we
demonstrate that the accuracy of real-world firearm classification and
localization tasks can be enhanced by utilizing the BGG dataset.
|
[
{
"version": "v1",
"created": "Wed, 12 Oct 2022 04:36:56 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 02:04:03 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Park",
"Junwoo",
""
],
[
"Cho",
"Youngwoo",
""
],
[
"Sim",
"Gyuhyeon",
""
],
[
"Lee",
"Hojoon",
""
],
[
"Choo",
"Jaegul",
""
]
] |
new_dataset
| 0.9996 |
2210.06742
|
Xue Yang
|
Xue Yang, Gefan Zhang, Wentong Li, Xuehui Wang, Yue Zhou, Junchi Yan
|
H2RBox: Horizontal Box Annotation is All You Need for Oriented Object
Detection
|
15 pages, 6 figures, 8 tables, accepted by ICLR 2023, the source code
is available at https://github.com/yangxue0827/h2rbox-mmrotate and
https://github.com/yangxue0827/h2rbox-jittor
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Oriented object detection emerges in many applications from aerial images to
autonomous driving, while many existing detection benchmarks are annotated with
horizontal bounding boxes only, which are also less costly than fine-grained
rotated boxes, leading to a gap between the readily available training corpus and
the rising demand for oriented object detection. This paper proposes a simple
yet effective oriented object detection approach called H2RBox merely using
horizontal box annotation for weakly-supervised training, which closes the
above gap and shows competitive performance even against those trained with
rotated boxes. The cores of our method are weakly- and self-supervised
learning, which predicts the angle of the object by learning the consistency of
two different views. To our best knowledge, H2RBox is the first horizontal box
annotation-based oriented object detector. Compared to an alternative i.e.
horizontal box-supervised instance segmentation with our post adaption to
oriented object detection, our approach is not susceptible to the prediction
quality of mask and can perform more robustly in complex scenes containing a
large number of dense objects and outliers. Experimental results show that
H2RBox has significant performance and speed advantages over horizontal
box-supervised instance segmentation methods, as well as lower memory
requirements. While compared to rotated box-supervised oriented object
detectors, our method shows very close performance and speed. The source code
is available at PyTorch-based
\href{https://github.com/yangxue0827/h2rbox-mmrotate}{MMRotate} and
Jittor-based \href{https://github.com/yangxue0827/h2rbox-jittor}{JDet}.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 05:12:45 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Oct 2022 05:31:36 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Feb 2023 05:02:05 GMT"
},
{
"version": "v4",
"created": "Mon, 6 Feb 2023 12:08:34 GMT"
},
{
"version": "v5",
"created": "Fri, 17 Feb 2023 15:32:01 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Yang",
"Xue",
""
],
[
"Zhang",
"Gefan",
""
],
[
"Li",
"Wentong",
""
],
[
"Wang",
"Xuehui",
""
],
[
"Zhou",
"Yue",
""
],
[
"Yan",
"Junchi",
""
]
] |
new_dataset
| 0.98968 |
2212.13876
|
Dennis Melamed
|
Dennis Melamed, Cameron Johnson, Chen Zhao, Russell Blue, Philip
Morrone, Anthony Hoogs, Brian Clipp
|
xFBD: Focused Building Damage Dataset and Analysis
|
8 pages + 3-page supplemental, 8 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The xView2 competition and xBD dataset spurred significant advancements in
overhead building damage detection, but the competition's pixel-level scoring
can lead to reduced solution performance in areas with tight clusters of
buildings or uninformative context. We seek to advance automatic building
damage assessment for disaster relief by proposing an auxiliary challenge to
the original xView2 competition. This new challenge involves a new dataset and
metrics indicating solution performance when damage is more local and limited
than in xBD. Our challenge measures a network's ability to identify individual
buildings and their damage level without excessive reliance on the buildings'
surroundings. Methods that succeed on this challenge will provide more
fine-grained, precise damage information than original xView2 solutions. The
best-performing xView2 networks' performances dropped noticeably in our new
limited/local damage detection task. The common causes of failure observed are
that (1) building objects and their classifications are not separated well, and
(2) when they are, the classification is strongly biased by surrounding
buildings and other damage context. Thus, we release our augmented version of
the dataset with additional object-level scoring metrics
(https://drive.google.com/drive/folders/1VuQZuAg6-Yo8r5J4OCx3ZRpa_fv9aaDX?usp=sharing)
to test independence and separability of building objects, alongside the
pixel-level performance metrics of the original competition. We also experiment
with new baseline models which improve independence and separability of
building damage predictions. Our results indicate that building damage
detection is not a fully-solved problem, and we invite others to use and build
on our dataset augmentations and metrics.
|
[
{
"version": "v1",
"created": "Fri, 23 Dec 2022 21:01:18 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Jan 2023 22:27:49 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Feb 2023 21:04:08 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Melamed",
"Dennis",
""
],
[
"Johnson",
"Cameron",
""
],
[
"Zhao",
"Chen",
""
],
[
"Blue",
"Russell",
""
],
[
"Morrone",
"Philip",
""
],
[
"Hoogs",
"Anthony",
""
],
[
"Clipp",
"Brian",
""
]
] |
new_dataset
| 0.999684 |
2302.06301
|
Xiaoqian Huang
|
Xiaoqian Huang, Kachole Sanket, Abdulla Ayyad, Fariborz Baghaei
Naeini, Dimitrios Makris, Yahya Zweiri
|
A Neuromorphic Dataset for Object Segmentation in Indoor Cluttered
Environment
| null | null | null | null |
cs.CV cs.DB cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Taking advantage of an event-based camera, the issues of motion blur, low
dynamic range and low time sampling of standard cameras can all be addressed.
However, there is a lack of event-based datasets dedicated to the benchmarking
of segmentation algorithms, especially those that provide depth information,
which is critical for segmentation in occluded scenes. This paper proposes a
new Event-based Segmentation Dataset (ESD), a high-quality 3D spatial and
temporal dataset for object segmentation in an indoor cluttered environment.
Our proposed dataset ESD comprises 145 sequences with 14,166 RGB frames that
are manually annotated with instance masks. Overall 21.88 million and 20.80
million events from two event-based cameras in a stereographic configuration
are collected, respectively. To the best of our knowledge, this densely
annotated and 3D spatial-temporal event-based segmentation benchmark of
tabletop objects is the first of its kind. By releasing ESD, we expect to
provide the community with a challenging segmentation benchmark with high
quality.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 12:02:51 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 08:33:28 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Huang",
"Xiaoqian",
""
],
[
"Sanket",
"Kachole",
""
],
[
"Ayyad",
"Abdulla",
""
],
[
"Naeini",
"Fariborz Baghaei",
""
],
[
"Makris",
"Dimitrios",
""
],
[
"Zweiri",
"Yahya",
""
]
] |
new_dataset
| 0.999714 |
2302.06758
|
Yuanqing Wang
|
Yuanqing Wang, Iv\'an Pulido, Kenichiro Takaba, Benjamin Kaminow,
Jenke Scheen, Lily Wang, John D. Chodera
|
EspalomaCharge: Machine learning-enabled ultra-fast partial charge
assignment
| null | null | null | null |
cs.LG physics.chem-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Atomic partial charges are crucial parameters in molecular dynamics (MD)
simulation, dictating the electrostatic contributions to intermolecular
energies, and thereby the potential energy landscape. Traditionally, the
assignment of partial charges has relied on surrogates of \textit{ab initio}
semiempirical quantum chemical methods such as AM1-BCC, and is expensive for
large systems or large numbers of molecules. We propose a hybrid physical /
graph neural network-based approximation to the widely popular AM1-BCC charge
model that is orders of magnitude faster while maintaining accuracy comparable
to differences in AM1-BCC implementations. Our hybrid approach couples a graph
neural network to a streamlined charge equilibration approach in order to
predict molecule-specific atomic electronegativity and hardness parameters,
followed by analytical determination of optimal charge-equilibrated parameters
that preserves total molecular charge. This hybrid approach scales linearly
with the number of atoms, enabling, for the first time, the use of fully
consistent charge models for small molecules and biopolymers for the
construction of next-generation self-consistent biomolecular force fields.
Implemented in the free and open source package \texttt{espaloma\_charge}, this
approach provides drop-in replacements for both AmberTools \texttt{antechamber}
and the Open Force Field Toolkit charging workflows, in addition to stand-alone
charge generation interfaces. Source code is available at
\url{https://github.com/choderalab/espaloma_charge}.
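As a minimal sketch of the analytic charge-equilibration step described above: given per-atom electronegativities and hardnesses (in practice predicted by the graph neural network), the constrained quadratic objective has a closed-form solution that conserves total molecular charge. The quadratic QEq-style functional form below is a standard textbook assumption; the package's exact parameterization may differ.

```python
import numpy as np

def equilibrate_charges(e, s, total_charge=0.0):
    """Closed-form charge equilibration (QEq-style sketch).

    Minimizes sum_i (e_i*q_i + s_i*q_i**2) subject to sum_i q_i = Q.
    Stationarity gives q_i = (lam - e_i) / (2*s_i); the Lagrange
    multiplier lam is fixed by the total-charge constraint.
    """
    e, s = np.asarray(e, float), np.asarray(s, float)
    inv = 1.0 / (2.0 * s)                      # assumes hardness s_i > 0
    lam = (total_charge + np.sum(e * inv)) / np.sum(inv)
    return (lam - e) * inv

# Toy usage: three atoms, neutral molecule (values are placeholders).
q = equilibrate_charges(e=[2.0, 2.6, 2.2], s=[4.0, 5.0, 4.5], total_charge=0.0)
print(q, q.sum())  # charges sum to ~0 by construction
```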
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 00:02:31 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 21:16:58 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Wang",
"Yuanqing",
""
],
[
"Pulido",
"Iván",
""
],
[
"Takaba",
"Kenichiro",
""
],
[
"Kaminow",
"Benjamin",
""
],
[
"Scheen",
"Jenke",
""
],
[
"Wang",
"Lily",
""
],
[
"Chodera",
"John D.",
""
]
] |
new_dataset
| 0.961707 |
2302.08417
|
RuQing G. Xu
|
RuQing G. Xu and Field G. Van Zee and Robert A. van de Geijn
|
GEMMFIP: Unifying GEMM in BLIS
|
16 pages, 7 figures, 2 algorithms
| null | null | null |
cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
Matrix libraries often focus on achieving high performance for problems
considered to be either "small" or "large", as these two scenarios tend to
respond best to different optimization strategies. We propose a unified
technique for implementing matrix operations like general matrix multiplication
(GEMM) that can achieve high performance for both small and large problem
sizes. The key is to fuse packing -- an operation that copies data to a
contiguous layout in memory and which is critical for large matrix performance
-- with the first computational "pass" over that data. This boosts performance
across the problem size spectrum. As a result, tuning general-purpose libraries
becomes simpler since it obviates the need to carefully express and
parameterize logic that chooses between a "small matrix" strategy and a "large
matrix" strategy. A prototype implementation of the technique built with the
BLAS-like Library Instantiation Software (BLIS) framework is described and
performance on a range of architectures is reported.
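A rough Python-level analogue of the fused-packing idea is sketched below: in a blocked GEMM loop nest, the contiguous (packed) copy of each B panel is created during the panel's first computational use rather than in a separate packing pass. Real BLIS fuses packing inside C micro-kernels, so this NumPy sketch only illustrates the control flow, not the performance mechanism.

```python
import numpy as np

def gemm_fused_pack(A, B, mc=64, kc=64, nc=64):
    """Blocked GEMM sketch where packing a B panel is fused with the
    first computational pass over it (illustrative only)."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for jc in range(0, n, nc):
        for pc in range(0, k, kc):
            B_packed = None  # no separate up-front packing pass
            for ic in range(0, m, mc):
                A_blk = np.ascontiguousarray(A[ic:ic + mc, pc:pc + kc])
                if B_packed is None:
                    # "Fused" packing: the first use of this panel also
                    # produces its contiguous copy for later reuse.
                    B_packed = np.ascontiguousarray(B[pc:pc + kc, jc:jc + nc])
                C[ic:ic + mc, jc:jc + nc] += A_blk @ B_packed
    return C

A, B = np.random.rand(200, 150), np.random.rand(150, 180)
assert np.allclose(gemm_fused_pack(A, B), A @ B)
```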
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 16:52:49 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 03:24:04 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Xu",
"RuQing G.",
""
],
[
"Van Zee",
"Field G.",
""
],
[
"van de Geijn",
"Robert A.",
""
]
] |
new_dataset
| 0.996615 |
2302.08563
|
Md Rashedur Rahman
|
Md Rashedur Rahman, Moinul Hossain, Jiang Xie
|
PACMAN Attack: A Mobility-Powered Attack in Private 5G-Enabled
Industrial Automation System
|
6 pages, 7 Figures, Accepted in IEEE International Conference on
Communications 2023
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
3GPP has introduced Private 5G to support the next-generation industrial
automation system (IAS) due to the versatility and flexibility of 5G
architecture. Besides the 3.5GHz CBRS band, unlicensed spectrum bands, like
5GHz, are considered as an additional medium because of their free and abundant
nature. However, while utilizing the unlicensed band, industrial equipment must
coexist with incumbents, e.g., Wi-Fi, which could introduce new security
threats and resuscitate old ones. In this paper, we propose a novel attack
strategy conducted by a mobility-enabled malicious Wi-Fi access point (mmAP),
namely \textit{PACMAN} attack, to exploit vulnerabilities introduced by
heterogeneous coexistence. A mmAP is capable of moving around the physical
surface to identify mission-critical devices, hopping through the frequency
domain to detect the victim's operating channel, and launching traditional MAC
layer-based attacks. The multi-dimensional mobility of the attacker makes it
impervious to state-of-the-art detection techniques that assume static
adversaries. In addition, we propose a novel Markov Decision Process (MDP)
based framework to intelligently design an attacker's multi-dimensional
mobility in space and frequency. Mathematical analysis and extensive simulation
results exhibit the adverse effect of the proposed mobility-powered attack.
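To make the MDP framing concrete, here is a generic value-iteration sketch over a toy state space (e.g., attacker position and channel as states, stay/hop as actions). The transition and reward structure below is random placeholder data, not the paper's model.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Generic value iteration: P[a] is an SxS transition matrix,
    R an SxA reward matrix. Returns optimal values and policy."""
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy 4-state, 2-action MDP (e.g., stay vs. hop position/channel).
rng = np.random.default_rng(0)
P = [rng.dirichlet(np.ones(4), size=4) for _ in range(2)]
R = rng.uniform(0, 1, size=(4, 2))
V, policy = value_iteration(P, R)
print(policy)
```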
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 20:12:56 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Rahman",
"Md Rashedur",
""
],
[
"Hossain",
"Moinul",
""
],
[
"Xie",
"Jiang",
""
]
] |
new_dataset
| 0.998811 |
2302.08573
|
Vuthea Chheang
|
Lauren Baron, Vuthea Chheang, Amit Chaudhari, Arooj Liaqat, Aishwarya
Chandrasekaran, Yufan Wang, Joshua Cashaback, Erik Thostenson, Roghayeh Leila
Barmaki
|
Virtual Therapy Exergame for Upper Extremity Rehabilitation Using Smart
Wearable Sensors
|
IEEE/ACM international conference on Connected Health: Applications,
Systems and Engineering Technologies (CHASE) 2023
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Virtual Reality (VR) has been utilized for several applications and has shown
great potential for rehabilitation, especially for home therapy. However, these
systems solely rely on information from VR hand controllers, which do not fully
capture the individual movement of the joints. In this paper, we propose a
creative VR therapy exergame for upper extremity rehabilitation using
multi-dimensional reaching tasks while simultaneously capturing hand movement
from the VR controllers and elbow joint movement from a flexible carbon
nanotube sleeve. We conducted a preliminary study with non-clinical
participants (n = 12, 7 F). In a 2x2 within-subjects study (orientation
(vertical, horizontal) x configuration (flat, curved)), we evaluated the
effectiveness and enjoyment of the exergame in different study conditions. The
results show that there was a statistically significant difference in terms of
task completion time between the two orientations. However, no significant
differences were found in the number of mistakes in both orientation and
configuration of the virtual exergame. This can lead to customizing therapy
while maintaining the same level of intensity. That is, if a patient has
restricted lower limb mobility and needs to be seated, they can use the
orientations interchangeably. The results of resistance change generated from
the carbon nanotube sleeve revealed that the flat configuration in the vertical
orientation induced more elbow stretches than the other conditions. Finally, we
reported the subjective measures based on questionnaires for usability and user
experience in different study conditions. In conclusion, the proposed VR
exergame has the potential as a multimodal sensory tool for personalized upper
extremity home-based therapy and telerehabilitation.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 20:38:17 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Baron",
"Lauren",
""
],
[
"Chheang",
"Vuthea",
""
],
[
"Chaudhari",
"Amit",
""
],
[
"Liaqat",
"Arooj",
""
],
[
"Chandrasekaran",
"Aishwarya",
""
],
[
"Wang",
"Yufan",
""
],
[
"Cashaback",
"Joshua",
""
],
[
"Thostenson",
"Erik",
""
],
[
"Barmaki",
"Roghayeh Leila",
""
]
] |
new_dataset
| 0.999257 |
2302.08632
|
Tosiron Adegbija
|
Tosiron Adegbija
|
jazznet: A Dataset of Fundamental Piano Patterns for Music Audio Machine
Learning Research
|
To Appear at IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP) 2023
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces the jazznet Dataset, a dataset of fundamental jazz
piano music patterns for developing machine learning (ML) algorithms in music
information retrieval (MIR). The dataset contains 162520 labeled piano
patterns, including chords, arpeggios, scales, and chord progressions with
their inversions, resulting in more than 26k hours of audio and a total size of
95GB. The paper explains the dataset's composition, creation, and generation,
and presents an open-source Pattern Generator using a method called
Distance-Based Pattern Structures (DBPS), which allows researchers to easily
generate new piano patterns simply by defining the distances between pitches
within the musical patterns. We demonstrate that the dataset can help
researchers benchmark new models for challenging MIR tasks, using a
convolutional recurrent neural network (CRNN) and a deep convolutional neural
network. The dataset and code are available via:
https://github.com/tosiron/jazznet.
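A minimal sketch of distance-based pattern generation: a pattern is defined purely by a root pitch and the semitone distances between successive notes. The `PATTERNS` table and function names are illustrative and do not reflect the released Pattern Generator's API.

```python
# A pattern = root MIDI pitch + semitone distances between successive notes.
PATTERNS = {            # illustrative interval tables, not jazznet's own
    "maj-triad": [4, 3],          # root, +4, +3 -> C E G
    "min7-chord": [3, 4, 3],      # C Eb G Bb
    "major-scale": [2, 2, 1, 2, 2, 2, 1],
}

def make_pattern(root_midi: int, name: str) -> list[int]:
    notes = [root_midi]
    for step in PATTERNS[name]:
        notes.append(notes[-1] + step)
    return notes

print(make_pattern(60, "maj-triad"))    # [60, 64, 67] == C4 E4 G4
print(make_pattern(60, "major-scale"))  # C major scale, C4..C5
```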
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 00:13:22 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Adegbija",
"Tosiron",
""
]
] |
new_dataset
| 0.999821 |
2302.08818
|
Robert Rou\v{s}
|
Robert Rou\v{s}, Joseph Peller, Gerrit Polder, Selwin Hageraats, Thijs
Ruigrok, Pieter M. Blok
|
Apple scab detection in orchards using deep learning on colour and
multispectral images
|
6 pages, 7 figures, 3 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Apple scab is a fungal disease caused by Venturia inaequalis. The disease is of
particular concern for growers, as it causes significant damage to fruit and
leaves, leading to loss of fruit and yield. This article examines the ability
of deep learning and hyperspectral imaging to accurately identify apple scab
infections in apple trees. In total, 168 image scenes were collected
using conventional RGB and Visible to Near-infrared (VIS-NIR) spectral imaging
(8 channels) in infected orchards. Spectral data were preprocessed with an
Artificial Neural Network (ANN) trained in segmentation to detect scab pixels
based on spectral information. Linear Discriminant Analysis (LDA) was used to
find the most discriminating channels in spectral data based on the healthy
leaf and scab infested leaf spectra. Five combinations of false-colour images
were created from the spectral data and the segmentation net results. The
images were trained and evaluated with a modified version of the YOLOv5
network. Despite the promising results of deep learning using RGB images
(P=0.8, mAP@50=0.73), the detection of apple scab in apple trees using
multispectral imaging proved to be a difficult task. The high-light environment
of the open field made it difficult to collect a balanced spectrum from the
multispectral camera, since the infrared channel and the visible channels
needed to be constantly balanced so that they did not overexpose in the images.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 11:33:17 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Rouš",
"Robert",
""
],
[
"Peller",
"Joseph",
""
],
[
"Polder",
"Gerrit",
""
],
[
"Hageraats",
"Selwin",
""
],
[
"Ruigrok",
"Thijs",
""
],
[
"Blok",
"Pieter M.",
""
]
] |
new_dataset
| 0.999745 |
2302.08873
|
Ziyi Zou
|
Ziyi Zou, Ziang Zhang, Zhen Lu, Xiang Li, You Wang, Jie Hao, and Guang
Li
|
Discrete States-Based Trajectory Planning for Nonholonomic Robots
|
8 pages, 9 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to nonholonomic dynamics, the motion planning of nonholonomic robots is
always a difficult problem. This letter presents a Discrete States-based
Trajectory Planning (DSTP) algorithm for autonomous nonholonomic robots. The
proposed algorithm represents the trajectory as x and y positions, orientation
angle, longitudinal velocity and acceleration, angular velocity, and time
intervals. Using more variables makes the optimization objective and constraints
simpler to express, reduces the error caused by excessive approximations, and
also handles gear-shifting situations. L-BFGS-B is used to deal with the optimization of
many variables and box constraints, thus speeding up the problem solving.
Various simulation experiments compared with prior works have validated that
our algorithm has an order-of-magnitude efficiency advantage and can generate a
smoother trajectory with a high speed and low control effort. Besides,
real-world experiments are also conducted to verify the feasibility of our
algorithm in real scenes. We will release our code as ROS packages.
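As a small illustration of the solver choice, the sketch below applies SciPy's L-BFGS-B to a toy box-constrained problem with a tracking term and a second-difference smoothness penalty, loosely mirroring the structure of a trajectory objective. The waypoints, weights, and bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical targets; in a real planner these would be trajectory states.
waypoints = np.array([0.0, 1.0, 4.0, 9.0, 16.0])

def objective(x):
    fit = np.sum((x - waypoints) ** 2)        # track the targets
    smooth = np.sum(np.diff(x, 2) ** 2)       # penalize "acceleration"
    return fit + 10.0 * smooth

res = minimize(objective, x0=np.zeros_like(waypoints),
               method="L-BFGS-B",
               bounds=[(-5.0, 20.0)] * len(waypoints))  # box constraints
print(res.x)
```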
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 13:42:14 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Zou",
"Ziyi",
""
],
[
"Zhang",
"Ziang",
""
],
[
"Lu",
"Zhen",
""
],
[
"Li",
"Xiang",
""
],
[
"Wang",
"You",
""
],
[
"Hao",
"Jie",
""
],
[
"Li",
"Guang",
""
]
] |
new_dataset
| 0.96339 |
2302.08908
|
Jiaxin Cheng
|
Jiaxin Cheng, Xiao Liang, Xingjian Shi, Tong He, Tianjun Xiao and Mu
Li
|
LayoutDiffuse: Adapting Foundational Diffusion Models for
Layout-to-Image Generation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Layout-to-image generation refers to the task of synthesizing photo-realistic
images based on semantic layouts. In this paper, we propose LayoutDiffuse that
adapts a foundational diffusion model pretrained on large-scale image or
text-image datasets for layout-to-image generation. By adopting a novel neural
adaptor based on layout attention and task-aware prompts, our method trains
efficiently, generates images with both high perceptual quality and layout
alignment, and needs less data. Experiments on three datasets show that our
method significantly outperforms 10 other generative models based on GANs,
VQ-VAE, and diffusion models.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 14:20:25 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Cheng",
"Jiaxin",
""
],
[
"Liang",
"Xiao",
""
],
[
"Shi",
"Xingjian",
""
],
[
"He",
"Tong",
""
],
[
"Xiao",
"Tianjun",
""
],
[
"Li",
"Mu",
""
]
] |
new_dataset
| 0.99075 |
2302.08909
|
Ihsan Ullah
|
Ihsan Ullah (LISN), Dustin Carri\'on-Ojeda (LISN), Sergio Escalera
(UB), Isabelle Guyon (LISN), Mike Huisman (LIACS), Felix Mohr, Jan N van Rijn
(LIACS), Haozhe Sun (LISN), Joaquin Vanschoren (TU/e), Phan Anh Vu (LISN)
|
Meta-Album: Multi-domain Meta-Dataset for Few-Shot Image Classification
| null |
36th Conference on Neural Information Processing Systems (NeurIPS
2022) Track on Datasets and Benchmarks., NeurIPS, Nov 2022, New Orleans,
United States
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Meta-Album, an image classification meta-dataset designed to
facilitate few-shot learning, transfer learning, meta-learning, among other
tasks. It includes 40 open datasets, each having at least 20 classes with 40
examples per class, with verified licences. They stem from diverse domains,
such as ecology (fauna and flora), manufacturing (textures, vehicles), human
actions, and optical character recognition, featuring various image scales
(microscopic, human scales, remote sensing). All datasets are preprocessed,
annotated, and formatted uniformly, and come in 3 versions (Micro $\subset$
Mini $\subset$ Extended) to match users' computational resources. We showcase
the utility of the first 30 datasets on few-shot learning problems. The other
10 will be released shortly after. Meta-Album is already more diverse and
larger (in number of datasets) than similar efforts, and we are committed to
keep enlarging it via a series of competitions. As competitions terminate,
their test data are released, thus creating a rolling benchmark, available
through OpenML.org. Our website https://meta-album.github.io/ contains the
source code of challenge winning methods, baseline methods, data loaders, and
instructions for contributing either new datasets or algorithms to our
expandable meta-dataset.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 11:07:51 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Ullah",
"Ihsan",
"",
"LISN"
],
[
"Carrión-Ojeda",
"Dustin",
"",
"LISN"
],
[
"Escalera",
"Sergio",
"",
"UB"
],
[
"Guyon",
"Isabelle",
"",
"LISN"
],
[
"Huisman",
"Mike",
"",
"LIACS"
],
[
"Mohr",
"Felix",
"",
"LIACS"
],
[
"van Rijn",
"Jan N",
"",
"LIACS"
],
[
"Sun",
"Haozhe",
"",
"LISN"
],
[
"Vanschoren",
"Joaquin",
"",
"TU/e"
],
[
"Vu",
"Phan Anh",
"",
"LISN"
]
] |
new_dataset
| 0.999869 |
2302.08931
|
Marvin Klemp
|
Marvin Klemp, Kevin R\"osch, Royden Wagner, Jannik Quehl, Martin Lauer
|
LDFA: Latent Diffusion Face Anonymization for Self-driving Applications
|
6 pages, 5 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In order to protect vulnerable road users (VRUs), such as pedestrians or
cyclists, it is essential that intelligent transportation systems (ITS)
accurately identify them. Therefore, datasets used to train perception models
of ITS must contain a significant number of vulnerable road users. However,
data protection regulations require that individuals are anonymized in such
datasets. In this work, we introduce a novel deep learning-based pipeline for
face anonymization in the context of ITS. In contrast to related methods, we do
not use generative adversarial networks (GANs) but build upon recent advances
in diffusion models. We propose a two-stage method, which contains a face
detection model followed by a latent diffusion model to generate realistic face
in-paintings. To demonstrate the versatility of anonymized images, we train
segmentation methods on anonymized data and evaluate them on non-anonymized
data. Our experiments reveal that our pipeline is better suited to anonymize
data for segmentation than naive methods and performs comparably with recent
GAN-based methods. Moreover, face detectors achieve higher mAP scores for faces
anonymized by our method compared to naive or recent GAN-based methods.
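A hedged sketch of a two-stage anonymization pipeline of this kind is shown below, using a generic public latent-diffusion inpainting checkpoint from the `diffusers` library. The face boxes, file names, and checkpoint are placeholder assumptions; the authors' detector and diffusion model are not specified here.

```python
from PIL import Image, ImageDraw
from diffusers import StableDiffusionInpaintPipeline

# Stage 1 (placeholder): boxes would come from a face detection model.
image = Image.open("street_scene.png").convert("RGB").resize((512, 512))
face_boxes = [(120, 80, 180, 150)]  # hypothetical detector output

# Build a binary mask covering the detected faces.
mask = Image.new("L", image.size, 0)
draw = ImageDraw.Draw(mask)
for box in face_boxes:
    draw.rectangle(box, fill=255)

# Stage 2: latent-diffusion inpainting replaces the masked faces.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting")  # generic public checkpoint
result = pipe(prompt="a photorealistic human face",
              image=image, mask_image=mask).images[0]
result.save("street_scene_anonymized.png")
```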
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 15:14:00 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Klemp",
"Marvin",
""
],
[
"Rösch",
"Kevin",
""
],
[
"Wagner",
"Royden",
""
],
[
"Quehl",
"Jannik",
""
],
[
"Lauer",
"Martin",
""
]
] |
new_dataset
| 0.995621 |
2302.08932
|
Tao Hu
|
Tao Hu, Xiaoqing Guan, Yixu Wang, Yifan Liu, Bixuan Zhang, Boyu Lin,
You Wang and Guang Li
|
An MPC-based Optimal Motion Control Framework for Pendulum-driven
Spherical Robots
|
This paper has been submitted to IEEE Robotics and Automation Letters
(RA-L)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion control is essential for all autonomous mobile robots, and even more
so for spherical robots. Due to the uniqueness of the spherical robot, its
motion control must not only ensure accurate tracking of the target commands,
but also minimize fluctuations in the robot's attitude and motors' current
while tracking. In this paper, model predictive control (MPC) is applied to the
control of spherical robots and an MPC-based motion control framework is
designed. There are two controllers in the framework, an optimal velocity
controller, ESO-MPC, which combines extended state observers (ESO) and MPC, and an
optimal orientation controller that uses a multilayer perceptron (MLP) to
generate accurate trajectories and MPC with changing weights to achieve optimal
control. Finally, the performance of individual controllers and the whole
control framework are verified by physical experiments. The experimental
results show that the MPC-based motion control framework proposed in this work
is much better than PID in terms of rapidity and accuracy, and has great
advantages over a sliding mode controller (SMC) in terms of overshoot, attitude
stability, current stability and energy consumption.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 15:14:18 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Hu",
"Tao",
""
],
[
"Guan",
"Xiaoqing",
""
],
[
"Wang",
"Yixu",
""
],
[
"Liu",
"Yifan",
""
],
[
"Zhang",
"Bixuan",
""
],
[
"Lin",
"Boyu",
""
],
[
"Wang",
"You",
""
],
[
"Li",
"Guang",
""
]
] |
new_dataset
| 0.983059 |
2302.09006
|
Tommy Nilsson
|
Hanjo Schnellbaecher, Florian Dufresne, Tommy Nilsson, Leonie Becker,
Oliver Bensch, Enrico Guerra, Wafa Sadri, Vanessa Neumann
|
Telerobotic Mars Mission for Lava Tube Exploration and Examination of
Life
| null | null | null | null |
cs.HC cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The general profile and overarching goal of this proposed mission is to
pioneer potentially highly beneficial (even vital) and cost-effective
techniques for the future human colonization of Mars. Adopting radically new
and disruptive solutions untested in the Martian context, our approach is one
of high risk and high reward. The real possibility of such a solution failing
has prompted us to base our mission architecture around a rover carrying a set
of 6 distinct experimental payloads, each capable of operating independently of
the others, thus substantially increasing the chances of the mission yielding
some valuable findings. At the same time, we sought to exploit available
synergies by assembling a combination of payloads that would together form a
coherent experimental ecosystem, with each payload providing potential value to
the others. Apart from providing such a testbed for evaluation of novel
technological solutions, another aim of our proposed mission is to help
generate scientific know-how enhancing our understanding of the Red Planet. To
this end, our mission takes aim at the Nili-Fossae region, rich in natural
resources (and carbonates in particular), past water repositories and signs of
volcanic activity. With our proposed experimental payloads, we intend to
explore existing lava-tubes, search for signs of past life and assess their
potentially valuable geological features for future base building. We will
evaluate biomatter in the form of plants and fungi as possible food and
base-building materials respectively. Finally, we seek to explore a variety of
novel power generation techniques using the Martian atmosphere and gravity. As
detailed throughout the remainder of this chapter, this assemblage of
experimental payloads, then, constitutes the backbone of our proposed
telerobotic mission to Mars.
|
[
{
"version": "v1",
"created": "Tue, 31 Jan 2023 21:21:15 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Schnellbaecher",
"Hanjo",
""
],
[
"Dufresne",
"Florian",
""
],
[
"Nilsson",
"Tommy",
""
],
[
"Becker",
"Leonie",
""
],
[
"Bensch",
"Oliver",
""
],
[
"Guerra",
"Enrico",
""
],
[
"Sadri",
"Wafa",
""
],
[
"Neumann",
"Vanessa",
""
]
] |
new_dataset
| 0.9996 |
2302.09027
|
Zhi Zhang
|
Zhi Zhang, Helen Yannakoudakis, Xiantong Zhen, Ekaterina Shutova
|
CK-Transformer: Commonsense Knowledge Enhanced Transformers for
Referring Expression Comprehension
| null | null | null | null |
cs.CV cs.AI cs.CL cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of multimodal referring expression comprehension (REC), aiming at
localizing an image region described by a natural language expression, has
recently received increasing attention within the research community. In this
paper, we specifically focus on referring expression comprehension with
commonsense knowledge (KB-Ref), a task which typically requires reasoning
beyond spatial, visual or semantic information. We propose a novel framework
for Commonsense Knowledge Enhanced Transformers (CK-Transformer) which
effectively integrates commonsense knowledge into the representations of
objects in an image, facilitating identification of the target objects referred
to by the expressions. We conduct extensive experiments on several benchmarks
for the task of KB-Ref. Our results show that the proposed CK-Transformer
achieves a new state of the art, with an absolute improvement of 3.14% accuracy
over the existing state of the art.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 17:49:26 GMT"
}
] | 2023-02-20T00:00:00 |
[
[
"Zhang",
"Zhi",
""
],
[
"Yannakoudakis",
"Helen",
""
],
[
"Zhen",
"Xiantong",
""
],
[
"Shutova",
"Ekaterina",
""
]
] |
new_dataset
| 0.976959 |
1509.06837
|
Xaver Newberry
|
X. Y. Newberry
|
Semantics for a Logic of Presuppositions
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In 1952 P. F. Strawson proposed a logic of presuppositions. It is an
interpretation of Aristotelian logic, i.e. of the logic of the traditional
syllogism. In 1981 Richard Diaz published a monograph in which he presented
truth-relevant logic. This paper shows that truth-relevant logic is but a
propositional version of the logic of presuppositions. A semantics of the logic
of presuppositions is developed using truth-relevant logic. The semantics is
then further extended to polyadic logic and some consequences discussed.
|
[
{
"version": "v1",
"created": "Wed, 23 Sep 2015 03:45:00 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Feb 2019 02:08:51 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Aug 2020 00:29:34 GMT"
},
{
"version": "v4",
"created": "Sun, 15 Jan 2023 20:21:18 GMT"
},
{
"version": "v5",
"created": "Thu, 16 Feb 2023 01:42:45 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Newberry",
"X. Y.",
""
]
] |
new_dataset
| 0.996546 |
2203.06615
|
Amit Tsvieli
|
Amit Tsvieli and Nir Weinberger
|
Learning Maximum Margin Channel Decoders
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of learning a channel decoder is considered for two channel
models. The first model is an additive noise channel whose noise distribution
is unknown and nonparametric. The learner is provided with a fixed codebook and
a dataset comprised of independent samples of the noise, and is required to
select a precision matrix for a nearest neighbor decoder in terms of the
Mahalanobis distance. The second model is a non-linear channel with additive
white Gaussian noise and unknown channel transformation. The learner is
provided with a fixed codebook and a dataset comprised of independent
input-output samples of the channel, and is required to select a matrix for a
nearest neighbor decoder with a linear kernel. For both models, the objective
of maximizing the margin of the decoder is addressed. Accordingly, for each
channel model, a regularized loss minimization problem with a codebook-related
regularization term and hinge-like loss function is developed, which is
inspired by the support vector machine paradigm for classification problems.
Expected generalization error bounds for the error probability loss function
are provided for both models, under optimal choice of the regularization
parameter. For the additive noise channel, a theoretical guidance for choosing
the training signal-to-noise ratio is proposed based on this bound. In
addition, for the non-linear channel, a high probability uniform generalization
error bound is provided for the hypothesis class. For each channel, a
stochastic sub-gradient descent algorithm for solving the regularized loss
minimization problem is proposed, and an optimization error bound is stated.
The performance of the proposed algorithms is demonstrated through several
examples.
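For the first model, the decoder being learned is a nearest-neighbor rule under a Mahalanobis distance; the sketch below implements that decoding step in NumPy. The precision matrix here is simply the inverse sample covariance of the noise, standing in as a placeholder for the margin-trained matrix obtained from the regularized loss minimization.

```python
import numpy as np

def mahalanobis_nn_decode(y, codebook, precision):
    """Decode y to the codeword c minimizing (y - c)^T P (y - c)."""
    diffs = codebook - y                       # one row per codeword
    dists = np.einsum("ij,jk,ik->i", diffs, precision, diffs)
    return int(np.argmin(dists))

# Toy setup: 4 codewords, correlated non-Gaussian noise.
rng = np.random.default_rng(1)
codebook = np.array([[0.0, 0.0], [0.0, 3.0], [3.0, 0.0], [3.0, 3.0]])
noise = rng.laplace(scale=[0.2, 0.8], size=(1000, 2))
# Stand-in for the learned precision: inverse sample covariance.
P = np.linalg.inv(np.cov(noise, rowvar=False))

sent = rng.integers(0, 4, size=1000)
received = codebook[sent] + noise
decoded = [mahalanobis_nn_decode(y, codebook, P) for y in received]
print("accuracy:", np.mean(np.array(decoded) == sent))
```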
|
[
{
"version": "v1",
"created": "Sun, 13 Mar 2022 10:10:51 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 22:11:39 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Tsvieli",
"Amit",
""
],
[
"Weinberger",
"Nir",
""
]
] |
new_dataset
| 0.987452 |
2210.00662
|
Daniel Kyrollos
|
Daniel G. Kyrollos, Anthony Fuller, Kim Greenwood, JoAnn Harrold and
James R. Green
|
Under the Cover Infant Pose Estimation using Multimodal Data
| null | null |
10.1109/TIM.2023.3244220
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Infant pose monitoring during sleep has multiple applications in both
healthcare and home settings. In a healthcare setting, pose detection can be
used for region of interest detection and movement detection for noncontact
based monitoring systems. In a home setting, pose detection can be used to
detect sleep positions which has shown to have a strong influence on multiple
health factors. However, pose monitoring during sleep is challenging due to
heavy occlusions from blanket coverings and low lighting. To address this, we
present a novel dataset, Simultaneously-collected multimodal Mannequin Lying
pose (SMaL) dataset, for under the cover infant pose estimation. We collect
depth and pressure imagery of an infant mannequin in different poses under
various cover conditions. We successfully infer full body pose under the cover
by training state-of-art pose estimation methods and leveraging existing
multimodal adult pose datasets for transfer learning. We demonstrate a
hierarchical pretraining strategy for transformer-based models to significantly
improve performance on our dataset. Our best performing model was able to
detect joints under the cover within 25mm 86% of the time with an overall mean
error of 16.9mm. Data, code and models are publicly available at
https://github.com/DanielKyr/SMaL
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 00:34:45 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 20:30:46 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Kyrollos",
"Daniel G.",
""
],
[
"Fuller",
"Anthony",
""
],
[
"Greenwood",
"Kim",
""
],
[
"Harrold",
"JoAnn",
""
],
[
"Green",
"James R.",
""
]
] |
new_dataset
| 0.99916 |
2210.08477
|
Hyung-Kwon Ko
|
Hyung-Kwon Ko, Gwanmo Park, Hyeon Jeon, Jaemin Jo, Juho Kim, Jinwook
Seo
|
Large-scale Text-to-Image Generation Models for Visual Artists' Creative
Works
|
15 pages, 3 figures
|
28th International Conference on Intelligent User Interfaces (IUI
'23), March 27--31, 2023, Sydney, NSW, Australia
|
10.1145/3581641.3584078
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale Text-to-image Generation Models (LTGMs) (e.g., DALL-E),
self-supervised deep learning models trained on a huge dataset, have
demonstrated the capacity for generating high-quality open-domain images from
multi-modal input. Although they can even produce anthropomorphized versions of
objects and animals, combine irrelevant concepts in reasonable ways, and give
variation to any user-provided images, we witnessed that such rapid
technological advancement left many visual artists disoriented about how to
leverage LTGMs more actively in their creative works. Our goal in this work is to understand how
visual artists would adopt LTGMs to support their creative works. To this end,
we conducted an interview study as well as a systematic literature review of 72
system/application papers for a thorough examination. A total of 28 visual
artists covering 35 distinct visual art domains acknowledged LTGMs' versatile
roles with high usability to support creative works in automating the creation
process (i.e., automation), expanding their ideas (i.e., exploration), and
facilitating or arbitrating in communication (i.e., mediation). We conclude by
providing four design guidelines that future researchers can refer to in making
intelligent user interfaces using LTGMs.
|
[
{
"version": "v1",
"created": "Sun, 16 Oct 2022 08:06:38 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Dec 2022 12:32:26 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Feb 2023 10:12:01 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Ko",
"Hyung-Kwon",
""
],
[
"Park",
"Gwanmo",
""
],
[
"Jeon",
"Hyeon",
""
],
[
"Jo",
"Jaemin",
""
],
[
"Kim",
"Juho",
""
],
[
"Seo",
"Jinwook",
""
]
] |
new_dataset
| 0.999163 |
2211.06862
|
Binbin Xie
|
Binbin Xie, Xiangpeng Wei, Baosong Yang, Huan Lin, Jun Xie, Xiaoli
Wang, Min Zhang and Jinsong Su
|
WR-ONE2SET: Towards Well-Calibrated Keyphrase Generation
|
EMNLP2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Keyphrase generation aims to automatically generate short phrases summarizing
an input document. The recently emerged ONE2SET paradigm (Ye et al., 2021)
generates keyphrases as a set and has achieved competitive performance.
Nevertheless, we observe serious calibration errors outputted by ONE2SET,
especially in the over-estimation of the $\varnothing$ token (meaning "no
corresponding keyphrase"). In this paper, we deeply analyze this limitation and
identify two main reasons behind it: 1) the parallel generation has to introduce
excessive $\varnothing$ as padding tokens into training instances; and 2) the
training mechanism assigning target to each slot is unstable and further
aggravates the $\varnothing$ token over-estimation. To make the model
well-calibrated, we propose WR-ONE2SET which extends ONE2SET with an adaptive
instance-level cost Weighting strategy and a target Re-assignment mechanism.
The former dynamically penalizes the over-estimated slots for different
instances thus smoothing the uneven training distribution. The latter refines
the original inappropriate assignment and reduces the supervisory signals of
over-estimated slots. Experimental results on commonly-used datasets
demonstrate the effectiveness and generality of our proposed paradigm.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 09:56:24 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 05:16:27 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Xie",
"Binbin",
""
],
[
"Wei",
"Xiangpeng",
""
],
[
"Yang",
"Baosong",
""
],
[
"Lin",
"Huan",
""
],
[
"Xie",
"Jun",
""
],
[
"Wang",
"Xiaoli",
""
],
[
"Zhang",
"Min",
""
],
[
"Su",
"Jinsong",
""
]
] |
new_dataset
| 0.999772 |
2211.15103
|
Kashu Yamazaki
|
Kashu Yamazaki, Khoa Vo, Sang Truong, Bhiksha Raj, Ngan Le
|
VLTinT: Visual-Linguistic Transformer-in-Transformer for Coherent Video
Paragraph Captioning
|
Accepted to AAAI 2023 Oral
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Video paragraph captioning aims to generate a multi-sentence description of
an untrimmed video with several temporal event locations in coherent
storytelling. Following the human perception process, where the scene is
effectively understood by decomposing it into visual (e.g. human, animal) and
non-visual components (e.g. action, relations) under the mutual influence of
vision and language, we first propose a visual-linguistic (VL) feature. In the
proposed VL feature, the scene is modeled by three modalities including (i) a
global visual environment; (ii) local visual main agents; (iii) linguistic
scene elements. We then introduce an autoregressive Transformer-in-Transformer
(TinT) to simultaneously capture the semantic coherence of intra- and
inter-event contents within a video. Finally, we present a new VL contrastive
loss function to guarantee learnt embedding features are matched with the
captions semantics. Comprehensive experiments and extensive ablation studies on
ActivityNet Captions and YouCookII datasets show that the proposed
Visual-Linguistic Transformer-in-Transformer (VLTinT) outperforms prior
state-of-the-art methods on accuracy and diversity. Source code is made
publicly available at: https://github.com/UARK-AICV/VLTinT.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 07:39:20 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 01:50:56 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Yamazaki",
"Kashu",
""
],
[
"Vo",
"Khoa",
""
],
[
"Truong",
"Sang",
""
],
[
"Raj",
"Bhiksha",
""
],
[
"Le",
"Ngan",
""
]
] |
new_dataset
| 0.994233 |
2212.05711
|
Zhao Mandi
|
Zhao Mandi, Homanga Bharadhwaj, Vincent Moens, Shuran Song, Aravind
Rajeswaran, Vikash Kumar
|
CACTI: A Framework for Scalable Multi-Task Multi-Scene Visual Imitation
Learning
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale training has propelled significant progress in various
sub-fields of AI such as computer vision and natural language processing.
However, building robot learning systems at a comparable scale remains
challenging. To develop robots that can perform a wide range of skills and
adapt to new scenarios, efficient methods for collecting vast and diverse
amounts of data on physical robot systems are required, as well as the
capability to train high-capacity policies using such datasets. In this work,
we propose a framework for scaling robot learning, with specific focus on
multi-task and multi-scene manipulation in kitchen environments, both in
simulation and in the real world. Our proposed framework, CACTI, comprises four
stages that separately handle: data collection, data augmentation, visual
representation learning, and imitation policy training, to enable scalability
in robot learning. We make use of state-of-the-art generative models as part
of the data augmentation stage, and use pre-trained out-of-domain visual
representations to improve training efficiency. Experimental results
demonstrate the effectiveness of our approach. On a real robot setup, CACTI
enables efficient training of a single policy that can perform 10 manipulation
tasks involving kitchen objects, and is robust to varying layouts of
distractors. In a simulated kitchen environment, CACTI trains a single policy
to perform 18 semantic tasks across 100 layout variations for each individual
task. We will release the simulation task benchmark and augmented datasets in
both real and simulated environments to facilitate future research.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 05:30:08 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 16:23:31 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Mandi",
"Zhao",
""
],
[
"Bharadhwaj",
"Homanga",
""
],
[
"Moens",
"Vincent",
""
],
[
"Song",
"Shuran",
""
],
[
"Rajeswaran",
"Aravind",
""
],
[
"Kumar",
"Vikash",
""
]
] |
new_dataset
| 0.995818 |
2212.14721
|
Joseph O'Rourke
|
Joseph O'Rourke
|
Every Combinatorial Polyhedron Can Unfold with Overlap
|
15 pages, 12 figures, 12 references. v2: minor clarifications
| null | null | null |
cs.CG math.MG
|
http://creativecommons.org/licenses/by/4.0/
|
Ghomi proved that every convex polyhedron could be stretched via an affine
transformation so that it has an edge-unfolding to a net [Gho14]. A net is a
simple planar polygon; in particular, it does not self-overlap. One can view
his result as establishing that every combinatorial polyhedron has a metric
realization that allows unfolding to a net.
Joseph Malkevitch asked if the reverse holds (in some sense of ``reverse"):
Is there a combinatorial polyhedron such that, for every metric realization P
in R^3, and for every spanning cut-tree T, P cut by T unfolds to a net? In this
note we prove the answer is NO: every combinatorial polyhedron has a
realization and a cut-tree that unfolds the polyhedron with overlap.
|
[
{
"version": "v1",
"created": "Fri, 30 Dec 2022 14:02:34 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 19:39:34 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"O'Rourke",
"Joseph",
""
]
] |
new_dataset
| 0.970412 |
2301.10896
|
Li Zhang
|
Li Zhang, Hainiu Xu, Yue Yang, Shuyan Zhou, Weiqiu You, Manni Arora
and Chris Callison-Burch
|
Causal Reasoning of Entities and Events in Procedural Texts
|
In Findings of EACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Entities and events are crucial to natural language reasoning and common in
procedural texts. Existing work has focused either exclusively on entity state
tracking (e.g., whether a pan is hot) or on event reasoning (e.g., whether one
would burn themselves by touching the pan), while these two tasks are often
causally related. We propose CREPE, the first benchmark on causal reasoning of
event plausibility and entity states. We show that most language models,
including GPT-3, perform close to chance at .35 F1, lagging far behind humans at
.87 F1. We boost model performance to .59 F1 by creatively representing events
as programming languages while prompting language models pretrained on code. By
injecting the causal relations between entities and events as intermediate
reasoning steps in our representation, we further boost the performance to .67
F1. Our findings indicate not only the challenge that CREPE brings for language
models, but also the efficacy of code-like prompting combined with
chain-of-thought prompting for multihop event reasoning.
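An illustrative example of the code-like representation idea follows: entities become objects, steps become calls, and entity state changes become assignments inside the prompt. The template below is a guess at the style, not the paper's released format.

```python
# Illustrative code-style prompt for entity/event reasoning (the exact
# template is an assumption, not CREPE's released format).
prompt = '''
# Procedure: heat soup on the stove
pan = Entity(hot=False)
step("place the pan on the stove")
step("turn on the burner")
pan.hot = True            # entity state change as an assignment
# Q: would touching the pan burn you?
event_plausible("touching the pan burns you")  # expected: True
'''
print(prompt)
```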
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 01:43:17 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Jan 2023 03:12:41 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Feb 2023 13:56:22 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Zhang",
"Li",
""
],
[
"Xu",
"Hainiu",
""
],
[
"Yang",
"Yue",
""
],
[
"Zhou",
"Shuyan",
""
],
[
"You",
"Weiqiu",
""
],
[
"Arora",
"Manni",
""
],
[
"Callison-Burch",
"Chris",
""
]
] |
new_dataset
| 0.982928 |
2301.11007
|
Matthias Albrecht
|
Matthias Albrecht, Lorenz Assl\"ander, Harald Reiterer, Stephan
Streuber
|
MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral
Vision Research
|
Accepted IEEE VR 2023 conference paper
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Peripheral vision plays a significant role in human perception and
orientation. However, its relevance for human-computer interaction, especially
head-mounted displays, has not been fully explored yet. In the past, a few
specialized appliances were developed to display visual cues in the periphery,
each designed for a single specific use case only. A multi-purpose headset to
exclusively augment peripheral vision did not exist yet. We introduce MoPeDT:
Modular Peripheral Display Toolkit, a freely available, flexible,
reconfigurable, and extendable headset to conduct peripheral vision research.
MoPeDT can be built with a 3D printer and off-the-shelf components. It features
multiple spatially configurable near-eye display modules and full 3D tracking
inside and outside the lab. With our system, researchers and designers may
easily develop and prototype novel peripheral vision interaction and
visualization techniques. We demonstrate the versatility of our headset with
several possible applications for spatial awareness, balance, interaction,
feedback, and notifications. We conducted a small study to evaluate the
usability of the system. We found that participants were largely not irritated
by the peripheral cues, but the headset's comfort could be further improved. We
also evaluated our system based on established heuristics for human-computer
interaction toolkits to show how MoPeDT adapts to changing requirements, lowers
the entry barrier for peripheral vision research, and facilitates expressive
power in the combination of modular building blocks.
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 09:40:53 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 10:40:46 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Albrecht",
"Matthias",
""
],
[
"Assländer",
"Lorenz",
""
],
[
"Reiterer",
"Harald",
""
],
[
"Streuber",
"Stephan",
""
]
] |
new_dataset
| 0.999736 |
2302.06860
|
Cai Yang
|
Cai Yang, Addie Woicik, Hoifung Poon, Sheng Wang
|
BLIAM: Literature-based Data Synthesis for Synergistic Drug Combination
Prediction
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Language models pre-trained on scientific literature corpora have
substantially advanced scientific discovery by offering high-quality feature
representations for downstream applications. However, these features are often
not interpretable, and thus can reveal limited insights to domain experts.
Instead of obtaining features from language models, we propose BLIAM, a
literature-based data synthesis approach to directly generate training data
points that are interpretable and model-agnostic to downstream applications.
The key idea of BLIAM is to create prompts using existing training data and
then use these prompts to synthesize new data points. BLIAM performs these two
steps iteratively as new data points will define more informative prompts and
new prompts will in turn synthesize more accurate data points. Notably,
literature-based data augmentation might introduce data leakage since labels of
test data points in downstream applications might have already been mentioned
in the language model corpus. To prevent such leakage, we introduce GDSC-combo,
a large-scale drug combination discovery dataset that was published after the
biomedical language model was trained. We found that BLIAM substantially
outperforms a non-augmented approach and manual prompting in this rigorous data
split setting. BLIAM can be further used to synthesize data points for novel
drugs and cell lines that were not even measured in biomedical experiments. In
addition to the promising prediction performance, the data points synthesized
by BLIAM are interpretable and model-agnostic, enabling in silico augmentation
for in vitro experiments.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 06:48:52 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 05:26:25 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Yang",
"Cai",
""
],
[
"Woicik",
"Addie",
""
],
[
"Poon",
"Hoifung",
""
],
[
"Wang",
"Sheng",
""
]
] |
new_dataset
| 0.985502 |
2302.07589
|
Phillip Rieger
|
Phillip Rieger, Marco Chilese, Reham Mohamed, Markus Miettinen,
Hossein Fereidooni, Ahmad-Reza Sadeghi
|
ARGUS: Context-Based Detection of Stealthy IoT Infiltration Attacks
|
To appear in the 32nd USENIX Security Symposium, August 2022, Anaheim
CA, USA
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
IoT application domains, device diversity and connectivity are rapidly
growing. IoT devices control various functions in smart homes and buildings,
smart cities, and smart factories, making these devices an attractive target
for attackers. On the other hand, the large variability of different
application scenarios and inherent heterogeneity of devices make it very
challenging to reliably detect abnormal IoT device behaviors and distinguish
these from benign behaviors. Existing approaches for detecting attacks are
mostly limited to attacks directly compromising individual IoT devices, or,
require predefined detection policies. They cannot detect attacks that utilize
the control plane of the IoT system to trigger actions in an
unintended/malicious context, e.g., opening a smart lock while the smart home
residents are absent.
In this paper, we tackle this problem and propose ARGUS, the first
self-learning intrusion detection system for detecting contextual attacks on
IoT environments, in which the attacker maliciously invokes IoT device actions
to reach its goals. ARGUS monitors the contextual setting based on the state
and actions of IoT devices in the environment. An unsupervised Deep Neural
Network (DNN) is used for modeling the typical contextual device behavior and
detecting actions taking place in abnormal contextual settings. This
unsupervised approach ensures that ARGUS is not restricted to detecting
previously known attacks but is also able to detect new attacks. We evaluated
ARGUS on heterogeneous real-world smart-home settings and achieve at least an
F1-Score of 99.64% for each setup, with a false positive rate (FPR) of at most
0.03%.
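A minimal sketch of the unsupervised detection idea: train an autoencoder on benign context vectors (device states and actions) and flag inputs whose reconstruction error exceeds a threshold. The feature encoding, network size, and 3-sigma threshold below are illustrative assumptions, not ARGUS's actual architecture.

```python
import torch
from torch import nn

dim = 16                                        # assumed context-vector size
model = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, 4),
                      nn.ReLU(), nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
benign = torch.rand(2048, dim)                  # stand-in benign contexts

for _ in range(200):                            # unsupervised training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(benign), benign)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((model(benign) - benign) ** 2).mean(dim=1)
    threshold = err.mean() + 3 * err.std()      # simple 3-sigma rule

def is_anomalous(x):                            # x: (dim,) context vector
    with torch.no_grad():
        e = ((model(x) - x) ** 2).mean()
    return bool(e > threshold)
```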
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 11:05:45 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 17:02:19 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Rieger",
"Phillip",
""
],
[
"Chilese",
"Marco",
""
],
[
"Mohamed",
"Reham",
""
],
[
"Miettinen",
"Markus",
""
],
[
"Fereidooni",
"Hossein",
""
],
[
"Sadeghi",
"Ahmad-Reza",
""
]
] |
new_dataset
| 0.998844 |
2302.07693
|
Maxim Novopoltsev
|
Maxim Novopoltsev, Leonid Verkhovtsev, Ruslan Murtazin, Dmitriy
Milevich, Iuliia Zemtsova
|
Fine-tuning of sign language recognition models: a technical report
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Sign Language Recognition (SLR) is an essential yet challenging task since
sign language is performed with the fast and complex movement of hand gestures,
body posture, and even facial expressions. In this work, we focused on investigating two questions:
how fine-tuning on datasets from other sign languages helps improve sign
recognition quality, and whether sign recognition is possible in real-time
without using GPU. Three different languages datasets (American sign language
WLASL, Turkish - AUTSL, Russian - RSL) have been used to validate the models.
The average speed of this system has reached 3 predictions per second, which
meets the requirements for the real-time scenario. This prototype will help
speech- or hearing-impaired people communicate with others over the internet. We
also investigated how the additional training of the model in another sign
language affects the quality of recognition. The results show that further
training of the model on the data of another sign language almost always leads
to an improvement in the quality of gesture recognition. We also provide code
for reproducing model training experiments, converting models to ONNX format,
and inference for real-time gesture recognition.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 14:36:18 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 07:57:08 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Novopoltsev",
"Maxim",
""
],
[
"Verkhovtsev",
"Leonid",
""
],
[
"Murtazin",
"Ruslan",
""
],
[
"Milevich",
"Dmitriy",
""
],
[
"Zemtsova",
"Iuliia",
""
]
] |
new_dataset
| 0.987622 |
2302.07747
|
Joseph O'Rourke
|
Joseph O'Rourke
|
Polar Zonohedra Edge-Unfold to Nets
|
22 pages, 16 figures, 7 references. v2 added a figure
| null | null | null |
cs.CG math.CO math.MG
|
http://creativecommons.org/licenses/by/4.0/
|
This note proves that every polar zonohedron has an edge-unfolding to a
non-overlapping net.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 15:53:57 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Feb 2023 14:03:02 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"O'Rourke",
"Joseph",
""
]
] |
new_dataset
| 0.977943 |
2302.07931
|
Dmitriy Rivkin
|
Dmitriy Rivkin, Gregory Dudek, Nikhil Kakodkar, David Meger, Oliver
Limoyo, Xue Liu, Francois Hogan
|
ANSEL Photobot: A Robot Event Photographer with Semantic Intelligence
|
ICRA 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our work examines the way in which large language models can be used for
robotic planning and sampling, specifically the context of automated
photographic documentation. Specifically, we illustrate how to produce a
photo-taking robot with an exceptional level of semantic awareness by
leveraging recent advances in general purpose language (LM) and vision-language
(VLM) models. Given a high-level description of an event we use an LM to
generate a natural-language list of photo descriptions that one would expect a
photographer to capture at the event. We then use a VLM to identify the best
matches to these descriptions in the robot's video stream. The photo portfolios
generated by our method are consistently rated as more appropriate to the event
by human evaluators than those generated by existing methods.
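The matching step can be pictured as cosine similarity between description embeddings and frame embeddings, as in the sketch below. The embeddings here are random placeholders; a real system would compute them with a VLM such as CLIP.

```python
import numpy as np

def best_frames(desc_embs, frame_embs, top_k=1):
    """For each LM-generated shot description, pick the top-k frames by
    cosine similarity (embeddings are placeholders in this sketch)."""
    d = desc_embs / np.linalg.norm(desc_embs, axis=1, keepdims=True)
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    sims = d @ f.T                             # (descriptions x frames)
    return np.argsort(-sims, axis=1)[:, :top_k]

rng = np.random.default_rng(0)
shot_list = ["guests greeting each other", "speaker at the podium"]
desc_embs = rng.normal(size=(len(shot_list), 512))
frame_embs = rng.normal(size=(1000, 512))      # one row per video frame
print(best_frames(desc_embs, frame_embs, top_k=3))
```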
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 20:21:22 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Rivkin",
"Dmitriy",
""
],
[
"Dudek",
"Gregory",
""
],
[
"Kakodkar",
"Nikhil",
""
],
[
"Meger",
"David",
""
],
[
"Limoyo",
"Oliver",
""
],
[
"Liu",
"Xue",
""
],
[
"Hogan",
"Francois",
""
]
] |
new_dataset
| 0.997004 |
2302.08192
|
Joseph de Vilmarest
|
Guillaume Lambert (EDF R&D), Bachir Hamrouche (EDF R&D), Joseph de
Vilmarest
|
Frugal day-ahead forecasting of multiple local electricity loads by
aggregating adaptive models
| null | null | null | null |
cs.LG stat.AP stat.ME stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We focus on day-ahead electricity load forecasting of substations of the
distribution network in France; therefore, our problem lies between the
instability of an individual consumer's load and the stability of a countrywide total
demand. Moreover, we are interested in forecasting the loads of over one
thousand substations; consequently, we are in the context of forecasting
multiple time series. To that end, we rely on an adaptive methodology that
provided excellent results at a national scale; the idea is to combine
generalized additive models with state-space representations. However, the
extension of this methodology to the prediction of over a thousand time series
raises a computational issue. We solve it by developing a frugal variant,
reducing the number of parameters estimated; we estimate the forecasting models
only for a few time series and achieve transfer learning by relying on
aggregation of experts. It yields a reduction of computational needs and their
associated emissions. We build several variants, corresponding to different
levels of parameter transfer, and we look for the best trade-off between
accuracy and frugality. The selected method achieves competitive results
compared to state-of-the-art individual models. Finally, we highlight the
interpretability of the models, which is important for operational
applications.
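  A minimal sketch of the aggregation-of-experts idea described above, assuming an
exponentially weighted average with square loss (the paper's exact aggregation
rule and learning rate are not given in this abstract, so both are illustrative
assumptions):
```python
import numpy as np

def ewa_aggregate(expert_forecasts, observations, eta=0.1):
    """Exponentially weighted aggregation of expert forecasts.

    expert_forecasts: array of shape (T, K) -- K experts over T time steps.
    observations:     array of shape (T,)   -- realized loads.
    Returns the aggregated forecast at each step.
    """
    T, K = expert_forecasts.shape
    weights = np.ones(K) / K              # start from uniform weights
    aggregated = np.empty(T)
    for t in range(T):
        aggregated[t] = weights @ expert_forecasts[t]
        # square-loss update: experts with smaller errors gain weight
        losses = (expert_forecasts[t] - observations[t]) ** 2
        weights *= np.exp(-eta * losses)
        weights /= weights.sum()
    return aggregated
```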
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 10:17:19 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Lambert",
"Guillaume",
"",
"EDF R&D"
],
[
"Hamrouche",
"Bachir",
"",
"EDF R&D"
],
[
"de Vilmarest",
"Joseph",
""
]
] |
new_dataset
| 0.977863 |
2302.08198
|
Francoise Grelaud
|
Patrick S\'egu\'ela, Nathalie Aussenac-Gilles (IRIT-MELODI, CNRS)
|
Un mod{\`e}le de base de connaissances terminologiques
|
in French language. 2{\`e}mes Rencontres Terminologie et Intelligence
Artificielle (TIA 1997), Groupe de recherche TIA : Terminologie et
intelligence artificielle, UT2 LeMirail, Toulouse, Apr 1997, Toulouse, France
| null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present paper, we argue that Terminological Knowledge Bases (TKB) are
all the more useful for addressing various needs as they do not fulfill formal
criteria. Moreover, they intend to clarify the terminology of a given domain by
illustrating term uses in various contexts. Thus we designed a TKB structure
including 3 linked features: terms, concepts and texts, that present the
peculiar use of each term in the domain. Note that concepts are represented
into frames whose non-formal description is standardized. Associated with this
structure, we defined modeling criteria at the conceptual level. Finally, we
discuss the situation of TKB with regard to ontologies, and the use of TKB for
the development of AI systems.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 10:28:23 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Séguéla",
"Patrick",
"",
"IRIT-MELODI, CNRS"
],
[
"Aussenac-Gilles",
"Nathalie",
"",
"IRIT-MELODI, CNRS"
]
] |
new_dataset
| 0.998531 |
2302.08212
|
Zhihao Qian
|
Zhihao Qian, Yutian Lin, Bo Du
|
Visible-Infrared Person Re-Identification via Patch-Mixed Cross-Modality
Learning
|
IJCAI23
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visible-infrared person re-identification (VI-ReID) aims to retrieve images
of the same pedestrian from different modalities, where the challenges lie in
the significant modality discrepancy. To alleviate the modality gap, recent
methods generate intermediate images by GANs, grayscaling, or mixup strategies.
However, these methods could introduce extra noise, and the semantic
correspondence between the two modalities is not well learned. In this paper,
we propose a Patch-Mixed Cross-Modality framework (PMCM), where two images of
the same person from two modalities are split into patches and stitched into a
new one for model learning. In this way, the model learns to recognize a person
through patches of different styles, and the modality semantic correspondence
is directly embodied. With the flexible image generation strategy, the
patch-mixed images freely adjust the ratio of different modality patches, which
could further alleviate the modality imbalance problem. In addition, the
relationship between identity centers among modalities is explored to further
reduce the modality variance, and the global-to-part constraint is introduced
to regularize representation learning of part features. On two VI-ReID
datasets, we report new state-of-the-art performance with the proposed method.
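  A toy sketch of the patch-mixing operation described above (the patch size and
mixing ratio are illustrative assumptions; PMCM's actual implementation may
differ):
```python
import torch

def patch_mix(img_vis, img_ir, patch=32, ratio=0.5):
    """Stitch patches from a visible and an infrared image of the same
    person into one mixed image. Both tensors have shape C x H x W, with
    H and W divisible by `patch`."""
    c, h, w = img_vis.shape
    mixed = img_vis.clone()
    n_h, n_w = h // patch, w // patch
    # randomly choose which grid cells come from the infrared image
    take_ir = torch.rand(n_h, n_w) < ratio
    for i in range(n_h):
        for j in range(n_w):
            if take_ir[i, j]:
                ys, xs = i * patch, j * patch
                mixed[:, ys:ys + patch, xs:xs + patch] = \
                    img_ir[:, ys:ys + patch, xs:xs + patch]
    return mixed
```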
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 10:56:00 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Qian",
"Zhihao",
""
],
[
"Lin",
"Yutian",
""
],
[
"Du",
"Bo",
""
]
] |
new_dataset
| 0.970888 |
2302.08361
|
Andrei Costin
|
Andrei Costin, Syed Khandker, Hannu Turtiainen, Timo H\"am\"al\"ainen
|
Cybersecurity of COSPAS-SARSAT and EPIRB: threat and attacker models,
exploits, future research
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
COSPAS-SARSAT is an International programme for "Search and Rescue" (SAR)
missions based on the "Satellite Aided Tracking" system (SARSAT). It is
designed to provide accurate, timely, and reliable distress alert and location
data to help SAR authorities of participating countries to assist persons and
vessels in distress. Two types of satellite constellations serve COSPAS-SARSAT,
low earth orbit search and rescue (LEOSAR) and geostationary orbiting search
and rescue (GEOSAR). Despite its nearly global deployment and critical
importance, we found that COSPAS-SARSAT protocols and standard 406 MHz
transmissions unfortunately lack essential means of cybersecurity.
In this paper, we investigate the cybersecurity aspects of COSPAS-SARSAT
space-/satellite-based systems. In particular, we practically and successfully
implement and demonstrate the first (to our knowledge) attacks on COSPAS-SARSAT
406 MHz protocols, namely replay, spoofing, and protocol fuzzing on EPIRB
protocols. We also identify a set of core research challenges preventing more
effective cybersecurity research in the field and outline the main
cybersecurity weaknesses and possible mitigations to increase the system's
cybersecurity level.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 15:26:06 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Costin",
"Andrei",
""
],
[
"Khandker",
"Syed",
""
],
[
"Turtiainen",
"Hannu",
""
],
[
"Hämäläinen",
"Timo",
""
]
] |
new_dataset
| 0.995222 |
2302.08368
|
Andrei Costin
|
Lassi Laaksosaari, Hannu Turtiainen, Syed Khandker, Andrei Costin
|
dump1030: open-source plug-and-play demodulator/decoder for 1030MHz
uplink
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic Dependent Surveillance (ADS), Automatic Dependent
Surveillance-Broadcast (ADS-B), Secondary Surveillance Radars (SSR), and Mode S
are key air surveillance technologies representing a critical component of
next-generation air transportation systems. However, compared to 1090MHz
demodulators and decoders, which have plenty of implementations, the 1030MHz
uplink receivers are, in general, scarcely, if at all, represented.
In this paper, we present the development and evaluation of dump1030, a
cross-platform, plug-and-play, open-source implementation for decoding 1030MHz
uplink Mode A/C/S interrogations. We demonstrate and detail an agile
development process of building dump1030 by adapting a state-of-the-art
dump1090 design and implementation. In our repeated experiments, dump1030
achieves a high detection accuracy of 1030MHz interrogation signals based on
lab evaluation using synthetically-generated interrogation signals. We also
discuss a handful of practical use cases where dump1030 can find immediate
application and implementation, both in research and industrial settings.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 15:36:48 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Laaksosaari",
"Lassi",
""
],
[
"Turtianen",
"Hannu",
""
],
[
"Khandker",
"Syed",
""
],
[
"Costin",
"Andrei",
""
]
] |
new_dataset
| 0.997582 |
2302.08504
|
Chung-Yi Weng
|
Chung-Yi Weng, Pratul P. Srinivasan, Brian Curless, and Ira
Kemelmacher-Shlizerman
|
PersonNeRF: Personalized Reconstruction from Photo Collections
|
Project Page: https://grail.cs.washington.edu/projects/personnerf/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present PersonNeRF, a method that takes a collection of photos of a
subject (e.g. Roger Federer) captured across multiple years with arbitrary body
poses and appearances, and enables rendering the subject with arbitrary novel
combinations of viewpoint, body pose, and appearance. PersonNeRF builds a
customized neural volumetric 3D model of the subject that is able to render an
entire space spanned by camera viewpoint, body pose, and appearance. A central
challenge in this task is dealing with sparse observations; a given body pose
is likely only observed by a single viewpoint with a single appearance, and a
given appearance is only observed under a handful of different body poses. We
address this issue by recovering a canonical T-pose neural volumetric
representation of the subject that allows for changing appearance across
different observations, but uses a shared pose-dependent motion field across
all observations. We demonstrate that this approach, along with regularization
of the recovered volumetric geometry to encourage smoothness, is able to
recover a model that renders compelling images from novel combinations of
viewpoint, pose, and appearance from these challenging unstructured photo
collections, outperforming prior work for free-viewpoint human rendering.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 18:57:17 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Weng",
"Chung-Yi",
""
],
[
"Srinivasan",
"Pratul P.",
""
],
[
"Curless",
"Brian",
""
],
[
"Kemelmacher-Shlizerman",
"Ira",
""
]
] |
new_dataset
| 0.987919 |
2302.08505
|
Renjie Li
|
Renjie Li, Chun Yu Lao, Rebecca St. George, Katherine Lawler, Saurabh
Garg, Son N. Tran, Quan Bai, Jane Alty
|
Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper
Learning
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Objective The coordination of human movement directly reflects function of
the central nervous system. Small deficits in movement are often the first sign
of an underlying neurological problem. The objective of this research is to
develop a new end-to-end, deep learning-based system, Rapid-Motion-Track (RMT)
that can track the fastest human movement accurately when webcams or laptop
cameras are used.
Materials and Methods We applied RMT to finger tapping, a well-validated test
of motor control that is one of the most challenging human motions to track
with computer vision due to the small keypoints of digits and the high
velocities that are generated. We recorded 160 finger tapping assessments
simultaneously with a standard 2D laptop camera (30 frames/sec) and a
high-speed wearable sensor-based 3D motion tracking system (250 frames/sec).
RMT and a range of DLC models were applied to the video data with tapping
frequencies up to 8Hz to extract movement features.
Results The movement features (e.g. speed, rhythm, variance) identified with
the new RMT system exhibited very high concurrent validity with the
gold-standard measurements (97.3\% of RMT measures were within +/-0.5Hz of the
Optotrak measures), and outperformed DLC and other advanced computer vision
tools (around 88.2\% of DLC measures were within +/-0.5Hz of the Optotrak
measures). RMT also accurately tracked a range of other rapid human movements
such as foot tapping, head turning and sit-to-stand movements.
Conclusion: With the ubiquity of video technology in smart devices, the RMT
method holds potential to transform access and accuracy of human movement
assessment.
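  The frequency features above (e.g. agreement within +/-0.5Hz of the gold
standard) can be illustrated with a simple spectral estimate from a tracked
keypoint trajectory; this is not the RMT pipeline itself, just a sketch of the
frequency-feature idea:
```python
import numpy as np

def tapping_frequency(y, fps=30.0):
    """Estimate the dominant tapping frequency (Hz) from a 1-D keypoint
    trajectory `y` sampled at `fps` frames per second."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fps)
    return freqs[spectrum[1:].argmax() + 1]   # skip the zero-frequency bin
```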
|
[
{
"version": "v1",
"created": "Wed, 18 Jan 2023 22:57:34 GMT"
}
] | 2023-02-17T00:00:00 |
[
[
"Li",
"Renjie",
""
],
[
"Lao",
"Chun Yu",
""
],
[
"George",
"Rebecca St.",
""
],
[
"Lawler",
"Katherine",
""
],
[
"Garg",
"Saurabh",
""
],
[
"Tran",
"Son N.",
""
],
[
"Bai",
"Quan",
""
],
[
"Alty",
"Jane",
""
]
] |
new_dataset
| 0.99897 |
1910.11819
|
Qin Zou
|
Yuanhao Yue, Qin Zou, Hongkai Yu, Qian Wang, Zhongyuan Wang and Song
Wang
|
An End-to-End Network for Co-Saliency Detection in One Single Image
| null |
SCIENCE CHINA Information Sciences, 2023
|
10.1007/s11432-022-3686-1
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Co-saliency detection within a single image is a common vision problem that
has received little attention and has not yet been well addressed. Existing
methods often used a bottom-up strategy to infer co-saliency in an image in
which salient regions are firstly detected using visual primitives such as
color and shape and then grouped and merged into a co-saliency map. However,
co-saliency is intrinsically perceived complexly with bottom-up and top-down
strategies combined in human vision. To address this problem, this study
proposes a novel end-to-end trainable network comprising a backbone net and two
branch nets. The backbone net uses ground-truth masks as top-down guidance for
saliency prediction, whereas the two branch nets construct triplet proposals
for regional feature mapping and clustering, which drives the network to be
bottom-up sensitive to co-salient regions. We construct a new dataset of 2,019
natural images with co-saliency in each image to evaluate the proposed method.
Experimental results show that the proposed method achieves state-of-the-art
accuracy with a running speed of 28 fps.
|
[
{
"version": "v1",
"created": "Fri, 25 Oct 2019 16:00:44 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 15:17:28 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Yue",
"Yuanhao",
""
],
[
"Zou",
"Qin",
""
],
[
"Yu",
"Hongkai",
""
],
[
"Wang",
"Qian",
""
],
[
"Wang",
"Zhongyuan",
""
],
[
"Wang",
"Song",
""
]
] |
new_dataset
| 0.995967 |
2005.08572
|
Florentin Putz
|
Florentin Putz, Flor \'Alvarez, Jiska Classen
|
Acoustic Integrity Codes: Secure Device Pairing Using Short-Range
Acoustic Communication
|
11 pages, 11 figures. Published at ACM WiSec 2020 (13th ACM
Conference on Security and Privacy in Wireless and Mobile Networks). Updated
references
|
WiSec 2020: Proceedings of the 13th ACM Conference on Security and
Privacy in Wireless and Mobile Networks
|
10.1145/3395351.3399420
| null |
cs.CR cs.NI cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Secure Device Pairing (SDP) relies on an out-of-band channel to authenticate
devices. This requires a common hardware interface, which limits the use of
existing SDP systems. We propose to use short-range acoustic communication for
the initial pairing. Audio hardware is commonly available on existing
off-the-shelf devices and can be accessed from user space without requiring
firmware or hardware modifications. We improve upon previous approaches by
designing Acoustic Integrity Codes (AICs): a modulation scheme that provides
message authentication on the acoustic physical layer. We analyze their
security and demonstrate that we can defend against signal cancellation attacks
by designing signals with low autocorrelation. Our system can detect
overshadowing attacks using a ternary decision function with a threshold. In
our evaluation of this SDP scheme's security and robustness, we achieve a bit
error ratio below 0.1% for a net bit rate of 100 bps with a signal-to-noise
ratio (SNR) of 14 dB. Using our open-source proof-of-concept implementation on
Android smartphones, we demonstrate pairing between different smartphone
models.
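  The low-autocorrelation property that the paper relies on against signal
cancellation can be checked numerically; a minimal sketch (the actual AIC
signal design is in the paper, not reproduced here):
```python
import numpy as np

def peak_sidelobe(signal):
    """Normalized peak sidelobe of the autocorrelation of `signal`.
    Low values indicate signals that are hard to cancel with a
    delayed, inverted copy."""
    s = np.asarray(signal, dtype=float)
    ac = np.correlate(s, s, mode="full")
    center = len(ac) // 2
    ac = np.abs(ac) / ac[center]          # normalize by the zero-lag peak
    sidelobes = np.delete(ac, center)
    return sidelobes.max()

# Example: a random binary sequence typically has low autocorrelation
rng = np.random.default_rng(0)
seq = rng.choice([-1.0, 1.0], size=127)
print(peak_sidelobe(seq))
```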
|
[
{
"version": "v1",
"created": "Mon, 18 May 2020 10:33:26 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Aug 2020 17:53:32 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Putz",
"Florentin",
""
],
[
"Álvarez",
"Flor",
""
],
[
"Classen",
"Jiska",
""
]
] |
new_dataset
| 0.999429 |
2112.06102
|
Pedro Machado PhD
|
Pedro Machado, Joao Filipe Ferreira, Andreas Oikonomou, T.M. McGinnity
|
NeuroHSMD: Neuromorphic Hybrid Spiking Motion Detector
| null | null | null | null |
cs.NE cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Vertebrate retinas are highly efficient at processing trivial visual tasks
such as detecting moving objects, which nevertheless remain complex challenges
for modern computers. In vertebrates, the detection of object motion is performed by
specialised retinal cells named Object Motion Sensitive Ganglion Cells
(OMS-GC). OMS-GC process continuous visual signals and generate spike patterns
that are post-processed by the Visual Cortex. Our previous Hybrid Sensitive
Motion Detector (HSMD) algorithm was the first hybrid algorithm to enhance
Background subtraction (BS) algorithms with a customised 3-layer Spiking Neural
Network (SNN) that generates OMS-GC spiking-like responses. In this work, we
present a Neuromorphic Hybrid Sensitive Motion Detector (NeuroHSMD) algorithm
that accelerates our HSMD algorithm using Field-Programmable Gate Arrays
(FPGAs). The NeuroHSMD was compared against the HSMD algorithm, using the same
2012 Change Detection (CDnet2012) and 2014 Change Detection (CDnet2014)
benchmark datasets. When tested against the CDnet2012 and CDnet2014 datasets,
NeuroHSMD performs object motion detection at 720x480 at 28.06 Frames Per
Second (fps) and 720x480 at 28.71 fps, respectively, with no degradation of
quality. Moreover, the NeuroHSMD proposed in this paper was completely
implemented in Open Computer Language (OpenCL) and therefore is easily
replicated in other devices such as Graphical Processing Units (GPUs) and
clusters of Central Processing Units (CPUs).
|
[
{
"version": "v1",
"created": "Sun, 12 Dec 2021 00:01:15 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Jan 2022 21:38:18 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Jul 2022 17:24:28 GMT"
},
{
"version": "v4",
"created": "Sat, 12 Nov 2022 23:55:54 GMT"
},
{
"version": "v5",
"created": "Tue, 14 Feb 2023 23:43:01 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Machado",
"Pedro",
""
],
[
"Ferreira",
"Joao Filipe",
""
],
[
"Oikonomou",
"Andreas",
""
],
[
"McGinnity",
"T. M.",
""
]
] |
new_dataset
| 0.988817 |
2205.08738
|
Jiahao Zhu
|
Jiahao Zhu, Huajun Zhou, Zixuan Chen, Yi Zhou, Xiaohua Xie
|
3D-VFD: A Victim-free Detector against 3D Adversarial Point Clouds
|
6 pages, 13 figures
| null | null | null |
cs.MM cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D deep models consuming point clouds have achieved solid results in computer
vision applications. However, recent studies have shown they are vulnerable to
3D adversarial point clouds. In this paper, we regard these malicious point
clouds as 3D steganography examples and present a new perspective, 3D
steganalysis, to counter such examples. Specifically, we propose 3D-VFD, a
victim-free detector against 3D adversarial point clouds. Its core idea is to
capture the discrepancies between residual geometric feature distributions of
benign point clouds and adversarial point clouds and map these point clouds to
a lower dimensional space where we can efficiently distinguish them. Unlike
existing detection techniques against 3D adversarial point clouds, 3D-VFD does
not rely on the victim 3D deep model's outputs for discrimination. Extensive
experiments demonstrate that 3D-VFD achieves state-of-the-art detection and can
effectively detect 3D adversarial attacks based on point adding and point
perturbation while keeping fast detection speed.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 06:19:15 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Jan 2023 04:21:18 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Feb 2023 05:22:34 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Zhu",
"Jiahao",
""
],
[
"Zhou",
"Huajun",
""
],
[
"Chen",
"Zixuan",
""
],
[
"Zhou",
"Yi",
""
],
[
"Xie",
"Xiaohua",
""
]
] |
new_dataset
| 0.998507 |
2207.03157
|
Zixuan Huang
|
Zixuan Huang, Beixiong Zheng, Rui Zhang
|
Roadside IRS-Aided Vehicular Communication: Efficient Channel Estimation
and Low-Complexity Beamforming Design
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent reflecting surface (IRS) has emerged as a promising technique to
control wireless propagation environment for enhancing the communication
performance cost-effectively. However, the rapidly time-varying channel in
high-mobility communication scenarios such as vehicular communication renders
it challenging to obtain the instantaneous channel state information (CSI)
efficiently for IRS with a large number of reflecting elements. In this paper,
we propose a new roadside IRS-aided vehicular communication system to tackle
this challenge. Specifically, by exploiting the symmetrical deployment of IRSs
with inter-laced equal intervals on both sides of the road and the cooperation
among nearby IRS controllers, we propose a new two-stage channel estimation
scheme with off-line and online training, respectively, to obtain the
static/time-varying CSI required by the proposed low-complexity passive
beamforming scheme efficiently. The proposed IRS beamforming and online channel
estimation designs leverage the existing uplink pilots in wireless networks and
do not require any change of the existing transmission protocol. Moreover, they
can be implemented by each of IRS controllers independently, without the need
of any real-time feedback from the user's serving BS. Simulation results show
that the proposed designs can efficiently achieve the high IRS passive
beamforming gain and thus significantly enhance the achievable communication
throughput for high-speed vehicular communications.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 08:42:40 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 11:14:38 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Huang",
"Zixuan",
""
],
[
"Zheng",
"Beixiong",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.999175 |
2211.07173
|
Yuzhou Peng
|
Jie Wang, Yuzhou Peng, Xiaodong Yang, Ting Wang, Yanming Zhang
|
SportsTrack: An Innovative Method for Tracking Athletes in Sports Scenes
|
7 pages,9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The SportsMOT dataset aims to solve multiple object tracking of athletes in
different sports scenes such as basketball or soccer. The dataset is
challenging because of the unstable camera views, athletes' complex
trajectories, and complicated backgrounds. Previous MOT methods cannot match
enough high-quality tracks of athletes. To pursue higher MOT performance in
sports scenes, we introduce an innovative tracker named SportsTrack that
follows the tracking-by-detection paradigm. We then introduce a three-stage
matching process to handle motion blur and body overlap in sports scenes. As a
further innovation, we use a one-to-many correspondence between detection
bboxes and crowded tracks to handle the overlap of athletes' bodies during
sports competitions. Compared to other trackers such as BOT-SORT and
ByteTrack, we carefully restore edge-lost tracks that they ignore. Finally, we
reach the SOTA result on the SportsMOT dataset.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 08:09:38 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 03:23:39 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Feb 2023 08:48:41 GMT"
},
{
"version": "v4",
"created": "Wed, 15 Feb 2023 03:25:30 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Wang",
"Jie",
""
],
[
"Peng",
"Yuzhou",
""
],
[
"Yang",
"Xiaodong",
""
],
[
"Wang",
"Ting",
""
],
[
"Zhang",
"Yanming",
""
]
] |
new_dataset
| 0.999042 |
2301.03551
|
Khaleel Mershad
|
Omar Cheikhrouhou, Khaleel Mershad, Faisal Jamil, Redowan Mahmud, Anis
Koubaa, Sanaz Rahimi Moosavi
|
A Lightweight Blockchain and Fog-enabled Secure Remote Patient
Monitoring System
|
32 pages, 13 figures, 5 tables, accepted by Elsevier "Internet of
Things; Engineering Cyber Physical Human Systems" journal on January 9, 2023
| null |
10.1016/j.iot.2023.100691
| null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
IoT has enabled the rapid growth of smart remote healthcare applications.
These IoT-based remote healthcare applications deliver fast and preventive
medical services to patients at risk or with chronic diseases. However,
ensuring data security and patient privacy while exchanging sensitive medical
data among medical IoT devices is still a significant concern in remote
healthcare applications. Altered or corrupted medical data may cause wrong
treatment and create grave health issues for patients. Moreover, current remote
medical applications' efficiency and response time need to be addressed and
improved. Considering the need for secure and efficient patient care, this
paper proposes a lightweight Blockchain-based and Fog-enabled remote patient
monitoring system that provides a high level of security and efficient response
time. Simulation results and security analysis show that the proposed
lightweight blockchain architecture fits the resource-constrained IoT devices
well and is secure against attacks. Moreover, the augmentation of Fog computing
improved the responsiveness of the remote patient monitoring system by 40%.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 18:01:35 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Cheikhrouhou",
"Omar",
""
],
[
"Mershad",
"Khaleel",
""
],
[
"Jamil",
"Faisal",
""
],
[
"Mahmud",
"Redowan",
""
],
[
"Koubaa",
"Anis",
""
],
[
"Moosavi",
"Sanaz Rahimi",
""
]
] |
new_dataset
| 0.985022 |
2301.11030
|
Marcel Gohsen MSc.
|
Marcel Gohsen and Matthias Hagen and Martin Potthast and Benno Stein
|
Paraphrase Acquisition from Image Captions
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We propose to use image captions from the Web as a previously underutilized
resource for paraphrases (i.e., texts with the same "message") and to create
and analyze a corresponding dataset. When an image is reused on the Web, an
original caption is often assigned. We hypothesize that different captions for
the same image naturally form a set of mutual paraphrases. To demonstrate the
suitability of this idea, we analyze captions in the English Wikipedia, where
editors frequently relabel the same image for different articles. The paper
introduces the underlying mining technology, the resulting Wikipedia-IPC
dataset, and compares known paraphrase corpora with respect to their syntactic
and semantic paraphrase similarity to our new resource. In this context, we
introduce characteristic maps along the two similarity dimensions to identify
the style of paraphrases coming from different sources. An annotation study
demonstrates the high reliability of the algorithmically determined
characteristic maps.
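  The core mining step, grouping the different captions assigned to one reused
image into candidate paraphrase sets, can be sketched in a few lines (the
record layout is an illustrative assumption, not the Wikipedia-IPC schema):
```python
from collections import defaultdict
from itertools import combinations

def caption_paraphrase_pairs(records):
    """records: iterable of (image_id, caption) tuples gathered from
    articles that reuse the same image. Returns candidate paraphrase
    pairs: two distinct captions attached to the same image."""
    by_image = defaultdict(set)
    for image_id, caption in records:
        by_image[image_id].add(caption.strip())
    pairs = []
    for captions in by_image.values():
        if len(captions) > 1:               # image reused with relabeling
            pairs.extend(combinations(sorted(captions), 2))
    return pairs
```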
|
[
{
"version": "v1",
"created": "Thu, 26 Jan 2023 10:54:51 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 15:32:26 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Gohsen",
"Marcel",
""
],
[
"Hagen",
"Matthias",
""
],
[
"Potthast",
"Martin",
""
],
[
"Stein",
"Benno",
""
]
] |
new_dataset
| 0.971278 |
2301.12519
|
Shreelakshmi C R
|
Shreelakshmi C R, Surya S. Durbha, Gaganpreet Singh
|
3D Object Detection in LiDAR Point Clouds using Graph Neural Networks
|
Errors in the results section. Experiments are carried out to rectify
the results
| null | null | null |
cs.CV cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
LiDAR (Light Detection and Ranging) is an advanced active remote sensing
technique working on the principle of time of travel (ToT) for capturing highly
accurate 3D information of the surroundings. LiDAR has gained wide attention in
research and development, with the LiDAR industry expected to reach $2.8
billion by 2025. Although LiDAR data are of rich density and high spatial
resolution, they are challenging to process due to their inherent 3D geometry
and massive volume. Nevertheless, such high-resolution data hold immense
potential in many applications, notably 3D object detection and recognition.
In this research we propose a Graph Neural Network (GNN) based framework to
learn and identify objects in 3D LiDAR point clouds. GNNs are a class of deep
learning models that learn patterns and objects based on the principle of
graph learning and have shown success in various 3D computer vision tasks.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 19:23:01 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 06:11:07 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"R",
"Shreelakshmi C",
""
],
[
"Durbha",
"Surya S.",
""
],
[
"Singh",
"Gaganpreet",
""
]
] |
new_dataset
| 0.999818 |
2302.06476
|
Chengwei Qin
|
Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro
Yasunaga, Diyi Yang
|
Is ChatGPT a General-Purpose Natural Language Processing Task Solver?
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Spurred by advancements in scale, large language models (LLMs) have
demonstrated the ability to perform a variety of natural language processing
(NLP) tasks zero-shot -- i.e., without adaptation on downstream data. Recently,
the debut of ChatGPT has drawn a great deal of attention from the natural
language processing (NLP) community due to the fact that it can generate
high-quality responses to human input and self-correct previous mistakes based
on subsequent conversations. However, it is not yet known whether ChatGPT can
serve as a generalist model that can perform many NLP tasks zero-shot. In this
work, we empirically analyze the zero-shot learning ability of ChatGPT by
evaluating it on 20 popular NLP datasets covering 7 representative task
categories. With extensive empirical studies, we demonstrate both the
effectiveness and limitations of the current version of ChatGPT. We find that
ChatGPT performs well on many tasks favoring reasoning capabilities (e.g.,
arithmetic reasoning) while it still faces challenges when solving specific
tasks such as sequence tagging. We additionally provide in-depth analysis
through qualitative case studies.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 09:44:51 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 17:46:20 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Qin",
"Chengwei",
""
],
[
"Zhang",
"Aston",
""
],
[
"Zhang",
"Zhuosheng",
""
],
[
"Chen",
"Jiaao",
""
],
[
"Yasunaga",
"Michihiro",
""
],
[
"Yang",
"Diyi",
""
]
] |
new_dataset
| 0.994322 |
2302.07241
|
Krishna Murthy Jatavallabhula
|
Krishna Murthy Jatavallabhula and Alihusein Kuwajerwala and Qiao Gu
and Mohd Omama and Tao Chen and Shuang Li and Ganesh Iyer and Soroush
Saryazdi and Nikhil Keetha and Ayush Tewari and Joshua B. Tenenbaum and Celso
Miguel de Melo and Madhava Krishna and Liam Paull and Florian Shkurti and
Antonio Torralba
|
ConceptFusion: Open-set Multimodal 3D Mapping
| null | null | null | null |
cs.CV cs.AI cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Building 3D maps of the environment is central to robot navigation, planning,
and interaction with objects in a scene. Most existing approaches that
integrate semantic concepts with 3D maps largely remain confined to the
closed-set setting: they can only reason about a finite set of concepts,
pre-defined at training time. Further, these maps can only be queried using
class labels, or in recent work, using text prompts.
We address both these issues with ConceptFusion, a scene representation that
is (i) fundamentally open-set, enabling reasoning beyond a closed set of
concepts and (ii) inherently multimodal, enabling a diverse range of possible
queries to the 3D map, from language, to images, to audio, to 3D geometry, all
working in concert. ConceptFusion leverages the open-set capabilities of
today's foundation models pre-trained on internet-scale data to reason about
concepts across modalities such as natural language, images, and audio. We
demonstrate that pixel-aligned open-set features can be fused into 3D maps via
traditional SLAM and multi-view fusion approaches. This enables effective
zero-shot spatial reasoning, not needing any additional training or finetuning,
and retains long-tailed concepts better than supervised approaches,
outperforming them by more than 40% margin on 3D IoU. We extensively evaluate
ConceptFusion on a number of real-world datasets, simulated home environments,
a real-world tabletop manipulation task, and an autonomous driving platform. We
showcase new avenues for blending foundation models with 3D open-set multimodal
mapping.
For more information, visit our project page https://concept-fusion.github.io
or watch our 5-minute explainer video
https://www.youtube.com/watch?v=rkXgws8fiDs
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 18:40:26 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 01:49:09 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Jatavallabhula",
"Krishna Murthy",
""
],
[
"Kuwajerwala",
"Alihusein",
""
],
[
"Gu",
"Qiao",
""
],
[
"Omama",
"Mohd",
""
],
[
"Chen",
"Tao",
""
],
[
"Li",
"Shuang",
""
],
[
"Iyer",
"Ganesh",
""
],
[
"Saryazdi",
"Soroush",
""
],
[
"Keetha",
"Nikhil",
""
],
[
"Tewari",
"Ayush",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"de Melo",
"Celso Miguel",
""
],
[
"Krishna",
"Madhava",
""
],
[
"Paull",
"Liam",
""
],
[
"Shkurti",
"Florian",
""
],
[
"Torralba",
"Antonio",
""
]
] |
new_dataset
| 0.956806 |
2302.07455
|
Xinyi Chen
|
Jinxia Zhang, Xinyi Chen, Haikun Wei, Kanjian Zhang
|
A lightweight network for photovoltaic cell defect detection in
electroluminescence images based on neural architecture search and knowledge
distillation
|
12 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, the rapid development of photovoltaic(PV) power stations requires
increasingly reliable maintenance and fault diagnosis of PV modules in the
field. Due to their effectiveness, convolutional neural networks (CNNs) have
been widely used in existing automatic defect detection of PV cells. However,
the parameters of these CNN-based models are very large, which requires
stringent hardware resources and makes them difficult to apply in actual
industrial projects. To solve these problems, we propose a novel lightweight
high-performance model for automatic defect detection of PV cells in
electroluminescence(EL) images based on neural architecture search and
knowledge distillation. To auto-design an effective lightweight model, we
introduce neural architecture search to the field of PV cell defect
classification for the first time. Since the defect can be any size, we design
a proper search structure of network to better exploit the multi-scale
characteristic. To improve the overall performance of the searched lightweight
model, we further transfer the knowledge learned by the existing pre-trained
large-scale model based on knowledge distillation. Different kinds of knowledge
are exploited and transferred, including attention information, feature
information, logit information and task-oriented information. Experiments have
demonstrated that the proposed model achieves the state-of-the-art performance
on the public PV cell dataset of EL images under online data augmentation with
accuracy of 91.74% and the parameters of 1.85M. The proposed lightweight
high-performance model can be easily deployed to the end devices of the actual
industrial projects and retain the accuracy.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 04:00:35 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Zhang",
"Jinxia",
""
],
[
"Chen",
"Xinyi",
""
],
[
"Wei",
"Haikun",
""
],
[
"Zhang",
"Kanjian",
""
]
] |
new_dataset
| 0.998292 |
2302.07478
|
Hongtao Zhong
|
Hongtao Zhong, Zhonghao Chen, Wenqin Huangfu, Chen Wang, Yixin Xu,
Tianyi Wang, Yao Yu, Yongpan Liu, Vijaykrishnan Narayanan, Huazhong Yang,
Xueqing Li
|
ASMCap: An Approximate String Matching Accelerator for Genome Sequence
Analysis Based on Capacitive Content Addressable Memory
|
Accepted by Design Automation Conference (DAC) 2023
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genome sequence analysis is a powerful tool in medical and scientific
research. Considering the inevitable sequencing errors and genetic variations,
approximate string matching (ASM) has been adopted in practice for genome
sequencing. However, with exponentially increasing bio-data, ASM hardware
acceleration is facing severe challenges in improving the throughput and energy
efficiency with the accuracy constraint. This paper presents ASMCap, an ASM
acceleration approach for genome sequence analysis with hardware-algorithm
co-optimization. At the circuit level, ASMCap adopts charge-domain computing
based on the capacitive multi-level content addressable memories (ML-CAMs), and
outperforms the state-of-the-art ML-CAM-based ASM accelerators EDAM with higher
accuracy and energy efficiency. ASMCap also has misjudgment correction
capability with two proposed hardware-friendly strategies, namely the
Hamming-Distance Aid Correction (HDAC) for the substitution-dominant edits and
the Threshold-Aware Sequence Rotation (TASR) for the consecutive indels.
Evaluation results show that ASMCap can achieve an average of 1.2x (from 74.7%
to 87.6%) and up to 1.8x (from 46.3% to 81.2%) higher F1 score (the key metric
of accuracy), 1.4x speedup, and 10.8x energy efficiency improvement compared
with EDAM. Compared with the other ASM accelerators, including ResMA based on
the comparison matrix, and SaVI based on the seeding strategy, ASMCap achieves
an average improvement of 174x and 61x speedup, and 8.7e3x and 943x higher
energy efficiency, respectively.
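  Approximate string matching with a distance threshold, the operation ASMCap
accelerates in hardware, reduces in software to something like the following
reference sketch (substitution-only Hamming matching, not the capacitive
ML-CAM implementation; indels require edit distance instead):
```python
def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def asm_scan(read, reference, max_dist=2):
    """Slide `read` along `reference`, reporting every offset whose
    Hamming distance is within `max_dist`."""
    k = len(read)
    return [i for i in range(len(reference) - k + 1)
            if hamming(read, reference[i:i + k]) <= max_dist]

print(asm_scan("ACGT", "TTACGAACGTAC", max_dist=1))   # -> [2, 6]
```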
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 05:49:56 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Zhong",
"Hongtao",
""
],
[
"Chen",
"Zhonghao",
""
],
[
"Huangfu",
"Wenqin",
""
],
[
"Wang",
"Chen",
""
],
[
"Xu",
"Yixin",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Yu",
"Yao",
""
],
[
"Liu",
"Yongpan",
""
],
[
"Narayanan",
"Vijaykrishnan",
""
],
[
"Yang",
"Huazhong",
""
],
[
"Li",
"Xueqing",
""
]
] |
new_dataset
| 0.999458 |
2302.07483
|
Shihan Liu
|
Shihan Liu, Junlin Zha, Jian Sun, Zhuo Li and Gang Wang
|
EdgeYOLO: An Edge-Real-Time Object Detector
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes an efficient, low-complexity and anchor-free object
detector based on the state-of-the-art YOLO framework, which can be implemented
in real time on edge computing platforms. We develop an enhanced data
augmentation method to effectively suppress overfitting during training, and
design a hybrid random loss function to improve the detection accuracy of small
objects. Inspired by FCOS, a lighter and more efficient decoupled head is
proposed, and its inference speed can be improved with little loss of
precision. Our baseline model can reach the accuracy of 50.6% AP50:95 and 69.8%
AP50 in MS COCO2017 dataset, 26.4% AP50:95 and 44.8% AP50 in VisDrone2019-DET
dataset, and it meets real-time requirements (FPS>=30) on edge-computing device
Nvidia Jetson AGX Xavier. We also designed lighter models with less parameters
for edge computing devices with lower computing power, which also show better
performances. Our source code, hyper-parameters and model weights are all
available at https://github.com/LSH9832/edgeyolo.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 06:05:14 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Liu",
"Shihan",
""
],
[
"Zha",
"Junlin",
""
],
[
"Sun",
"Jian",
""
],
[
"Li",
"Zhuo",
""
],
[
"Wang",
"Gang",
""
]
] |
new_dataset
| 0.997255 |
2302.07492
|
Franck Dernoncourt
|
Catherine Yeh, Nedim Lipka, Franck Dernoncourt
|
Envisioning the Next-Gen Document Reader
|
Paper accepted at the AAAI 2023 Workshop on Scientific Document
Understanding
| null | null | null |
cs.CL cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
People read digital documents on a daily basis to share, exchange, and
understand information in electronic settings. However, current document
readers create a static, isolated reading experience, which does not support
users' goals of gaining more knowledge and performing additional tasks through
document interaction. In this work, we present our vision for the next-gen
document reader that strives to enhance user understanding and create a more
connected, trustworthy information experience. We describe 18 NLP-powered
features to add to existing document readers and propose a novel plug-in
marketplace that allows users to further customize their reading experience, as
demonstrated through 3 exploratory UI prototypes available at
https://github.com/catherinesyeh/nextgen-prototypes
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 06:43:12 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Yeh",
"Catherine",
""
],
[
"Lipka",
"Nedim",
""
],
[
"Dernoncourt",
"Franck",
""
]
] |
new_dataset
| 0.963529 |
2302.07564
|
Lili Yang
|
Feng Shu, Lili Yang, Yan Wang, Xuehui Wang, Weiping Shi, Chong Shen,
Jiangzhou Wang
|
Precoding and Beamforming Design for Intelligent Reconfigurable
Surface-Aided Hybrid Secure Spatial Modulation
|
14pages,8figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent reflecting surface (IRS) is an emerging technology for wireless
communication composed of a large number of low-cost passive devices with
reconfigurable parameters, which can reflect signals with a certain phase shift
and is capable of building a programmable communication environment. In this
paper, to avoid the high hardware cost and energy consumption in spatial
modulation (SM), an IRS-aided hybrid secure SM (SSM) system with a hybrid
precoder is proposed. To improve the security performance, we formulate an
optimization problem to maximize the secrecy rate (SR) by jointly optimizing
the beamforming at IRS and hybrid precoding at the transmitter. Considering
that the SR has no closed form expression, an approximate SR (ASR) expression
is derived as the objective function. To improve the SR performance, three IRS
beamforming methods, called IRS alternating direction method of multipliers
(IRS-ADMM), IRS block coordinate ascend (IRS-BCA) and IRS semi-definite
relaxation (IRS-SDR), are proposed. As for the hybrid precoding design,
approximated secrecy rate-successive convex approximation (ASR-SCA) method and
cut-off rate-gradient ascend (COR-GA) method are proposed. Simulation results
demonstrate that the proposed IRS-SDR and IRS-ADMM beamformers harvest
substantial SR performance gains over IRS-BCA. Particularly, the proposed
IRS-ADMM and IRS-BCA are of low-complexity at the expense of a little
performance loss compared with IRS-SDR. For hybrid precoding, the proposed
ASR-SCA performs better than COR-GA in the high transmit power region.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 10:09:09 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Shu",
"Feng",
""
],
[
"Yang",
"Lili",
""
],
[
"Wang",
"Yan",
""
],
[
"Wang",
"Xuehui",
""
],
[
"Shi",
"Weiping",
""
],
[
"Shen",
"Chong",
""
],
[
"Wang",
"Jiangzhou",
""
]
] |
new_dataset
| 0.97246 |
2302.07655
|
Felix Staudigl
|
Felix Staudigl, Thorben Fetz, Rebecca Pelke, Dominik Sisejkovic, Jan
Moritz Joseph, Leticia Bolzani P\"ohls, and Rainer Leupers
|
Fault Injection in Native Logic-in-Memory Computation on Neuromorphic
Hardware
| null | null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Logic-in-memory (LIM) describes the execution of logic gates within
memristive crossbar structures, promising to improve performance and energy
efficiency. Utilizing only binary values, LIM particularly excels at
accelerating binary neural networks (BNNs), shifting it into the focus of edge
applications. Despite this potential, the impact of faults on BNNs
accelerated with LIM still lacks investigation. In this paper, we propose
faulty logic-in-memory (FLIM), a fault injection platform capable of executing
full-fledged BNNs on LIM while injecting in-field faults. The results show that
FLIM runs a single MNIST picture 66754x faster than the state of the art by
offering a fine-grained fault injection methodology.
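  Fault injection into binarized weights, the kind of in-field fault FLIM
models, can be sketched as random sign flips (a toy software model with an
assumed flip probability; FLIM injects faults at the memristive-crossbar
level):
```python
import torch

def inject_bitflips(weights, p=1e-3, generator=None):
    """Flip each element of a binarized (+1/-1) weight tensor with
    probability `p`, modeling flipped cells in the field."""
    mask = torch.rand(weights.shape, generator=generator) < p
    faulty = weights.clone()
    faulty[mask] = -faulty[mask]
    return faulty

w = torch.sign(torch.randn(4, 4))   # toy binary weight matrix
print(inject_bitflips(w, p=0.1))
```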
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 13:38:57 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Staudigl",
"Felix",
""
],
[
"Fetz",
"Thorben",
""
],
[
"Pelke",
"Rebecca",
""
],
[
"Sisejkovic",
"Dominik",
""
],
[
"Joseph",
"Jan Moritz",
""
],
[
"Pöhls",
"Leticia Bolzani",
""
],
[
"Leupers",
"Rainer",
""
]
] |
new_dataset
| 0.995196 |
2302.07676
|
Shengyu Hao
|
Shenghao Hao, Peiyuan Liu, Yibing Zhan, Kaixun Jin, Zuozhu Liu, Mingli
Song, Jenq-Neng Hwang, Gaoang Wang
|
DIVOTrack: A Novel Dataset and Baseline Method for Cross-View
Multi-Object Tracking in DIVerse Open Scenes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-view multi-object tracking aims to link objects between frames and
camera views with substantial overlaps. Although cross-view multi-object
tracking has received increased attention in recent years, existing datasets
still have several issues, including 1) missing real-world scenarios, 2)
lacking diverse scenes, 3) owning a limited number of tracks, 4) comprising
only static cameras, and 5) lacking standard benchmarks, which hinder the
investigation and comparison of cross-view tracking methods. To solve the
aforementioned issues, we introduce DIVOTrack: a new cross-view multi-object
tracking dataset for DIVerse Open scenes with dense tracking pedestrians in
realistic and non-experimental environments. Our DIVOTrack has ten distinct
scenarios and 550 cross-view tracks, surpassing all cross-view multi-object
tracking datasets currently available. Furthermore, we provide a novel baseline
cross-view tracking method with a unified joint detection and cross-view
tracking framework named CrossMOT, which learns object detection, single-view
association, and cross-view matching with an all-in-one embedding model.
Finally, we present a summary of current methodologies and a set of standard
benchmarks with our DIVOTrack to provide a fair comparison and conduct a
comprehensive analysis of current approaches and our proposed CrossMOT. The
dataset and code are available at https://github.com/shengyuhao/DIVOTrack.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 14:10:42 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Hao",
"Shenghao",
""
],
[
"Liu",
"Peiyuan",
""
],
[
"Zhan",
"Yibing",
""
],
[
"Jin",
"Kaixun",
""
],
[
"Liu",
"Zuozhu",
""
],
[
"Song",
"Mingli",
""
],
[
"Hwang",
"Jenq-Neng",
""
],
[
"Wang",
"Gaoang",
""
]
] |
new_dataset
| 0.999814 |
2302.07734
|
Vishnu Naresh Boddeti
|
Zhichao Lu, Chuntao Ding, Felix Juefei-Xu, Vishnu Naresh Boddeti,
Shangguang Wang, and Yun Yang
|
TFormer: A Transmission-Friendly ViT Model for IoT Devices
|
IEEE Transactions on Parallel and Distributed Systems
| null | null | null |
cs.CV cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deploying high-performance vision transformer (ViT) models on ubiquitous
Internet of Things (IoT) devices to provide high-quality vision services will
revolutionize the way we live, work, and interact with the world. Due to the
contradiction between the limited resources of IoT devices and
resource-intensive ViT models, the use of cloud servers to assist ViT model
training has become mainstream. However, due to the large number of parameters
and floating-point operations (FLOPs) of existing ViT models, the models
transmitted by cloud servers are large and difficult to run on
resource-constrained IoT devices. To this end, this paper proposes a
transmission-friendly ViT model, TFormer, for deployment on
resource-constrained IoT devices with the assistance of a cloud server. The
high performance and small number of model parameters and FLOPs of TFormer are
attributed to the proposed hybrid layer and the proposed partially connected
feed-forward network (PCS-FFN). The hybrid layer consists of nonlearnable
modules and a pointwise convolution, which can obtain multitype and multiscale
features with only a few parameters and FLOPs to improve the TFormer
performance. The PCS-FFN adopts group convolution to reduce the number of
parameters. The key idea of this paper is to propose TFormer with few model
parameters and FLOPs to facilitate applications running on resource-constrained
IoT devices to benefit from the high performance of the ViT models.
Experimental results on the ImageNet-1K, MS COCO, and ADE20K datasets for image
classification, object detection, and semantic segmentation tasks demonstrate
that the proposed model outperforms other state-of-the-art models.
Specifically, TFormer-S achieves 5% higher accuracy on ImageNet-1K than
ResNet18 with 1.4$\times$ fewer parameters and FLOPs.
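  The parameter saving from group convolution in the PCS-FFN can be seen in a
small PyTorch sketch (the layer sizes and group count below are illustrative
assumptions, not the paper's exact configuration):
```python
import torch.nn as nn

def ffn_params(dim=256, hidden=1024, groups=4):
    """Compare parameter counts of a dense pointwise FFN and a
    partially connected one built from grouped 1x1 convolutions."""
    dense = nn.Sequential(
        nn.Conv2d(dim, hidden, 1), nn.GELU(), nn.Conv2d(hidden, dim, 1))
    grouped = nn.Sequential(
        nn.Conv2d(dim, hidden, 1, groups=groups), nn.GELU(),
        nn.Conv2d(hidden, dim, 1, groups=groups))
    count = lambda m: sum(p.numel() for p in m.parameters())
    return count(dense), count(grouped)

print(ffn_params())   # the grouped FFN uses roughly 1/groups the weights
```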
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 15:36:10 GMT"
}
] | 2023-02-16T00:00:00 |
[
[
"Lu",
"Zhichao",
""
],
[
"Ding",
"Chuntao",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Boddeti",
"Vishnu Naresh",
""
],
[
"Wang",
"Shangguang",
""
],
[
"Yang",
"Yun",
""
]
] |
new_dataset
| 0.951845 |
1912.00582
|
Alex Warstadt
|
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng,
Sheng-Fu Wang, Samuel R. Bowman
|
BLiMP: The Benchmark of Linguistic Minimal Pairs for English
|
2020: Published in TACL Feb 2023: Corrected erroneous GPT-2 results
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce The Benchmark of Linguistic Minimal Pairs (shortened to BLiMP),
a challenge set for evaluating what language models (LMs) know about major
grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each
containing 1000 minimal pairs isolating specific contrasts in syntax,
morphology, or semantics. The data is automatically generated according to
expert-crafted grammars, and aggregate human agreement with the labels is
96.4%. We use it to evaluate n-gram, LSTM, and Transformer (GPT-2 and
Transformer-XL) LMs. We find that state-of-the-art models identify
morphological contrasts reliably, but they struggle with semantic restrictions
on the distribution of quantifiers and negative polarity items and subtle
syntactic phenomena such as extraction islands.
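  Evaluating an LM on a minimal pair reduces to checking that it assigns higher
probability to the acceptable sentence; a sketch using the Hugging Face GPT-2
checkpoint (the example pair is illustrative, not drawn from BLiMP):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence):
    """Total log-probability GPT-2 assigns to `sentence`."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean NLL over the ids.shape[1] - 1 predicted tokens
    return -out.loss.item() * (ids.shape[1] - 1)

good = "The cats annoy Tim."
bad = "The cats annoys Tim."
print(log_prob(good) > log_prob(bad))   # True if the model prefers the pair's acceptable member
```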
|
[
{
"version": "v1",
"created": "Mon, 2 Dec 2019 05:42:41 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Apr 2020 02:07:03 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Sep 2020 20:08:54 GMT"
},
{
"version": "v4",
"created": "Tue, 14 Feb 2023 10:33:15 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Warstadt",
"Alex",
""
],
[
"Parrish",
"Alicia",
""
],
[
"Liu",
"Haokun",
""
],
[
"Mohananey",
"Anhad",
""
],
[
"Peng",
"Wei",
""
],
[
"Wang",
"Sheng-Fu",
""
],
[
"Bowman",
"Samuel R.",
""
]
] |
new_dataset
| 0.999769 |
2012.13341
|
Yuchi Zhang
|
Chunjin Song, Yuchi Zhang, Willis Peng, Parmis Mohaghegh, Bastian
Wandt, and Helge Rhodin
|
AudioViewer: Learning to Visualize Sounds
| null |
Proceedings of the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV), 2023, pp. 2206-2216
| null | null |
cs.HC cs.CV cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A long-standing goal in the field of sensory substitution is to enable sound
perception for deaf and hard of hearing (DHH) people by visualizing audio
content. Different from existing models that translate to hand sign language,
between speech and text, or text and images, we target immediate and low-level
audio to video translation that applies to generic environment sounds as well
as human speech. Since such a substitution is artificial, without labels for
supervised learning, our core contribution is to build a mapping from audio to
video that learns from unpaired examples via high-level constraints. For
speech, we additionally disentangle content from style, such as gender and
dialect. Qualitative and quantitative results, including a human study,
demonstrate that our unpaired translation approach maintains important audio
features in the generated video and that videos of faces and numbers are well
suited for visualizing high-dimensional audio features that can be parsed by
humans to match and distinguish between sounds and words. Code and models are
available at https://chunjinsong.github.io/audioviewer
|
[
{
"version": "v1",
"created": "Tue, 22 Dec 2020 21:52:45 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Dec 2020 21:35:09 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Mar 2021 19:51:23 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Dec 2021 08:31:19 GMT"
},
{
"version": "v5",
"created": "Thu, 10 Nov 2022 06:33:29 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Song",
"Chunjin",
""
],
[
"Zhang",
"Yuchi",
""
],
[
"Peng",
"Willis",
""
],
[
"Mohaghegh",
"Parmis",
""
],
[
"Wandt",
"Bastian",
""
],
[
"Rhodin",
"Helge",
""
]
] |
new_dataset
| 0.97755 |
2112.10735
|
Mustafa Cemil Coskun
|
Peihong Yuan and Mustafa Cemil Co\c{s}kun
|
Successive Cancellation Ordered Search Decoding of Modified
$\boldsymbol{G}_N$-Coset Codes
|
14 pages, 9 figures, 3 tables. Submitted to IEEE journal. The revised
version of the first submission. Major changes: 1) No dedicated section for
numerical results. Instead, simulations are provided right after the relevant
section. 2) More simulation results are added to compare all the state of art
polar decoders in terms of the number of arithmetic operations. arXiv admin
note: text overlap with arXiv:2105.04048
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A tree search algorithm called successive cancellation ordered search (SCOS)
is proposed for $\boldsymbol{G}_N$-coset codes that implements
maximum-likelihood (ML) decoding with an adaptive complexity for transmission
over binary-input AWGN channels. Unlike bit-flip decoders, no outer code is
needed to terminate decoding; therefore, SCOS also applies to
$\boldsymbol{G}_N$-coset codes modified with dynamic frozen bits. The average
complexity is close to that of successive cancellation (SC) decoding at
practical frame error rates (FERs) for codes with wide ranges of rates and
lengths up to $512$ bits, which perform within $0.25$ dB or less from the
random coding union bound and outperform Reed--Muller codes under ML decoding
by up to $0.5$ dB. Simulations illustrate simultaneous gains for SCOS over
SC-Fano, SC stack (SCS) and SC list (SCL) decoding in FER and the average
complexity at various SNR regimes. SCOS is further extended by forcing it to
look for candidates satisfying a threshold on the likelihood, thereby
outperforming basic SCOS under complexity constraints. The modified SCOS
enables strong error-detection capability without the need for an outer code.
In particular, the $(128, 64)$ PAC code under modified SCOS provides gains in
overall and undetected FER compared to CRC-aided polar codes under SCL/dynamic
SC flip decoding at high SNR.
|
[
{
"version": "v1",
"created": "Mon, 20 Dec 2021 18:32:27 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 23:26:54 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Yuan",
"Peihong",
""
],
[
"Coşkun",
"Mustafa Cemil",
""
]
] |
new_dataset
| 0.999517 |
2201.09737
|
Nabil Ibtehaz
|
Nabil Ibtehaz, Muhammad E. H. Chowdhury, Amith Khandakar, Susu M.
Zughaier, Serkan Kiranyaz, M. Sohel Rahman
|
RamanNet: A generalized neural network architecture for Raman Spectrum
Analysis
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Raman spectroscopy provides a vibrational profile of the molecules and thus
can be used to uniquely identify different kinds of materials. This sort of
molecular fingerprinting has led to widespread application of Raman spectra
in various fields such as medical diagnostics, forensics, mineralogy,
bacteriology, and virology. Despite the recent rise in Raman spectra data
volume, there has not been any significant effort in developing generalized
machine learning methods for Raman spectra analysis. We examine, experiment
with, and evaluate existing methods and conjecture that neither current
sequential models nor traditional machine learning models are satisfactory for
analyzing Raman spectra. Both have their perks and pitfalls; we therefore
attempt to mix the best of both worlds and propose a novel network
architecture, RamanNet. RamanNet is immune to the invariance property of CNNs
and at the same time improves on traditional machine learning models through
the inclusion of sparse connectivity. Our experiments on 4 public datasets
demonstrate superior performance over much more complex state-of-the-art
methods, and thus RamanNet has the potential to become the de facto standard
in Raman spectra data analysis.
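  The sparse-connectivity idea, separate small dense layers over shifted
spectral windows rather than one fully connected layer or a shift-invariant
CNN, can be sketched in PyTorch (window, stride, and embedding sizes are
illustrative assumptions, not the paper's configuration):
```python
import torch
import torch.nn as nn

class WindowedMLP(nn.Module):
    """Apply an independent small dense layer to each shifted window of
    a Raman spectrum -- sparse connectivity without CNN shift invariance."""
    def __init__(self, n_points=1000, window=50, stride=25, emb=16):
        super().__init__()
        self.window, self.stride = window, stride
        n_win = (n_points - window) // stride + 1
        self.fcs = nn.ModuleList(nn.Linear(window, emb) for _ in range(n_win))

    def forward(self, x):                 # x: (batch, n_points)
        outs = []
        for i, fc in enumerate(self.fcs):
            seg = x[:, i * self.stride: i * self.stride + self.window]
            outs.append(torch.relu(fc(seg)))
        return torch.cat(outs, dim=1)     # concatenated window embeddings

y = WindowedMLP()(torch.randn(2, 1000))
print(y.shape)                            # torch.Size([2, 624])
```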
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 23:15:25 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 20:27:25 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Ibtehaz",
"Nabil",
""
],
[
"Chowdhury",
"Muhammad E. H.",
""
],
[
"Khandakar",
"Amith",
""
],
[
"Zughaier",
"Susu M.",
""
],
[
"Kiranyaz",
"Serkan",
""
],
[
"Rahman",
"M. Sohel",
""
]
] |
new_dataset
| 0.999257 |
2201.13143
|
Shen Wang
|
Jiaying Guo and Long Cheng and Shen Wang
|
CoTV: Cooperative Control for Traffic Light Signals and Connected
Autonomous Vehicles using Deep Reinforcement Learning
| null | null | null | null |
cs.AI cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reducing travel time alone is insufficient to support the
development of future smart transportation systems. To align with the United
Nations Sustainable Development Goals (UN-SDG), a further reduction of fuel and
emissions, improvements of traffic safety, and the ease of infrastructure
deployment and maintenance should also be considered. Different from existing
work focusing on the optimization of the control in either traffic light signal
(to improve the intersection throughput), or vehicle speed (to stabilize the
traffic), this paper presents a multi-agent Deep Reinforcement Learning (DRL)
system called CoTV, which Cooperatively controls both Traffic light signals and
Connected Autonomous Vehicles (CAV). Therefore, our CoTV can well balance the
achievement of the reduction of travel time, fuel, and emissions. In the
meantime, CoTV can also be easy to deploy by cooperating with only one CAV that
is the nearest to the traffic light controller on each incoming road. This
enables more efficient coordination between traffic light controllers and CAV,
thus leading to the convergence of training CoTV under the large-scale
multi-agent scenario that is traditionally difficult to converge. We give the
detailed system design of CoTV and demonstrate its effectiveness in a
simulation study using SUMO under various grid maps and realistic urban
scenarios with mixed-autonomy traffic.
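
As a concrete illustration of the one-CAV-per-road deployment rule, here is
a minimal Python sketch; the (vehicle id, road id, distance) data layout is
an assumption for illustration, not CoTV's actual interface.

    def nearest_cav_per_road(cavs, incoming_roads):
        # Pick, for each incoming road, the single connected vehicle
        # closest to the traffic light controller -- the only vehicle
        # the traffic-light agent coordinates with.
        nearest = {}
        for vid, road, dist in cavs:
            if road in incoming_roads:
                if road not in nearest or dist < nearest[road][1]:
                    nearest[road] = (vid, dist)
        return {road: vid for road, (vid, _) in nearest.items()}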
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 11:40:13 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2023 14:47:16 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Guo",
"Jiaying",
""
],
[
"Cheng",
"Long",
""
],
[
"Wang",
"Shen",
""
]
] |
new_dataset
| 0.996828 |
2207.05081
|
James Smith
|
James E. Smith
|
A Macrocolumn Architecture Implemented with Spiking Neurons
|
This is a major revision. Neuron outputs are encoded as the body
potential. Winner-take-all inhibition then compares body potentials to
determine a winner. At the end of each cycle, a non-zero WTA output is
converted to a binary spike. This method remains consistent with temporal
neuron operation internal to a cycle, with only a single bit of temporal
precision being maintained between cycles
| null | null | null |
cs.NE cs.LG q-bio.NC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The macrocolumn is a key component of a neuromorphic computing system that
interacts with an external environment under control of an agent. Environments
are learned and stored in the macrocolumn as labeled directed graphs where
edges connect features and labels indicate the relative displacements between
them. Macrocolumn functionality is first defined with a state machine model.
This model is then implemented with a neural network composed of spiking
neurons. The neuron model employs active dendrites and mirrors the
Hawkins/Numenta neuron model. The architecture is demonstrated with a research
benchmark in which an agent employs a macrocolumn to first learn and then
navigate 2-d environments containing pseudo-randomly placed features.
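
A toy Python sketch of the labeled directed graph described above; the
feature names and grid displacements are invented for illustration.

    class EnvironmentGraph:
        # Nodes are observed features; each edge label is the relative
        # displacement (dx, dy) between two features on a 2-d grid.
        def __init__(self):
            self.edges = {}  # feature -> {(dx, dy): feature}

        def learn(self, src, displacement, dst):
            self.edges.setdefault(src, {})[displacement] = dst

        def predict(self, feature, displacement):
            # What the agent expects to sense after moving by
            # `displacement` from `feature`.
            return self.edges.get(feature, {}).get(displacement)

    g = EnvironmentGraph()
    g.learn("wall", (1, 0), "door")
    assert g.predict("wall", (1, 0)) == "door"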
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 17:20:57 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2023 16:46:15 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Smith",
"James E.",
""
]
] |
new_dataset
| 0.978362 |
2210.05408
|
Ren\'e B{\o}dker Christensen
|
Ren\'e B{\o}dker Christensen and Petar Popovski
|
Private Randomness Agreement and its Application in Quantum Key
Distribution Networks
|
6 pages
|
IEEE Communications Letters; vol. 27, no. 2, February 2023. pp.
477-481
|
10.1109/LCOMM.2022.3225262
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We define a variation on the well-known problem of private message
transmission. This new problem, called private randomness agreement (PRA),
gives the two participants access to a public, authenticated channel
alongside the main channels, and the 'message' is not fixed a priori.
Instead, the participants aim to agree on a random string completely unknown
to a computationally unbounded adversary.
We define privacy and reliability, and show that PRA cannot be solved in a
single round. We then show that it can be solved in three rounds, albeit with
exponential cost, and give an efficient four-round protocol based on polynomial
evaluation.
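
As an illustration of the polynomial-evaluation primitive, here is a
minimal sketch over a prime field; the field size and polynomial degree are
placeholder choices, not the protocol's actual parameters.

    import secrets

    P = 2**127 - 1  # a Mersenne prime, chosen here only for illustration

    def eval_poly(coeffs, x, p=P):
        # Horner evaluation of a polynomial over GF(p).
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % p
        return acc

    coeffs = [secrets.randbelow(P) for _ in range(4)]  # random cubic
    share = eval_poly(coeffs, 7)  # evaluation at an agreed point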
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 12:32:31 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Nov 2022 07:40:33 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Feb 2023 07:39:28 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Christensen",
"René Bødker",
""
],
[
"Popovski",
"Petar",
""
]
] |
new_dataset
| 0.963569 |
2210.12858
|
Yunqi Zhang
|
Yunqi Zhang and Shaileshh Bojja Venkatakrishnan
|
Kadabra: Adapting Kademlia for the Decentralized Web
|
Financial Cryptography and Data Security 2023 (FC 2023); 27 pages, 20
figures
| null | null | null |
cs.NI cs.AI cs.DS cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchains have become the catalyst for a growing movement to create a more
decentralized Internet. A fundamental operation of applications in a
decentralized Internet is data storage and retrieval. As today's blockchains
are limited in their storage functionalities, in recent years a number of
peer-to-peer data storage networks have emerged based on the Kademlia
distributed hash table protocol. However, existing Kademlia implementations are
not efficient enough to support fast data storage and retrieval operations
necessary for (decentralized) Web applications. In this paper, we present
Kadabra, a decentralized protocol for computing the routing table entries in
Kademlia to accelerate lookups. Kadabra is motivated by the multi-armed bandit
problem, and can automatically adapt to heterogeneity and dynamism in the
network. Experimental results show Kadabra achieving 15% to 50% lower
lookup latencies than state-of-the-art baselines.
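
To make the bandit framing concrete, here is a minimal epsilon-greedy
sketch in which each candidate routing-table peer is an arm and the reward
is negative observed lookup latency; Kadabra's actual algorithm is more
sophisticated, so treat this only as the framing.

    import random

    class PeerBandit:
        def __init__(self, peers, eps=0.1):
            self.eps = eps
            self.stats = {p: [0.0, 0] for p in peers}  # peer -> [mean, n]

        def choose(self):
            if random.random() < self.eps:  # explore
                return random.choice(list(self.stats))
            return max(self.stats, key=lambda p: self.stats[p][0])

        def update(self, peer, latency_ms):
            mean, n = self.stats[peer]
            # Reward is negative latency, so lower latency -> higher mean.
            self.stats[peer] = [(mean * n - latency_ms) / (n + 1), n + 1]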
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 21:21:19 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2023 17:46:39 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Zhang",
"Yunqi",
""
],
[
"Venkatakrishnan",
"Shaileshh Bojja",
""
]
] |
new_dataset
| 0.991752 |
2212.09408
|
Feng Lin
|
Feng Lin, Wenze Hu, Yaowei Wang, Yonghong Tian, Guangming Lu, Fanglin
Chen, Yong Xu, Xiaoyu Wang
|
Million-scale Object Detection with Large Vision Model
|
This paper is revised by ChatGPT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past few years, there has been growing interest in developing a
broad, universal, and general-purpose computer vision system. Such a system
would have the potential to solve a wide range of vision tasks simultaneously,
without being restricted to a specific problem or data domain. This is crucial
for practical, real-world computer vision applications. In this study, we focus
on the million-scale multi-domain universal object detection problem, which
presents several challenges, including cross-dataset category label
duplication, label conflicts, and the need to handle hierarchical taxonomies.
Furthermore, there is an ongoing challenge in the field to find a
resource-efficient way to leverage large pre-trained vision models for
million-scale cross-dataset object detection. To address these challenges, we
introduce our approach to label handling, hierarchy-aware loss design, and
resource-efficient model training using a pre-trained large model. Our method
was ranked second in the object detection track of the Robust Vision Challenge
2022 (RVC 2022). We hope that our detailed study will serve as a useful
reference and alternative approach for similar problems in the computer vision
community. The code is available at https://github.com/linfeng93/Large-UniDet.
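
To make the label-handling idea concrete, here is a hedged sketch of a
unified taxonomy with a hierarchy check; the label names and parent map are
hypothetical, not the paper's actual mapping.

    # Dataset-specific labels naming the same concept collapse onto one
    # unified node; each node remembers its parent so a hierarchy-aware
    # loss can avoid penalizing correct-but-coarser predictions.
    UNIFIED = {
        ("coco", "person"): "person",
        ("oid", "Person"): "person",
        ("oid", "Man"): "man",
    }
    PARENT = {"man": "person", "person": None}

    def is_ancestor_or_self(candidate, label):
        while label is not None:
            if label == candidate:
                return True
            label = PARENT[label]
        return False

    assert is_ancestor_or_self("person", "man")  # coarser but not wrong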
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 12:40:13 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2023 13:09:48 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Lin",
"Feng",
""
],
[
"Hu",
"Wenze",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Tian",
"Yonghong",
""
],
[
"Lu",
"Guangming",
""
],
[
"Chen",
"Fanglin",
""
],
[
"Xu",
"Yong",
""
],
[
"Wang",
"Xiaoyu",
""
]
] |
new_dataset
| 0.997811 |
2302.01110
|
Huayi Zhou
|
Huayi Zhou, Fei Jiang, and Hongtao Lu
|
DirectMHP: Direct 2D Multi-Person Head Pose Estimation with Full-range
Angles
|
13 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing head pose estimation (HPE) methods mainly focus on a single person
with a pre-detected frontal head, which limits their applications in real,
complex scenarios involving multiple persons. We argue that these
single-person HPE methods are fragile and inefficient for Multi-Person Head
Pose Estimation (MPHPE), since they rely on a separately trained face
detector that cannot generalize well to full viewpoints, especially for
heads with invisible face areas. In this paper, we focus on the full-range
MPHPE problem and propose a direct end-to-end simple baseline named
DirectMHP. Due to the lack of datasets applicable to full-range MPHPE, we
first construct two benchmarks by extracting ground-truth labels for head
detection and head orientation from the public datasets AGORA and CMU
Panoptic. They are rather challenging, containing many truncated, occluded,
tiny, and unevenly illuminated human heads. We then design a novel
end-to-end trainable one-stage network architecture that jointly regresses
the locations and orientations of multiple heads to address the MPHPE
problem. Specifically, we regard pose as an auxiliary attribute of the head
and append it after the traditional object prediction. Arbitrary pose
representations, such as Euler angles, are supported by this flexible
design. We then jointly optimize the two tasks by sharing features and
utilizing appropriate multiple losses. In this way, our method can
implicitly benefit from more surroundings to improve HPE accuracy while
maintaining head detection performance. We present comprehensive comparisons
with state-of-the-art single-person HPE methods on public benchmarks, as
well as superior baseline results on our constructed MPHPE datasets.
Datasets and code are released at https://github.com/hnuzhy/DirectMHP.
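
A hedged PyTorch sketch of appending pose as an auxiliary per-anchor
attribute after the usual box and objectness outputs; channel counts and
head layout are illustrative, not the paper's exact architecture.

    import torch.nn as nn

    class DetectionHeadWithPose(nn.Module):
        def __init__(self, in_ch=256, n_anchors=3):
            super().__init__()
            # Per anchor: 4 box offsets + 1 objectness + 3 Euler angles.
            self.pred = nn.Conv2d(in_ch, n_anchors * (4 + 1 + 3), 1)

        def forward(self, feat):  # feat: (B, C, H, W)
            out = self.pred(feat)
            B, _, H, W = out.shape
            out = out.view(B, -1, 4 + 1 + 3, H, W)
            box, obj = out[:, :, :4], out[:, :, 4:5]
            pose = out[:, :, 5:]  # yaw, pitch, roll per anchor
            return box, obj, pose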
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 14:08:49 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2023 13:30:31 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Zhou",
"Huayi",
""
],
[
"Jiang",
"Fei",
""
],
[
"Lu",
"Hongtao",
""
]
] |
new_dataset
| 0.999152 |
2302.05729
|
Ha Thanh Nguyen
|
Ha-Thanh Nguyen
|
A Brief Report on LawGPT 1.0: A Virtual Legal Assistant Based on GPT-3
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
LawGPT 1.0 is a virtual legal assistant built on the state-of-the-art
language model GPT-3, fine-tuned for the legal domain. The system is designed
to provide legal assistance to users in a conversational manner, helping them
with tasks such as answering legal questions, generating legal documents, and
providing legal advice. In this paper, we provide a brief overview of LawGPT
1.0, its architecture, and its performance on a set of legal benchmark tasks.
Please note that the detailed information about the model is protected by a
non-disclosure agreement (NDA) and cannot be disclosed in this report.
|
[
{
"version": "v1",
"created": "Sat, 11 Feb 2023 15:50:20 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Feb 2023 06:26:42 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Nguyen",
"Ha-Thanh",
""
]
] |
new_dataset
| 0.992905 |
2302.06729
|
Danilo Neves Ribeiro
|
Danilo Ribeiro, Shen Wang, Xiaofei Ma, Henry Zhu, Rui Dong, Deguang
Kong, Juliette Burger, Anjelica Ramos, William Wang, Zhiheng Huang, George
Karypis, Bing Xiang, Dan Roth
|
STREET: A Multi-Task Structured Reasoning and Explanation Benchmark
|
Published in ICLR 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce STREET, a unified multi-task and multi-domain natural language
reasoning and explanation benchmark. Unlike most existing question-answering
(QA) datasets, we expect models to not only answer questions, but also produce
step-by-step structured explanations describing how premises in the question
are used to produce intermediate conclusions that can prove the correctness of
a certain answer. We perform extensive evaluation with popular language models
such as GPT-3 with few-shot prompting and fine-tuned T5. We find that these
models
still lag behind human performance when producing such structured reasoning
steps. We believe this work will provide a way for the community to better
train and test systems on multi-step reasoning and explanations in natural
language.
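
As an illustration of what a structured reasoning step might look like,
here is a small Python sketch; the field names and sentence ids are
illustrative, not the benchmark's actual schema.

    from dataclasses import dataclass

    @dataclass
    class ReasoningStep:
        premises: list[str]  # ids of premises used, e.g. ["sent1", "sent3"]
        conclusion: str      # the intermediate conclusion they support

    proof = [
        ReasoningStep(["sent1", "sent2"], "int1: the ball accelerates"),
        ReasoningStep(["int1", "sent3"],
                      "hypothesis: the ball reaches the ground first"),
    ]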
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 22:34:02 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Ribeiro",
"Danilo",
""
],
[
"Wang",
"Shen",
""
],
[
"Ma",
"Xiaofei",
""
],
[
"Zhu",
"Henry",
""
],
[
"Dong",
"Rui",
""
],
[
"Kong",
"Deguang",
""
],
[
"Burger",
"Juliette",
""
],
[
"Ramos",
"Anjelica",
""
],
[
"Wang",
"William",
""
],
[
"Huang",
"Zhiheng",
""
],
[
"Karypis",
"George",
""
],
[
"Xiang",
"Bing",
""
],
[
"Roth",
"Dan",
""
]
] |
new_dataset
| 0.968929 |
2302.06806
|
Kam Kwai Wong
|
Kam Kwai Wong, Xingbo Wang, Yong Wang, Jianben He, Rong Zhang, Huamin
Qu
|
Anchorage: Visual Analysis of Satisfaction in Customer Service Videos
via Anchor Events
|
13 pages. A preprint version of a publication at IEEE Transactions on
Visualization and Computer Graphics (TVCG), 2023
| null | null | null |
cs.HC cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Delivering customer services through video communications has brought new
opportunities to analyze customer satisfaction for quality management. However,
due to the lack of reliable self-reported responses, service providers are
troubled by the inadequate estimation of customer services and the tedious
investigation into multimodal video recordings. We introduce Anchorage, a
visual analytics system to evaluate customer satisfaction by summarizing
multimodal behavioral features in customer service videos and revealing
abnormal operations in the service process. We leverage the semantically
meaningful operations to introduce structured event understanding into
videos, which helps service providers quickly navigate to events of
interest.
Anchorage supports a comprehensive evaluation of customer satisfaction from the
service and operation levels and efficient analysis of customer behavioral
dynamics via multifaceted visualization views. We extensively evaluate
Anchorage through a case study and a carefully designed user study. The results
demonstrate its effectiveness and usability in assessing customer satisfaction
using customer service videos. We found that introducing event contexts in
assessing customer satisfaction can enhance its performance without
compromising annotation precision. Our approach can be adapted in situations
where unlabelled and unstructured videos are collected along with sequential
records.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 03:20:51 GMT"
}
] | 2023-02-15T00:00:00 |
[
[
"Wong",
"Kam Kwai",
""
],
[
"Wang",
"Xingbo",
""
],
[
"Wang",
"Yong",
""
],
[
"He",
"Jianben",
""
],
[
"Zhang",
"Rong",
""
],
[
"Qu",
"Huamin",
""
]
] |
new_dataset
| 0.978696 |