Dataset schema (column: type, with observed value lengths; nullable columns marked):

id: string (9-10 chars)
submitter: string (2-52 chars, nullable)
authors: string (4-6.51k chars)
title: string (4-246 chars)
comments: string (1-523 chars, nullable)
journal-ref: string (4-345 chars, nullable)
doi: string (11-120 chars, nullable)
report-no: string (2-243 chars, nullable)
categories: string (5-98 chars)
license: string (9 classes)
abstract: string (33-3.33k chars)
versions: list
update_date: timestamp[s]
authors_parsed: list
prediction: string (1 class)
probability: float64 (0.95-1)
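Records with this schema can be consumed directly from JSON. A minimal stdlib-only sketch follows, using a truncated stand-in record (the field values are illustrative fragments, not a full dataset row):

```python
import json

# Hypothetical example record mirroring the dataset's columns; values are
# truncated stand-ins for illustration only.
record_json = '''
{"id": "2302.14754",
 "submitter": "Ahmed Hossain",
 "categories": "cs.LG stat.AP stat.ML",
 "prediction": "new_dataset",
 "probability": 0.999328,
 "authors_parsed": [["Hossain", "Ahmed", ""], ["Sun", "Xiaoduan", ""]]}
'''

record = json.loads(record_json)

# The categories column is a single space-separated string.
categories = record["categories"].split()

# Keep only high-confidence "new_dataset" predictions.
is_confident = record["prediction"] == "new_dataset" and record["probability"] >= 0.99

# authors_parsed entries are [last, first, suffix] triples.
names = [f"{first} {last}".strip() for last, first, _ in record["authors_parsed"]]

print(categories)    # ['cs.LG', 'stat.AP', 'stat.ML']
print(is_confident)  # True
print(names)         # ['Ahmed Hossain', 'Xiaoduan Sun']
```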
id: 2302.14754
submitter: Ahmed Hossain
authors: Ahmed Hossain, Xiaoduan Sun, Shahrin Islam, Shah Alam, Md Mahmud Hossain
title: Identifying roadway departure crash patterns on rural two-lane highways under different lighting conditions: association knowledge using data mining approach
comments: null
journal-ref: Journal of Safety Research 2023
doi: 10.1016/j.jsr.2023.01.006
report-no: null
categories: cs.LG stat.AP stat.ML
license: http://creativecommons.org/licenses/by/4.0/
abstract: More than half of all fatalities on U.S. highways each year result from roadway departure (RwD). Previous research has explored various risk factors that contribute to RwD crashes; however, a comprehensive investigation of the effect of lighting conditions has been lacking. Using the Louisiana Department of Transportation and Development crash database, fatal and injury RwD crashes occurring on rural two-lane (R2L) highways between 2008 and 2017 were analyzed by lighting condition: daylight and dark (with/without streetlight). This research employed a safe system approach to explore meaningful complex interactions among multidimensional crash risk factors. To accomplish this, an unsupervised data mining algorithm, association rules mining (ARM), was utilized. Based on the generated rules, the findings reveal several interesting crash patterns in the daylight, dark-with-streetlight, and dark-no-streetlight conditions, emphasizing the importance of investigating RwD crash patterns by lighting condition. In daylight, fatal RwD crashes are associated with cloudy weather, distracted drivers, standing water on the roadway, no seat belt use, and construction zones. In dark conditions (with/without streetlight), the majority of RwD crashes are associated with alcohol/drug involvement, young drivers (15-24 years), driver condition (e.g., inattentive, distracted, ill/fatigued/asleep), and collisions with animal(s). The findings reveal how certain driver behavior patterns are connected to RwD crashes, such as a strong association between alcohol/drug intoxication and no seat belt use in the dark-no-streetlight condition. Based on the identified crash patterns and behavioral characteristics under different lighting conditions, the findings could aid researchers and safety specialists in developing the most effective RwD crash mitigation strategies.
versions: v1 (Tue, 28 Feb 2023 16:53:54 GMT)
update_date: 2023-03-01T00:00:00
authors_parsed: [["Hossain", "Ahmed", ""], ["Sun", "Xiaoduan", ""], ["Islam", "Shahrin", ""], ["Alam", "Shah", ""], ["Hossain", "Md Mahmud", ""]]
prediction: new_dataset
probability: 0.999328
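The support and confidence metrics by which ARM rules like the ones above are ranked can be sketched as follows (the crash "transactions" are invented toy examples, not records from the Louisiana database):

```python
# Toy illustration of the support/confidence metrics behind association rule
# mining (ARM). The crash "transactions" below are invented examples.
crashes = [
    {"dark_no_streetlight", "alcohol", "no_seat_belt"},
    {"dark_no_streetlight", "alcohol", "no_seat_belt"},
    {"dark_no_streetlight", "young_driver"},
    {"daylight", "cloudy", "distracted"},
    {"daylight", "construction_zone"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimate of P(consequent | antecedent)."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

rule_lhs = {"dark_no_streetlight", "alcohol"}
rule_rhs = {"no_seat_belt"}
print(support(rule_lhs | rule_rhs, crashes))   # 0.4
print(confidence(rule_lhs, rule_rhs, crashes)) # 1.0
```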
id: 2302.14807
submitter: Mohamed Nagy
authors: Mohamed Nagy, Majid Khonji, Jorge Dias and Sajid Javed
title: DFR-FastMOT: Detection Failure Resistant Tracker for Fast Multi-Object Tracking Based on Sensor Fusion
comments: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Persistent multi-object tracking (MOT) allows autonomous vehicles to navigate safely in highly dynamic environments. One of the well-known challenges in MOT is object occlusion, when an object becomes unobservable in subsequent frames. Current MOT methods store object information, such as trajectories, in internal memory to recover objects after occlusions. However, they retain only short-term memory to save computation time and avoid slowing down the tracker. As a result, they lose track of objects in some occlusion scenarios, particularly long ones. In this paper, we propose DFR-FastMOT, a lightweight MOT method that uses data from camera and LiDAR sensors and relies on an algebraic formulation for object association and fusion. The formulation speeds up computation and permits the long-term memory needed to handle more occlusion scenarios. Our method shows outstanding tracking performance over recent learning and non-learning benchmarks, with about 3% and 4% margin in MOTA, respectively. Also, we conduct extensive experiments that simulate occlusion phenomena by employing detectors with various distortion levels. The proposed solution delivers superior performance under various detection distortion levels compared to current state-of-the-art methods. Our framework processes about 7,763 frames in 1.48 seconds, which is seven times faster than recent benchmarks. The framework will be available at https://github.com/MohamedNagyMostafa/DFR-FastMOT.
versions: v1 (Tue, 28 Feb 2023 17:57:06 GMT)
update_date: 2023-03-01T00:00:00
authors_parsed: [["Nagy", "Mohamed", ""], ["Khonji", "Majid", ""], ["Dias", "Jorge", ""], ["Javed", "Sajid", ""]]
prediction: new_dataset
probability: 0.954618
id: 2007.03963
submitter: Jingjie Lv
authors: Jingjie Lv, Ruihu Li, Juan Li
title: The algebraic structure of conjucyclic codes over F_{q^2}
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://creativecommons.org/licenses/by/4.0/
abstract: Conjucyclic codes are an important and special family of classical error-correcting codes, which have been used to construct binary quantum error-correcting codes (QECCs). However, research on conjucyclic codes remains quite limited. This paper explores the algebraic structure of additive conjucyclic codes over $\mathbb{F}_{q^{2}}$ for the first time. Mainly via the trace function from $\mathbb{F}_{q^{2}}$ down to $\mathbb{F}_{q}$, we first build an isomorphic mapping between $q^2$-ary additive conjucyclic codes and $q$-ary linear cyclic codes. Since the mapping preserves weight and orthogonality, the dual structure of these codes with respect to the alternating inner product is then described. From this, a new construction of QECCs from conjucyclic codes is obtained. Finally, we determine the enumeration of $q^2$-ary additive conjucyclic codes of length $n$ and the explicit forms of their generator and parity-check matrices.
versions: v1 (Wed, 8 Jul 2020 08:42:07 GMT), v2 (Mon, 27 Feb 2023 16:33:57 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Lv", "Jingjie", ""], ["Li", "Ruihu", ""], ["Li", "Juan", ""]]
prediction: new_dataset
probability: 0.999174
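For reference, the trace map this construction relies on has the standard definition below (included for convenience; the paper's precise isomorphism is not reproduced here):

```latex
% Trace map from \mathbb{F}_{q^2} onto \mathbb{F}_q (standard definition):
\mathrm{Tr}\colon \mathbb{F}_{q^2} \to \mathbb{F}_q, \qquad
\mathrm{Tr}(x) = x + x^{q}.
% Tr is \mathbb{F}_q-linear, and since x^{q^2} = x for x \in \mathbb{F}_{q^2},
% we have \mathrm{Tr}(x)^q = x^q + x = \mathrm{Tr}(x), so the image lies in \mathbb{F}_q.
```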
id: 2103.16107
submitter: Weiqing Min
authors: Weiqing Min and Zhiling Wang and Yuxin Liu and Mengjiang Luo and Liping Kang and Xiaoming Wei and Xiaolin Wei and Shuqiang Jiang
title: Large Scale Visual Food Recognition
comments: Accepted by IEEE Transactions on Pattern Analysis and Machine Intelligence
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Food recognition plays an important role in food choice and intake, which is essential to human health and well-being. It is thus of importance to the computer vision community, and can further support many food-oriented vision and multimodal tasks. Unfortunately, while generic visual recognition has advanced remarkably thanks to released large-scale datasets, progress in the food domain largely lags behind. In this paper, we introduce Food2K, the largest food recognition dataset, with 2,000 categories and over 1 million images. Compared with existing food recognition datasets, Food2K surpasses them in both categories and images by an order of magnitude, and thus establishes a new challenging benchmark for developing advanced models for food visual representation learning. Furthermore, we propose a deep progressive region enhancement network for food recognition, which mainly consists of two components: progressive local feature learning and region feature enhancement. The former adopts improved progressive training to learn diverse and complementary local features, while the latter utilizes self-attention to incorporate richer context at multiple scales into local features for further local feature enhancement. Extensive experiments on Food2K demonstrate the effectiveness of our proposed method. More importantly, we have verified the better generalization ability of models trained on Food2K in various tasks, including food recognition, food image retrieval, cross-modal recipe retrieval, and food detection and segmentation. Food2K can be further explored to benefit more food-relevant tasks, including emerging and more complex ones (e.g., nutritional understanding of food), and models trained on Food2K can serve as backbones to improve the performance of other food-relevant tasks. We also hope Food2K can serve as a large-scale fine-grained visual recognition benchmark.
versions: v1 (Tue, 30 Mar 2021 06:41:42 GMT), v2 (Wed, 31 Mar 2021 05:01:34 GMT), v3 (Sun, 26 Feb 2023 13:06:56 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Min", "Weiqing", ""], ["Wang", "Zhiling", ""], ["Liu", "Yuxin", ""], ["Luo", "Mengjiang", ""], ["Kang", "Liping", ""], ["Wei", "Xiaoming", ""], ["Wei", "Xiaolin", ""], ["Jiang", "Shuqiang", ""]]
prediction: new_dataset
probability: 0.999726
id: 2107.13063
submitter: Chen Peng
authors: Chen Peng, Stavros Vougioukas, David Slaughter, Zhenghao Fei, Rajkishan Arikapudi
title: A strawberry harvest-aiding system with crop-transport co-robots: Design, development, and field evaluation
comments: null
journal-ref: null
doi: 10.1002/rob.22106
report-no: null
categories: cs.RO cs.SY eess.SY
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: Mechanizing the manual harvesting of fresh market fruits constitutes one of the biggest challenges to the sustainability of the fruit industry. During manual harvesting of some fresh-market crops like strawberries and table grapes, pickers spend significant amounts of time walking to carry full trays to a collection station at the edge of the field. A step toward increasing harvest automation for such crops is to deploy harvest-aid collaborative robots (co-bots) that transport the empty and full trays, thus increasing harvest efficiency by reducing pickers' non-productive walking times. This work presents the development of a co-robotic harvest-aid system and its evaluation during commercial strawberry harvesting. At the heart of the system lies a predictive stochastic scheduling algorithm that minimizes the expected non-picking time, thus maximizing the harvest efficiency. During the evaluation experiments, the co-robots improved the mean harvesting efficiency by around 10% and reduced the mean non-productive time by 60%, when the robot-to-picker ratio was 1:3. The concepts developed in this work can be applied to robotic harvest-aids for other manually harvested crops that involve walking for crop transportation.
versions: v1 (Tue, 27 Jul 2021 19:55:34 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Peng", "Chen", ""], ["Vougioukas", "Stavros", ""], ["Slaughter", "David", ""], ["Fei", "Zhenghao", ""], ["Arikapudi", "Rajkishan", ""]]
prediction: new_dataset
probability: 0.998642
id: 2203.17132
submitter: Leon Kellerhals
authors: Matthias Bentert, Leon Kellerhals, and Rolf Niedermeier
title: Fair Short Paths in Vertex-Colored Graphs
comments: Full version of a paper accepted at AAAI '23
journal-ref: null
doi: null
report-no: null
categories: cs.DS cs.CC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The computation of short paths in graphs with arc lengths is a pillar of graph algorithmics and network science. In a more diverse world, however, not every short path is equally valuable. For the setting where each vertex is assigned to a group (color), we provide a framework to model multiple natural fairness aspects. We seek to find short paths in which the number of occurrences of each color is within some given lower and upper bounds. Among other results, we prove the introduced problems to be computationally intractable (NP-hard and parameterized hard with respect to the number of colors) even in very restricted settings (such as each color should appear with exactly the same frequency), while also presenting an encouraging algorithmic result ("fixed-parameter tractability") related to the length of the sought solution path for the general problem.
versions: v1 (Thu, 31 Mar 2022 15:58:01 GMT), v2 (Thu, 28 Apr 2022 12:03:33 GMT), v3 (Mon, 27 Feb 2023 15:56:35 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Bentert", "Matthias", ""], ["Kellerhals", "Leon", ""], ["Niedermeier", "Rolf", ""]]
prediction: new_dataset
probability: 0.978571
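The color-bounded short-path problem described in this abstract can be stated concretely with a brute-force sketch for tiny graphs (state-space BFS assuming unit edge lengths; this is not the parameterized algorithm from the paper):

```python
from collections import deque

def fair_shortest_path(adj, color, lower, upper, s, t):
    """Fewest-edge s-t path whose per-color vertex counts fall within
    [lower[c], upper[c]] for every color c. Brute-force BFS over
    (vertex, color-count) states; a sketch of the problem statement only."""
    colors = sorted(upper)
    key = lambda counts: tuple(counts[c] for c in colors)
    start = {c: 0 for c in colors}
    start[color[s]] += 1
    queue = deque([(s, start, [s])])
    seen = {(s, key(start))}
    while queue:
        v, counts, path = queue.popleft()
        if v == t and all(lower[c] <= counts[c] for c in colors):
            return path  # upper bounds were enforced during expansion
        for w in adj[v]:
            nxt = dict(counts)
            nxt[color[w]] += 1
            if any(nxt[c] > upper[c] for c in colors):
                continue  # prune states that already violate an upper bound
            state = (w, key(nxt))
            if state not in seen:
                seen.add(state)
                queue.append((w, nxt, path + [w]))
    return None

# Toy diamond graph: two length-2 routes from 0 to 3 with different colors.
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
color = {0: "r", 1: "r", 2: "b", 3: "b"}
print(fair_shortest_path(adj, color, {"r": 1, "b": 1}, {"r": 2, "b": 2}, 0, 3))  # [0, 1, 3]
print(fair_shortest_path(adj, color, {"r": 1, "b": 1}, {"r": 1, "b": 2}, 0, 3))  # [0, 2, 3]
```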
id: 2204.08805
submitter: Jingyuan Liu
authors: Jingyuan Liu, Nazmus Saquib, Zhutian Chen, Rubaiat Habib Kazi, Li-Yi Wei, Hongbo Fu, Chiew-Lan Tai
title: PoseCoach: A Customizable Analysis and Visualization System for Video-based Running Coaching
comments: null
journal-ref: null
doi: 10.1109/TVCG.2022.3230855
report-no: null
categories: cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Videos are an accessible form of media for analyzing sports postures and providing feedback to athletes. Existing sport-specific systems embed bespoke human pose attributes and thus can be hard to scale to new attributes, especially for users without programming experience. Some systems retain scalability by directly showing the differences between two poses, but they might not clearly visualize the key differences that viewers would like to pursue. In addition, video-based coaching systems often present feedback on the correctness of poses by augmenting videos with visual markers or reference poses. However, previewing and augmenting videos limit the analysis and visualization of human poses due to the fixed viewpoints in videos, which confine the observation of captured human movements and cause ambiguity in the augmented feedback. To address these issues, we study customizable human pose data analysis and visualization in the context of running pose attributes, such as joint angles and step distances. Based on existing literature and a formative study, we have designed and implemented a system, PoseCoach, to provide feedback on running poses for amateurs by comparing the running poses of a novice and an expert. PoseCoach adopts a customizable data analysis model to allow users' controllability in defining pose attributes of their interest through our interface. To avoid the influence of viewpoint differences and provide intuitive feedback, PoseCoach visualizes the pose differences as part-based 3D animations on a human model to imitate the demonstration of a human coach. We conduct a user study to verify our design components and conduct expert interviews to evaluate the usefulness of the system.
versions: v1 (Tue, 19 Apr 2022 11:03:26 GMT), v2 (Mon, 27 Feb 2023 14:50:14 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Liu", "Jingyuan", ""], ["Saquib", "Nazmus", ""], ["Chen", "Zhutian", ""], ["Kazi", "Rubaiat Habib", ""], ["Wei", "Li-Yi", ""], ["Fu", "Hongbo", ""], ["Tai", "Chiew-Lan", ""]]
prediction: new_dataset
probability: 0.994348
id: 2205.08159
submitter: Prabhav Singh
authors: Prabhav Singh, Ridam Srivastava, K.P.S. Rana, Vineet Kumar
title: SEMI-FND: Stacked Ensemble Based Multimodal Inference For Faster Fake News Detection
comments: null
journal-ref: null
doi: 10.1016/j.eswa.2022.119302
report-no: null
categories: cs.CL cs.AI
license: http://creativecommons.org/licenses/by/4.0/
abstract: Fake News Detection (FND) is an essential field in natural language processing that aims to identify and check the truthfulness of major claims in a news article to decide the news veracity. FND finds its uses in preventing social, political and national damage caused by misrepresentation of facts which may harm a certain section of society. Further, with the explosive rise in fake news dissemination over social media, including images and text, it has become imperative to identify fake news faster and more accurately. To solve this problem, this work investigates a novel multimodal stacked ensemble-based approach (SEMI-FND) to fake news detection. Focus is also kept on ensuring faster performance with fewer parameters. Moreover, to improve multimodal performance, a deep unimodal analysis is done on the image modality to identify NasNet Mobile as the most appropriate model for the task. For text, an ensemble of BERT and ELECTRA is used. The approach was evaluated on two datasets: Twitter MediaEval and Weibo Corpus. The suggested framework offered accuracies of 85.80% and 86.83% on the Twitter and Weibo datasets respectively. These reported metrics are found to be superior when compared to similar recent works. Further, we also report a reduction in the number of parameters used in training when compared to recent relevant works. SEMI-FND offers an overall parameter reduction of at least 20%, with the unimodal parametric reduction on text being 60%. Therefore, based on the investigations presented, it is concluded that applying a stacked ensemble significantly improves FND over other approaches while also improving speed.
versions: v1 (Tue, 17 May 2022 07:51:55 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Singh", "Prabhav", ""], ["Srivastava", "Ridam", ""], ["Rana", "K. P. S.", ""], ["Kumar", "Vineet", ""]]
prediction: new_dataset
probability: 0.987528
id: 2205.11713
submitter: Ali Hummos
authors: Ali Hummos
title: Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations
comments: Published ICLR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.NE
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: Animals thrive in a constantly changing environment and leverage the temporal structure to learn well-factorized causal representations. In contrast, traditional neural networks suffer from forgetting in changing environments, and many methods have been proposed to limit forgetting with different trade-offs. Inspired by the brain thalamocortical circuit, we introduce a simple algorithm that uses optimization at inference time to generate internal representations of the current task dynamically. The algorithm alternates between updating the model weights and a latent task embedding, allowing the agent to parse the stream of temporal experience into discrete events and organize learning about them. On a continual learning benchmark, it achieves competitive end average accuracy by mitigating forgetting, but importantly, by requiring the model to adapt through latent updates, it organizes knowledge into flexible structures with a cognitive interface to control them. Tasks later in the sequence can be solved through knowledge transfer as they become reachable within the well-factorized latent space. The algorithm meets many of the desiderata of an ideal continually learning agent in open-ended environments, and its simplicity suggests fundamental computations in circuits with abundant feedback control loops such as the thalamocortical circuits in the brain.
versions: v1 (Tue, 24 May 2022 01:29:21 GMT), v2 (Tue, 25 Oct 2022 16:58:30 GMT), v3 (Sun, 26 Feb 2023 05:30:17 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Hummos", "Ali", ""]]
prediction: new_dataset
probability: 0.99729
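The alternation between weight updates and latent task-embedding updates described in this abstract can be sketched on a toy scalar model (the model, data, and step sizes are illustrative assumptions, not the paper's network or benchmark):

```python
# Toy sketch of alternating updates: y = w*x + z, where w stands in for
# shared weights and z for a latent task embedding adapted at inference time.

def grad(w, z, data):
    """Analytic gradients of mean squared error with respect to w and z."""
    gw = sum(2 * (w * x + z - y) * x for x, y in data) / len(data)
    gz = sum(2 * (w * x + z - y) for x, y in data) / len(data)
    return gw, gz

# Two "tasks" share the slope (w = 2) but differ in offset (z = +1 vs. -1).
task_a = [(x, 2 * x + 1) for x in range(-3, 4)]
task_b = [(x, 2 * x - 1) for x in range(-3, 4)]

w, z = 0.0, 0.0
for task in (task_a, task_b, task_a, task_b):
    # Phase 1: on a task switch, adapt only the latent embedding z.
    for _ in range(200):
        _, gz = grad(w, z, task)
        z -= 0.1 * gz
    # Phase 2: update the shared weight w.
    for _ in range(200):
        gw, _ = grad(w, z, task)
        w -= 0.05 * gw

print(round(w, 2), round(z, 2))  # w converges near the shared slope 2.0
```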
id: 2206.04520
submitter: Bao Bach
authors: Trung Dinh Pham, Bao Gia Bach, Lam Trinh Luu, Minh Dinh Nguyen, Hai Duc Pham, Khoa Bui Anh, Xuan Quang Nguyen, Cuong Pham Quoc
title: An FPGA-based Solution for Convolution Operation Acceleration
comments: 11 pages, 6 figures, accepted to The First International Conference on Intelligence of Things (ICIT 2022)
journal-ref: Lecture Notes on Data Engineering and Communications Technologies, vol. 148, Springer, 2022
doi: 10.1007/978-3-031-15063-0_26
report-no: null
categories: cs.AR cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract: Hardware-based acceleration is a broad effort to speed up computationally intensive mathematical operations. This paper proposes an FPGA-based architecture to accelerate the convolution operation, a complex and expensive computing step that appears in many Convolutional Neural Network models. We target the design to the standard convolution operation, intending to launch the product as an edge-AI solution. The project's purpose is to produce an FPGA IP core that can process one convolutional layer at a time. System developers can deploy the IP core with various FPGA families by using Verilog HDL as the primary design language for the architecture. The experimental results show that our single computing core synthesized on a simple edge computing FPGA board can offer 0.224 GOPS. When the board is fully utilized, 4.48 GOPS can be achieved.
versions: v1 (Thu, 9 Jun 2022 14:12:30 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Pham", "Trung Dinh", ""], ["Bach", "Bao Gia", ""], ["Luu", "Lam Trinh", ""], ["Nguyen", "Minh Dinh", ""], ["Pham", "Hai Duc", ""], ["Anh", "Khoa Bui", ""], ["Nguyen", "Xuan Quang", ""], ["Quoc", "Cuong Pham", ""]]
prediction: new_dataset
probability: 0.986268
id: 2206.04910
submitter: Jinsong Chen
authors: Jinsong Chen, Kaiyuan Gao, Gaichao Li, Kun He
title: NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs
comments: Accepted by ICLR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The graph Transformer emerges as a new architecture and has shown superior performance on various graph mining tasks. In this work, we observe that existing graph Transformers treat nodes as independent tokens and construct a single long sequence composed of all node tokens to train the Transformer model, making it hard to scale to large graphs due to the quadratic complexity in the number of nodes of the self-attention computation. To this end, we propose a Neighborhood Aggregation Graph Transformer (NAGphormer) that treats each node as a sequence containing a series of tokens constructed by our proposed Hop2Token module. For each node, Hop2Token aggregates the neighborhood features from different hops into different representations and thereby produces a sequence of token vectors as one input. In this way, NAGphormer can be trained in a mini-batch manner and thus can scale to large graphs. Moreover, we mathematically show that, compared to a category of advanced Graph Neural Networks (GNNs), the decoupled Graph Convolutional Network, NAGphormer can learn more informative node representations from the multi-hop neighborhoods. Extensive experiments on benchmark datasets from small to large demonstrate that NAGphormer consistently outperforms existing graph Transformers and mainstream GNNs. Code is available at https://github.com/JHL-HUST/NAGphormer.
versions: v1 (Fri, 10 Jun 2022 07:23:51 GMT), v2 (Wed, 12 Oct 2022 04:27:32 GMT), v3 (Sat, 14 Jan 2023 07:01:56 GMT), v4 (Mon, 27 Feb 2023 11:39:09 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Chen", "Jinsong", ""], ["Gao", "Kaiyuan", ""], ["Li", "Gaichao", ""], ["He", "Kun", ""]]
prediction: new_dataset
probability: 0.998723
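The Hop2Token idea described in this abstract (turning each node's hop-wise neighborhood aggregates into a token sequence) can be sketched with plain-list linear algebra on a toy graph. This is an unnormalized illustration; the actual module uses a normalized adjacency and learned projections:

```python
# Sketch: build a (K+1)-token sequence per node from powers of the adjacency.

def matmul(a, b):
    """Dense matrix product on nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# Toy path graph 0-1-2 with 2-dimensional node features.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]          # adjacency matrix
X = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]         # node feature matrix

K = 2
hop_feats = [X]          # hop 0: the node's own features
for _ in range(K):
    hop_feats.append(matmul(A, hop_feats[-1]))  # hop k: A^k X

# Token sequence per node: [hop-0 vector, hop-1 vector, ..., hop-K vector],
# so each node becomes its own short sequence and can be mini-batched.
tokens = [[hop[i] for hop in hop_feats] for i in range(len(X))]
print(tokens[0])  # [[1.0, 0.0], [0.0, 1.0], [2.0, 1.0]]
```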
id: 2207.01751
submitter: Xinling Yu
authors: Ziyue Liu, Xinling Yu, Zheng Zhang
title: TT-PINN: A Tensor-Compressed Neural PDE Solver for Edge Computing
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AR cs.DC cs.NA math.NA
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Physics-informed neural networks (PINNs) have been increasingly employed due to their capability of modeling complex physics systems. To achieve better expressiveness, increasingly large network sizes are required in many problems. This has caused challenges when we need to train PINNs on edge devices with limited memory, computing and energy resources. To enable training PINNs on edge devices, this paper proposes an end-to-end compressed PINN based on Tensor-Train decomposition. In solving a Helmholtz equation, our proposed model significantly outperforms the original PINNs despite using far fewer parameters, achieving satisfactory prediction with up to 15$\times$ overall parameter reduction.
versions: v1 (Mon, 4 Jul 2022 23:56:27 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Liu", "Ziyue", ""], ["Yu", "Xinling", ""], ["Zhang", "Zheng", ""]]
prediction: new_dataset
probability: 0.999329
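A back-of-envelope sketch of why a tensor-train (TT) factorization shrinks a dense layer's parameter count; the shapes and TT-rank below are illustrative assumptions, not the paper's architecture:

```python
# Compare a full weight matrix against its TT-format parameter count.

def dense_params(shape_in, shape_out):
    """Full weight matrix: prod(shape_in) x prod(shape_out) entries."""
    n_in = n_out = 1
    for d in shape_in:
        n_in *= d
    for d in shape_out:
        n_out *= d
    return n_in * n_out

def tt_params(shape_in, shape_out, rank):
    """One TT core per factor pair: r_{k-1} * in_k * out_k * r_k entries,
    with boundary ranks r_0 = r_d = 1 and all interior ranks equal to `rank`."""
    d = len(shape_in)
    ranks = [1] + [rank] * (d - 1) + [1]
    return sum(ranks[k] * shape_in[k] * shape_out[k] * ranks[k + 1]
               for k in range(d))

# A 256x256 layer factored as (4,4,4,4) x (4,4,4,4) with TT-rank 8:
full = dense_params((4, 4, 4, 4), (4, 4, 4, 4))
tt = tt_params((4, 4, 4, 4), (4, 4, 4, 4), 8)
print(full, tt, round(full / tt, 1))  # 65536 2304 28.4
```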
id: 2208.06752
submitter: Adrian Jackson
authors: Nicolau Manubens (1), Tiago Quintino (1), Simon D. Smart (1), Emanuele Danovaro (1), and Adrian Jackson (2) ((1) ECMWF, (2) EPCC, The University of Edinburgh)
title: DAOS as HPC Storage, a view from Numerical Weather Prediction
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DC
license: http://creativecommons.org/licenses/by/4.0/
abstract: Object storage solutions potentially address long-standing performance issues with POSIX file systems for certain I/O workloads, and new storage technologies offer promising performance characteristics for data-intensive use cases. In this work, we present a preliminary assessment of Intel's Distributed Asynchronous Object Store (DAOS), an emerging high-performance object store, in conjunction with non-volatile storage and evaluate its potential use for HPC storage. We demonstrate DAOS can provide the required performance, with bandwidth scaling linearly with additional DAOS server nodes in most cases, although choices in configuration and application design can impact achievable bandwidth. We describe a new I/O benchmark and associated metrics that address object storage performance from application-derived workloads.
versions: v1 (Sun, 14 Aug 2022 00:09:31 GMT), v2 (Mon, 27 Feb 2023 13:21:34 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Manubens", "Nicolau", ""], ["Quintino", "Tiago", ""], ["Smart", "Simon D.", ""], ["Danovaro", "Emanuele", ""], ["Jackson", "Adrian", ""]]
prediction: new_dataset
probability: 0.998719
id: 2209.07764
submitter: Juyeop Han
authors: Juyeop Han, Youngjae Min, Hyeok-Joo Chae, Byeong-Min Jeong and Han-Lim Choi
title: DS-K3DOM: 3-D Dynamic Occupancy Mapping with Kernel Inference and Dempster-Shafer Evidential Theory
comments: 7 pages, 3 figures, Accepted to ICRA 2023
journal-ref: null
doi: null
report-no: null
categories: cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Occupancy mapping has been widely utilized to represent the surroundings for autonomous robots to perform tasks such as navigation and manipulation. While occupancy mapping in 2-D environments has been well studied, there have been few approaches suitable for 3-D dynamic occupancy mapping, which is essential for aerial robots. This paper presents a novel 3-D dynamic occupancy mapping algorithm called DS-K3DOM. We first establish a Bayesian method to sequentially update occupancy maps for a stream of measurements based on random finite set theory. Then, we approximate it with particles in the Dempster-Shafer domain to enable real-time computation. Moreover, the algorithm applies kernel-based inference with Dirichlet basic belief assignment to enable dense mapping from sparse measurements. The efficacy of the proposed algorithm is demonstrated through simulations and real experiments.
versions: v1 (Fri, 16 Sep 2022 07:47:40 GMT), v2 (Sat, 25 Feb 2023 03:13:46 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Han", "Juyeop", ""], ["Min", "Youngjae", ""], ["Chae", "Hyeok-Joo", ""], ["Jeong", "Byeong-Min", ""], ["Choi", "Han-Lim", ""]]
prediction: new_dataset
probability: 0.966087
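The Dempster-Shafer machinery this abstract builds on can be illustrated with the classic rule of combination on a two-element occupancy frame. The masses below are toy values; the paper's particle-based approximation is considerably more involved:

```python
from itertools import product

# Frame of discernment for one cell: it is either free or occupied.
FRAME = frozenset({"free", "occ"})

def combine(m1, m2):
    """Dempster's rule: conjunctive combination of two basic belief
    assignments, renormalized by 1 - conflict mass."""
    out = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    return {s: w / (1.0 - conflict) for s, w in out.items()}

m_prior = {frozenset({"occ"}): 0.5, FRAME: 0.5}  # weak belief, half ignorance
m_meas = {frozenset({"occ"}): 0.8, frozenset({"free"}): 0.1, FRAME: 0.1}

fused = combine(m_prior, m_meas)
print(round(fused[frozenset({"occ"})], 3))  # 0.895
```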
id: 2210.01076
submitter: Tsung-Wei Huang
authors: Tsung-Wei Huang
title: qTask: Task-parallel Quantum Circuit Simulation with Incrementality
comments: 2023 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
journal-ref: null
doi: null
report-no: null
categories: cs.DC
license: http://creativecommons.org/licenses/by/4.0/
abstract: Incremental quantum circuit simulation has emerged as an important tool for simulation-driven quantum applications, such as circuit synthesis, verification, and analysis. When a small portion of the circuit is modified, the simulator must incrementally update state amplitudes for reasonable turnaround time and productivity. However, this type of incrementality has been largely ignored by existing research. To fill this gap, we introduce a new incremental quantum circuit simulator called qTask. qTask leverages a task-parallel decomposition strategy to explore both inter- and intra-gate operation parallelisms from partitioned data blocks. Our partitioning strategy effectively narrows down incremental update to a small set of partitions affected by circuit modifiers. We have demonstrated the promising performance of qTask on QASMBench benchmarks. Compared to two state-of-the-art simulators, Qulacs and Qiskit, qTask is respectively 1.46x and 1.71x faster for full simulation and 5.77x and 9.76x faster for incremental simulation.
versions: v1 (Mon, 3 Oct 2022 16:48:29 GMT), v2 (Mon, 27 Feb 2023 01:32:37 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Huang", "Tsung-Wei", ""]]
prediction: new_dataset
probability: 0.999764
id: 2210.14948
submitter: James Kempf
authors: James Kempf
title: kube-volttron: Rearchitecting the VOLTTRON Building Energy Management System for Cloud Native Deployment
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DC cs.CY cs.SE cs.SY eess.SY
license: http://creativecommons.org/licenses/by/4.0/
abstract: Managing the energy consumption of the built environment is an important source of flexible load and decarbonization, enabling building managers and utilities to schedule consumption to avoid costly demand charges and peak times when carbon emissions from grid-generated electricity are highest. A key technology component in building energy management is the building energy management system. Eclipse VOLTTRON is a legacy software platform which enables building energy management. It was developed for the US Department of Energy (DOE) at Pacific Northwest National Labs (PNNL), written in Python, and based on a monolithic build-configure-and-run-in-place system architecture that predates cloud native architectural concepts. Yet the software architecture is componentized in a way that anticipates modular containerized applications, with software agents handling functions like data storage, web access, and communication with IoT devices over specific IoT protocols such as BACnet and Modbus. The agents communicate among themselves over a message bus. This paper describes a proof-of-concept prototype that rearchitects VOLTTRON into a collection of microservices suitable for deployment on the Kubernetes cloud native container orchestration platform. The agents are packaged in redistributable containers that perform specific functions and can be configured when they are deployed. The deployment architecture consists of a single Kubernetes cluster with a central node, nominally in a cloud-based VM, running a microservice that contains the database agent (called a "historian") and the service's web site agent. Gateway nodes running on building sites host a microservice of IoT protocol-specific agents that handles control and data collection to and from devices, and communication back to the central node.
versions: v1 (Wed, 26 Oct 2022 18:04:22 GMT), v2 (Mon, 27 Feb 2023 03:52:21 GMT)
update_date: 2023-02-28T00:00:00
authors_parsed: [["Kempf", "James", ""]]
prediction: new_dataset
probability: 0.998335
2210.16081
|
Mariona Car\'os
|
Mariona Caros, Ariadna Just, Santi Segui, Jordi Vitria
|
Object Segmentation of Cluttered Airborne LiDAR Point Clouds
|
proceedings of the 24th International Conference of the Catalan
Association for Artificial Intelligence (CCIA 2022)
|
Artificial Intelligence Research and Development. 356 (2022)
259-268
|
10.3233/FAIA220347
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Airborne topographic LiDAR is an active remote sensing technology that emits
near-infrared light to map objects on the Earth's surface. Derived products of
LiDAR can serve a wide range of applications because of their
rich three-dimensional spatial information and their capacity to obtain
multiple returns. However, processing point cloud data still requires a
significant effort in manual editing. Certain human-made objects are difficult
to detect because of their variety of shapes, irregularly-distributed point
clouds, and low number of class samples. In this work, we propose an efficient
end-to-end deep learning framework to automatize the detection and segmentation
of objects defined by an arbitrary number of LiDAR points surrounded by
clutter. Our method is based on a light version of PointNet that achieves good
performance on both object recognition and segmentation tasks. The results are
tested against manually delineated power transmission towers and show promising
accuracy.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 11:58:22 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 15:42:14 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Caros",
"Mariona",
""
],
[
"Just",
"Ariadna",
""
],
[
"Segui",
"Santi",
""
],
[
"Vitria",
"Jordi",
""
]
] |
new_dataset
| 0.993868 |
2210.16394
|
Reza Yousefi Mashhoor
|
Reza Yousefi Mashhoor, Ahmad Ayatollahi
|
HeartSiam: A Domain Invariant Model for Heart Sound Classification
| null | null |
10.1109/ICSPIS56952.2022.10044047
| null |
cs.SD eess.AS eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Cardiovascular disease is one of the leading causes of death according to
WHO. Phonocardiography (PCG) is a cost-effective, non-invasive method suitable
for heart monitoring. The main aim of this work is to classify heart sounds
into normal/abnormal categories. Heart sounds are recorded using different
stethoscopes, thus varying in the domain. Based on recent studies, this
variability can affect heart sound classification. This work presents a Siamese
network architecture for learning the similarity between normal vs. normal or
abnormal vs. abnormal signals and the difference between normal vs. abnormal
signals. By applying this similarity and difference learning across all
domains, the task of domain invariant heart sound classification can be well
achieved. We have used the multi-domain 2016 Physionet/CinC challenge dataset
for evaluation. On the evaluation set provided by the
challenge, we have achieved a sensitivity of 82.8%, specificity of 75.3%, and
mean accuracy of 79.1%. While overcoming the multi-domain problem, the proposed
method has surpassed the first-place method of the Physionet challenge in terms
of specificity up to 10.9% and mean accuracy up to 5.6%. Also, compared with
similar state-of-the-art domain invariant methods, our model converges faster
and performs better in specificity (4.1%) and mean accuracy (1.5%) with an
equal number of epochs learned.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 20:26:42 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Dec 2022 10:07:07 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Mashhoor",
"Reza Yousefi",
""
],
[
"Ayatollahi",
"Ahmad",
""
]
] |
new_dataset
| 0.971929 |
2210.16777
|
Jiadi Yao
|
Jiadi Yao, Xing Chen, Xiao-Lei Zhang, Wei-Qiang Zhang and Kunde Yang
|
Symmetric Saliency-based Adversarial Attack To Speaker Identification
| null | null |
10.1109/LSP.2023.3236509
| null |
cs.SD cs.CR cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Adversarial attack approaches to speaker identification, to our knowledge,
either incur high computational cost or are not very effective. To address this
issue, in this paper, we propose a novel generation-network-based approach,
called symmetric saliency-based encoder-decoder (SSED), to generate adversarial
voice examples to speaker identification. It contains two novel components.
First, it uses a novel saliency map decoder to learn the importance of speech
samples to the decision of a targeted speaker identification system, so as to
make the attacker focus on generating artificial noise to the important
samples. It also proposes an angular loss function to push the speaker
embedding far away from the source speaker. Our experimental results
demonstrate that the proposed SSED yields the state-of-the-art performance,
i.e. over 97% targeted attack success rate and a signal-to-noise level of over
39 dB on both the open-set and close-set speaker identification tasks, with a
low computational cost.
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 08:54:02 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Yao",
"Jiadi",
""
],
[
"Chen",
"Xing",
""
],
[
"Zhang",
"Xiao-Lei",
""
],
[
"Zhang",
"Wei-Qiang",
""
],
[
"Yang",
"Kunde",
""
]
] |
new_dataset
| 0.998445 |
2212.02827
|
Kunihiro Miyazaki
|
Kunihiro Miyazaki, Taichi Murayama, Akira Matsui, Masaru Nishikawa,
Takayuki Uchiba, Haewoon Kwak, Jisun An
|
Political Honeymoon Effect on Social Media: Characterizing Social Media
Reaction to the Changes of Prime Minister in Japan
|
Accepted at ACM Web Science Conference 2023 (WebSci'23). 12 pages, 6
figures
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
New leaders in democratic countries typically enjoy high approval ratings
immediately after taking office. This phenomenon is called the honeymoon effect
and is regarded as a significant political phenomenon; however, its mechanism
remains underexplored. Therefore, this study examines how social media users
respond to changes in political leadership in order to better understand the
honeymoon effect in politics. In particular, we constructed a 15-year Twitter
dataset on eight change timings of Japanese prime ministers consisting of 6.6M
tweets and analyzed them in terms of sentiments, topics, and users. We found
that, while not always, social media tend to show a honeymoon effect at the
change timings of prime ministers. The study also revealed that sentiment about
prime ministers differed by topic, indicating that public expectations vary
from one prime minister to another. Furthermore, the user base was largely
replaced before and after the change in the prime minister, and their sentiment
was also significantly different. The implications of this study would be
beneficial for administrative management.
|
[
{
"version": "v1",
"created": "Tue, 6 Dec 2022 08:53:26 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Feb 2023 20:56:20 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Miyazaki",
"Kunihiro",
""
],
[
"Murayama",
"Taichi",
""
],
[
"Matsui",
"Akira",
""
],
[
"Nishikawa",
"Masaru",
""
],
[
"Uchiba",
"Takayuki",
""
],
[
"Kwak",
"Haewoon",
""
],
[
"An",
"Jisun",
""
]
] |
new_dataset
| 0.998452 |
2212.07555
|
Anindita Ghosh
|
Anindita Ghosh, Rishabh Dabral, Vladislav Golyanik, Christian
Theobalt, Philipp Slusallek
|
IMos: Intent-Driven Full-Body Motion Synthesis for Human-Object
Interactions
|
10 pages, 9 figures
| null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Can we make virtual characters in a scene interact with their surrounding
objects through simple instructions? Is it possible to synthesize such motion
plausibly with a diverse set of objects and instructions? Inspired by these
questions, we present the first framework to synthesize the full-body motion of
virtual human characters performing specified actions with 3D objects placed
within their reach. Our system takes textual instructions specifying the
objects and the associated intentions of the virtual characters as input and
outputs diverse sequences of full-body motions. This contrasts existing works,
where full-body action synthesis methods generally do not consider object
interactions, and human-object interaction methods focus mainly on synthesizing
hand or finger movements for grasping objects. We accomplish our objective by
designing an intent-driven full-body motion generator, which uses a pair of
decoupled conditional variational auto-regressors to learn the motion of the
body parts in an autoregressive manner. We also optimize the 6-DoF pose of the
objects such that they plausibly fit within the hands of the synthesized
characters. We compare our proposed method with the existing methods of motion
synthesis and establish a new and stronger state-of-the-art for the task of
intent-driven motion synthesis.
|
[
{
"version": "v1",
"created": "Wed, 14 Dec 2022 23:59:24 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2022 18:39:09 GMT"
},
{
"version": "v3",
"created": "Sun, 26 Feb 2023 12:38:55 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Ghosh",
"Anindita",
""
],
[
"Dabral",
"Rishabh",
""
],
[
"Golyanik",
"Vladislav",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Slusallek",
"Philipp",
""
]
] |
new_dataset
| 0.999617 |
2301.03174
|
Liqun Qi
|
Liqun Qi, Xiangke Wang and Chunfeng Cui
|
Augmented Quaternion and Augmented Unit Quaternion Optimization
| null | null | null | null |
cs.RO math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce and explore augmented quaternions and augmented
unit quaternions, and present an augmented unit quaternion optimization model.
An augmented quaternion consists of a quaternion and a translation vector. The
multiplication rule of augmented quaternions is defined. An augmented unit
quaternion consists of a unit quaternion and a translation vector. The
augmented unit quaternions form a Lie group. By means of augmented unit
quaternions, we study the error model and kinematics. Then we formulate two
classical problems in robot research, i.e., the hand-eye calibration problem
and the simultaneous localization and mapping (SLAM) problem as augmented unit
quaternion optimization problems, which are actually real smooth spherical
equality constrained optimization problems. Compared with the corresponding
unit dual quaternion optimization model, the augmented unit quaternion
optimization model has less variables and removes the orthogonality
constraints.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 05:21:17 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2023 10:40:22 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Qi",
"Liqun",
""
],
[
"Wang",
"Xiangke",
""
],
[
"Cui",
"Chunfeng",
""
]
] |
new_dataset
| 0.998898 |
2302.03819
|
Aaron Kuan
|
Tri Nguyen, Mukul Narwani, Mark Larson, Yicong Li, Shuhan Xie,
Hanspeter Pfister, Donglai Wei, Nir Shavit, Lu Mi, Alexandra Pacureanu,
Wei-Chung Lee, Aaron T. Kuan
|
The XPRESS Challenge: Xray Projectomic Reconstruction -- Extracting
Segmentation with Skeletons
|
6 pages, 2 figures
| null | null | null |
cs.CV cs.LG q-bio.NC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The wiring and connectivity of neurons form a structural basis for the
function of the nervous system. Advances in volume electron microscopy (EM) and
image segmentation have enabled mapping of circuit diagrams (connectomics)
within local regions of the mouse brain. However, applying volume EM over the
whole brain is not currently feasible due to technological challenges. As a
result, comprehensive maps of long-range connections between brain regions are
lacking. Recently, we demonstrated that X-ray holographic nanotomography (XNH)
can provide high-resolution images of brain tissue at a much larger scale than
EM. In particular, XNH is well-suited to resolve large, myelinated axon tracts
(white matter) that make up the bulk of long-range connections (projections)
and are critical for inter-region communication. Thus, XNH provides an imaging
solution for brain-wide projectomics. However, because XNH data is typically
collected at lower resolutions and larger fields-of-view than EM, accurate
segmentation of XNH images remains an important challenge that we present here.
In this task, we provide volumetric XNH images of cortical white matter axons
from the mouse brain along with ground truth annotations for axon trajectories.
Manual voxel-wise annotation of ground truth is a time-consuming bottleneck for
training segmentation networks. On the other hand, skeleton-based ground truth
is much faster to annotate, and sufficient to determine connectivity.
Therefore, we encourage participants to develop methods to leverage
skeleton-based training. To this end, we provide two types of ground-truth
annotations: a small volume of voxel-wise annotations and a larger volume with
skeleton-based annotations. Entries will be evaluated on how accurately the
submitted segmentations agree with the ground-truth skeleton annotations.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 00:53:46 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 19:42:59 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Nguyen",
"Tri",
""
],
[
"Narwani",
"Mukul",
""
],
[
"Larson",
"Mark",
""
],
[
"Li",
"Yicong",
""
],
[
"Xie",
"Shuhan",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Wei",
"Donglai",
""
],
[
"Shavit",
"Nir",
""
],
[
"Mi",
"Lu",
""
],
[
"Pacureanu",
"Alexandra",
""
],
[
"Lee",
"Wei-Chung",
""
],
[
"Kuan",
"Aaron T.",
""
]
] |
new_dataset
| 0.950443 |
2302.06908
|
Yichen Peng
|
Yichen Peng, Chunqi Zhao, Haoran Xie, Tsukasa Fukusato, and Kazunori
Miyata
|
DiffFaceSketch: High-Fidelity Face Image Synthesis with Sketch-Guided
Latent Diffusion Model
|
10 pages, 12 figures, and 2 tables, project page:
https://puckikk1202.github.io/difffacesketch2023/
| null | null | null |
cs.CV cs.AI cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Synthesizing face images from monochrome sketches is one of the most
fundamental tasks in the field of image-to-image translation. However, it is
still challenging to (1)~make models learn the high-dimensional face features
such as geometry and color, and (2)~take into account the characteristics of
input sketches. Existing methods often use sketches as indirect inputs (or as
auxiliary inputs) to guide the models, resulting in the loss of sketch features
or the alteration of geometry information. In this paper, we introduce a
Sketch-Guided Latent Diffusion Model (SGLDM), an LDM-based network architecture
trained on the paired sketch-face dataset. We apply a Multi-Auto-Encoder (AE)
to encode the different input sketches from different regions of a face from
pixel space to a feature map in latent space, which enables us to reduce the
dimension of the sketch input while preserving the geometry-related information
of local face details. We build a sketch-face paired dataset based on the
existing method that extracts the edge map from an image. We then introduce a
Stochastic Region Abstraction (SRA), an approach to augment our dataset to
improve the robustness of SGLDM to handle sketch input with arbitrary
abstraction. The evaluation study shows that SGLDM can synthesize high-quality
face images with different expressions, facial accessories, and hairstyles from
various sketches with different abstraction levels.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 08:51:47 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Feb 2023 11:03:38 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Peng",
"Yichen",
""
],
[
"Zhao",
"Chunqi",
""
],
[
"Xie",
"Haoran",
""
],
[
"Fukusato",
"Tsukasa",
""
],
[
"Miyata",
"Kazunori",
""
]
] |
new_dataset
| 0.983529 |
2302.09432
|
Dakuan Lu
|
Dakuan Lu, Hengkui Wu, Jiaqing Liang, Yipei Xu, Qianyu He, Yipeng
Geng, Mengkun Han, Yingsi Xin, Yanghua Xiao
|
BBT-Fin: Comprehensive Construction of Chinese Financial Domain
Pre-trained Language Model, Corpus and Benchmark
|
Changed author order
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To advance Chinese financial natural language processing (NLP), we introduce
BBT-FinT5, a new Chinese financial pre-training language model based on the T5
model. To support this effort, we have built BBT-FinCorpus, a large-scale
financial corpus with approximately 300GB of raw text from four different
sources. In general domain NLP, comprehensive benchmarks like GLUE and
SuperGLUE have driven significant advancements in language model pre-training
by enabling head-to-head comparisons among models. Drawing inspiration from
these benchmarks, we propose BBT-CFLEB, a Chinese Financial Language
understanding and generation Evaluation Benchmark, which includes six datasets
covering both understanding and generation tasks. Our aim is to facilitate
research in the development of NLP within the Chinese financial domain. Our
model, corpus and benchmark are released at
https://github.com/ssymmetry/BBT-FinCUGE-Applications. Our work belongs to the
Big Bang Transformer (BBT), a large-scale pre-trained language model project.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 22:20:37 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Feb 2023 10:50:09 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Lu",
"Dakuan",
""
],
[
"Wu",
"Hengkui",
""
],
[
"Liang",
"Jiaqing",
""
],
[
"Xu",
"Yipei",
""
],
[
"He",
"Qianyu",
""
],
[
"Geng",
"Yipeng",
""
],
[
"Han",
"Mengkun",
""
],
[
"Xin",
"Yingsi",
""
],
[
"Xiao",
"Yanghua",
""
]
] |
new_dataset
| 0.991902 |
2302.11306
|
HongYu Liu
|
Hongyu Liu and Xintong Han and Chengbin Jin and Lihui Qian and Huawei
Wei and Zhe Lin and Faqiang Wang and Haoye Dong and Yibing Song and Jia Xu
and Qifeng Chen
|
Human MotionFormer: Transferring Human Motions with Vision Transformers
|
Accepted by ICLR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human motion transfer aims to transfer motions from a target dynamic person
to a source static one for motion synthesis. An accurate matching between the
source person and the target motion in both large and subtle motion changes is
vital for improving the transferred motion quality. In this paper, we propose
Human MotionFormer, a hierarchical ViT framework that leverages global and
local perceptions to capture large and subtle motion matching, respectively. It
consists of two ViT encoders to extract input features (i.e., a target motion
image and a source human image) and a ViT decoder with several cascaded blocks
for feature matching and motion transfer. In each block, we set the target
motion feature as Query and the source person as Key and Value, calculating the
cross-attention maps to conduct a global feature matching. Further, we
introduce a convolutional layer to improve the local perception after the
global cross-attention computations. This matching process is implemented in
both warping and generation branches to guide the motion transfer. During
training, we propose a mutual learning loss to enable the co-supervision
between warping and generation branches for better motion representations.
Experiments show that our Human MotionFormer sets the new state-of-the-art
performance both qualitatively and quantitatively. Project page:
\url{https://github.com/KumapowerLIU/Human-MotionFormer}
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 11:42:44 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Feb 2023 14:59:45 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Liu",
"Hongyu",
""
],
[
"Han",
"Xintong",
""
],
[
"Jin",
"Chengbin",
""
],
[
"Qian",
"Lihui",
""
],
[
"Wei",
"Huawei",
""
],
[
"Lin",
"Zhe",
""
],
[
"Wang",
"Faqiang",
""
],
[
"Dong",
"Haoye",
""
],
[
"Song",
"Yibing",
""
],
[
"Xu",
"Jia",
""
],
[
"Chen",
"Qifeng",
""
]
] |
new_dataset
| 0.993626 |
2302.12198
|
Jiadi Cui
|
Jiadi Cui and S\"oren Schwertfeger
|
CP+: Camera Poses Augmentation with Large-scale LiDAR Maps
| null | null |
10.1109/RCAR54675.2022.9872176
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale colored point clouds have many advantages in navigation or scene
display. Relying on cameras and LiDARs, which are now widely used in
reconstruction tasks, it is possible to obtain such colored point clouds.
However, the information from these two kinds of sensors is not well fused in
many existing frameworks, resulting in inaccurate camera poses and thus
damaged point colorization results. We
propose a novel framework called Camera Pose Augmentation (CP+) to improve the
camera poses and align them directly with the LiDAR-based point cloud. Initial
coarse camera poses are given by LiDAR-Inertial or LiDAR-Inertial-Visual
Odometry with approximate extrinsic parameters and time synchronization. The
key steps to improve the alignment of the images consist of selecting a point
cloud corresponding to a region of interest in each camera view, extracting
reliable edge features from this point cloud, and deriving 2D-3D line
correspondences which are used towards iterative minimization of the
re-projection error.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 17:49:53 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Feb 2023 07:18:15 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Cui",
"Jiadi",
""
],
[
"Schwertfeger",
"Sören",
""
]
] |
new_dataset
| 0.983859 |
2302.12966
|
Jiawei Hou
|
Jiawei Hou, Qi Chen, Yurong Cheng, Guang Chen, Xiangyang Xue, Taiping
Zeng, Jian Pu
|
SUPS: A Simulated Underground Parking Scenario Dataset for Autonomous
Driving
|
Accepted for publication at the 25th IEEE Intelligent Transportation
Systems Conference (ITSC 2022)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic underground parking has attracted considerable attention as the
scope of autonomous driving expands. The auto-vehicle is supposed to obtain the
environmental information, track its location, and build a reliable map of the
scenario. Mainstream solutions consist of well-trained neural networks and
simultaneous localization and mapping (SLAM) methods, which need numerous
carefully labeled images and multiple sensor estimations. However, there is a
lack of underground parking scenario datasets with multiple sensors and
well-labeled images that support both SLAM tasks and perception tasks, such as
semantic segmentation and parking slot detection. In this paper, we present
SUPS, a simulated dataset for underground automatic parking, which supports
multiple tasks with multiple sensors and multiple semantic labels aligned with
successive images according to timestamps. We intend to cover the defect of
existing datasets with the variability of environments and the diversity and
accessibility of sensors in the virtual scene. Specifically, the dataset
records frames from four surrounding fisheye cameras, two forward pinhole
cameras, a depth camera, and data from LiDAR, an inertial measurement unit
(IMU), and GNSS. Pixel-level semantic labels are provided for objects,
especially ground
signs such as arrows, parking lines, lanes, and speed bumps. Perception, 3D
reconstruction, depth estimation, SLAM, and other related tasks are
supported by our dataset. We also evaluate the state-of-the-art SLAM algorithms
and perception models on our dataset. Finally, we open source our virtual 3D
scene built based on Unity Engine and release our dataset at
https://github.com/jarvishou829/SUPS.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 02:59:12 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Hou",
"Jiawei",
""
],
[
"Chen",
"Qi",
""
],
[
"Cheng",
"Yurong",
""
],
[
"Chen",
"Guang",
""
],
[
"Xue",
"Xiangyang",
""
],
[
"Zeng",
"Taiping",
""
],
[
"Pu",
"Jian",
""
]
] |
new_dataset
| 0.999775 |
2302.12976
|
Shuangshuang Cui
|
Shuangshuang Cui, Hongzhi Wang, Xianglong Liu, Zeyu Tian, Xiaoou Ding
|
TS-Cabinet: Hierarchical Storage for Cloud-Edge-End Time-series Database
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hierarchical data storage is crucial for cloud-edge-end time-series database.
Efficient hierarchical storage will directly reduce the storage space of local
databases at each side and improve the access hit rate of data. However, no
effective hierarchical data management strategy for cloud-edge-end time-series
database has been proposed. To solve this problem, this paper proposes
TS-Cabinet, a hierarchical storage scheduler for cloud-edge-end time-series
database based on workload forecasting. To the best of our knowledge, it is the
first work for hierarchical storage of cloud-edge-end time-series database. By
building a temperature model, we calculate the current temperature for the
time-series data, and use the workload forecasting model to predict the data's
future temperature. Finally, we perform hierarchical storage according to the
data migration policy. We validate it on a public dataset, and the experimental
results show that our method can achieve about 94% hit rate for data access on
the cloud side and edge side, which is 12% better than the existing methods.
TS-Cabinet can help cloud-edge-end time-series database avoid the storage
overhead caused by storing the full amount of data at all three sides, and
greatly reduce the data transfer overhead between each side during
collaborative query processing.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 04:04:49 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Cui",
"Shuangshuang",
""
],
[
"Wang",
"Hongzhi",
""
],
[
"Liu",
"Xianglong",
""
],
[
"Tian",
"Zeyu",
""
],
[
"Ding",
"Xiaoou",
""
]
] |
new_dataset
| 0.998022 |
2302.12994
|
Augustine Ukpebor Ph.D
|
Augustine Ukpebor, James Addy, Kamal Ali and Ali Abu-El Humos
|
Secure End-to-End Communications with Lightweight Cryptographic
Algorithm
|
14 pages,7 figures, 2 tables, Conference - The 2021 World Congress in
Computer Science, Computer Engineering, and Applied Computing (CSCE'21)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of lightweight cryptography has been gaining popularity as
traditional cryptographic techniques are challenging to implement in
resource-limited environments. This research paper presents an approach to
utilizing the ESP32 microcontroller as a hardware platform to implement a
lightweight cryptographic algorithm. Our approach employs KATAN32, the smallest
block cipher of the KATAN family, with an 80-bit key and 32-bit blocks. The
algorithm requires less computational power as it employs an 80 unsigned 64-bit
integer key for encrypting and decrypting data. During encryption, a data array
is passed into the encryption function with a key, which is then used to fill a
buffer with an encrypted array. Similarly, the decryption function utilizes a
buffer to fill an array of original data in 32 unsigned 64-bit integers. This
study also investigates the optimal implementation of cryptography block
ciphers, benchmarking performance against various metrics, including memory
requirements (RAM), throughput, power consumption, and security. Our
implementation demonstrates that data can be securely transmitted end-to-end
with good throughput and low power consumption.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 05:27:30 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Ukpebor",
"Augustine",
""
],
[
"Addy",
"James",
""
],
[
"Ali",
"Kamal",
""
],
[
"Humos",
"Ali Abu-El",
""
]
] |
new_dataset
| 0.98663 |
2302.13049
|
Jawad Muhammad
|
Jawad Muhammad, Yunlong Wang, Junxing Hu, Kunbo Zhang, and Zhenan Sun
|
CASIA-Iris-Africa: A Large-scale African Iris Image Database
|
This paper has been accepted for publication in Machine Intelligence
Research Journal (MIR)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Iris biometrics is a phenotypic biometric trait that has proven to be
agnostic to human natural physiological changes. Research on iris biometrics
has progressed tremendously, partly due to publicly available iris databases.
Various databases have been available to researchers that address pressing iris
biometric challenges such as constraint, mobile, multispectral, synthetics,
long-distance, contact lenses, liveness detection, etc. However, these
databases mostly contain subjects of Caucasian and Asian descent with very few
Africans. Despite many investigative studies on racial bias in face biometrics,
very few studies on iris biometrics have been published, mainly due to the lack
of racially diverse large-scale databases containing sufficient iris samples of
Africans in the public domain. Furthermore, most of these databases contain a
relatively small number of subjects and labelled images. This paper proposes a
large-scale African database named CASIA-Iris-Africa that can be used as a
complementary database for the iris recognition community to mediate the effect
of racial biases on Africans. The database contains 28,717 images of 1023
African subjects (2046 iris classes) with age, gender, and ethnicity attributes
that can be useful in demographically sensitive studies of Africans. Sets of
specific application protocols are incorporated with the database to ensure the
database's variability and scalability. Performance results of some open-source
SOTA algorithms on the database are presented, which will serve as baseline
performances. The relatively poor performances of the baseline algorithms on
the proposed database despite better performance on other databases prove that
racial biases exist in these iris recognition algorithms. The database will be
made available on our website: http://www.idealtest.org.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 10:26:34 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Muhammad",
"Jawad",
""
],
[
"Wang",
"Yunlong",
""
],
[
"Hu",
"Junxing",
""
],
[
"Zhang",
"Kunbo",
""
],
[
"Sun",
"Zhenan",
""
]
] |
new_dataset
| 0.998999 |
2302.13073
|
Jun Su
|
Jun Su, Guangyue Han, Shlomo Shamai (Shitz)
|
Feedback Capacity of OU-Colored AWGN Channels
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We derive an explicit feedback capacity formula for the OU-Colored AWGN
channel. Among many others, this result shows that at least in some cases, the
continuous-time Schalkwijk-Kailath coding scheme achieves the feedback capacity
for such a channel, and feedback may not increase the capacity of a
continuous-time ACGN channel even if the noise process is colored.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 12:30:25 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Su",
"Jun",
"",
"Shitz"
],
[
"Han",
"Guangyue",
"",
"Shitz"
],
[
"Shamai",
"Shlomo",
"",
"Shitz"
]
] |
new_dataset
| 0.995913 |
2302.13075
|
Martin Sundermeyer
|
Martin Sundermeyer, Tomas Hodan, Yann Labbe, Gu Wang, Eric Brachmann,
Bertram Drost, Carsten Rother, Jiri Matas
|
BOP Challenge 2022 on Detection, Segmentation and Pose Estimation of
Specific Rigid Objects
|
arXiv admin note: text overlap with arXiv:2009.07378
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present the evaluation methodology, datasets and results of the BOP
Challenge 2022, the fourth in a series of public competitions organized with
the goal to capture the status quo in the field of 6D object pose estimation
from an RGB/RGB-D image. In 2022, we witnessed another significant improvement
in the pose estimation accuracy -- the state of the art, which was 56.9 AR$_C$
in 2019 (Vidal et al.) and 69.8 AR$_C$ in 2020 (CosyPose), moved to new heights
of 83.7 AR$_C$ (GDRNPP). Out of 49 pose estimation methods evaluated since
2019, the top 18 are from 2022. Methods based on point pair features, which
were introduced in 2010 and achieved competitive results even in 2020, are now
clearly outperformed by deep learning methods. The synthetic-to-real domain gap
was again significantly reduced, with 82.7 AR$_C$ achieved by GDRNPP trained
only on synthetic images from BlenderProc. The fastest variant of GDRNPP
reached 80.5 AR$_C$ with an average time per image of 0.23s. Since most of the
recent methods for 6D object pose estimation begin by detecting/segmenting
objects, we also started evaluating 2D object detection and segmentation
performance based on the COCO metrics. Compared to the Mask R-CNN results from
CosyPose in 2020, detection improved from 60.3 to 77.3 AP$_C$ and segmentation
from 40.5 to 58.7 AP$_C$. The online evaluation system stays open and is
available at: \href{http://bop.felk.cvut.cz/}{bop.felk.cvut.cz}.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 13:12:50 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Sundermeyer",
"Martin",
""
],
[
"Hodan",
"Tomas",
""
],
[
"Labbe",
"Yann",
""
],
[
"Wang",
"Gu",
""
],
[
"Brachmann",
"Eric",
""
],
[
"Drost",
"Bertram",
""
],
[
"Rother",
"Carsten",
""
],
[
"Matas",
"Jiri",
""
]
] |
new_dataset
| 0.999343 |
2302.13099
|
Emilia Wi\'snios
|
Piotr Wilczy\'nski, Artur \.Z\'o{\l}kowski, Mateusz Krzyzi\'nski,
Emilia Wi\'snios, Bartosz Pieli\'nski, Stanis{\l}aw Gizi\'nski, Julian
Sienkiewicz, Przemys{\l}aw Biecek
|
HADES: Homologous Automated Document Exploration and Summarization
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces HADES, a novel tool for automatic comparative analysis
of documents with similar structures. HADES is designed to streamline the work of
professionals dealing with large volumes of documents, such as policy
documents, legal acts, and scientific papers. The tool employs a multi-step
pipeline that begins with processing PDF documents using topic modeling,
summarization, and analysis of the most important words for each topic. The
process concludes with an interactive web app with visualizations that
facilitate the comparison of the documents. HADES has the potential to
significantly improve the productivity of professionals dealing with high
volumes of documents, reducing the time and effort required to complete tasks
related to comparative document analysis. Our package is publicly available
on GitHub.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 15:16:10 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Wilczyński",
"Piotr",
""
],
[
"Żółkowski",
"Artur",
""
],
[
"Krzyziński",
"Mateusz",
""
],
[
"Wiśnios",
"Emilia",
""
],
[
"Pieliński",
"Bartosz",
""
],
[
"Giziński",
"Stanisław",
""
],
[
"Sienkiewicz",
"Julian",
""
],
[
"Biecek",
"Przemysław",
""
]
] |
new_dataset
| 0.992566 |
2302.13141
|
Nick Willemstein
|
Nick Willemstein and Herman van der Kooij and Ali Sadeghi
|
3D Printed Proprioceptive Soft Fluidic Actuators with graded porosity
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Integration of both actuation and proprioception into the robot body enables
sensorized soft actuators that can operate in a closed loop. An interesting
class of actuators for this purpose are graded porous actuators, which can be
mechanically programmed by their porosity (gradient) and sensorized by using a
smart material. Three types of such actuators were 3D printed, namely: a
bending finger, contractor, and a three DoF bending segment. Piezoresistive
sensing was embedded by printing with a conductive thermoplastic elastomer. A
challenge with piezoresistive sensors is to relate the change in resistance to
deformation due to their inherent hysteresis and nonlinearity. In this work, an
(estimated) Wiener-Hammerstein (WH) model was used to predict the deformation.
The bending and contracting actuators showed that the linear and WH models
could reach 70+% and 80+% fits, respectively, thereby indicating that the
deformation of the printed actuators could be estimated quite well. Similarly,
the 3DoF bending segment showed similar values, with the WH model reducing both
the fitting and RMS errors by 20+% on average. These results indicate that
sensorized actuators based on 3D-printed soft structures with a porosity
gradient can be mechanically programmed whereas strain estimation can be done
using identified Wiener-Hammerstein models.
|
[
{
"version": "v1",
"created": "Sat, 25 Feb 2023 18:50:17 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Willemstein",
"Nick",
""
],
[
"van der Kooij",
"Herman",
""
],
[
"Sadeghi",
"Ali",
""
]
] |
new_dataset
| 0.998694 |
2302.13261
|
Faiza Tazi
|
Alisa Zezulak, Faiza Tazi, Sanchari Das
|
SoK: Evaluating Privacy and Security Concerns of Using Web Services for
the Disabled Population
| null | null | null | null |
cs.CR cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The online privacy and security of the disabled community is a complex field
that has implications for every user who navigates web services. While many
disciplines have separately researched the disabled population and their online
privacy and security concerns, the overlap between the two is very high but
under-researched. Moreover, a complex relationship exists between the disabled
population and web services where the interaction depends on several web
service developmental factors, including usability and accessibility. To this
end, we explored the intersection of privacy and security of web services as
perceived by the disabled community through previous studies by conducting a
detailed systematic literature review and analysis of 63 articles. Our findings
encompassed several topics, including how the disabled population navigates
around authentication interfaces, online privacy concerns, universal design
practices, and how security methods such as CAPTCHAs can be improved to become
more accessible and usable for people of all needs and abilities. We further
discuss the gap in the current research, including solutions such as the
universal implementation of inclusive privacy and security tools and protocols.
|
[
{
"version": "v1",
"created": "Sun, 26 Feb 2023 08:20:25 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Zezulak",
"Alisa",
""
],
[
"Tazi",
"Faiza",
""
],
[
"Das",
"Sanchari",
""
]
] |
new_dataset
| 0.997201 |
2302.13307
|
Sanjeev Sharma
|
Sanjeev Sharma
|
QCQP-Tunneling: Ellipsoidal Constrained Agent Navigation
|
In proceedings of the 2nd IASTED International Conference on
Robotics, 2011
|
2nd IASTED International Conference on Robotics, 2011
|
10.2316/P.2011.752-010
| null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel convex-QCQP-based path planning algorithm named
ellipsoidal constrained agent navigation (ECAN) for the challenging problem of
online path planning in completely unknown and unseen continuous environments.
ECAN plans a path for the agent by making a tunnel of overlapping ellipsoids, in
an online fashion, through the environment. Convex constraints in the
ellipsoid-formation step circumvent collision with the obstacles. The problem
of online-tunneling is solved as a convex-QCQP. This paper assumes no
constraints on shape of the agent and the obstacles. However, to make the
approach clearer, this paper first introduces the framework for a point-mass
agent with point-size obstacles. After explaining the underlying principle in
drawing an ellipsoid tunnel, the framework is extended to the agent and
obstacles having finite area (2d space) and finite-volume (3d-space).
|
[
{
"version": "v1",
"created": "Sun, 26 Feb 2023 12:41:46 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Sharma",
"Sanjeev",
""
]
] |
new_dataset
| 0.989927 |
2302.13311
|
Chunpu Xu
|
Chunpu Xu, Hanzhuo Tan, Jing Li, Piji Li
|
Understanding Social Media Cross-Modality Discourse in Linguistic Space
|
EMNLP 2022 Findings
| null | null | null |
cs.MM cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Multimedia communication with texts and images is popular on social
media. However, limited studies concern how images are structured with texts to
form coherent meanings in human cognition. To fill in the gap, we present a
novel concept of cross-modality discourse, reflecting how human readers couple
image and text understandings. Text descriptions are first derived from images
(named as subtitles) in the multimedia contexts. Five labels -- entity-level
insertion, projection and concretization and scene-level restatement and
extension -- are further employed to shape the structure of subtitles and texts
and present their joint meanings. As a pilot study, we also build the very
first dataset containing 16K multimedia tweets with manually annotated
discourse labels. The experimental results show that the multimedia encoder
based on multi-head attention with captions is able to obtain
state-of-the-art results.
|
[
{
"version": "v1",
"created": "Sun, 26 Feb 2023 13:04:04 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Xu",
"Chunpu",
""
],
[
"Tan",
"Hanzhuo",
""
],
[
"Li",
"Jing",
""
],
[
"Li",
"Piji",
""
]
] |
new_dataset
| 0.998956 |
2302.13317
|
Hung Cao
|
Atah Nuh Mih, Hung Cao, Joshua Pickard, Monica Wachowicz, Rickey Dubay
|
TransferD2: Automated Defect Detection Approach in Smart Manufacturing
using Transfer Learning Techniques
|
Keywords: Transfer Learning, Smart Manufacturing, Defect Detection,
Deflectometry Data, Data Enhancement, Product Quality Assurance
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Quality assurance is crucial in the smart manufacturing industry as it
identifies the presence of defects in finished products before they are shipped
out. Modern machine learning techniques can be leveraged to provide rapid and
accurate detection of these imperfections. We, therefore, propose a transfer
learning approach, namely TransferD2, to correctly identify defects on a
dataset of source objects and extend its application to new unseen target
objects. We present a data enhancement technique to generate a large dataset
from the small source dataset for building a classifier. We then integrate
three different pre-trained models (Xception, ResNet101V2, and
InceptionResNetV2) into the classifier network and compare their performance on
source and target data. We use the classifier to detect the presence of
imperfections on the unseen target data using pseudo-bounding boxes. Our
results show that ResNet101V2 performs best on the source data with an accuracy
of 95.72%. Xception performs best on the target data with an accuracy of 91.00%
and also provides a more accurate prediction of the defects on the target
images. Throughout the experiment, the results also indicate that the choice of
a pre-trained model is not dependent on the depth of the network. Our proposed
approach can be applied in defect detection applications where insufficient
data is available for training a model and can be extended to identify
imperfections in new unseen data.
|
[
{
"version": "v1",
"created": "Sun, 26 Feb 2023 13:24:46 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Mih",
"Atah Nuh",
""
],
[
"Cao",
"Hung",
""
],
[
"Pickard",
"Joshua",
""
],
[
"Wachowicz",
"Monica",
""
],
[
"Dubay",
"Rickey",
""
]
] |
new_dataset
| 0.961386 |
2302.13321
|
Ninell Oldenburg
|
Tibor Krols, Yana Nikolova, Ninell Oldenburg
|
Multi-Modality in Music: Predicting Emotion in Music from High-Level
Audio Features and Lyrics
|
12 pages, incl. 2 pages appendix
| null | null | null |
cs.SD cs.CL cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper aims to test whether a multi-modal approach for music emotion
recognition (MER) performs better than a uni-modal one on high-level song
features and lyrics. We use 11 song features retrieved from the Spotify API,
combined with lyrics features including sentiment, TF-IDF, and ANEW to predict
valence and arousal (Russell, 1980) scores on the Deezer Mood Detection Dataset
(DMDD) (Delbouys et al., 2018) with 4 different regression models. We find that
out of the 11 high-level song features, mainly 5 contribute to the performance,
and multi-modal features do better than audio alone when predicting valence. We
made our code publicly available.
|
[
{
"version": "v1",
"created": "Sun, 26 Feb 2023 13:38:42 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Krols",
"Tibor",
""
],
[
"Nikolova",
"Yana",
""
],
[
"Oldenburg",
"Ninell",
""
]
] |
new_dataset
| 0.99964 |
2302.13378
|
Milad Shafiee Ashtiani
|
Milad Shafiee, Guillaume Bellegarda, Auke Ijspeert
|
Puppeteer and Marionette: Learning Anticipatory Quadrupedal Locomotion
Based on Interactions of a Central Pattern Generator and Supraspinal Drive
|
Accepted for IEEE International Conference on Robotics and Automation
(ICRA) 2023
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Quadruped animal locomotion emerges from the interactions between the spinal
central pattern generator (CPG), sensory feedback, and supraspinal drive
signals from the brain. Computational models of CPGs have been widely used for
investigating the spinal cord contribution to animal locomotion control in
computational neuroscience and in bio-inspired robotics. However, the
contribution of supraspinal drive to anticipatory behavior, i.e. motor behavior
that involves planning ahead of time (e.g. of footstep placements), is not yet
properly understood. In particular, it is not clear whether the brain modulates
CPG activity and/or directly modulates muscle activity (hence bypassing the
CPG) for accurate foot placements. In this paper, we investigate the
interaction of supraspinal drive and a CPG in an anticipatory locomotion
scenario that involves stepping over gaps. By employing deep reinforcement
learning (DRL), we train a neural network policy that replicates the
supraspinal drive behavior. This policy can either modulate the CPG dynamics,
or directly change actuation signals to bypass the CPG dynamics. Our results
indicate that the direct supraspinal contribution to the actuation signal is a
key component for a high gap crossing success rate. However, the CPG dynamics
in the spinal cord are beneficial for gait smoothness and energy efficiency.
Moreover, our investigation shows that sensing the front feet distances to the
gap is the most important and sufficient sensory information for learning gap
crossing. Our results support the biological hypothesis that cats and horses
mainly control the front legs for obstacle avoidance, and that hind limbs
follow an internal memory based on the front limbs' information. Our method
enables the quadruped robot to cross gaps of up to 20 cm (50% of body-length)
without any explicit dynamics modeling or Model Predictive Control (MPC).
|
[
{
"version": "v1",
"created": "Sun, 26 Feb 2023 18:32:44 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Shafiee",
"Milad",
""
],
[
"Bellegarda",
"Guillaume",
""
],
[
"Ijspeert",
"Auke",
""
]
] |
new_dataset
| 0.979795 |
2302.13392
|
Maryam Jameela
|
Maryam Jameela and Gunho Sohn
|
NSANet: Noise Seeking Attention Network
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR (Light Detection and Ranging) technology has remained popular in
capturing natural and built environments for numerous applications. The recent
technological advancements in electro-optical engineering have aided in
obtaining laser returns at a higher pulse repetition frequency (PRF), which
considerably increased the density of the 3D point cloud. Conventional
techniques with lower PRF had a single pulse-in-air (SPIA) zone, large enough
to avoid a mismatch among pulse pairs at the receiver. New multiple
pulses-in-air (MPIA) technology guarantees various windows of operational
ranges for a single flight line and no blind zones. The disadvantage of the
technology is the projection of atmospheric returns closer to the same
pulse-in-air zone of adjacent terrain points likely to intersect with objects
of interest. These noise properties compromise the perceived quality of the
scene and encourage the development of new noise-filtering neural networks, as
existing filters are significantly ineffective. We propose a novel
dual-attention noise-filtering neural network called Noise Seeking Attention
Network (NSANet) that uses physical priors and local spatial attention to
filter noise. Our research is motivated by two psychology theories of feature
integration and attention engagement to prove the role of attention in computer
vision at the encoding and decoding phases. The presented results of NSANet show
an inclination towards the attention engagement theory and a performance boost
compared to the state-of-the-art noise-filtering deep convolutional neural
networks.
|
[
{
"version": "v1",
"created": "Sun, 26 Feb 2023 19:22:36 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Jameela",
"Maryam",
""
],
[
"Sohn",
"Gunho",
""
]
] |
new_dataset
| 0.9985 |
2302.13403
|
Cagri Toraman
|
Cagri Toraman, Izzet Emre Kucukkaya, Oguzhan Ozcelik, Umitcan Sahin
|
Tweets Under the Rubble: Detection of Messages Calling for Help in
Earthquake Disaster
| null | null | null | null |
cs.SI cs.CL cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The importance of social media is again exposed in the recent tragedy of the
2023 Turkey and Syria earthquake. Many victims who were trapped under the
rubble called for help by posting messages on Twitter. We present an
interactive tool to provide situational awareness for missing and trapped
people, and disaster relief for rescue and donation efforts. The system (i)
collects tweets, (ii) classifies the ones calling for help, (iii) extracts
important entity tags, and (iv) visualizes them in an interactive map screen.
Our initial experiments show that the performance in terms of the F1 score is
up to 98.30 for tweet classification, and 84.32 for entity extraction. The
demonstration, dataset, and other related files can be accessed at
https://github.com/avaapm/deprem
|
[
{
"version": "v1",
"created": "Sun, 26 Feb 2023 20:55:19 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Toraman",
"Cagri",
""
],
[
"Kucukkaya",
"Izzet Emre",
""
],
[
"Ozcelik",
"Oguzhan",
""
],
[
"Sahin",
"Umitcan",
""
]
] |
new_dataset
| 0.999144 |
2302.13461
|
Chengju Li
|
Hai Liu, Chengju Li, Haifeng Qian
|
Parameters of several families of binary duadic codes and their related
codes
|
arXiv admin note: substantial text overlap with arXiv:2301.06446
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Binary duadic codes are an interesting subclass of cyclic codes since they
have large dimensions and their minimum distances may have a square-root bound.
In this paper, we present several families of binary duadic codes of length
$2^m-1$ and develop some lower bounds on their minimum distances by using the
BCH bound on cyclic codes, which partially solves one case of the open problem
proposed in \cite{LLD}. It is shown that the lower bounds on their minimum
distances are close to the square root bound. Moreover, the parameters of the
dual and extended codes of these binary duadic codes are investigated.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 01:26:36 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Liu",
"Hai",
""
],
[
"Li",
"Chengju",
""
],
[
"Qian",
"Haifeng",
""
]
] |
new_dataset
| 0.997799 |
2302.13471
|
David Braun
|
Chase W. Mathews and David J. Braun
|
Design of a Variable Stiffness Spring with Human-Selectable Stiffness
|
Accepted for presentation at the 2023 IEEE International Conference
on Robotics and Automation
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Springs are commonly used in wearable robotic devices to provide assistive
joint torque without the need for motors and batteries. However, different
tasks (such as walking or running) and different users (such as athletes with
strong legs or the elderly with weak legs) necessitate different assistive
joint torques, and therefore, springs with different stiffness. Variable
stiffness springs are a special class of springs which can exert more or less
torque upon the same deflection, provided that the user is able to change the
stiffness of the spring. In this paper, we present a novel variable stiffness
spring design in which the user can select a preferred spring stiffness similar
to switching gears on a bicycle. Using a leg-swing experiment, we demonstrate
that the user can increment and decrement spring stiffness in a large range to
effectively assist the hip joint during leg oscillations. Variable stiffness
springs with human-selectable stiffness could be key components of wearable
devices which augment locomotion tasks, such as walking, running, and swimming.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 01:55:31 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Mathews",
"Chase W.",
""
],
[
"Braun",
"David J.",
""
]
] |
new_dataset
| 0.99634 |
2302.13487
|
Jiawei Lian
|
Jiawei Lian, Xiaofei Wang, Yuru Su, Mingyang Ma, Shaohui Mei
|
Contextual adversarial attack against aerial detection in the physical
world
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Neural Networks (DNNs) have been extensively utilized in aerial
detection. However, DNNs' sensitivity and vulnerability to maliciously
elaborated adversarial examples have progressively garnered attention.
Recently, physical attacks have gradually become a hot issue because they are
more practical in the real world, which poses great threats to some
security-critical applications. In this paper, we make the first attempt to
perform physical attacks in contextual form against aerial detection in the
physical world. We propose an innovative contextual attack method against
aerial detection in real scenarios, which achieves powerful attack performance
and transfers well between various aerial object detectors without smearing or
blocking the objects of interest. Based on the finding, from observing the
detectors' attention maps, that the targets' contextual information plays an
important role in aerial detection, we propose to make full use of the
contextual area of the targets of interest to elaborate contextual perturbations
for uncovered attacks in real scenarios. Extensive proportionally scaled
experiments are conducted to evaluate the effectiveness of the proposed
contextual attack method, which demonstrates the proposed method's superiority
in both attack efficacy and physical practicality.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 02:57:58 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Lian",
"Jiawei",
""
],
[
"Wang",
"Xiaofei",
""
],
[
"Su",
"Yuru",
""
],
[
"Ma",
"Mingyang",
""
],
[
"Mei",
"Shaohui",
""
]
] |
new_dataset
| 0.998934 |
2302.13495
|
Qiang Zhou
|
Qiang Zhou, Yuang Liu, Chaohui Yu, Jingliang Li, Zhibin Wang, Fan Wang
|
LMSeg: Language-guided Multi-dataset Segmentation
|
12 pages, 5 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
It is a meaningful and attractive topic to build a general and inclusive
segmentation model that can recognize more categories in various scenarios. A
straightforward way is to combine the existing fragmented segmentation datasets
and train a multi-dataset network. However, there are two major issues with
multi-dataset segmentation: (1) the inconsistent taxonomy demands manual
reconciliation to construct a unified taxonomy; (2) the inflexible one-hot
common taxonomy causes time-consuming model retraining and defective
supervision of unlabeled categories. In this paper, we investigate the
multi-dataset segmentation and propose a scalable Language-guided Multi-dataset
Segmentation framework, dubbed LMSeg, which supports both semantic and panoptic
segmentation. Specifically, we introduce a pre-trained text encoder to map the
category names to a text embedding space as a unified taxonomy, instead of
using inflexible one-hot label. The model dynamically aligns the segment
queries with the category embeddings. Instead of relabeling each dataset with
the unified taxonomy, a category-guided decoding module is designed to
dynamically guide predictions to each dataset's taxonomy. Furthermore, we adopt
a dataset-aware augmentation strategy that assigns each dataset a specific
image augmentation pipeline, which can suit the properties of images from
different datasets. Extensive experiments demonstrate that our method achieves
significant improvements on four semantic and three panoptic segmentation
datasets, and the ablation study evaluates the effectiveness of each component.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 03:43:03 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Zhou",
"Qiang",
""
],
[
"Liu",
"Yuang",
""
],
[
"Yu",
"Chaohui",
""
],
[
"Li",
"Jingliang",
""
],
[
"Wang",
"Zhibin",
""
],
[
"Wang",
"Fan",
""
]
] |
new_dataset
| 0.998137 |
2302.13540
|
Ruihang Miao
|
Ruihang Miao, Weizhou Liu, Mingrui Chen, Zheng Gong, Weixin Xu, Chen
Hu, Shuchang Zhou
|
OccDepth: A Depth-Aware Method for 3D Semantic Scene Completion
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D Semantic Scene Completion (SSC) can provide dense geometric and semantic
scene representations, which can be applied in the field of autonomous driving
and robotic systems. It is challenging to estimate the complete geometry and
semantics of a scene solely from visual images, and accurate depth information
is crucial for restoring 3D geometry. In this paper, we propose the first
stereo SSC method named OccDepth, which fully exploits implicit depth
information from stereo images (or RGBD images) to help the recovery of 3D
geometric structures. The Stereo Soft Feature Assignment (Stereo-SFA) module is
proposed to better fuse 3D depth-aware features by implicitly learning the
correlation between stereo images. In particular, when the input is an RGBD
image, a virtual stereo image pair can be generated from the original RGB image
and depth map. Besides, the Occupancy Aware Depth (OAD) module is used to obtain
geometry-aware 3D features by knowledge distillation using pre-trained depth
models. In addition, a reformed TartanAir benchmark, named SemanticTartanAir,
is provided in this paper for further testing our OccDepth method on SSC task.
Compared with the state-of-the-art RGB-inferred SSC method, extensive
experiments on SemanticKITTI show that our OccDepth method achieves superior
performance with improving +4.82% mIoU, of which +2.49% mIoU comes from stereo
images and +2.33% mIoU comes from our proposed depth-aware method. Our code and
trained models are available at https://github.com/megvii-research/OccDepth.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 06:35:03 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Miao",
"Ruihang",
""
],
[
"Liu",
"Weizhou",
""
],
[
"Chen",
"Mingrui",
""
],
[
"Gong",
"Zheng",
""
],
[
"Xu",
"Weixin",
""
],
[
"Hu",
"Chen",
""
],
[
"Zhou",
"Shuchang",
""
]
] |
new_dataset
| 0.968784 |
2302.13564
|
Zhaoji Huang
|
Junli Gao, Zhaoji Huang, Zhaonian Tang, Haitao Song, Wenyu Liang
|
Visuo-Tactile-Based Slip Detection Using A Multi-Scale Temporal
Convolution Network
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans can accurately determine whether the object in hand has slipped or not
by visual and tactile perception. However, it is still a challenge for robots
to detect in-hand object slip through visuo-tactile fusion. To address this
issue, a novel visuo-tactile fusion deep neural network is proposed to detect
slip, which is a time-dependent continuous action. By using the multi-scale
temporal convolution network (MS-TCN) to extract the temporal features of
visual and tactile data, the slip can be detected effectively. In this paper, a
7-degree-of-freedom (7-DoF) robot manipulator equipped with a camera and a
tactile sensor is used for data collection on 50 daily objects with different
shapes, materials, sizes, and weights. Therefore, a dataset is built, where the
grasping data of 40 objects and 10 objects are used for network training and
testing, respectively. The detection accuracy is 96.96% based on the proposed
model. Also, the proposed model is compared with a visuo-tactile fusion deep
neural network (DNN) based on long short-term memory network (LSTM) on the
collected dataset and a public dataset using the GelSight tactile sensor. The
results demonstrate that the proposed model performs better on both datasets.
The proposed model can help robots grasp daily objects reliably. In addition,
it can be used in grasping force control, grasping policy generation and
dexterous manipulation.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 07:48:10 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Gao",
"Junli",
""
],
[
"Huang",
"Zhaoji",
""
],
[
"Tang",
"Zhaonian",
""
],
[
"Song",
"Haitao",
""
],
[
"Liang",
"Wenyu",
""
]
] |
new_dataset
| 0.993114 |
2302.13577
|
Hai Lan
|
Xihao Wang, Jiaming Lei, Hai Lan, Arafat Al-Jawari, Xian Wei
|
DuEqNet: Dual-Equivariance Network in Outdoor 3D Object Detection for
Autonomous Driving
|
This work is accepted by ICRA2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Outdoor 3D object detection has played an essential role in the environment
perception of autonomous driving. In complicated traffic situations, precise
object recognition provides indispensable information for prediction and
planning in the dynamic system, improving self-driving safety and reliability.
However, with the vehicle's veering, the constant rotation of the surrounding
scene poses a challenge for perception systems. Yet most existing
methods have not focused on alleviating the detection accuracy impairment
brought by the vehicle's rotation, especially in outdoor 3D detection. In this
paper, we propose DuEqNet, which first introduces the concept of equivariance
into 3D object detection network by leveraging a hierarchical embedded
framework. The dual-equivariance of our model can extract the equivariant
features at both local and global levels, respectively. For the local feature,
we utilize the graph-based strategy to guarantee the equivariance of the
feature in point cloud pillars. In terms of the global feature, the group
equivariant convolution layers are adopted to aggregate the local feature to
achieve the global equivariance. In the experiment part, we evaluate our
approach with different baselines in 3D object detection tasks and obtain
State-Of-The-Art performance. According to the results, our model presents
higher accuracy on orientation and better prediction efficiency. Moreover, our
dual-equivariance strategy exhibits the satisfied plug-and-play ability on
various popular object detection frameworks to improve their performance.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 08:30:02 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Wang",
"Xihao",
""
],
[
"Lei",
"Jiaming",
""
],
[
"Lan",
"Hai",
""
],
[
"Al-Jawari",
"Arafat",
""
],
[
"Wei",
"Xian",
""
]
] |
new_dataset
| 0.97069 |
2302.13619
|
Nuo Chen
|
Nuo Chen, Hongguang Li, Yinan Bao, Junqing He, Xinshi Lin, Qi Yang,
Jianfeng Liu, Ruyi Gan, Jiaxing Zhang, Baoyuan Wang, Jia Li
|
Orca: A Few-shot Benchmark for Chinese Conversational Machine Reading
Comprehension
|
14 pages
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The conversational machine reading comprehension (CMRC) task aims to answer
questions in conversations, which has been a hot research topic in recent years
because of its wide applications. However, existing CMRC benchmarks in which
each conversation is assigned a static passage are inconsistent with real
scenarios. Thus, a model's comprehension ability in real scenarios is hard
to evaluate reasonably. To this end, we propose the first Chinese CMRC
benchmark Orca and further provide zero-shot/few-shot settings to evaluate
a model's generalization ability across diverse domains. We collect 831
hot-topic driven conversations with 4,742 turns in total. Each turn of a
conversation is assigned a response-related passage, aiming to evaluate
a model's comprehension ability more reasonably. The topics of conversations are
collected from a social media platform and cover 33 domains, trying to be
consistent with real scenarios. Importantly, answers in Orca are all
well-annotated natural responses rather than the specific spans or short phrases
in previous datasets. Besides, we implement three strong baselines to tackle
the challenge in Orca. The results indicate the great challenge of our CMRC
benchmark. Our datatset and checkpoints are available at
https://github.com/nuochenpku/Orca.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 09:40:41 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Chen",
"Nuo",
""
],
[
"Li",
"Hongguang",
""
],
[
"Bao",
"Yinan",
""
],
[
"He",
"Junqing",
""
],
[
"Lin",
"Xinshi",
""
],
[
"Yang",
"Qi",
""
],
[
"Liu",
"Jianfeng",
""
],
[
"Gan",
"Ruyi",
""
],
[
"Zhang",
"Jiaxing",
""
],
[
"Wang",
"Baoyuan",
""
],
[
"Li",
"Jia",
""
]
] |
new_dataset
| 0.998934 |
2302.13655
|
Benjamin Lee
|
Benjamin Lee, Arvind Satyanarayan, Maxime Cordeil, Arnaud Prouzeau,
Bernhard Jenny, Tim Dwyer
|
Deimos: A Grammar of Dynamic Embodied Immersive Visualisation Morphs and
Transitions
|
CHI 2023
| null |
10.1145/3544548.3580754
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Deimos, a grammar for specifying dynamic embodied immersive
visualisation morphs and transitions. A morph is a collection of animated
transitions that are dynamically applied to immersive visualisations at runtime
and is conceptually modelled as a state machine. It comprises state,
transition, and signal specifications. States in a morph are used to generate
animation keyframes, with transitions connecting two states together. A
transition is controlled by signals, which are composable data streams that can
be used to enable embodied interaction techniques. Morphs allow immersive
representations of data to transform and change shape through user interaction,
facilitating the embodied cognition process. We demonstrate the expressivity of
Deimos in an example gallery and evaluate its usability in an expert user study
of six immersive analytics researchers. Participants found the grammar to be
powerful and expressive, and showed interest in drawing upon Deimos' concepts
and ideas in their own research.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 10:48:31 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Lee",
"Benjamin",
""
],
[
"Satyanarayan",
"Arvind",
""
],
[
"Cordeil",
"Maxime",
""
],
[
"Prouzeau",
"Arnaud",
""
],
[
"Jenny",
"Bernhard",
""
],
[
"Dwyer",
"Tim",
""
]
] |
new_dataset
| 0.99926 |
2302.13714
|
Tuan Thanh Nguyen
|
Tuan Thanh Nguyen, Kui Cai, Han Mao Kiah, Duc Tu Dao, and Kees A.
Schouhamer Immink
|
On the Design of Codes for DNA Computing: Secondary Structure Avoidance
Codes
| null | null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we investigate a challenging problem, which has been considered
to be an important criterion in designing codewords for DNA computing purposes,
namely secondary structure avoidance in single-stranded DNA molecules. In
short, secondary structure refers to the tendency of a single-stranded DNA
sequence to fold back upon itself, thus becoming inactive in the computation
process. While some design criteria that reduce the possibility of secondary
structure formation have been proposed by Milenkovic and Kashyap (2006), the
main contribution of this work is to provide an explicit construction of DNA
codes that completely avoid secondary structure of arbitrary stem length.
Formally, given codeword length n and arbitrary integer m>=2, we provide
efficient methods to construct DNA codes of length n that avoid secondary
structure of any stem length more than or equal to m. Particularly, when m = 3,
our constructions yield a family of DNA codes of rate 1.3031 bits/nt, while the
highest rate found in the prior art was 1.1609 bits/nt. In addition, for
m>=3log n + 4, we provide an efficient encoder that incurs only one redundant
symbol.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 12:22:07 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Nguyen",
"Tuan Thanh",
""
],
[
"Cai",
"Kui",
""
],
[
"Kiah",
"Han Mao",
""
],
[
"Dao",
"Duc Tu",
""
],
[
"Immink",
"Kees A. Schouhamer",
""
]
] |
new_dataset
| 0.994723 |
2302.13784
|
Tingting Qiao
|
Tingting Qiao, Gonzalo Moro Perez
|
Solution for the EPO CodeFest on Green Plastics: Hierarchical
multi-label classification of patents relating to green plastics using deep
learning
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This work aims at hierarchical multi-label patent classification for patents
disclosing technologies related to green plastics. This is an emerging field
for which there is currently no classification scheme, and hence, no labeled
data is available, making this task particularly challenging. We first propose
a classification scheme for this technology and a way to learn a machine
learning model to classify patents into the proposed classification scheme. To
achieve this, we come up with a strategy to automatically assign labels to
patents in order to create a labeled training dataset that can be used to learn
a classification model in a supervised learning setting. Using said training
dataset, we come up with two classification models, a SciBERT Neural Network
(SBNN) model and a SciBERT Hierarchical Neural Network (SBHNN) model. Both
models use a BERT model as a feature extractor and on top of it, a neural
network as a classifier. We carry out extensive experiments and report common
evaluation metrics for this challenging classification problem. The experimental
results verify the validity of our approach and show that our model sets a very
strong benchmark for this problem. We also interpret our models by visualizing
the word importance given by the trained model, which indicates the model is
capable of extracting high-level semantic information from input documents. Finally,
we highlight how our solution fulfills the evaluation criteria for the EPO
CodeFest and we also outline possible directions for future work. Our code has
been made available at https://github.com/epo/CF22-Green-Hands
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 19:06:58 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Qiao",
"Tingting",
""
],
[
"Perez",
"Gonzalo Moro",
""
]
] |
new_dataset
| 0.962948 |
2302.13795
|
Christoph Leiter
|
Christoph Leiter, Ran Zhang, Yanran Chen, Jonas Belouadi, Daniil
Larionov, Vivian Fresen and Steffen Eger
|
ChatGPT: A Meta-Analysis after 2.5 Months
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
ChatGPT, a chatbot developed by OpenAI, has gained widespread popularity and
media attention since its release in November 2022. However, little hard
evidence is available regarding its perception in various sources. In this
paper, we analyze over 300,000 tweets and more than 150 scientific papers to
investigate how ChatGPT is perceived and discussed. Our findings show that
ChatGPT is generally viewed as of high quality, with positive sentiment and
emotions of joy dominating in social media. Its perception has slightly
decreased since its debut, however, with joy decreasing and (negative) surprise
on the rise, and it is perceived more negatively in languages other than
English. In recent scientific papers, ChatGPT is characterized as a great
opportunity across various fields including the medical domain, but also as a
threat concerning ethics and receives mixed assessments for education. Our
comprehensive meta-analysis of ChatGPT's current perception after 2.5 months
since its release can contribute to shaping the public debate and informing its
future development. We make our data available.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 15:43:22 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Leiter",
"Christoph",
""
],
[
"Zhang",
"Ran",
""
],
[
"Chen",
"Yanran",
""
],
[
"Belouadi",
"Jonas",
""
],
[
"Larionov",
"Daniil",
""
],
[
"Fresen",
"Vivian",
""
],
[
"Eger",
"Steffen",
""
]
] |
new_dataset
| 0.998073 |
2302.13946
|
Majid Haghparast
|
Behrouz Safaiezadeh, Majid Haghparast and Lauri Kettunen
|
Novel Efficient Scalable QCA XOR and Full Adder Designs
|
18 pages, 12 figures, 15 tables
| null | null | null |
cs.ET cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Circuit design based on Quantum-dots Cellular Automata technology offers
power-efficiency and nano-size circuits. It is an attractive alternative to
CMOS technology. The XOR gate is a widely used building element in arithmetic
circuits. An efficient XOR gate in QCA computational circuits can significantly
improve efficiency. This paper proposes two different approaches for designing
3-input QCA XOR gates with 10 and 8 cells. They require two clock phases to
create output. They have efficient and scalable structures. To demonstrate the
functionality of these structures, we design QCA full adders using the
suggested gates and compare the results with existing designs. The proposed QCA
full adder has only 12 cells and is the best compared to all the existing
counterparts. We simulated and verified the proposed structures. We proved the
functionality of the proposed QCA full adder and the suggested QCA XOR
structures. Additionally, QCAPro is used to estimate the energy dissipation of
the proposed XOR and Full-adder. The results demonstrated that the proposed
designs have the desired performance based on the number of cells, occupied
area, and latency.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 16:48:59 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Safaiezadeh",
"Behrouz",
""
],
[
"Haghparast",
"Majid",
""
],
[
"Kettunen",
"Lauri",
""
]
] |
new_dataset
| 0.981719 |
2302.13971
|
Gautier Izacard
|
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet,
Marie-Anne Lachaux, Timoth\'ee Lacroix, Baptiste Rozi\`ere, Naman Goyal, Eric
Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave,
Guillaume Lample
|
LLaMA: Open and Efficient Foundation Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce LLaMA, a collection of foundation language models ranging from
7B to 65B parameters. We train our models on trillions of tokens, and show that
it is possible to train state-of-the-art models using publicly available
datasets exclusively, without resorting to proprietary and inaccessible
datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks,
and LLaMA-65B is competitive with the best models, Chinchilla-70B and
PaLM-540B. We release all our models to the research community.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 17:11:15 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Touvron",
"Hugo",
""
],
[
"Lavril",
"Thibaut",
""
],
[
"Izacard",
"Gautier",
""
],
[
"Martinet",
"Xavier",
""
],
[
"Lachaux",
"Marie-Anne",
""
],
[
"Lacroix",
"Timothée",
""
],
[
"Rozière",
"Baptiste",
""
],
[
"Goyal",
"Naman",
""
],
[
"Hambro",
"Eric",
""
],
[
"Azhar",
"Faisal",
""
],
[
"Rodriguez",
"Aurelien",
""
],
[
"Joulin",
"Armand",
""
],
[
"Grave",
"Edouard",
""
],
[
"Lample",
"Guillaume",
""
]
] |
new_dataset
| 0.998693 |
2302.13996
|
Size Wu
|
Size Wu, Wenwei Zhang, Sheng Jin, Wentao Liu, Chen Change Loy
|
Aligning Bag of Regions for Open-Vocabulary Object Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Pre-trained vision-language models (VLMs) learn to align vision and language
representations on large-scale datasets, where each image-text pair usually
contains a bag of semantic concepts. However, existing open-vocabulary object
detectors only align region embeddings individually with the corresponding
features extracted from the VLMs. Such a design leaves the compositional
structure of semantic concepts in a scene under-exploited, although the
structure may be implicitly learned by the VLMs. In this work, we propose to
align the embedding of bag of regions beyond individual regions. The proposed
method groups contextually interrelated regions as a bag. The embeddings of
regions in a bag are treated as embeddings of words in a sentence, and they are
sent to the text encoder of a VLM to obtain the bag-of-regions embedding, which
is learned to be aligned to the corresponding features extracted by a frozen
VLM. Applied to the commonly used Faster R-CNN, our approach surpasses the
previous best results by 4.6 box AP50 and 2.8 mask AP on novel categories of
open-vocabulary COCO and LVIS benchmarks, respectively. Code and models are
available at https://github.com/wusize/ovdet.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 17:39:21 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Wu",
"Size",
""
],
[
"Zhang",
"Wenwei",
""
],
[
"Jin",
"Sheng",
""
],
[
"Liu",
"Wentao",
""
],
[
"Loy",
"Chen Change",
""
]
] |
new_dataset
| 0.965362 |
2302.14039
|
Jingpei Lu
|
Jingpei Lu, Fei Liu, Cedric Girerd, Michael C. Yip
|
Image-based Pose Estimation and Shape Reconstruction for Robot
Manipulators and Soft, Continuum Robots via Differentiable Rendering
|
7 pages, 7 figures, accepted to ICRA 2023
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State estimation from measured data is crucial for robotic applications as
autonomous systems rely on sensors to capture the motion and localize in the 3D
world. Among sensors that are designed for measuring a robot's pose, or for
soft robots, their shape, vision sensors are favorable because they are
information-rich, easy to set up, and cost-effective. With recent advancements
in computer vision, deep learning-based methods no longer require markers for
identifying feature points on the robot. However, learning-based methods are
data-hungry and hence not suitable for soft and prototyping robots, as building
such benchmarking datasets is usually infeasible. In this work, we achieve
image-based robot pose estimation and shape reconstruction from camera images.
Our method requires no precise robot meshes, but rather utilizes a
differentiable renderer and primitive shapes. It hence can be applied to robots
for which CAD models might not be available or are crude. Our parameter
estimation pipeline is fully differentiable. The robot shape and pose are
estimated iteratively by back-propagating the image loss to update the
parameters. We demonstrate that our method of using geometrical shape
primitives can achieve high accuracy in shape reconstruction for a soft
continuum robot and pose estimation for a robot manipulator.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 18:51:29 GMT"
}
] | 2023-02-28T00:00:00 |
[
[
"Lu",
"Jingpei",
""
],
[
"Liu",
"Fei",
""
],
[
"Girerd",
"Cedric",
""
],
[
"Yip",
"Michael C.",
""
]
] |
new_dataset
| 0.979283 |
2102.10421
|
James Woodruff
|
J. Zachary Woodruff and Kevin M. Lynch
|
Robotic Contact Juggling
|
18 pages, 15 figures. | Supplemental Video:
https://youtu.be/QT55_Q1ePfg | Code:
https://github.com/zackwoodruff/rolling_dynamics
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We define "robotic contact juggling" to be the purposeful control of the
motion of a three-dimensional smooth object as it rolls freely on a
motion-controlled robot manipulator, or "hand." While specific examples of
robotic contact juggling have been studied before, in this paper we provide the
first general formulation and solution method for the case of an arbitrary
smooth object in single-point rolling contact on an arbitrary smooth hand. Our
formulation splits the problem into four subproblems: (1) deriving the
second-order rolling kinematics; (2) deriving the three-dimensional rolling
dynamics; (3) planning rolling motions that satisfy the rolling dynamics; and
(4) feedback stabilization of planned rolling trajectories. The theoretical
results are demonstrated in simulation and experiment using feedback from a
high-speed vision system.
|
[
{
"version": "v1",
"created": "Sat, 20 Feb 2021 19:15:28 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 17:14:55 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Woodruff",
"J. Zachary",
""
],
[
"Lynch",
"Kevin M.",
""
]
] |
new_dataset
| 0.980186 |
2204.02939
|
Mrinal Kanti Dhar
|
Mrinal Kanti Dhar and Mou Deb
|
S-R2F2U-Net: A single-stage model for teeth segmentation
|
GitHub link is mentioned in the abstract. The main manuscript
contains 4 figures and 4 tables. The supplementary document contains 7
figures and 1 table
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Precision tooth segmentation is crucial in the oral sector because it
provides location information for orthodontic therapy, clinical diagnosis, and
surgical treatments. In this paper, we investigate residual, recurrent, and
attention networks to segment teeth from panoramic dental images. Based on our
findings, we suggest three single-stage models: Single Recurrent R2U-Net
(S-R2U-Net), Single Recurrent Filter Double R2U-Net (S-R2F2U-Net), and Single
Recurrent Attention Enabled Filter Double (S-R2F2-Attn-U-Net). Particularly,
S-R2F2U-Net outperforms state-of-the-art models in terms of accuracy and dice
score. A hybrid loss function combining the cross-entropy loss and dice loss is
used to train the model. In addition, our model reduces the number of
parameters by around 45% compared to the R2U-Net model. Models are trained and evaluated on a benchmark
dataset containing 1500 dental panoramic X-ray images. S-R2F2U-Net achieves
97.31% of accuracy and 93.26% of dice score, showing superiority over the
state-of-the-art methods. Codes are available at
https://github.com/mrinal054/teethSeg_sr2f2u-net.git.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 17:07:09 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 20:26:37 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Dhar",
"Mrinal Kanti",
""
],
[
"Deb",
"Mou",
""
]
] |
new_dataset
| 0.995165 |
2205.08152
|
Huan Meng
|
Huan Meng
|
Dual-mode robust MPC for the tracking control of non-holonomic mobile
robots
|
This paper contains a lot of mistakes; therefore, I want to withdraw it
| null | null | null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this paper, a novel dual-mode robust model predictive control (MPC)
approach is proposed for solving the tracking control problem of non-holonomoic
mobile robots with additive bounded disturbance. To reduce the negative effect
of disturbance and drive the state of real system closer to the one of nominal
system , a robust reference signal is introduced into the cost function of MPC.
In order to reduced the computation burden caused by online optimization of MPC
and further improve the tracking accuracy, a dual-mode control strucuture
consisting of the robust MPC and the local nonlinear robust control is
developed, in which the local nonlinear robust control law is applied within a
specified terminal region. Finally, simulation results on the non-holonomic
mobile robot are presented to show the validity of the proposed control
approach.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 07:43:20 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 09:38:55 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Meng",
"Huan",
""
]
] |
new_dataset
| 0.978206 |
2207.10748
|
Zhaolin Wang
|
Zhaolin Wang, Xidong Mu, Yuanwei Liu
|
STARS Enabled Integrated Sensing and Communications
|
16 pages, 8 figures
| null |
10.1109/TWC.2023.3245297
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A simultaneously transmitting and reflecting intelligent surface (STARS)
enabled integrated sensing and communications (ISAC) framework is proposed,
where the whole space is divided by STARS into a sensing space and a
communication space. A novel sensing-at-STARS structure, where dedicated
sensors are installed at the STARS, is proposed to address the significant path
loss and clutter interference for sensing. The Cramer-Rao bound (CRB) of the
two-dimensional (2D) direction-of-arrival (DOA) estimation of the sensing target
is derived, which is then minimized subject to the minimum communication
requirement. A novel approach is proposed to transform the complicated CRB
minimization problem into a tractable modified Fisher information matrix (FIM)
optimization problem. Both independent and coupled phase-shift models of STARS
are investigated: 1) For the independent phase-shift model, to address the
coupling of ISAC waveform and STARS coefficient in the modified FIM, an
efficient double-loop iterative algorithm based on the penalty dual
decomposition (PDD) framework is conceived; 2) For the coupled phase-shift
model, based on the PDD framework, a low complexity alternating optimization
algorithm is proposed to tackle the coupled phase-shift constraints by alternately
optimizing amplitude and phase-shift coefficients in closed-form. Finally, the
numerical results demonstrate that: 1) STARS significantly outperforms the
conventional RIS in CRB under the communication constraints; 2) The coupled
phase-shift model achieves comparable performance to the independent one for
low communication requirements or sufficient STARS elements; 3) It is more
efficient to increase the number of passive elements of STARS rather than the
active elements of the sensor; 4) High sensing accuracy can be achieved by
STARS using the practical 2D maximum likelihood estimator compared with the
conventional RIS.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 20:47:53 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2022 15:07:34 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Feb 2023 01:39:37 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Wang",
"Zhaolin",
""
],
[
"Mu",
"Xidong",
""
],
[
"Liu",
"Yuanwei",
""
]
] |
new_dataset
| 0.997542 |
2209.04145
|
Zhengzhe Liu
|
Zhengzhe Liu, Peng Dai, Ruihui Li, Xiaojuan Qi, Chi-Wing Fu
|
ISS: Image as Stepping Stone for Text-Guided 3D Shape Generation
|
ICLR 2023 spotlight
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-guided 3D shape generation remains challenging due to the absence of
large paired text-shape data, the substantial semantic gap between these two
modalities, and the structural complexity of 3D shapes. This paper presents a
new framework called Image as Stepping Stone (ISS) for the task by introducing
2D image as a stepping stone to connect the two modalities and to eliminate the
need for paired text-shape data. Our key contribution is a two-stage
feature-space-alignment approach that maps CLIP features to shapes by
harnessing a pre-trained single-view reconstruction (SVR) model with multi-view
supervisions: first map the CLIP image feature to the detail-rich shape space
in the SVR model, then map the CLIP text feature to the shape space and
optimize the mapping by encouraging CLIP consistency between the input text and
the rendered images. Further, we formulate a text-guided shape stylization
module to dress up the output shapes with novel textures. Beyond existing works
on 3D shape generation from text, our new approach is general for creating
shapes in a broad range of categories, without requiring paired text-shape
data. Experimental results manifest that our approach outperforms the
state-of-the-arts and our baselines in terms of fidelity and consistency with
text. Further, our approach can stylize the generated shapes with both
realistic and fantasy structures and textures.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 06:54:21 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Sep 2022 06:50:28 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Sep 2022 06:47:08 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Sep 2022 02:27:31 GMT"
},
{
"version": "v5",
"created": "Sat, 28 Jan 2023 09:19:09 GMT"
},
{
"version": "v6",
"created": "Fri, 24 Feb 2023 01:38:20 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Liu",
"Zhengzhe",
""
],
[
"Dai",
"Peng",
""
],
[
"Li",
"Ruihui",
""
],
[
"Qi",
"Xiaojuan",
""
],
[
"Fu",
"Chi-Wing",
""
]
] |
new_dataset
| 0.998471 |
2210.07838
|
Gonzalo Mier
|
Gonzalo Mier and Jo\~ao Valente and Sytze de Bruin
|
Fields2Cover: An open-source coverage path planning library for unmanned
agricultural vehicles
|
8 pages, 5 figures, 2 tables
| null |
10.1109/LRA.2023.3248439
| null |
cs.RO cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes Fields2Cover, a novel open source library for coverage
path planning (CPP) for agricultural vehicles. While there are several CPP
solutions nowadays, there have been limited efforts to unify them into an open
source library and provide benchmarking tools to compare their performance.
Fields2Cover provides a framework for planning coverage paths, developing novel
techniques, and benchmarking state-of-the-art algorithms. The library features
a modular and extensible architecture that supports various vehicles and can be
used for a variety of applications, including farms. Its core modules are: a
headland generator, a swath generator, a route planner and a path planner. An
interface to the Robot Operating System (ROS) is also supplied as an add-on. In
this paper, the functionalities of the library for planning a coverage path in
agriculture are demonstrated using 8 state-of-the-art methods and 7 objective
functions in simulation and field experiments.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 14:09:29 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 10:35:46 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Mier",
"Gonzalo",
""
],
[
"Valente",
"João",
""
],
[
"de Bruin",
"Sytze",
""
]
] |
new_dataset
| 0.998949 |
2210.14502
|
Chenhui Shen
|
Chenhui Shen, Liying Cheng, Lidong Bing, Yang You, Luo Si
|
SentBS: Sentence-level Beam Search for Controllable Summarization
|
10 pages, 1 figure, accepted by EMNLP 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A wide range of control perspectives have been explored in controllable text
generation. Structure-controlled summarization is recently proposed as a useful
and interesting research direction. However, current structure-controlling
methods have limited effectiveness in enforcing the desired structure. To
address this limitation, we propose a sentence-level beam search generation
method (SentBS), where evaluation is conducted throughout the generation
process to select suitable sentences for subsequent generations. We experiment
with different combinations of decoding methods to be used as subcomponents by
SentBS and evaluate results on the structure-controlled dataset MReD.
Experiments show that all explored combinations for SentBS can improve the
agreement between the generated text and the desired structure, with the best
method significantly reducing the structural discrepancies suffered by the
existing model, by approximately 68%.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 06:21:01 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2022 14:33:30 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Feb 2023 03:59:33 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Shen",
"Chenhui",
""
],
[
"Cheng",
"Liying",
""
],
[
"Bing",
"Lidong",
""
],
[
"You",
"Yang",
""
],
[
"Si",
"Luo",
""
]
] |
new_dataset
| 0.97683 |
2211.15226
|
Alessandro Ottino
|
Alessandro Ottino, Joshua Benjamin, Georgios Zervas
|
RAMP: A Flat Nanosecond Optical Network and MPI Operations for
Distributed Deep Learning Systems
| null | null | null | null |
cs.DC cs.LG cs.NI cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed deep learning (DDL) systems strongly depend on network
performance. Current electronic packet switched (EPS) network architectures and
technologies suffer from variable diameter topologies, low-bisection bandwidth
and over-subscription affecting completion time of communication and collective
operations.
We introduce a near-exascale, full-bisection bandwidth, all-to-all,
single-hop, all-optical network architecture with nanosecond reconfiguration
called RAMP, which supports large-scale distributed and parallel computing
systems (12.8~Tbps per node for up to 65,536 nodes).
For the first time, a custom RAMP-x MPI strategy and a network transcoder is
proposed to run MPI collective operations across the optical circuit switched
(OCS) network in a schedule-less and contention-less manner. RAMP achieves
7.6-171$\times$ speed-up in completion time across all MPI operations compared
to realistic EPS and OCS counterparts. It can also deliver a 1.3-16$\times$ and
7.8-58$\times$ reduction in Megatron and DLRM training time respectively, while
offering 42-53$\times$ and 3.3-12.4$\times$ improvement in energy consumption
and cost respectively.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 11:24:51 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 11:25:22 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Ottino",
"Alessandro",
""
],
[
"Benjamin",
"Joshua",
""
],
[
"Zervas",
"Georgios",
""
]
] |
new_dataset
| 0.997447 |
2212.10048
|
Yang Jiao
|
Yang Jiao, Kai Yang, Tiancheng Wu, Dongjin Song, Chengtao Jian
|
Asynchronous Distributed Bilevel Optimization
|
Accepted at ICLR2023
| null | null | null |
cs.LG cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bilevel optimization plays an essential role in many machine learning tasks,
ranging from hyperparameter optimization to meta-learning. Existing studies on
bilevel optimization, however, focus on either centralized or synchronous
distributed setting. The centralized bilevel optimization approaches require
collecting a massive amount of data at a single server, which inevitably incurs
significant communication expenses and may give rise to data privacy risks.
Synchronous distributed bilevel optimization algorithms, on the other hand,
often face the straggler problem and will immediately stop working if a few
workers fail to respond. As a remedy, we propose Asynchronous Distributed
Bilevel Optimization (ADBO) algorithm. The proposed ADBO can tackle bilevel
optimization problems with both nonconvex upper-level and lower-level objective
functions, and its convergence is theoretically guaranteed. Furthermore, it is
revealed through theoretical analysis that the iteration complexity of ADBO to
obtain the $\epsilon$-stationary point is upper bounded by
$\mathcal{O}(\frac{1}{{{\epsilon ^2}}})$. Thorough empirical studies on public
datasets have been conducted to elucidate the effectiveness and efficiency of
the proposed ADBO.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 07:44:48 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Feb 2023 13:32:55 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Feb 2023 04:49:07 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Jiao",
"Yang",
""
],
[
"Yang",
"Kai",
""
],
[
"Wu",
"Tiancheng",
""
],
[
"Song",
"Dongjin",
""
],
[
"Jian",
"Chengtao",
""
]
] |
new_dataset
| 0.977963 |
2301.00970
|
Xu Yan
|
Xu Yan, Chaoda Zheng, Zhen Li, Shuguang Cui, Dengxin Dai
|
Benchmarking the Robustness of LiDAR Semantic Segmentation Models
|
The benchmark will be made available at
https://yanx27.github.io/RobustLidarSeg/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When using LiDAR semantic segmentation models for safety-critical
applications such as autonomous driving, it is essential to understand and
improve their robustness with respect to a large range of LiDAR corruptions. In
this paper, we aim to comprehensively analyze the robustness of LiDAR semantic
segmentation models under various corruptions. To rigorously evaluate the
robustness and generalizability of current approaches, we propose a new
benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR
corruptions in three groups, namely adverse weather, measurement noise and
cross-device discrepancy. Then, we systematically investigate 11 LiDAR semantic
segmentation models, especially spanning different input representations (e.g.,
point clouds, voxels, projected images, etc.), network architectures, and
training schemes. Through this study, we obtain two insights: 1) We find out
that the input representation plays a crucial role in robustness. Specifically,
under specific corruptions, different representations perform differently. 2)
Although state-of-the-art methods on LiDAR semantic segmentation achieve
promising results on clean data, they are less robust when dealing with noisy
data. Finally, based on the above observations, we design a robust LiDAR
segmentation model (RLSeg) which greatly boosts the robustness with simple but
effective modifications. It is promising that our benchmark, comprehensive
analysis, and observations can boost future research in robust LiDAR semantic
segmentation for safety-critical applications.
|
[
{
"version": "v1",
"created": "Tue, 3 Jan 2023 06:47:31 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 02:23:08 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Yan",
"Xu",
""
],
[
"Zheng",
"Chaoda",
""
],
[
"Li",
"Zhen",
""
],
[
"Cui",
"Shuguang",
""
],
[
"Dai",
"Dengxin",
""
]
] |
new_dataset
| 0.985799 |
2302.05916
|
Qiang Wen
|
Qiang Wen, Yue Wu, Qifeng Chen
|
Video Waterdrop Removal via Spatio-Temporal Fusion in Driving Scenes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The waterdrops on windshields during driving can cause severe visual
obstructions, which may lead to car accidents. Meanwhile, the waterdrops can
also degrade the performance of a computer vision system in autonomous driving.
To address these issues, we propose an attention-based framework that fuses the
spatio-temporal representations from multiple frames to restore visual
information occluded by waterdrops. Due to the lack of training data for video
waterdrop removal, we propose a large-scale synthetic dataset with simulated
waterdrops in complex driving scenes on rainy days. To improve the generality
of our proposed method, we adopt a cross-modality training strategy that
combines synthetic videos and real-world images. Extensive experiments show
that our proposed method can generalize well and achieve the best waterdrop
removal performance in complex real-world driving scenes.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 13:47:26 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 07:16:35 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Feb 2023 18:27:30 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Wen",
"Qiang",
""
],
[
"Wu",
"Yue",
""
],
[
"Chen",
"Qifeng",
""
]
] |
new_dataset
| 0.998167 |
2302.11136
|
Rabindra Lamsal
|
Rabindra Lamsal, Maria Rodriguez Read, Shanika Karunasekera
|
A Twitter narrative of the COVID-19 pandemic in Australia
|
Accepted to ISCRAM 2023
| null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Social media platforms contain abundant data that can provide comprehensive
knowledge of historical and real-time events. During crisis events, the use of
social media peaks, as people discuss what they have seen, heard, or felt.
Previous studies confirm the usefulness of such socially generated discussions
for the public, first responders, and decision-makers to gain a better
understanding of events as they unfold at the ground level. This study performs
an extensive analysis of COVID-19-related Twitter discussions generated in
Australia between January 2020, and October 2022. We explore the Australian
Twitterverse by employing state-of-the-art approaches from both supervised and
unsupervised domains to perform network analysis, topic modeling, sentiment
analysis, and causality analysis. As the presented results provide a
comprehensive understanding of the Australian Twitterverse during the COVID-19
pandemic, this study aims to explore the discussion dynamics to aid the
development of future automated information systems for epidemic/pandemic
management.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 04:06:59 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 02:01:08 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Lamsal",
"Rabindra",
""
],
[
"Read",
"Maria Rodriguez",
""
],
[
"Karunasekera",
"Shanika",
""
]
] |
new_dataset
| 0.994812 |
2302.11970
|
Md Awsafur Rahman
|
Md Awsafur Rahman, Bishmoy Paul, Najibul Haque Sarker, Zaber Ibn Abdul
Hakim, Shaikh Anowarul Fattah
|
ArtiFact: A Large-Scale Dataset with Artificial and Factual Images for
Generalizable and Robust Synthetic Image Detection
|
Figures High-Res
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Synthetic image generation has opened up new opportunities but has also
created threats in regard to privacy, authenticity, and security. Detecting
fake images is of paramount importance to prevent illegal activities, and
previous research has shown that generative models leave unique patterns in
their synthetic images that can be exploited to detect them. However, the
fundamental problem of generalization remains, as even state-of-the-art
detectors encounter difficulty when facing generators never seen during
training. To assess the generalizability and robustness of synthetic image
detectors in the face of real-world impairments, this paper presents a
large-scale dataset named ArtiFact, comprising diverse generators, object
categories, and real-world challenges. Moreover, the proposed multi-class
classification scheme, combined with a filter stride reduction strategy
addresses social platform impairments and effectively detects synthetic images
from both seen and unseen generators. The proposed solution significantly
outperforms other top teams by 8.34% on Test 1, 1.26% on Test 2, and 15.08% on
Test 3 in the IEEE VIP Cup challenge at ICIP 2022, as measured by the accuracy
metric.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 12:40:36 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2023 13:41:35 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Rahman",
"Md Awsafur",
""
],
[
"Paul",
"Bishmoy",
""
],
[
"Sarker",
"Najibul Haque",
""
],
[
"Hakim",
"Zaber Ibn Abdul",
""
],
[
"Fattah",
"Shaikh Anowarul",
""
]
] |
new_dataset
| 0.999434 |
2302.12367
|
Mian Zhong
|
Mian Zhong, Shehzaad Dhuliawala, Niklas Stoehr
|
Extracting Victim Counts from Text
|
Long paper accepted at EACL 2023 main conference
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Decision-makers in the humanitarian sector rely on timely and exact
information during crisis events. Knowing how many civilians were injured
during an earthquake is vital to allocate aids properly. Information about such
victim counts is often only available within full-text event descriptions from
newspapers and other reports. Extracting numbers from text is challenging:
numbers have different formats and may require numeric reasoning. This renders
purely string matching-based approaches insufficient. As a consequence,
fine-grained counts of injured, displaced, or abused victims beyond fatalities
are often not extracted and remain unseen. We cast victim count extraction as a
question answering (QA) task with a regression or classification objective. We
compare regex, dependency parsing, semantic role labeling-based approaches, and
advanced text-to-text models. Beyond model accuracy, we analyze extraction
reliability and robustness which are key for this sensitive task. In
particular, we discuss model calibration and investigate few-shot and
out-of-distribution performance. Ultimately, we make a comprehensive
recommendation on which model to select for different desiderata and data
domains. Our work is among the first to apply numeracy-focused large language
models in a real-world use case with a positive impact.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 23:50:24 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Zhong",
"Mian",
""
],
[
"Dhuliawala",
"Shehzaad",
""
],
[
"Stoehr",
"Niklas",
""
]
] |
new_dataset
| 0.998351 |
2302.12433
|
Zhangir Azerbayev Mr
|
Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W.
Ayers, Dragomir Radev, Jeremy Avigad
|
ProofNet: Autoformalizing and Formally Proving Undergraduate-Level
Mathematics
| null | null | null | null |
cs.CL cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce ProofNet, a benchmark for autoformalization and formal proving
of undergraduate-level mathematics. The ProofNet benchmark consists of 371
examples, each consisting of a formal theorem statement in Lean 3, a natural
language theorem statement, and a natural language proof. The problems are
primarily drawn from popular undergraduate pure mathematics textbooks and cover
topics such as real and complex analysis, linear algebra, abstract algebra, and
topology. We intend for ProofNet to be a challenging benchmark that will drive
progress in autoformalization and automatic theorem proving. We report baseline
results on statement autoformalization via in-context learning. Moreover, we
introduce two novel statement autoformalization methods: prompt retrieval and
distilled backtranslation.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 03:28:46 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Azerbayev",
"Zhangir",
""
],
[
"Piotrowski",
"Bartosz",
""
],
[
"Schoelkopf",
"Hailey",
""
],
[
"Ayers",
"Edward W.",
""
],
[
"Radev",
"Dragomir",
""
],
[
"Avigad",
"Jeremy",
""
]
] |
new_dataset
| 0.999353 |
2302.12443
|
Abhishek Verma
|
Abhishek Verma, Virender Ranga
|
CoSec-RPL: detection of copycat attacks in RPL based 6LoWPANs using
outlier analysis
| null |
Telecommunication Systems, 75, 43-61 (2020)
|
10.1007/s11235-020-00674-w
| null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The IPv6 routing protocol for low-power and lossy networks (RPL) is the
standard routing protocol for IPv6 based low-power wireless personal area
networks (6LoWPANs). In RPL protocol, DODAG information object (DIO) messages
are used to disseminate routing information to other nodes in the network. A
malicious node may eavesdrop DIO messages of its neighbor nodes and later
replay the captured DIO many times with fixed intervals. In this paper, we
present and investigate one of the severe attacks named as a non-spoofed
copycat attack, a type of replay based DoS attack against RPL protocol. It is
shown that the non-spoofed copycat attack increases the average end-to-end
delay (AE2ED) and packet delivery ratio of the network. Thus, to address this
problem, an intrusion detection system (IDS) named CoSec-RPL is proposed in
this paper. The attack detection logic of CoSec-RPL is primarily based on the
idea of outlier detection (OD). CoSec-RPL significantly mitigates the effects
of the non-spoofed copycat attack on the network's performance. The
effectiveness of the proposed IDS is compared with the standard RPL protocol.
The experimental results indicate that CoSec-RPL detects and mitigates
non-spoofed copycat attack efficiently in both static and mobile network
scenarios without adding any significant overhead to the nodes. To the best of
our knowledge, CoSec-RPL is the first RPL specific IDS that utilizes OD for
intrusion detection in 6LoWPANs.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 04:05:07 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Verma",
"Abhishek",
""
],
[
"Ranga",
"Virender",
""
]
] |
new_dataset
| 0.986872 |
2302.12489
|
Chirag Srivatsa
|
Chirag Ramesh Srivatsa and Chandra R. Murthy
|
Channel State Information Based User Censoring in Irregular Repetition
Slotted Aloha
|
Accepted at IEEE ICC 2023
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Irregular repetition slotted aloha (IRSA) is a massive random access protocol
which can be used to serve a large number of users while achieving a packet
loss rate (PLR) close to zero. However, if the number of users is too high,
then the system is interference limited and the PLR is close to one. In this
paper, we propose a variant of IRSA in the interference limited regime, namely
Censored-IRSA (C-IRSA), wherein users with poor channel states censor
themselves from transmitting their packets. We theoretically analyze the
throughput performance of C-IRSA via density evolution. Using this, we derive
closed-form expressions for the optimal choice of the censor threshold which
maximizes the throughput while achieving zero PLR among uncensored users.
Through extensive numerical simulations, we show that C-IRSA can achieve a
4$\times$ improvement in the peak throughput compared to conventional IRSA.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 07:10:17 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Srivatsa",
"Chirag Ramesh",
""
],
[
"Murthy",
"Chandra R.",
""
]
] |
new_dataset
| 0.997772 |
2302.12532
|
Bin Liu
|
Bin Liu, Xiaolin Wei, Bo Li, Junjie Cao, Yu-Kun Lai
|
Pose-Controllable 3D Facial Animation Synthesis using Hierarchical
Audio-Vertex Attention
|
15 pages, 12 figures
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the existing audio-driven 3D facial animation methods suffer from
a lack of detailed facial expression and head pose, resulting in an
unsatisfactory experience of human-robot interaction. In this paper, a novel
pose-controllable 3D facial animation synthesis method is proposed by utilizing
hierarchical audio-vertex attention. To synthesize real and detailed
expression, a hierarchical decomposition strategy is proposed to encode the
audio signal into both a global latent feature and a local vertex-wise control
feature. Then the local and global audio features combined with vertex spatial
features are used to predict the final consistent facial animation via a graph
convolutional neural network by fusing the intrinsic spatial topology structure
of the face model and the corresponding semantic feature of the audio. To
accomplish pose-controllable animation, we introduce a novel pose attribute
augmentation method by utilizing the 2D talking face technique. Experimental
results indicate that the proposed method can produce more realistic facial
expressions and head posture movements. Qualitative and quantitative
experiments show that the proposed method achieves competitive performance
against state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 09:36:31 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Liu",
"Bin",
""
],
[
"Wei",
"Xiaolin",
""
],
[
"Li",
"Bo",
""
],
[
"Cao",
"Junjie",
""
],
[
"Lai",
"Yu-Kun",
""
]
] |
new_dataset
| 0.99565 |
2302.12587
|
Savvas Papaioannou
|
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides,
Christos G. Panayiotou and Marios M. Polycarpou
|
3D Trajectory Planning for UAV-based Search Missions: An Integrated
Assessment and Search Planning Approach
|
2021 International Conference on Unmanned Aircraft Systems (ICUAS)
|
2021 International Conference on Unmanned Aircraft Systems (ICUAS)
|
10.1109/ICUAS51884.2021.9476869
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to efficiently plan and execute search missions in challenging
and complex environments during natural and man-made disasters is imperative.
In many emergency situations, precise navigation between obstacles and
time-efficient searching around 3D structures is essential for finding
survivors. In this work we propose an integrated assessment and search planning
approach which allows an autonomous UAV (unmanned aerial vehicle) agent to plan
and execute collision-free search trajectories in 3D environments. More
specifically, the proposed search-planning framework aims to integrate and
automate the first two phases (i.e., the assessment phase and the search phase)
of a traditional search-and-rescue (SAR) mission. In the first stage, termed
assessment-planning we aim to find a high-level assessment plan which the UAV
agent can execute in order to visit a set of points of interest. The generated
plan of this stage guides the UAV to fly over the objects of interest thus
providing a first assessment of the situation at hand. In the second stage,
termed search-planning, the UAV trajectory is further fine-tuned to allow the
UAV to search in 3D (i.e., across all faces) the objects of interest for
survivors. The performance of the proposed approach is demonstrated through
extensive simulation analysis.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 11:51:17 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Papaioannou",
"Savvas",
""
],
[
"Kolios",
"Panayiotis",
""
],
[
"Theocharides",
"Theocharis",
""
],
[
"Panayiotou",
"Christos G.",
""
],
[
"Polycarpou",
"Marios M.",
""
]
] |
new_dataset
| 0.99812 |
2302.12772
|
Tali Treibitz
|
Yelena Randall and Tali Treibitz
|
FLSea: Underwater Visual-Inertial and Stereo-Vision Forward-Looking
Datasets
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Visibility underwater is challenging, and degrades as the distance between
the subject and camera increases, making vision tasks in the forward-looking
direction more difficult. We have collected underwater forward-looking
stereo-vision and visual-inertial image sets in the Mediterranean and Red Sea.
To our knowledge there are no other public datasets in the underwater
environment acquired with this camera-sensor orientation published with
ground-truth. These datasets are critical for the development of several
underwater applications, including obstacle avoidance, visual odometry, 3D
tracking, Simultaneous Localization and Mapping (SLAM) and depth estimation.
The stereo datasets include synchronized stereo images in dynamic underwater
environments with objects of known-size. The visual-inertial datasets contain
monocular images and IMU measurements, aligned with millisecond resolution
timestamps and objects of known size which were placed in the scene. Both
sensor configurations allow for scale estimation, with the calibrated baseline
in the stereo setup and the IMU in the visual-inertial setup. Ground truth
depth maps were created offline for both dataset types using photogrammetry.
The ground truth is validated with multiple known measurements placed
throughout the imaged environment. There are 5 stereo and 8 visual-inertial
datasets in total, each containing thousands of images, with a range of
different underwater visibility and ambient light conditions, natural and
man-made structures and dynamic camera motions. The forward-looking orientation
of the camera makes these datasets unique and ideal for testing underwater
obstacle-avoidance algorithms and for navigation close to the seafloor in
dynamic environments. With our datasets, we hope to encourage the advancement
of autonomous functionality for underwater vehicles in dynamic and/or shallow
water environments.
|
[
{
"version": "v1",
"created": "Fri, 24 Feb 2023 17:39:53 GMT"
}
] | 2023-02-27T00:00:00 |
[
[
"Randall",
"Yelena",
""
],
[
"Treibitz",
"Tali",
""
]
] |
new_dataset
| 0.978407 |
2002.09283
|
Bin Hu
|
Hanshu Cai, Yiwen Gao, Shuting Sun, Na Li, Fuze Tian, Han Xiao,
Jianxiu Li, Zhengwu Yang, Xiaowei Li, Qinglin Zhao, Zhenyu Liu, Zhijun Yao,
Minqiang Yang, Hong Peng, Jing Zhu, Xiaowei Zhang, Guoping Gao, Fang Zheng,
Rui Li, Zhihua Guo, Rong Ma, Jing Yang, Lan Zhang, Xiping Hu, Yumin Li, Bin
Hu
|
MODMA dataset: a Multi-modal Open Dataset for Mental-disorder Analysis
| null |
Sci Data 9, 178 (2022)
|
10.1038/s41597-022-01211-x
| null |
cs.DL cs.LG q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
According to the World Health Organization, the number of mental disorder
patients, especially depression patients, has grown rapidly and become a
leading contributor to the global burden of disease. However, the present
common practice of depression diagnosis is based on interviews and clinical
scales carried out by doctors, which is not only labor-consuming but also
time-consuming. One important reason is due to the lack of physiological
indicators for mental disorders. With the rising of tools such as data mining
and artificial intelligence, using physiological data to explore new possible
physiological indicators of mental disorder and creating new applications for
mental disorder diagnosis has become a new hot research topic. However, good
quality physiological data for mental disorder patients are hard to acquire. We
present a multi-modal open dataset for mental-disorder analysis. The dataset
includes EEG and audio data from clinically depressed patients and matching
normal controls. All our patients were carefully diagnosed and selected by
professional psychiatrists in hospitals. The EEG dataset includes not only data
collected using traditional 128-electrodes mounted elastic cap, but also a
novel wearable 3-electrode EEG collector for pervasive applications. The
128-electrodes EEG signals of 53 subjects were recorded as both in resting
state and under stimulation; the 3-electrode EEG signals of 55 subjects were
recorded in resting state; the audio data of 52 subjects were recorded during
interviewing, reading, and picture description. We encourage other researchers
in the field to use it for testing their methods of mental-disorder analysis.
|
[
{
"version": "v1",
"created": "Thu, 20 Feb 2020 09:40:39 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Mar 2020 02:27:08 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Mar 2020 03:43:31 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Cai",
"Hanshu",
""
],
[
"Gao",
"Yiwen",
""
],
[
"Sun",
"Shuting",
""
],
[
"Li",
"Na",
""
],
[
"Tian",
"Fuze",
""
],
[
"Xiao",
"Han",
""
],
[
"Li",
"Jianxiu",
""
],
[
"Yang",
"Zhengwu",
""
],
[
"Li",
"Xiaowei",
""
],
[
"Zhao",
"Qinglin",
""
],
[
"Liu",
"Zhenyu",
""
],
[
"Yao",
"Zhijun",
""
],
[
"Yang",
"Minqiang",
""
],
[
"Peng",
"Hong",
""
],
[
"Zhu",
"Jing",
""
],
[
"Zhang",
"Xiaowei",
""
],
[
"Gao",
"Guoping",
""
],
[
"Zheng",
"Fang",
""
],
[
"Li",
"Rui",
""
],
[
"Guo",
"Zhihua",
""
],
[
"Ma",
"Rong",
""
],
[
"Yang",
"Jing",
""
],
[
"Zhang",
"Lan",
""
],
[
"Hu",
"Xiping",
""
],
[
"Li",
"Yumin",
""
],
[
"Hu",
"Bin",
""
]
] |
new_dataset
| 0.999891 |
2106.10432
|
Muhamad Amin Husni Abdul Haris
|
Muhamad Amin Husni Abdul Haris, Sin Liang Lim
|
Neural Network Facial Authentication for Public Electric Vehicle
Charging Station
| null |
JETAP Vol.3 No.1 (2021) 17-21
|
10.33093/jetap.2021.3.1
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
This study is to investigate and compare the facial recognition accuracy
performance of Dlib ResNet against a K-Nearest Neighbour (KNN) classifier.
Particularly when used against a dataset from an Asian ethnicity as Dlib ResNet
was reported to have an accuracy deficiency when it comes to Asian faces. The
comparisons are both implemented on the facial vectors extracted using the
Histogram of Oriented Gradients (HOG) method and use the same dataset for a
fair comparison. Authentication of a user by facial recognition in an electric
vehicle (EV) charging station demonstrates a practical use case for such an
authentication system.
|
[
{
"version": "v1",
"created": "Sat, 19 Jun 2021 05:48:42 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Haris",
"Muhamad Amin Husni Abdul",
""
],
[
"Lim",
"Sin Liang",
""
]
] |
new_dataset
| 0.96418 |
2209.08573
|
Michael Amir
|
Ori Rappel, Michael Amir, Alfred M. Bruckstein
|
Stigmergy-based, Dual-Layer Coverage of Unknown Indoor Regions
|
to appear in the proceedings of AAMAS2023 ("International Conference
on Autonomous Agents and Multiagent Systems 2023")
| null | null | null |
cs.MA cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present algorithms for uniformly covering an unknown indoor region with a
swarm of simple, anonymous and autonomous mobile agents. The exploration of
such regions is made difficult by the lack of a common global reference frame,
severe degradation of radio-frequency communication, and numerous ground
obstacles. We propose addressing these challenges by using airborne agents,
such as Micro Air Vehicles, in dual capacity, both as mobile explorers and
(once they land) as beacons that help other agents navigate the region.
The algorithms we propose are designed for a swarm of simple, identical,
ant-like agents with local sensing capabilities. The agents enter the region,
which is discretized as a graph, over time from one or more entry points and
are tasked with occupying all of its vertices. Unlike many works in this area,
we consider the requirement of informing an outside operator with limited
information that the coverage mission is complete. Even with this additional
requirement we show, both through simulations and mathematical proofs, that the
dual role concept results in linear-time termination, while also besting many
well-known algorithms in the literature in terms of energy use.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 14:18:30 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 01:20:28 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Rappel",
"Ori",
""
],
[
"Amir",
"Michael",
""
],
[
"Bruckstein",
"Alfred M.",
""
]
] |
new_dataset
| 0.999766 |
2210.05404
|
Zifan Jiang
|
Zifan Jiang, Amit Moryossef, Mathias M\"uller, Sarah Ebling
|
Machine Translation between Spoken Languages and Signed Languages
Represented in SignWriting
|
Accepted at EACL 2023 (Findings)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents work on novel machine translation (MT) systems between
spoken and signed languages, where signed languages are represented in
SignWriting, a sign language writing system. Our work seeks to address the lack
of out-of-the-box support for signed languages in current MT systems and is
based on the SignBank dataset, which contains pairs of spoken language text and
SignWriting content. We introduce novel methods to parse, factorize, decode,
and evaluate SignWriting, leveraging ideas from neural factored MT. In a
bilingual setup--translating from American Sign Language to (American)
English--our method achieves over 30 BLEU, while in two multilingual
setups--translating in both directions between spoken languages and signed
languages--we achieve over 20 BLEU. We find that common MT techniques used to
improve spoken language translation similarly affect the performance of sign
language translation. These findings validate our use of an intermediate text
representation for signed languages to include them in natural language
processing research.
|
[
{
"version": "v1",
"created": "Tue, 11 Oct 2022 12:28:06 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 10:08:01 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Jiang",
"Zifan",
""
],
[
"Moryossef",
"Amit",
""
],
[
"Müller",
"Mathias",
""
],
[
"Ebling",
"Sarah",
""
]
] |
new_dataset
| 0.997103 |
2301.03634
|
Neeloy Chakraborty
|
Neeloy Chakraborty, Aamir Hasan, Shuijing Liu, Tianchen Ji, Weihang
Liang, D. Livingston McPherson, Katherine Driggs-Campbell
|
Structural Attention-Based Recurrent Variational Autoencoder for Highway
Vehicle Anomaly Detection
|
11 pages, 5 figures; Published as a full paper in IFAAMAS
International Conference on Autonomous Agents and Multiagent Systems (AAMAS),
2023; Added appendix and discussion of Att-LSTM-VAE ablation
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In autonomous driving, detection of abnormal driving behaviors is essential
to ensure the safety of vehicle controllers. Prior works in vehicle anomaly
detection have shown that modeling interactions between agents improves
detection accuracy, but certain abnormal behaviors where structured road
information is paramount are poorly identified, such as wrong-way and off-road
driving. We propose a novel unsupervised framework for highway anomaly
detection named Structural Attention-Based Recurrent VAE (SABeR-VAE), which
explicitly uses the structure of the environment to aid anomaly identification.
Specifically, we use a vehicle self-attention module to learn the relations
among vehicles on a road, and a separate lane-vehicle attention module to model
the importance of permissible lanes to aid in trajectory prediction.
Conditioned on the attention modules' outputs, a recurrent encoder-decoder
architecture with a stochastic Koopman operator-propagated latent space
predicts the next states of vehicles. Our model is trained end-to-end to
minimize prediction loss on normal vehicle behaviors, and is deployed to detect
anomalies in (ab)normal scenarios. By combining the heterogeneous vehicle and
lane information, SABeR-VAE and its deterministic variant, SABeR-AE, improve
abnormal AUPR by 18% and 25% respectively on the simulated MAAD highway dataset
over STGAE-KDE. Furthermore, we show that the learned Koopman operator in
SABeR-VAE enforces interpretable structure in the variational latent space. The
results of our method indeed show that modeling environmental factors is
essential to detecting a diverse set of anomalies in deployment. For code
implementation, please visit https://sites.google.com/illinois.edu/saber-vae.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 19:13:58 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 18:12:38 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Chakraborty",
"Neeloy",
""
],
[
"Hasan",
"Aamir",
""
],
[
"Liu",
"Shuijing",
""
],
[
"Ji",
"Tianchen",
""
],
[
"Liang",
"Weihang",
""
],
[
"McPherson",
"D. Livingston",
""
],
[
"Driggs-Campbell",
"Katherine",
""
]
] |
new_dataset
| 0.962693 |
2302.04752
|
Ernest Davis
|
Ernest Davis
|
Benchmarks for Automated Commonsense Reasoning: A Survey
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
More than one hundred benchmarks have been developed to test the commonsense
knowledge and commonsense reasoning abilities of artificial intelligence (AI)
systems. However, these benchmarks are often flawed and many aspects of common
sense remain untested. Consequently, we do not currently have any reliable way
of measuring to what extent existing AI systems have achieved these abilities.
This paper surveys the development and uses of AI commonsense benchmarks. We
discuss the nature of common sense; the role of common sense in AI; the goals
served by constructing commonsense benchmarks; and desirable features of
commonsense benchmarks. We analyze the common flaws in benchmarks, and we argue
that it is worthwhile to invest the work needed to ensure that benchmark examples
are consistently high quality. We survey the various methods of constructing
commonsense benchmarks. We enumerate 139 commonsense benchmarks that have been
developed: 102 text-based, 18 image-based, 12 video based, and 7 simulated
physical environments. We discuss the gaps in the existing benchmarks and
aspects of commonsense reasoning that are not addressed in any existing
benchmark. We conclude with a number of recommendations for future development
of commonsense AI benchmarks.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 16:34:30 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 19:36:41 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Davis",
"Ernest",
""
]
] |
new_dataset
| 0.985511 |
2302.08296
|
Houjian Guo
|
Houjian Guo, Chaoran Liu, Carlos Toshinori Ishi, Hiroshi Ishiguro
|
QuickVC: Any-to-many Voice Conversion Using Inverse Short-time Fourier
Transform for Faster Conversion
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of automatic speech recognition (ASR) and text-to-speech
(TTS) technology, high-quality voice conversion (VC) can be achieved by
extracting source content information and target speaker information to
reconstruct waveforms. However, current methods still require improvement in
terms of inference speed. In this study, we propose a lightweight VITS-based VC
model that uses the HuBERT-Soft model to extract content information features
without speaker information. Through subjective and objective experiments on
synthesized speech, the proposed model demonstrates competitive results in
terms of naturalness and similarity. Importantly, unlike the original VITS
model, we use the inverse short-time Fourier transform (iSTFT) to replace the
most computationally expensive part. Experimental results show that our model
can generate samples at over 5000 kHz on the 3090 GPU and over 250 kHz on the
i9-10900K CPU, achieving competitive speed for the same hardware configuration.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 13:49:09 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 06:52:49 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Feb 2023 12:44:10 GMT"
},
{
"version": "v4",
"created": "Thu, 23 Feb 2023 05:43:07 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Guo",
"Houjian",
""
],
[
"Liu",
"Chaoran",
""
],
[
"Ishi",
"Carlos Toshinori",
""
],
[
"Ishiguro",
"Hiroshi",
""
]
] |
new_dataset
| 0.995005 |
2302.11461
|
Meilin Chen
|
Meilin Chen, Yizhou Wang, Shixiang Tang, Feng Zhu, Haiyang Yang, Lei
Bai, Rui Zhao, Donglian Qi, Wanli Ouyang
|
Saliency Guided Contrastive Learning on Scene Images
|
12 pages, 5 figures. arXiv admin note: text overlap with
arXiv:2106.11952 by other authors
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning holds promise in leveraging large numbers of
unlabeled data. However, its success heavily relies on highly-curated
datasets, e.g., ImageNet, which still need human cleaning. Directly learning
representations from less-curated scene images is essential for pushing
self-supervised learning to a higher level. Different from curated images which
include simple and clear semantic information, scene images are more complex
and mosaic because they often include complex scenes and multiple objects.
Although feasible, recent works have largely overlooked discovering the most
discriminative regions for contrastive learning of object representations in
scene images. In this work, we leverage the saliency map derived from the
model's output during learning to highlight these discriminative regions and
guide the whole contrastive learning. Specifically, the saliency map first
guides the method to crop its discriminative regions as positive pairs and then
reweighs the contrastive losses among different crops by its saliency scores.
Our method significantly improves the performance of self-supervised learning
on scene images by +1.1, +4.3, and +2.2 Top-1 accuracy in ImageNet linear
evaluation and semi-supervised learning with 1% and 10% of ImageNet labels,
respectively. We hope our insights on saliency maps can motivate future
research on more general-purpose unsupervised representation learning from
scene data.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 15:54:07 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 05:46:53 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Chen",
"Meilin",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Tang",
"Shixiang",
""
],
[
"Zhu",
"Feng",
""
],
[
"Yang",
"Haiyang",
""
],
[
"Bai",
"Lei",
""
],
[
"Zhao",
"Rui",
""
],
[
"Qi",
"Donglian",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
new_dataset
| 0.998773 |
2302.11569
|
Zhifeng Wang
|
Liting Lyu, Zhifeng Wang, Haihong Yun, Zexue Yang, Ya Li
|
DKT-STDRL: Spatial and Temporal Representation Learning Enhanced Deep
Knowledge Tracing for Learning Performance Prediction
|
22 pages
| null | null | null |
cs.LG cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge tracing (KT) serves as a primary part of intelligent education
systems. Most current KTs either rely on expert judgments or only exploit a
single network structure, which affects the full expression of learning
features. To adequately mine features of students' learning process, Deep
Knowledge Tracing Based on Spatial and Temporal Deep Representation Learning
for Learning Performance Prediction (DKT-STDRL) is proposed in this paper.
DKT-STDRL first extracts spatial features from students' learning history
sequences and then extracts temporal features to uncover deeper hidden
information. Specifically, the DKT-STDRL model uses a CNN to extract the
spatial feature information of students' exercise sequences. The spatial
features are then concatenated with the original exercise features to form
joint learning features, which are fed into the BiLSTM part.
Finally, the BiLSTM part extracts the temporal features from the joint learning
features to obtain the prediction information of whether the students answer
correctly at the next time step. Experiments on the public education datasets
ASSISTment2009, ASSISTment2015, Synthetic-5, ASSISTchall, and Statics2011 prove
that DKT-STDRL can achieve better prediction effects than DKT and CKT.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 09:23:21 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Lyu",
"Liting",
""
],
[
"Wang",
"Zhifeng",
""
],
[
"Yun",
"Haihong",
""
],
[
"Yang",
"Zexue",
""
],
[
"Li",
"Ya",
""
]
] |
new_dataset
| 0.990063 |
2302.11649
|
Jason Xinyu Liu
|
Jason Xinyu Liu, Ziyi Yang, Ifrah Idrees, Sam Liang, Benjamin
Schornstein, Stefanie Tellex, Ankit Shah
|
Lang2LTL: Translating Natural Language Commands to Temporal Robot Task
Specification
| null | null | null | null |
cs.RO cs.AI cs.CL cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language provides a powerful modality to program robots to perform
temporal tasks. Linear temporal logic (LTL) provides unambiguous semantics for
formal descriptions of temporal tasks. However, existing approaches cannot
accurately and robustly translate English sentences to their equivalent LTL
formulas in unseen environments. To address this problem, we propose Lang2LTL,
a novel modular system that leverages pretrained large language models to first
extract referring expressions from a natural language command, then ground the
expressions to real-world landmarks and objects, and finally translate the
command into an LTL task specification for the robot. It enables any robotic
system to interpret natural language navigation commands without additional
training, provided that it tracks its position and has a semantic map with
landmarks labeled with free-form text. We demonstrate the state-of-the-art
ability to generalize to multi-scale navigation domains such as OpenStreetMap
(OSM) and CleanUp World (a simulated household environment). Lang2LTL achieves
an average accuracy of 88.4% in translating challenging LTL formulas in 22
unseen OSM environments as evaluated on a new corpus of over 10,000 commands,
22 times better than the previous SoTA. Without modification, the best
performing Lang2LTL model on the OSM dataset can translate commands in CleanUp
World with 82.8% accuracy. As a part of our proposed comprehensive evaluation
procedures, we collected a new labeled dataset of English commands representing
2,125 unique LTL formulas, the largest ever dataset of natural language
commands to LTL specifications for robotic tasks with the most diverse LTL
formulas, 40 times more than the previous largest dataset. Finally, we integrated
Lang2LTL with a planner to command a quadruped mobile robot to perform
multi-step navigational tasks in an analog real-world environment created in
the lab.
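The modular extract-ground-translate pipeline described in the abstract can be illustrated with a toy sketch. Everything below is hypothetical: a simple keyword rule stands in for the pretrained-LLM extraction stage, and only a "visit X then Y" command pattern is handled.

```python
def ground_and_translate(command, landmarks):
    """Toy sketch of the Lang2LTL pipeline stages on a 'visit X then Y'
    command pattern (a hypothetical keyword rule stands in for the
    pretrained-LLM extraction module): extract referring expressions,
    ground them to landmark symbols, and emit a sequenced-visit LTL
    formula of the form F(a & F(b & ...))."""
    if command.startswith("visit "):
        command = command[len("visit "):]
    exprs = command.split(" then ")          # referring-expression extraction
    symbols = [landmarks[e] for e in exprs]  # grounding to the semantic map
    formula = symbols[-1]
    for s in reversed(symbols[:-1]):         # translation to LTL
        formula = f"{s} & F ({formula})"
    return f"F ({formula})"
```

For example, `ground_and_translate("visit the cafe then the bank", {"the cafe": "a", "the bank": "b"})` yields the sequencing formula `F (a & F (b))`.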
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 20:56:40 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Liu",
"Jason Xinyu",
""
],
[
"Yang",
"Ziyi",
""
],
[
"Idrees",
"Ifrah",
""
],
[
"Liang",
"Sam",
""
],
[
"Schornstein",
"Benjamin",
""
],
[
"Tellex",
"Stefanie",
""
],
[
"Shah",
"Ankit",
""
]
] |
new_dataset
| 0.999798 |
2302.11667
|
Édouard Bonnet
|
Édouard Bonnet, Dibyayan Chakraborty, Julien Duron
|
Cutting Barnette graphs perfectly is hard
|
19 pages, 7 figures
| null | null | null |
cs.CC cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A perfect matching cut is a perfect matching that is also a cutset, or
equivalently a perfect matching containing an even number of edges on every
cycle. The corresponding algorithmic problem, Perfect Matching Cut, is known to
be NP-complete in subcubic bipartite graphs [Le & Telle, TCS '22] but its
complexity was open in planar graphs and in cubic graphs. We settle both
questions at once by showing that Perfect Matching Cut is NP-complete in
3-connected cubic bipartite planar graphs or Barnette graphs. Prior to our
work, among problems whose input is solely an undirected graph, only Distance-2
4-Coloring was known NP-complete in Barnette graphs. Notably, Hamiltonian Cycle
would only join this private club if Barnette's conjecture were refuted.
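The object of study can be made concrete with a brute-force checker (a sketch, not the paper's reduction): enumerate perfect matchings and test whether removing one disconnects the graph. The 3-cube, a small Barnette graph, does admit a perfect matching cut.

```python
from itertools import combinations
from collections import deque

def has_perfect_matching_cut(n, edges):
    """Brute-force sketch: does the graph on vertices 0..n-1 have a
    perfect matching that is also an edge cut? Exponential, so only
    for tiny instances; the paper shows the decision problem is
    NP-complete even on Barnette graphs."""
    def connected(edge_set):
        adj = {v: [] for v in range(n)}
        for u, v in edge_set:
            adj[u].append(v)
            adj[v].append(u)
        seen, queue = {0}, deque([0])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) == n

    for matching in combinations(edges, n // 2):
        covered = [v for e in matching for v in e]
        if len(set(covered)) != n:
            continue  # not a perfect matching
        rest = [e for e in edges if e not in matching]
        if not connected(rest):
            return True  # this perfect matching is a cutset
    return False
```

On the 3-cube, the matching along one dimension disconnects the two opposite 4-cycles, so the checker returns `True`; on K4 every perfect matching leaves a connected 4-cycle.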
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 21:43:07 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Bonnet",
"Édouard",
""
],
[
"Chakraborty",
"Dibyayan",
""
],
[
"Duron",
"Julien",
""
]
] |
new_dataset
| 0.999307 |
2302.11683
|
Yi Ru Wang
|
Yi Ru Wang, Yuchi Zhao, Haoping Xu, Saggi Eppel, Alan Aspuru-Guzik,
Florian Shkurti, Animesh Garg
|
MVTrans: Multi-View Perception of Transparent Objects
|
Accepted to ICRA 2023; 6 pages, 4 figures, 4 tables
| null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transparent object perception is a crucial skill for applications such as
robot manipulation in household and laboratory settings. Existing methods
utilize RGB-D or stereo inputs to handle a subset of perception tasks including
depth and pose estimation. However, transparent object perception remains an
open problem. In this paper, we forgo the unreliable depth map from RGB-D
sensors and extend the stereo based method. Our proposed method, MVTrans, is an
end-to-end multi-view architecture with multiple perception capabilities,
including depth estimation, segmentation, and pose estimation. Additionally, we
establish a novel procedural photo-realistic dataset generation pipeline and
create a large-scale transparent object detection dataset, Syn-TODD, which is
suitable for training networks with all three modalities, RGB-D, stereo and
multi-view RGB. Project Site: https://ac-rad.github.io/MVTrans/
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 22:45:28 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Wang",
"Yi Ru",
""
],
[
"Zhao",
"Yuchi",
""
],
[
"Xu",
"Haoping",
""
],
[
"Eppel",
"Saggi",
""
],
[
"Aspuru-Guzik",
"Alan",
""
],
[
"Shkurti",
"Florian",
""
],
[
"Garg",
"Animesh",
""
]
] |
new_dataset
| 0.999213 |
2302.11720
|
Khac-Hoang Ngo
|
Khac-Hoang Ngo, Alexandre Graell i Amat, and Giuseppe Durisi
|
Irregular Repetition Slotted ALOHA Over the Binary Adder Channel
|
accepted to IEEE International Conference on Communication (ICC) 2023
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an irregular repetition slotted ALOHA (IRSA) based random-access
protocol for the binary adder channel (BAC). The BAC captures important
physical-layer concepts, such as packet generation, per-slot decoding, and
information rate, which are neglected in the commonly considered collision
channel model. We divide a frame into slots and let users generate a packet, to
be transmitted over a slot, from a given codebook. In a state-of-the-art scheme
proposed by Paolini et al. (2022), the codebook is constructed as the
parity-check matrix of a BCH code. Here, we construct the codebook from
independent and identically distributed binary symbols to obtain a
random-coding achievability bound. Our per-slot decoder progressively discards
incompatible codewords from a list of candidate codewords, and can be improved
by shrinking this list across iterations. In a regime of practical interest,
our scheme can resolve more colliding users in a slot and thus achieves a
higher average sum rate than the scheme in Paolini et al. (2022).
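The progressive-discarding per-slot decoder can be sketched for a toy codebook (an illustration under simplifying assumptions: a tiny deterministic codebook and a known number of colliding users, not the paper's random-coding construction):

```python
from itertools import combinations

def bac_decode(received, codebook, k):
    """Sketch of a per-slot decoder for the binary adder channel:
    `received` is the componentwise integer sum of k transmitted
    codewords. First discard incompatible codewords, then search
    the reduced candidate list for a consistent k-subset."""
    # A codeword is incompatible if it has a 1 where the received sum is 0.
    candidates = [c for c in codebook
                  if all(r >= x for r, x in zip(received, c))]
    for combo in combinations(candidates, k):
        sums = [sum(bits) for bits in zip(*combo)]
        if sums == list(received):
            return set(combo)
    return None  # no consistent subset: declare a decoding failure
```

With codebook `[(1,0,0,1), (0,1,1,0), (1,1,0,0), (0,0,1,1)]` and received sum `(2,1,0,1)`, two of the four codewords are discarded outright and the remaining pair is the unique decoding.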
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 00:52:33 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Ngo",
"Khac-Hoang",
""
],
[
"Amat",
"Alexandre Graell i",
""
],
[
"Durisi",
"Giuseppe",
""
]
] |
new_dataset
| 0.999445 |
2302.11766
|
Mayank Singh
|
Rahul Gupta, Vivek Srivastava, Mayank Singh
|
MUTANT: A Multi-sentential Code-mixed Hinglish Dataset
|
Accepted in Findings of EACL
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The multi-sentential long sequence textual data unfolds several interesting
research directions pertaining to natural language processing and generation.
Though we observe several high-quality long-sequence datasets for English and
other monolingual languages, there is no significant effort in building such
resources for code-mixed languages such as Hinglish (code-mixing of
Hindi-English). In this paper, we propose a novel task of identifying
multi-sentential code-mixed text (MCT) from multilingual articles. As a use
case, we leverage multilingual articles from two different data sources and
build a first-of-its-kind multi-sentential code-mixed Hinglish dataset i.e.,
MUTANT. We propose a token-level language-aware pipeline and extend the
existing metrics measuring the degree of code-mixing to a multi-sentential
framework and automatically identify MCT in the multilingual articles. The
MUTANT dataset comprises 67k articles with 85k identified Hinglish MCTs. To
facilitate future research, we make the dataset publicly available.
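A standard degree-of-code-mixing metric of the kind the abstract extends is the Code-Mixing Index (CMI) of Das and Gambäck (2014); a minimal sketch (the paper's multi-sentential extension is not reproduced here):

```python
from collections import Counter

def code_mixing_index(tags):
    """Sketch of the token-level Code-Mixing Index: `tags` are
    per-token language labels, e.g. 'hi' / 'en', with
    language-independent tokens tagged 'other' and excluded.
    CMI = 100 * (1 - max_lang_count / num_language_tokens)."""
    lang = [t for t in tags if t != "other"]
    if not lang:
        return 0.0  # no language tokens: monolingual by convention
    max_lang = Counter(lang).most_common(1)[0][1]
    return 100.0 * (1.0 - max_lang / len(lang))
```

A fully monolingual span scores 0, while a span with three Hindi and one English token scores 25.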
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 04:04:18 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Gupta",
"Rahul",
""
],
[
"Srivastava",
"Vivek",
""
],
[
"Singh",
"Mayank",
""
]
] |
new_dataset
| 0.999272 |
2302.11866
|
Wanling Gao
|
Ke Liu, Wanling Gao, Chunjie Luo, Cheng Huang, Chunxin Lan, Zhenxing
Zhang, Lei Wang, Xiwen He, Nan Li, and Jianfeng Zhan
|
DCNetBench: Scaleable Data Center Network Benchmarking
|
19 pages, 15 figures
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Data center networking is the central infrastructure of the modern
information society. However, benchmarking them is very challenging as the
real-world network traffic is difficult to model, and Internet service giants
treat the network traffic as confidential. Several industries have published a
few publicly available network traces. However, these traces are collected from
specific data center environments, e.g., applications, network topology,
protocols, and hardware devices, and thus cannot be scaled to different users,
underlying technologies, and varying benchmarking requirements.
This article argues we should scale different data center applications and
environments in designing, implementing, and evaluating data center networking
benchmarking. We build DCNetBench, the first application-driven data center
network benchmarking that can scale to different users, underlying
technologies, and varying benchmarking requirements. The methodology is as
follows. We built an emulated system that can simulate networking with
different configurations. Then we run applications on the emulated systems to
capture the realistic network traffic patterns; we analyze and classify these
patterns to model and replay those traces. Finally, we provide an automatic
benchmarking framework to support this pipeline. The evaluations on DCNetBench
show its scalability, effectiveness, and diversity for data center network
benchmarking.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 09:12:52 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Liu",
"Ke",
""
],
[
"Gao",
"Wanling",
""
],
[
"Luo",
"Chunjie",
""
],
[
"Huang",
"Cheng",
""
],
[
"Lan",
"Chunxin",
""
],
[
"Zhang",
"Zhenxing",
""
],
[
"Wang",
"Lei",
""
],
[
"He",
"Xiwen",
""
],
[
"Li",
"Nan",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.993959 |
2302.12007
|
Shannan Guan
|
Shannan Guan, Xin Yu, Wei Huang, Gengfa Fang, Haiyan Lu
|
DMMG: Dual Min-Max Games for Self-Supervised Skeleton-Based Action
Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a new Dual Min-Max Games (DMMG) based
self-supervised skeleton action recognition method by augmenting unlabeled data
in a contrastive learning framework. Our DMMG consists of a viewpoint variation
min-max game and an edge perturbation min-max game. These two min-max games
adopt an adversarial paradigm to perform data augmentation on the skeleton
sequences and graph-structured body joints, respectively. Our viewpoint
variation min-max game focuses on constructing various hard contrastive pairs
by generating skeleton sequences from various viewpoints. These hard
contrastive pairs help our model learn representative action features, thus
facilitating model transfer to downstream tasks. Moreover, our edge
perturbation min-max game specializes in building diverse hard contrastive
samples through perturbing connectivity strength among graph-based body joints.
The connectivity-strength varying contrastive pairs enable the model to capture
minimal sufficient information of different actions, such as representative
gestures for an action while preventing the model from overfitting. By fully
exploiting the proposed DMMG, we can generate sufficient challenging
contrastive pairs and thus achieve discriminative action feature
representations from unlabeled skeleton data in a self-supervised manner.
Extensive experiments demonstrate that our method achieves superior results
under various evaluation protocols on widely-used NTU-RGB+D and NTU120-RGB+D
datasets.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 08:53:11 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Guan",
"Shannan",
""
],
[
"Yu",
"Xin",
""
],
[
"Huang",
"Wei",
""
],
[
"Fang",
"Gengfa",
""
],
[
"Lu",
"Haiyan",
""
]
] |
new_dataset
| 0.99337 |
2302.12054
|
Maurice HT Ling
|
Zhu En Chay, Bing Feng Goh, Maurice HT Ling
|
PNet: A Python Library for Petri Net Modeling and Simulation
| null |
Advances in Computer Science: an international journal 5(4): 24-30
(2016)
| null | null |
cs.MS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Petri Net is a formalism to describe changes between 2 or more states across
discrete time and has been used to model many systems. We present PNet - a pure
Python library for Petri Net modeling and simulation. The design of PNet
focuses on reducing the learning curve needed to
define a Petri Net by using a text-based language rather than programming
constructs to define transition rules. Complex transition rules can be defined
as regular Python functions. To demonstrate the simplicity of PNet, we present
2 examples - bread baking, and epidemiological models.
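The core idea of a place/transition net evolving over discrete time can be sketched in a few lines of pure Python (an illustrative toy, not PNet's actual API):

```python
def fire(marking, transitions):
    """One discrete time step of a simple place/transition Petri net:
    `marking` maps place -> token count; each transition is a pair
    (inputs, outputs) of place -> token-count dicts. Each enabled
    transition fires once per step, in list order."""
    new = dict(marking)
    for inputs, outputs in transitions:
        if all(new.get(p, 0) >= k for p, k in inputs.items()):
            for p, k in inputs.items():
                new[p] -= k                   # consume input tokens
            for p, k in outputs.items():
                new[p] = new.get(p, 0) + k    # produce output tokens
    return new

def simulate(marking, transitions, steps):
    """Run the net for a fixed number of discrete time steps."""
    for _ in range(steps):
        marking = fire(marking, transitions)
    return marking
```

In the spirit of the bread-baking example, a single transition turning one `dough` token into one `bread` token converts two of three dough tokens after two steps.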
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 14:27:50 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Chay",
"Zhu En",
""
],
[
"Goh",
"Bing Feng",
""
],
[
"Ling",
"Maurice HT",
""
]
] |
new_dataset
| 0.997167 |
2302.12056
|
Maurice HT Ling
|
Justin Sam Chew, Maurice HT Ling
|
TAPPS Release 1: Plugin-Extensible Platform for Technical Analysis and
Applied Statistics
| null |
Advances in Computer Science: an international journal 5(1):
132-141 (2016)
| null | null |
cs.MS stat.AP
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present the first release of TAPPS (Technical Analysis and Applied
Statistics System); a Python implementation of a thin software platform aimed
towards technical analyses and applied statistics. The core of TAPPS is a
container for 2-dimensional data frame objects and a TAPPS command language.
TAPPS language is not meant to be a programming language for script and plugin
development but for the operational purposes. In this aspect, TAPPS language
takes on the flavor of SQL rather than R, resulting in a shallower learning
curve. All analytical functions are implemented as plugins. This results in a
defined plugin system, which enables rapid development and incorporation of
analysis functions. TAPPS Release 1 is released under GNU General Public
License 3 for academic and non-commercial use. TAPPS code repository can be
found at http://github.com/mauriceling/tapps.
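A plugin system of the kind the abstract describes, where every analytical function is registered and dispatched by name, can be sketched as follows (hypothetical; not TAPPS's real interface):

```python
class PluginRegistry:
    """Minimal sketch of a name-keyed plugin system: analytical
    functions register under a command name and are dispatched by
    that name, so new analyses can be added without touching the core."""

    def __init__(self):
        self._plugins = {}

    def register(self, name):
        def wrap(func):
            self._plugins[name] = func  # record the plugin under its name
            return func
        return wrap

    def run(self, name, *args, **kwargs):
        return self._plugins[name](*args, **kwargs)
```

Registering a `mean` plugin and invoking it via `registry.run("mean", [1, 2, 3])` mirrors how an operational command language can dispatch to plugins without exposing programming constructs to the user.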
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 14:30:20 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Chew",
"Justin Sam",
""
],
[
"Ling",
"Maurice HT",
""
]
] |
new_dataset
| 0.99754 |
2302.12136
|
Ishan Bansal
|
Ishan Bansal and Oktay Günlük
|
Warehouse Problem with Bounds, Fixed Costs and Complementarity
Constraints
|
Version 1 of full paper
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper studies an open question in the warehouse problem where a merchant
trading a commodity tries to find an optimal inventory-trading policy to decide
on purchase and sale quantities during a fixed time horizon in order to
maximize their total pay-off, making use of fluctuations in sale and cost
prices. We provide the first known polynomial-time algorithms for the case when
there are fixed costs for purchases and sales, optional complementarity
constraints that prohibit purchasing and selling during the same time period,
and bounds on purchase and sales quantities. We do so by providing an exact
characterization of the extreme points of the feasible region and using this to
construct a suitable network where a min-cost flow computation provides an
optimal solution. We are also able to provide polynomial extended linear
formulations for the original feasible regions. Our methods build on the work
by Wolsey and Yaman (Discrete Optimization 2018). We also consider the problem
without fixed costs and provide a fully polynomial time approximation scheme in
a setting with time-dependent bounds.
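The flavor of the problem without fixed costs can be conveyed by a small dynamic program over integer inventory levels (an illustration under simplifying assumptions: equal per-period buy/sell price and unit-granularity stock, not the paper's min-cost-flow algorithm):

```python
def warehouse_max_profit(prices, capacity):
    """DP sketch for a simplified warehouse problem: best[x] is the
    maximum pay-off reachable with x units in stock; each period the
    merchant moves stock from level x to level y at the period price,
    and inventory must be empty at the end of the horizon."""
    NEG = float("-inf")
    best = [0.0] + [NEG] * capacity  # start with an empty warehouse
    for p in prices:
        new = [NEG] * (capacity + 1)
        for x, val in enumerate(best):
            if val == NEG:
                continue
            for y in range(capacity + 1):
                # buying y - x units costs p * (y - x); selling earns when y < x
                new[y] = max(new[y], val - p * (y - x))
        best = new
    return best[0]  # require empty inventory at the end
```

With prices `[1, 3, 2, 5]` and capacity 1, the optimal policy buys at 1, sells at 3, buys at 2, and sells at 5, for a pay-off of 5.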
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 16:21:27 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Bansal",
"Ishan",
""
],
[
"Günlük",
"Oktay",
""
]
] |
new_dataset
| 0.999245 |
2302.12142
|
Shihao Ju
|
Shihao Ju and Theodore S. Rappaport
|
142 GHz Multipath Propagation Measurements and Path Loss Channel
Modeling in Factory Buildings
|
6 pages, 8 figures
|
2023 IEEE International Conference on Communications (ICC), May.
2023, pp. 1-6
| null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents sub-Terahertz (THz) radio propagation measurements at 142
GHz conducted in four factories with various layouts and facilities to explore
sub-THz wireless channels for smart factories in 6G and beyond. Here we study
spatial and temporal channel responses at 82 transmitter-receiver (TX-RX)
locations across four factories in the New York City area and over distances
from 5 m to 85 m in both line-of-sight (LOS) and non-LOS (NLOS) environments.
The measurements were performed with a sliding-correlation-based channel
sounder with 1 GHz RF bandwidth with steerable directional horn antennas with
27 dBi gain and 8° half-power beamwidth at both TX and RX, using both
vertical and horizontal antenna polarizations, yielding over 75,000 directional
power delay profiles. Channel measurements of two RX heights at 1.5 m (high)
emulating handheld devices and at 0.5 m (low) emulating automated guided
vehicles (AGVs) were conducted for automated industrial scenarios with various
clutter densities. Results yield the first path loss models for indoor factory
(InF) environments at 142 GHz and show the low RX height experiences a mean
path loss increase of 10.7 dB and 6.0 dB when compared with the high RX height
at LOS and NLOS locations, respectively. Furthermore, flat and rotatable metal
plates were leveraged as passive reflecting surfaces (PRSs) in channel
enhancement measurements to explore the potential power gain on sub-THz
propagation channels, demonstrating a range from 0.5 to 22 dB improvement with
a mean of 6.5 dB in omnidirectional channel gain as compared to when no PRSs
are present.
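Path loss models of the kind fitted here are commonly expressed in the close-in (CI) free-space-reference form; a sketch (the paper's fitted path loss exponents for the factory scenarios are not reproduced here):

```python
import math

def ci_path_loss_db(freq_ghz, dist_m, ple):
    """Close-in (CI) free-space-reference path loss model widely used
    in the mmWave/sub-THz literature:
        PL(f, d) = FSPL(f, 1 m) + 10 * n * log10(d / 1 m),  d >= 1 m,
    where n is the path loss exponent (PLE) fitted from measurements
    and FSPL(f, 1 m) = 32.4 + 20 log10(f_GHz) dB."""
    fspl_1m = 32.4 + 20.0 * math.log10(freq_ghz)  # free-space loss at 1 m
    return fspl_1m + 10.0 * ple * math.log10(dist_m)
```

With a PLE of 2 (free space), the model adds 20 dB per decade of distance on top of the roughly 75 dB already incurred in the first meter at 142 GHz.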
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 16:28:47 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Ju",
"Shihao",
""
],
[
"Rappaport",
"Theodore S.",
""
]
] |
new_dataset
| 0.996466 |
2302.12190
|
Ciprian-Octavian Truică
|
Ciprian-Octavian Truică and Elena-Simona Apostol and
Radu-Cătălin Nicolescu and Panagiotis Karras
|
MCWDST: a Minimum-Cost Weighted Directed Spanning Tree Algorithm for
Real-Time Fake News Mitigation in Social Media
| null | null | null | null |
cs.SI cs.AI cs.CL cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The widespread availability of internet access and handheld devices confers
to social media a power similar to the one newspapers used to have. People seek
affordable information on social media and can reach it within seconds. Yet
this convenience comes with dangers; any user may freely post whatever they
please and the content can stay online for a long period, regardless of its
truthfulness. A need to detect untruthful information, also known as fake news,
arises. In this paper, we present an end-to-end solution that accurately
detects fake news and immunizes network nodes that spread them in real-time. To
detect fake news, we propose two new stack deep learning architectures that
utilize convolutional and bidirectional LSTM layers. To mitigate the spread of
fake news, we propose a real-time network-aware strategy that (1) constructs a
minimum-cost weighted directed spanning tree for a detected node, and (2)
immunizes nodes in that tree by scoring their harmfulness using a novel ranking
function. We demonstrate the effectiveness of our solution on five real-world
datasets.
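Growing a minimum-cost directed tree rooted at a detected node can be sketched with a Dijkstra-style arborescence (an illustrative stand-in for the paper's MCWDST construction, whose exact edge weighting is not reproduced here):

```python
import heapq

def min_cost_tree(graph, root):
    """Sketch: grow a shortest-path arborescence of the nodes reachable
    from `root` (Dijkstra with lazy deletion). `graph[u]` maps
    neighbour -> edge weight; returns a child -> parent map that
    encodes the directed spanning tree."""
    dist, parent = {root: 0.0}, {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent
```

On a toy network where reaching `c` directly costs 4 but going through `b` costs 3, the tree attaches `c` under `b`; nodes in the resulting tree could then be scored and immunized in order of harmfulness.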
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 17:31:40 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Truică",
"Ciprian-Octavian",
""
],
[
"Apostol",
"Elena-Simona",
""
],
[
"Nicolescu",
"Radu-Cătălin",
""
],
[
"Karras",
"Panagiotis",
""
]
] |
new_dataset
| 0.99878 |