id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2302.12249
|
Christian Reiser
|
Christian Reiser and Richard Szeliski and Dor Verbin and Pratul P.
Srinivasan and Ben Mildenhall and Andreas Geiger and Jonathan T. Barron and
Peter Hedman
|
MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in
Unbounded Scenes
|
Video and interactive web demo available at https://merf42.github.io
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Neural radiance fields enable state-of-the-art photorealistic view synthesis.
However, existing radiance field representations are either too
compute-intensive for real-time rendering or require too much memory to scale
to large scenes. We present a Memory-Efficient Radiance Field (MERF)
representation that achieves real-time rendering of large-scale scenes in a
browser. MERF reduces the memory consumption of prior sparse volumetric
radiance fields using a combination of a sparse feature grid and
high-resolution 2D feature planes. To support large-scale unbounded scenes, we
introduce a novel contraction function that maps scene coordinates into a
bounded volume while still allowing for efficient ray-box intersection. We
design a lossless procedure for baking the parameterization used during
training into a model that achieves real-time rendering while still preserving
the photorealistic view synthesis quality of a volumetric radiance field.
|
[
{
"version": "v1",
"created": "Thu, 23 Feb 2023 18:59:07 GMT"
}
] | 2023-02-24T00:00:00 |
[
[
"Reiser",
"Christian",
""
],
[
"Szeliski",
"Richard",
""
],
[
"Verbin",
"Dor",
""
],
[
"Srinivasan",
"Pratul P.",
""
],
[
"Mildenhall",
"Ben",
""
],
[
"Geiger",
"Andreas",
""
],
[
"Barron",
"Jonathan T.",
""
],
[
"Hedman",
"Peter",
""
]
] |
new_dataset
| 0.990896 |
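A note on the contraction idea in the MERF abstract: the sketch below is a minimal NumPy illustration of a mip-NeRF-360-style contraction using the L-infinity norm, which keeps the contracted domain an axis-aligned cube and therefore allows cheap ray-box intersection. MERF's actual piecewise-projective contraction differs in detail, so treat the function, its constants, and the sample points as assumptions for illustration.

```python
import numpy as np

def contract_linf(x: np.ndarray) -> np.ndarray:
    """Map unbounded scene coordinates into the cube [-2, 2]^3.

    Points with L-inf norm <= 1 are unchanged; points outside are pulled
    in so that infinity maps to the cube boundary. This mirrors the
    mip-NeRF 360 contraction but uses the L-inf norm, keeping the
    contracted domain an axis-aligned box (cheap ray-box intersection).
    """
    x = np.asarray(x, dtype=np.float64)
    n = np.max(np.abs(x), axis=-1, keepdims=True)   # per-point L-inf norm
    n_safe = np.maximum(n, 1.0)                     # avoid divide-by-zero inside the unit cube
    scale = np.where(n <= 1.0, 1.0, (2.0 - 1.0 / n_safe) / n_safe)
    return x * scale

# Far-away points land near, but never on, the boundary of [-2, 2]^3.
pts = np.array([[0.5, 0.0, 0.0], [10.0, 0.0, 0.0], [1000.0, -1000.0, 500.0]])
print(contract_linf(pts))
```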
2004.07371
|
Eduardo Nuno Almeida
|
Eduardo Nuno Almeida, André Coelho, José Ruela, Rui Campos, Manuel
Ricardo
|
Joint Traffic-Aware UAV Placement and Predictive Routing for Aerial
Networks
| null |
Ad Hoc Networks, Volume 118, 2021, pp. 102525
|
10.1016/j.adhoc.2021.102525
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aerial networks, composed of Unmanned Aerial Vehicles (UAVs) acting as Wi-Fi
access points or cellular base stations, are emerging as an interesting
solution to provide on-demand wireless connectivity to users, when there is no
network infrastructure available, or to enhance the network capacity. This
article proposes a traffic-aware topology control solution for aerial networks
that holistically combines the placement of UAVs with a predictive and
centralized routing protocol. The synergy created by the combination of the UAV
placement and routing solutions allows the aerial network to seamlessly update
its topology according to the users' traffic demand, whilst minimizing the
disruption caused by the movement of the UAVs. As a result, the Quality of
Service (QoS) provided to the users is improved. The components of the proposed
solution are described and evaluated individually in this article by means of
simulation and an experimental testbed. The results show that all the
components improve the QoS provided to the users when compared to the
corresponding baseline solutions.
|
[
{
"version": "v1",
"created": "Wed, 15 Apr 2020 22:01:13 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Almeida",
"Eduardo Nuno",
""
],
[
"Coelho",
"André",
""
],
[
"Ruela",
"José",
""
],
[
"Campos",
"Rui",
""
],
[
"Ricardo",
"Manuel",
""
]
] |
new_dataset
| 0.982719 |
2202.12055
|
Hendrik Molter
|
Jessica Enright and Kitty Meeks and Hendrik Molter
|
Counting Temporal Paths
| null | null | null | null |
cs.DS cs.CC cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The betweenness centrality of a vertex v is an important centrality measure
that quantifies how many optimal paths between pairs of other vertices visit v.
Computing betweenness centrality in a temporal graph, in which the edge set may
change over discrete timesteps, requires us to count temporal paths that are
optimal with respect to some criterion. For several natural notions of
optimality, including foremost or fastest temporal paths, this counting problem
reduces to #Temporal Path, the problem of counting all temporal paths between a
fixed pair of vertices; like the problems of counting foremost and fastest
temporal paths, #Temporal Path is #P-hard in general. Motivated by the many
applications of this intractable problem, we initiate a systematic study of the
parameterised and approximation complexity of #Temporal Path. We show that the
problem presumably does not admit an FPT-algorithm parameterised by the feedback vertex
number of the static underlying graph, and that it is hard to approximate in
general. On the positive side, we provide several exact and approximate
FPT-algorithms for special cases.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 12:22:12 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2023 23:24:22 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Enright",
"Jessica",
""
],
[
"Meeks",
"Kitty",
""
],
[
"Molter",
"Hendrik",
""
]
] |
new_dataset
| 0.958008 |
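For a concrete handle on the counting problem in this abstract, here is a minimal brute-force sketch of #Temporal Path: it enumerates temporal paths (distinct vertices, strictly increasing edge timestamps) between a fixed source and target. The edge representation and the strict-increase convention are assumptions for illustration; the exponential worst case is exactly the #P-hardness the paper works around with parameterised and approximation algorithms.

```python
from collections import defaultdict

def count_temporal_paths(edges, s, t):
    """Count temporal paths from s to t in a temporal graph.

    `edges` is an iterable of directed (u, v, time) triples. A temporal
    path visits distinct vertices with strictly increasing timestamps.
    Brute-force DFS: exponential in general, matching #P-hardness.
    """
    out = defaultdict(list)
    for u, v, tm in edges:
        out[u].append((v, tm))

    def dfs(v, last_time, visited):
        if v == t:
            return 1
        total = 0
        for w, tm in out[v]:
            if tm > last_time and w not in visited:
                total += dfs(w, tm, visited | {w})
        return total

    return dfs(s, float("-inf"), frozenset({s}))

# Two temporal paths from 1 to 4: 1-2-4 (times 1,3) and 1-3-4 (times 2,3).
E = [(1, 2, 1), (1, 3, 2), (2, 4, 3), (3, 4, 3), (4, 2, 5)]
print(count_temporal_paths(E, 1, 4))  # -> 2
```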
2203.12132
|
Chadni Islam
|
Chadni Islam, Victor Prokhorenko and M. Ali Babar
|
Runtime Software Patching: Taxonomy, Survey and Future Directions
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Runtime software patching aims to minimize or eliminate service downtime,
user interruptions and potential data losses while deploying a patch. Due to
modern software systems' high variance and heterogeneity, no universal
solutions are available or proposed to deploy and execute patches at runtime.
Existing runtime software patching solutions focus on specific cases,
scenarios, programming languages and operating systems. This paper aims to
identify, investigate and synthesize state-of-the-art runtime software patching
approaches and gives an overview of currently unsolved challenges. It further
provides insights on multiple aspects of runtime patching approaches such as
patch scales, general strategies and responsibilities. This study identifies
seven levels of granularity, two key strategies providing a conceptual model of
three responsible entities and four capabilities of runtime patching solutions.
Through the analysis of the existing literature, this research also reveals
open issues hindering more comprehensive adoption of runtime patching in
practice. Finally, it proposes several crucial future directions that require
further attention from both researchers and practitioners.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 01:54:21 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 06:10:12 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Islam",
"Chadni",
""
],
[
"Prokhorenko",
"Victor",
""
],
[
"Babar",
"M. Ali",
""
]
] |
new_dataset
| 0.990233 |
2206.00718
|
Austin McEver
|
R. Austin McEver, Bowen Zhang, Connor Levenson, A S M Iftekhar, B.S.
Manjunath
|
Context-Driven Detection of Invertebrate Species in Deep-Sea Video
| null |
International Journal of Computer Vision 2023
|
10.1007/s11263-023-01755-4
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Each year, underwater remotely operated vehicles (ROVs) collect thousands of
hours of video of unexplored ocean habitats revealing a plethora of information
regarding biodiversity on Earth. However, fully utilizing this information
remains a challenge as proper annotations and analysis require trained
scientists' time, which is both limited and costly. To this end, we present a
Dataset for Underwater Substrate and Invertebrate Analysis (DUSIA), a benchmark
suite and growing large-scale dataset to train, validate, and test methods for
temporally localizing four underwater substrates as well as temporally and
spatially localizing 59 underwater invertebrate species. DUSIA currently
includes over ten hours of footage across 25 videos captured in 1080p at 30 fps
by an ROV following pre-planned transects across the ocean floor near the
Channel Islands of California. Each video includes annotations indicating the
start and end times of substrates across the video in addition to counts of
species of interest. Some frames are annotated with precise bounding box
locations for invertebrate species of interest, as seen in Figure 1. To our
knowledge, DUSIA is the first dataset of its kind for deep sea exploration,
with video from a moving camera, that includes substrate annotations and
invertebrate species that are present at significant depths where sunlight does
not penetrate. Additionally, we present the novel context-driven object
detector (CDD) where we use explicit substrate classification to influence an
object detection network to simultaneously predict a substrate and species
class influenced by that substrate. We also present a method for improving
training on partially annotated bounding box frames. Finally, we offer a
baseline method for automating the counting of invertebrate species of
interest.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 18:59:46 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"McEver",
"R. Austin",
""
],
[
"Zhang",
"Bowen",
""
],
[
"Levenson",
"Connor",
""
],
[
"Iftekhar",
"A S M",
""
],
[
"Manjunath",
"B. S.",
""
]
] |
new_dataset
| 0.999622 |
2209.12435
|
Chongjian Yuan
|
Chongjian Yuan, Jiarong Lin, Zuhao Zou, Xiaoping Hong and Fu Zhang
|
STD: Stable Triangle Descriptor for 3D place recognition
|
2023 ICRA
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a novel global descriptor termed stable triangle
descriptor (STD) for 3D place recognition. For a triangle, its shape is
uniquely determined by the length of the sides or included angles. Moreover,
the shape of triangles is completely invariant to rigid transformations. Based
on this property, we first design an algorithm to efficiently extract local key
points from the 3D point cloud and encode these key points into triangular
descriptors. Then, place recognition is achieved by matching the side lengths
(and some other information) of the descriptors between point clouds. The point
correspondence obtained from the descriptor matching pair can be further used
in geometric verification, which greatly improves the accuracy of place
recognition. In our experiments, we extensively compare our proposed system
against other state-of-the-art systems (i.e., M2DP, Scan Context) on public
datasets (i.e., KITTI, NCLT, and Complex-Urban) and our self-collected dataset
(with a non-repetitive scanning solid-state LiDAR). All the quantitative
results show that STD has stronger adaptability and a great improvement in
precision over its counterparts. To share our findings and make contributions
to the community, we open source our code on our GitHub:
https://github.com/hku-mars/STD.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 05:55:54 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 09:55:27 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Yuan",
"Chongjian",
""
],
[
"Lin",
"Jiarong",
""
],
[
"Zou",
"Zuhao",
""
],
[
"Hong",
"Xiaoping",
""
],
[
"Zhang",
"Fu",
""
]
] |
new_dataset
| 0.999724 |
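The invariance this abstract builds on, that a triangle's sorted side lengths are unchanged by rigid transformations, can be demonstrated directly. The sketch below is an illustrative reduction only: the real STD pipeline adds key-point extraction, attached directions, and a hash-based voting scheme that are not reproduced here.

```python
import itertools
import numpy as np

def triangle_descriptor(p, q, r, ndigits=3):
    """Sorted side lengths of triangle (p, q, r), rounded for hashing.

    Sorted lengths are invariant to rotation, translation, and vertex
    order, which is the property STD builds on.
    """
    sides = sorted([np.linalg.norm(p - q), np.linalg.norm(q - r), np.linalg.norm(r - p)])
    return tuple(round(s, ndigits) for s in sides)

def descriptor_table(keypoints):
    """Hash table from triangle descriptors to vertex-index triples."""
    table = {}
    for i, j, k in itertools.combinations(range(len(keypoints)), 3):
        d = triangle_descriptor(keypoints[i], keypoints[j], keypoints[k])
        table.setdefault(d, []).append((i, j, k))
    return table

# A rigid transform (rotation + translation) leaves descriptors unchanged.
rng = np.random.default_rng(42)
pts = rng.random((6, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
moved = pts @ R.T + np.array([5.0, -2.0, 1.0])
assert set(descriptor_table(pts)) == set(descriptor_table(moved))
```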
2210.12402
|
Lun Du
|
Feifan Li, Lun Du, Qiang Fu, Shi Han, Yushu Du, Guangming Lu, Zi Li
|
DIGMN: Dynamic Intent Guided Meta Network for Differentiated User
Engagement Forecasting in Online Professional Social Platforms
|
10 pages, Accepted by WSDM'23
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
User engagement prediction plays a critical role for designing interaction
strategies to grow user engagement and increase revenue in online social
platforms. Through the in-depth analysis of the real-world data from the
world's largest professional social platform, i.e., LinkedIn, we find that
users exhibit diverse engagement patterns, and a major reason for the
differences in user engagement patterns is that users have different intents.
That is, people have different intents when using LinkedIn, e.g., applying for
jobs, building connections, or checking notifications, which shows quite
different engagement patterns. Meanwhile, user intents and the corresponding
engagement patterns may change over time. Although such pattern differences and
dynamics are essential for user engagement prediction, differentiating user
engagement patterns based on user dynamic intents for better user engagement
forecasting has not received enough attention in previous works. In this paper,
we propose a Dynamic Intent Guided Meta Network (DIGMN), which can explicitly
model user intent varying with time and perform differentiated user engagement
forecasting. Specifically, we derive some interpretable basic user intents as
prior knowledge from data mining and introduce prior intents in explicitly
modeling dynamic user intent. Furthermore, based on the dynamic user intent
representations, we propose a meta predictor to perform differentiated user
engagement forecasting. Through a comprehensive evaluation on LinkedIn
anonymous user data, our method outperforms state-of-the-art baselines
significantly, i.e., 2.96% and 3.48% absolute error reduction, on
coarse-grained and fine-grained user engagement prediction tasks, respectively,
demonstrating the effectiveness of our method.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 09:57:27 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 00:01:24 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Li",
"Feifan",
""
],
[
"Du",
"Lun",
""
],
[
"Fu",
"Qiang",
""
],
[
"Han",
"Shi",
""
],
[
"Du",
"Yushu",
""
],
[
"Lu",
"Guangming",
""
],
[
"Li",
"Zi",
""
]
] |
new_dataset
| 0.968726 |
2212.11484
|
Zohreh Azizi
|
Zohreh Azizi, C.-C. Jay Kuo
|
SALVE: Self-supervised Adaptive Low-light Video Enhancement
|
12 pages, 7 figures, 4 tables
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A self-supervised adaptive low-light video enhancement method, called SALVE,
is proposed in this work. SALVE first enhances a few key frames of an input
low-light video using a retinex-based low-light image enhancement technique.
For each keyframe, it learns a mapping from low-light image patches to enhanced
ones via ridge regression. These mappings are then used to enhance the
remaining frames in the low-light video. The combination of traditional
retinex-based image enhancement and learning-based ridge regression leads to a
robust, adaptive and computationally inexpensive solution to enhance low-light
videos. Our extensive experiments along with a user study show that 87% of
participants prefer SALVE over prior work.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 05:00:18 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 02:37:05 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Azizi",
"Zohreh",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
new_dataset
| 0.998981 |
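The keyframe ridge regression described in the SALVE abstract has a compact closed form. Below is a minimal sketch, assuming flattened grayscale patches and a single synthetic keyframe pair; the retinex enhancement step, patch geometry, and model selection of the actual pipeline are not reproduced.

```python
import numpy as np

def fit_ridge(X, Y, lam=1e-2):
    """Closed-form ridge regression with a bias term.

    X: (n, d) flattened low-light patches; Y: (n, d) corresponding
    patches from a retinex-enhanced keyframe. Solves
    W = (A^T A + lam I)^-1 A^T Y with A = [X, 1].
    """
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

def enhance(patches, W):
    """Map low-light patches from the remaining frames to enhanced ones."""
    A = np.hstack([patches, np.ones((patches.shape[0], 1))])
    return A @ W

# Toy data: 500 flattened 8x8 patches, synthetic affine "enhancement".
rng = np.random.default_rng(0)
X = rng.random((500, 64))
Y = 0.8 * X + 0.1          # stand-in for the retinex-enhanced keyframe
W = fit_ridge(X, Y)
print(np.abs(enhance(X, W) - Y).mean())  # ~0: the affine map is recovered
```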
2212.14389
|
David Braun
|
Sung Y. Kim and David J. Braun
|
Controllable Mechanical-domain Energy Accumulators
|
Accepted for presentation at the 2023 IEEE International Conference
on Robotics and Automation
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Springs are efficient in storing and returning elastic potential energy but
are unable to hold the energy they store in the absence of an external load.
Lockable springs use clutches to hold elastic potential energy in the absence
of an external load, but have not yet been widely adopted in applications,
partly because clutches introduce design complexity, reduce energy efficiency,
and typically do not afford high fidelity control over the energy stored by the
spring. Here, we present the design of a novel lockable compression spring that
uses a small capstan clutch to passively lock a mechanical spring. The capstan
clutch can lock over 1000 N force at any arbitrary deflection, unlock the
spring in less than 10 ms with a control force less than 1 % of the maximal
spring force, and provide an 80 % energy storage and return efficiency
(comparable to a highly efficient electric motor operated at constant nominal
speed). By retaining the form factor of a regular spring while providing
high-fidelity locking capability even under large spring forces, the proposed
design could facilitate the development of energy-efficient spring-based
actuators and robots.
|
[
{
"version": "v1",
"created": "Thu, 29 Dec 2022 17:45:53 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 00:30:11 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Kim",
"Sung Y.",
""
],
[
"Braun",
"David J.",
""
]
] |
new_dataset
| 0.998343 |
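The roughly 1% control-to-holding force ratio quoted in this abstract is consistent with the classic capstan (belt friction) relation, which is presumably what the capstan clutch exploits. A minimal sketch, with the friction coefficient and wrap angle assumed purely for illustration:

```python
import math

def capstan_ratio(mu: float, wrap_angle_rad: float) -> float:
    """Capstan equation: T_load / T_hold = exp(mu * phi).

    A small holding force T_hold can resist a load T_load that is
    exponentially larger in the friction coefficient mu and the total
    wrap angle phi (radians).
    """
    return math.exp(mu * wrap_angle_rad)

# Illustrative numbers (not from the paper): mu = 0.3, three full wraps.
mu, phi = 0.3, 3 * 2 * math.pi
ratio = capstan_ratio(mu, phi)
print(f"load/hold ratio: {ratio:.0f}")                  # ~286
print(f"hold force for 1000 N load: {1000 / ratio:.1f} N")  # a few newtons
```

With these assumed numbers, a holding force of a few newtons resists a 1000 N load, matching the order of magnitude reported in the abstract.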
2301.09595
|
Adrià Recasens
|
Adrià Recasens, Jason Lin, João Carreira, Drew Jaegle, Luyu Wang,
Jean-baptiste Alayrac, Pauline Luc, Antoine Miech, Lucas Smaira, Ross
Hemsley, Andrew Zisserman
|
Zorro: the masked multimodal transformer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Attention-based models are appealing for multimodal processing because inputs
from multiple modalities can be concatenated and fed to a single backbone
network - thus requiring very little fusion engineering. The resulting
representations are however fully entangled throughout the network, which may
not always be desirable: in learning, contrastive audio-visual self-supervised
learning requires independent audio and visual features to operate, otherwise
learning collapses; in inference, evaluation of audio-visual models should be
possible on benchmarks having just audio or just video. In this paper, we
introduce Zorro, a technique that uses masks to control how inputs from each
modality are routed inside Transformers, keeping some parts of the
representation modality-pure. We apply this technique to three popular
transformer-based architectures (ViT, Swin and HiP) and show that with
contrastive pre-training Zorro achieves state-of-the-art results on most
relevant benchmarks for multimodal tasks (AudioSet and VGGSound). Furthermore,
the resulting models are able to perform unimodal inference on both video and
audio benchmarks such as Kinetics-400 or ESC-50.
|
[
{
"version": "v1",
"created": "Mon, 23 Jan 2023 17:51:39 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 18:58:10 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Recasens",
"Adrià",
""
],
[
"Lin",
"Jason",
""
],
[
"Carreira",
"Joāo",
""
],
[
"Jaegle",
"Drew",
""
],
[
"Wang",
"Luyu",
""
],
[
"Alayrac",
"Jean-baptiste",
""
],
[
"Luc",
"Pauline",
""
],
[
"Miech",
"Antoine",
""
],
[
"Smaira",
"Lucas",
""
],
[
"Hemsley",
"Ross",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
new_dataset
| 0.998322 |
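Zorro's central mechanism, masking attention so unimodal streams never read other modalities while fusion tokens read everything, can be shown with a small mask constructor. This sketch assumes three token groups and a boolean convention where True means "may attend"; the surrounding architecture follows the paper, not this snippet.

```python
import numpy as np

def zorro_mask(n_audio: int, n_video: int, n_fusion: int) -> np.ndarray:
    """Boolean attention mask: mask[i, j] == True lets token i attend to j.

    Audio tokens attend only to audio, video tokens only to video, so
    those streams stay modality-pure; fusion tokens attend to all tokens.
    """
    n = n_audio + n_video + n_fusion
    a = slice(0, n_audio)
    v = slice(n_audio, n_audio + n_video)
    f = slice(n_audio + n_video, n)
    mask = np.zeros((n, n), dtype=bool)
    mask[a, a] = True   # audio -> audio only
    mask[v, v] = True   # video -> video only
    mask[f, :] = True   # fusion -> everything
    return mask

m = zorro_mask(2, 2, 1)
print(m.astype(int))
# With this mask, dropping the video inputs cannot change the audio
# stream, which is what enables unimodal inference and keeps the
# contrastive audio/visual features independent during training.
```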
2302.03461
|
Antoine Amarilli
|
Antoine Amarilli
|
Degree-3 Planar Graphs as Topological Minors of Wall Graphs in
Polynomial Time
|
V2: Updated to fix an error in the proof pointed out by Mikaël
Monet. V3: Updated to point out alternative and simpler proof route following
https://cstheory.stackexchange.com/a/52489
| null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
In this note, we give a proof of the fact that we can efficiently find
degree-3 planar graphs as topological minors of sufficiently large wall graphs.
The result is needed as an intermediate step to fix a proof in my PhD thesis.
|
[
{
"version": "v1",
"created": "Tue, 7 Feb 2023 13:32:41 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 14:27:09 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Feb 2023 13:39:33 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Amarilli",
"Antoine",
""
]
] |
new_dataset
| 0.974351 |
2302.08055
|
Chenjiu Wang
|
Chenjiu Wang, Ke He, Ruiqi Fan, Xiaonan Wang, Yang Kong, Wei Wang,
Qinfen Hao
|
CXL over Ethernet: A Novel FPGA-based Memory Disaggregation Design in
Data Centers
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memory resources in data centers generally suffer from low utilization and
lack of dynamics. Memory disaggregation solves these problems by decoupling CPU
and memory, which currently includes approaches based on RDMA or
interconnection protocols such as Compute Express Link (CXL). However, the
RDMA-based approach involves code refactoring and higher latency. The CXL-based
approach supports native memory semantics and overcomes the shortcomings of
RDMA, but is limited to the rack level. In addition, memory pooling and sharing
based on CXL products are still in early exploration and will take time to
become available. In this paper, we propose the CXL over Ethernet approach, in
which the host processor can access remote memory with memory semantics through
Ethernet. Our approach can support native memory
load/store access and extends the physical range to cross server and rack
levels by taking advantage of CXL and RDMA technologies. We prototype our
approach with one server and two FPGA boards with 100 Gbps network and measure
the memory access latency. Furthermore, we optimize the memory access path by
using data cache and congestion control algorithm in the critical path to
further lower access latency. The evaluation results show that the average
latency for the server to access remote memory is 1.97 μs, which is about
37% lower than the baseline latency in the industry. The latency can be further
reduced to 415 ns with cache block and hit access on FPGA.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 03:36:04 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 08:16:57 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Wang",
"Chenjiu",
""
],
[
"He",
"Ke",
""
],
[
"Fan",
"Ruiqi",
""
],
[
"Wang",
"Xiaonan",
""
],
[
"Kong",
"Yang",
""
],
[
"Wang",
"Wei",
""
],
[
"Hao",
"Qinfen",
""
]
] |
new_dataset
| 0.961367 |
2302.09715
|
Sahithya Ravi
|
Sahithya Ravi, Chris Tanner, Raymond Ng, Vered Shwartz
|
What happens before and after: Multi-Event Commonsense in Event
Coreference Resolution
|
Accepted to EACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Event coreference models cluster event mentions pertaining to the same
real-world event. Recent models rely on contextualized representations to
recognize coreference among lexically or contextually similar mentions.
However, models typically fail to leverage commonsense inferences, which is
particularly limiting for resolving lexically-divergent mentions. We propose a
model that extends event mentions with temporal commonsense inferences. Given a
complex sentence with multiple events, e.g., "The man killed his wife and got
arrested", with the target event "arrested", our model generates plausible
events that happen before the target event - such as "the police arrived", and
after it, such as "he was sentenced". We show that incorporating such
inferences into an existing event coreference model improves its performance,
and we analyze the coreferences in which such temporal knowledge is required.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 01:51:01 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2023 22:44:34 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Ravi",
"Sahithya",
""
],
[
"Tanner",
"Chris",
""
],
[
"Ng",
"Raymond",
""
],
[
"Shwartz",
"Vered",
""
]
] |
new_dataset
| 0.963733 |
2302.09778
|
Lianghua Huang Dr.
|
Lianghua Huang, Di Chen, Yu Liu, Yujun Shen, Deli Zhao, Jingren Zhou
|
Composer: Creative and Controllable Image Synthesis with Composable
Conditions
|
Project page: https://damo-vilab.github.io/composer-page/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Recent large-scale generative models learned on big data are capable of
synthesizing incredible images yet suffer from limited controllability. This
work offers a new generation paradigm that allows flexible control of the
output image, such as spatial layout and palette, while maintaining the
synthesis quality and model creativity. With compositionality as the core idea,
we first decompose an image into representative factors, and then train a
diffusion model with all these factors as the conditions to recompose the
input. At the inference stage, the rich intermediate representations work as
composable elements, leading to a huge design space (i.e., exponentially
proportional to the number of decomposed factors) for customizable content
creation. It is noteworthy that our approach, which we call Composer, supports
various levels of conditions, such as text description as the global
information, depth map and sketch as the local guidance, color histogram for
low-level details, etc. Besides improving controllability, we confirm that
Composer serves as a general framework and facilitates a wide range of
classical generative tasks without retraining. Code and models will be made
available.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 05:48:41 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 02:14:55 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Huang",
"Lianghua",
""
],
[
"Chen",
"Di",
""
],
[
"Liu",
"Yu",
""
],
[
"Shen",
"Yujun",
""
],
[
"Zhao",
"Deli",
""
],
[
"Zhou",
"Jingren",
""
]
] |
new_dataset
| 0.988557 |
2302.10887
|
Andrea Soltoggio
|
Andrea Soltoggio, Eseoghene Ben-Iwhiwhu, Christos Peridis, Pawel
Ladosz, Jeffery Dick, Praveen K. Pilly, Soheil Kolouri
|
The configurable tree graph (CT-graph): measurable problems in partially
observable and distal reward environments for lifelong reinforcement learning
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a set of formally defined and transparent problems for
reinforcement learning algorithms with the following characteristics: (1)
variable degrees of observability (non-Markov observations), (2) distal and
sparse rewards, (3) variable and hierarchical reward structure, (4)
multiple-task generation, (5) variable problem complexity. The environment
provides 1D or 2D categorical observations, and takes actions as input. The
core structure of the CT-graph is a multi-branch tree graph with arbitrary
branching factor, depth, and observation sets that can be varied to increase
the dimensions of the problem in a controllable and measurable way. Two main
categories of states, decision states and wait states, are devised to create a
hierarchy of importance among observations, typical of real-world problems. A
large observation set can produce a vast set of histories that impairs
memory-augmented agents. Variable reward functions allow for the easy creation
of multiple tasks and the ability of an agent to efficiently adapt in dynamic
scenarios where tasks with controllable degrees of similarities are presented.
Challenging complexity levels can be easily achieved due to the exponential
growth of the graph. The problem formulation and accompanying code provide a
fast, transparent, and mathematically defined set of configurable tests to
compare the performance of reinforcement learning algorithms, in particular in
lifelong learning settings.
|
[
{
"version": "v1",
"created": "Sat, 21 Jan 2023 21:05:52 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Soltoggio",
"Andrea",
""
],
[
"Ben-Iwhiwhu",
"Eseoghene",
""
],
[
"Peridis",
"Christos",
""
],
[
"Ladosz",
"Pawel",
""
],
[
"Dick",
"Jeffery",
""
],
[
"Pilly",
"Praveen K.",
""
],
[
"Kolouri",
"Soheil",
""
]
] |
new_dataset
| 0.99608 |
2302.10895
|
Bas Peters
|
Bas Peters
|
CQnet: convex-geometric interpretation and constraining neural-network
trajectories
|
12 pages, 7 figures
| null | null | null |
cs.LG cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce CQnet, a neural network with origins in the CQ algorithm for
solving convex split-feasibility problems and forward-backward splitting.
CQnet's trajectories are interpretable as particles that are tracking a
changing constraint set via its point-to-set distance function while being
elements of another constraint set at every layer. More than just a
convex-geometric interpretation, CQnet accommodates learned and deterministic
constraints that may be sample or data-specific and are satisfied by every
layer and the output. Furthermore, the states in CQnet progress toward another
constraint set at every layer. We provide proof of stability/nonexpansiveness
with minimal assumptions. The combination of constraint handling and stability
puts forward CQnet as a candidate for various tasks where prior knowledge exists
on the network states or output.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 07:38:09 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Peters",
"Bas",
""
]
] |
new_dataset
| 0.980814 |
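For readers unfamiliar with the CQ algorithm that CQnet originates from, here is a minimal sketch of the classical iteration for the split-feasibility problem (find x in C with Ax in Q). The ball constraint sets are chosen purely for illustration; CQnet unrolls and learns around this scheme rather than running it verbatim.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def cq_algorithm(A, x0, proj_C, proj_Q, n_iter=200):
    """CQ iteration: x_{k+1} = P_C(x_k - gamma A^T (Ax_k - P_Q(Ax_k))).

    Solves the split-feasibility problem: find x in C with Ax in Q.
    A step size gamma in (0, 2/||A||^2) guarantees convergence.
    """
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x0
    for _ in range(n_iter):
        y = A @ x
        x = proj_C(x - gamma * A.T @ (y - proj_Q(y)))
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
C = lambda x: project_ball(x, np.zeros(5), 1.0)   # x constrained to unit ball
Q = lambda y: project_ball(y, np.ones(3), 0.5)    # Ax pulled toward the 1-vector
x = cq_algorithm(A, rng.standard_normal(5), C, Q)
print(np.linalg.norm(x), np.linalg.norm(A @ x - np.ones(3)))
```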
2302.10914
|
Hossein Rajaby Faghihi
|
Hossein Rajaby Faghihi, Aliakbar Nafar, Chen Zheng, Roshanak Mirzaee,
Yue Zhang, Andrzej Uszok, Alexander Wan, Tanawan Premsri, Dan Roth, and
Parisa Kordjamshidi
|
GLUECons: A Generic Benchmark for Learning Under Constraints
|
8 pages, Accepted in AAAI 2023 proceedings
| null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research has shown that integrating domain knowledge into deep
learning architectures is effective -- it helps reduce the amount of required
data, improves the accuracy of the models' decisions, and improves the
interpretability of models. However, the research community is missing a
convened benchmark for systematically evaluating knowledge integration methods.
In this work, we create a benchmark that is a collection of nine tasks in the
domains of natural language processing and computer vision. In all cases, we
model external knowledge as constraints, specify the sources of the constraints
for each task, and implement various models that use these constraints. We
report the results of these models using a new set of extended evaluation
criteria in addition to the task performances for a more in-depth analysis.
This effort provides a framework for a more comprehensive and systematic
comparison of constraint integration techniques and for identifying related
research challenges. It will facilitate further research for alleviating some
problems of state-of-the-art neural models.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 16:45:36 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Faghihi",
"Hossein Rajaby",
""
],
[
"Nafar",
"Aliakbar",
""
],
[
"Zheng",
"Chen",
""
],
[
"Mirzaee",
"Roshanak",
""
],
[
"Zhang",
"Yue",
""
],
[
"Uszok",
"Andrzej",
""
],
[
"Wan",
"Alexander",
""
],
[
"Premsri",
"Tanawan",
""
],
[
"Roth",
"Dan",
""
],
[
"Kordjamshidi",
"Parisa",
""
]
] |
new_dataset
| 0.981123 |
2302.10920
|
Erandika Lakmali
|
R. M. D. S. M. Chandrarathna (1), T. W. M. S. A. Weerasinghe (1), N.
S. Madhuranga (1), T. M. L. S. Thennakoon (1), Anjalie Gamage (1), Erandika
Lakmali (2) ((1) Faculty of Computing, Sri Lanka Institute of Information
Technology, Malabe, Sri Lanka, (2) University of Kelaniya, Dalugama,
Kelaniya, Sri Lanka)
|
'The Taurus': Cattle Breeds & Diseases Identification Mobile Application
using Machine Learning
| null |
International Journal of Engineering and Management Research, vol
12, no 6, (December 2022), 198-205
|
10.31033/ijemr.12.6.27
| null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Dairy farming has played an important role in agriculture for thousands of
years, not only in Sri Lanka but also in many other countries. When it comes to
dairy farming, cattle are indispensable. According to literature surveys, almost
3.9 million cattle and calves die each year from different types of diseases.
The causes of these diseases are mainly bacteria, parasites, fungi, chemical
poisons, etc. Infectious diseases can be the greatest threat to livestock
health. The mortality rate of cattle has a huge social, economic and
environmental impact. In order to reduce this negative impact, the proposal
implements a cross-platform mobile application to easily analyze and identify
the diseases cattle suffer from, suggest a solution, and also identify cattle
breeds. The mobile application is designed to identify breeds by analyzing
images of the cattle and to identify diseases by analyzing videos and images of
affected areas. A further model estimates the weight and age of a particular cow
and suggests the best medicine dose for the identified disease. This will be a
huge advantage to farmers as well as to the dairy industry. The name of the
proposed mobile application is 'The Taurus', and this paper addresses the
selected machine learning and image processing models and the approaches taken
to identify diseases and breeds and to suggest prevention methods and medicine
for the identified disease.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 03:11:11 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Chandrarathna",
"R. M. D. S. M.",
""
],
[
"Weerasinghe",
"T. W. M. S. A.",
""
],
[
"Madhuranga",
"N. S.",
""
],
[
"Thennakoon",
"T. M. L. S.",
""
],
[
"Gamage",
"Anjalie",
""
],
[
"Lakmali",
"Erandika",
""
]
] |
new_dataset
| 0.999288 |
2302.11021
|
Ankur Samanta
|
Ankur Samanta, Mark Karlov, Meghna Ravikumar, Christian McIntosh
Clarke, Jayakumar Rajadas, Kaveh Hassani
|
MVMTnet: A Multi-variate Multi-modal Transformer for Multi-class
Classification of Cardiac Irregularities Using ECG Waveforms and Clinical
Notes
|
18 pages, 11 figures, submitted to Artificial Intelligence in
Medicine journal
| null | null | null |
cs.LG cs.AI q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep learning provides an excellent avenue for optimizing diagnosis and
patient monitoring for clinical-based applications, which can critically
enhance the response time to the onset of various conditions. For
cardiovascular disease, one such condition where the rising number of patients
increasingly outweighs the availability of medical resources in different parts
of the world, a core challenge is the automated classification of various
cardiac abnormalities. Existing deep learning approaches have largely been
limited to detecting the existence of an irregularity, as in binary
classification, which has been achieved using networks such as CNNs and
RNN/LSTMs. The next step is to accurately perform multi-class classification
and determine the specific condition(s) from the inherently noisy multi-variate
waveform, which is a difficult task that could benefit from (1) a more powerful
sequential network, and (2) the integration of clinical notes, which provide
valuable semantic and clinical context from human doctors. Recently,
Transformers have emerged as the state-of-the-art architecture for forecasting
and prediction using time-series data, with their multi-headed attention
mechanism, and ability to process whole sequences and learn both long and
short-range dependencies. The proposed novel multi-modal Transformer
architecture would be able to accurately perform this task while demonstrating
the cross-domain effectiveness of Transformers, establishing a method for
incorporating multiple data modalities within a Transformer for classification
tasks, and laying the groundwork for automating real-time patient condition
monitoring in clinical and ER settings.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 21:38:41 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Samanta",
"Ankur",
""
],
[
"Karlov",
"Mark",
""
],
[
"Ravikumar",
"Meghna",
""
],
[
"Clarke",
"Christian McIntosh",
""
],
[
"Rajadas",
"Jayakumar",
""
],
[
"Hassani",
"Kaveh",
""
]
] |
new_dataset
| 0.962595 |
2302.11034
|
Shahin Tajik
|
Maryam Saadat Safa, Tahoura Mosavirik, Shahin Tajik
|
Counterfeit Chip Detection using Scattering Parameter Analysis
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The increase in the number of counterfeit and recycled microelectronic chips
in recent years has created significant security and safety concerns in various
applications. Hence, detecting such counterfeit chips in electronic systems is
critical before deployment in the field. Unfortunately, the conventional
verification tools using physical inspection and side-channel methods are
costly, unscalable, error-prone, and often incompatible with legacy systems.
This paper introduces a generic non-invasive and low-cost counterfeit chip
detection based on characterizing the impedance of the system's power delivery
network (PDN). Our method relies on the fact that the impedance of the
counterfeit and recycled chips differs from the genuine ones. To sense such
impedance variations confidently, we deploy scattering parameters, frequently
used for impedance characterization of RF/microwave circuits. Our proposed
approach can directly be applied to soldered chips on the system's PCB and does
not require any modifications on the legacy systems. To validate our claims, we
perform extensive measurements on genuine and aged samples from two families of
STMicroelectronics chips to assess the effectiveness of the proposed approach.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 22:26:18 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Safa",
"Maryam Saadat",
""
],
[
"Mosavirik",
"Tahoura",
""
],
[
"Tajik",
"Shahin",
""
]
] |
new_dataset
| 0.956677 |
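The link between scattering parameters and PDN impedance that this method relies on is the standard one-port relation Z_in = Z0 (1 + S11) / (1 - S11). A minimal sketch, with made-up S11 values standing in for VNA measurements of genuine versus suspect boards:

```python
import numpy as np

def s11_to_impedance(s11, z0=50.0):
    """One-port conversion from reflection coefficient to input impedance.

    Z_in = Z0 * (1 + S11) / (1 - S11), the standard relation used to
    characterize a PDN's impedance profile from VNA measurements.
    """
    s11 = np.asarray(s11, dtype=complex)
    return z0 * (1 + s11) / (1 - s11)

# Hypothetical measurements (not from the paper): a genuine chip's PDN
# vs. a suspect one whose impedance profile deviates across frequency.
genuine = np.array([0.10 + 0.02j, 0.12 - 0.01j, 0.15 + 0.05j])
suspect = np.array([0.10 + 0.02j, 0.20 - 0.03j, 0.35 + 0.10j])
zg, zs = s11_to_impedance(genuine), s11_to_impedance(suspect)
deviation = np.abs(zs - zg) / np.abs(zg)
print(np.round(deviation, 3))  # large relative deviation flags the outlier
```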
2302.11036
|
Davide Foini
|
Davide Foini, Magdalena Rzyska, Katharina Baschmakov, Sergio Murino
|
CrowdLogo: crowd simulation in NetLogo
| null | null | null | null |
cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Planning the evacuation of people from crowded places, such as squares,
stadiums, or indoor arenas during emergency scenarios is a fundamental task
that authorities must deal with. This article summarizes the work of the
authors to simulate an emergency scenario in a square using NetLogo, a
multi-agent programmable modeling environment. The emergency scenario is based
on a real event, which took place in Piazza San Carlo, Turin, on the 3rd of
June 2017. The authors have developed a model and conducted various
experiments, the results of which are presented, discussed and analyzed. The
article concludes by offering suggestions for further research and summarizing
the key takeaways.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 22:38:04 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Foini",
"Davide",
""
],
[
"Rzyska",
"Magdalena",
""
],
[
"Baschmakov",
"Katharina",
""
],
[
"Murino",
"Sergio",
""
]
] |
new_dataset
| 0.977624 |
2302.11053
|
Ryo Suzuki
|
Mehrad Faridan, Bheesha Kumari, Ryo Suzuki
|
ChameleonControl: Teleoperating Real Human Surrogates through Mixed
Reality Gestural Guidance for Remote Hands-on Classrooms
|
CHI 2023
| null |
10.1145/3544548.3581381
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ChameleonControl, a real-human teleoperation system for scalable
remote instruction in hands-on classrooms. In contrast to existing video or
AR/VR-based remote hands-on education, ChameleonControl uses a real human as a
surrogate of a remote instructor. Building on existing human-based telepresence
approaches, we contribute a novel method to teleoperate a human surrogate
through synchronized mixed reality hand gestural navigation and verbal
communication. By overlaying the remote instructor's virtual hands in the local
user's MR view, the remote instructor can guide and control the local user as
if they were physically present. This allows the local user/surrogate to
synchronize their hand movements and gestures with the remote instructor,
effectively teleoperating a real human. We deploy and evaluate our system in
classrooms of physiotherapy training, as well as other application domains such
as mechanical assembly, sign language and cooking lessons. The study results
confirm that our approach can increase engagement and the sense of co-presence,
showing potential for the future of remote hands-on classrooms.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 23:11:41 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Faridan",
"Mehrad",
""
],
[
"Kumari",
"Bheesha",
""
],
[
"Suzuki",
"Ryo",
""
]
] |
new_dataset
| 0.986109 |
2302.11095
|
Guoli Wang
|
Yu Ren, Guoli Wang, Pingping Wang, Kunmeng Liu, Quanjin Liu, Hongfu
Sun, Xiang Li, Benzheng Wei
|
MM-SFENet: Multi-scale Multi-task Localization and Classification of
Bladder Cancer in MRI with Spatial Feature Encoder Network
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background and Objective: Bladder cancer is a common malignant urinary
carcinoma, with muscle-invasive and non-muscle-invasive as its two major
subtypes. This paper aims to achieve automated bladder cancer invasiveness
localization and classification based on MRI. Method: Different from previous
efforts that segment bladder wall and tumor, we propose a novel end-to-end
multi-scale multi-task spatial feature encoder network (MM-SFENet) for locating
and classifying bladder cancer, according to the classification criteria of the
spatial relationship between the tumor and bladder wall. First, we built a
backbone with residual blocks to distinguish bladder wall and tumor; then, a
spatial feature encoder is designed to encode the multi-level features of the
backbone to learn the criteria. Results: We substitute Smooth-L1 Loss with IoU
Loss for multi-task learning, to improve the accuracy of the classification
task. By testing a total of 1287 MRIs collected from 98 patients at the
hospital, the mAP and IoU are used as the evaluation metrics. The experimental
result could reach 93.34\% and 83.16\% on test set. Conclusions: The
experimental result demonstrates the effectiveness of the proposed MM-SFENet on
the localization and classification of bladder cancer. It may provide an
effective supplementary diagnosis method for bladder cancer staging.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 02:28:14 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Ren",
"Yu",
""
],
[
"Wang",
"Guoli",
""
],
[
"Wang",
"Pingping",
""
],
[
"Liu",
"Kunmeng",
""
],
[
"Liu",
"Quanjin",
""
],
[
"Sun",
"Hongfu",
""
],
[
"Li",
"Xiang",
""
],
[
"Wei",
"Benzheng",
""
]
] |
new_dataset
| 0.99975 |
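The abstract's substitution of Smooth-L1 with an IoU loss for box regression is easy to illustrate in isolation. The sketch below assumes axis-aligned boxes in (x1, y1, x2, y2) form and implements the plain 1 - IoU loss, leaving out the detector around it.

```python
import numpy as np

def iou_loss(pred, target, eps=1e-7):
    """IoU loss = 1 - IoU for axis-aligned boxes (x1, y1, x2, y2).

    Unlike Smooth-L1 on the four coordinates, the IoU loss optimizes the
    overlap metric that evaluation actually uses.
    """
    ix1 = np.maximum(pred[..., 0], target[..., 0])
    iy1 = np.maximum(pred[..., 1], target[..., 1])
    ix2 = np.minimum(pred[..., 2], target[..., 2])
    iy2 = np.minimum(pred[..., 3], target[..., 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    union = area_p + area_t - inter
    return 1.0 - inter / (union + eps)

pred = np.array([[10.0, 10.0, 50.0, 50.0]])
target = np.array([[12.0, 8.0, 48.0, 52.0]])
print(iou_loss(pred, target))  # small loss for well-overlapping boxes
```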
2302.11119
|
Hangsong Su
|
Hangsong Su, Feng Xue, Runze Guo, Anlong Ming
|
Balanced Line Coverage in Large-scale Urban Scene
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Line coverage is to cover linear infrastructure modeled as 1D segments by
robots, which received attention in recent years. With the increasing
urbanization, the area of the city and the density of infrastructure continues
to increase, which brings two issues: (1) Due to the energy constraint, it is
hard for the homogeneous robot team to cover the large-scale linear
infrastructure starting from one depot; (2) In the large urban scene, the
imbalance of robots' path greatly extends the time cost of the multi-robot
system, which is more serious than that in smaller-size scenes. To address
these issues, we propose a heterogeneous multi-robot approach consisting of
several teams, each of which contains one transportation robot (TRob) and
several coverage robots (CRobs). Firstly, a balanced graph partitioning (BGP)
algorithm is proposed to divide the road network into several similar-size
sub-graphs, and then the TRob delivers a group of CRobs to the subgraph region
quickly. Secondly, a balanced ulusoy partitioning (BUP) algorithm is proposed
to extract similar-length tours for each CRob from the sub-graph. Abundant
experiments are conducted on seven road networks ranging in scales that are
collected in this paper. Our method achieves robot utilization of 90% and the
best maximal tour length at the cost of a small increase in total tour length,
which further minimizes the time cost of the whole system. The source code and
the road networks are available at
https://github.com/suhangsong/BLC-LargeScale.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 03:32:29 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Su",
"Hangsong",
""
],
[
"Xue",
"Feng",
""
],
[
"Guo",
"Runze",
""
],
[
"Ming",
"Anlong",
""
]
] |
new_dataset
| 0.991335 |
2302.11120
|
Peizheng Yuan
|
Peizheng Yuan, Hideyuki Tsukagoshi
|
Soft Pneumatic Actuator Capable of Generating Various Bending and
Extension Motions Inspired by an Elephant Trunk
|
8 pages, 11 figures, submitted to the IEEE Robotics and Automation
Letters (RA-L)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by the dexterous handling ability of an elephant's trunk, we propose
a pneumatic actuator that generates diverse bending and extension motions in a
flexible arm. The actuator consists of two flexible tubes. Each flexible tube
is restrained by a single string with variable length and tilt angle. Even if a
single tube can perform only three simple types of motions (bending, extension,
and helical), a variety of complex bending patterns can be created by arranging
a pair of tubes in parallel and making the restraint variable. This performance
takes advantage of the effect of the superposition of forces by arranging two
tubes to constructively interfere with each other. This paper described six
resulting pose patterns. First, the configuration and operating principle are
described, and the fabrication method is explained. Next, two mathematical
models and four finite element method-based analyses are introduced to predict
the tip position changes in five motion patterns. All the models were validated
through experiments. Finally, we experimentally demonstrated that the prototype
SEMI-TRUNK can realize the action of grabbing a bottle and pouring water,
verifying the effectiveness of the proposed method.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 03:34:17 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Yuan",
"Peizheng",
""
],
[
"Tsukagoshi",
"Hideyuki",
""
]
] |
new_dataset
| 0.999512 |
2302.11157
|
Agam Shah
|
Agam Shah, Ruchit Vithani, Abhinav Gullapalli, Sudheer Chava
|
FiNER: Financial Named Entity Recognition Dataset and Weak-Supervision
Model
| null | null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The development of annotated datasets over the 21st century has helped us
truly realize the power of deep learning. Most of the datasets created for the
named-entity-recognition (NER) task are not domain-specific. The finance domain
presents specific challenges to the NER task, and a domain-specific dataset
would help push the boundaries of finance research. In our work, we develop the
first high-quality NER dataset for the finance domain. To set the benchmark for
the dataset, we develop and test a weak-supervision-based framework for the NER
task. We extend the current weak-supervision framework to make it employable
for span-level classification. Our weak-ner framework and the dataset are
publicly available on GitHub and Hugging Face.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 05:41:27 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Shah",
"Agam",
""
],
[
"Vithani",
"Ruchit",
""
],
[
"Gullapalli",
"Abhinav",
""
],
[
"Chava",
"Sudheer",
""
]
] |
new_dataset
| 0.980919 |
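Weak supervision for token-level NER typically aggregates several noisy labeling functions into a single training label per token. The sketch below shows a plain majority-vote aggregator as an assumption-laden stand-in; the paper's framework extends weak supervision to span-level classification, which this snippet does not capture.

```python
from collections import Counter

def majority_vote(votes_per_token, abstain="ABSTAIN", default="O"):
    """Aggregate noisy labeling-function votes into one tag per token.

    `votes_per_token` is a list (one entry per token) of tag lists
    proposed by the labeling functions; `abstain` votes are ignored.
    """
    labels = []
    for votes in votes_per_token:
        counts = Counter(v for v in votes if v != abstain)
        labels.append(counts.most_common(1)[0][0] if counts else default)
    return labels

# Toy example: three labeling functions over four tokens of
# "Apple acquired Beats Electronics".
votes = [
    ["B-ORG", "B-ORG", "ABSTAIN"],
    ["O", "O", "O"],
    ["B-ORG", "ABSTAIN", "B-PER"],
    ["I-ORG", "I-ORG", "ABSTAIN"],
]
print(majority_vote(votes))  # ['B-ORG', 'O', 'B-ORG', 'I-ORG']
```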
2302.11159
|
Jiawei Jiang
|
Jiawei Jiang, Chengkai Han, Jingyuan Wang
|
BUAA_BIGSCity: Spatial-Temporal Graph Neural Network for Wind Power
Forecasting in Baidu KDD CUP 2022
|
6 pages, 4 figures, Report for ACM KDD Workshop - Baidu KDD CUP 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this technical report, we present our solution for the Baidu KDD Cup 2022
Spatial Dynamic Wind Power Forecasting Challenge. Wind power is a rapidly
growing source of clean energy. Accurate wind power forecasting is essential
for grid stability and the security of supply. Therefore, organizers provide a
wind power dataset containing historical data from 134 wind turbines and launch
the Baidu KDD Cup 2022 to examine the limitations of current methods for wind
power forecasting. The average of RMSE (Root Mean Square Error) and MAE (Mean
Absolute Error) is used as the evaluation score. We adopt two spatial-temporal
graph neural network models, i.e., AGCRN and MTGNN, as our basic models. We
train AGCRN by 5-fold cross-validation and additionally train MTGNN directly on
the training and validation sets. Finally, we ensemble the two models based on
the loss values of the validation set as our final submission. Using our
method, our team achieves a score of -45.36026 on the test set. We release our
code on GitHub (https://github.com/BUAABIGSCity/KDDCUP2022) for reproduction.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 05:47:45 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Jiang",
"Jiawei",
""
],
[
"Han",
"Chengkai",
""
],
[
"Wang",
"Jingyuan",
""
]
] |
new_dataset
| 0.99926 |
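The evaluation score mentioned here, the average of RMSE and MAE, is straightforward to reproduce. A minimal sketch with toy arrays; the competition additionally aggregates over turbines and prediction horizons, which is omitted:

```python
import numpy as np

def kdd_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Average of RMSE and MAE, the Baidu KDD Cup 2022 evaluation score."""
    err = y_pred - y_true
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    return (rmse + mae) / 2.0

y_true = np.array([100.0, 150.0, 90.0, 120.0])   # toy wind power values
y_pred = np.array([110.0, 140.0, 100.0, 115.0])
print(kdd_score(y_true, y_pred))  # (RMSE + MAE) / 2
```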
2302.11224
|
Jiaming Zhou
|
Jiaming Zhou, Shiwan Zhao, Ning Jiang, Guoqing Zhao, Yong Qin
|
MADI: Inter-domain Matching and Intra-domain Discrimination for
Cross-domain Speech Recognition
|
Accepted to ICASSP 2023
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
End-to-end automatic speech recognition (ASR) usually suffers from
performance degradation when applied to a new domain due to domain shift.
Unsupervised domain adaptation (UDA) aims to improve the performance on the
unlabeled target domain by transferring knowledge from the source to the target
domain. To improve transferability, existing UDA approaches mainly focus on
matching the distributions of the source and target domains globally and/or
locally, while ignoring the model discriminability. In this paper, we propose a
novel UDA approach for ASR via inter-domain MAtching and intra-domain
DIscrimination (MADI), which improves the model transferability by fine-grained
inter-domain matching and discriminability by intra-domain contrastive
discrimination simultaneously. Evaluations on the Libri-Adapt dataset
demonstrate the effectiveness of our approach. MADI reduces the relative word
error rate (WER) on cross-device and cross-environment ASR by 17.7% and 22.8%,
respectively.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 09:11:06 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Zhou",
"Jiaming",
""
],
[
"Zhao",
"Shiwan",
""
],
[
"Jiang",
"Ning",
""
],
[
"Zhao",
"Guoqing",
""
],
[
"Qin",
"Yong",
""
]
] |
new_dataset
| 0.985168 |
2302.11280
|
Donghuo Zeng
|
Donghuo Zeng, Jianming Wu, Yanan Wang, Kazunori Matsumoto, Gen
Hattori, Kazushi Ikeda
|
Topic-switch adapted Japanese Dialogue System based on PLATO-2
|
10 pages, 8 figures, 7 tables
| null | null | null |
cs.CL cs.MM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Large-scale open-domain dialogue systems such as PLATO-2 have achieved
state-of-the-art scores in both English and Chinese. However, little work
explores whether such dialogue systems also work well in the Japanese language.
In this work, we create a large-scale Japanese dialogue dataset,
Dialogue-Graph, which contains 1.656 million dialogues in a tree structure
from News, TV subtitles, and Wikipedia corpus. Then, we train PLATO-2 using
Dialogue-Graph to build a large-scale Japanese dialogue system, PLATO-JDS. In
addition, to improve the PLATO-JDS in the topic switch issue, we introduce a
topic-switch algorithm composed of a topic discriminator to switch to a new
topic when user input differs from the previous topic. We evaluate the user
experience by using our model with respect to four metrics, namely, coherence,
informativeness, engagingness, and humanness. As a result, our proposed
PLATO-JDS achieves an average score of 1.500 for the human evaluation with
human-bot chat strategy, which is close to the maximum score of 2.000 and
suggests the high-quality dialogue generation capability of PLATO-2 in
Japanese. Furthermore, our proposed topic-switch algorithm achieves an average
score of 1.767 and outperforms PLATO-JDS by 0.267, indicating its effectiveness
in improving the user experience of our system.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 10:57:59 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Zeng",
"Donghuo",
""
],
[
"Wu",
"Jianming",
""
],
[
"Wang",
"Yanan",
""
],
[
"Matsumoto",
"Kazunori",
""
],
[
"Hattori",
"Gen",
""
],
[
"Ikeda",
"Kazushi",
""
]
] |
new_dataset
| 0.999624 |
2302.11283
|
Yu Guo
|
Yu Guo, Ryan Wen Liu, Jingxiang Qu, Yuxu Lu, Fenghua Zhu, Yisheng Lv
|
Asynchronous Trajectory Matching-Based Multimodal Maritime Data Fusion
for Vessel Traffic Surveillance in Inland Waterways
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The automatic identification system (AIS) and video cameras have been widely
exploited for vessel traffic surveillance in inland waterways. The AIS data
could provide the vessel identity and dynamic information on vessel position
and movements. In contrast, the video data could describe the visual
appearances of moving vessels, but without knowing the information on identity,
position and movements, etc. To further improve vessel traffic surveillance, it
becomes necessary to fuse the AIS and video data to simultaneously capture the
visual features, identity and dynamic information for the vessels of interest.
However, traditional data fusion methods easily suffer from several potential
limitations, e.g., asynchronous messages, missing data, random outliers, etc.
In this work, we first extract the AIS- and video-based vessel trajectories,
and then propose a deep learning-enabled asynchronous trajectory matching
method (named DeepSORVF) to fuse the AIS-based vessel information with the
corresponding visual targets. In addition, by combining the AIS- and
video-based movement features, we also present a prior knowledge-driven
anti-occlusion method to yield accurate and robust vessel tracking results
under occlusion conditions. To validate the efficacy of our DeepSORVF, we have
also constructed a new benchmark dataset (termed FVessel) for vessel detection,
tracking, and data fusion. It consists of many videos and the corresponding AIS
data collected in various weather conditions and locations. The experimental
results have demonstrated that our method is capable of guaranteeing
high-reliable data fusion and anti-occlusion vessel tracking.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 11:00:34 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Guo",
"Yu",
""
],
[
"Liu",
"Ryan Wen",
""
],
[
"Qu",
"Jingxiang",
""
],
[
"Lu",
"Yuxu",
""
],
[
"Zhu",
"Fenghua",
""
],
[
"Lv",
"Yisheng",
""
]
] |
new_dataset
| 0.984896 |
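The asynchronous matching problem at the core of this abstract, pairing AIS tracks with visual tracks whose timestamps never line up, can be sketched as timestamp interpolation followed by optimal assignment. The snippet assumes both trajectories already live in a common coordinate frame; in the real system that requires calibration and projection not shown here, and DeepSORVF's learned matching replaces the plain distance cost.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def resample(traj, times):
    """Linearly interpolate a trajectory {time: (x, y)} at the given times."""
    ts = sorted(traj)
    xs = np.interp(times, ts, [traj[t][0] for t in ts])
    ys = np.interp(times, ts, [traj[t][1] for t in ts])
    return np.stack([xs, ys], axis=1)

def match_tracks(ais_tracks, video_tracks, times):
    """Match AIS tracks to video tracks by mean distance after resampling.

    Returns (ais_index, video_index) pairs minimizing total distance via
    the Hungarian algorithm.
    """
    cost = np.zeros((len(ais_tracks), len(video_tracks)))
    for i, a in enumerate(ais_tracks):
        ra = resample(a, times)
        for j, v in enumerate(video_tracks):
            cost[i, j] = np.linalg.norm(ra - resample(v, times), axis=1).mean()
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# Toy tracks with misaligned timestamps (AIS every 10 s, video every 4 s).
ais = [{0: (0, 0), 10: (10, 0)}, {0: (0, 5), 10: (10, 5)}]
vid = [{2: (2.1, 5.0), 6: (6.0, 4.9)}, {2: (1.9, 0.1), 6: (6.1, 0.0)}]
print(match_tracks(ais, vid, np.linspace(2, 6, 5)))  # [(0, 1), (1, 0)]
```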
2302.11292
|
Keita Emura
|
Keita Emura and Masato Yoshimi
|
An End-To-End Encrypted Cache System with Time-Dependent Access Control
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Due to the increasing use of encrypted communication, such as Transport Layer
Security (TLS), encrypted cache systems are a promising approach for providing
communication efficiency and privacy. Cache-22 is an encrypted cache system
(Emura et al. ISITA 2020) that makes it possible to significantly reduce
communication between a cache server and a service provider. In the final
procedure of Cache-22, the service provider sends the corresponding decryption
key to the user via TLS and this procedure allows the service provider to
control which users can access the contents. For example, if a user has
downloaded ciphertexts of several episodes of a show, the service provider can
decide to provide some of the contents (e.g., the first episode) available for
free while requiring a fee for the remaining contents. However, no concrete
access control method has been implemented in the original Cache-22 system. In
this paper, we add a scalable access control protocol to Cache-22.
Specifically, we propose a time-dependent access control that requires a
communication cost of $O(\log T_{\sf max})$ where $T_{\sf max}$ is the maximum
time period. Although the protocol is stateful, we can provide time-dependent
access control with scalability at the expense of this key management. We
present experimental results and demonstrate that the modified system is
effective for controlling access rights. We also observe a relationship between
cache capacity and network traffic because the number of duplicated contents is
higher than that in the original Cache-22 system, due to time-dependent access
control.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 11:25:07 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Emura",
"Keita",
""
],
[
"Yoshimi",
"Masato",
""
]
] |
new_dataset
| 0.986819 |
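The O(log T_max) communication cost is characteristic of the standard binary-tree technique for time-period access control: any contiguous range of leaf time slots is covered by O(log T_max) internal nodes, so granting a period means sending keys for just those cover nodes. A minimal sketch of the cover computation (key derivation omitted, and the tree convention is an assumption):

```python
def range_cover(t_begin, t_end, t_max):
    """Minimal set of binary-tree nodes covering leaves [t_begin, t_end).

    Leaves 0..t_max-1 are time slots (t_max a power of two). A node is
    (level, index), covering leaves [index*2^level, (index+1)*2^level).
    The cover has O(log t_max) nodes, hence O(log T_max) keys to send.
    """
    assert t_max & (t_max - 1) == 0, "t_max must be a power of two"
    nodes, level = [], 0
    while t_begin < t_end:
        if t_begin & 1:                       # left boundary not aligned
            nodes.append((level, t_begin)); t_begin += 1
        if t_end & 1:                         # right boundary not aligned
            t_end -= 1; nodes.append((level, t_end))
        t_begin, t_end, level = t_begin >> 1, t_end >> 1, level + 1
    return nodes

# Granting slots [3, 11) out of 16 needs only 4 node keys, not 8.
print(range_cover(3, 11, 16))  # [(0, 3), (0, 10), (1, 4), (2, 1)]
```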
2302.11358
|
Carlos Segarra
|
Simon Shillaker, Carlos Segarra, Eleftheria Mappoura, Mayeul Fournial,
Lluis Vilanova, Peter Pietzuch
|
Faabric: Fine-Grained Distribution of Scientific Workloads in the Cloud
|
12 pages
| null | null | null |
cs.DC cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
With their high parallelism and resource needs, many scientific applications
benefit from cloud deployments. Today, scientific applications are executed on
dedicated pools of VMs, resulting in resource fragmentation: users pay for
underutilised resources, and providers cannot reallocate unused resources
between applications. While serverless cloud computing could address these
issues, its programming model is incompatible with the use of shared memory and
message passing in scientific applications: serverless functions do not share
memory directly on the same VM or support message passing semantics when
scheduling functions dynamically.
We describe Faabric, a new serverless cloud runtime that transparently
distributes applications with shared memory and message passing across VMs.
Faabric achieves this by scheduling computation in a fine-grained
(thread/process) fashion through a new execution abstraction called Granules.
To support shared memory, Granules are isolated using WebAssembly but share
memory directly; to support message passing, Granules offer asynchronous
point-to-point communication. Faabric schedules Granules to meet an
application's parallelism needs. It also synchronises changes to Granule's
shared memory, and migrates Granules to improve locality.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 13:10:07 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Shillaker",
"Simon",
""
],
[
"Segarra",
"Carlos",
""
],
[
"Mappoura",
"Eleftheria",
""
],
[
"Fournial",
"Mayeul",
""
],
[
"Vilanova",
"Lluis",
""
],
[
"Pietzuch",
"Peter",
""
]
] |
new_dataset
| 0.994394 |
2302.11385
|
Zhen Gao
|
Keke Ying, Zhen Gao, Sheng Chen, Xinyu Gao, Michail Matthaiou, Rui
Zhang, and Robert Schober
|
Reconfigurable Massive MIMO: Harnessing the Power of the Electromagnetic
Domain for Enhanced Information Transfer
|
7 pages, 3 figures. This paper is accepted by IEEE Wireless
Communications Magazine. Copyright may be transferred without notice, after
which this version may no longer be accessible
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The capacity of commercial massive multiple-input multiple-output (mMIMO)
systems is constrained by the limited array aperture at the base station, and
cannot meet the ever-increasing traffic demands of wireless networks. Given the
array aperture, holographic MIMO with infinitesimal antenna spacing can
maximize the capacity, but is physically unrealizable. As a promising
alternative, reconfigurable mMIMO is proposed to harness the unexploited power
of the electromagnetic (EM) domain for enhanced information transfer.
Specifically, the reconfigurable pixel antenna technology provides each antenna
with an adjustable EM radiation (EMR) pattern, introducing extra degrees of
freedom for information transfer in the EM domain. In this article, we present
the concept and benefits of exploiting the EMR domain for mMIMO transmission.
Moreover, we propose a viable architecture for reconfigurable mMIMO systems,
and the associated system model and downlink precoding are also discussed. In
particular, a three-level precoding scheme is proposed, and simulation results
verify its considerable spectral and energy efficiency advantages compared to
traditional mMIMO systems. Finally, we further discuss the challenges,
insights, and prospects of deploying reconfigurable mMIMO, along with the
associated hardware, algorithms, and fundamental theory.
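For concreteness, one plausible way (in our notation, not necessarily the paper's) to write the resulting downlink model is:

```latex
% N reconfigurable antennas, each selecting an EMR pattern c_n from a
% finite codebook \mathcal{C}; the effective channel becomes a design
% variable alongside the analog and digital precoders:
\[
\mathbf{y} \;=\; \mathbf{H}(\mathbf{c})\,\mathbf{F}_{\mathrm{RF}}\,\mathbf{F}_{\mathrm{BB}}\,\mathbf{s} \;+\; \mathbf{n},
\qquad
\mathbf{c} = (c_1,\dots,c_N) \in \mathcal{C}^{N},
\]
% so a three-level precoder can optimise, in turn, the pattern selection
% \mathbf{c}, the analog precoder F_RF, and the digital precoder F_BB.
```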
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 13:58:10 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Ying",
"Keke",
""
],
[
"Gao",
"Zhen",
""
],
[
"Chen",
"Sheng",
""
],
[
"Gao",
"Xinyu",
""
],
[
"Matthaiou",
"Michail",
""
],
[
"Zhang",
"Rui",
""
],
[
"Schober",
"Robert",
""
]
] |
new_dataset
| 0.962174 |
2302.11458
|
Manuel Stoiber
|
Manuel Stoiber, Mariam Elsayed, Anne E. Reichert, Florian Steidle,
Dongheui Lee, Rudolph Triebel
|
Fusing Visual Appearance and Geometry for Multi-modality 6DoF Object
Tracking
|
Submitted to IEEE/RSJ International Conference on Intelligent Robots
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In many applications of advanced robotic manipulation, six degrees of freedom
(6DoF) object pose estimates are continuously required. In this work, we
develop a multi-modality tracker that fuses information from visual appearance
and geometry to estimate object poses. The algorithm extends our previous
method ICG, which uses geometry, to additionally consider surface appearance.
In general, object surfaces contain local characteristics from text, graphics,
and patterns, as well as global differences from distinct materials and colors.
To incorporate this visual information, two modalities are developed. For local
characteristics, keypoint features are used to minimize distances between
points from keyframes and the current image. For global differences, a novel
region approach is developed that considers multiple regions on the object
surface. In addition, it allows the modeling of external geometries.
Experiments on the YCB-Video and OPT datasets demonstrate that our approach
ICG+ performs best on both datasets, outperforming both conventional and deep
learning-based methods. At the same time, the algorithm is highly efficient and
runs at more than 300 Hz. The source code of our tracker is publicly available.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 15:53:00 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Stoiber",
"Manuel",
""
],
[
"Elsayed",
"Mariam",
""
],
[
"Reichert",
"Anne E.",
""
],
[
"Steidle",
"Florian",
""
],
[
"Lee",
"Dongheui",
""
],
[
"Triebel",
"Rudolph",
""
]
] |
new_dataset
| 0.979972 |
2302.11476
|
Jaros{\l}aw B{\l}asiok
|
Josh Alman, Jaros{\l}aw B{\l}asiok
|
Matrix Multiplication and Number On the Forehead Communication
| null | null | null | null |
cs.CC cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
Three-player Number On the Forehead communication may be thought of as a
three-player Number In the Hand promise model, in which each player is given
the inputs that are supposedly on the other two players' heads, and promised
that they are consistent with the inputs of the other players. The set of
all allowed inputs under this promise may be thought of as an order-3 tensor.
We surprisingly observe that this tensor is exactly the matrix multiplication
tensor, which is widely studied in the design of fast matrix multiplication
algorithms.
Using this connection, we prove a number of results about both Number On the
Forehead communication and matrix multiplication, each by using known results
or techniques about the other. For example, we show how the Laser method, a key
technique used to design the best matrix multiplication algorithms, can also be
used to design communication protocols for a variety of problems. We also show
how known lower bounds for Number On the Forehead communication can be used to
bound properties of the matrix multiplication tensor such as its zeroing out
subrank. Finally, we substantially generalize known methods based on slice-rank
for studying communication, and show how they directly relate to the matrix
multiplication exponent $\omega$.
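For readers unfamiliar with it, the matrix multiplication tensor in question is, in standard notation:

```latex
% <a, b, c>: the order-3 tensor of the bilinear map multiplying an
% (a x b)-matrix by a (b x c)-matrix,
\[
\langle a,b,c\rangle \;=\; \sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{c}
x_{ij}\otimes y_{jk}\otimes z_{ki},
\]
% and the observation above is that its support coincides with the set of
% input triples allowed under the Number On the Forehead promise.
```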
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 16:25:19 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Alman",
"Josh",
""
],
[
"Błasiok",
"Jarosław",
""
]
] |
new_dataset
| 0.995823 |
2302.11506
|
Pranav Kadam
|
Pranav Kadam, Hardik Prajapati, Min Zhang, Jintang Xue, Shan Liu,
C.-C. Jay Kuo
|
S3I-PointHop: SO(3)-Invariant PointHop for 3D Point Cloud Classification
|
5 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many point cloud classification methods are developed under the assumption
that all point clouds in the dataset are well aligned with the canonical axes
so that the 3D Cartesian point coordinates can be employed to learn features.
When input point clouds are not aligned, the classification performance drops
significantly. In this work, we focus on a mathematically transparent point
cloud classification method called PointHop, analyze its reason for failure due
to pose variations, and solve the problem by replacing its pose dependent
modules with rotation invariant counterparts. The proposed method is named
SO(3)-Invariant PointHop (or S3I-PointHop for short). We also significantly
simplify the PointHop pipeline using only a single hop along with multiple
spatial aggregation techniques. The idea of exploiting more spatial information
is novel. Experiments on the ModelNet40 dataset demonstrate the superiority of
S3I-PointHop over traditional PointHop-like methods.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 17:23:33 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Kadam",
"Pranav",
""
],
[
"Prajapati",
"Hardik",
""
],
[
"Zhang",
"Min",
""
],
[
"Xue",
"Jintang",
""
],
[
"Liu",
"Shan",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
new_dataset
| 0.997474 |
2302.11566
|
Chen Guo
|
Chen Guo, Tianjian Jiang, Xu Chen, Jie Song, Otmar Hilliges
|
Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via
Self-supervised Scene Decomposition
|
Project page: https://moygcc.github.io/vid2avatar/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present Vid2Avatar, a method to learn human avatars from monocular
in-the-wild videos. Reconstructing humans that move naturally from monocular
in-the-wild videos is difficult. Solving it requires accurately separating
humans from arbitrary backgrounds. Moreover, it requires reconstructing
detailed 3D surfaces from short video sequences, making it even more
challenging. Despite these challenges, our method does not require any
groundtruth supervision or priors extracted from large datasets of clothed
human scans, nor do we rely on any external segmentation modules. Instead, it
solves the tasks of scene decomposition and surface reconstruction directly in
3D by modeling both the human and the background in the scene jointly,
parameterized via two separate neural fields. Specifically, we define a
temporally consistent human representation in canonical space and formulate a
global optimization over the background model, the canonical human shape and
texture, and per-frame human pose parameters. A coarse-to-fine sampling
strategy for volume rendering and novel objectives are introduced for a clean
separation of dynamic human and static background, yielding detailed and robust
3D human geometry reconstructions. We evaluate our methods on publicly
available datasets and show improvements over prior art.
|
[
{
"version": "v1",
"created": "Wed, 22 Feb 2023 18:59:17 GMT"
}
] | 2023-02-23T00:00:00 |
[
[
"Guo",
"Chen",
""
],
[
"Jiang",
"Tianjian",
""
],
[
"Chen",
"Xu",
""
],
[
"Song",
"Jie",
""
],
[
"Hilliges",
"Otmar",
""
]
] |
new_dataset
| 0.999704 |
2005.06411
|
Andrzej Murawski
|
Andrzej S. Murawski, Steven J. Ramsay, Nikos Tzevelekos
|
Bisimilarity in fresh-register automata
| null | null | null | null |
cs.LO cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Register automata are a basic model of computation over infinite alphabets.
Fresh-register automata extend register automata with the capability to
generate fresh symbols in order to model computational scenarios involving name
creation. This paper investigates the complexity of the bisimilarity problem
for classes of register and fresh-register automata. We examine all main
disciplines that have appeared in the literature: general register assignments;
assignments where duplicate register values are disallowed; and assignments
without duplicates in which registers cannot be empty. In the general case, we
show that the problem is EXPTIME-complete.
However, the absence of duplicate values in registers enables us to identify
inherent symmetries inside the associated bisimulation relations, which can be
used to establish a polynomial bound on the depth of Attacker-winning
strategies. Furthermore, they enable a highly succinct representation of the
corresponding bisimulations. By exploiting results from group theory and
computational group theory, we can then show solvability in PSPACE and NP
respectively for the latter two register disciplines. In each case, we find
that freshness does not affect the complexity class of the problem.
The results allow us to close a complexity gap for language equivalence of
deterministic register automata. We show that deterministic language
inequivalence for the no-duplicates fragment is NP-complete, which disproves an
old conjecture of Sakamoto.
Finally, we discover that, unlike in the finite-alphabet case, the addition
of a pushdown store makes bisimilarity undecidable, even in the case of visibly
pushdown storage.
|
[
{
"version": "v1",
"created": "Wed, 13 May 2020 16:38:19 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 23:07:10 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Murawski",
"Andrzej S.",
""
],
[
"Ramsay",
"Steven J.",
""
],
[
"Tzevelekos",
"Nikos",
""
]
] |
new_dataset
| 0.998951 |
2202.06219
|
Kasra Darvishi
|
Kasra Darvishi, Newsha Shahbodagh, Zahra Abbasiantaeb, Saeedeh Momtazi
|
PQuAD: A Persian Question Answering Dataset
| null |
Computer Speech & Language, Volume 80, 2023, 101486
|
10.1016/j.csl.2023.101486
| null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Persian Question Answering Dataset (PQuAD), a crowdsourced reading
comprehension dataset on Persian Wikipedia articles. It includes 80,000
questions along with their answers, with 25% of the questions being
adversarially unanswerable. We examine various properties of the dataset to
show the diversity and the level of its difficulty as an MRC benchmark. By
releasing this dataset, we aim to ease research on Persian reading
comprehension and development of Persian question answering systems. Our
experiments on different state-of-the-art pre-trained contextualized language
models show 74.8% Exact Match (EM) and 87.6% F1-score that can be used as the
baseline results for further research on Persian QA.
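For reference, the two metrics quoted above are the standard SQuAD-style scores; a minimal sketch follows (normalisation details for Persian text are omitted and would need adapting):

```python
import re
from collections import Counter

def normalize(s: str) -> str:
    s = s.lower()
    s = re.sub(r"[^\w\s]", " ", s)             # strip punctuation
    return " ".join(s.split())

def exact_match(pred: str, gold: str) -> bool:
    return normalize(pred) == normalize(gold)

def f1(pred: str, gold: str) -> float:
    p, g = normalize(pred).split(), normalize(gold).split()
    if not p or not g:
        return float(p == g)
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```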
|
[
{
"version": "v1",
"created": "Sun, 13 Feb 2022 05:42:55 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Darvishi",
"Kasra",
""
],
[
"Shahbodagh",
"Newsha",
""
],
[
"Abbasiantaeb",
"Zahra",
""
],
[
"Momtazi",
"Saeedeh",
""
]
] |
new_dataset
| 0.999815 |
2203.10120
|
Alvin Sukmadji
|
Alvin Y. Sukmadji, Umberto Mart\'inez-Pe\~nas, Frank R. Kschischang
|
Zipper Codes
|
Accepted for publication on JLT, updated reference for oFEC
|
in Journal of Lightwave Technology, vol. 40, no. 19, pp.
6397-6407, Oct. 2022
|
10.1109/JLT.2022.3193635
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Zipper codes are a framework for describing spatially-coupled product-like
codes. Many well-known codes, such as staircase codes and braided block codes,
are subsumed into this framework. New types of codes such as tiled diagonal and
delayed diagonal zipper codes are introduced along with their software
simulation results. Stall patterns that can arise in iterative decoding are
analyzed, giving a means of error floor estimation.
|
[
{
"version": "v1",
"created": "Fri, 18 Mar 2022 18:36:35 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 19:26:01 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Sukmadji",
"Alvin Y.",
""
],
[
"Martínez-Peñas",
"Umberto",
""
],
[
"Kschischang",
"Frank R.",
""
]
] |
new_dataset
| 0.999859 |
2204.03874
|
Dexin Wang
|
Dexin Wang, Faliang Chang, Chunsheng Liu, Rurui Yang, Nanjun Li,
Hengqiang Huan
|
On-Policy Pixel-Level Grasping Across the Gap Between Simulation and
Reality
|
The experiment had design flaws
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grasp detection in cluttered scenes is a very challenging task for robots.
Generating synthetic grasping data is a popular way to train and test grasp
methods, as in Dex-Net and GraspNet; yet, these methods generate training
grasps on 3D synthetic object models but evaluate on images or point clouds
with different distributions, which reduces performance on real scenes due to
sparse grasp labels and covariate shift. To solve existing problems, we propose
a novel on-policy grasp detection method, which can train and test on the same
distribution with dense pixel-level grasp labels generated on RGB-D images. A
Parallel-Depth Grasp Generation (PDG-Generation) method is proposed to generate
a parallel depth image through a new imaging model of projecting points in
parallel; then this method generates multiple candidate grasps for each pixel
and obtains robust grasps through flatness detection, force-closure metric and
collision detection. Then, a large comprehensive Pixel-Level Grasp Pose Dataset
(PLGP-Dataset) is constructed and released; distinguished with previous
datasets with off-policy data and sparse grasp samples, this dataset is the
first pixel-level grasp dataset, with the on-policy distribution where grasps
are generated based on depth images. Lastly, we build and test a series of
pixel-level grasp detection networks with a data augmentation process for
imbalanced training, which learn grasp poses in a decoupled manner on the input
RGB-D images. Extensive experiments show that our on-policy grasp method can
largely overcome the gap between simulation and reality, and achieves the
state-of-the-art performance. Code and data are provided at
https://github.com/liuchunsense/PLGP-Dataset.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 06:56:27 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 06:49:44 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Feb 2023 15:08:02 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Wang",
"Dexin",
""
],
[
"Chang",
"Faliang",
""
],
[
"Liu",
"Chunsheng",
""
],
[
"Yang",
"Rurui",
""
],
[
"Li",
"Nanjun",
""
],
[
"Huan",
"Hengqiang",
""
]
] |
new_dataset
| 0.994697 |
2206.00995
|
Alessandro De Luca
|
Alessandro De Luca, Gabriele Fici
|
On the Lie complexity of Sturmian words
|
6 pages, submitted to Theoretical Computer Science
|
Theoretical Computer Science 938 (2022) 81-85
|
10.1016/j.tcs.2022.10.009
| null |
cs.DM cs.FL math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bell and Shallit recently introduced the Lie complexity of an infinite word
$s$ as the function counting for each length the number of conjugacy classes of
words whose elements are all factors of $s$. They proved, using algebraic
techniques, that the Lie complexity is bounded above by the first difference of
the factor complexity plus one; hence, it is uniformly bounded for words with
linear factor complexity, and, in particular, it is at most 2 for Sturmian
words, which are precisely the words with factor complexity $n+1$ for every
$n$. In this note, we provide an elementary combinatorial proof of the result
of Bell and Shallit and give an exact formula for the Lie complexity of any
Sturmian word.
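In symbols, writing $p_s(n)$ for the factor complexity and $L_s(n)$ for the Lie complexity:

```latex
% Bell and Shallit's bound, and its specialisation to Sturmian words:
\[
L_s(n) \;\le\; p_s(n+1) - p_s(n) + 1 .
\]
% For a Sturmian word, p_s(n) = n + 1, so the first difference equals 1
% and hence L_s(n) <= 2 for every n, as stated above.
```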
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 11:32:53 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2022 21:23:21 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"De Luca",
"Alessandro",
""
],
[
"Fici",
"Gabriele",
""
]
] |
new_dataset
| 0.983791 |
2206.15251
|
Raul Lopes
|
Allen Ibiapina and Raul Lopes and Andrea Marino and Ana Silva
|
Menger's Theorem for Temporal Paths (Not Walks)
| null | null | null | null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A (directed) temporal graph is a (directed) graph whose edges are available
only at specific times during its lifetime $\tau$. Walks are sequences of
adjacent edges whose appearing times are either strictly increasing or
non-strictly increasing (i.e., non-decreasing), depending on the scenario.
Paths are temporal walks where each vertex is not traversed twice. A temporal
vertex is a pair $(u,i)$ where $u$ is a vertex and $i\in[\tau]$ a timestep. In
this paper we focus on the questions: (i) are there at least $k$ paths from a
single source $s$ to a single target $t$, no two of which internally intersect
on a temporal vertex? (ii) are there at most $h$ temporal vertices whose
removal disconnects $s$ from $t$? Let $k^*$ be the maximum value $k$ for which
the answer to (i) is YES, and let $h^*$ be the minimum value $h$ for which the
answer to (ii) is YES. In static graphs, $k^*$ and $h^*$ are equal by Menger's
Theorem and this is a crucial property to solve efficiently both (i) and (ii).
In temporal graphs, such equality has been investigated only for
disjoint walks rather than disjoint paths. We prove that, when dealing with
non-strictly increasing temporal paths, $k^*$ is equal to $h^*$ if and only if
$k^*$ is 1. We show that this implies a dichotomy for (i), which turns out to
be polynomial-time solvable when $k\le 2$, and NP-complete for $k\ge 3$. In
contrast, we also prove that Menger's version does not hold in the strictly
increasing model and give hardness results also for this case. Finally, we give
hardness results and an XP algorithm for (ii).
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 12:57:52 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2023 17:31:20 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Ibiapina",
"Allen",
""
],
[
"Lopes",
"Raul",
""
],
[
"Marino",
"Andrea",
""
],
[
"Silva",
"Ana",
""
]
] |
new_dataset
| 0.999119 |
2207.00265
|
Patrick Gelhausen
|
P. Gelhausen, M. Fischer, G. Peters
|
Affordance Extraction with an External Knowledge Database for Text-Based
Simulated Environments
|
23 pages, 1 figure
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Text-based simulated environments have proven to be a valid testbed for
machine learning approaches. The process of affordance extraction can be used
to generate possible actions for interaction within such an environment. In
this paper the capabilities and challenges for utilizing external knowledge
databases (in particular ConceptNet) in the process of affordance extraction
are studied. An algorithm for automated affordance extraction is introduced and
evaluated on the Interactive Fiction (IF) platforms TextWorld and Jericho. For
this purpose, the collected affordances are translated into text commands for
IF agents. To probe the quality of the automated evaluation process, an
additional human baseline study is conducted. The paper illustrates that,
despite some challenges, external databases can in principle be used for
affordance extraction. The paper concludes with recommendations for further
modification and improvement of the process.
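A minimal sketch of the kind of query involved follows (the endpoint and field names follow ConceptNet's public REST API as we understand it, and should be checked against the current documentation):

```python
import requests

AFFORDANCE_RELS = ("UsedFor", "CapableOf", "ReceivesAction")

def affordances(obj: str, limit: int = 20):
    """Collect affordance-like ConceptNet edges for an object term."""
    actions = []
    for rel in AFFORDANCE_RELS:
        resp = requests.get("http://api.conceptnet.io/query", params={
            "node": f"/c/en/{obj}", "rel": f"/r/{rel}", "limit": limit,
        }).json()
        for edge in resp.get("edges", []):
            actions.append((edge["rel"]["label"],
                            edge["start"]["label"], edge["end"]["label"]))
    return actions

# e.g. an edge ("UsedFor", "a lantern", "light") could be rendered as the
# Interactive Fiction command "light lantern".
```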
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 08:39:18 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2023 15:09:48 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Gelhausen",
"P.",
""
],
[
"Fischer",
"M.",
""
],
[
"Peters",
"G.",
""
]
] |
new_dataset
| 0.993055 |
2207.01009
|
Kevin Ta
|
Kevin Ta, David Bruggemann, Tim Br\"odermann, Christos Sakaridis, Luc
Van Gool
|
L2E: Lasers to Events for 6-DoF Extrinsic Calibration of Lidars and
Event Cameras
|
Accepted to ICRA2023
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
As neuromorphic technology is maturing, its application to robotics and
autonomous vehicle systems has become an area of active research. In
particular, event cameras have emerged as a compelling alternative to
frame-based cameras in low-power and latency-demanding applications. To enable
event cameras to operate alongside staple sensors like lidar in perception
tasks, we propose a direct, temporally-decoupled extrinsic calibration method
between event cameras and lidars. The high dynamic range, high temporal
resolution, and low-latency operation of event cameras are exploited to
directly register lidar laser returns, allowing information-based correlation
methods to optimize for the 6-DoF extrinsic calibration between the two
sensors. This paper presents the first direct calibration method between event
cameras and lidars, removing dependencies on frame-based camera intermediaries
and/or highly-accurate hand measurements.
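A minimal sketch of the information-based registration idea (ours, not the authors' implementation): project lidar returns into the event frame under a candidate 6-DoF pose and maximise the mutual information between lidar intensity and accumulated event counts.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

def neg_mi(theta, pts, intensity, event_img, K):
    """theta: axis-angle rotation (3) + translation (3), lidar -> camera."""
    R = Rotation.from_rotvec(theta[:3]).as_matrix()
    cam = pts @ R.T + theta[3:]
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                    # pinhole projection
    h, w = event_img.shape
    ok = ((cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w)
          & (uv[:, 1] >= 0) & (uv[:, 1] < h))
    counts = event_img[uv[ok, 1].astype(int), uv[ok, 0].astype(int)]
    return -mutual_information(intensity[ok], counts)

# extrinsics = minimize(neg_mi, np.zeros(6),
#                       args=(pts, intensity, event_img, K),
#                       method="Nelder-Mead")
```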
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 11:05:45 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 22:12:30 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Sep 2022 12:24:58 GMT"
},
{
"version": "v4",
"created": "Mon, 26 Sep 2022 15:21:57 GMT"
},
{
"version": "v5",
"created": "Tue, 21 Feb 2023 02:28:55 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Ta",
"Kevin",
""
],
[
"Bruggemann",
"David",
""
],
[
"Brödermann",
"Tim",
""
],
[
"Sakaridis",
"Christos",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.998544 |
2302.02338
|
Emilio Mart\'inez-Pa\~neda
|
L. Quinteros, E. Garc\'ia-Mac\'ias, E. Mart\'inez-Pa\~neda
|
Electromechanical phase-field fracture modelling of piezoresistive
CNT-based composites
| null | null |
10.1016/j.cma.2023.115941
| null |
cs.CE cond-mat.mtrl-sci physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel computational framework to simulate the electromechanical
response of self-sensing carbon nanotube (CNT)-based composites experiencing
fracture. The computational framework combines electrical-deformation-fracture
finite element modelling with a mixed micromechanics formulation. The latter is
used to estimate the constitutive properties of CNT-based composites, including
the elastic tensor, fracture energy, electrical conductivity, and linear
piezoresistive coefficients. These properties are inputted into a coupled
electro-structural finite element model, which simulates the evolution of
cracks based upon phase-field fracture. The coupled physical problem is solved
in a monolithic manner, exploiting the robustness and efficiency of a
quasi-Newton algorithm. 2D and 3D boundary value problems are simulated to
illustrate the potential of the modelling framework in assessing the influence
of defects on the electromechanical response of meso- and macro-scale smart
structures. Case studies aim at shedding light into the interplay between
fracture and the electromechanical material response and include parametric
analyses, validation against experiments and the simulation of complex cracking
conditions (multiple defects, crack merging). The presented numerical results
showcase the efficiency and robustness of the computational framework, as well
as its ability to model a large variety of structural configurations and damage
patterns. The deformation-electrical-fracture finite element code developed is
made freely available to download.
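For orientation, the standard (AT2-type) phase-field fracture functional that such frameworks build on is:

```latex
% elastic energy degraded by the phase field, plus the regularised
% fracture energy (the paper's coupled functional adds electrical terms):
\[
\Pi(\mathbf{u},\phi) \;=\; \int_\Omega (1-\phi)^2\,
\psi_0\!\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)\,\mathrm{d}V
\;+\; G_c \int_\Omega \left( \frac{\phi^{2}}{2\ell}
+ \frac{\ell}{2}\,\lvert\nabla\phi\rvert^{2} \right) \mathrm{d}V ,
\]
% with damage variable \phi in [0,1], critical energy release rate G_c,
% and phase-field length scale \ell.
```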
|
[
{
"version": "v1",
"created": "Sun, 5 Feb 2023 08:58:12 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Quinteros",
"L.",
""
],
[
"García-Macías",
"E.",
""
],
[
"Martínez-Pañeda",
"E.",
""
]
] |
new_dataset
| 0.996244 |
2302.07489
|
James F. O'Brien
|
Jessica K. Hodgins and James F. O'Brien and Jack Tumblin
|
Perception of Human Motion with Different Geometric Models
|
13 pages, 9 figures. A previous version of this paper (v1) appeared
in Graphics Interface 1997. This version of the paper (v2) appeared in IEEE
Transactions on Visualization and Computer Graphics, 4(4):101-113, December
1998. Alternate locations of this paper:
http://graphics.berkeley.edu/papers/Hodgins-PHM-1998-12 and
https://ieeexplore.ieee.org/document/765325
|
IEEE Transactions on Visualization and Computer Graphics,
4(4):101-113, December 1998
|
10.1109/2945.765325
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human figures have been animated using a variety of geometric models
including stick figures, polygonal models, and NURBS-based models with muscles,
flexible skin, or clothing. This paper reports on experimental results
indicating that a viewer's perception of motion characteristics is affected by
the geometric model used for rendering. Subjects were shown a series of paired
motion sequences and asked if the two motions in each pair were the same or
different. The motion sequences in each pair were rendered using the same
geometric model. For the three types of motion variation tested, sensitivity
scores indicate that subjects were better able to observe changes with the
polygonal model than they were with the stick figure model.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 06:29:34 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Feb 2023 03:53:05 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Hodgins",
"Jessica K.",
""
],
[
"O'Brien",
"James F.",
""
],
[
"Tumblin",
"Jack",
""
]
] |
new_dataset
| 0.998431 |
2302.08594
|
Zifan Yu
|
Zifan Yu, Meida Chen, Zhikang Zhang, Suya You and Fengbo Ren
|
TransUPR: A Transformer-based Uncertain Point Refiner for LiDAR Point
Cloud Semantic Segmentation
|
5 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this work, we target the problem of uncertain points refinement for
image-based LiDAR point cloud semantic segmentation (LiDAR PCSS). This problem
mainly results from the boundary-blurring problem of convolution neural
networks (CNNs) and quantitation loss of spherical projection, which are often
hard to avoid for common image-based LiDAR PCSS approaches. We propose a
plug-and-play transformer-based uncertain point refiner (TransUPR) to address
the problem. Through local feature aggregation, uncertain point localization,
and self-attention-based transformer design, TransUPR, integrated into an
existing range image-based LiDAR PCSS approach (e.g., CENet), achieves the
state-of-the-art performance (68.2% mIoU) on Semantic-KITTI benchmark, which
provides a performance improvement of 0.6% on the mIoU.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 21:38:36 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 19:53:43 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Yu",
"Zifan",
""
],
[
"Chen",
"Meida",
""
],
[
"Zhang",
"Zhikang",
""
],
[
"You",
"Suya",
""
],
[
"Ren",
"Fengbo",
""
]
] |
new_dataset
| 0.967444 |
2302.10202
|
Maja Schneider
|
Maja Schneider, Tobias Schelte, Felix Schmitz, Marco K\"orner
|
EuroCrops: All you need to know about the Largest Harmonised Open Crop
Dataset Across the European Union
|
11 pages, 3 figures, for associated dataset, see
https://github.com/maja601/EuroCrops and
https://www.doi.org/10.5281/zenodo.6866846 , submitted to Scientific Data
| null | null | null |
cs.OH
|
http://creativecommons.org/licenses/by/4.0/
|
EuroCrops contains geo-referenced polygons of agricultural croplands from 16
countries of the European Union (EU) as well as information on the respective
crop species grown there. These semantic annotations are derived from
self-declarations by farmers receiving subsidies under the Common Agricultural
Policy (CAP) of the European Commission (EC). Over the last 1.5 years, the
individual national crop datasets have been manually collected, the crop
classes have been translated into the English language and transferred into the
newly developed Hierarchical Crop and Agriculture Taxonomy (HCAT). EuroCrops is
publicly available under continuous improvement through an active user
community.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 10:35:32 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Schneider",
"Maja",
""
],
[
"Schelte",
"Tobias",
""
],
[
"Schmitz",
"Felix",
""
],
[
"Körner",
"Marco",
""
]
] |
new_dataset
| 0.999793 |
2302.10237
|
Lin Gao
|
Lin Gao, Jia-Mu Sun, Kaichun Mo, Yu-Kun Lai, Leonidas J. Guibas, Jie
Yang
|
SceneHGN: Hierarchical Graph Networks for 3D Indoor Scene Generation
with Fine-Grained Geometry
|
21 pages, 21 figures, Project: http://geometrylearning.com/scenehgn/
| null | null | null |
cs.GR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D indoor scenes are widely used in computer graphics, with applications
ranging from interior design to gaming to virtual and augmented reality. They
also contain rich information, including room layout, as well as furniture
type, geometry, and placement. High-quality 3D indoor scenes are highly
demanded while it requires expertise and is time-consuming to design
high-quality 3D indoor scenes manually. Existing research only addresses
partial problems: some works learn to generate room layout, and other works
focus on generating detailed structure and geometry of individual furniture
objects. However, these partial steps are related and should be addressed
together for optimal synthesis. We propose SCENEHGN, a hierarchical graph
network for 3D indoor scenes that takes into account the full hierarchy from
the room level to the object level, then finally to the object part level.
Therefore for the first time, our method is able to directly generate plausible
3D room content, including furniture objects with fine-grained geometry, and
their layout. To address the challenge, we introduce functional regions as
intermediate proxies between the room and object levels to make learning more
manageable. To ensure plausibility, our graph-based representation incorporates
both vertical edges connecting child nodes with parent nodes from different
levels, and horizontal edges encoding relationships between nodes at the same
level. Extensive experiments demonstrate that our method produces superior
generation results, even when comparing results of partial steps with
alternative methods that can only achieve these. We also demonstrate that our
method is effective for various applications such as part-level room editing,
room interpolation, and room generation by arbitrary room boundaries.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 15:31:59 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Gao",
"Lin",
""
],
[
"Sun",
"Jia-Mu",
""
],
[
"Mo",
"Kaichun",
""
],
[
"Lai",
"Yu-Kun",
""
],
[
"Guibas",
"Leonidas J.",
""
],
[
"Yang",
"Jie",
""
]
] |
new_dataset
| 0.999691 |
2302.10284
|
Jiannan Zhao
|
Feng Shuang, Yanpeng Zhu, Yupeng Xie, Lei Zhao, Quansheng Xie, Jiannan
Zhao, and Shigang Yue
|
OppLoD: the Opponency based Looming Detector, Model Extension of Looming
Sensitivity from LGMD to LPLC2
|
12 pages, 11 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Looming detection plays an important role in insect collision prevention
systems. As a capability vital to evolutionary survival, it has been extensively
studied in neuroscience and is attracting increasing research interest in
robotics due to its close relationship with collision detection and navigation.
Visual cues such as angular size, angular velocity, and expansion have been
widely studied for looming detection by means of optic flow or elementary
neural computing research. However, a critical visual motion cue,
radial-opponent-motion (ROM), has long been neglected because it is easily
confused with expansion. Recent research on the discovery of LPLC2, a
ROM-sensitive neuron in Drosophila, has revealed its ultra-selectivity because
it only responds to stimuli with focal, outward movement. This characteristic
of ROM sensitivity matches the demands of collision detection, because focal
outward motion is strongly associated with danger looming towards the center of
the observer. In this paper, we therefore investigate the potential to extend
the well-studied, image velocity-based neural model of the lobula giant
movement detector (LGMD) with ROM sensitivity, in order to enhance robustness
and accuracy at the same time. To achieve
this, we propose the mathematical definition of ROM and its main property, the
radial motion opponency (RMO). Then, a synaptic neuropile that analogizes the
synaptic processing of LPLC2 is proposed in the form of lateral inhibition and
attention. Thus, our proposed model is the first to perform both image velocity
selectivity and ROM sensitivity. Systematic experiments are conducted to
exhibit the huge potential of the proposed bio-inspired looming detector.
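One plausible formalisation of the opponency (ours, for illustration; the paper develops its own definition) is, for an optic-flow field $v(x)$ and a focus $x_0$:

```latex
% outward radial flow minus inward radial flow about x_0, with
% [z]_+ = max(0, z):
\[
\mathrm{RMO}(x_0) \;=\; \int_{\Omega}\big[\,v(x)\cdot\hat{r}(x)\,\big]_{+}\,\mathrm{d}x
\;-\; \int_{\Omega}\big[\,-\,v(x)\cdot\hat{r}(x)\,\big]_{+}\,\mathrm{d}x,
\qquad
\hat{r}(x) = \frac{x - x_0}{\lVert x - x_0\rVert}.
\]
% Pure expansion about x_0 maximises the opponency, while translation has
% no consistent radial sign and largely cancels.
```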
|
[
{
"version": "v1",
"created": "Fri, 10 Feb 2023 03:53:12 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Shuang",
"Feng",
""
],
[
"Zhu",
"Yanpeng",
""
],
[
"Xie",
"Yupeng",
""
],
[
"Zhao",
"Lei",
""
],
[
"Xie",
"Quansheng",
""
],
[
"Zhao",
"Jiannan",
""
],
[
"Yue",
"Shigang",
""
]
] |
new_dataset
| 0.975663 |
2302.10352
|
Chakkrit Tantithamthavorn
|
Saranya Alagarsamy, Chakkrit Tantithamthavorn, Aldeida Aleti
|
A3Test: Assertion-Augmented Automated Test Case Generation
|
Under Review at ACM Transactions on Software Engineering and
Methodology
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Test case generation is an important activity, yet a time-consuming and
laborious task. Recently, AthenaTest -- a deep learning approach for generating
unit test cases -- was proposed. However, AthenaTest can generate less than
one-fifth of the test cases correctly, due to a lack of assertion knowledge and
test signature verification. In this paper, we propose A3Test, a DL-based test
case generation approach that is augmented by assertion knowledge with a
mechanism to verify naming consistency and test signatures. A3Test leverages
domain adaptation principles, where the goal is to adapt the existing
knowledge from an assertion generation task to the test case generation task.
We also introduce a verification approach to verify naming consistency and test
signatures. Through an evaluation of 5,278 focal methods from the Defects4j
dataset, we find that our A3Test (1) achieves 147% more correct test cases and
15% more method coverage, with a lower number of generated test cases than
AthenaTest; (2) still outperforms the existing pre-trained models for the test
case generation task; (3) contributes substantially to performance improvement
via our own proposed assertion pre-training and the verification components;
(4) is 97.2% faster than AthenaTest while being more accurate.
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 22:41:47 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Alagarsamy",
"Saranya",
""
],
[
"Tantithamthavorn",
"Chakkrit",
""
],
[
"Aleti",
"Aldeida",
""
]
] |
new_dataset
| 0.999035 |
2302.10353
|
Murat Kuscu Dr
|
M. Okan Araz, Ahmet R. Emirdagi, M. Serkan Kopuzlu, Murat Kuscu
|
Ratio Shift Keying Modulation for Time-Varying Molecular Communication
Channels
|
13 pages, 7 figures. arXiv admin note: substantial text overlap with
arXiv:2205.13317
| null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Molecular Communications (MC) is a bio-inspired communication technique that
uses molecules to encode and transfer information. Many efforts have been
devoted to developing novel modulation techniques for MC based on various
distinguishable characteristics of molecules, such as their concentrations or
types. In this paper, we investigate a particular modulation scheme called
Ratio Shift Keying (RSK), where the information is encoded in the concentration
ratio of two different types of molecules. RSK modulation is hypothesized to
enable accurate information transfer in dynamic MC scenarios where the
time-varying channel characteristics affect both types of molecules equally. To
validate this hypothesis, we first conduct an information-theoretical analysis
of RSK modulation and derive the capacity of the end-to-end MC channel where
the receiver estimates concentration ratio based on ligand-receptor binding
statistics in an optimal or suboptimal manner. We then analyze the error
performance of RSK modulation in a practical time-varying MC scenario, that is
mobile MC, in which both the transmitter and the receiver undergo
diffusion-based propagation. Our numerical and analytical results, obtained for
varying levels of similarity between the ligand types used for ratio-encoding,
and varying number of receptors, show that RSK can significantly outperform the
most commonly considered MC modulation technique, concentration shift keying
(CSK), in dynamic MC scenarios.
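The robustness intuition can be stated in one line (our notation): if a time-varying channel gain $g(t)$ attenuates both molecule types equally, the encoded ratio survives,

```latex
\[
\hat{r} \;=\; \frac{g(t)\,c_A}{g(t)\,c_A + g(t)\,c_B}
\;=\; \frac{c_A}{c_A + c_B},
\]
% and the receiver can estimate r from the numbers n_A, n_B of receptors
% bound by each ligand type, e.g. \hat{r} = n_A / (n_A + n_B).
```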
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 22:42:47 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Araz",
"M. Okan",
""
],
[
"Emirdagi",
"Ahmet R.",
""
],
[
"Kopuzlu",
"M. Serkan",
""
],
[
"Kuscu",
"Murat",
""
]
] |
new_dataset
| 0.992379 |
2302.10366
|
Tianyin Xu
|
Jinghao Jia and YiFei Zhu and Dan Williams and Andrea Arcangeli and
Claudio Canella and Hubertus Franke and Tobin Feldman-Fitzthum and Dimitrios
Skarlatos and Daniel Gruss and Tianyin Xu
|
Programmable System Call Security with eBPF
| null | null | null | null |
cs.OS cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
System call filtering is a widely used security mechanism for protecting a
shared OS kernel against untrusted user applications. However, existing system
call filtering techniques either are too expensive due to the context switch
overhead imposed by userspace agents, or lack sufficient programmability to
express advanced policies. Seccomp, Linux's system call filtering module, is
widely used by modern container technologies, mobile apps, and system
management services. Despite the adoption of the classic BPF language (cBPF),
security policies in Seccomp are mostly limited to static allow lists,
primarily because cBPF does not support stateful policies. Consequently, many
essential security features cannot be expressed precisely and/or require kernel
modifications.
In this paper, we present a programmable system call filtering mechanism,
which enables more advanced security policies to be expressed by leveraging the
extended BPF language (eBPF). More specifically, we create a new Seccomp eBPF
program type, exposing, modifying or creating new eBPF helper functions to
safely manage filter state, access kernel and user state, and utilize
synchronization primitives. Importantly, our system integrates with existing
kernel privilege and capability mechanisms, enabling unprivileged users to
install advanced filters safely. Our evaluation shows that our eBPF-based
filtering can enhance existing policies (e.g., reducing the attack surface of
the early execution phase by up to 55.4% for temporal specialization), mitigate
real-world vulnerabilities, and accelerate filters.
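For contrast, the following sketch shows the kind of static allow list that stock Seccomp policies amount to, here via the libseccomp Python bindings (API names per those bindings; treat as illustrative). Stateful policies such as "allow openat only before the first execve" are precisely what such lists cannot express and what eBPF filters target.

```python
import sys
from seccomp import SyscallFilter, ALLOW, KILL

f = SyscallFilter(defaction=KILL)          # default action: kill the task
for name in ("read", "write", "exit", "exit_group", "rt_sigreturn"):
    f.add_rule(ALLOW, name)                # purely static, no filter state
f.load()                                   # install the filter in the kernel

sys.stdout.write("still alive\n")          # write(2) is on the allow list
```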
|
[
{
"version": "v1",
"created": "Mon, 20 Feb 2023 23:54:04 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Jia",
"Jinghao",
""
],
[
"Zhu",
"YiFei",
""
],
[
"Williams",
"Dan",
""
],
[
"Arcangeli",
"Andrea",
""
],
[
"Canella",
"Claudio",
""
],
[
"Franke",
"Hubertus",
""
],
[
"Feldman-Fitzthum",
"Tobin",
""
],
[
"Skarlatos",
"Dimitrios",
""
],
[
"Gruss",
"Daniel",
""
],
[
"Xu",
"Tianyin",
""
]
] |
new_dataset
| 0.997137 |
2302.10381
|
Maurice HT Ling
|
Yong-Yao Ng, Maurice HT Ling
|
Electronic Laboratory Notebook on Web2py Framework
| null |
The Python Papers 5(3): 7 (2010)
| null | null |
cs.DL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Proper experimental record-keeping is an important cornerstone in research
and development for the purpose of auditing. The gold standard of
record-keeping is based on the judicious use of physical, permanent notebooks.
However, advances in technology have resulted in large amounts of electronic
records, making it virtually impossible to maintain a full set of records in
physical notebooks. Electronic laboratory notebook systems aim to meet the
stringency for keeping records electronically. This manuscript describes
CyNote, which is an electronic laboratory notebook system compliant with 21 CFR
Part 11 controls on electronic records, the requirements set by the US Food and
Drug Administration for electronic records. CyNote is implemented on the web2py
framework and adheres to the architectural paradigm of
model-view-controller (MVC), allowing for extension modules to be built for
CyNote. CyNote is available at http://cynote.sf.net.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 01:03:50 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Ng",
"Yong-Yao",
""
],
[
"Ling",
"Maurice HT",
""
]
] |
new_dataset
| 0.981117 |
2302.10465
|
Wenxuan Guo
|
Meng Zhang, Wenxuan Guo, Bohao Fan, Yifan Chen, Jianjiang Feng and Jie
Zhou
|
A Flexible Multi-view Multi-modal Imaging System for Outdoor Scenes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-view imaging systems enable uniform coverage of 3D space and reduce the
impact of occlusion, which is beneficial for 3D object detection and tracking
accuracy. However, existing imaging systems built with multi-view cameras or
depth sensors are limited by the small applicable scene and complicated
composition. In this paper, we propose a wireless multi-view multi-modal 3D
imaging system generally applicable to large outdoor scenes, which consists of
a master node and several slave nodes. Multiple spatially distributed slave
nodes equipped with cameras and LiDARs are connected to form a wireless sensor
network. While providing flexibility and scalability, the system applies
automatic spatio-temporal calibration techniques to obtain accurate 3D
multi-view multi-modal data. This system is the first imaging system that
integrates mutli-view RGB cameras and LiDARs in large outdoor scenes among
existing 3D imaging systems. We perform point clouds based 3D object detection
and long-term tracking using the 3D imaging dataset collected by this system.
The experimental results show that multi-view point clouds greatly improve 3D
object detection and tracking accuracy regardless of complex and various
outdoor environments.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 06:14:05 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Zhang",
"Meng",
""
],
[
"Guo",
"Wenxuan",
""
],
[
"Fan",
"Bohao",
""
],
[
"Chen",
"Yifan",
""
],
[
"Feng",
"Jianjiang",
""
],
[
"Zhou",
"Jie",
""
]
] |
new_dataset
| 0.99824 |
2302.10493
|
Xun Zhu
|
Xun Zhu and Yutong Xiong and Ming Wu and Gaozhen Nie and Bin Zhang and
Ziheng Yang
|
Weather2K: A Multivariate Spatio-Temporal Benchmark Dataset for
Meteorological Forecasting Based on Real-Time Observation Data from Ground
Weather Stations
| null | null | null | null |
cs.LG cs.NA math.NA physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weather forecasting is one of the cornerstones of meteorological work. In
this paper, we present a new benchmark dataset named Weather2K, which aims to
make up for the deficiencies of existing weather forecasting datasets in terms
of real-time availability, reliability, and diversity, as well as the key
bottleneck of data quality. To be specific, our Weather2K features the
following aspects: 1) Reliable and real-time data. The data is collected hourly from 2,130 ground
weather stations covering an area of 6 million square kilometers. 2)
Multivariate meteorological variables. 20 meteorological factors and 3
constants for position information are provided with a length of 40,896 time
steps. 3) Applicable to diverse tasks. We conduct a set of baseline tests on
time series forecasting and spatio-temporal forecasting. To the best of our
knowledge, our Weather2K is the first attempt to tackle weather forecasting
task by taking full advantage of the strengths of observation data from ground
weather stations. Based on Weather2K, we further propose Meteorological Factors
based Multi-Graph Convolution Network (MFMGCN), which can effectively construct
the intrinsic correlation among geographic locations based on meteorological
factors. Extensive experiments show that MFMGCN improves both the forecasting
performance and temporal robustness. We hope our Weather2K can significantly
motivate researchers to develop efficient and accurate algorithms to advance
the task of weather forecasting. The dataset can be available at
https://github.com/bycnfz/weather2k/.
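A minimal sketch of a multi-graph convolution in this spirit (our illustration, not the paper's exact architecture): one adjacency built from station distance and one from meteorological-factor similarity, fused as a sum of per-graph branches.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalised adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def multi_graph_conv(X, adjs, weights):
    """X: (stations, features); adjs, weights: one pair per graph."""
    out = sum(normalize_adj(A) @ X @ W for A, W in zip(adjs, weights))
    return np.maximum(out, 0.0)            # ReLU

# A_dist from pairwise station distances, A_met from correlations of the
# 20 meteorological factors; the weight matrices W are learned in practice.
```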
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 07:46:08 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Zhu",
"Xun",
""
],
[
"Xiong",
"Yutong",
""
],
[
"Wu",
"Ming",
""
],
[
"Nie",
"Gaozhen",
""
],
[
"Zhang",
"Bin",
""
],
[
"Yang",
"Ziheng",
""
]
] |
new_dataset
| 0.999855 |
2302.10511
|
Yuanzhu Gan
|
Zizhang Wu, Guilian Chen, Yuanzhu Gan, Lei Wang, Jian Pu
|
MVFusion: Multi-View 3D Object Detection with Semantic-aligned Radar and
Camera Fusion
|
Accepted by ICRA 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-view radar-camera fused 3D object detection provides a farther
detection range and more helpful features for autonomous driving, especially
under adverse weather. The current radar-camera fusion methods deliver kinds of
designs to fuse radar information with camera data. However, these fusion
approaches usually adopt the straightforward concatenation operation between
multi-modal features, which ignores the semantic alignment with radar features
and sufficient correlations across modalities. In this paper, we present MVFusion,
a novel Multi-View radar-camera Fusion method to achieve semantic-aligned radar
features and enhance the cross-modal information interaction. To achieve this, we
inject the semantic alignment into the radar features via the semantic-aligned
radar encoder (SARE) to produce image-guided radar features. Then, we propose
the radar-guided fusion transformer (RGFT) to fuse our radar and image features
to strengthen the correlation between the two modalities at the global scope via the
cross-attention mechanism. Extensive experiments show that MVFusion achieves
state-of-the-art performance (51.7% NDS and 45.3% mAP) on the nuScenes dataset.
We shall release our code and trained networks upon publication.
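A minimal sketch of cross-attention fusion in this spirit (ours; the paper's RGFT block differs in detail): image tokens attend to radar tokens so that radar cues reweight image features at global scope.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, radar_tokens):
        # queries: image tokens; keys/values: radar tokens
        fused, _ = self.attn(img_tokens, radar_tokens, radar_tokens)
        return self.norm(img_tokens + fused)       # residual + norm

# img_tokens:   (B, H*W, 256) flattened camera features
# radar_tokens: (B, N,   256) encoded radar points
```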
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 08:25:50 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Wu",
"Zizhang",
""
],
[
"Chen",
"Guilian",
""
],
[
"Gan",
"Yuanzhu",
""
],
[
"Wang",
"Lei",
""
],
[
"Pu",
"Jian",
""
]
] |
new_dataset
| 0.985513 |
2302.10549
|
Yuanzhu Gan
|
Zizhang Wu, Yuanzhu Gan, Lei Wang, Guilian Chen, Jian Pu
|
MonoPGC: Monocular 3D Object Detection with Pixel Geometry Contexts
|
Accepted by ICRA 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular 3D object detection reveals an economical but challenging task in
autonomous driving. Recently, center-based monocular methods have developed
rapidly, offering a good trade-off between speed and accuracy; they usually
depend on the object center's depth estimation via 2D features. However, the
visual semantic features that lack sufficient pixel geometry information may
limit the quality of depth clues for spatial 3D detection tasks. To alleviate
this, we propose MonoPGC, a novel end-to-end Monocular 3D object detection
framework with rich Pixel Geometry Contexts. We introduce the pixel depth
estimation as our auxiliary task and design depth cross-attention pyramid
module (DCPM) to inject local and global depth geometry knowledge into visual
features. In addition, we present the depth-space-aware transformer (DSAT) to
integrate 3D space position and depth-aware features efficiently. Besides, we
design a novel depth-gradient positional encoding (DGPE) to bring more distinct
pixel geometry contexts into the transformer for better object detection.
Extensive experiments demonstrate that our method achieves the state-of-the-art
performance on the KITTI dataset.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 09:21:58 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Wu",
"Zizhang",
""
],
[
"Gan",
"Yuanzhu",
""
],
[
"Wang",
"Lei",
""
],
[
"Chen",
"Guilian",
""
],
[
"Pu",
"Jian",
""
]
] |
new_dataset
| 0.99975 |
2302.10556
|
Joaquim Borges
|
J. Borges, D. V. Zinoviev and V. A. Zinoviev
|
On the classification of completely regular codes with covering radius
two and antipodal dual
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We classify all linear completely regular codes which have covering radius
$\rho = 2$ and whose duals are antipodal. For this, we first show several
properties of such dual codes, which are two-weight codes.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 09:29:46 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Borges",
"J.",
""
],
[
"Zinoviev",
"D. V.",
""
],
[
"Zinoviev",
"V. A.",
""
]
] |
new_dataset
| 0.984994 |
2302.10576
|
Michael F\"arber
|
Michael F\"arber
|
Denotational Semantics and a Fast Interpreter for jq
|
Submitted to OOPSLA 2023
| null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
jq is a widely used tool that provides a programming language to manipulate
JSON data. However, its semantics are currently only specified by its
implementation, making it difficult to reason about its behaviour. To this end,
I provide a syntax and denotational semantics for a subset of the jq language.
In particular, the semantics provide a new way to interpret updates. I
implement an extended version of the semantics in a novel interpreter for the
jq language called jaq. Although jaq uses a significantly simpler approach to
execute jq programs than jq, jaq is faster than jq on ten out of thirteen
benchmarks.
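To give the flavour: a jq filter denotes a function from one input value to a stream of output values. A fragment in this style (our notation; the full semantics also covers errors and updates):

```latex
% requires \usepackage{stmaryrd} for \llbracket / \rrbracket
\begin{align*}
\llbracket\, .\, \rrbracket\, v &= \langle v \rangle \\
\llbracket\, f \mid g\, \rrbracket\, v &=
  \mathop{\mathrm{concat}}\limits_{w \in \llbracket f \rrbracket v} \llbracket g \rrbracket\, w \\
\llbracket\, f, g\, \rrbracket\, v &=
  \llbracket f \rrbracket\, v \;\mathbin{+\!\!+}\; \llbracket g \rrbracket\, v \\
\llbracket\, .[\,]\, \rrbracket\, [v_1,\dots,v_n] &= \langle v_1, \dots, v_n \rangle
\end{align*}
```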
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 10:13:20 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Färber",
"Michael",
""
]
] |
new_dataset
| 0.997289 |
2302.10595
|
Pascal Lenzner
|
\'Agnes Cseh, Pascal F\"uhrlich, Pascal Lenzner
|
The Swiss Gambit
| null | null | null | null |
cs.GT cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In each round of a Swiss-system tournament, players of similar score are
paired against each other. An intentional early loss therefore might lead to
weaker opponents in later rounds and thus to a better final tournament result -
a phenomenon known as the Swiss Gambit. To the best of our knowledge it is an
open question whether this strategy can actually work.
This paper provides answers based on an empirical agent-based analysis for
the most prominent application area of the Swiss-system format, namely chess
tournaments. We simulate realistic tournaments by employing the official FIDE
pairing system for computing the player pairings in each round. We show that
even though gambits are widely possible in Swiss-system chess tournaments,
profiting from them requires a high degree of predictability of match results.
Moreover, even if a Swiss Gambit succeeds, the obtained improvement in the
final ranking is limited. Our experiments prove that counting on a Swiss Gambit
is indeed a lot more of a risky gambit than a reliable strategy to improve the
final rank.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 10:56:33 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Cseh",
"Ágnes",
""
],
[
"Führlich",
"Pascal",
""
],
[
"Lenzner",
"Pascal",
""
]
] |
new_dataset
| 0.99355 |
2302.10641
|
Masato Fujitake
|
Masato Fujitake
|
A3S: Adversarial learning of semantic representations for Scene-Text
Spotting
|
Accepted to ICASSP 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene-text spotting is a task that detects text regions in natural scene
images and recognizes their characters simultaneously. It has attracted much
attention in recent years due to its wide applications. Existing research has
mainly focused on improving text region detection, not text recognition. Thus,
while detection accuracy is improved, the end-to-end accuracy is insufficient.
Text in natural scene images tends not to be a random string of characters but
a meaningful one, such as a word. Therefore, we propose adversarial
learning of semantic representations for scene text spotting (A3S) to improve
end-to-end accuracy, including text recognition. A3S simultaneously predicts
semantic features in the detected text area instead of only performing text
recognition based on existing visual features. Experimental results on publicly
available datasets show that the proposed method achieves better accuracy than
other methods.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 12:59:18 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Fujitake",
"Masato",
""
]
] |
new_dataset
| 0.986192 |
2302.10645
|
Malte Pedersen
|
Malte Pedersen, Daniel Lehotsk\'y, Ivan Nikolov, and Thomas B.
Moeslund
|
BrackishMOT: The Brackish Multi-Object Tracking Dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
There exist no publicly available annotated underwater multi-object tracking
(MOT) datasets captured in turbid environments. To remedy this we propose the
BrackishMOT dataset with focus on tracking schools of small fish, which is a
notoriously difficult MOT task. BrackishMOT consists of 98 sequences captured
in the wild. Alongside the novel dataset, we present baseline results by
training a state-of-the-art tracker. Additionally, we propose a framework for
creating synthetic sequences in order to expand the dataset. The framework
consists of animated fish models and realistic underwater environments. We
analyse the effects of including synthetic data during training and show that a
combination of real and synthetic underwater training data can enhance tracking
performance. Links to code and data can be found at
https://www.vap.aau.dk/brackishmot
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 13:02:36 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Pedersen",
"Malte",
""
],
[
"Lehotský",
"Daniel",
""
],
[
"Nikolov",
"Ivan",
""
],
[
"Moeslund",
"Thomas B.",
""
]
] |
new_dataset
| 0.99958 |
2302.10646
|
Hisaichi Shibata
|
Hisaichi Shibata, Soichiro Miki, Yuta Nakamura
|
Playing the Werewolf game with artificial intelligence for language
understanding
| null | null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Werewolf game is a social deduction game based on free natural language
communication, in which players try to deceive others in order to survive. An
important feature of this game is that a large portion of the conversations are
false information, and the behavior of artificial intelligence (AI) in such a
situation has not been widely investigated. The purpose of this study is to
develop an AI agent that can play Werewolf through natural language
conversations. First, we collected game logs from 15 human players. Next, we
fine-tuned a Transformer-based pretrained language model to construct a value
network that predicts the posterior probability of winning at any given phase
of the game, conditioned on a candidate for the next action. We then developed
an AI agent that can interact with humans and choose the best voting target on
the basis of its probability from the value network. Lastly, we evaluated the
performance of the agent by having it actually play the game with human
players. We found that our AI agent, Deep Wolf, could play Werewolf as
competitively as average human players in a villager or a betrayer role,
whereas Deep Wolf was inferior to human players in a werewolf or a seer role.
These results suggest that current language models have the capability to
suspect what others are saying, tell a lie, or detect lies in conversations.
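
A minimal sketch of such a value network, assuming a generic Transformer
encoder over tokenized game logs rather than the authors' specific pretrained
model; all dimensions, names, and the mean-pooling readout are illustrative.

    import torch
    import torch.nn as nn

    class ValueNetwork(nn.Module):
        """Maps a tokenized (game log + candidate next action) sequence to a
        win probability; the encoder stands in for the fine-tuned LM."""
        def __init__(self, vocab_size=30000, d_model=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, 1)

        def forward(self, token_ids):                # (batch, seq_len)
            h = self.encoder(self.embed(token_ids))  # (batch, seq_len, d_model)
            return torch.sigmoid(self.head(h.mean(dim=1))).squeeze(-1)

    net = ValueNetwork()
    tokens = torch.randint(0, 30000, (8, 128))  # dummy batch of encoded logs
    won = torch.randint(0, 2, (8,)).float()     # label: did the agent's side win?
    loss = nn.functional.binary_cross_entropy(net(tokens), won)
    loss.backward()
    # At play time, each candidate voting target's hypothetical action is
    # scored and the one with the highest predicted win probability is chosen.
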
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 13:03:20 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Shibata",
"Hisaichi",
""
],
[
"Miki",
"Soichiro",
""
],
[
"Nakamura",
"Yuta",
""
]
] |
new_dataset
| 0.994989 |
2302.10670
|
Jan Philipp W\"achter
|
Maximilian Kotowsky and Jan Philipp W\"achter
|
The Word Problem for Finitary Automaton Groups
| null | null | null | null |
cs.FL math.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A finitary automaton group is a group generated by an invertible,
deterministic finite-state letter-to-letter transducer whose only cycles are
self-loops at an identity state. We show that, for this presentation of finite
groups, the uniform word problem is coNP-complete. Here, the input consists of
a finitary automaton together with a finite state sequence and the question is
whether the sequence acts trivially on all input words. Additionally, we also
show that the respective compressed word problem, where the state sequence is
given as a straight-line program, is PSPACE-complete. In both cases, we give a
direct reduction from the satisfiability problem for (quantified) Boolean
formulae.
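
The action of a state sequence can be checked directly by brute force
(exponential in the depth, in line with the problem being hard); a minimal
sketch with an invented two-state finitary automaton over the binary alphabet.

    from itertools import product

    # A Mealy automaton: delta[q][a] is the next state, lam[q][a] the output.
    # State "e" is the identity state; "q" flips the first letter, then halts.
    delta = {"e": {"0": "e", "1": "e"}, "q": {"0": "e", "1": "e"}}
    lam = {"e": {"0": "0", "1": "1"}, "q": {"0": "1", "1": "0"}}

    def act(state, word):
        """Apply one automaton state to a word, letter by letter."""
        out = []
        for a in word:
            out.append(lam[state][a])
            state = delta[state][a]
        return "".join(out)

    def acts_trivially(sequence, alphabet="01", depth=4):
        """Does the state sequence (applied left to right) fix every word up
        to the given length? For a finitary automaton, a depth equal to the
        longest path to the identity state suffices."""
        for n in range(1, depth + 1):
            for w in map("".join, product(alphabet, repeat=n)):
                v = w
                for s in sequence:
                    v = act(s, v)
                if v != w:
                    return False
        return True

    print(acts_trivially(["q"]))       # False: q flips the first letter
    print(acts_trivially(["q", "q"]))  # True: the flip is an involution
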
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 13:39:54 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Kotowsky",
"Maximilian",
""
],
[
"Wächter",
"Jan Philipp",
""
]
] |
new_dataset
| 0.997146 |
2302.10676
|
Jonatan Krolikowski
|
Jonatan Krolikowski, Zied Ben Houidi, Dario Rossi
|
User-aware WLAN Transmit Power Control in the Wild
| null | null | null | null |
cs.NI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Wireless Local Area Networks (WLANs), Access point (AP) transmit power
influences (i) received signal quality for users and thus user throughput, (ii)
user association and thus load across APs and (iii) AP coverage ranges and thus
interference in the network. Despite decades of academic research, transmit
power levels are still, in practice, statically assigned to satisfy uniform
coverage objectives. Yet each network comes with its unique distribution of
users in space, calling for a power control that adapts to users' probabilities
of presence, for example, placing the areas with higher interference
probabilities where user density is the lowest. Although nice on paper, putting
this simple idea into practice comes with a number of challenges, with gains that
are difficult to estimate, if any at all. This paper is the first to address
these challenges and evaluate in a production network serving thousands of
daily users the benefits of a user-aware transmit power control system. Along
the way, we contribute a novel approach to reason about user densities of
presence from historical IEEE 802.11k data, as well as a new machine learning
approach to impute missing signal-strength measurements. Results of a thorough
experimental campaign show feasibility and quantify the gains: compared to
state-of-the-art solutions, the new system can increase the median signal
strength by 15 dB while decreasing airtime interference at the same time. This
comes at an affordable cost of a 5 dB decrease in uplink signal strength due to
the lack of terminal cooperation.
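
The imputation model itself is not spelled out in the abstract; as a hedged
baseline sketch, missing per-AP signal-strength readings can be filled in from
similar measurement reports with scikit-learn's KNN imputer (the data layout
below is invented).

    import numpy as np
    from sklearn.impute import KNNImputer

    # Rows: client measurement reports (e.g., from IEEE 802.11k beacon reports).
    # Columns: RSSI in dBm as seen from each AP; NaN = AP not heard/reported.
    rssi = np.array([
        [-45.0, -70.0, np.nan],
        [-48.0, np.nan, -80.0],
        [np.nan, -72.0, -78.0],
        [-50.0, -68.0, -82.0],
    ])

    # Impute each missing AP reading from the most similar reports, where
    # similarity is computed on the co-observed APs.
    imputer = KNNImputer(n_neighbors=2, weights="distance")
    print(imputer.fit_transform(rssi).round(1))
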
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 13:51:05 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Krolikowski",
"Jonatan",
""
],
[
"Houidi",
"Zied Ben",
""
],
[
"Rossi",
"Dario",
""
]
] |
new_dataset
| 0.993116 |
2302.10786
|
George Boateng
|
George Boateng, Samuel John, Samuel Boateng, Philemon Badu, Patrick
Agyeman-Budu and Victor Kumbol
|
Real-World Deployment and Evaluation of Kwame for Science, An AI
Teaching Assistant for Science Education in West Africa
|
18 pages, under review at International Journal on Artificial
Intelligence in Education
| null | null | null |
cs.CL cs.CY cs.HC cs.IR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Africa has a high student-to-teacher ratio which limits students' access to
teachers for learning support such as educational question answering. In this
work, we extended Kwame, our previous AI teaching assistant for coding
education, adapted it for science education, and deployed it as a web app.
Kwame for Science provides passages from well-curated knowledge sources and
related past national exam questions as answers to questions from students
based on the Integrated Science subject of the West African Senior Secondary
Certificate Examination (WASSCE). Furthermore, students can view past national
exam questions along with their answers and filter by year, question type
(objectives, theory, and practicals), and topics that were automatically
categorized by a topic detection model which we developed (91% unweighted
average recall). We deployed Kwame for Science in the real world over 8 months
and had 750 users across 32 countries (15 in Africa) and 1.5K questions asked.
Our evaluation showed an 87.2% top-3 accuracy (n=109 questions), implying that
Kwame for Science has a high chance of giving at least one useful answer among
the 3 displayed. We categorized the reasons the model incorrectly answered
questions to provide insights for future improvements. We also share challenges
and lessons with the development, deployment, and human-computer interaction
component of such a tool to enable other researchers to deploy similar tools.
With a first-of-its-kind tool within the African context, Kwame for Science has
the potential to enable the delivery of scalable, cost-effective, and quality
remote education to millions of people across Africa.
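
A hedged sketch of the retrieval step, using a TF-IDF baseline rather than the
deployed model, together with the top-3 accuracy metric reported above; the
passages and question are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    passages = [
        "Photosynthesis converts light energy into chemical energy in plants.",
        "Ohm's law relates voltage, current and resistance: V = IR.",
        "Mitosis is the division of a cell into two identical daughter cells.",
    ]
    vec = TfidfVectorizer(stop_words="english")
    P = vec.fit_transform(passages)

    def top_k(question, k=3):
        scores = cosine_similarity(vec.transform([question]), P).ravel()
        return scores.argsort()[::-1][:k]

    def top3_accuracy(questions, gold_idx):
        hits = sum(g in top_k(q, 3) for q, g in zip(questions, gold_idx))
        return hits / len(questions)

    print(top_k("How do plants turn sunlight into energy?"))  # passage 0 first
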
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 16:20:17 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Boateng",
"George",
""
],
[
"John",
"Samuel",
""
],
[
"Boateng",
"Samuel",
""
],
[
"Badu",
"Philemon",
""
],
[
"Agyeman-Budu",
"Patrick",
""
],
[
"Kumbol",
"Victor",
""
]
] |
new_dataset
| 0.962508 |
2302.10808
|
Lu Liu
|
Lu Liu, Lei Zhou, Yuhan Dong
|
Bokeh Rendering Based on Adaptive Depth Calibration Network
|
6 pages, 6 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Bokeh rendering is a popular and effective technique used in photography to
create an aesthetically pleasing effect. It is widely used to blur the
background and highlight the subject in the foreground, thereby drawing the
viewer's attention to the main focus of the image. In traditional digital
single-lens reflex cameras (DSLRs), this effect is achieved through the use of
a large aperture lens. This allows the camera to capture images with shallow
depth-of-field, in which only a small area of the image is in sharp focus,
while the rest of the image is blurred. However, the hardware embedded in
mobile phones is typically much smaller and more limited than that found in
DSLRs. Consequently, mobile phones are not able to capture natural shallow
depth-of-field photos, which can be a significant limitation for mobile
photography. To address this challenge, in this paper, we propose a novel
method for bokeh rendering using the Vision Transformer, a recent and powerful
deep learning architecture. Our approach employs an adaptive depth calibration
network that acts as a confidence level to compensate for errors in monocular
depth estimation. This network is used to supervise the rendering process in
conjunction with depth information, allowing for the generation of high-quality
bokeh images at high resolutions. Our experiments demonstrate that our proposed
method outperforms state-of-the-art methods, achieving about a 24.7%
improvement in LPIPS and obtaining higher PSNR scores.
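
A minimal sketch of depth-guided bokeh compositing in which a confidence map
down-weights unreliable monocular depth, mirroring the calibration idea above;
this is not the paper's network, and the layer-blending approximation is an
assumption.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def render_bokeh(image, depth, confidence, focus_depth, max_sigma=8.0):
        """image: HxWx3 floats in [0,1]; depth, confidence: HxW in [0,1].
        Pixels far from the focal plane get a stronger blur; where depth
        confidence is low, the blur falls back toward a mild default."""
        sigma_map = max_sigma * np.abs(depth - focus_depth)
        sigma_map = confidence * sigma_map + (1 - confidence) * 0.5 * max_sigma

        # Cheap approximation: pre-blur a few layers, blend per pixel.
        levels = np.linspace(0.0, max_sigma, 5)
        layers = [image] + [gaussian_filter(image, sigma=(s, s, 0))
                            for s in levels[1:]]
        idx = np.clip(np.searchsorted(levels, sigma_map), 1, len(levels) - 1)
        w = (sigma_map - levels[idx - 1]) / (levels[idx] - levels[idx - 1])
        out = np.empty_like(image)
        for c in range(3):
            lo = np.choose(idx - 1, [l[..., c] for l in layers])
            hi = np.choose(idx, [l[..., c] for l in layers])
            out[..., c] = (1 - w) * lo + w * hi
        return np.clip(out, 0.0, 1.0)

    img = np.random.rand(64, 64, 3)
    d = np.tile(np.linspace(0, 1, 64), (64, 1))  # depth ramp
    conf = np.full((64, 64), 0.9)
    print(render_bokeh(img, d, conf, focus_depth=0.2).shape)
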
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 16:33:51 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Liu",
"Lu",
""
],
[
"Zhou",
"Lei",
""
],
[
"Dong",
"Yuhan",
""
]
] |
new_dataset
| 0.997793 |
2302.10813
|
Zeyu Xiong
|
Zeyu Xiong, Daizong Liu, Pan Zhou, Jiahao Zhu
|
Tracking Objects and Activities with Attention for Temporal Sentence
Grounding
|
accepted by ICASSP2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal sentence grounding (TSG) aims to localize the temporal segment which
is semantically aligned with a natural language query in an untrimmed
video.Most existing methods extract frame-grained features or object-grained
features by 3D ConvNet or detection network under a conventional TSG framework,
failing to capture the subtle differences between frames or to model the
spatio-temporal behavior of core persons/objects. In this paper, we introduce a
new perspective to address the TSG task by tracking pivotal objects and
activities to learn more fine-grained spatio-temporal behaviors. Specifically,
we propose a novel Temporal Sentence Tracking Network (TSTNet), which contains
(A) a Cross-modal Targets Generator to generate multi-modal templates and
search space, filtering objects and activities, and (B) a Temporal Sentence
Tracker to track multi-modal targets for modeling the targets' behavior and to
predict the query-related segment. Extensive experiments and comparisons with
state-of-the-art methods are conducted on the challenging benchmarks
Charades-STA and TACoS, where our TSTNet achieves leading performance at a
considerable real-time speed.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 16:42:52 GMT"
}
] | 2023-02-22T00:00:00 |
[
[
"Xiong",
"Zeyu",
""
],
[
"Liu",
"Daizong",
""
],
[
"Zhou",
"Pan",
""
],
[
"Zhu",
"Jiahao",
""
]
] |
new_dataset
| 0.993246 |
2103.01913
|
Krishna Srinivasan
|
Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky,
Marc Najork
|
WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual
Machine Learning
| null | null |
10.1145/3404835.3463257
| null |
cs.CV cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The milestone improvements brought about by deep representation learning and
pre-training techniques have led to large performance gains across downstream
NLP, IR and Vision tasks. Multimodal modeling techniques aim to leverage large
high-quality visio-linguistic datasets for learning complementary information
(across image and text modalities). In this paper, we introduce the
Wikipedia-based Image Text (WIT) Dataset
(https://github.com/google-research-datasets/wit) to better facilitate
multimodal, multilingual learning. WIT is composed of a curated set of 37.6
million entity-rich image-text examples with 11.5 million unique images across
108 Wikipedia languages. Its size enables WIT to be used as a pretraining
dataset for multimodal models, as we show when applied to downstream tasks such
as image-text retrieval. WIT has four main and unique advantages. First, WIT is
the largest multimodal dataset by number of image-text examples, by 3x (at
the time of writing). Second, WIT is massively multilingual (first of its kind)
with coverage over 100+ languages (each of which has at least 12K examples) and
provides cross-lingual texts for many images. Third, WIT represents a more
diverse set of concepts and real world entities relative to what previous
datasets cover. Lastly, WIT provides a very challenging real-world test set, as
we empirically illustrate using an image-text retrieval task as an example.
|
[
{
"version": "v1",
"created": "Tue, 2 Mar 2021 18:13:54 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Mar 2021 16:41:01 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Srinivasan",
"Krishna",
""
],
[
"Raman",
"Karthik",
""
],
[
"Chen",
"Jiecao",
""
],
[
"Bendersky",
"Michael",
""
],
[
"Najork",
"Marc",
""
]
] |
new_dataset
| 0.999752 |
2106.01135
|
Noemie Perivier
|
Abdellah Aznag, Vineet Goyal and Noemie Perivier
|
MNL-Bandit with Knapsacks
| null | null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a dynamic assortment selection problem where a seller has a fixed
inventory of $N$ substitutable products and faces an unknown demand that
arrives sequentially over $T$ periods. In each period, the seller needs to
decide on the assortment of products (of cardinality at most $K$) to offer to
the customers. The customer's response follows an unknown multinomial logit
model (MNL) with parameters $v$. The goal of the seller is to maximize the
total expected revenue given the fixed initial inventory of $N$ products. We
give a policy that achieves a regret of $\tilde O\Big(K \sqrt{KN
T}\Big(\sqrt{v_{\text{max}}} + \frac{1}{q_{\text{min}}}\text{OPT}\Big)\Big)$,
where $v_{\text{max}}\leq 1$ is the maximum utility for any product and
$q_{\text{min}}$ the minimum inventory level, under a mild assumption on the
model parameters. In particular, our policy achieves a near-optimal $\tilde
O(\sqrt{T})$ regret in a large-inventory setting.
Our policy builds upon the UCB-based approach for MNL-bandit without
inventory constraints in [1] and addresses the inventory constraints through an
exponentially sized LP for which we present a tractable approximation while
keeping the $\tilde O(\sqrt{T})$ regret bound.
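
The epoch-based estimation idea of [1] that the policy builds on can be
sketched on a toy instance (the knapsack/LP component is omitted; the
utilities and the fixed assortment are invented): within an epoch, a product's
purchase count is geometric with mean v_i, so averaging counts over epochs
estimates v_i directly.

    import numpy as np

    rng = np.random.default_rng(0)
    v_true = np.array([0.3, 0.8, 0.5, 0.2])  # unknown MNL utilities, v <= 1

    def run_epoch(S):
        """Offer assortment S repeatedly until a no-purchase occurs; return
        how often each product was purchased during the epoch."""
        counts = np.zeros(len(v_true))
        while True:
            v = v_true[S]
            probs = np.append(v, 1.0) / (1.0 + v.sum())  # last slot: no purchase
            pick = rng.choice(len(S) + 1, p=probs)
            if pick == len(S):
                return counts
            counts[S[pick]] += 1

    S, n_epochs = [1, 2], 3000
    totals = np.zeros(len(v_true))
    for _ in range(n_epochs):
        totals += run_epoch(S)
    v_hat = totals[S] / n_epochs                 # unbiased estimates of v_true[S]
    ucb = v_hat + np.sqrt(2 * np.log(n_epochs) / n_epochs)  # optimism bonus
    print(v_hat.round(2))                        # approx [0.8, 0.5]
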
|
[
{
"version": "v1",
"created": "Wed, 2 Jun 2021 13:05:34 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2023 22:18:42 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Aznag",
"Abdellah",
""
],
[
"Goyal",
"Vineet",
""
],
[
"Perivier",
"Noemie",
""
]
] |
new_dataset
| 0.983973 |
2112.09569
|
Rishit Dagli
|
Rishit Dagli and Ali Mustufa Shaikh
|
CPPE-5: Medical Personal Protective Equipment Dataset
|
18 pages, 6 tables, 6 figures. Code and models are available at
https://git.io/cppe5-dataset
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new challenging dataset, CPPE - 5 (Medical Personal Protective
Equipment), with the goal of allowing the study of subordinate categorization
of medical personal protective equipment, which is not possible with other
popular data sets that focus on broad-level categories (such as PASCAL VOC,
ImageNet, Microsoft COCO, OpenImages, etc). To make it easy for models trained
on this dataset to be used in practical scenarios in complex scenes, our
dataset mainly contains images that show complex scenes with several objects in
each scene in their natural context. The image collection for this dataset
focuses on: obtaining as many non-iconic images as possible and making sure all
the images are real-life images, unlike other existing datasets in this area.
Our dataset includes 5 object categories (coveralls, face shields, gloves,
masks, and goggles), and each image is annotated with a set of bounding boxes
and positive labels. We present a detailed analysis of the dataset in
comparison to other popular broad category datasets as well as datasets
focusing on personal protective equipment; we also find that at present there
exist no such publicly available datasets. Finally, we also analyze performance
and compare model complexities on baseline and state-of-the-art models for
bounding box results. Our code, data, and trained models are available at
https://git.io/cppe5-dataset.
|
[
{
"version": "v1",
"created": "Wed, 15 Dec 2021 18:45:55 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Feb 2023 08:51:42 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Dagli",
"Rishit",
""
],
[
"Shaikh",
"Ali Mustufa",
""
]
] |
new_dataset
| 0.999721 |
2201.13108
|
Kapish Chand Meena
|
Harshdeep Singh and Kapish Chand Meena
|
MDS Multi-twisted Reed-Solomon codes with small dimensional hull
| null | null | null | null |
cs.IT cs.CR math.AC math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we find a necessary and sufficient condition for multi-twisted
Reed-Solomon codes to be MDS. In particular, we introduce a new class of MDS
double-twisted Reed-Solomon codes $\mathcal{C}_{\pmb \alpha, \pmb t, \pmb h,
\pmb \eta}$ with twists $\pmb t = (1, 2)$ and hooks $\pmb h = (0, 1)$ over the
finite fields $\mathbb{F}_q$, providing a non-trivial example over
$\mathbb{F}_{16}$ and enumeration over the finite fields of size up to 17.
Additionally, we enumerate the single-twisted Reed-Solomon codes
$\mathcal{C}_{\pmb \alpha, t, h, \eta}$ with twist $t=2$ and hook $h=1$.
Moreover, we obtain necessary conditions for the existence of multi-twisted
Reed-Solomon codes with zero and one-dimensional hull. Consequently, we derive
conditions for the existence of MDS double-twisted Reed-Solomon codes with zero
and one-dimensional hull.
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 10:38:08 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Feb 2023 06:45:20 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Singh",
"Harshdeep",
""
],
[
"Meena",
"Kapish Chand",
""
]
] |
new_dataset
| 0.998099 |
2202.11554
|
Paolo Giulio Franciosa
|
Endre Boros, Paolo Giulio Franciosa, Vladimir Gurvich, Michael Vyalyi
|
Deterministic n-person shortest path and terminal games on symmetric
digraphs have Nash equilibria in pure stationary strategies
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We prove that a deterministic n-person shortest path game has a Nash
equilibrium in pure and stationary strategies if it is edge-symmetric (that is,
(u,v) is a move whenever (v,u) is, apart from moves entering terminal vertices)
and the length of every move is positive for each player. Both conditions are
essential, though it remains an open problem whether there exists a NE-free
2-person non-edge-symmetric game with positive lengths. We provide examples of
NE-free 2-person edge-symmetric games that are not positive. We also consider
the special case of terminal games (shortest path games in which only terminal
moves have nonzero length, possibly negative) and prove that edge-symmetric
n-person terminal games always have Nash equilibria in pure and stationary
strategies. Furthermore, we prove that an edge-symmetric 2-person terminal game
has a uniform (subgame perfect) Nash equilibrium, provided any infinite play is
worse than any of the terminals for both players.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 15:03:09 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 13:57:23 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Boros",
"Endre",
""
],
[
"Franciosa",
"Paolo Giulio",
""
],
[
"Gurvich",
"Vladimir",
""
],
[
"Vyalyi",
"Michael",
""
]
] |
new_dataset
| 0.995432 |
2202.11602
|
Christos Efrem
|
Christos N. Efrem, Ioannis Krikidis
|
Joint IRS Location and Size Optimization in Multi-IRS Aided Two-Way
Full-Duplex Communication Systems
|
16 pages, 8 figures
| null |
10.1109/TWC.2023.3244279
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent reflecting surfaces (IRSs) have emerged as a promising wireless
technology for the dynamic configuration and control of electromagnetic waves,
thus creating a smart (programmable) radio environment. In this context, we
study a multi-IRS assisted two-way communication system consisting of two users
that employ full-duplex (FD) technology. More specifically, we deal with the
joint IRS location and size (i.e., the number of reflecting elements)
optimization in order to minimize an upper bound of system outage probability
under various constraints: minimum and maximum number of reflecting elements
per IRS, maximum number of installed IRSs, maximum total number of reflecting
elements (implicit bound on the signaling overhead) as well as maximum total
IRS installation cost. First, the problem is formulated as a discrete
optimization problem and, then, a theoretical proof of its NP-hardness is
given. Moreover, we provide a lower bound on the optimum value by solving a
linear-programming relaxation (LPR) problem. Subsequently, we design two
polynomial-time algorithms, a deterministic greedy algorithm and a randomized
approximation algorithm, based on the LPR solution. The former is a heuristic
method that always computes a feasible solution for which (a posteriori)
performance guarantee can be provided. The latter achieves an approximate
solution, using randomized rounding, with provable (a priori) probabilistic
guarantees on the performance. Furthermore, extensive numerical simulations
demonstrate the superiority of the proposed algorithms compared to the baseline
schemes. Finally, useful conclusions regarding the comparison between FD and
conventional half-duplex (HD) systems are also drawn.
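
The general shape of the LP-relaxation-plus-randomized-rounding approach can
be sketched on a toy instance; the utility model, costs, and budgets below are
invented, and a real algorithm would add a feasibility or repair step after
rounding.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    L, sizes = 4, np.array([16, 32, 64])  # candidate locations, IRS sizes
    util = rng.uniform(0.1, 1.0, (L, len(sizes))) * np.log2(1 + sizes)
    cost = 1.0 + 0.05 * sizes             # toy per-IRS installation cost
    M, N, C = 2, 96, 6.0                  # max IRSs, max elements, budget

    n_var = L * len(sizes)                # x[i, j] = install size j at loc i
    c = -util.ravel()                     # linprog minimizes, so negate
    A, b = [], []
    for i in range(L):                    # at most one IRS size per location
        row = np.zeros(n_var)
        row[i * len(sizes):(i + 1) * len(sizes)] = 1
        A.append(row)
        b.append(1)
    A.append(np.ones(n_var)); b.append(M)                    # IRS count
    A.append(np.tile(sizes, L).astype(float)); b.append(N)   # element budget
    A.append(np.tile(cost, L)); b.append(C)                  # cost budget
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=(0, 1))
    x = res.x.reshape(L, len(sizes))

    # Randomized rounding: at location i, install size j with probability x[i, j].
    picks = {}
    for i in range(L):
        p = np.append(x[i], max(0.0, 1 - x[i].sum()))  # last slot: install nothing
        j = rng.choice(len(sizes) + 1, p=p / p.sum())
        if j < len(sizes):
            picks[i] = int(sizes[j])
    print("LP bound:", -res.fun, "rounded picks:", picks)
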
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 16:30:42 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2023 13:53:47 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Efrem",
"Christos N.",
""
],
[
"Krikidis",
"Ioannis",
""
]
] |
new_dataset
| 0.981566 |
2204.04208
|
Renato Juliano Martins
|
Renato Juliano Martins, Emil Marinov, M. Aziz Ben Youssef, Christina
Kyrou, Mathilde Joubert, Constance Colmagro, Valentin G\^at\'e, Colette
Turbil, Pierre-Marie Coulon, Daniel Turover, Samira Khadir, Massimo Giudici,
Charalambos Klitis, Marc Sorel and Patrice Genevet
|
Metasurface-enhanced Light Detection and Ranging Technology
|
25pages, 18 figures. Including supplementary materials
| null |
10.1038/s41467-022-33450-2
| null |
cs.RO physics.ins-det physics.optics
|
http://creativecommons.org/licenses/by/4.0/
|
Deploying advanced imaging solutions to robotic and autonomous systems by
mimicking human vision requires simultaneous acquisition of multiple fields of
views, named the peripheral and fovea regions. The low-resolution peripheral
field provides coarse scene exploration to direct the eye to focus on a highly
resolved fovea region for sharp imaging. Among 3D computer vision techniques,
Light Detection and Ranging (LiDAR) is currently considered at the industrial
level for robotic vision. LiDAR is an imaging technique that monitors pulses of
light at optical frequencies to sense the space and to recover
three-dimensional ranging information. Notwithstanding the efforts on LiDAR
integration and optimization, commercially available devices have slow frame
rate and low image resolution, notably limited by the performance of mechanical
or slow solid-state deflection systems. Metasurfaces (MS) are versatile optical
components that can distribute the optical power in desired regions of space.
Here, we report on an advanced LiDAR technology that uses ultrafast low FoV
deflectors cascaded with large area metasurfaces to achieve large FoV and
simultaneous peripheral and central imaging zones. This technology achieves MHz
frame rates for 2D imaging, and up to kHz for 3D imaging, with an extremely
large FoV (up to 150 degrees on both the vertical and horizontal scanning
axes). The
use of this disruptive LiDAR technology with advanced learning algorithms
offers perspectives to improve further the perception capabilities and
decision-making process of autonomous vehicles and robotic systems.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 10:03:08 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Martins",
"Renato Juliano",
""
],
[
"Marinov",
"Emil",
""
],
[
"Youssef",
"M. Aziz Ben",
""
],
[
"Kyrou",
"Christina",
""
],
[
"Joubert",
"Mathilde",
""
],
[
"Colmagro",
"Constance",
""
],
[
"Gâté",
"Valentin",
""
],
[
"Turbil",
"Colette",
""
],
[
"Coulon",
"Pierre-Marie",
""
],
[
"Turover",
"Daniel",
""
],
[
"Khadir",
"Samira",
""
],
[
"Giudici",
"Massimo",
""
],
[
"Klitis",
"Charalambos",
""
],
[
"Sorel",
"Marc",
""
],
[
"Genevet",
"Patrice",
""
]
] |
new_dataset
| 0.999362 |
2205.09655
|
Xueying Qin
|
Xueying Qin (University of Edinburgh, UK), Liam O'Connor (University
of Edinburgh, UK), Michel Steuwer (University of Edinburgh, UK)
|
Primrose: Selecting Container Data Types by Their Properties
| null |
The Art, Science, and Engineering of Programming, 2023, Vol. 7,
Issue 3, Article 11
|
10.22152/programming-journal.org/2023/7/11
| null |
cs.PL cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Context: Container data types are ubiquitous in computer programming,
enabling developers to efficiently store and process collections of data with
an easy-to-use programming interface. Many programming languages offer a
variety of container implementations in their standard libraries based on data
structures offering different capabilities and performance characteristics.
Inquiry: Choosing the *best* container for an application is not always
straightforward, as performance characteristics can change drastically in
different scenarios, and as real-world performance is not always correlated
with theoretical complexity.
Approach: We present Primrose, a language-agnostic tool for selecting the
best performing valid container implementation from a set of container data
types that satisfy *properties* given by application developers. Primrose
automatically selects the set of valid container implementations for which the
*library specifications*, written by the developers of container libraries,
satisfy the specified properties. Finally, Primrose ranks the valid library
implementations based on their runtime performance.
Knowledge: With Primrose, application developers can specify the expected
behaviour of a container as a type refinement with *semantic properties*, e.g.,
if the container should only contain unique values (such as a `set`) or should
satisfy the LIFO property of a `stack`. Semantic properties nicely complement
*syntactic properties* (i.e., traits, interfaces, or type classes), together
allowing developers to specify a container's programming interface *and*
behaviour without committing to a concrete implementation.
Grounding: We present our prototype implementation of Primrose that
preprocesses annotated Rust code, selects valid container implementations and
ranks them on their performance. The design of Primrose is, however,
language-agnostic, and is easy to integrate into other programming languages
that support container data types and traits, interfaces, or type classes. Our
implementation encodes properties and library specifications into verification
conditions in Rosette, an interface for SMT solvers, which determines the set
of valid container implementations. We evaluate Primrose by specifying several
container implementations, and measuring the time taken to select valid
implementations for various combinations of properties with the solver. We
automatically validate that container implementations conform to their library
specifications via property-based testing.
Importance: This work provides a novel approach to bring abstract modelling
and specification of container types directly into the programmer's workflow.
Instead of selecting concrete container implementations, application
programmers can now work on the level of specification, merely stating the
behaviours they require from their container types, and the best implementation
can be selected automatically.
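
A language-agnostic sketch of the select-then-rank idea, written here in
Python (the prototype described above targets Rust and checks library
specifications with an SMT solver; observed behaviour stands in for the
specifications in this sketch).

    import timeit

    candidates = {"list": list, "set": set}

    def add(c, x):
        c.add(x) if hasattr(c, "add") else c.append(x)

    # Semantic property: after these insertions the container holds unique values.
    def unique_property(make):
        c = make()
        for x in [1, 2, 2, 3, 3, 3]:
            add(c, x)
        return len(c) == 3

    # 1) Keep only implementations whose behaviour satisfies the property.
    valid = {n: mk for n, mk in candidates.items() if unique_property(mk)}

    # 2) Rank the survivors by measured runtime on a representative workload.
    def workload(make):
        c = make()
        for x in range(10_000):
            add(c, x % 997)
        return sum(1 for x in range(0, 10_000, 7) if x in c)

    ranking = sorted(valid, key=lambda n: timeit.timeit(
        lambda: workload(valid[n]), number=3))
    print("valid:", sorted(valid), "fastest first:", ranking)
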
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 16:15:07 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Jan 2023 00:14:43 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Feb 2023 17:02:14 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Qin",
"Xueying",
"",
"University of Edinburgh, UK"
],
[
"O'Connor",
"Liam",
"",
"University\n of Edinburgh, UK"
],
[
"Steuwer",
"Michel",
"",
"University of Edinburgh, UK"
]
] |
new_dataset
| 0.999392 |
2206.07423
|
Qianfan Zhao
|
Qianfan Zhao, Lu Zhang, Bin He, Hong Qiao, and Zhiyong Liu
|
Zero-shot object goal visual navigation
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Object goal visual navigation is a challenging task that aims to guide a
robot to find the target object based on its visual observation, and the target
is limited to the classes pre-defined in the training stage. However, in real
households, there may exist numerous target classes that the robot needs to
deal with, and it is hard for all of these classes to be contained in the
training stage. To address this challenge, we study the zero-shot object goal
visual navigation task, which aims at guiding robots to find targets belonging
to novel classes without any training samples. To this end, we also propose a
novel zero-shot object navigation framework called semantic similarity network
(SSNet). Our framework uses the detection results and the cosine similarity
between semantic word embeddings as input. This type of input data has a weak
correlation with classes, and thus our framework can generalize
the policy to novel classes. Extensive experiments on the AI2-THOR platform
show that our model outperforms the baseline models in the zero-shot object
navigation task, which proves the generalization ability of our model. Our code
is available at:
https://github.com/pioneer-innovation/Zero-Shot-Object-Navigation.
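
A sketch of the class-agnostic input construction described above; the toy
embeddings and detection tuples are invented, and in practice pretrained word
vectors for the class names would be used.

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

    emb = {  # stand-ins for word embeddings of class names
        "sofa": np.array([0.9, 0.1, 0.0]),
        "chair": np.array([0.8, 0.3, 0.1]),
        "fridge": np.array([0.0, 0.2, 0.9]),
    }

    def policy_input(detections, target):
        """detections: (class_name, confidence, bbox center). The policy never
        sees the class identity itself, only its semantic similarity to the
        target, which is what lets it generalize to unseen target classes."""
        return np.array([[cosine(emb[n], emb[target]), conf, cx, cy]
                         for n, conf, (cx, cy) in detections], dtype=np.float32)

    dets = [("sofa", 0.91, (0.4, 0.6)), ("fridge", 0.75, (0.8, 0.3))]
    print(policy_input(dets, target="chair"))
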
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 09:53:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jan 2023 06:22:21 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Feb 2023 03:46:36 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Zhao",
"Qianfan",
""
],
[
"Zhang",
"Lu",
""
],
[
"He",
"Bin",
""
],
[
"Qiao",
"Hong",
""
],
[
"Liu",
"Zhiyong",
""
]
] |
new_dataset
| 0.99404 |
2207.01470
|
Xing Hu
|
Xing Hu, Sam Toueg
|
On implementing SWMR registers from SWSR registers in systems with
Byzantine failures
|
50 pages
| null | null | null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The implementation of registers from (potentially) weaker registers is a
classical problem in the theory of distributed computing. Since Lamport's
pioneering work [13], this problem has been extensively studied in the context
of asynchronous processes with crash failures. In this paper, we investigate
this problem in the context of Byzantine process failures, with and without
process signatures.
We first prove that, without signatures, there is no wait-free linearizable
implementation of a 1-writer n-reader register from atomic 1-writer 1-reader
registers. In fact, we show a stronger result, namely, even under the
assumption that the writer can only crash and at most one reader can be
malicious, there is no linearizable implementation of a 1-writer n-reader
register from atomic 1-writer (n-1)-reader registers that ensures that every
correct process eventually completes its operations. In light of this
impossibility result, we give two implementations of a 1-writer n-reader
register from atomic 1-writer 1-reader registers that work under different
assumptions. The first implementation is linearizable (under any combination of
process failures), but it guarantees that every correct process eventually
completes its operations only under the assumption that the writer is correct
or no reader is malicious -- thus matching the impossibility result. The second
implementation assumes process signatures; it is bounded wait-free and
linearizable under any combination of process failures. Finally, we show that
without process signatures, even if we assume that the writer is correct and at
most one of the readers can be malicious, it is impossible to guarantee that
every correct reader completes each read operation in a bounded number of
steps.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 15:03:27 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 03:46:52 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Hu",
"Xing",
""
],
[
"Toueg",
"Sam",
""
]
] |
new_dataset
| 0.99927 |
2209.07334
|
Juan Juli\'an Merelo-Guerv\'os Pr.
|
J. J. Merelo-Guerv\'os
|
What is a good doge? Analyzing the patrician social network of the
Republic of Venice
| null | null | null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Venetian republic was one of the most successful trans-modern states,
surviving for a millennium through innovation, commercial cunning, exploitation
of colonies and legal stability. Part of the success might be due to its
government structure, a republic ruled by a doge chosen among a relatively
limited set of Venetian patrician families. In this paper we analyze the
structure of the social network they formed through marriage, and how
government was monopolized by a relatively small set of families, the ones that
became patrician first.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 16:50:12 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 18:24:49 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Merelo-Guervós",
"J. J.",
""
]
] |
new_dataset
| 0.985596 |
2210.03052
|
Yujia Zhai
|
Yujia Zhai, Chengquan Jiang, Leyuan Wang, Xiaoying Jia, Shang Zhang,
Zizhong Chen, Xin Liu, Yibo Zhu
|
ByteTransformer: A High-Performance Transformer Boosted for
Variable-Length Inputs
|
Accepted at IPDPS 2023
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Transformers have become keystone models in natural language processing over
the past decade. They have achieved great popularity in deep learning
applications, but the increasing sizes of the parameter spaces required by
transformer models generate a commensurate need to accelerate performance.
Natural language processing problems are also routinely faced with
variable-length sequences, as word counts commonly vary among sentences.
Existing deep learning frameworks pad variable-length sequences to a maximal
length, which adds significant memory and computational overhead. In this
paper, we present ByteTransformer, a high-performance transformer boosted for
variable-length inputs. We propose a padding-free algorithm that liberates the
entire transformer from redundant computations on zero-padded tokens. In
addition to algorithmic-level optimization, we provide architecture-aware
optimizations for transformer functional modules, especially the
performance-critical algorithm Multi-Head Attention (MHA). Experimental results
on an NVIDIA A100 GPU with variable-length sequence inputs validate that our
fused MHA outperforms PyTorch by 6.13x. The end-to-end performance of
ByteTransformer for a forward BERT transformer surpasses state-of-the-art
transformer frameworks, such as PyTorch JIT, TensorFlow XLA, Tencent
TurboTransformer, Microsoft DeepSpeed-Inference and NVIDIA FasterTransformer,
by 87\%, 131\%, 138\%, 74\% and 55\%, respectively. We also demonstrate the
general applicability of our optimization methods to other BERT-like models,
including ALBERT, DistilBERT, and DeBERTa.
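
The core padding-free idea, packing only the real tokens before the heavy
matrix multiplies and scattering the results back, can be sketched in plain
PyTorch; the fused CUDA kernels and attention handling in ByteTransformer
itself are far more involved.

    import torch

    def pack(x, mask):
        """x: (B, L, D) padded batch; mask: (B, L) bool, True = real token.
        Returns the packed (N, D) tokens and flat indices to restore them."""
        idx = mask.reshape(-1).nonzero(as_tuple=True)[0]
        return x.reshape(-1, x.shape[-1])[idx], idx

    def unpack(packed, idx, batch, seq_len):
        out = packed.new_zeros(batch * seq_len, packed.shape[-1])
        out[idx] = packed
        return out.reshape(batch, seq_len, -1)

    B, L, D = 3, 8, 16
    lengths = torch.tensor([8, 3, 5])
    mask = torch.arange(L)[None, :] < lengths[:, None]
    x = torch.randn(B, L, D) * mask[..., None]

    packed, idx = pack(x, mask)   # 16 real tokens instead of 24 padded slots
    y = torch.relu(packed)        # any token-wise layer runs on the packed view
    restored = unpack(y, idx, B, L)
    # Attention still needs per-sequence boundaries, e.g. cumulative lengths:
    cu_seqlens = torch.cat([torch.zeros(1, dtype=torch.long), lengths.cumsum(0)])
    print(restored.shape, cu_seqlens.tolist())
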
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 16:57:23 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jan 2023 06:38:09 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Feb 2023 18:08:02 GMT"
},
{
"version": "v4",
"created": "Mon, 20 Feb 2023 01:23:52 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Zhai",
"Yujia",
""
],
[
"Jiang",
"Chengquan",
""
],
[
"Wang",
"Leyuan",
""
],
[
"Jia",
"Xiaoying",
""
],
[
"Zhang",
"Shang",
""
],
[
"Chen",
"Zizhong",
""
],
[
"Liu",
"Xin",
""
],
[
"Zhu",
"Yibo",
""
]
] |
new_dataset
| 0.972972 |
2210.12755
|
Banghuai Li
|
Zhuoxu Huang, Zhiyou Zhao, Banghuai Li, Jungong Han
|
LCPFormer: Towards Effective 3D Point Cloud Analysis via Local Context
Propagation in Transformers
|
Accepted by TCSVT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Transformer, with its underlying attention mechanism and ability to
capture long-range dependencies, is a natural choice for unordered point cloud
data. However, separated local regions from the general sampling
architecture corrupt the structural information of the instances, and the
inherent relationships between adjacent local regions lack exploration, while
local structural information is crucial in a transformer-based 3D point cloud
model. Therefore, in this paper, we propose a novel module named Local Context
Propagation (LCP) to exploit the message passing between neighboring local
regions and make their representations more informative and discriminative.
More specifically, we use the overlap points of adjacent local regions (which
are statistically shown to be prevalent) as intermediaries, then re-weight the
features of these shared points from different local regions before passing
them to the next layers. Inserting the LCP module between two transformer
layers results in a significant improvement in network expressiveness. Finally,
we design a flexible LCPFormer architecture equipped with the LCP module. The
proposed method is applicable to different tasks and outperforms various
transformer-based methods in benchmarks including 3D shape classification and
dense prediction tasks such as 3D object detection and semantic segmentation.
Code will be released for reproduction.
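
A simplified sketch of propagating context through overlap points, using a
plain mean over each shared point's per-region features in place of the
paper's learned re-weighting.

    import torch

    def propagate_overlaps(feats, point_ids, num_points):
        """feats: (R, K, D) features of K sampled points in each of R regions;
        point_ids: (R, K) global index of each sampled point. A point that
        appears in several regions gets its features averaged, so information
        flows between neighboring regions through the shared points."""
        R, K, D = feats.shape
        flat = point_ids.reshape(-1)
        summed = feats.new_zeros(num_points, D)
        counts = feats.new_zeros(num_points)
        summed.index_add_(0, flat, feats.reshape(-1, D))
        counts.index_add_(0, flat, torch.ones_like(flat, dtype=feats.dtype))
        mean = summed / counts.clamp(min=1).unsqueeze(-1)
        return mean[flat].reshape(R, K, D)  # write the fused features back

    feats = torch.randn(4, 32, 64)       # 4 local regions, 32 points each
    ids = torch.randint(0, 80, (4, 32))  # overlapping samples are likely
    print(propagate_overlaps(feats, ids, num_points=80).shape)
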
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 15:43:01 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Feb 2023 14:44:11 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Huang",
"Zhuoxu",
""
],
[
"Zhao",
"Zhiyou",
""
],
[
"Li",
"Banghuai",
""
],
[
"Han",
"Jungong",
""
]
] |
new_dataset
| 0.975057 |
2210.15947
|
Liangchen Song
|
Liangchen Song, Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong
Yuan, Yi Xu, Andreas Geiger
|
NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed
Neural Radiance Fields
|
Project page: https://lsongx.github.io/projects/nerfplayer.html
| null | null | null |
cs.CV cs.GR cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Freely exploring a real-world 4D spatiotemporal space in VR has been a
long-term quest. The task is especially appealing when only a few or
even single RGB cameras are used for capturing the dynamic scene. To this end,
we present an efficient framework capable of fast reconstruction, compact
modeling, and streamable rendering. First, we propose to decompose the 4D
spatiotemporal space according to temporal characteristics. Points in the 4D
space are associated with probabilities of belonging to three categories:
static, deforming, and new areas. Each area is represented and regularized by a
separate neural field. Second, we propose a hybrid-representation-based
feature streaming scheme for efficiently modeling the neural fields. Our
approach, coined NeRFPlayer, is evaluated on dynamic scenes captured by single
hand-held cameras and multi-camera arrays, achieving rendering quality and
speed comparable or superior to recent state-of-the-art methods, with
reconstruction in 10 seconds per frame and interactive rendering.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 07:11:05 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Feb 2023 07:00:29 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Song",
"Liangchen",
""
],
[
"Chen",
"Anpei",
""
],
[
"Li",
"Zhong",
""
],
[
"Chen",
"Zhang",
""
],
[
"Chen",
"Lele",
""
],
[
"Yuan",
"Junsong",
""
],
[
"Xu",
"Yi",
""
],
[
"Geiger",
"Andreas",
""
]
] |
new_dataset
| 0.996787 |
2302.06873
|
Rong Zhu
|
Rong Zhu, Wei Chen, Bolin Ding, Xingguang Chen, Andreas Pfadler, Ziniu
Wu, Jingren Zhou
|
Lero: A Learning-to-Rank Query Optimizer
|
PVLDB 2023
| null | null | null |
cs.DB cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A recent line of work applies machine learning techniques to assist or rebuild
cost-based query optimizers in DBMS. While exhibiting superiority in some
benchmarks, their deficiencies, e.g., unstable performance, high training cost,
and slow model updating, stem from the inherent hardness of predicting the cost
or latency of execution plans using machine learning models. In this paper, we
introduce a learning-to-rank query optimizer, called Lero, which builds on top
of a native query optimizer and continuously learns to improve the optimization
performance. The key observation is that the relative order or rank of plans,
rather than the exact cost or latency, is sufficient for query optimization.
Lero employs a pairwise approach to train a classifier to compare any two plans
and tell which one is better. Such a binary classification task is much easier
than the regression task of predicting the cost or latency, in terms of model
efficiency and accuracy. Rather than building a learned optimizer from scratch,
Lero is designed to leverage decades of wisdom of databases and improve the
native query optimizer. With its non-intrusive design, Lero can be implemented
on top of any existing DBMS with minimal integration efforts. We implement Lero
and demonstrate its outstanding performance using PostgreSQL. In our
experiments, Lero achieves near optimal performance on several benchmarks. It
reduces the plan execution time of the native optimizer in PostgreSQL by up to
70% and other learned query optimizers by up to 37%. Meanwhile, Lero
continuously learns and automatically adapts to query workloads and changes in
data.
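
A minimal sketch of the pairwise plan comparator, with plan featurization
stubbed out as fixed-size vectors (the real system encodes plan trees); the
shared scoring tower makes the comparison antisymmetric by construction.

    import torch
    import torch.nn as nn

    class PlanComparator(nn.Module):
        """Outputs P(plan_a executes faster than plan_b)."""
        def __init__(self, d_plan=32):
            super().__init__()
            self.score = nn.Sequential(
                nn.Linear(d_plan, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, plan_a, plan_b):
            return torch.sigmoid(self.score(plan_a) - self.score(plan_b)).squeeze(-1)

    model = PlanComparator()
    a, b = torch.randn(16, 32), torch.randn(16, 32)  # stub plan features
    label = torch.randint(0, 2, (16,)).float()       # 1 if plan_a ran faster
    loss = nn.functional.binary_cross_entropy(model(a, b), label)
    loss.backward()
    # At optimization time: generate candidate plans and pick the one that
    # wins the most pairwise comparisons over the candidate set.
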
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 07:31:11 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 03:03:40 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Zhu",
"Rong",
""
],
[
"Chen",
"Wei",
""
],
[
"Ding",
"Bolin",
""
],
[
"Chen",
"Xingguang",
""
],
[
"Pfadler",
"Andreas",
""
],
[
"Wu",
"Ziniu",
""
],
[
"Zhou",
"Jingren",
""
]
] |
new_dataset
| 0.965396 |
2302.08706
|
Haipeng Liu
|
Haoran Sun, Yang Wang, Haipeng Liu, Biao Qian
|
Fine-grained Cross-modal Fusion based Refinement for Text-to-Image
Synthesis
|
13 pages, 8 figures, accepted by Chinese Journal of Electronics
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-to-image synthesis refers to generating visual-realistic and
semantically consistent images from given textual descriptions. Previous
approaches generate an initial low-resolution image and then refine it to be
high-resolution. Despite the remarkable progress, these methods are limited in
fully utilizing the given texts and could generate text-mismatched images,
especially when the text description is complex. We propose a novel
Fine-grained text-image Fusion based Generative Adversarial Network, dubbed
FF-GAN, which consists of two modules: Fine-grained text-image Fusion Block
(FF-Block) and Global Semantic Refinement (GSR). The proposed FF-Block
integrates an attention block and several convolution layers to effectively
fuse the fine-grained word-context features into the corresponding visual
features, in which the text information is fully used to refine the initial
image with more details. And the GSR is proposed to improve the global semantic
consistency between linguistic and visual features during the refinement
process. Extensive experiments on CUB-200 and COCO datasets demonstrate the
superiority of FF-GAN over other state-of-the-art approaches in generating
images with semantic consistency to the given texts. Code is available at
https://github.com/haoranhfut/FF-GAN.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 05:44:05 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Feb 2023 09:38:50 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Sun",
"Haoran",
""
],
[
"Wang",
"Yang",
""
],
[
"Liu",
"Haipeng",
""
],
[
"Qian",
"Biao",
""
]
] |
new_dataset
| 0.999087 |
2302.09070
|
Reza Habibi
|
Reza Habibi, Johannes Pfau, Jonattan Holmes, Magy Seif El-Nasr
|
Empathetic AI for Empowering Resilience in Games
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Failure and resilience are important aspects of gameplay. This is especially
important for serious and competitive games, where players need to adapt and
cope with failure frequently. In such situations, emotion regulation -- the
active process of modulating ones' emotions to cope and adapt to challenging
situations -- becomes essential. It is one of the prominent aspects of human
intelligence and promotes mental health and well-being. While there has been
work on developing artificial emotional regulation assistants to help users
cope with emotion regulation in the field of Intelligent Tutoring systems,
little is done to incorporate such systems or ideas into (serious) video games.
In this paper, we introduce a data-driven 6-phase approach to establish
empathetic artificial intelligence (EAI), which operates on raw chat log data
to detect key affective states, identify common sequences and emotion
regulation strategies and generalizes these to make them applicable for
intervention systems.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 19:58:47 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Habibi",
"Reza",
""
],
[
"Pfau",
"Johannes",
""
],
[
"Holmes",
"Jonattan",
""
],
[
"El-Nasr",
"Magy Seif",
""
]
] |
new_dataset
| 0.981759 |
2302.09072
|
Pengcheng Wang
|
Charilaos Mousoulis, Pengcheng Wang, Nguyen Luu Do, Jose F Waimin,
Nithin Raghunathan, Rahim Rahimi, Ali Shakouri, and Saurabh Bagchi
|
An Open Dataset of Sensor Data from Soil Sensors and Weather Stations at
Production Farms
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Weather and soil conditions are particularly important when it comes to
farming activities. Study of these factors and their role in nutrient and
nitrate absorption rates can lead to useful insights with benefits for both the
crop yield and the protection of the environment through the more controlled
use of fertilizers and chemicals. There is a paucity of public data from rural,
agricultural sensor networks. This is partly due to the unique challenges faced
during the deployment and maintenance of IoT networks in rural agricultural
areas. As part of a 5-year project called WHIN we have been deploying and
collecting sensor data from production and experimental agricultural farms in
and around Purdue University in Indiana. Here we release a dataset comprising
soil sensor data from a representative sample of 3 nodes across 3 production
farms, each for 5 months. We correlate this data with the weather data and draw
some insights about the absorption of rain in the soil. We provide the dataset
at: https://purduewhin.ecn.purdue.edu/dataset2021.
|
[
{
"version": "v1",
"created": "Thu, 16 Feb 2023 21:41:57 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Mousoulis",
"Charilaos",
""
],
[
"Wang",
"Pengcheng",
""
],
[
"Do",
"Nguyen Luu",
""
],
[
"Waimin",
"Jose F",
""
],
[
"Raghunathan",
"Nithin",
""
],
[
"Rahimi",
"Rahim",
""
],
[
"Shakouri",
"Ali",
""
],
[
"Bagchi",
"Saurabh",
""
]
] |
new_dataset
| 0.999785 |
2302.09116
|
Dipanjan Das
|
Priyanka Bose, Dipanjan Das, Saastha Vasan, Sebastiano Mariani, Ilya
Grishchenko, Andrea Continella, Antonio Bianchi, Christopher Kruegel,
Giovanni Vigna
|
Columbus: Android App Testing Through Systematic Callback Exploration
| null |
International Conference on Software Engineering (ICSE), 2023
| null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
With the continuous rise in the popularity of Android mobile devices,
automated testing of apps has become more important than ever. Android apps are
event-driven programs. Unfortunately, generating all possible types of events
by interacting with the app's interface is challenging for an automated testing
approach. Callback-driven testing eliminates the need for event generation by
directly invoking app callbacks. However, existing callback-driven testing
techniques assume prior knowledge of Android callbacks, and they rely on a
human expert, who is familiar with the Android API, to write stub code that
prepares callback arguments before invocation. Since the Android API is huge
and keeps evolving, prior techniques could only support a small fraction of
callbacks present in the Android framework.
In this work, we introduce Columbus, a callback-driven testing technique that
employs two strategies to eliminate the need for human involvement: (i) it
automatically identifies callbacks by simultaneously analyzing both the Android
framework and the app under test, and (ii) it uses a combination of
under-constrained symbolic execution (for primitive arguments) and type-guided
dynamic heap introspection (for object arguments) to generate valid and effective
inputs. Lastly, Columbus integrates two novel feedback mechanisms -- data
dependency and crash-guidance, during testing to increase the likelihood of
triggering crashes, and maximizing coverage. In our evaluation, Columbus
outperforms state-of-the-art model-driven, checkpoint-based, and
callback-driven testing tools both in terms of crashes and coverage.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 20:03:12 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Bose",
"Priyanka",
""
],
[
"Das",
"Dipanjan",
""
],
[
"Vasan",
"Saastha",
""
],
[
"Mariani",
"Sebastiano",
""
],
[
"Grishchenko",
"Ilya",
""
],
[
"Continella",
"Andrea",
""
],
[
"Bianchi",
"Antonio",
""
],
[
"Kruegel",
"Christopher",
""
],
[
"Vigna",
"Giovanni",
""
]
] |
new_dataset
| 0.999257 |
2302.09124
|
Vishnu Nair
|
Vishnu Nair, Hanxiu 'Hazel' Zhu, Brian A. Smith
|
ImageAssist: Tools for Enhancing Touchscreen-Based Image Exploration
Systems for Blind and Low Vision Users
| null |
Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems (CHI '23), April 2023
|
10.1145/3544548.3581302
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blind and low vision (BLV) users often rely on alt text to understand what a
digital image is showing. However, recent research has investigated how
touch-based image exploration on touchscreens can supplement alt text.
Touchscreen-based image exploration systems allow BLV users to deeply
understand images while granting a strong sense of agency. Yet, prior work has
found that these systems require a lot of effort to use, and little work has
been done to explore these systems' bottlenecks on a deeper level and propose
solutions to these issues. To address this, we present ImageAssist, a set of
three tools that assist BLV users through the process of exploring images by
touch -- scaffolding the exploration process. We perform a series of studies
with BLV users to design and evaluate ImageAssist, and our findings reveal
several implications for image exploration tools for BLV users.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 20:16:28 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Nair",
"Vishnu",
""
],
[
"Zhu",
"Hanxiu 'Hazel'",
""
],
[
"Smith",
"Brian A.",
""
]
] |
new_dataset
| 0.987233 |
2302.09155
|
Chandrayee Basu
|
Chandrayee Basu, Rosni Vasu, Michihiro Yasunaga, Qian Yang
|
Med-EASi: Finely Annotated Dataset and Models for Controllable
Simplification of Medical Texts
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic medical text simplification can assist providers with
patient-friendly communication and make medical texts more accessible, thereby
improving health literacy. But curating a quality corpus for this task requires
the supervision of medical experts. In this work, we present
$\textbf{Med-EASi}$ ($\underline{\textbf{Med}}$ical dataset for
$\underline{\textbf{E}}$laborative and $\underline{\textbf{A}}$bstractive
$\underline{\textbf{Si}}$mplification), a uniquely crowdsourced and finely
annotated dataset for supervised simplification of short medical texts. Its
$\textit{expert-layman-AI collaborative}$ annotations facilitate
$\textit{controllability}$ over text simplification by marking four kinds of
textual transformations: elaboration, replacement, deletion, and insertion. To
learn medical text simplification, we fine-tune T5-large with four different
styles of input-output combinations, leading to two control-free and two
controllable versions of the model. We add two types of
$\textit{controllability}$ into text simplification, by using a multi-angle
training approach: $\textit{position-aware}$, which uses in-place annotated
inputs and outputs, and $\textit{position-agnostic}$, where the model only
knows the contents to be edited, but not their positions. Our results show that
our fine-grained annotations improve learning compared to the unannotated
baseline. Furthermore, $\textit{position-aware}$ control generates better
simplification than the $\textit{position-agnostic}$ one. The data and code are
available at https://github.com/Chandrayee/CTRL-SIMP.
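
A sketch of how the two controllable input styles could be serialized for T5
fine-tuning; the tag names and separator are assumptions, not the dataset's
actual markup.

    def position_aware(text, edits):
        """Mark each annotated span in place; edits: list of (span, kind)
        with kind in {"elab", "repl", "del", "ins"}."""
        for span, kind in edits:
            text = text.replace(span, f"<{kind}> {span} </{kind}>", 1)
        return text

    def position_agnostic(text, edits):
        """The model sees which contents to edit, but not where they occur."""
        marks = " ".join(f"<{kind}> {span} </{kind}>" for span, kind in edits)
        return f"{text} | edit: {marks}"

    src = "The patient suffered a myocardial infarction."
    edits = [("myocardial infarction", "repl")]
    print(position_aware(src, edits))
    print(position_agnostic(src, edits))
    # Either string becomes the T5 input; the training target is the
    # simplified text, e.g. "The patient had a heart attack."
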
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 21:50:13 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Basu",
"Chandrayee",
""
],
[
"Vasu",
"Rosni",
""
],
[
"Yasunaga",
"Michihiro",
""
],
[
"Yang",
"Qian",
""
]
] |
new_dataset
| 0.998005 |
2302.09164
|
Alexios Lekidis
|
Alexios Lekidis
|
Cyber-attack TTP analysis for EPES systems
| null | null | null | null |
cs.NI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The electrical grid constitutes of legacy systems that were built with no
security in mind. As we move towards the Industry 4.0 era, however, a high
degree of automation and connectivity provides: 1) fast and flexible
configuration and updates, as well as 2) easier maintenance and handling of
misconfigurations and operational errors. Even though concerns exist about the
security implications of the Industry 4.0 era for the electrical grid,
electricity stakeholders deem their infrastructures secure since they are
isolated and
allow no external connections. However, external connections are not the only
security risk for electrical utilities. The Tactics, Techniques and Procedures
(TTPs) that are employed by adversaries to perform cyber-attack towards the
critical Electrical Power and Energy System (EPES) infrastructures are
gradually becoming highly advanced and sophisticated. In this article we
elaborate on these techniques and demonstrate them in a Power Plant of the
Public Power Corporation (PPC). The demonstrated TTPs make it possible to
exploit and execute remote commands in smart meters as well as in Programmable
Logic Controllers (PLCs) that are responsible for the operation of the power
generator.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 22:22:23 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Lekidis",
"Alexios",
""
]
] |
new_dataset
| 0.997766 |
2302.09197
|
Chuxiong Wu
|
Chuxiong Wu, Qiang Zeng
|
Turning Noises to Fingerprint-Free "Credentials": Secure and Usable
Authentication for Drone Delivery
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drone delivery is an emerging service that gains growing attention.
Authentication is critical to ensure a package is picked up by a legitimate
drone (rather than a malicious one) and delivered to the correct receiver
(rather than an attacker). As delivery drones are expensive and may carry
important packages, a drone should stay away from users until the
authentication succeeds. Thus, authentication approaches that require physical
contact of drones cannot be applied. Bluetooth can indicate proximity without
physical contact but is vulnerable to radio relay attacks. Our work leverages
drone noises for authentication. While using sounds for authentication is
highly usable, how to handle various attacks that manipulate sounds is an
unresolved challenge. It is also unclear whether such a system is robust under
various environmental sounds. We address these challenges by exploiting unique
characteristics of drone noises. We thereby build an authentication system that
does not rely on any sound fingerprints, keeps resilient to attacks, and is
robust under environmental sounds. An extensive evaluation demonstrates its
security and usability.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 00:27:54 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Wu",
"Chuxiong",
""
],
[
"Zeng",
"Qiang",
""
]
] |
new_dataset
| 0.999395 |
2302.09228
|
Sifeng He
|
Feng Qian, Sifeng He, Honghao Huang, Huanyu Ma, Xiaobo Zhang, Lei Yang
|
Web Photo Source Identification based on Neural Enhanced Camera
Fingerprint
|
Accepted by WWW2023 (https://www2023.thewebconf.org/). Codes are all
publicly available at https://github.com/PhotoNecf/PhotoNecf
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the growing popularity of smartphone photography in recent years, web
photos play an increasingly important role in all walks of life. Source camera
identification of web photos aims to establish a reliable linkage from the
captured images to their source cameras, and has a broad range of applications,
such as image copyright protection, user authentication, investigative
evidence verification, etc. This paper presents an innovative and practical source
identification framework that employs neural-network enhanced sensor pattern
noise to trace back web photos efficiently while ensuring security. Our
proposed framework consists of three main stages: initial device fingerprint
registration, fingerprint extraction and cryptographic connection establishment
while taking photos, and connection verification between photos and source
devices. By incorporating metric learning and frequency consistency into the
deep network design, our proposed fingerprint extraction algorithm achieves
state-of-the-art performance on modern smartphone photos for reliable source
identification. Meanwhile, we also propose several optimization sub-modules to
prevent fingerprint leakage and improve accuracy and efficiency. Finally for
practical system design, two cryptographic schemes are introduced to reliably
identify the correlation between the registered fingerprint and the verified
photo fingerprint: a fuzzy extractor and a zero-knowledge proof (ZKP). The
code for the fingerprint extraction network and a benchmark dataset of modern
smartphone camera photos are publicly available at
https://github.com/PhotoNecf/PhotoNecf.
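For readers unfamiliar with sensor pattern noise, the classical PRNU matching
baseline that neural-enhanced approaches like the one above start from can be
sketched as follows. This is our own illustrative sketch, not the paper's
method: the paper replaces the hand-crafted denoiser with a learned extractor
and adds cryptographic verification, neither of which is reproduced here, and
all function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    # Approximate the sensor noise as the image minus a denoised copy.
    img = image.astype(np.float64)
    return img - gaussian_filter(img, sigma=2.0)

def camera_fingerprint(images) -> np.ndarray:
    # Average residuals over several same-size shots from one camera, so
    # scene content cancels out and the fixed sensor pattern remains.
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    # Normalized cross-correlation; a high score suggests the same sensor.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```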
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 04:14:45 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Qian",
"Feng",
""
],
[
"He",
"Sifeng",
""
],
[
"Huang",
"Honghao",
""
],
[
"Ma",
"Huanyu",
""
],
[
"Zhang",
"Xiaobo",
""
],
[
"Yang",
"Lei",
""
]
] |
new_dataset
| 0.990487 |
2302.09230
|
Yue Zhang
|
Yue Zhang, Parisa Kordjamshidi
|
VLN-Trans: Translator for the Vision and Language Navigation Agent
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language understanding is essential for the navigation agent to follow
instructions. We observe two kinds of issues in the instructions that can make
the navigation task challenging: 1. The mentioned landmarks are not
recognizable by the navigation agent due to the different vision abilities of
the instructor and the modeled agent. 2. The mentioned landmarks are applicable
to multiple targets, thus not distinctive for selecting the target among the
candidate viewpoints. To deal with these issues, we design a translator module
for the navigation agent to convert the original instructions into
easy-to-follow sub-instruction representations at each step. The translator
needs to focus on the recognizable and distinctive landmarks based on the
agent's visual abilities and the observed visual environment. To achieve this
goal, we create a new synthetic sub-instruction dataset and design specific
tasks to train the translator and the navigation agent. We evaluate our
approach on Room2Room (R2R), Room4Room (R4R), and Room2Room-Last (R2R-Last)
datasets and achieve state-of-the-art results on multiple benchmarks.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 04:19:51 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Zhang",
"Yue",
""
],
[
"Kordjamshidi",
"Parisa",
""
]
] |
new_dataset
| 0.996838 |
2302.09239
|
Rossano Venturini
|
Matteo Ceregini, Florian Kurpicz, Rossano Venturini
|
Faster Wavelet Trees with Quad Vectors
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Given a text, rank and select queries return the number of occurrences of a
character up to a position (rank) or the position of a character with a given
rank (select). These queries have applications in, e.g., compression,
computational geometry, and pattern matching in the form of the backwards
search -- the backbone of many compressed full-text indices. A wavelet tree is
a compact data structure that for a text of length $n$ over an alphabet of size
$\sigma$ requires only $n\lceil\log\sigma\rceil(1+o(1))$ bits of space and can
answer rank and select queries in $\Theta(\log \sigma)$ time. Wavelet trees are
used in the applications described above.
In this paper, we show how to improve the query performance of wavelet trees
by using a 4-ary tree instead of a binary tree as the basis of the wavelet tree. To
this end, we present a space-efficient rank and select data structure for quad
vectors. The 4-ary tree layout of a wavelet tree helps to halve the number of
cache misses during queries and thus reduces the query latency. Our
experimental evaluation shows that our 4-ary wavelet tree can improve the
latency of rank and select queries by a factor of $\approx 2$ compared to the
wavelet tree implementations contained in the widely used Succinct Data
Structure Library (SDSL).
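As a concrete illustration of the query semantics above, here is a
deliberately naive, O(n)-per-query sketch of rank and select on a plain string
(our own illustration; the paper's quad-vector wavelet trees answer the same
queries in $O(\log \sigma)$ time within compressed space):

```python
def rank(text: str, c: str, i: int) -> int:
    # Number of occurrences of character c in text[0:i].
    return text[:i].count(c)

def select(text: str, c: str, k: int) -> int:
    # Index of the k-th (1-based) occurrence of c, or -1 if there is none.
    count = 0
    for pos, ch in enumerate(text):
        if ch == c:
            count += 1
            if count == k:
                return pos
    return -1

assert rank("mississippi", "s", 5) == 2    # "missi" contains two 's'
assert select("mississippi", "s", 3) == 5  # the third 's' is at index 5
```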
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 05:25:51 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Ceregini",
"Matteo",
""
],
[
"Kurpicz",
"Florian",
""
],
[
"Venturini",
"Rossano",
""
]
] |
new_dataset
| 0.994987 |
2302.09250
|
Toshiharu Sugawara
|
Yuki Miyashita, Tomoki Yamauchi and Toshiharu Sugawara
|
Distributed Planning with Asynchronous Execution with Local Navigation
for Multi-agent Pickup and Delivery Problem
|
11 pages, 12 figures. Accepted in AAMAS 2023 (The 22nd International
Conference on Autonomous Agents and Multiagent Systems)
| null | null | null |
cs.MA cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a distributed planning method with asynchronous execution for
multi-agent pickup and delivery (MAPD) problems for environments with
occasional delays in agents' activities and flexible endpoints. MAPD is a
crucial problem framework with many applications; however, most existing
studies assume ideal agent behaviors and environments, such as a fixed speed
of agents, synchronized movements, and a well-designed environment with many
short detours that let multiple agents perform tasks easily. In practice, such
an environment is often infeasible; for example, the moving speed of agents
may be affected by weather and floor conditions and is often prone to delays.
The proposed method relaxes some of these infeasible conditions so that MAPD
can be applied in more realistic environments, by allowing fluctuating speeds
in agents' actions and flexible working locations (endpoints). Our experiments
showed that our method enables agents to perform MAPD efficiently in such an
environment, compared to the baseline methods. We also analyze the behaviors
of agents using our method and discuss its limitations.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 07:02:03 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Miyashita",
"Yuki",
""
],
[
"Yamauchi",
"Tomoki",
""
],
[
"Sugawara",
"Toshiharu",
""
]
] |
new_dataset
| 0.995528 |
2302.09265
|
Hamidreza Arjmandi
|
Hamidreza Arjmandi, Mohamad Zoofaghari, Mitra Rezaei, Kajsa Kanebratt,
Liisa Vilen, David Janzen, Peter Gennemark, and Adam Noel
|
Diffusive Molecular Communication with a Spheroidal Receiver for
Organ-on-Chip Systems
| null | null | null | null |
cs.ET physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Realistic models of the components and processes are required for molecular
communication (MC) systems. In this paper, a spheroidal receiver structure is
proposed for MC, inspired by the 3D cell cultures known as spheroids, which
are widely used in organ-on-chip systems. A simple diffusive MC system is
considered in which the spheroidal receiver and a point-source transmitter lie
in an unbounded fluid environment. The spheroidal receiver is modeled as a
porous medium for diffusive signaling molecules, and its boundary conditions
and effective diffusion coefficient are characterized. It is revealed that the
spheroid amplifies the diffusion signal but also disperses it, which reduces
the information communication rate. Furthermore, we analytically formulate and
derive the concentration Green's function inside and outside the spheroid in
infinite-series forms that are confirmed by a particle-based simulator (PBS).
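For context, the classical reference point here is the free-space Green's
function for diffusion from an instantaneous point release in an unbounded
medium with diffusion coefficient $D$ (a textbook result, not the paper's
series solution for the spheroid):

$$ G(\mathbf{r}, t \mid \mathbf{r}', 0) = \frac{1}{(4\pi D t)^{3/2}}
\exp\!\left(-\frac{|\mathbf{r}-\mathbf{r}'|^{2}}{4 D t}\right). $$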
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 09:04:24 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Arjmandi",
"Hamidreza",
""
],
[
"Zoofaghari",
"Mohamad",
""
],
[
"Rezaei",
"Mitra",
""
],
[
"Kanebratt",
"Kajsa",
""
],
[
"Vilen",
"Liisa",
""
],
[
"Janzen",
"David",
""
],
[
"Gennemark",
"Peter",
""
],
[
"Noel",
"Adam",
""
]
] |
new_dataset
| 0.999247 |
2302.09291
|
David Gagnon
|
David J. Gagnon
|
ARIS: An open source platform for developing mobile learning experiences
| null | null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by mobile, Internet-enabled computing and the maturing field of
educational game design, the ARIS project has designed an open source tool for
rapidly producing locative, interactive, narrative-centric educational
experiences. In addition to the software, the project contributes a global
community of active designers and a growing set of compelling mechanics for
learners in such designs.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 15:55:21 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Gagnon",
"David J.",
""
]
] |
new_dataset
| 0.991277 |
2302.09328
|
Xuxin Cheng
|
Xuxin Cheng, Zhihong Zhu, Hongxiang Li, Yaowei Li, Yuexian Zou
|
SSVMR: Saliency-based Self-training for Video-Music Retrieval
|
Accepted by ICASSP 2023
| null | null | null |
cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rise of short videos, the demand for selecting appropriate background
music (BGM) for a video has increased significantly, and the video-music
retrieval (VMR) task is gradually drawing attention from the research
community. As in other cross-modal learning tasks, existing VMR approaches
usually attempt to measure the similarity between the video and the music in a
shared feature space. However, they (1) neglect the inevitable label noise;
and (2) neglect to enhance the ability to capture critical video clips. In
this paper, we propose a novel saliency-based self-training framework, termed
SSVMR. Specifically, we first make full use of the information contained in
the training dataset by applying a semi-supervised method to suppress the
adverse impact of the label noise problem, where a self-training approach is
adopted. In addition, we propose to capture the saliency of the video by
mixing two videos at the span level while preserving the locality of the two
original videos. Inspired by back translation in NLP, we also conduct back
retrieval to obtain more training data. Experimental results on the MVD
dataset show that our SSVMR achieves state-of-the-art performance by a large
margin, obtaining a relative improvement of 34.8% over the previous best model
in terms of R@1.
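For reference, R@1 (recall at rank 1) is the fraction of queries whose
top-ranked candidate is the ground-truth match. A minimal sketch in our own
notation, assuming the ground-truth music for video i is track i:

```python
import numpy as np

def recall_at_1(sim: np.ndarray) -> float:
    # sim[i, j] is the similarity score of video i against music track j.
    best = sim.argmax(axis=1)
    return float((best == np.arange(sim.shape[0])).mean())
```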
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 13:30:56 GMT"
}
] | 2023-02-21T00:00:00 |
[
[
"Cheng",
"Xuxin",
""
],
[
"Zhu",
"Zhihong",
""
],
[
"Li",
"Hongxiang",
""
],
[
"Li",
"Yaowei",
""
],
[
"Zou",
"Yuexian",
""
]
] |
new_dataset
| 0.998278 |