id (string, 9-10 chars) | submitter (string, 2-52 chars, ⌀ nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, ⌀ nullable) | journal-ref (string, 4-345 chars, ⌀ nullable) | doi (string, 11-120 chars, ⌀ nullable) | report-no (string, 2-243 chars, ⌀ nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
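The rows that follow are flattened records with this schema. As a minimal sketch, assuming the same records are also available as one JSON object per line with these field names (the file name below is hypothetical), they could be loaded and filtered like this:

```python
import json

# Hypothetical path; assumes one JSON record per line using the schema fields above.
PATH = "arxiv_new_dataset_predictions.jsonl"

def load_records(path):
    """Yield one metadata record per line, skipping blank lines."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# Example: cs.CL papers predicted as `new_dataset` with probability >= 0.99.
confident_cl = [
    rec for rec in load_records(PATH)
    if rec.get("prediction") == "new_dataset"
    and rec.get("probability", 0.0) >= 0.99
    and "cs.CL" in rec.get("categories", "")
]

for rec in confident_cl:
    print(rec["id"], rec["title"].replace("\n", " "), rec["probability"])
```

Streaming the file line by line keeps memory use flat even for large exports, and the filter fields (`prediction`, `probability`, `categories`) match the columns shown in the header.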
2206.08267
|
Ganesh Bagler Dr
|
Mansi Goel, Pallab Chakraborty, Vijay Ponnaganti, Minnet Khan,
Sritanaya Tatipamala, Aakanksha Saini and Ganesh Bagler
|
Ratatouille: A tool for Novel Recipe Generation
|
4 pages, 5 figures, 38th IEEE International Conference on Data
Engineering, DECOR Workshop
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Due to availability of a large amount of cooking recipes online, there is a
growing interest in using this as data to create novel recipes. Novel Recipe
Generation is a problem in the field of Natural Language Processing in which
our main interest is to generate realistic, novel cooking recipes. To come up
with such novel recipes, we trained various Deep Learning models such as LSTMs
and GPT-2 with a large amount of recipe data. We present Ratatouille
(https://cosylab.iiitd.edu.in/ratatouille2/), a web based application to
generate novel recipes.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 11:20:19 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Goel",
"Mansi",
""
],
[
"Chakraborty",
"Pallab",
""
],
[
"Ponnaganti",
"Vijay",
""
],
[
"Khan",
"Minnet",
""
],
[
"Tatipamala",
"Sritanaya",
""
],
[
"Saini",
"Aakanksha",
""
],
[
"Bagler",
"Ganesh",
""
]
] |
new_dataset
| 0.957527 |
2206.08292
|
Caio Mucchiani
|
Caio Mucchiani, Zhichao Liu, Ipsita Sahin, Jared Dube, Linh Vu, Elena
Kokkoni, Konstantinos Karydis
|
Closed-loop Position Control of a Pediatric Soft Robotic Wearable Device
for Upper Extremity Assistance
|
6 pages
|
Roman 2022
| null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This work focuses on closed-loop control based on proprioceptive feedback for
a pneumatically-actuated soft wearable device aimed at future support of infant
reaching tasks. The device comprises two soft pneumatic actuators (one
textile-based and one silicone-casted) actively controlling two
degrees-of-freedom per arm (shoulder adduction/abduction and elbow
flexion/extension, respectively). Inertial measurement units (IMUs) attached to
the wearable device provide real-time joint angle feedback. Device kinematics
analysis is informed by anthropometric data from infants (arm lengths) reported
in the literature. Range of motion and muscle co-activation patterns in infant
reaching are considered to derive desired trajectories for the device's
end-effector. Then, a proportional-derivative controller is developed to
regulate the pressure inside the actuators and in turn move the arm along
desired setpoints within the reachable workspace. Experimental results on
tracking desired arm trajectories using an engineered mannequin are presented,
demonstrating that the proposed controller can help guide the mannequin's wrist
to the desired setpoints.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 16:48:29 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Mucchiani",
"Caio",
""
],
[
"Liu",
"Zhichao",
""
],
[
"Sahin",
"Ipsita",
""
],
[
"Dube",
"Jared",
""
],
[
"Vu",
"Linh",
""
],
[
"Kokkoni",
"Elena",
""
],
[
"Karydis",
"Konstantinos",
""
]
] |
new_dataset
| 0.998302 |
2206.08304
|
Yijun Bian
|
Abhijith Sharma, Yijun Bian, Phil Munz, Apurva Narayan
|
Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
|
A. Sharma and Y. Bian share equal contribution
| null | null | null |
cs.CV cs.CR cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial attacks in deep learning models, especially for safety-critical
systems, are gaining more and more attention in recent years, due to the lack
of trust in the security and robustness of AI models. Yet the more primitive
adversarial attacks might be physically infeasible or require some resources
that are hard to access like the training data, which motivated the emergence
of patch attacks. In this survey, we provide a comprehensive overview to cover
existing techniques of adversarial patch attacks, aiming to help interested
researchers quickly catch up with the progress in this field. We also discuss
existing techniques for developing detection and defences against adversarial
patches, aiming to help the community better understand this field and its
applications in the real world.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 17:06:47 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Sharma",
"Abhijith",
""
],
[
"Bian",
"Yijun",
""
],
[
"Munz",
"Phil",
""
],
[
"Narayan",
"Apurva",
""
]
] |
new_dataset
| 0.99714 |
2206.08343
|
Taras Khakhulin
|
Taras Khakhulin, Vanessa Sklyarova, Victor Lempitsky, Egor Zakharov
|
Realistic One-shot Mesh-based Head Avatars
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a system for realistic one-shot mesh-based human head avatars
creation, ROME for short. Using a single photograph, our model estimates a
person-specific head mesh and the associated neural texture, which encodes both
local photometric and geometric details. The resulting avatars are rigged and
can be rendered using a neural network, which is trained alongside the mesh and
texture estimators on a dataset of in-the-wild videos. In the experiments, we
observe that our system performs competitively both in terms of head geometry
recovery and the quality of renders, especially for the cross-person
reenactment. See results https://samsunglabs.github.io/rome/
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 17:45:23 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Khakhulin",
"Taras",
""
],
[
"Sklyarova",
"Vanessa",
""
],
[
"Lempitsky",
"Victor",
""
],
[
"Zakharov",
"Egor",
""
]
] |
new_dataset
| 0.988936 |
2206.08345
|
Mohammad Shahab Uddin
|
Mohammad Shahab Uddin
|
Real-World Single Image Super-Resolution Under Rainy Condition
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Image super-resolution is an important research area in computer vision that
has a wide variety of applications including surveillance, medical imaging etc.
Real-world signal image super-resolution has become very popular now-a-days due
to its real-time application. There are still a lot of scopes to improve
real-world single image super-resolution specially during challenging weather
scenarios. In this paper, we have proposed a new algorithm to perform
real-world single image super-resolution during rainy condition. Our proposed
method can mitigate the influence of rainy conditions during image
super-resolution. Our experiment results show that our proposed algorithm can
perform image super-resolution decreasing the negative effects of the rain.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 17:48:27 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Uddin",
"Mohammad Shahab",
""
]
] |
new_dataset
| 0.974434 |
2206.08367
|
Mattia Segu
|
Tao Sun, Mattia Segu, Janis Postels, Yuxuan Wang, Luc Van Gool, Bernt
Schiele, Federico Tombari, Fisher Yu
|
SHIFT: A Synthetic Driving Dataset for Continuous Multi-Task Domain
Adaptation
|
Published at IEEE Conference on Computer Vision and Pattern
Recognition (CVPR) 2022
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Adapting to a continuously evolving environment is a safety-critical
challenge inevitably faced by all autonomous driving systems. Existing image
and video driving datasets, however, fall short of capturing the mutable nature
of the real world. In this paper, we introduce the largest multi-task synthetic
dataset for autonomous driving, SHIFT. It presents discrete and continuous
shifts in cloudiness, rain and fog intensity, time of day, and vehicle and
pedestrian density. Featuring a comprehensive sensor suite and annotations for
several mainstream perception tasks, SHIFT allows investigating the degradation
of a perception system performance at increasing levels of domain shift,
fostering the development of continuous adaptation strategies to mitigate this
problem and assess model robustness and generality. Our dataset and benchmark
toolkit are publicly available at www.vis.xyz/shift.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 17:59:52 GMT"
}
] | 2022-06-17T00:00:00 |
[
[
"Sun",
"Tao",
""
],
[
"Segu",
"Mattia",
""
],
[
"Postels",
"Janis",
""
],
[
"Wang",
"Yuxuan",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Tombari",
"Federico",
""
],
[
"Yu",
"Fisher",
""
]
] |
new_dataset
| 0.999846 |
1908.09042
|
Hamed Rahimi
|
Parsa Rajabzadeh, Amin Pishevar and Hamed Rahimi
|
SIDLE: Semantically Intelligent Distributed Leader Election Algorithm
for Wireless Sensor Networks
|
not agreed anymore
| null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the deployment of a group of Wireless Sensor and
Actuator Network (WSAN) for Internet of Thing (IoT) systems in rural regions
deployed by a drone dropping sensors and actuators at a certain position as a
mesh of a hexagonal form. Nodes are heterogeneous in hardware and functionality
thus not all nodes are able to transfer data directly to the base station.
Primitive ones are only capable of collecting local data. However, ones that
are more sophisticated are equipped with long-range radio telemetry and more
computational power. Power optimization is one of the crucial factors in
designing WSANs. Total power consumption must be minimized, as sensors are
self-managed. It is not feasible to collect sensors on time bases and recharge
the batteries. Therefore, energy consumption optimization and harvesting green
energy are other factors that are considered. In this regard, protocols are
designed in a way to support such requirements. The preprocessed data are first
collected and combined by the leaders at each hexagonal cell. Then, the
information packets are sent to the head clusters. Consequently, head clusters
reprocess the received information and depict a better global view of the zone,
using a variety of the received information. Finally, the processed information
is sent to the nearest base station or a mobile drone.
|
[
{
"version": "v1",
"created": "Fri, 23 Aug 2019 22:19:15 GMT"
},
{
"version": "v2",
"created": "Wed, 19 May 2021 21:06:30 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Jun 2022 23:52:38 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Rajabzadeh",
"Parsa",
""
],
[
"Pishevar",
"Amin",
""
],
[
"Rahimi",
"Hamed",
""
]
] |
new_dataset
| 0.997736 |
2101.11956
|
Pere-Llu\'is Huguet Cabot
|
Pere-Llu\'is Huguet-Cabot and David Abadi and Agneta Fischer and
Ekaterina Shutova
|
Us vs. Them: A Dataset of Populist Attitudes, News Bias and Emotions
|
Camera-ready version in EACL 2021
| null |
10.18653/v1/2021.eacl-main.165
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computational modelling of political discourse tasks has become an
increasingly important area of research in natural language processing.
Populist rhetoric has risen across the political sphere in recent years;
however, computational approaches to it have been scarce due to its complex
nature. In this paper, we present the new $\textit{Us vs. Them}$ dataset,
consisting of 6861 Reddit comments annotated for populist attitudes and the
first large-scale computational models of this phenomenon. We investigate the
relationship between populist mindsets and social groups, as well as a range of
emotions typically associated with these. We set a baseline for two tasks
related to populist attitudes and present a set of multi-task learning models
that leverage and demonstrate the importance of emotion and group
identification as auxiliary tasks.
|
[
{
"version": "v1",
"created": "Thu, 28 Jan 2021 12:18:19 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Feb 2021 21:53:40 GMT"
},
{
"version": "v3",
"created": "Sun, 14 Feb 2021 17:42:12 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Huguet-Cabot",
"Pere-Lluís",
""
],
[
"Abadi",
"David",
""
],
[
"Fischer",
"Agneta",
""
],
[
"Shutova",
"Ekaterina",
""
]
] |
new_dataset
| 0.999647 |
2107.00962
|
Antonella Barisic
|
Antonella Barisic, Frano Petric, Stjepan Bogdan
|
Brain over Brawn: Using a Stereo Camera to Detect, Track, and Intercept
a Faster UAV by Reconstructing the Intruder's Trajectory
|
Published in journal Field Robotics, March 2022. UAV-Eagle dataset
available at: https://github.com/larics/UAV-Eagle
|
Field Robotics 2022 ISSN: 2771-3989
|
10.55417/fr.2022009
| null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents our approach to intercepting a faster intruder UAV,
inspired by the MBZIRC 2020 Challenge 1. By utilizing a priori knowledge of the
shape of the intruder's trajectory, we can calculate an interception point.
Target tracking is based on image processing by a YOLOv3 Tiny convolutional
neural network, combined with depth calculation using a gimbal-mounted ZED Mini
stereo camera. We use RGB and depth data from the camera, devising a
noise-reducing histogram-filter to extract the target's 3D position. Obtained
3D measurements of target's position are used to calculate the position,
orientation, and size of a figure-eight shaped trajectory, which we approximate
using a Bernoulli lemniscate. Once the approximation is deemed sufficiently
precise, as measured by the distance between observations and estimate, we
calculate an interception point to position the interceptor UAV directly on the
intruder's path. Our method, which we have significantly improved based on the
experience gathered during the MBZIRC competition, has been validated in
simulation and through field experiments. Our results confirm that we have
developed an efficient, visual-perception module that can extract information
describing the intruder UAV's motion with precision sufficient to support
interception planning. In a majority of our simulated encounters, we can track
and intercept a target that moves 30% faster than the interceptor.
Corresponding tests in an unstructured environment yielded 9 out of 12
successful results.
|
[
{
"version": "v1",
"created": "Fri, 2 Jul 2021 10:49:22 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 22:15:46 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Barisic",
"Antonella",
""
],
[
"Petric",
"Frano",
""
],
[
"Bogdan",
"Stjepan",
""
]
] |
new_dataset
| 0.982038 |
2109.04572
|
Sayan Nag
|
Mayukh Bhattacharyya, Sayan Nag, Udita Ghosh
|
Deciphering Environmental Air Pollution with Large Scale City Data
|
Accepted as a Oral Spotlight Paper at International Joint Conference
of Artificial Intelligence (IJCAI) 2022
| null | null | null |
cs.LG cs.AI physics.data-an
|
http://creativecommons.org/licenses/by/4.0/
|
Air pollution poses a serious threat to sustainable environmental conditions
in the 21st century. Its importance in determining the health and living
standards in urban settings is only expected to increase with time. Various
factors ranging from artificial emissions to natural phenomena are known to be
primary causal agents or influencers behind rising air pollution levels.
However, the lack of large scale data involving the major artificial and
natural factors has hindered the research on the causes and relations governing
the variability of the different air pollutants. Through this work, we
introduce a large scale city-wise dataset for exploring the relationships among
these agents over a long period of time. We also introduce a transformer based
model - cosSquareFormer, for the problem of pollutant level estimation and
forecasting. Our model outperforms most of the benchmark models for this task.
We also analyze and explore the dataset through our model and other
methodologies to bring out important inferences which enable us to understand
the dynamics of the causal agents at a deeper level. Through our paper, we seek
to provide a great set of foundations for further research into this domain
that will demand critical attention of ours in the near future.
|
[
{
"version": "v1",
"created": "Thu, 9 Sep 2021 22:00:51 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 15:23:49 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Bhattacharyya",
"Mayukh",
""
],
[
"Nag",
"Sayan",
""
],
[
"Ghosh",
"Udita",
""
]
] |
new_dataset
| 0.999689 |
2110.02453
|
Lin Zheng
|
Lin Zheng, Huijie Pan, Lingpeng Kong
|
Ripple Attention for Visual Perception with Sub-quadratic Complexity
|
19 pages, 2 figures, ICML 2022 camera ready
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer architectures are now central to sequence modeling tasks. At its
heart is the attention mechanism, which enables effective modeling of long-term
dependencies in a sequence. Recently, transformers have been successfully
applied in the computer vision domain, where 2D images are first segmented into
patches and then treated as 1D sequences. Such linearization, however, impairs
the notion of spatial locality in images, which bears important visual clues.
To bridge the gap, we propose ripple attention, a sub-quadratic attention
mechanism for vision transformers. Built upon the recent kernel-based efficient
attention mechanisms, we design a novel dynamic programming algorithm that
weights contributions of different tokens to a query with respect to their
relative spatial distances in the 2D space in linear observed time. Extensive
experiments and analyses demonstrate the effectiveness of ripple attention on
various visual tasks.
|
[
{
"version": "v1",
"created": "Wed, 6 Oct 2021 02:00:38 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 13:59:31 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Zheng",
"Lin",
""
],
[
"Pan",
"Huijie",
""
],
[
"Kong",
"Lingpeng",
""
]
] |
new_dataset
| 0.989464 |
2110.02911
|
Miguel Costa
|
Miguel Costa, Diogo Costa, Tiago Gomes, Sandro Pinto
|
Shifting Capsule Networks from the Cloud to the Deep Edge
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Capsule networks (CapsNets) are an emerging trend in image processing. In
contrast to a convolutional neural network, CapsNets are not vulnerable to
object deformation, as the relative spatial information of the objects is
preserved across the network. However, their complexity is mainly related to
the capsule structure and the dynamic routing mechanism, which makes it almost
unreasonable to deploy a CapsNet, in its original form, in a
resource-constrained device powered by a small microcontroller (MCU). In an era
where intelligence is rapidly shifting from the cloud to the edge, this high
complexity imposes serious challenges to the adoption of CapsNets at the very
edge. To tackle this issue, we present an API for the execution of quantized
CapsNets in Arm Cortex-M and RISC-V MCUs. Our software kernels extend the Arm
CMSIS-NN and RISC-V PULP-NN to support capsule operations with 8-bit integers
as operands. Along with it, we propose a framework to perform post-training
quantization of a CapsNet. Results show a reduction in memory footprint of
almost 75%, with accuracy loss ranging from 0.07% to 0.18%. In terms of
throughput, our Arm Cortex-M API enables the execution of primary capsule and
capsule layers with medium-sized kernels in just 119.94 and 90.60 milliseconds
(ms), respectively (STM32H755ZIT6U, Cortex-M7 @ 480 MHz). For the GAP-8 SoC
(RISC-V RV32IMCXpulp @ 170 MHz), the latency drops to 7.02 and 38.03 ms,
respectively.
|
[
{
"version": "v1",
"created": "Wed, 6 Oct 2021 16:52:01 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 10:41:49 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Costa",
"Miguel",
""
],
[
"Costa",
"Diogo",
""
],
[
"Gomes",
"Tiago",
""
],
[
"Pinto",
"Sandro",
""
]
] |
new_dataset
| 0.978395 |
2112.11896
|
Simeon Ball
|
Simeon Ball
|
The Grassl-R\"otteler cyclic and consta-cyclic MDS codes are generalised
Reed-Solomon codes
| null | null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove that the cyclic and constacyclic codes constructed by Grassl and
R\"otteler in arXiv:1502.05267 are generalised Reed-Solomon codes. This note
can be considered as an addendum to that article. It can also be considered as
an appendix to arXiv:2106.10180, where Conjecture 11 of arXiv:1502.0526, which
was stated for Grassl-R\"otteler codes, is proven for generalised Reed-Solomon
codes. The content of this note, together with arXiv:2106.10180, therefore
implies that Conjecture 11 from arXiv:1502.0526 is true.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 14:28:31 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Dec 2021 15:48:03 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Jun 2022 09:40:52 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Ball",
"Simeon",
""
]
] |
new_dataset
| 0.999552 |
2201.04127
|
Chung-Yi Weng
|
Chung-Yi Weng, Brian Curless, Pratul P. Srinivasan, Jonathan T. Barron
and Ira Kemelmacher-Shlizerman
|
HumanNeRF: Free-viewpoint Rendering of Moving People from Monocular
Video
|
CVPR 2022 (oral). Project page with videos:
https://grail.cs.washington.edu/projects/humannerf/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce a free-viewpoint rendering method -- HumanNeRF -- that works on
a given monocular video of a human performing complex body motions, e.g. a
video from YouTube. Our method enables pausing the video at any frame and
rendering the subject from arbitrary new camera viewpoints or even a full
360-degree camera path for that particular frame and body pose. This task is
particularly challenging, as it requires synthesizing photorealistic details of
the body, as seen from various camera angles that may not exist in the input
video, as well as synthesizing fine details such as cloth folds and facial
appearance. Our method optimizes for a volumetric representation of the person
in a canonical T-pose, in concert with a motion field that maps the estimated
canonical representation to every frame of the video via backward warps. The
motion field is decomposed into skeletal rigid and non-rigid motions, produced
by deep networks. We show significant performance improvements over prior work,
and compelling examples of free-viewpoint renderings from monocular video of
moving humans in challenging uncontrolled capture scenarios.
|
[
{
"version": "v1",
"created": "Tue, 11 Jan 2022 18:51:21 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 20:06:42 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Weng",
"Chung-Yi",
""
],
[
"Curless",
"Brian",
""
],
[
"Srinivasan",
"Pratul P.",
""
],
[
"Barron",
"Jonathan T.",
""
],
[
"Kemelmacher-Shlizerman",
"Ira",
""
]
] |
new_dataset
| 0.95652 |
2201.12288
|
Jingyun Liang
|
Jingyun Liang and Jiezhang Cao and Yuchen Fan and Kai Zhang and Rakesh
Ranjan and Yawei Li and Radu Timofte and Luc Van Gool
|
VRT: A Video Restoration Transformer
|
add results on VFI and STVSR; SOTA results (+up to 2.16dB) on video
SR, video deblurring, video denoising, video frame interpolation and
space-time video super-resolution. Code: https://github.com/JingyunLiang/VRT
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Video restoration (e.g., video super-resolution) aims to restore high-quality
frames from low-quality frames. Different from single image restoration, video
restoration generally requires to utilize temporal information from multiple
adjacent but usually misaligned video frames. Existing deep methods generally
tackle with this by exploiting a sliding window strategy or a recurrent
architecture, which either is restricted by frame-by-frame restoration or lacks
long-range modelling ability. In this paper, we propose a Video Restoration
Transformer (VRT) with parallel frame prediction and long-range temporal
dependency modelling abilities. More specifically, VRT is composed of multiple
scales, each of which consists of two kinds of modules: temporal mutual self
attention (TMSA) and parallel warping. TMSA divides the video into small clips,
on which mutual attention is applied for joint motion estimation, feature
alignment and feature fusion, while self attention is used for feature
extraction. To enable cross-clip interactions, the video sequence is shifted
for every other layer. Besides, parallel warping is used to further fuse
information from neighboring frames by parallel feature warping. Experimental
results on five tasks, including video super-resolution, video deblurring,
video denoising, video frame interpolation and space-time video
super-resolution, demonstrate that VRT outperforms the state-of-the-art methods
by large margins ($\textbf{up to 2.16dB}$) on fourteen benchmark datasets.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 17:54:43 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 17:17:05 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Liang",
"Jingyun",
""
],
[
"Cao",
"Jiezhang",
""
],
[
"Fan",
"Yuchen",
""
],
[
"Zhang",
"Kai",
""
],
[
"Ranjan",
"Rakesh",
""
],
[
"Li",
"Yawei",
""
],
[
"Timofte",
"Radu",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.996563 |
2203.01556
|
Ruck Thawonmas
|
Ibrahim Khan, Thai Van Nguyen, Xincheng Dai, and Ruck Thawonmas
|
DareFightingICE Competition: A Fighting Game Sound Design and AI
Competition
|
2022 IEEE Conference on Games
| null | null | null |
cs.HC cs.AI cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new competition -- at the 2022 IEEE Conference on Games
(CoG) -- called DareFightingICE Competition. The competition has two tracks: a
sound design track and an AI track. The game platform for this competition is
also called DareFightingICE, a fighting game platform. DareFightingICE is a
sound-design-enhanced version of FightingICE, used earlier in a competition at
CoG until 2021 to promote artificial intelligence (AI) research in fighting
games. In the sound design track, participants compete for the best sound
design, given the default sound design of DareFightingICE as a sample, where we
define a sound design as a set of sound effects combined with the source code
that implements their timing-control algorithm. Participants of the AI track
are asked to develop their AI algorithm that controls a character given only
sound as the input (blind AI) to fight against their opponent; a sample
deep-learning blind AI will be provided by us. Our means to maximize the
synergy between the two tracks are also described. This competition serves to
come up with effective sound designs for visually impaired players, a group in
the gaming community which has been mostly ignored. To the best of our
knowledge, DareFightingICE Competition is the first of its kind within and
outside of CoG.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 08:12:15 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 12:51:38 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Khan",
"Ibrahim",
""
],
[
"Van Nguyen",
"Thai",
""
],
[
"Dai",
"Xincheng",
""
],
[
"Thawonmas",
"Ruck",
""
]
] |
new_dataset
| 0.999784 |
2203.16512
|
Harveen Singh Chadha
|
Harveen Singh Chadha, Anirudh Gupta, Priyanshi Shah, Neeraj Chhimwal,
Ankur Dhuriya, Rishabh Gaur, Vivek Raghavan
|
Vakyansh: ASR Toolkit for Low Resource Indic languages
| null | null | null | null |
cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present Vakyansh, an end to end toolkit for Speech Recognition in Indic
languages. India is home to almost 121 languages and around 125 crore speakers.
Yet most of the languages are low resource in terms of data and pretrained
models. Through Vakyansh, we introduce automatic data pipelines for data
creation, model training, model evaluation and deployment. We create 14,000
hours of speech data in 23 Indic languages and train wav2vec 2.0 based
pretrained models. These pretrained models are then finetuned to create state
of the art speech recognition models for 18 Indic languages which are followed
by language models and punctuation restoration models. We open source all these
resources with a mission that this will inspire the speech community to develop
speech first applications using our ASR models in Indic languages.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 17:50:18 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 17:04:54 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Chadha",
"Harveen Singh",
""
],
[
"Gupta",
"Anirudh",
""
],
[
"Shah",
"Priyanshi",
""
],
[
"Chhimwal",
"Neeraj",
""
],
[
"Dhuriya",
"Ankur",
""
],
[
"Gaur",
"Rishabh",
""
],
[
"Raghavan",
"Vivek",
""
]
] |
new_dataset
| 0.999369 |
2204.10749
|
Wenqian Ronny Huang
|
W. Ronny Huang, Shuo-yiin Chang, David Rybach, Rohit Prabhavalkar,
Tara N. Sainath, Cyril Allauzen, Cal Peyser, Zhiyun Lu
|
E2E Segmenter: Joint Segmenting and Decoding for Long-Form ASR
|
Interspeech 2022
| null | null | null |
cs.SD cs.CL cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Improving the performance of end-to-end ASR models on long utterances ranging
from minutes to hours in length is an ongoing challenge in speech recognition.
A common solution is to segment the audio in advance using a separate voice
activity detector (VAD) that decides segment boundary locations based purely on
acoustic speech/non-speech information. VAD segmenters, however, may be
sub-optimal for real-world speech where, e.g., a complete sentence that should
be taken as a whole may contain hesitations in the middle ("set an alarm for...
5 o'clock").
We propose to replace the VAD with an end-to-end ASR model capable of
predicting segment boundaries in a streaming fashion, allowing the segmentation
decision to be conditioned not only on better acoustic features but also on
semantic features from the decoded text with negligible extra computation. In
experiments on real world long-form audio (YouTube) with lengths of up to 30
minutes, we demonstrate 8.5% relative WER improvement and 250 ms reduction in
median end-of-segment latency compared to the VAD segmenter baseline on a
state-of-the-art Conformer RNN-T model.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 15:13:12 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 14:49:28 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Huang",
"W. Ronny",
""
],
[
"Chang",
"Shuo-yiin",
""
],
[
"Rybach",
"David",
""
],
[
"Prabhavalkar",
"Rohit",
""
],
[
"Sainath",
"Tara N.",
""
],
[
"Allauzen",
"Cyril",
""
],
[
"Peyser",
"Cal",
""
],
[
"Lu",
"Zhiyun",
""
]
] |
new_dataset
| 0.988944 |
2204.13843
|
Aiqing Zhu
|
Aiqing Zhu, Beibei Zhu, Jiawei Zhang, Yifa Tang, Jian Liu
|
VPNets: Volume-preserving neural networks for learning source-free
dynamics
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose volume-preserving networks (VPNets) for learning unknown
source-free dynamical systems using trajectory data. We propose three modules
and combine them to obtain two network architectures, coined R-VPNet and
LA-VPNet. The distinct feature of the proposed models is that they are
intrinsic volume-preserving. In addition, the corresponding approximation
theorems are proved, which theoretically guarantee the expressivity of the
proposed VPNets to learn source-free dynamics. The effectiveness,
generalization ability and structure-preserving property of the VP-Nets are
demonstrated by numerical experiments.
|
[
{
"version": "v1",
"created": "Fri, 29 Apr 2022 01:36:55 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 07:53:36 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Zhu",
"Aiqing",
""
],
[
"Zhu",
"Beibei",
""
],
[
"Zhang",
"Jiawei",
""
],
[
"Tang",
"Yifa",
""
],
[
"Liu",
"Jian",
""
]
] |
new_dataset
| 0.97705 |
2205.08479
|
Ali Farahbakhsh
|
Ali Farahbakhsh, Chen Feng
|
Opportunistic Routing in Quantum Networks
|
This version extends our INFOCOM'2022 paper by adding more analysis
and simulations
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Unlike classical routing algorithms, quantum routing algorithms make use of
entangled states - a type of resources that have a limited lifetime and need to
be regenerated after consumption. In a nutshell, quantum routing algorithms
have to use these resources efficiently, while optimizing some objectives such
as the total waiting time. Current routing algorithms tend to keep a routing
request waiting until all of the resources on its path are available. In this
paper, we introduce a new way of managing entanglement resources in an
opportunistic fashion: a request can move forward along its path as soon as
possible (even if some resources on its path are not ready). We show that this
opportunistic approach is fundamentally better than conventional approaches. In
particular, our results indicate that this new approach achieves a 30-50%
improvement in the average total waiting time and average link waiting time
compared with several state-of-the-art routing algorithms. As a by-product of
this work, we develop a new simulator for quantum routing, which can be used to
evaluate various design choices under different scenarios.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 16:44:10 GMT"
},
{
"version": "v2",
"created": "Sat, 28 May 2022 16:54:53 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Jun 2022 16:30:19 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Farahbakhsh",
"Ali",
""
],
[
"Feng",
"Chen",
""
]
] |
new_dataset
| 0.999111 |
2205.15783
|
William Bennett
|
William Bennett, Ryan G. McClarren
|
Benchmarks for infinite medium, time dependent transport problems with
isotropic scattering
| null | null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
The widely used AZURV1 transport benchmarks package provides a suite of
solutions to isotropic scattering transport problems with a variety of initial
conditions (Ganapol 2001). Most of these solutions have an initial condition
that is a Dirac delta function in space; as a result these benchmarks are
challenging problems to use for verification tests in computer codes.
Nevertheless, approximating a delta function in simulation often leads to low
orders of convergence and the inability to test the convergence of high-order
numerical methods. While there are examples in the literature of integration of
these solutions as Green's functions for the transport operator to produce
results for more easily simulated sources, they are limited in scope and
briefly explained. For a sampling of initial conditions and sources, we present
solutions for the uncollided and collided scalar flux to facilitate accurate
testing of source treatment in numerical solvers. The solution for the
uncollided scalar flux is found in analytic form for some sources. Since
integrating the Green's functions is often nontrivial, discussion of
integration difficulty and workarounds to find convergent integrals is
included. Additionally, our uncollided solutions can be used as source terms in
verification studies, in a similar way to the method of manufactured solutions.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 12:37:56 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 14:20:37 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Bennett",
"William",
""
],
[
"McClarren",
"Ryan G.",
""
]
] |
new_dataset
| 0.999571 |
2206.03132
|
Zirong Chen
|
Zirong Chen, Isaac Li, Haoxiang Zhang, Sarah Preum, John A. Stankovic,
Meiyi Ma
|
CitySpec: An Intelligent Assistant System for Requirement Specification
in Smart Cities
|
This paper is accepted by SMARTCOMP 2022
| null | null | null |
cs.AI cs.CL cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
An increasing number of monitoring systems have been developed in smart
cities to ensure that real-time operations of a city satisfy safety and
performance requirements. However, many existing city requirements are written
in English with missing, inaccurate, or ambiguous information. There is a high
demand for assisting city policy makers in converting human-specified
requirements to machine-understandable formal specifications for monitoring
systems. To tackle this limitation, we build CitySpec, the first intelligent
assistant system for requirement specification in smart cities. To create
CitySpec, we first collect over 1,500 real-world city requirements across
different domains from over 100 cities and extract city-specific knowledge to
generate a dataset of city vocabulary with 3,061 words. We also build a
translation model and enhance it through requirement synthesis and develop a
novel online learning framework with validation under uncertainty. The
evaluation results on real-world city requirements show that CitySpec increases
the sentence-level accuracy of requirement specification from 59.02% to 86.64%,
and has strong adaptability to a new city and a new domain (e.g., F1 score for
requirements in Seattle increases from 77.6% to 93.75% with online learning).
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 09:15:25 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 20:21:54 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Chen",
"Zirong",
""
],
[
"Li",
"Isaac",
""
],
[
"Zhang",
"Haoxiang",
""
],
[
"Preum",
"Sarah",
""
],
[
"Stankovic",
"John A.",
""
],
[
"Ma",
"Meiyi",
""
]
] |
new_dataset
| 0.999235 |
2206.06581
|
Shweta Yadav
|
Shweta Yadav, Deepak Gupta, and Dina Demner-Fushman
|
CHQ-Summ: A Dataset for Consumer Healthcare Question Summarization
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The quest for seeking health information has swamped the web with consumers'
health-related questions. Generally, consumers use overly descriptive and
peripheral information to express their medical condition or other healthcare
needs, contributing to the challenges of natural language understanding. One
way to address this challenge is to summarize the questions and distill the key
information of the original question. To address this issue, we introduce a new
dataset, CHQ-Summ that contains 1507 domain-expert annotated consumer health
questions and corresponding summaries. The dataset is derived from the
community question-answering forum and therefore provides a valuable resource
for understanding consumer health-related posts on social media. We benchmark
the dataset on multiple state-of-the-art summarization models to show the
effectiveness of the dataset.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 03:49:03 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 16:07:12 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Yadav",
"Shweta",
""
],
[
"Gupta",
"Deepak",
""
],
[
"Demner-Fushman",
"Dina",
""
]
] |
new_dataset
| 0.999737 |
2206.07093
|
Michael Howard P.Eng
|
Michael Howard
|
Helm -- What It Can Do and Where Is It Going?
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Deploying an application into a Kubernetes cluster requires sending a
manifest file to the cluster's control plane interface. This action is
typically performed through a kubectl client which is configured and authorized
to communicate with the control plane's Uniform Resource Locator (URL). An
application typically requires many Kubernetes resources such as pods,
deployments, secrets, service and volumes. Configuring each of these through
manifest files requires complex scripting, especially when there are numerous
resources needed.
A solution to the complex management tasks is Helm. Helm provides both a tool
and underlying framework that packages the necessary manifest files. These
packages are deployed through a single step install command which abstracts all
the underlying control plane interaction from the user. Similar to application
installs through Debian's package manager dpkg, packages are shared through
local and remote repositories and allow the user to easily install, update,
delete or handle concurrent versions.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 18:32:14 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Howard",
"Michael",
""
]
] |
new_dataset
| 0.991814 |
2206.07106
|
Alexander Spangher
|
Alexander Spangher, Xiang Ren, Jonathan May and Nanyun Peng
|
NewsEdits: A News Article Revision Dataset and a Document-Level
Reasoning Challenge
| null |
2022 Annual Conference of the North American Chapter of the
Association for Computational Linguistics
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
News article revision histories provide clues to narrative and factual
evolution in news articles. To facilitate analysis of this evolution, we
present the first publicly available dataset of news revision histories,
NewsEdits. Our dataset is large-scale and multilingual; it contains 1.2 million
articles with 4.6 million versions from over 22 English- and French-language
newspaper sources based in three countries, spanning 15 years of coverage
(2006-2021).
We define article-level edit actions: Addition, Deletion, Edit and Refactor,
and develop a high-accuracy extraction algorithm to identify these actions. To
underscore the factual nature of many edit actions, we conduct analyses showing
that added and deleted sentences are more likely to contain updating events,
main content and quotes than unchanged sentences.
Finally, to explore whether edit actions are predictable, we introduce three
novel tasks aimed at predicting actions performed during version updates. We
show that these tasks are possible for expert humans but are challenging for
large NLP models. We hope this can spur research in narrative framing and help
provide predictive tools for journalists chasing breaking news.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 18:47:13 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Spangher",
"Alexander",
""
],
[
"Ren",
"Xiang",
""
],
[
"May",
"Jonathan",
""
],
[
"Peng",
"Nanyun",
""
]
] |
new_dataset
| 0.999835 |
2206.07116
|
Avraham N. Trahtman
|
A.N. Trahtman
|
A Partially Synchronizing Coloring
|
9 pages, 2 figures Lecture Notes in Computer Science, 6072(2010),
363-370. arXiv admin note: text overlap with arXiv:0801.2838, arXiv:0709.0099
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Given a finite directed graph, a coloring of its edges turns the graph into a
finite-state automaton. A k-synchronizing word of a deterministic automaton is
a word in the alphabet of colors at its edges that maps the state set of the
automaton at least on k-element subset. A coloring of edges of a directed
strongly connected finite graph of a uniform outdegree (constant outdegree of
any vertex) is k-synchronizing if the coloring turns the graph into a
deterministic finite automaton possessing a k-synchronizing word.
For k=1 one has the well known road coloring problem. The recent positive
solution of the road coloring problem implies an elegant generalization
considered first by Beal and Perrin: a directed finite strongly connected graph
of uniform outdegree is k-synchronizing iff the greatest common divisor of
lengths of all its cycles is k.
Some consequences for coloring of an arbitrary finite digraph are presented.
We describe a subquadratic algorithm of the road coloring for the
k-synchronization implemented in the package TESTAS. A new linear visualization
program demonstrates the obtained coloring. Some consequences for coloring of
an arbitrary finite digraph and of such a graph of uniform outdegree are
presented.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 19:08:31 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Trahtman",
"A. N.",
""
]
] |
new_dataset
| 0.99825 |
2206.07163
|
Qi Chang
|
Qi Chang, Zhennan Yan, Mu Zhou, Di Liu, Khalid Sawalha, Meng Ye,
Qilong Zhangli, Mikael Kanski, Subhi Al Aref, Leon Axel, Dimitris Metaxas
|
DeepRecon: Joint 2D Cardiac Segmentation and 3D Volume Reconstruction
via A Structure-Specific Generative Method
|
MICCAI2022
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Joint 2D cardiac segmentation and 3D volume reconstruction are fundamental to
building statistical cardiac anatomy models and understanding functional
mechanisms from motion patterns. However, due to the low through-plane
resolution of cine MR and high inter-subject variance, accurately segmenting
cardiac images and reconstructing the 3D volume are challenging. In this study,
we propose an end-to-end latent-space-based framework, DeepRecon, that
generates multiple clinically essential outcomes, including accurate image
segmentation, synthetic high-resolution 3D image, and 3D reconstructed volume.
Our method identifies the optimal latent representation of the cine image that
contains accurate semantic information for cardiac structures. In particular,
our model jointly generates synthetic images with accurate semantic information
and segmentation of the cardiac structures using the optimal latent
representation. We further explore downstream applications of 3D shape
reconstruction and 4D motion pattern adaptation by the different latent-space
manipulation strategies.The simultaneously generated high-resolution images
present a high interpretable value to assess the cardiac shape and
motion.Experimental results demonstrate the effectiveness of our approach on
multiple fronts including 2D segmentation, 3D reconstruction, downstream 4D
motion pattern adaption performance.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 20:46:11 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Chang",
"Qi",
""
],
[
"Yan",
"Zhennan",
""
],
[
"Zhou",
"Mu",
""
],
[
"Liu",
"Di",
""
],
[
"Sawalha",
"Khalid",
""
],
[
"Ye",
"Meng",
""
],
[
"Zhangli",
"Qilong",
""
],
[
"Kanski",
"Mikael",
""
],
[
"Aref",
"Subhi Al",
""
],
[
"Axel",
"Leon",
""
],
[
"Metaxas",
"Dimitris",
""
]
] |
new_dataset
| 0.997987 |
2206.07176
|
Soumyabrata Dev
|
Pierre Berjon, Rajib Sharma, Avishek Nag, and Soumyabrata Dev
|
Frequency-centroid features for word recognition of non-native English
speakers
|
Published in IEEE Irish Signals & Systems Conference (ISSC), 2022
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of this work is to investigate complementary features which can
aid the quintessential Mel frequency cepstral coefficients (MFCCs) in the task
of closed, limited set word recognition for non-native English speakers of
different mother-tongues. Unlike the MFCCs, which are derived from the spectral
energy of the speech signal, the proposed frequency-centroids (FCs) encapsulate
the spectral centres of the different bands of the speech spectrum, with the
bands defined by the Mel filterbank. These features, in combination with the
MFCCs, are observed to provide relative performance improvement in English word
recognition, particularly under varied noisy conditions. A two-stage
Convolution Neural Network (CNN) is used to model the features of the English
words uttered with Arabic, French and Spanish accents.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 21:19:49 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Berjon",
"Pierre",
""
],
[
"Sharma",
"Rajib",
""
],
[
"Nag",
"Avishek",
""
],
[
"Dev",
"Soumyabrata",
""
]
] |
new_dataset
| 0.994749 |
2206.07190
|
Ahmed Mahran
|
Ahmed Mahran, Carlo Alessandro Borella, Konstantinos Perifanos
|
Codec at SemEval-2022 Task 5: Multi-Modal Multi-Transformer Misogynous
Meme Classification Framework
|
Accepted for publication at the 16th International Workshop on
Semantic Evaluation, Task 5: MAMI - Multimedia Automatic Misogyny
Identification co-located with NAACL 2022
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper we describe our work towards building a generic framework for
both multi-modal embedding and multi-label binary classification tasks, while
participating in task 5 (Multimedia Automatic Misogyny Identification) of
SemEval 2022 competition.
Since pretraining deep models from scratch is a resource and data hungry
task, our approach is based on three main strategies. We combine different
state-of-the-art architectures to capture a wide spectrum of semantic signals
from the multi-modal input. We employ a multi-task learning scheme to be able
to use multiple datasets from the same knowledge domain to help increase the
model's performance. We also use multiple objectives to regularize and fine
tune different system components.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 22:37:25 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Mahran",
"Ahmed",
""
],
[
"Borella",
"Carlo Alessandro",
""
],
[
"Perifanos",
"Konstantinos",
""
]
] |
new_dataset
| 0.97238 |
2206.07198
|
Yunfan Li
|
Yunfan Li, Vinayak Shenoy, Prateek Prasanna, I.V. Ramakrishnan, Haibin
Ling, Himanshu Gupta
|
Surgical Phase Recognition in Laparoscopic Cholecystectomy
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic recognition of surgical phases in surgical videos is a fundamental
task in surgical workflow analysis. In this report, we propose a
Transformer-based method that utilizes calibrated confidence scores for a
2-stage inference pipeline, which dynamically switches between a baseline model
and a separately trained transition model depending on the calibrated
confidence level. Our method outperforms the baseline model on the Cholec80
dataset, and can be applied to a variety of action segmentation methods.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 22:55:31 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Li",
"Yunfan",
""
],
[
"Shenoy",
"Vinayak",
""
],
[
"Prasanna",
"Prateek",
""
],
[
"Ramakrishnan",
"I. V.",
""
],
[
"Ling",
"Haibin",
""
],
[
"Gupta",
"Himanshu",
""
]
] |
new_dataset
| 0.997368 |
2206.07201
|
Alexander You
|
Alexander You, Nidhi Parayil, Josyula Gopala Krishna, Uddhav
Bhattarai, Ranjan Sapkota, Dawood Ahmed, Matthew Whiting, Manoj Karkee, Cindy
M. Grimm, Joseph R. Davidson
|
An autonomous robot for pruning modern, planar fruit trees
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Dormant pruning of fruit trees is an important task for maintaining tree
health and ensuring high-quality fruit. Due to decreasing labor availability,
pruning is a prime candidate for robotic automation. However, pruning also
represents a uniquely difficult problem for robots, requiring robust systems
for perception, pruning point determination, and manipulation that must operate
under variable lighting conditions and in complex, highly unstructured
environments. In this paper, we introduce a system for pruning sweet cherry
trees (in a planar tree architecture called an upright fruiting offshoot
configuration) that integrates various subsystems from our previous work on
perception and manipulation. The resulting system is capable of operating
completely autonomously and requires minimal control of the environment. We
validate the performance of our system through field trials in a sweet cherry
orchard, ultimately achieving a cutting success rate of 58%. Though not fully
robust and requiring improvements in throughput, our system is the first to
operate on fruit trees and represents a useful base platform to be improved in
the future.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 23:03:01 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"You",
"Alexander",
""
],
[
"Parayil",
"Nidhi",
""
],
[
"Krishna",
"Josyula Gopala",
""
],
[
"Bhattarai",
"Uddhav",
""
],
[
"Sapkota",
"Ranjan",
""
],
[
"Ahmed",
"Dawood",
""
],
[
"Whiting",
"Matthew",
""
],
[
"Karkee",
"Manoj",
""
],
[
"Grimm",
"Cindy M.",
""
],
[
"Davidson",
"Joseph R.",
""
]
] |
new_dataset
| 0.994774 |
2206.07207
|
Hammad Ayyubi
|
Hammad A. Ayyubi, Christopher Thomas, Lovish Chum, Rahul Lokesh, Yulei
Niu, Xudong Lin, Long Chen, Jaywon Koo, Sounak Ray and Shih-Fu Chang
|
Multimodal Event Graphs: Towards Event Centric Understanding of
Multimodal World
| null | null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding how events described or shown in multimedia content relate to
one another is a critical component to developing robust artificially
intelligent systems which can reason about real-world media. While much
research has been devoted to event understanding in the text, image, and video
domains, none have explored the complex relations that events experience across
domains. For example, a news article may describe a `protest' event while a
video shows an `arrest' event. Recognizing that the visual `arrest' event is a
subevent of the broader `protest' event is a challenging, yet important problem
that prior work has not explored. In this paper, we propose the novel task of
MultiModal Event Event Relations to recognize such cross-modal event relations.
We contribute a large-scale dataset consisting of 100k video-news article
pairs, as well as a benchmark of densely annotated data. We also propose a
weakly supervised multimodal method which integrates commonsense knowledge from
an external knowledge base (KB) to predict rich multimodal event hierarchies.
Experiments show that our model outperforms a number of competitive baselines
on our proposed benchmark. We also perform a detailed analysis of our model's
performance and suggest directions for future research.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 23:24:15 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Ayyubi",
"Hammad A.",
""
],
[
"Thomas",
"Christopher",
""
],
[
"Chum",
"Lovish",
""
],
[
"Lokesh",
"Rahul",
""
],
[
"Niu",
"Yulei",
""
],
[
"Lin",
"Xudong",
""
],
[
"Chen",
"Long",
""
],
[
"Koo",
"Jaywon",
""
],
[
"Ray",
"Sounak",
""
],
[
"Chang",
"Shih-Fu",
""
]
] |
new_dataset
| 0.997532 |
2206.07238
|
Mukhlis Amien
|
Mukhlis Amien, Chong Feng, Heyan Huang
|
Location-based Twitter Filtering for the Creation of Low-Resource
Language Datasets in Indonesian Local Languages
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Twitter contains an abundance of linguistic data from the real world. We
examine Twitter for user-generated content in low-resource languages such as
local Indonesian. For NLP to work in Indonesian, it must consider local
dialects, geographic context, and regional culture influence Indonesian
languages. This paper identifies the problems we faced when constructing a
Local Indonesian NLP dataset. Furthermore, we are developing a framework for
creating, collecting, and classifying Local Indonesian datasets for NLP. Using
twitter's geolocation tool for automatic annotating.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 01:53:43 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Amien",
"Mukhlis",
""
],
[
"Feng",
"Chong",
""
],
[
"Huang",
"Heyan",
""
]
] |
new_dataset
| 0.998536 |
2206.07253
|
Zhizhi Yu
|
Zhizhi Yu, Di Jin, Jianguo Wei, Ziyang Liu, Yue Shang, Yun Xiao,
Jiawei Han, and Lingfei Wu
|
TeKo: Text-Rich Graph Neural Networks with External Knowledge
| null | null | null | null |
cs.SI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Neural Networks (GNNs) have gained great popularity in tackling various
analytical tasks on graph-structured data (i.e., networks). Typical GNNs and
their variants follow a message-passing manner that obtains network
representations by the feature propagation process along network topology,
which however ignore the rich textual semantics (e.g., local word-sequence)
that exist in many real-world networks. Existing methods for text-rich networks
integrate textual semantics by mainly utilizing internal information such as
topics or phrases/words, which often suffer from an inability to
comprehensively mine the text semantics, limiting the reciprocal guidance
between network structure and text semantics. To address these problems, we
propose a novel text-rich graph neural network with external knowledge (TeKo),
in order to take full advantage of both structural and textual information
within text-rich networks. Specifically, we first present a flexible
heterogeneous semantic network that incorporates high-quality entities and
interactions among documents and entities. We then introduce two types of
external knowledge, that is, structured triplets and unstructured entity
description, to gain a deeper insight into textual semantics. We further design
a reciprocal convolutional mechanism for the constructed heterogeneous semantic
network, enabling network structure and textual semantics to collaboratively
enhance each other and learn high-level network representations. Extensive
experimental results on four public text-rich networks as well as a large-scale
e-commerce searching dataset illustrate the superior performance of TeKo over
state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 02:33:10 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Yu",
"Zhizhi",
""
],
[
"Jin",
"Di",
""
],
[
"Wei",
"Jianguo",
""
],
[
"Liu",
"Ziyang",
""
],
[
"Shang",
"Yue",
""
],
[
"Xiao",
"Yun",
""
],
[
"Han",
"Jiawei",
""
],
[
"Wu",
"Lingfei",
""
]
] |
new_dataset
| 0.993738 |
2206.07266
|
Ruchita Bhadre
|
Ruchita Bhadre, Prathamesh Yeole
|
Deployment of AGRI-BOT in Greenhouse Administration
|
Presented at Eureka Hackathon, India
| null | null | null |
cs.RO cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Modern agriculture is constantly evolving to increase production despite
unfavorable environmental conditions. A promising approach is 'greenhouse
cultivation' providing a microclimate to the cultivated plants to overcome
unfavorable climate. However, massive greenhouses develop a non-uniform
micro-climate throughout the complex, requiring a high degree of human
supervision. We propose deploying an Agri-Bot to create and maintain positive
ecological conditions in the greenhouse, reducing labor costs and increasing
production. The prototype will contain two primary systems, the navigation
system and the data analytics system. The navigation system will be controlled
by an Arduino, and data analytics will be handled using an ESP8266 microchip.
Numerous sensors for measuring the greenhouse parameters will be mounted on the
robot. It will follow a predefined path, while taking readings at checkpoints.
The microchip will collect and process data from sensors, transmit to the
cloud, and give commands to the actuators. The soil and climate parameters like
temperature, humidity, light intensity, soil moisture, pH will be measured
periodically. When the parameters are not within a specified range, the
Agri-Bot will take corrective actions like switching on blowers/heaters,
starting irrigation, etc. If external intervention is required, e.g., fertilizer,
it will indicate accordingly. Deploying such an Agri-Bot for monitoring and
controlling the microclimate in large-scale greenhouses can mitigate labor costs
while increasing productivity. Despite the initial cost, it can provide a high
return on investment through flexibility, low power consumption, and easy
management, helping the greenhouse stay water-efficient and maintain evenly
dispersed, controlled sunlight intensity, temperature, and humidity.
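As an illustration of the sense-compare-actuate loop described above, the following Python sketch checks readings against target ranges at a checkpoint; the thresholds, sensor readers, and actuator commands are hypothetical placeholders rather than the paper's actual interfaces.

```python
# Illustrative sketch of the checkpoint control loop. Thresholds, sensor readers,
# and actuator commands are hypothetical placeholders.

RANGES = {"temperature": (18.0, 27.0), "humidity": (60.0, 80.0), "soil_moisture": (30.0, 60.0)}

def read_sensors():
    # In the real robot these values come from sensors via the ESP8266/Arduino.
    return {"temperature": 29.5, "humidity": 72.0, "soil_moisture": 25.0}

def corrective_action(parameter, value, low, high):
    if parameter == "temperature":
        return "switch on blowers" if value > high else "switch on heaters"
    if parameter == "soil_moisture" and value < low:
        return "start irrigation"
    return "flag for manual intervention"

def checkpoint_step():
    readings = read_sensors()
    actions = []
    for parameter, value in readings.items():
        low, high = RANGES[parameter]
        if not (low <= value <= high):
            actions.append((parameter, corrective_action(parameter, value, low, high)))
    return actions

print(checkpoint_step())  # e.g. [('temperature', 'switch on blowers'), ...]
```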
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 03:01:36 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Bhadre",
"Ruchita",
""
],
[
"Yeole",
"Prathamesh",
""
]
] |
new_dataset
| 0.983362 |
2206.07278
|
Min Wu
|
Min Wu, Xinglu Yi, Hui Yu, Yu Liu and Yujue Wang
|
Nebula Graph: An open source distributed graph database
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper introduces the recent work of Nebula Graph, an open-source,
distributed, scalable, and native graph database. We present a system design
trade-off and a comprehensive overview of Nebula Graph internals, including
graph data models, partitioning strategies, secondary indexes, optimizer rules,
storage-side transactions, graph query languages, observability, graph
processing frameworks, and visualization tool-kits. In addition, three sets of
large-scale graph b
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 03:38:01 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Wu",
"Min",
""
],
[
"Yi",
"Xinglu",
""
],
[
"Yu",
"Hui",
""
],
[
"Liu",
"Yu",
""
],
[
"Wang",
"Yujue",
""
]
] |
new_dataset
| 0.993723 |
2206.07318
|
Suman Dowlagar
|
Suman Dowlagar, Radhika Mamidi
|
CMNEROne at SemEval-2022 Task 11: Code-Mixed Named Entity Recognition by
leveraging multilingual data
|
SemEval 2022 Task 11: MultiCoNER Multilingual Complex Named Entity
Recognition, NAACL, 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Identifying named entities is, in general, a practical and challenging task
in the field of Natural Language Processing. Named Entity Recognition on the
code-mixed text is further challenging due to the linguistic complexity
resulting from the nature of the mixing. This paper describes the submission of
team CMNEROne to the SemEval 2022 shared task 11, MultiCoNER. The code-mixed NER
task aimed to identify named entities in the code-mixed dataset. Our work
performs Named Entity Recognition (NER) on the code-mixed dataset by
leveraging multilingual data. We achieved a weighted average F1 score of
0.7044, i.e., 6% greater than the baseline.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 06:33:13 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Dowlagar",
"Suman",
""
],
[
"Mamidi",
"Radhika",
""
]
] |
new_dataset
| 0.999496 |
2206.07350
|
Florian Seiffarth
|
Florian Seiffarth, Tam\'as Horv\'ath, Stefan Wrobel
|
A Fast Heuristic for Computing Geodesic Cores in Large Networks
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by the increasing interest in applications of graph geodesic
convexity in machine learning and data mining, we present a heuristic for
computing the geodesic convex hull of node sets in networks. It generates a set
of almost maximal outerplanar spanning subgraphs for the input graph, computes
the geodesic closure in each of these graphs, and regards a node as an element
of the convex hull if it belongs to the closed sets for at least a user
specified number of outerplanar graphs. Our heuristic algorithm runs in time
linear in the number of edges of the input graph, i.e., it is one order of
magnitude faster than the standard algorithm computing the closure exactly.
Its performance is evaluated empirically by approximating convexity based
core-periphery decomposition of networks. Our experimental results with large
real-world networks show that for most networks, the proposed heuristic was
able to produce close approximations significantly faster than the standard
algorithm computing the exact convex hulls. For example, while our algorithm
calculated an approximate core-periphery decomposition in 5 hours or less for
networks with more than 20 million edges, the standard algorithm did not
terminate within 50 days.
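A simplified sketch of the voting idea (not the authors' linear-time implementation, and using random edge subgraphs as a stand-in for their almost maximal outerplanar subgraphs) could look as follows in Python with networkx.

```python
# Simplified sketch: compute the geodesic closure in several sampled subgraphs
# and keep nodes that appear in at least `votes` of the closures. The random
# edge sampler is only a stand-in for the paper's outerplanar subgraph generator.

import random
import networkx as nx

def interval_nodes(G, u, v):
    """All nodes lying on some shortest u-v path."""
    nodes = set()
    try:
        for path in nx.all_shortest_paths(G, u, v):
            nodes.update(path)
    except nx.NetworkXNoPath:
        pass
    return nodes

def geodesic_closure(G, seed_nodes):
    closed = set(seed_nodes)
    changed = True
    while changed:
        changed = False
        for u in list(closed):
            for v in list(closed):
                new = interval_nodes(G, u, v) - closed
                if new:
                    closed |= new
                    changed = True
    return closed

def approximate_hull(G, seed_nodes, n_subgraphs=5, votes=3, keep_prob=0.7):
    counts = {n: 0 for n in G.nodes}
    for _ in range(n_subgraphs):
        H = G.edge_subgraph([e for e in G.edges if random.random() < keep_prob]).copy()
        for n in geodesic_closure(H, set(seed_nodes) & set(H.nodes)):
            counts[n] += 1
    return {n for n, c in counts.items() if c >= votes}
```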
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 08:01:34 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Seiffarth",
"Florian",
""
],
[
"Horváth",
"Tamás",
""
],
[
"Wrobel",
"Stefan",
""
]
] |
new_dataset
| 0.998959 |
2206.07368
|
Valerio Schiavoni Dr
|
Rasha Faqeh, Andr\`e Martin, Valerio Schiavoni, Pramod Bhatotia,
Pascal Felber, Christof Fetzer
|
PCRAFT: Capacity Planning for Dependable Stateless Services
|
11 pages
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Fault-tolerance techniques depend on replication to enhance availability,
albeit at the cost of increased infrastructure costs. This results in a
fundamental trade-off: Fault-tolerant services must satisfy given availability
and performance constraints while minimising the number of replicated
resources. These constraints pose capacity planning challenges for the service
operators to minimise replication costs without negatively impacting
availability.
To this end, we present PCRAFT, a system to enable capacity planning of
dependable services. PCRAFT's capacity planning is based on a hybrid approach
that combines empirical performance measurements with probabilistic modelling
of availability based on fault injection. In particular, we integrate
traditional service-level availability mechanisms (active route anywhere and
passive failover) and deployment schemes (cloud and on-premises) to quantify
the number of nodes needed to satisfy the given availability and performance
constraints. Our evaluation based on real-world applications shows that cloud
deployment requires fewer nodes than on-premises deployments. Additionally,
when considering on-premises deployments, we show how passive failover requires
fewer nodes than active route anywhere. Furthermore, our evaluation quantifies
the quality enhancement given by additional integrity mechanisms and how this
affects the number of nodes needed.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 08:21:44 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Faqeh",
"Rasha",
""
],
[
"Martin",
"Andrè",
""
],
[
"Schiavoni",
"Valerio",
""
],
[
"Bhatotia",
"Pramod",
""
],
[
"Felber",
"Pascal",
""
],
[
"Fetzer",
"Christof",
""
]
] |
new_dataset
| 0.999307 |
2206.07372
|
Xi Li
|
Zequn Qin, Xi Li
|
MonoGround: Detecting Monocular 3D Objects from the Ground
|
CVPR22
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular 3D object detection has attracted great attention for its
advantages in simplicity and cost. Due to the ill-posed 2D to 3D mapping
essence from the monocular imaging process, monocular 3D object detection
suffers from inaccurate depth estimation and thus has poor 3D detection
results. To alleviate this problem, we propose to introduce the ground plane as
a prior in monocular 3D object detection. The ground plane prior serves as
an additional geometric condition to the ill-posed mapping and an extra source
in depth estimation. In this way, we can get a more accurate depth estimation
from the ground. Meanwhile, to take full advantage of the ground plane prior,
we propose a depth-align training strategy and a precise two-stage depth
inference method tailored for the ground plane prior. It is worth noting that
the introduced ground plane prior requires no extra data sources like LiDAR,
stereo images, and depth information. Extensive experiments on the KITTI
benchmark show that our method could achieve state-of-the-art results compared
with other methods while maintaining a very fast speed. Our code and models are
available at https://github.com/cfzd/MonoGround.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 08:27:46 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Qin",
"Zequn",
""
],
[
"Li",
"Xi",
""
]
] |
new_dataset
| 0.975415 |
2206.07538
|
Javier Laplaza
|
Javier Laplaza, Joan Jaume Oliver, Ram\'on Romero, Alberto Sanfeliu
and Ana\'is Garrell
|
Body Gesture Recognition to Control a Social Robot
| null | null | null | null |
cs.RO cs.CV cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we propose a gesture based language to allow humans to interact
with robots using their body in a natural way. We have created a new gesture
detection model using neural networks and a custom dataset of humans performing
a set of body gestures to train our network. Furthermore, we compare body
gesture communication with other communication channels to acknowledge the
importance of adding this knowledge to robots. The presented approach is
extensively validated in diverse simulations and real-life experiments with
non-trained volunteers. The approach attains remarkable results and shows that
it is a valuable framework for social robotics applications, such as
human-robot collaboration or human-robot interaction.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 13:49:22 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Laplaza",
"Javier",
""
],
[
"Oliver",
"Joan Jaume",
""
],
[
"Romero",
"Ramón",
""
],
[
"Sanfeliu",
"Alberto",
""
],
[
"Garrell",
"Anaís",
""
]
] |
new_dataset
| 0.999427 |
2206.07593
|
Benjamin Wortman
|
Benjamin Wortman and James Z. Wang
|
HICEM: A High-Coverage Emotion Model for Artificial Emotional
Intelligence
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
As social robots and other intelligent machines enter the home, artificial
emotional intelligence (AEI) is taking center stage to address users' desire
for deeper, more meaningful human-machine interaction. To accomplish such
efficacious interaction, next-generation AEI needs comprehensive human
emotion models for training. Unlike theories of emotion, which have been the
historical focus in psychology, emotion models are descriptive tools. In
practice, the strongest models need robust coverage, which means defining the
smallest core set of emotions from which all others can be derived. To achieve
the desired coverage, we turn to word embeddings from natural language
processing. Using unsupervised clustering techniques, our experiments show that
with as few as 15 discrete emotion categories, we can provide maximum coverage
across six major languages--Arabic, Chinese, English, French, Spanish, and
Russian. In support of our findings, we also examine annotations from two
large-scale emotion recognition datasets to assess the validity of existing
emotion models compared to human perception at scale. Because robust,
comprehensive emotion models are foundational for developing real-world
affective computing applications, this work has broad implications in social
robotics, human-machine interaction, mental healthcare, and computational
psychology.
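A minimal sketch of the clustering step follows, assuming generic pretrained word embeddings and scikit-learn's KMeans; the toy vectors below merely stand in for real embeddings and the paper's exact pipeline.

```python
# Minimal sketch: cluster embeddings of emotion words and pick the cluster count
# by silhouette score. The random vectors are placeholders for real embeddings.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

emotion_words = ["joy", "anger", "fear", "sadness", "surprise", "disgust",
                 "love", "shame", "pride", "guilt", "hope", "envy"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(emotion_words), 300))   # placeholder for fastText/word2vec vectors

best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    score = silhouette_score(embeddings, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"best number of emotion clusters (by silhouette): {best_k}")
```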
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 15:21:30 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Wortman",
"Benjamin",
""
],
[
"Wang",
"James Z.",
""
]
] |
new_dataset
| 0.996961 |
2206.07615
|
Khuyagbaatar Batsuren
|
Khuyagbaatar Batsuren, G\'abor Bella, Aryaman Arora, Viktor
Martinovi\'c, Kyle Gorman, Zden\v{e}k \v{Z}abokrtsk\'y, Amarsanaa Ganbold,
\v{S}\'arka Dohnalov\'a, Magda \v{S}ev\v{c}\'ikov\'a, Kate\v{r}ina
Pelegrinov\'a, Fausto Giunchiglia, Ryan Cotterell, Ekaterina Vylomova
|
The SIGMORPHON 2022 Shared Task on Morpheme Segmentation
|
The 19th SIGMORPHON Workshop on Computational Research in Phonetics,
Phonology, and Morphology
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The SIGMORPHON 2022 shared task on morpheme segmentation challenged systems
to decompose a word into a sequence of morphemes and covered most types of
morphology: compounds, derivations, and inflections. Subtask 1, word-level
morpheme segmentation, covered 5 million words in 9 languages (Czech, English,
Spanish, Hungarian, French, Italian, Russian, Latin, Mongolian) and received 13
system submissions from 7 teams; the best system averaged a 97.29% F1 score
across all languages, ranging from English (93.84%) to Latin (99.38%). Subtask 2,
sentence-level morpheme segmentation, covered 18,735 sentences in 3 languages
(Czech, English, Mongolian), received 10 system submissions from 3 teams, and
the best systems outperformed all three state-of-the-art subword tokenization
methods (BPE, ULM, Morfessor2) by 30.71% absolute. To facilitate error analysis
and support any type of future studies, we released all system predictions, the
evaluation script, and all gold standard datasets.
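For intuition, a hedged sketch of a morpheme-level F1 metric follows; the shared task's official evaluation script may differ in details.

```python
# Hedged sketch of a morpheme-level F1 score (one common way to evaluate
# segmentations; not necessarily identical to the official scorer).

from collections import Counter

def morpheme_f1(predicted, gold):
    """predicted/gold: lists of morpheme lists, e.g. [["un", "lock", "ed"], ...]"""
    tp = fp = fn = 0
    for pred, ref in zip(predicted, gold):
        pred_counts, ref_counts = Counter(pred), Counter(ref)
        overlap = sum((pred_counts & ref_counts).values())
        tp += overlap
        fp += sum(pred_counts.values()) - overlap
        fn += sum(ref_counts.values()) - overlap
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(morpheme_f1([["un", "locked"]], [["un", "lock", "ed"]]))  # 0.4
```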
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 15:57:22 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Batsuren",
"Khuyagbaatar",
""
],
[
"Bella",
"Gábor",
""
],
[
"Arora",
"Aryaman",
""
],
[
"Martinović",
"Viktor",
""
],
[
"Gorman",
"Kyle",
""
],
[
"Žabokrtský",
"Zdeněk",
""
],
[
"Ganbold",
"Amarsanaa",
""
],
[
"Dohnalová",
"Šárka",
""
],
[
"Ševčíková",
"Magda",
""
],
[
"Pelegrinová",
"Kateřina",
""
],
[
"Giunchiglia",
"Fausto",
""
],
[
"Cotterell",
"Ryan",
""
],
[
"Vylomova",
"Ekaterina",
""
]
] |
new_dataset
| 0.998726 |
2206.07662
|
Yuxuan Zhou
|
Yuxuan Zhou, Wangmeng Xiang, Chao Li, Biao Wang, Xihan Wei, Lei Zhang,
Margret Keuper, Xiansheng Hua
|
SP-ViT: Learning 2D Spatial Priors for Vision Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, transformers have shown great potential in image classification and
established state-of-the-art results on the ImageNet benchmark. However,
compared to CNNs, transformers converge slowly and are prone to overfitting in
low-data regimes due to the lack of spatial inductive biases. Such spatial
inductive biases can be especially beneficial since the 2D structure of an
input image is not well preserved in transformers. In this work, we present
Spatial Prior-enhanced Self-Attention (SP-SA), a novel variant of vanilla
Self-Attention (SA) tailored for vision transformers. Spatial Priors (SPs) are
our proposed family of inductive biases that highlight certain groups of
spatial relations. Unlike convolutional inductive biases, which are forced to
focus exclusively on hard-coded local regions, our proposed SPs are learned by
the model itself and take a variety of spatial relations into account.
Specifically, the attention score is calculated with emphasis on certain kinds
of spatial relations at each head, and such learned spatial foci can be
complementary to each other. Based on SP-SA we propose the SP-ViT family, which
consistently outperforms other ViT models with similar GFlops or parameters.
Our largest model, SP-ViT-L, achieves a record-breaking 86.3% Top-1 accuracy with
almost 50% fewer parameters than the previous state-of-the-art model (150M for
SP-ViT-L vs. 271M for CaiT-M-36) among all ImageNet-1K models trained on 224x224
and fine-tuned on 384x384 resolution without extra data.
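The following PyTorch sketch illustrates the general idea of adding a learned 2D spatial bias to self-attention; it is a generic relative-position-bias formulation and not necessarily the exact SP-SA used in the paper.

```python
# Generic illustration: each head owns a learnable bias indexed by the relative
# 2D position between patches, added to the attention logits.

import torch
import torch.nn as nn

class SpatialBiasAttention(nn.Module):
    def __init__(self, dim, num_heads, grid_size):
        super().__init__()
        self.num_heads, self.grid = num_heads, grid_size
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # one learnable bias per head and per relative offset, (2g-1)^2 offsets
        self.rel_bias = nn.Parameter(torch.zeros(num_heads, (2 * grid_size - 1) ** 2))
        coords = torch.stack(torch.meshgrid(torch.arange(grid_size), torch.arange(grid_size), indexing="ij"), -1).reshape(-1, 2)
        rel = coords[:, None, :] - coords[None, :, :] + grid_size - 1
        self.register_buffer("rel_index", rel[..., 0] * (2 * grid_size - 1) + rel[..., 1])

    def forward(self, x):                       # x: (B, N, C) with N == grid_size**2
        B, N, C = x.shape
        q, k, v = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn + self.rel_bias[:, self.rel_index].unsqueeze(0)  # add learned spatial prior
        return self.proj((attn.softmax(-1) @ v).transpose(1, 2).reshape(B, N, C))

# x = torch.randn(2, 49, 96); SpatialBiasAttention(96, 4, 7)(x).shape -> (2, 49, 96)
```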
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 16:54:02 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Zhou",
"Yuxuan",
""
],
[
"Xiang",
"Wangmeng",
""
],
[
"Li",
"Chao",
""
],
[
"Wang",
"Biao",
""
],
[
"Wei",
"Xihan",
""
],
[
"Zhang",
"Lei",
""
],
[
"Keuper",
"Margret",
""
],
[
"Hua",
"Xiansheng",
""
]
] |
new_dataset
| 0.99673 |
2206.07684
|
Paul Hongsuck Seo
|
Valentin Gabeur, Paul Hongsuck Seo, Arsha Nagrani, Chen Sun, Karteek
Alahari, Cordelia Schmid
|
AVATAR: Unconstrained Audiovisual Speech Recognition
| null | null | null | null |
cs.CV cs.MM cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio-visual automatic speech recognition (AV-ASR) is an extension of ASR
that incorporates visual cues, often from the movements of a speaker's mouth.
Unlike works that simply focus on the lip motion, we investigate the
contribution of entire visual frames (visual actions, objects, background
etc.). This is particularly useful for unconstrained videos, where the speaker
is not necessarily visible. To solve this task, we propose a new
sequence-to-sequence AudioVisual ASR TrAnsformeR (AVATAR) which is trained
end-to-end from spectrograms and full-frame RGB. To prevent the audio stream
from dominating training, we propose different word-masking strategies, thereby
encouraging our model to pay attention to the visual stream. We demonstrate the
contribution of the visual modality on the How2 AV-ASR benchmark, especially in
the presence of simulated noise, and show that our model outperforms all other
prior work by a large margin. Finally, we also create a new, real-world test
bed for AV-ASR called VisSpeech, which demonstrates the contribution of the
visual modality under challenging audio conditions.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 17:33:19 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Gabeur",
"Valentin",
""
],
[
"Seo",
"Paul Hongsuck",
""
],
[
"Nagrani",
"Arsha",
""
],
[
"Sun",
"Chen",
""
],
[
"Alahari",
"Karteek",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.999794 |
2206.07685
|
Ryle Zhou
|
Ryle Zhou
|
Decentralized WebRTC P2P network using Kademlia
| null | null | null | null |
cs.NI cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Web Real-Time Communication (WebRTC) is a new standard and industry effort
that extends the web browsing model. For the first time, browsers are able to
directly exchange real-time media with other browsers in a peer-to-peer
fashion. Before WebRTC was introduced, it was cumbersome to build smooth chat
and video applications; users often experienced unstable connections, blurry
video, and unclear sound. WebRTC's peer-to-peer communication paradigm
establishes the real-time connection between browsers using the SIP (Session
Initiation Protocol) Trapezoid. A wide set of protocols is bundled in the WebRTC
API, covering connection management, encoding/decoding negotiation, media
selection and control, firewall and NAT traversal, etc. However, almost all
current WebRTC applications use a centralized signaling infrastructure, which
raises problems of scalability, stability, and fault tolerance. In this paper, I
present a decentralized architecture that introduces the Kademlia network into
WebRTC to reduce the need for a centralized signaling service.
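A small Python sketch of the Kademlia primitives such an architecture builds on (node IDs, XOR distance, k-closest lookup) is given below; this is an illustration rather than the paper's implementation.

```python
# Sketch of the Kademlia ideas: 160-bit node IDs, XOR distance, and finding the
# k closest known peers to a key. Signaling metadata (e.g. WebRTC session
# descriptions) would be stored at those peers.

import hashlib

def node_id(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")  # 160-bit ID

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def k_closest(peers, key: int, k: int = 3):
    """Return the k known peers whose IDs are XOR-closest to `key`."""
    return sorted(peers, key=lambda p: xor_distance(node_id(p), key))[:k]

peers = [f"peer-{i}" for i in range(20)]
target = node_id("room:my-webrtc-session")
print(k_closest(peers, target))   # peers responsible for storing this session's offer/answer
```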
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 17:33:59 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Zhou",
"Ryle",
""
]
] |
new_dataset
| 0.989565 |
2206.07704
|
Alex Zihao Zhu
|
Jieru Mei, Alex Zihao Zhu, Xinchen Yan, Hang Yan, Siyuan Qiao, Yukun
Zhu, Liang-Chieh Chen, Henrik Kretzschmar, Dragomir Anguelov
|
Waymo Open Dataset: Panoramic Video Panoptic Segmentation
|
Our dataset can be found at https://waymo.com/open
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Panoptic image segmentation is the computer vision task of finding groups of
pixels in an image and assigning semantic classes and object instance
identifiers to them. Research in image segmentation has become increasingly
popular due to its critical applications in robotics and autonomous driving.
The research community thereby relies on publicly available benchmark dataset
to advance the state-of-the-art in computer vision. Due to the high costs of
densely labeling the images, however, there is a shortage of publicly available
ground truth labels that are suitable for panoptic segmentation. The high
labeling costs also make it challenging to extend existing datasets to the
video domain and to multi-camera setups. We therefore present the Waymo Open
Dataset: Panoramic Video Panoptic Segmentation Dataset, a large-scale dataset
that offers high-quality panoptic segmentation labels for autonomous driving.
We generate our dataset using the publicly available Waymo Open Dataset,
leveraging the diverse set of camera images. Our labels are consistent over
time for video processing and consistent across multiple cameras mounted on the
vehicles for full panoramic scene understanding. Specifically, we offer labels
for 28 semantic categories and 2,860 temporal sequences that were captured by
five cameras mounted on autonomous vehicles driving in three different
geographical locations, leading to a total of 100k labeled camera images. To
the best of our knowledge, this makes our dataset an order of magnitude larger
than existing datasets that offer video panoptic segmentation labels. We
further propose a new benchmark for Panoramic Video Panoptic Segmentation and
establish a number of strong baselines based on the DeepLab family of models.
We will make the benchmark and the code publicly available. Find the dataset at
https://waymo.com/open.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 17:57:28 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Mei",
"Jieru",
""
],
[
"Zhu",
"Alex Zihao",
""
],
[
"Yan",
"Xinchen",
""
],
[
"Yan",
"Hang",
""
],
[
"Qiao",
"Siyuan",
""
],
[
"Zhu",
"Yukun",
""
],
[
"Chen",
"Liang-Chieh",
""
],
[
"Kretzschmar",
"Henrik",
""
],
[
"Anguelov",
"Dragomir",
""
]
] |
new_dataset
| 0.999823 |
2206.07710
|
Yiming Xie
|
Yiming Xie, Matheus Gadelha, Fengting Yang, Xiaowei Zhou, Huaizu Jiang
|
PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed
Monocular Videos
|
CVPR 2022. Project page: https://neu-vi.github.io/planarrecon/
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present PlanarRecon -- a novel framework for globally coherent detection
and reconstruction of 3D planes from a posed monocular video. Unlike previous
works that detect planes in 2D from a single image, PlanarRecon incrementally
detects planes in 3D for each video fragment, which consists of a set of key
frames, from a volumetric representation of the scene using neural networks. A
learning-based tracking and fusion module is designed to merge planes from
previous fragments to form a coherent global plane reconstruction. Such design
allows PlanarRecon to integrate observations from multiple views within each
fragment and temporal information across different ones, resulting in an
accurate and coherent reconstruction of the scene abstraction with
low-polygonal geometry. Experiments show that the proposed approach achieves
state-of-the-art performances on the ScanNet dataset while being real-time.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 17:59:16 GMT"
}
] | 2022-06-16T00:00:00 |
[
[
"Xie",
"Yiming",
""
],
[
"Gadelha",
"Matheus",
""
],
[
"Yang",
"Fengting",
""
],
[
"Zhou",
"Xiaowei",
""
],
[
"Jiang",
"Huaizu",
""
]
] |
new_dataset
| 0.999815 |
2102.11035
|
Michael Welzl
|
Michael Welzl, Safiqul Islam, Michael Gundersen, Andreas Fischer
|
Transport Services: A Modern API for an Adaptive Internet Transport
Layer
|
Accepted for publication in the April 2021 issue of IEEE
Communications Magazine
| null |
10.1109/MCOM.001.2000870
| null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Transport services (TAPS) is a working group of the Internet's
standardization body, the Internet Engineering Task Force (IETF). TAPS defines
a new recommended API for the Internet's transport layer. This API gives access
to a wide variety of services from various protocols, and it is
protocol-independent: the transport layer becomes adaptive, and applications
are no longer statically bound to a particular protocol and/or network
interface. We give an overview of the TAPS API, and we demonstrate its
flexibility and ease of use with an example using a Python-based open-source
implementation.
|
[
{
"version": "v1",
"created": "Mon, 22 Feb 2021 14:13:30 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Welzl",
"Michael",
""
],
[
"Islam",
"Safiqul",
""
],
[
"Gundersen",
"Michael",
""
],
[
"Fischer",
"Andreas",
""
]
] |
new_dataset
| 0.997749 |
2107.06263
|
Jianyuan Guo
|
Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang
and Chang Xu
|
CMT: Convolutional Neural Networks Meet Vision Transformers
|
Accepted in CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformers have been successfully applied to image recognition tasks
due to their ability to capture long-range dependencies within an image.
However, there are still gaps in both performance and computational cost
between transformers and existing convolutional neural networks (CNNs). In this
paper, we aim to address this issue and develop a network that can outperform
not only the canonical transformers, but also the high-performance
convolutional models. We propose a new transformer based hybrid network by
taking advantage of transformers to capture long-range dependencies, and of
CNNs to model local features. Furthermore, we scale it to obtain a family of
models, called CMTs, obtaining much better accuracy and efficiency than
previous convolution and transformer based models. In particular, our CMT-S
achieves 83.5% top-1 accuracy on ImageNet, while being 14x and 2x smaller on
FLOPs than the existing DeiT and EfficientNet, respectively. The proposed CMT-S
also generalizes well on CIFAR10 (99.2%), CIFAR100 (91.7%), Flowers (98.7%),
and other challenging vision datasets such as COCO (44.3% mAP), with
considerably less computational cost.
|
[
{
"version": "v1",
"created": "Tue, 13 Jul 2021 17:47:19 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jul 2021 06:22:16 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Jun 2022 14:05:23 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Guo",
"Jianyuan",
""
],
[
"Han",
"Kai",
""
],
[
"Wu",
"Han",
""
],
[
"Tang",
"Yehui",
""
],
[
"Chen",
"Xinghao",
""
],
[
"Wang",
"Yunhe",
""
],
[
"Xu",
"Chang",
""
]
] |
new_dataset
| 0.99798 |
2112.13610
|
Yuan Yao
|
Yuan Yao, Qingxiu Dong, Jian Guan, Boxi Cao, Zhengyan Zhang, Chaojun
Xiao, Xiaozhi Wang, Fanchao Qi, Junwei Bao, Jinran Nie, Zheni Zeng, Yuxian
Gu, Kun Zhou, Xuancheng Huang, Wenhao Li, Shuhuai Ren, Jinliang Lu,
Chengqiang Xu, Huadong Wang, Guoyang Zeng, Zile Zhou, Jiajun Zhang, Juanzi
Li, Minlie Huang, Rui Yan, Xiaodong He, Xiaojun Wan, Xin Zhao, Xu Sun, Yang
Liu, Zhiyuan Liu, Xianpei Han, Erhong Yang, Zhifang Sui, Maosong Sun
|
CUGE: A Chinese Language Understanding and Generation Evaluation
Benchmark
|
We add two new datasets, including grammatical error correction
dataset YACLC from Beijing Language and Culture University, and reading
comprehension dataset GCRC from Shanxi University, and also improve the
description consistency of all datasets
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Realizing general-purpose language intelligence has been a longstanding goal
for natural language processing, where standard evaluation benchmarks play a
fundamental and guiding role. We argue that for general-purpose language
intelligence evaluation, the benchmark itself needs to be comprehensive and
systematic. To this end, we propose CUGE, a Chinese Language Understanding and
Generation Evaluation benchmark with the following features: (1) Hierarchical
benchmark framework, where datasets are principally selected and organized with
a language capability-task-dataset hierarchy. (2) Multi-level scoring strategy,
where different levels of model performance are provided based on the
hierarchical framework. To facilitate CUGE, we provide a public leaderboard
that can be customized to support flexible model judging criteria. Evaluation
results on representative pre-trained language models indicate ample room for
improvement towards general-purpose language intelligence. CUGE is publicly
available at cuge.baai.ac.cn.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 11:08:58 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 07:19:35 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Yao",
"Yuan",
""
],
[
"Dong",
"Qingxiu",
""
],
[
"Guan",
"Jian",
""
],
[
"Cao",
"Boxi",
""
],
[
"Zhang",
"Zhengyan",
""
],
[
"Xiao",
"Chaojun",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Qi",
"Fanchao",
""
],
[
"Bao",
"Junwei",
""
],
[
"Nie",
"Jinran",
""
],
[
"Zeng",
"Zheni",
""
],
[
"Gu",
"Yuxian",
""
],
[
"Zhou",
"Kun",
""
],
[
"Huang",
"Xuancheng",
""
],
[
"Li",
"Wenhao",
""
],
[
"Ren",
"Shuhuai",
""
],
[
"Lu",
"Jinliang",
""
],
[
"Xu",
"Chengqiang",
""
],
[
"Wang",
"Huadong",
""
],
[
"Zeng",
"Guoyang",
""
],
[
"Zhou",
"Zile",
""
],
[
"Zhang",
"Jiajun",
""
],
[
"Li",
"Juanzi",
""
],
[
"Huang",
"Minlie",
""
],
[
"Yan",
"Rui",
""
],
[
"He",
"Xiaodong",
""
],
[
"Wan",
"Xiaojun",
""
],
[
"Zhao",
"Xin",
""
],
[
"Sun",
"Xu",
""
],
[
"Liu",
"Yang",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Han",
"Xianpei",
""
],
[
"Yang",
"Erhong",
""
],
[
"Sui",
"Zhifang",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.999836 |
2112.15099
|
Alessio Palmero Aprosio
|
Teresa Paccosi, Alessio Palmero Aprosio
|
KIND: an Italian Multi-Domain Dataset for Named Entity Recognition
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present KIND, an Italian dataset for Named-entity
recognition. It contains more than one million tokens with annotation covering
three classes: person, location, and organization. The dataset (around 600K
tokens) mostly contains manual gold annotations in three different domains
(news, literature, and political discourses) and a semi-automatically annotated
part. The multi-domain feature is the main strength of the present work,
offering a resource which covers different styles and language uses, as well as
the largest Italian NER dataset with manual gold annotations. It represents an
important resource for the training of NER systems in Italian. Texts and
annotations are freely downloadable from the Github repository.
|
[
{
"version": "v1",
"created": "Thu, 30 Dec 2021 15:41:52 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 08:03:48 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Paccosi",
"Teresa",
""
],
[
"Aprosio",
"Alessio Palmero",
""
]
] |
new_dataset
| 0.999723 |
2202.04165
|
Liang Hong
|
Xiufeng Xu and Liang Hong
|
Instantaneous and limiting behavior of an n-node blockchain under cyber
attacks from a single hacker
| null | null | null | null |
cs.CR math.OC stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate the instantaneous and limiting behavior of an n-node
blockchain which is under continuous monitoring of the IT department of a
company but faces non-stop cyber attacks from a single hacker. The blockchain
is functional as far as no data stored on it has been changed, deleted, or
locked. Once the IT department detects the attack from the hacker, it will
immediately re-set the blockchain, rendering all previous efforts of the hacker
in vain. The hacker will not stop until the blockchain is dysfunctional. For
arbitrary distributions of the hacking times and detecting times, we derive the
limiting functional probability, instantaneous functional probability, and mean
functional time of the blockchain. We also show that all these quantities are
increasing functions of the number of nodes, substantiating the intuition that
the more nodes a blockchain has, the harder it is for a hacker to succeed in a
cyber attack.
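A Monte Carlo sketch of the attack/detect dynamic is given below; the exponential time distributions and the majority-compromise failure condition are illustrative assumptions only, not taken from the paper (which handles arbitrary distributions).

```python
# Monte Carlo sketch. Assumptions for illustration only: per-node hacking times
# and detection times are i.i.d. exponential, and the blockchain becomes
# dysfunctional once a majority of the n nodes is compromised before a
# detection/reset occurs.

import random

def mean_functional_time(n_nodes, hack_rate=1.0, detect_rate=0.8, trials=5000):
    need = n_nodes // 2 + 1                      # majority of nodes (assumption)
    total = 0.0
    for _ in range(trials):
        t = 0.0
        while True:
            attack = sum(random.expovariate(hack_rate) for _ in range(need))
            detect = random.expovariate(detect_rate)
            if attack < detect:                  # hacker wins this round
                t += attack
                break
            t += detect                          # IT resets the chain; hacker restarts
        total += t
    return total / trials

for n in (3, 7, 15):
    print(n, round(mean_functional_time(n), 2))  # grows with the number of nodes
```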
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 22:01:27 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2022 20:05:21 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Xu",
"Xiufeng",
""
],
[
"Hong",
"Liang",
""
]
] |
new_dataset
| 0.988426 |
2204.03113
|
Karuna Grewal
|
Karuna Grewal, Loris D'Antoni, Justin Hsu
|
P4BID: Information Flow Control in P4
| null | null |
10.1145/3519939.3523717
| null |
cs.PL cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern programmable network switches can implement custom applications using
efficient packet processing hardware, and the programming language P4 provides
high-level constructs to program such switches. The increase in speed and
programmability has inspired research in dataplane programming, where many
complex functionalities, e.g., key-value stores and load balancers, can be
implemented entirely in network switches. However, dataplane programs may
suffer from novel security errors that are not traditionally found in network
switches.
To address this issue, we present a new information-flow control type system
for P4. We formalize our type system in a recently-proposed core version of P4,
and we prove a soundness theorem: well-typed programs satisfy non-interference.
We also implement our type system in a tool, P4bid, which extends the type
checker in the p4c compiler, the reference compiler for the latest version of
P4. We present several case studies showing that natural security, integrity,
and isolation properties in networks can be captured by non-interference, and
our type system can detect violations of these properties while certifying
correct programs.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 22:03:01 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 04:52:50 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Grewal",
"Karuna",
""
],
[
"D'Antoni",
"Loris",
""
],
[
"Hsu",
"Justin",
""
]
] |
new_dataset
| 0.997954 |
2204.05381
|
Yi Wang
|
Yi Wang, Conrad M Albrecht, Xiao Xiang Zhu
|
Self-supervised Vision Transformers for Joint SAR-optical Representation
Learning
|
4 pages, 1 figure; IGARSS 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning (SSL) has attracted much interest in remote sensing
and earth observation due to its ability to learn task-agnostic representations
without human annotation. While most of the existing SSL works in remote
sensing utilize ConvNet backbones and focus on a single modality, we explore
the potential of vision transformers (ViTs) for joint SAR-optical
representation learning. Based on DINO, a state-of-the-art SSL algorithm that
distills knowledge from two augmented views of an input image, we combine SAR
and optical imagery by concatenating all channels to a unified input.
Subsequently, we randomly mask out channels of one modality as a data
augmentation strategy. While training, the model gets fed optical-only,
SAR-only, and SAR-optical image pairs learning both inner- and intra-modality
representations. Experimental results employing the BigEarthNet-MM dataset
demonstrate the benefits of both, the ViT backbones and the proposed multimodal
SSL algorithm DINO-MM.
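A minimal sketch of the modality-masking augmentation described above follows; the channel counts and masking probability are illustrative choices.

```python
# Sketch: SAR and optical channels are concatenated, and with some probability
# all channels of one modality are zeroed out, yielding SAR-only or optical-only
# views during training.

import torch

def random_modality_mask(x, n_sar=2, n_opt=10, p_mask=0.5):
    """x: (B, n_sar + n_opt, H, W) tensor with SAR channels first."""
    x = x.clone()
    for i in range(x.shape[0]):
        if torch.rand(1).item() < p_mask:
            if torch.rand(1).item() < 0.5:
                x[i, :n_sar] = 0.0                   # SAR masked -> optical-only view
            else:
                x[i, n_sar:n_sar + n_opt] = 0.0      # optical masked -> SAR-only view
    return x

batch = torch.randn(4, 12, 120, 120)                 # e.g. 2 SAR + 10 optical bands
augmented = random_modality_mask(batch)
```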
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2022 19:42:53 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2022 18:31:03 GMT"
},
{
"version": "v3",
"created": "Tue, 31 May 2022 22:12:12 GMT"
},
{
"version": "v4",
"created": "Tue, 14 Jun 2022 17:19:42 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Wang",
"Yi",
""
],
[
"Albrecht",
"Conrad M",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.968494 |
2204.10511
|
Minji Kwak
|
Youngmin Kim, Minji Kwak, Dain Lee, Yeongeun Kim, Hyeongboo Baek
|
Keypoint based Sign Language Translation without Glosses
|
14 pages, 5 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Sign Language Translation (SLT) has been studied relatively little compared to
Sign Language Recognition (SLR). SLR recognizes the unique grammar of sign
language, which differs from spoken language and is difficult for non-signers to
interpret. We therefore address the problem of translating sign language video
directly into spoken language. To this end, we propose a new keypoint
normalization method that performs translation based on the signer's skeleton
points and robustly normalizes these points for sign language translation. A
normalization method customized to each body part contributes to the
performance improvement. In addition, we propose a stochastic frame selection
method that enables frame augmentation and sampling at the same time. Finally,
the video is translated into spoken language through an attention-based
translation model. Our method can be applied to various datasets, including
datasets without glosses. Quantitative experimental evaluation demonstrates the
effectiveness of our method.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 05:37:56 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 02:05:47 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Kim",
"Youngmin",
""
],
[
"Kwak",
"Minji",
""
],
[
"Lee",
"Dain",
""
],
[
"Kim",
"Yeongeun",
""
],
[
"Baek",
"Hyeongboo",
""
]
] |
new_dataset
| 0.996161 |
2204.12721
|
Yujia Jin
|
Arun Jambulapati, Yujia Jin, Aaron Sidford, Kevin Tian
|
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching
|
Accepted at ICALP'22
| null | null | null |
cs.DS math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Box-simplex games are a family of bilinear minimax objectives which
encapsulate graph-structured problems such as maximum flow [She17], optimal
transport [JST19], and bipartite matching [AJJ+22]. We develop efficient
near-linear time, high-accuracy solvers for regularized variants of these
games. Beyond the immediate applications of such solvers for computing Sinkhorn
distances, a prominent tool in machine learning, we show that these solvers can
be used to obtain improved running times for maintaining a (fractional)
$\epsilon$-approximate maximum matching in a dynamic decremental bipartite
graph against an adaptive adversary. We give a generic framework which reduces
this dynamic matching problem to solving regularized graph-structured
optimization problems to high accuracy. Through our reduction framework, our
regularized box-simplex game solver implies a new algorithm for dynamic
decremental bipartite matching in total time $\tilde{O}(m \cdot
\epsilon^{-3})$, from an initial graph with $m$ edges and $n$ nodes. We further
show how to use recent advances in flow optimization [CKL+22] to improve our
runtime to $m^{1 + o(1)} \cdot \epsilon^{-2}$, thereby demonstrating the
versatility of our reduction-based approach. These results improve upon the
previous best runtime of $\tilde{O}(m \cdot \epsilon^{-4})$ [BGS20] and
illustrate the utility of using regularized optimization problem solvers for
designing dynamic algorithms.
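For reference, one common way the (unregularized) box-simplex objective is written in this line of work is shown below; the paper's exact normalization of the box and the regularizers it adds may differ.

```latex
% One common form of a box-simplex game, up to normalization; the paper studies
% regularized variants of objectives of this type.
\min_{x \in [0,1]^n} \; \max_{y \in \Delta^m} \; y^\top A x + c^\top x - b^\top y
```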
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 06:22:03 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 06:28:44 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Jun 2022 00:07:19 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Jambulapati",
"Arun",
""
],
[
"Jin",
"Yujia",
""
],
[
"Sidford",
"Aaron",
""
],
[
"Tian",
"Kevin",
""
]
] |
new_dataset
| 0.987487 |
2205.15018
|
Patrick Ruch
|
Gianmarco Gabrieli, Michal Muszynski, Patrick W. Ruch
|
A reconfigurable integrated electronic tongue and its use in accelerated
analysis of juices and wines
| null | null |
10.1109/ISOEN54820.2022.9789630
| null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Potentiometric electronic tongues (ETs) leveraging trends in miniaturization
and internet of things (IoT) bear promise for facile mobile chemical analysis
of complex multicomponent liquids, such as beverages. In this work,
hand-crafted feature extraction from the transient potentiometric response of
an array of low-selective miniaturized polymeric sensors is combined with a
data pipeline for deployment of trained machine learning models on a cloud
back-end or edge device. The sensor array demonstrated sensitivity to different
organic acids and exhibited interesting performance for the fingerprinting of
fruit juices and wines, including differentiation of samples through supervised
learning based on sensory descriptors and prediction of consumer acceptability
of aged juice samples. Product authentication, quality control and support of
sensory evaluation are some of the applications that are expected to benefit
from integrated electronic tongues that facilitate the characterization of
complex properties of multi-component liquids.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 07:01:25 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Gabrieli",
"Gianmarco",
""
],
[
"Muszynski",
"Michal",
""
],
[
"Ruch",
"Patrick W.",
""
]
] |
new_dataset
| 0.999253 |
2206.05777
|
Ziqiang Zhang
|
Ziqiang Zhang, Junyi Ao, Long Zhou, Shujie Liu, Furu Wei, Jinyu Li
|
The YiTrans End-to-End Speech Translation System for IWSLT 2022 Offline
Shared Task
|
11 pages
| null | null | null |
cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the submission of our end-to-end YiTrans speech
translation system for the IWSLT 2022 offline task, which translates from
English audio to German, Chinese, and Japanese. The YiTrans system is built on
large-scale pre-trained encoder-decoder models. More specifically, we first
design a multi-stage pre-training strategy to build a multi-modality model with
a large amount of labeled and unlabeled data. We then fine-tune the
corresponding components of the model for the downstream speech translation
tasks. Moreover, we make various efforts to improve performance, such as data
filtering, data augmentation, speech segmentation, model ensemble, and so on.
Experimental results show that our YiTrans system obtains a significant
improvement over the strong baseline on three translation directions, and it
achieves +5.2 BLEU improvements over last year's optimal end-to-end system on
tst2021 English-German. Our final submissions rank first on English-German and
English-Chinese end-to-end systems in terms of the automatic evaluation metric.
We make our code and models publicly available.
|
[
{
"version": "v1",
"created": "Sun, 12 Jun 2022 16:13:01 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 02:25:56 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Zhang",
"Ziqiang",
""
],
[
"Ao",
"Junyi",
""
],
[
"Zhou",
"Long",
""
],
[
"Liu",
"Shujie",
""
],
[
"Wei",
"Furu",
""
],
[
"Li",
"Jinyu",
""
]
] |
new_dataset
| 0.980604 |
2206.06031
|
Jan Schuetzke
|
Jan Schuetzke, Nathan J. Szymanski, Markus Reischl
|
A universal synthetic dataset for machine learning on spectroscopic data
|
8 pages, 2 figures, 2 tables
| null | null | null |
cs.LG cond-mat.mtrl-sci
|
http://creativecommons.org/licenses/by/4.0/
|
To assist in the development of machine learning methods for automated
classification of spectroscopic data, we have generated a universal synthetic
dataset that can be used for model validation. This dataset contains artificial
spectra designed to represent experimental measurements from techniques
including X-ray diffraction, nuclear magnetic resonance, and Raman
spectroscopy. The dataset generation process features customizable parameters,
such as scan length and peak count, which can be adjusted to fit the problem at
hand. As an initial benchmark, we simulated a dataset containing 35,000 spectra
based on 500 unique classes. To automate the classification of this data, eight
different machine learning architectures were evaluated. From the results, we
shed light on which factors are most critical to achieve optimal performance
for the classification task. The scripts used to generate synthetic spectra, as
well as our benchmark dataset and evaluation routines, are made publicly
available to aid in the development of improved machine learning models for
spectroscopic analysis.
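A minimal sketch of generating one synthetic spectrum as a sum of Gaussian peaks plus noise, in the spirit of the dataset described above, follows; scan length, peak counts, and widths are arbitrary illustrative choices, not the paper's settings.

```python
# Minimal sketch of a synthetic "spectrum": random Gaussian peaks plus noise.

import numpy as np

def synthetic_spectrum(scan_length=1000, n_peaks=5, noise_level=0.01, rng=None):
    rng = rng or np.random.default_rng()
    x = np.linspace(0.0, 1.0, scan_length)
    y = np.zeros_like(x)
    for _ in range(n_peaks):
        center = rng.uniform(0.05, 0.95)
        width = rng.uniform(0.002, 0.01)
        height = rng.uniform(0.1, 1.0)
        y += height * np.exp(-0.5 * ((x - center) / width) ** 2)
    return y + rng.normal(scale=noise_level, size=scan_length)

# One class would fix the peak positions; samples of that class perturb heights
# and noise. Here we just draw a single example spectrum:
spectrum = synthetic_spectrum()
```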
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 10:37:19 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 09:25:53 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Schuetzke",
"Jan",
""
],
[
"Szymanski",
"Nathan J.",
""
],
[
"Reischl",
"Markus",
""
]
] |
new_dataset
| 0.998506 |
2206.06401
|
Hao Bai
|
Hao Bai
|
GoAutoBash: Golang-based Multi-Thread Automatic Pull-Execute Framework
with GitHub Webhooks And Queuing Strategy
|
Accepted by EPCE'22
| null | null | null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, more and more server tasks are performed using full automation,
including grading tasks for students in college courses, integration tasks
for programmers in large projects and server-based transactions, and
visualization tasks for researchers in data-dense topics. Using automation on
servers greatly reduces the burden of manual work.
Although server tools like CI/CD for continuous integration and Hexo for
automated blog deployment have been developed, they are dedicated to
specific functionalities and thus lack generality. In this paper, we
introduce a Golang-based automation framework that reacts to the events
happening on GitHub in a multi-thread approach. This framework utilizes a queue
to arrange the tasks submitted and execute each task with a thread in a
preemptive manner. We then use the project GoAutoGrader to illustrate a
specific implementation of this framework and its value in implementing
high-freedom server applications. As Golang is developing rapidly thanks to its
efficient support for parallel programming and its ease of learning for
programmers familiar with C-like languages, we decided to develop this system
in Golang.
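To illustrate the queuing strategy, a language-agnostic sketch follows (written in Python for brevity, while the actual project is implemented in Go): webhook events enqueue tasks and worker threads pull and execute them.

```python
# Sketch of the webhook -> queue -> worker-thread pattern described above.

import queue
import threading

task_queue: "queue.Queue[dict]" = queue.Queue()

def handle_webhook(event: dict) -> None:
    """Called when a GitHub webhook fires; just enqueue the work."""
    task_queue.put({"repo": event["repository"], "commit": event["after"]})

def worker() -> None:
    while True:
        task = task_queue.get()        # blocks until a task is available
        try:
            print(f"pull + execute for {task['repo']} @ {task['commit']}")
            # ... git pull, run the configured script, collect results ...
        finally:
            task_queue.task_done()

for _ in range(4):                      # a small pool of worker threads
    threading.Thread(target=worker, daemon=True).start()

handle_webhook({"repository": "example/repo", "after": "abc123"})
task_queue.join()                       # wait until all queued tasks finish
```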
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 18:11:25 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Bai",
"Hao",
""
]
] |
new_dataset
| 0.998856 |
2206.06423
|
Xinchen Yu
|
Xinchen Yu, Eduardo Blanco, Lingzi Hong
|
Hate Speech and Counter Speech Detection: Conversational Context Does
Matter
|
Accepted by NAACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hate speech is plaguing the cyberspace along with user-generated content.
This paper investigates the role of conversational context in the annotation
and detection of online hate and counter speech, where context is defined as
the preceding comment in a conversation thread. We created a context-aware
dataset for a 3-way classification task on Reddit comments: hate speech,
counter speech, or neutral. Our analyses indicate that context is critical to
identify hate and counter speech: human judgments change for most comments
depending on whether we show annotators the context. A linguistic analysis
draws insights into the language people use to express hate and counter speech.
Experimental results show that neural networks obtain significantly better
results if context is taken into account. We also present qualitative error
analyses shedding light into (a) when and why context is beneficial and (b) the
remaining errors made by our best model when context is taken into account.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 19:05:44 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Yu",
"Xinchen",
""
],
[
"Blanco",
"Eduardo",
""
],
[
"Hong",
"Lingzi",
""
]
] |
new_dataset
| 0.999807 |
2206.06428
|
Hao Bai
|
Hao Bai
|
VSC-WebGPU: A Selenium-based VS Code Extension For Local Edit And Cloud
Compilation on WebGPU
|
Published by IEEE on conference ICFTIC'21
| null |
10.1109/ICFTIC54370.2021.9647189
| null |
cs.NI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of information transmission, Software as a Service
(SaaS) is developing so rapidly that everything originally local tends to
be moved onto servers and executed in the cloud. WebGPU is such a SaaS
system: it hosts a GPU-equipped server to execute students' CUDA code and
exposes a RESTful front-end website where students write their code.
However, programming on an HTML-based interface is not satisfactory due to the
lack of syntax highlighting and automatic keyword completion. On the other
hand, Visual Studio Code is becoming the most popular programming interface
due to its strong community and rich functionality. Thus, we propose a
system in which students write code locally using VS Code with its
coding-assistance extensions and push the code to WebGPU with a single button
press using our VSC-WebGPU extension. The extension is divided into 4 parts:
the login process for automatically logging the student into WebGPU, the pull
process that pulls the code down to the local workspace, the push process that
copies the code to the browser for compiling and running, and the exit process
to exit the browser and close the connection. This 4-step architecture is also
applicable to any other automated tool that pushes local code to
authorization-required SaaS systems using Web automation.
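A hedged Selenium sketch of the login/pull-push/exit flow follows; the URLs and element IDs are hypothetical placeholders, not the real WebGPU page structure.

```python
# Hedged sketch of the browser-automation idea (element IDs and URLs are
# hypothetical placeholders). Requires selenium and a matching chromedriver.

from selenium import webdriver
from selenium.webdriver.common.by import By

def push_code_to_webgpu(username: str, password: str, code: str) -> None:
    driver = webdriver.Chrome()
    try:
        # 1) login
        driver.get("https://webgpu.example.edu/login")        # placeholder URL
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()
        # 2) push: copy local code into the editor and trigger compile/run
        driver.get("https://webgpu.example.edu/editor")
        editor = driver.find_element(By.ID, "code-area")
        editor.clear()
        editor.send_keys(code)
        driver.find_element(By.ID, "run-button").click()
    finally:
        # 3) exit: close the browser and the connection
        driver.quit()
```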
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 19:18:26 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Bai",
"Hao",
""
]
] |
new_dataset
| 0.999731 |
2206.06481
|
ShahRukh Athar
|
ShahRukh Athar, Zexiang Xu, Kalyan Sunkavalli, Eli Shechtman and
Zhixin Shu
|
RigNeRF: Fully Controllable Neural 3D Portraits
|
The project page can be found here:
http://shahrukhathar.github.io/2022/06/06/RigNeRF.html
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Volumetric neural rendering methods, such as neural radiance fields (NeRFs),
have enabled photo-realistic novel view synthesis. However, in their standard
form, NeRFs do not support the editing of objects, such as a human head, within
a scene. In this work, we propose RigNeRF, a system that goes beyond just novel
view synthesis and enables full control of head pose and facial expressions
learned from a single portrait video. We model changes in head pose and facial
expressions using a deformation field that is guided by a 3D morphable face
model (3DMM). The 3DMM effectively acts as a prior for RigNeRF that learns to
predict only residuals to the 3DMM deformations and allows us to render novel
(rigid) poses and (non-rigid) expressions that were not present in the input
sequence. Using only a smartphone-captured short video of a subject for
training, we demonstrate the effectiveness of our method on free view synthesis
of a portrait scene with explicit head pose and expression controls. The
project page can be found here:
http://shahrukhathar.github.io/2022/06/06/RigNeRF.html
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 21:28:34 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Athar",
"ShahRukh",
""
],
[
"Xu",
"Zexiang",
""
],
[
"Sunkavalli",
"Kalyan",
""
],
[
"Shechtman",
"Eli",
""
],
[
"Shu",
"Zhixin",
""
]
] |
new_dataset
| 0.98437 |
2206.06489
|
Ziang Liu
|
Ziang Liu, Roberto Mart\'in-Mart\'in, Fei Xia, Jiajun Wu, Li Fei-Fei
|
BEHAVIOR in Habitat 2.0: Simulator-Independent Logical Task Description
for Benchmarking Embodied AI Agents
| null | null | null | null |
cs.AI cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots excel in performing repetitive and precision-sensitive tasks in
controlled environments such as warehouses and factories, but have not been yet
extended to embodied AI agents providing assistance in household tasks.
Inspired by the catalyzing effect that benchmarks have played in the AI fields
such as computer vision and natural language processing, the community is
looking for new benchmarks for embodied AI. Prior work on embodied AI benchmarks
defines tasks using different formalisms, often specific to one environment,
simulator or domain, making it hard to develop general and comparable
solutions. In this work, we bring a subset of BEHAVIOR activities into Habitat
2.0 to benefit from its fast simulation speed, as a first step towards
demonstrating the ease of adapting activities defined in the logic space into
different simulators.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 21:37:31 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Liu",
"Ziang",
""
],
[
"Martín-Martín",
"Roberto",
""
],
[
"Xia",
"Fei",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Fei-Fei",
"Li",
""
]
] |
new_dataset
| 0.995698 |
2206.06588
|
Chandan Reddy
|
Chandan K. Reddy, Llu\'is M\`arquez, Fran Valero, Nikhil Rao, Hugo
Zaragoza, Sambaran Bandyopadhyay, Arnab Biswas, Anlu Xing, Karthik Subbian
|
Shopping Queries Dataset: A Large-Scale ESCI Benchmark for Improving
Product Search
| null | null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Improving the quality of search results can significantly enhance users'
experience and engagement with search engines. In spite of several recent
advancements in the fields of machine learning and data mining, correctly
classifying items for a particular user search query has been a long-standing
challenge, which still has a large room for improvement. This paper introduces
the "Shopping Queries Dataset", a large dataset of difficult Amazon search
queries and results, publicly released with the aim of fostering research in
improving the quality of search results. The dataset contains around 130
thousand unique queries and 2.6 million manually labeled (query,product)
relevance judgements. The dataset is multilingual with queries in English,
Japanese, and Spanish. The Shopping Queries Dataset is being used in one of the
KDDCup'22 challenges. In this paper, we describe the dataset and present three
evaluation tasks along with baseline results: (i) ranking the results list,
(ii) classifying product results into relevance categories, and (iii)
identifying substitute products for a given query. We anticipate that this data
will become the gold standard for future research in the topic of product
search.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 04:25:26 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Reddy",
"Chandan K.",
""
],
[
"Màrquez",
"Lluís",
""
],
[
"Valero",
"Fran",
""
],
[
"Rao",
"Nikhil",
""
],
[
"Zaragoza",
"Hugo",
""
],
[
"Bandyopadhyay",
"Sambaran",
""
],
[
"Biswas",
"Arnab",
""
],
[
"Xing",
"Anlu",
""
],
[
"Subbian",
"Karthik",
""
]
] |
new_dataset
| 0.999553 |
2206.06606
|
Jinan Zou
|
Jinan Zou, Haiyao Cao, Lingqiao Liu, Yuhao Lin, Ehsan Abbasnejad,
Javen Qinfeng Shi
|
Astock: A New Dataset and Automated Stock Trading based on
Stock-specific News Analyzing Model
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural Language Processing(NLP) demonstrates a great potential to support
financial decision-making by analyzing the text from social media or news
outlets. In this work, we build a platform to study the NLP-aided stock
auto-trading algorithms systematically. In contrast to the previous work, our
platform is characterized by three features: (1) We provide financial news for
each specific stock. (2) We provide various stock factors for each stock. (3)
We evaluate performance from more financial-relevant metrics. Such a design
allows us to develop and evaluate NLP-aided stock auto-trading algorithms in a
more realistic setting. In addition to designing an evaluation platform and
dataset collection, we also made a technical contribution by proposing a system
to automatically learn a good feature representation from various input
information. The key to our algorithm is a method called semantic role labeling
Pooling (SRLP), which leverages Semantic Role Labeling (SRL) to create a
compact representation of each news paragraph. Based on SRLP, we further
incorporate other stock factors to make the final prediction. In addition, we
propose a self-supervised learning strategy based on SRLP to enhance the
out-of-distribution generalization performance of our system. Through our
experimental study, we show that the proposed method achieves better
performance, outperforming all baselines in annualized rate of return as
well as the CSI300 and XIN9 indices in maximum drawdown in real
trading. Our Astock dataset and code are available at
https://github.com/JinanZou/Astock.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 05:55:23 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Zou",
"Jinan",
""
],
[
"Cao",
"Haiyao",
""
],
[
"Liu",
"Lingqiao",
""
],
[
"Lin",
"Yuhao",
""
],
[
"Abbasnejad",
"Ehsan",
""
],
[
"Shi",
"Javen Qinfeng",
""
]
] |
new_dataset
| 0.999804 |
2206.06635
|
Haiming Gao
|
Haiming Gao, Qibo Qiu, Wei Hua, Xuebo Zhang, Zhengyong Han, Shun Zhang
|
CVR-LSE: Compact Vectorization Representation of Local Static
Environments for Unmanned Ground Vehicles
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To meet the requirements of general static obstacle detection, this paper
proposes a compact vectorization representation of local static environments
for unmanned ground vehicles. First, high-frequency pose information is
obtained by fusing LiDAR and IMU data. Then, two-dimensional (2D) obstacle
points are generated and maintained in a fixed-size grid map. Finally, the
local static environment is described via multiple convex polygons, realized
through double threshold-based boundary simplification and convex polygon
segmentation. The proposed approach has been applied in a practical driverless
project in a park, and qualitative experimental results on typical scenes
verify its effectiveness and robustness. In addition, the quantitative
evaluation shows superior performance, using about 60% fewer points to
represent the local static environment than traditional grid map-based methods.
Furthermore, the running time (15 ms) shows that the proposed approach can be
used for real-time local static environment perception. The
corresponding code can be accessed at https://github.com/ghm0819/cvr_lse.
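  The final polygon-extraction step lends itself to a short sketch. The snippet below is only an illustrative stand-in for the approach described above: it clusters 2D obstacle points and wraps each cluster in a convex polygon, using off-the-shelf DBSCAN and ConvexHull rather than the authors' double threshold-based boundary simplification and convex polygon segmentation; the clustering parameters are assumed values.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN

def obstacle_points_to_polygons(points_2d, eps=0.5, min_samples=5):
    """Group 2D obstacle points into clusters and wrap each in a convex polygon.
    Illustrative stand-in only, not the authors' implementation."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_2d)
    polygons = []
    for cluster_id in set(labels) - {-1}:            # -1 marks noise points
        cluster = points_2d[labels == cluster_id]
        if len(cluster) >= 3:
            hull = ConvexHull(cluster)
            polygons.append(cluster[hull.vertices])  # polygon vertices, CCW order
    return polygons

# Example: two well-separated blobs of obstacle points
pts = np.vstack([np.random.rand(50, 2), np.random.rand(50, 2) + 5.0])
print([p.shape for p in obstacle_points_to_polygons(pts)])
```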
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 06:54:19 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Gao",
"Haiming",
""
],
[
"Qiu",
"Qibo",
""
],
[
"Hua",
"Wei",
""
],
[
"Zhang",
"Xuebo",
""
],
[
"Han",
"Zhengyong",
""
],
[
"Zhang",
"Shun",
""
]
] |
new_dataset
| 0.997638 |
2206.06642
|
Michael Welzl
|
Michael Welzl, Peyman Teymoori, Safiqul Islam, David Hutchison, Stein
Gjessing
|
Future Internet Congestion Control: The Diminishing Feedback Problem
|
Accepted for publication in IEEE Communications Magazine, 2022 (Open
Call Article)
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
It is increasingly difficult for Internet congestion control mechanisms to
obtain the feedback that they need. This lack of feedback can have severe
performance implications, and it is bound to become worse. In the long run, the
problem may only be fixable by fundamentally changing the way congestion
control is done in the Internet. We substantiate this claim by looking at the
evolution of the Internet's infrastructure over the past thirty years, and by
examining the most common behavior of Internet traffic. Considering the goals
that congestion control mechanisms are intended to address, and taking into
account contextual developments in the Internet ecosystem, we arrive at
conclusions and recommendations about possible future congestion control design
directions. In particular, we argue that congestion control mechanisms should
move away from their strict "end-to-end" adherence. This change would benefit
from avoiding a "one size fits all circumstances" approach, and moving towards
a more selective set of mechanisms that will result in a better performing
Internet. We will also discuss how this future vision differs from today's use
of Performance Enhancing Proxies (PEPs).
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 07:15:48 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Welzl",
"Michael",
""
],
[
"Teymoori",
"Peyman",
""
],
[
"Islam",
"Safiqul",
""
],
[
"Hutchison",
"David",
""
],
[
"Gjessing",
"Stein",
""
]
] |
new_dataset
| 0.978471 |
2206.06694
|
Ezequiel de la Rosa
|
Moritz Roman Hernandez Petzsche, Ezequiel de la Rosa, Uta Hanning,
Roland Wiest, Waldo Enrique Valenzuela Pinilla, Mauricio Reyes, Maria Ines
Meyer, Sook-Lei Liew, Florian Kofler, Ivan Ezhov, David Robben, Alexander
Hutton, Tassilo Friedrich, Teresa Zarth, Johannes B\"urkle, The Anh Baran,
Bjoern Menze, Gabriel Broocks, Lukas Meyer, Claus Zimmer, Tobias
Boeckh-Behrens, Maria Berndt, Benno Ikenberg, Benedikt Wiestler, Jan S.
Kirschke
|
ISLES 2022: A multi-center magnetic resonance imaging stroke lesion
segmentation dataset
|
12 pages, 2 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Magnetic resonance imaging (MRI) is a central modality for stroke imaging. It
is used upon patient admission to make treatment decisions such as selecting
patients for intravenous thrombolysis or endovascular therapy. MRI is later
used in the duration of hospital stay to predict outcome by visualizing infarct
core size and location. Furthermore, it may be used to characterize stroke
etiology, e.g. differentiation between (cardio)-embolic and non-embolic stroke.
Computer-based automated medical image processing is increasingly finding its
way into clinical routine. Previous iterations of the Ischemic Stroke Lesion
Segmentation (ISLES) challenge have aided in identifying benchmark methods for
acute and sub-acute ischemic stroke lesion segmentation.
Here we introduce an expert-annotated, multicenter MRI dataset for segmentation
of acute to subacute stroke lesions. This dataset comprises 400 multi-vendor
MRI cases with high variability in stroke lesion size, quantity and location.
It is split into a training dataset of n=250 and a test dataset of n=150. All
training data will be made publicly available. The test dataset will be used
for model validation only and will not be released to the public. This dataset
serves as the foundation of the ISLES 2022 challenge with the goal of finding
algorithmic methods to enable the development and benchmarking of robust and
accurate segmentation algorithms for ischemic stroke.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 08:54:40 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Petzsche",
"Moritz Roman Hernandez",
""
],
[
"de la Rosa",
"Ezequiel",
""
],
[
"Hanning",
"Uta",
""
],
[
"Wiest",
"Roland",
""
],
[
"Pinilla",
"Waldo Enrique Valenzuela",
""
],
[
"Reyes",
"Mauricio",
""
],
[
"Meyer",
"Maria Ines",
""
],
[
"Liew",
"Sook-Lei",
""
],
[
"Kofler",
"Florian",
""
],
[
"Ezhov",
"Ivan",
""
],
[
"Robben",
"David",
""
],
[
"Hutton",
"Alexander",
""
],
[
"Friedrich",
"Tassilo",
""
],
[
"Zarth",
"Teresa",
""
],
[
"Bürkle",
"Johannes",
""
],
[
"Baran",
"The Anh",
""
],
[
"Menze",
"Bjoern",
""
],
[
"Broocks",
"Gabriel",
""
],
[
"Meyer",
"Lukas",
""
],
[
"Zimmer",
"Claus",
""
],
[
"Boeckh-Behrens",
"Tobias",
""
],
[
"Berndt",
"Maria",
""
],
[
"Ikenberg",
"Benno",
""
],
[
"Wiestler",
"Benedikt",
""
],
[
"Kirschke",
"Jan S.",
""
]
] |
new_dataset
| 0.999711 |
2206.06769
|
Xuan Guo
|
Xuan Guo, Daniel Bates, Robert Mullins, Alex Bradbury
|
Muntjac -- Open Source Multicore RV64 Linux-capable SoC
|
To be published in the First Workshop on Open-Source Computer
Architecture Research
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Muntjac is an open-source collection of components which can be used to build
a multicore, Linux-capable system-on-chip. This includes a 64-bit RISC-V core,
a cache subsystem, and TileLink interconnect allowing cache-coherent multicore
configurations. Each component is easy to understand, verify, and extend, with
most being configurable enough to be useful across a wide range of
applications.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 12:02:59 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Guo",
"Xuan",
""
],
[
"Bates",
"Daniel",
""
],
[
"Mullins",
"Robert",
""
],
[
"Bradbury",
"Alex",
""
]
] |
new_dataset
| 0.99945 |
2206.06994
|
Roozbeh Mottaghi
|
Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Luca Weihs, Jordi
Salvador, Kiana Ehsani, Winson Han, Eric Kolve, Ali Farhadi, Aniruddha
Kembhavi, Roozbeh Mottaghi
|
ProcTHOR: Large-Scale Embodied AI Using Procedural Generation
|
ProcTHOR website: https://procthor.allenai.org
| null | null | null |
cs.AI cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive datasets and high-capacity models have driven many recent
advancements in computer vision and natural language understanding. This work
presents a platform to enable similar success stories in Embodied AI. We
propose ProcTHOR, a framework for procedural generation of Embodied AI
environments. ProcTHOR enables us to sample arbitrarily large datasets of
diverse, interactive, customizable, and performant virtual environments to
train and evaluate embodied agents across navigation, interaction, and
manipulation tasks. We demonstrate the power and potential of ProcTHOR via a
sample of 10,000 generated houses and a simple neural model. Models trained
using only RGB images on ProcTHOR, with no explicit mapping and no human task
supervision, produce state-of-the-art results across 6 embodied AI benchmarks
for navigation, rearrangement, and arm manipulation, including the presently
running Habitat 2022, AI2-THOR Rearrangement 2022, and RoboTHOR challenges. We
also demonstrate strong 0-shot results on these benchmarks, via pre-training on
ProcTHOR with no fine-tuning on the downstream benchmark, often beating
previous state-of-the-art systems that access the downstream training data.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 17:09:35 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Deitke",
"Matt",
""
],
[
"VanderBilt",
"Eli",
""
],
[
"Herrasti",
"Alvaro",
""
],
[
"Weihs",
"Luca",
""
],
[
"Salvador",
"Jordi",
""
],
[
"Ehsani",
"Kiana",
""
],
[
"Han",
"Winson",
""
],
[
"Kolve",
"Eric",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Mottaghi",
"Roozbeh",
""
]
] |
new_dataset
| 0.990954 |
2206.07028
|
Georgia Gkioxari
|
Georgia Gkioxari, Nikhila Ravi, Justin Johnson
|
Learning 3D Object Shape and Layout without 3D Supervision
|
CVPR 2022, project page: https://gkioxari.github.io/usl/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A 3D scene consists of a set of objects, each with a shape and a layout
giving their position in space. Understanding 3D scenes from 2D images is an
important goal, with applications in robotics and graphics. While there have
been recent advances in predicting 3D shape and layout from a single image,
most approaches rely on 3D ground truth for training which is expensive to
collect at scale. We overcome these limitations and propose a method that
learns to predict 3D shape and layout for objects without any ground truth
shape or layout information: instead we rely on multi-view images with 2D
supervision which can more easily be collected at scale. Through extensive
experiments on 3D Warehouse, Hypersim, and ScanNet we demonstrate that our
approach scales to large datasets of realistic images, and compares favorably
to methods relying on 3D ground truth. On Hypersim and ScanNet where reliable
3D ground truth is not available, our approach outperforms supervised
approaches trained on smaller and less diverse datasets.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 17:49:44 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Gkioxari",
"Georgia",
""
],
[
"Ravi",
"Nikhila",
""
],
[
"Johnson",
"Justin",
""
]
] |
new_dataset
| 0.997053 |
2206.07047
|
Matteo Poggi
|
Fabio Tosi, Pierluigi Zama Ramirez, Matteo Poggi, Samuele Salti,
Stefano Mattoccia, Luigi Di Stefano
|
RGB-Multispectral Matching: Dataset, Learning Methodology, Evaluation
|
CVPR 2022, New Orleans. Project page:
https://cvlab-unibo.github.io/rgb-ms-web/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of registering synchronized color (RGB) and
multi-spectral (MS) images featuring very different resolution by solving
stereo matching correspondences. Purposely, we introduce a novel RGB-MS dataset
framing 13 different scenes in indoor environments and providing a total of 34
image pairs annotated with semi-dense, high-resolution ground-truth labels in
the form of disparity maps. To tackle the task, we propose a deep learning
architecture trained in a self-supervised manner by exploiting a further RGB
camera, required only during training data acquisition. In this setup, we can
conveniently learn cross-modal matching in the absence of ground-truth labels
by distilling knowledge from an easier RGB-RGB matching task based on a
collection of about 11K unlabeled image triplets. Experiments show that the
proposed pipeline sets a good performance bar (1.16 pixels average registration
error) for future research on this novel, challenging task.
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 17:59:59 GMT"
}
] | 2022-06-15T00:00:00 |
[
[
"Tosi",
"Fabio",
""
],
[
"Ramirez",
"Pierluigi Zama",
""
],
[
"Poggi",
"Matteo",
""
],
[
"Salti",
"Samuele",
""
],
[
"Mattoccia",
"Stefano",
""
],
[
"Di Stefano",
"Luigi",
""
]
] |
new_dataset
| 0.997855 |
2001.11224
|
Tuomo Hiippala
|
Tuomo Hiippala and John A. Bateman
|
Introducing the diagrammatic semiotic mode
|
16 pages; accepted at Diagrams 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As the use and diversity of diagrams across many disciplines grows, there is
an increasing interest in the diagrams research community concerning how such
diversity might be documented and explained. In this article, we argue that one
way of achieving increased reliability, coverage, and utility for a general
classification of diagrams is to draw on semiotic principles recently developed
within the field of multimodality. To this end, we sketch out the
internal details of what may tentatively be termed the diagrammatic semiotic
mode. This provides a natural account of how diagrammatic representations may
integrate natural language, various forms of graphics, diagrammatic elements
such as arrows, lines and other expressive resources into coherent
organisations, while still respecting the crucial diagrammatic contributions of
visual organisation. We illustrate the proposed approach using two recent
diagram corpora and show how a multimodal approach supports the empirical
analysis of diagrammatic representations, especially in identifying
diagrammatic constituents and describing their interrelations in a manner that
may be generalised across diagram types and be used to characterise distinct
kinds of functionality.
|
[
{
"version": "v1",
"created": "Thu, 30 Jan 2020 09:17:32 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Jun 2022 17:42:04 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Hiippala",
"Tuomo",
""
],
[
"Bateman",
"John A.",
""
]
] |
new_dataset
| 0.999641 |
2010.08391
|
Zihui Zhang
|
Cuican Yu, Zihui Zhang, Huibin Li
|
Reconstructing A Large Scale 3D Face Dataset for Deep 3D Face
Identification
|
we want to re-organize this paper
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning methods have brought many breakthroughs to computer vision,
especially in 2D face recognition. However, the bottleneck of deep learning
based 3D face recognition is that it is difficult to collect millions of 3D
faces, whether for industry or academia. In view of this situation, there are
many methods to generate more 3D faces from existing 3D faces through 3D face
data augmentation, which are used to train deep 3D face recognition models.
However, to the best of our knowledge, there is no method to generate 3D faces
from 2D face images for training deep 3D face recognition models. This letter
focuses on the role of reconstructed 3D facial surfaces in 3D face
identification and proposes a framework of 2D-aided deep 3D face
identification. In particular, we propose to reconstruct millions of 3D face
scans from a large-scale 2D face database (i.e., VGGFace2), using a deep
learning based 3D face reconstruction method (i.e., ExpNet). Then, we adopt a two-phase
training approach: In the first phase, we use millions of face images to
pre-train the deep convolutional neural network (DCNN), and in the second
phase, we use normal component images (NCI) of reconstructed 3D face scans to
train the DCNN. Extensive experimental results illustrate that the proposed
approach can greatly improve the rank-1 score of 3D face identification on the
FRGC v2.0, the Bosphorus, and the BU-3DFE 3D face databases, compared to the
model trained by 2D face images. Finally, our proposed approach achieves
state-of-the-art rank-1 scores on the FRGC v2.0 (97.6%), Bosphorus (98.4%), and
BU-3DFE (98.8%) databases. The experimental results show that the reconstructed
3D facial surfaces are useful and our 2D-aided deep 3D face identification
framework is meaningful given the scarcity of 3D faces.
|
[
{
"version": "v1",
"created": "Fri, 16 Oct 2020 13:48:38 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Jun 2022 10:01:39 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Yu",
"Cuican",
""
],
[
"Zhang",
"Zihui",
""
],
[
"Li",
"Huibin",
""
]
] |
new_dataset
| 0.99821 |
2101.07663
|
Dingwen Zhang
|
Mingchen Zhuge, Deng-Ping Fan, Nian Liu, Dingwen Zhang, Dong Xu, and
Ling Shao
|
Salient Object Detection via Integrity Learning
|
TPAMI accepted
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although current salient object detection (SOD) works have achieved
significant progress, they are limited when it comes to the integrity of the
predicted salient regions. We define the concept of integrity at both a micro
and macro level. Specifically, at the micro level, the model should highlight
all parts that belong to a certain salient object. Meanwhile, at the macro
level, the model needs to discover all salient objects in a given image. To
facilitate integrity learning for SOD, we design a novel Integrity Cognition
Network (ICON), which explores three important components for learning strong
integrity features. 1) Unlike existing models, which focus more on feature
discriminability, we introduce a diverse feature aggregation (DFA) component to
aggregate features with various receptive fields (i.e., kernel shape and
context) and increase feature diversity. Such diversity is the foundation for
mining the integral salient objects. 2) Based on the DFA features, we introduce
an integrity channel enhancement (ICE) component with the goal of enhancing
feature channels that highlight the integral salient objects, while suppressing
the other distracting ones. 3) After extracting the enhanced features, the
part-whole verification (PWV) method is employed to determine whether the part
and whole object features have strong agreement. Such part-whole agreements can
further improve the micro-level integrity for each salient object. To
demonstrate the effectiveness of our ICON, comprehensive experiments are
conducted on seven challenging benchmarks. Our ICON outperforms the baseline
methods in terms of a wide range of metrics. Notably, our ICON achieves about
10% relative improvement over the previous best model in terms of average false
negative ratio (FNR), on six datasets. Codes and results are available at:
https://github.com/mczhuge/ICON.
|
[
{
"version": "v1",
"created": "Tue, 19 Jan 2021 14:53:12 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jan 2021 03:55:27 GMT"
},
{
"version": "v3",
"created": "Sun, 21 Feb 2021 07:01:56 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Sep 2021 05:18:21 GMT"
},
{
"version": "v5",
"created": "Wed, 15 Sep 2021 04:16:42 GMT"
},
{
"version": "v6",
"created": "Wed, 13 Apr 2022 08:07:07 GMT"
},
{
"version": "v7",
"created": "Mon, 13 Jun 2022 08:14:47 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Zhuge",
"Mingchen",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Liu",
"Nian",
""
],
[
"Zhang",
"Dingwen",
""
],
[
"Xu",
"Dong",
""
],
[
"Shao",
"Ling",
""
]
] |
new_dataset
| 0.99739 |
2104.07921
|
Hung Le
|
Hung Le, Nancy F. Chen, Steven C.H. Hoi
|
VGNMN: Video-grounded Neural Module Network to Video-Grounded Language
Tasks
|
Accepted at NAACL 2022 (Oral)
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Neural module networks (NMN) have achieved success in image-grounded tasks
such as Visual Question Answering (VQA) on synthetic images. However, NMNs
have received very limited study in video-grounded dialogue tasks.
These tasks extend the complexity of traditional visual tasks with the
additional visual temporal variance and language cross-turn dependencies.
Motivated by recent NMN approaches on image-grounded tasks, we introduce
Video-grounded Neural Module Network (VGNMN) to model the information retrieval
process in video-grounded language tasks as a pipeline of neural modules. VGNMN
first decomposes all language components in dialogues to explicitly resolve any
entity references and detect corresponding action-based inputs from the
question. The detected entities and actions are used as parameters to
instantiate neural module networks and extract visual cues from the video. Our
experiments show that VGNMN can achieve promising performance on a challenging
video-grounded dialogue benchmark as well as a video QA benchmark.
|
[
{
"version": "v1",
"created": "Fri, 16 Apr 2021 06:47:41 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Jun 2022 14:13:09 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Le",
"Hung",
""
],
[
"Chen",
"Nancy F.",
""
],
[
"Hoi",
"Steven C. H.",
""
]
] |
new_dataset
| 0.999491 |
2105.00613
|
Barak Shoshany
|
Barak Shoshany
|
A C++17 Thread Pool for High-Performance Scientific Computing
|
23 pages, source code available at
https://github.com/bshoshany/thread-pool
| null |
10.5281/zenodo.4742687
| null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We present a modern C++17-compatible thread pool implementation, built from
scratch with high-performance scientific computing in mind. The thread pool is
implemented as a single lightweight and self-contained class, and does not have
any dependencies other than the C++17 standard library, thus allowing a great
degree of portability. In particular, our implementation does not utilize
OpenMP or any other high-level multithreading APIs, and thus gives the
programmer precise low-level control over the details of the parallelization,
which permits more robust optimizations. The thread pool was extensively tested
on both AMD and Intel CPUs with up to 40 cores and 80 threads. This paper
provides motivation, detailed usage instructions, and performance tests.
|
[
{
"version": "v1",
"created": "Mon, 3 May 2021 03:04:49 GMT"
},
{
"version": "v2",
"created": "Sat, 8 May 2021 16:12:52 GMT"
},
{
"version": "v3",
"created": "Sun, 12 Jun 2022 05:37:43 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Shoshany",
"Barak",
""
]
] |
new_dataset
| 0.994539 |
2112.02604
|
Renran Tian
|
Tina Chen, Taotao Jing, Renran Tian, Yaobin Chen, Joshua Domeyer,
Heishiro Toyoda, Rini Sherony, Zhengming Ding
|
PSI: A Pedestrian Behavior Dataset for Socially Intelligent Autonomous
Car
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prediction of pedestrian behavior is critical for fully autonomous vehicles
to drive in busy city streets safely and efficiently. The future autonomous
cars need to fit into mixed conditions with not only technical but also social
capabilities. While more algorithms and datasets have been developed to predict
pedestrian behaviors, these efforts lack benchmark labels and the capability to
estimate the temporal-dynamic intent changes of pedestrians, provide
explanations of the interaction scenes, and support algorithms with
social intelligence. This paper proposes and shares a new benchmark dataset
called the IUPUI-CSRC Pedestrian Situated Intent (PSI) data with two innovative
labels besides comprehensive computer vision labels. The first novel label is
the dynamic intent changes for the pedestrians to cross in front of the
ego-vehicle, achieved from 24 drivers with diverse backgrounds. The second one
is the text-based explanations of the driver reasoning process when estimating
pedestrian intents and predicting their behaviors during the interaction
period. These innovative labels can enable several computer vision tasks,
including pedestrian intent/behavior prediction, vehicle-pedestrian interaction
segmentation, and video-to-language mapping for explainable algorithms. The
released dataset can fundamentally improve the development of pedestrian
behavior prediction models and develop socially intelligent autonomous cars to
interact with pedestrians efficiently. The dataset has been evaluated on
different tasks and is publicly released.
|
[
{
"version": "v1",
"created": "Sun, 5 Dec 2021 15:54:57 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jun 2022 21:08:21 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Chen",
"Tina",
""
],
[
"Jing",
"Taotao",
""
],
[
"Tian",
"Renran",
""
],
[
"Chen",
"Yaobin",
""
],
[
"Domeyer",
"Joshua",
""
],
[
"Toyoda",
"Heishiro",
""
],
[
"Sherony",
"Rini",
""
],
[
"Ding",
"Zhengming",
""
]
] |
new_dataset
| 0.999685 |
2201.04756
|
Tianya Zhang Dr.
|
Tianya Zhang and Peter J. Jin
|
Roadside Lidar Vehicle Detection and Tracking Using Range And Intensity
Background Subtraction
| null |
Journal of Advanced Transportation, 2022
|
10.1155/2022/2771085
| null |
cs.CV eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we develop a solution for roadside LiDAR object detection
using a combination of two unsupervised learning algorithms. The 3D point
clouds are first converted into spherical coordinates and filled into the
elevation-azimuth matrix using a hash function. After that, the raw LiDAR data
are rearranged into new data structures to store the information of range,
azimuth, and intensity. Then, the Dynamic Mode Decomposition method is applied
to decompose the LiDAR data into low-rank backgrounds and sparse foregrounds
based on intensity channel pattern recognition. The Coarse Fine Triangle
Algorithm (CFTA) automatically finds the dividing value to separate the moving
targets from the static background according to range information. After intensity
and range background subtraction, the foreground moving objects will be
detected using a density-based detector and encoded into the state-space model
for tracking. The output of the proposed solution includes vehicle trajectories
that can enable many mobility and safety applications. The method was validated
at both path and point levels and outperformed the state-of-the-art. In
contrast to previous methods that operate directly on scattered, discrete point
clouds, the dynamic classification method establishes a simpler linear
relationship over the 3D measurement data, capturing the spatial-temporal
structure that we often desire.
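  To make the first preprocessing step concrete, here is a minimal sketch of converting Cartesian LiDAR returns to spherical coordinates and binning them into an elevation-azimuth matrix holding range and intensity. The bin counts and elevation limits are assumed values, and the later DMD and CFTA stages are not reproduced here.

```python
import numpy as np

def points_to_range_image(xyz, intensity, az_bins=1800, el_bins=64,
                          el_min=-0.45, el_max=0.26):
    """Bin LiDAR points into an elevation-azimuth matrix of range and intensity.
    Bin counts and elevation limits are illustrative assumptions."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)                       # (-pi, pi]
    elevation = np.arcsin(z / np.maximum(rng, 1e-9))
    col = ((azimuth + np.pi) / (2 * np.pi) * az_bins).astype(int) % az_bins
    row = np.clip(((elevation - el_min) / (el_max - el_min) * el_bins).astype(int),
                  0, el_bins - 1)
    range_img = np.full((el_bins, az_bins), np.inf)
    intens_img = np.zeros((el_bins, az_bins))
    for r, c, d, i in zip(row, col, rng, intensity):
        if d < range_img[r, c]:                      # keep the closest return per cell
            range_img[r, c] = d
            intens_img[r, c] = i
    return range_img, intens_img

# Example with random points standing in for a LiDAR sweep
pts = np.random.randn(1000, 3) * 10.0
rng_img, int_img = points_to_range_image(pts, np.random.rand(1000))
```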
|
[
{
"version": "v1",
"created": "Thu, 13 Jan 2022 00:54:43 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Jan 2022 22:38:22 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Feb 2022 23:21:53 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Mar 2022 18:20:39 GMT"
},
{
"version": "v5",
"created": "Wed, 8 Jun 2022 01:54:56 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Zhang",
"Tianya",
""
],
[
"Jin",
"Peter J.",
""
]
] |
new_dataset
| 0.999314 |
2201.12771
|
Jannik Z\"urn
|
Jannik Z\"urn, Wolfram Burgard
|
Self-Supervised Moving Vehicle Detection from Audio-Visual Cues
|
8 pages, 6 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Robust detection of moving vehicles is a critical task for any autonomously
operating outdoor robot or self-driving vehicle. Most modern approaches for
solving this task rely on training image-based detectors using large-scale
vehicle detection datasets such as nuScenes or the Waymo Open Dataset.
Providing manual annotations is an expensive and laborious exercise that does
not scale well in practice. To tackle this problem, we propose a
self-supervised approach that leverages audio-visual cues to detect moving
vehicles in videos. Our approach employs contrastive learning for localizing
vehicles in images from corresponding pairs of images and recorded audio. In
extensive experiments carried out with a real-world dataset, we demonstrate
that our approach provides accurate detections of moving vehicles and does not
require manual annotations. We furthermore show that our model can be used as a
teacher to supervise an audio-only detection model. This student model is
invariant to illumination changes and thus effectively bridges the domain gap
inherent to models leveraging exclusively vision as the predominant modality.
|
[
{
"version": "v1",
"created": "Sun, 30 Jan 2022 09:52:14 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2022 06:12:31 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Zürn",
"Jannik",
""
],
[
"Burgard",
"Wolfram",
""
]
] |
new_dataset
| 0.988271 |
2203.04860
|
Patrizio Bellan
|
Patrizio Bellan, Han van der Aa, Mauro Dragoni, Chiara Ghidini, Simone
Paolo Ponzetto
|
PET: An Annotated Dataset for Process Extraction from Natural Language
Text
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Process extraction from text is an important task of process discovery, for
which various approaches have been developed in recent years. However, in
contrast to other information extraction tasks, there is a lack of
gold-standard corpora of business process descriptions that are carefully
annotated with all the entities and relationships of interest. Due to this, it
is currently hard to compare the results obtained by extraction approaches in
an objective manner, whereas the lack of annotated texts also prevents the
application of data-driven information extraction methodologies, typical of the
natural language processing field. Therefore, to bridge this gap, we present
the PET dataset, a first corpus of business process descriptions annotated with
activities, gateways, actors, and flow information. We present our new
resource, including a variety of baselines to benchmark the difficulty and
challenges of business process extraction from text. PET can be accessed via
huggingface.co/datasets/patriziobellan/PET
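  Given the huggingface.co link above, the dataset can presumably be loaded with the Hugging Face datasets library as sketched below; the configuration name and split used here are assumptions, so the dataset card should be checked for the actual ones.

```python
from datasets import load_dataset

# Dataset id taken from the abstract; the config name and split are assumptions.
pet = load_dataset("patriziobellan/PET", name="token-classification")
print(pet)               # available splits and features
print(pet["test"][0])    # one annotated process description, if a "test" split exists
```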
|
[
{
"version": "v1",
"created": "Wed, 9 Mar 2022 16:33:59 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2022 13:19:25 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Bellan",
"Patrizio",
""
],
[
"van der Aa",
"Han",
""
],
[
"Dragoni",
"Mauro",
""
],
[
"Ghidini",
"Chiara",
""
],
[
"Ponzetto",
"Simone Paolo",
""
]
] |
new_dataset
| 0.997537 |
2204.03465
|
Javier Huertas-Tato
|
Javier Huertas-Tato and Alejandro Martin and David Camacho
|
BERTuit: Understanding Spanish language in Twitter through a native
transformer
|
Support: 1) BBVA FOUNDATION - CIVIC, 2) Spanish Ministry of Science
and Innovation - FightDIS (PID2020-117263GB-100) and XAI-Disinfodemics
(PLEC2021-007681), 3) Comunidad Autonoma de Madrid - S2018/TCS-4566, 4)
European Comission - IBERIFIER (2020-EU-IA-0252), 5) Digital Future Society
(Mobile World Capital Barcelona) - DisTrack, 6) UPM - Programa de Excelencia
para el Profesorado Universitario
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The appearance of complex attention-based language models such as BERT,
Roberta or GPT-3 has allowed to address highly complex tasks in a plethora of
scenarios. However, when applied to specific domains, these models encounter
considerable difficulties. This is the case of Social Networks such as Twitter,
an ever-changing stream of information written with informal and complex
language, where each message requires careful evaluation to be understood even
by humans given the important role that context plays. Addressing tasks in this
domain through Natural Language Processing involves severe challenges. When
powerful state-of-the-art multilingual language models are applied to this
scenario, language specific nuances use to get lost in translation. To face
these challenges we present \textbf{BERTuit}, the larger transformer proposed
so far for Spanish language, pre-trained on a massive dataset of 230M Spanish
tweets using RoBERTa optimization. Our motivation is to provide a powerful
resource to better understand Spanish Twitter and to be used on applications
focused on this social network, with special emphasis on solutions devoted to
tackle the spreading of misinformation in this platform. BERTuit is evaluated
on several tasks and compared against M-BERT, XLM-RoBERTa and XLM-T, very
competitive multilingual transformers. The utility of our approach is shown
with applications, in this case: a zero-shot methodology to visualize groups of
hoaxes and profiling authors spreading disinformation.
Misinformation spreads wildly on platforms such as Twitter in languages other
than English, meaning performance of transformers may suffer when transferred
outside English speaking communities.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 14:28:51 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2022 11:29:34 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Huertas-Tato",
"Javier",
""
],
[
"Martin",
"Alejandro",
""
],
[
"Camacho",
"David",
""
]
] |
new_dataset
| 0.979788 |
2204.11167
|
Xiaojian Ma
|
Xiaojian Ma, Weili Nie, Zhiding Yu, Huaizu Jiang, Chaowei Xiao, Yuke
Zhu, Song-Chun Zhu, Anima Anandkumar
|
RelViT: Concept-guided Vision Transformer for Visual Relational
Reasoning
|
ICLR 2022; Code: https://github.com/NVlabs/RelViT
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Reasoning about visual relationships is central to how humans interpret the
visual world. This task remains challenging for current deep learning
algorithms since it requires addressing three key technical problems jointly:
1) identifying object entities and their properties, 2) inferring semantic
relations between pairs of entities, and 3) generalizing to novel
object-relation combinations, i.e., systematic generalization. In this work, we
use vision transformers (ViTs) as our base model for visual reasoning and make
better use of concepts defined as object entities and their relations to
improve the reasoning ability of ViTs. Specifically, we introduce a novel
concept-feature dictionary to allow flexible image feature retrieval at
training time with concept keys. This dictionary enables two new concept-guided
auxiliary tasks: 1) a global task for promoting relational reasoning, and 2) a
local task for facilitating semantic object-centric correspondence learning. To
examine the systematic generalization of visual reasoning models, we introduce
systematic splits for the standard HICO and GQA benchmarks. We show the
resulting model, Concept-guided Vision Transformer (or RelViT for short)
significantly outperforms prior approaches on HICO and GQA by 16% and 13% in
the original split, and by 43% and 18% in the systematic split. Our ablation
analyses also reveal our model's compatibility with multiple ViT variants and
robustness to hyper-parameters.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 02:46:43 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jun 2022 13:42:27 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Ma",
"Xiaojian",
""
],
[
"Nie",
"Weili",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Jiang",
"Huaizu",
""
],
[
"Xiao",
"Chaowei",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Anandkumar",
"Anima",
""
]
] |
new_dataset
| 0.998402 |
2205.06116
|
Arash Tavakoli
|
Arash Tavakoli, Nathan Lai, Vahid Balali, and Arsalan Heydarian
|
How are Drivers' Stress Levels and Emotions Associated with the Driving
Context? A Naturalistic Study
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding and mitigating drivers' negative emotions, stress levels, and
anxiety is of high importance for decreasing accident rates, and enhancing road
safety. While detecting drivers' stress and negative emotions can significantly
help with this goal, understanding what might be associated with increases in
drivers' negative emotions and high stress levels might better help with
planning interventions. While studies have provided significant insight into
detecting drivers' emotions and stress levels, few studies have focused on the
reasons behind changes in stress levels and negative emotions. In this study,
by using a naturalistic driving study database, we analyze the changes in the
driving scene, including road objects and the dynamical relationship between
the ego vehicle and the lead vehicle with respect to changes in drivers'
psychophysiological metrics (i.e., heart rate (HR) and facial expressions). Our
results indicate that different road objects might be associated with varying
levels of increase in drivers' HR as well as different proportions of negative
facial emotions detected through computer vision. Larger vehicles on the road,
such as trucks and buses, are associated with the highest amount of increase in
drivers' HR as well as negative emotions. Additionally, shorter distances and
higher standard deviation in the distance to the lead vehicle are associated
with a higher number of abrupt increases in drivers' HR, depicting a possible
increase in stress level. Our finding indicates more positive emotions, lower
facial engagement, and a lower abrupt increase in HR at a higher speed of
driving, which often happens in highway environments. This research
collectively shows that driving at higher speeds happening in highways by
avoiding certain road objects might be a better fit for keeping drivers in a
calmer, more positive state.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2022 14:30:50 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jun 2022 02:55:30 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Tavakoli",
"Arash",
""
],
[
"Lai",
"Nathan",
""
],
[
"Balali",
"Vahid",
""
],
[
"Heydarian",
"Arsalan",
""
]
] |
new_dataset
| 0.983411 |
2206.01872
|
Fernando Pi\~nero Gonz\'alez
|
Fernando Pi\~nero Gonz\'alez and Doel Rivera Laboy
|
Affine Symplectic Grassmann codes
|
arXiv admin note: substantial text overlap with arXiv:2110.08964
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this manuscript, we introduce a new class of linear codes, called affine
symplectic Grassmann codes, and determine their parameters, automorphism group,
minimum distance codewords, dual code and other key features. These linear
codes are defined from an affine part of a polar symplectic Grassmannian. They
combine polar symplectic Grassmann codes and affine Grassmann codes.
|
[
{
"version": "v1",
"created": "Sat, 4 Jun 2022 01:44:32 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 22:53:07 GMT"
},
{
"version": "v3",
"created": "Sat, 11 Jun 2022 15:32:56 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"González",
"Fernando Piñero",
""
],
[
"Laboy",
"Doel Rivera",
""
]
] |
new_dataset
| 0.996552 |
2206.02327
|
Jaime Moraga
|
Jaime Moraga, H. Sebnem Duzgun
|
JigsawHSI: a network for Hyperspectral Image classification
|
7 pages, 7 figures, not peer reviewed
| null | null | null |
cs.CV cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This article describes Jigsaw, a convolutional neural network (CNN) used in
geosciences and based on Inception but tailored for geoscientific analyses.
It introduces JigsawHSI (based on Jigsaw) and applies it to the land-use land-cover
(LULC) classification problem with the Indian Pines, Pavia University and
Salinas hyperspectral image data sets. The network is compared against
HybridSN, a spectral-spatial 3D-CNN followed by 2D-CNN that achieves
state-of-the-art results on the datasets. This short article proves that
JigsawHSI is able to meet or exceed HybridSN's performance in all three cases.
Additionally, the use of jigsaw in geosciences is highlighted, while the code
and toolkit are made available.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 02:56:51 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 22:04:10 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Moraga",
"Jaime",
""
],
[
"Duzgun",
"H. Sebnem",
""
]
] |
new_dataset
| 0.994978 |
2206.02852
|
Hesham Almatary
|
Hesham Almatary, Michael Dodson, Jessica Clarke, Peter Rugg, Ivan
Gomes, Michal Podhradsky, Peter G. Neumann, Simon W. Moore, Robert N. M.
Watson
|
CompartOS: CHERI Compartmentalization for Embedded Systems
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Existing high-end embedded systems face frequent security attacks. Software
compartmentalization is one technique to limit the attacks' effects to the
compromised compartment and not the entire system. Unfortunately, the existing
state-of-the-art embedded hardware-software solutions do not work well to
enforce software compartmentalization for high-end embedded systems. MPUs are
not fine-grained and suffer from significant scalability limitations as they
can only protect a small and fixed number of memory regions. On the other hand,
MMUs suffer from non-determinism and coarse-grained protection. This paper
introduces CompartOS as a lightweight linkage-based compartmentalization model
for high-end, complex, mainstream embedded systems. CompartOS builds on CHERI,
a capability-based hardware architecture, to meet scalability, availability,
compatibility, and fine-grained security goals. Microbenchmarks show that
CompartOS' protection-domain crossing is 95% faster than MPU-based IPC. We
applied the CompartOS model, with low effort, to complex existing systems,
including TCP servers and a safety-critical automotive demo. CompartOS not only
catches 10 out of 13 FreeRTOS-TCP published vulnerabilities that MPU-based
protection (e.g., uVisor) cannot catch but can also recover from them. Further,
our TCP throughput evaluations show that our CompartOS prototype is 52% faster
than relevant MPU-based compartmentalization models (e.g., ACES), with a 15%
overhead compared to an unprotected system. This comes at an FPGA's LUTs
overhead of 10.4% to support CHERI for an unprotected baseline RISC-V
processor, compared to 7.6% to support MPU, while CHERI only incurs 1.3% of the
registers area overhead compared to 2% for MPU.
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 18:59:02 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jun 2022 11:00:15 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Almatary",
"Hesham",
""
],
[
"Dodson",
"Michael",
""
],
[
"Clarke",
"Jessica",
""
],
[
"Rugg",
"Peter",
""
],
[
"Gomes",
"Ivan",
""
],
[
"Podhradsky",
"Michal",
""
],
[
"Neumann",
"Peter G.",
""
],
[
"Moore",
"Simon W.",
""
],
[
"Watson",
"Robert N. M.",
""
]
] |
new_dataset
| 0.965257 |
2206.03544
|
Roman Beliy
|
Ganit Kupershmidt, Roman Beliy, Guy Gaziv, Michal Irani
|
A Penny for Your (visual) Thoughts: Self-Supervised Reconstruction of
Natural Movies from Brain Activity
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reconstructing natural videos from fMRI brain recordings is very challenging,
for two main reasons: (i) As fMRI data acquisition is difficult, we only have a
limited amount of supervised samples, which is not enough to cover the huge
space of natural videos; and (ii) The temporal resolution of fMRI recordings is
much lower than the frame rate of natural videos. In this paper, we propose a
self-supervised approach for natural-movie reconstruction. By employing
cycle-consistency over Encoding-Decoding natural videos, we can: (i) exploit
the full framerate of the training videos, and not be limited only to clips
that correspond to fMRI recordings; (ii) exploit massive amounts of external
natural videos which the subjects never saw inside the fMRI machine. These
enable increasing the applicable training data by several orders of magnitude,
introducing natural video priors to the decoding network, as well as temporal
coherence. Our approach significantly outperforms competing methods, since
those train only on the limited supervised data. We further introduce a new and
simple temporal prior of natural videos which, when folded into our fMRI
decoder, further allows us to reconstruct videos at a higher frame rate (HFR)
of up to x8 the original fMRI sample rate.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 19:27:22 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 01:16:19 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Jun 2022 22:15:21 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Kupershmidt",
"Ganit",
""
],
[
"Beliy",
"Roman",
""
],
[
"Gaziv",
"Guy",
""
],
[
"Irani",
"Michal",
""
]
] |
new_dataset
| 0.998626 |
2206.05269
|
Nithin Kavi
|
Nithin Kavi
|
MapReduce for Counting Word Frequencies with MPI and GPUs
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In this project, the goal was to use the Julia programming language and
parallelization to write a fast map reduce algorithm to count word frequencies
across large numbers of documents. We first implement the word frequency
counter algorithm on a CPU using two processes with MPI. Then, we create
another implementation, but on a GPU using the Julia CUDA library, though not
using the in built map reduce algorithm within FoldsCUDA.jl. After doing this,
we apply our CPU and GPU algorithms to count the frequencies of words in
speeches given by Presidents George W Bush, Barack H Obama, Donald J Trump, and
Joseph R Biden with the aim of finding patterns in word choice that could be
used to uniquely identify each President. We find that each President does have
certain words that they use distinctly more often than their fellow Presidents,
and these words are not surprising given the political climate at the time.
The goal of this project was to create faster MapReduce algorithms in Julia
on the CPU and GPU than those written previously. We
present some simple cases of mapping functions where our GPU algorithm
outperforms Julia's FoldsCUDA implementation. We also discuss ideas for further
optimizations in the case of counting word frequencies in documents and for
these specific mapping functions.
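  The Julia, MPI, and CUDA implementations are not reproduced here, but the underlying map-reduce word-frequency idea can be sketched in a few lines; the snippet below is a generic multiprocessing illustration with placeholder speech texts, not the project's code.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool
import re

def map_count(document: str) -> Counter:
    # Map step: count word frequencies within one document
    return Counter(re.findall(r"[a-z']+", document.lower()))

def reduce_counts(a: Counter, b: Counter) -> Counter:
    # Reduce step: merge two partial counts
    return a + b

if __name__ == "__main__":
    documents = ["the state of the union ...", "my fellow americans ..."]  # placeholders
    with Pool() as pool:
        partial = pool.map(map_count, documents)      # map in parallel
    totals = reduce(reduce_counts, partial, Counter())  # sequential reduce
    print(totals.most_common(5))
```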
|
[
{
"version": "v1",
"created": "Sat, 21 May 2022 17:28:12 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Kavi",
"Nithin",
""
]
] |
new_dataset
| 0.998309 |
2206.05309
|
Pragyana Mishra
|
Pragyana Mishra and Omead Amidi and Takeo Kanade
|
EigenFairing: 3D Model Fairing using Image Coherence
|
British Machine Vision Conference, BMVC 2004, Kingston, UK, September
7-9, 2004
|
Proceedings of the British Machine Conference, pages 1-10, BMVA
Press, September 2004
|
10.5244/C.18.4
| null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
A surface is often modeled as a triangulated mesh of 3D points and textures
associated with faces of the mesh. The 3D points could be either sampled from
range data or derived from a set of images using a stereo or
Structure-from-Motion algorithm. When the points do not lie at critical points
of maximum curvature or discontinuities of the real surface, faces of the mesh
do not lie close to the modeled surface. This results in textural artifacts,
and the model is not perfectly coherent with a set of actual images -- the ones
that are used to texture-map its mesh. This paper presents a technique for
perfecting the 3D surface model by repositioning its vertices so that it is
coherent with a set of observed images of the object. The textural artifacts
and incoherence with images are due to the non-planarity of a surface patch
being approximated by a planar face, as observed from multiple viewpoints.
Image areas from the viewpoints are used to represent texture for the patch in
Eigenspace. The Eigenspace representation captures variations of texture, which
we seek to minimize. A coherence measure based on the difference between the
face textures reconstructed from Eigenspace and the actual images is used to
reposition the vertices so that the model is improved or faired. We refer to
this technique of model refinement as EigenFairing, by which the model is
faired, both geometrically and texturally, to better approximate the real
surface.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 18:13:19 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Mishra",
"Pragyana",
""
],
[
"Amidi",
"Omead",
""
],
[
"Kanade",
"Takeo",
""
]
] |
new_dataset
| 0.961856 |
2206.05319
|
Takuma Yagi
|
Takuma Yagi, Md Tasnimul Hasan, Yoichi Sato
|
Object Instance Identification in Dynamic Environments
|
Joint 1st Ego4D and 10th EPIC Workshop (EPIC@CVPR2022) Extended
Abstract
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We study the problem of identifying object instances in a dynamic environment
where people interact with the objects. In such an environment, objects'
appearance changes dynamically due to interaction with other entities, occlusion by
hands, background change, etc. This leads to a larger intra-instance variation
of appearance than in static environments. To discover the challenges in this
setting, we built a new benchmark of more than 1,500 instances on top of the
EPIC-KITCHENS dataset, which includes natural activities, and conducted an
extensive analysis of it. Experimental results suggest that (i) robustness
against instance-specific appearance change (ii) integration of low-level
(e.g., color, texture) and high-level (e.g., object category) features (iii)
foreground feature selection on overlapping objects are required for further
improvement.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 18:38:10 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Yagi",
"Takuma",
""
],
[
"Hasan",
"Md Tasnimul",
""
],
[
"Sato",
"Yoichi",
""
]
] |
new_dataset
| 0.999175 |
2206.05379
|
Aimen Zerroug
|
Aimen Zerroug, Mohit Vaishnav, Julien Colin, Sebastian Musslick,
Thomas Serre
|
A Benchmark for Compositional Visual Reasoning
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A fundamental component of human vision is our ability to parse complex
visual scenes and judge the relations between their constituent objects. AI
benchmarks for visual reasoning have driven rapid progress in recent years with
state-of-the-art systems now reaching human accuracy on some of these
benchmarks. Yet, a major gap remains in terms of the sample efficiency with
which humans and AI systems learn new visual reasoning tasks. Humans'
remarkable efficiency at learning has been at least partially attributed to
their ability to harness compositionality -- such that they can efficiently
take advantage of previously gained knowledge when learning new tasks. Here, we
introduce a novel visual reasoning benchmark, Compositional Visual Relations
(CVR), to drive progress towards the development of more data-efficient
learning algorithms. We take inspiration from fluid intelligence and
non-verbal reasoning tests and describe a novel method for creating
compositions of abstract rules and associated image datasets at scale. Our
proposed benchmark includes measures of sample efficiency, generalization and
transfer across task rules, as well as the ability to leverage
compositionality. We systematically evaluate modern neural architectures and
find that, surprisingly, convolutional architectures surpass transformer-based
architectures across all performance measures in most data regimes. However,
all computational models are a lot less data efficient compared to humans even
after learning informative visual representations using self-supervision.
Overall, we hope that our challenge will spur interest in the development of
neural architectures that can learn to harness compositionality toward more
efficient learning.
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 00:04:49 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Zerroug",
"Aimen",
""
],
[
"Vaishnav",
"Mohit",
""
],
[
"Colin",
"Julien",
""
],
[
"Musslick",
"Sebastian",
""
],
[
"Serre",
"Thomas",
""
]
] |
new_dataset
| 0.998995 |
2206.05397
|
Stephen MacDonell
|
Sherlock A. Licorish, Christoph Treude, John Grundy, Chakkrit
Tantithamthavorn, Kelly Blincoe, Stephen MacDonell, Li Li, Jean-Guy Schneider
|
Software Engineering in Australasia
|
Journal article, 1 figure, 3 pages
|
Software Engineering in Australasia, SIGSOFT Softw. Eng. Notes 46,
2(April 2021), pp. 16-17
|
10.1145/3448992.3448995
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Six months ago an important call was made for researchers globally to provide
insights into the way Software Engineering is done in their region. Heeding
this call, we hereby outline the position of Software Engineering in Australasia
(New Zealand and Australia). This article first considers the software
development methods, practices, and tools that are popular in the Australasian
software engineering community. We then briefly review the particular strengths
of software engineering researchers in Australasia. Finally, we make an open
call for collaborators by reflecting on our current position and identifying
future opportunities.
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 02:14:54 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Licorish",
"Sherlock A.",
""
],
[
"Treude",
"Christoph",
""
],
[
"Grundy",
"John",
""
],
[
"Tantithamthavorn",
"Chakkrit",
""
],
[
"Blincoe",
"Kelly",
""
],
[
"MacDonell",
"Stephen",
""
],
[
"Li",
"Li",
""
],
[
"Schneider",
"Jean-Guy",
""
]
] |
new_dataset
| 0.990057 |
2206.05414
|
Tiejun Lv
|
Jintao Xing and Tiejun Lv and Yashuai Cao and Jie Zeng and Pingmu
Huang
|
Downlink Power Minimization in Intelligent Reconfigurable Surface-Aided
Security Classification Wireless Communications System
|
13 pages, 9 figures, Accepted by IEEE Systems Journal
| null |
10.1109/JSYST.2022.3182465
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
User privacy protection is considered a critical issue in wireless networks,
which drives the demand for various secure information interaction techniques.
In this paper, we introduce an intelligent reflecting surface (IRS)-aided
security classification wireless communication system, which reduces the
transmit power of the base station (BS) by classifying users with different
security requirements. Specifically, we divide the users into confidential
subscribers with secure communication requirements and general communication
users with simple communication requirements. During the communication period,
we guarantee the secure rate of the confidential subscribers while ensuring the
service quality of the general communication users, thereby reducing the
transmit power of the BS. To realize such a secure and green information
transmission, the BS implements a beamforming design on the transmitted signal
superimposed with artificial noise (AN) and then broadcasts it to users with
the assistance of the IRS's reflection. We develop an alternating optimization
framework to minimize the BS downlink power with respect to the active
beamformers of the BS, the AN vector at the BS, and the reflection phase shifts
of the IRS. A successive convex approximation (SCA) method is proposed so that
the nonconvex beamforming problems can be converted to tractable convex forms.
The simulation results demonstrate that the proposed algorithm is convergent
and can reduce the transmit power by 20% compared to the best benchmark
scheme.
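  As a rough sketch of the kind of problem described above (the exact notation, secrecy-rate expressions, and constraint set in the paper may differ), the downlink power minimization can be written as:

```latex
\begin{aligned}
\min_{\{\mathbf{w}_k\},\ \mathbf{z},\ \boldsymbol{\Theta}}\quad
  & \sum_{k}\|\mathbf{w}_k\|^2 + \|\mathbf{z}\|^2
  && \text{(BS power: beamformers + artificial noise)}\\
\mathrm{s.t.}\quad
  & R_k^{\mathrm{sec}} \ge R_{\min},\ k \in \mathcal{K}_{\mathrm{conf}}
  && \text{(secrecy rate of confidential subscribers)}\\
  & \mathrm{SINR}_j \ge \gamma_{\min},\ j \in \mathcal{K}_{\mathrm{gen}}
  && \text{(service quality of general users)}\\
  & \big|[\boldsymbol{\Theta}]_{ii}\big| = 1,\ \forall i
  && \text{(unit-modulus IRS phase shifts)}
\end{aligned}
```

  Alternating optimization then fixes the IRS phases while solving for the beamformers and AN (and vice versa), with SCA convexifying the nonconvex rate constraints in each subproblem.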
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 04:02:27 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Xing",
"Jintao",
""
],
[
"Lv",
"Tiejun",
""
],
[
"Cao",
"Yashuai",
""
],
[
"Zeng",
"Jie",
""
],
[
"Huang",
"Pingmu",
""
]
] |
new_dataset
| 0.975643 |
2206.05418
|
Jianfeng Zhan
|
Yatao Li, Jianfeng Zhan
|
SAIBench: Benchmarking AI for Science
|
Published in BenchCouncil Transactions on Benchmarks, Standards and
Evaluations (TBench)
| null |
10.1016/j.tbench.2022.100063
| null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Scientific research communities are embracing AI-based solutions to target
tractable scientific tasks and improve research workflows. However, the
development and evaluation of such solutions are scattered across multiple
disciplines. We formalize the problem of scientific AI benchmarking, and
propose a system called SAIBench in the hope of unifying the efforts and
enabling low-friction on-boarding of new disciplines. The system approaches
this goal with SAIL, a domain-specific language to decouple research problems,
AI models, ranking criteria, and software/hardware configuration into reusable
modules. We show that this approach is flexible and can adapt to problems, AI
models, and evaluation methods defined in different perspectives. The project
homepage is https://www.computercouncil.org/SAIBench
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 04:19:51 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Li",
"Yatao",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.983176 |
2206.05542
|
Varun Ravi Kumar
|
Varun Ravi Kumar
|
Surround-View Cameras based Holistic Visual Perception for Automated
Driving
|
Doctoral thesis
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The formation of eyes led to the big bang of evolution. The dynamics changed
from primitive organisms passively waiting for food to come into contact with
them to food being actively sought out using visual sensors. The human eye is
one of the most sophisticated developments of evolution, but it still has
defects. Over millions of years, humans have evolved a biological perception
algorithm capable of driving cars, operating machinery, piloting aircraft, and
navigating ships.
Automating these capabilities for computers is critical for various
applications, including self-driving cars, augmented reality, and architectural
surveying. Near-field visual perception in the context of self-driving cars
covers the environment within a range of $0-10$ meters and provides 360{\deg}
coverage around the vehicle. It is a critical decision-making component in the
development of safer automated driving. Recent advances in computer vision and
deep learning, in conjunction with high-quality sensors such as cameras and
LiDARs, have fueled mature visual perception solutions. Until now, far-field
perception has been the primary focus. Another significant issue is the limited
processing power available for developing real-time applications. Because of
this bottleneck, there is frequently a trade-off between performance and
run-time efficiency. We concentrate on the following issues in order to address
them: 1) Developing near-field perception algorithms with high performance and
low computational complexity for various visual perception tasks such as
geometric and semantic tasks using convolutional neural networks. 2) Using
Multi-Task Learning to overcome computational bottlenecks by sharing initial
convolutional layers between tasks and developing optimization strategies that
balance tasks.
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 14:51:30 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Kumar",
"Varun Ravi",
""
]
] |
new_dataset
| 0.986594 |
2206.05759
|
Sarah Obead
|
Sarah A. Obead and J\"org Kliewer
|
Pliable Private Information Retrieval
|
23 pages, 3 figures, 3 tables, submitted for possible publication
| null | null | null |
cs.IT cs.IR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We formulate a new variant of the private information retrieval (PIR) problem
where the user is pliable, i.e., interested in any message from a desired
subset of the available dataset, denoted as pliable private information
retrieval (PPIR). We consider a setup where a dataset consisting of $f$
messages is replicated in $n$ noncolluding databases and classified into
$\Gamma$ classes. For this setup, the user wishes to retrieve any $\lambda\geq
1$ messages from multiple desired classes, i.e., $\eta\geq 1$, while revealing
no information about the identity of the desired classes to the databases. We
term this problem multi-message PPIR (M-PPIR) and introduce the single-message
PPIR (PPIR) problem as an elementary special case of M-PPIR. We first derive
converse bounds on the M-PPIR rate, which is defined as the ratio of the
desired amount of information and the total amount of downloaded information,
followed by the corresponding achievable schemes. As a result, we show that the
PPIR capacity, i.e., the maximum achievable PPIR rate, for $n$ noncolluding
databases matches the capacity of PIR with $n$ databases and $\Gamma$ messages.
Thus, enabling flexibility, i.e., pliability, where privacy is only guaranteed
for classes, but not for messages as in classical PIR, allows to trade-off
privacy versus download rate. A similar insight is shown to hold for the
general case of M-PPIR.
|
[
{
"version": "v1",
"created": "Sun, 12 Jun 2022 15:04:03 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Obead",
"Sarah A.",
""
],
[
"Kliewer",
"Jörg",
""
]
] |
new_dataset
| 0.958144 |
2206.05771
|
Linh K\"astner
|
Linh K\"astner, Bassel Fatloun, Zhengcheng Shen, Daniel Gawrisch, and
Jens Lambrecht
|
Human-Following and -guiding in Crowded Environments using Semantic
Deep-Reinforcement-Learning for Mobile Service Robots
|
IEEE International Conference on Robotics and Automation 2022, 7
pages, 4 figures
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Assistance robots have gained widespread attention in various industries such
as logistics and human assistance. The task of guiding or following a human in
crowded environments such as airports or train stations to carry weight or
goods is still an open problem. In these use cases, the robot is not only
required to intelligently interact with humans, but also to navigate safely
among crowds. Thus, especially highly dynamic environments pose a grand
challenge due to the volatile behavior patterns and unpredictable movements of
humans. In this paper, we propose a Deep-Reinforcement-Learning-based agent for
human-guiding and -following tasks in crowded environments. Therefore, we
incorporate semantic information to provide the agent with high-level
information like the social states of humans, safety models, and class types.
We evaluate our proposed approach against a benchmark approach without semantic
information and demonstrated enhanced navigational safety and robustness.
Moreover, we demonstrate that the agent could learn to adapt its behavior to
humans, which improves the human-robot interaction significantly.
|
[
{
"version": "v1",
"created": "Sun, 12 Jun 2022 15:29:31 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Kästner",
"Linh",
""
],
[
"Fatloun",
"Bassel",
""
],
[
"Shen",
"Zhengcheng",
""
],
[
"Gawrisch",
"Daniel",
""
],
[
"Lambrecht",
"Jens",
""
]
] |
new_dataset
| 0.980407 |