| Column | Type | Details |
|---|---|---|
| id | string | lengths 9 to 10 |
| submitter | string, nullable (⌀) | lengths 2 to 52 |
| authors | string | lengths 4 to 6.51k |
| title | string | lengths 4 to 246 |
| comments | string, nullable (⌀) | lengths 1 to 523 |
| journal-ref | string, nullable (⌀) | lengths 4 to 345 |
| doi | string, nullable (⌀) | lengths 11 to 120 |
| report-no | string, nullable (⌀) | lengths 2 to 243 |
| categories | string | lengths 5 to 98 |
| license | string | 9 classes |
| abstract | string | lengths 33 to 3.33k |
| versions | list | version label and creation timestamp per entry |
| update_date | timestamp[s] | |
| authors_parsed | list | [last name, first name, suffix] per author |
| prediction | string | 1 class (`new_dataset`) |
| probability | float64 | 0.95 to 1 |

Each record below lists these fields in this order, separated by `|`, with `null` marking empty values.
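As a minimal illustration, records with this schema could be loaded and ranked as follows once exported to JSON Lines; the file name is a hypothetical assumption:

```python
# Minimal sketch: load records with the schema above from a JSON Lines export
# and rank by prediction confidence. The file name is a hypothetical assumption.
import pandas as pd

df = pd.read_json("arxiv_new_dataset.jsonl", lines=True)

# All rows here carry prediction == "new_dataset"; probability (0.95-1.0 per
# the schema) can still be used to rank or threshold them.
confident = df[df["probability"] >= 0.99]
print(confident[["id", "title", "probability"]]
      .sort_values("probability", ascending=False)
      .head())
```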
2208.08330
|
Jungho Ahn
|
Jungho Ahn, Seonghyuk Im, and Sang-il Oum
|
The proper conflict-free $k$-coloring problem and the odd $k$-coloring
problem are NP-complete on bipartite graphs
|
13 pages, 2 figures
| null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A proper coloring of a graph is \emph{proper conflict-free} if every
non-isolated vertex $v$ has a neighbor whose color is unique in the
neighborhood of $v$. A proper coloring of a graph is \emph{odd} if for every
non-isolated vertex $v$, there is a color appearing an odd number of times in
the neighborhood of $v$. For an integer $k$, the \textsc{PCF $k$-Coloring}
problem asks whether an input graph admits a proper conflict-free $k$-coloring
and the \textsc{Odd $k$-Coloring} problem asks whether an input graph admits an odd
$k$-coloring. We show that for every integer $k\geq3$, both problems are
NP-complete, even if the input graph is bipartite. Furthermore, we show that
the \textsc{PCF $4$-Coloring} problem is NP-complete when the input graph is
planar.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 14:53:14 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Ahn",
"Jungho",
""
],
[
"Im",
"Seonghyuk",
""
],
[
"Oum",
"Sang-il",
""
]
] |
new_dataset
| 0.999225 |
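The two coloring notions defined in the abstract above can be checked directly from their definitions. A small illustrative sketch follows; the example graph and coloring are ours, not from the paper:

```python
# Check the two notions defined above: a proper coloring is conflict-free if
# every non-isolated vertex has a neighbor whose color is unique in its
# neighborhood, and odd if some color appears an odd number of times in every
# non-isolated vertex's neighborhood. The example graph/coloring are ours.
from collections import Counter

def is_proper(adj, color):
    return all(color[u] != color[v] for u in adj for v in adj[u])

def neighborhood_counts(adj, color, v):
    return Counter(color[u] for u in adj[v])

def is_pcf(adj, color):
    return is_proper(adj, color) and all(
        1 in neighborhood_counts(adj, color, v).values()
        for v in adj if adj[v])  # isolated vertices impose no constraint

def is_odd(adj, color):
    return is_proper(adj, color) and all(
        any(c % 2 == 1 for c in neighborhood_counts(adj, color, v).values())
        for v in adj if adj[v])

# The 4-cycle (bipartite) with four colors is proper conflict-free and odd.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
color = {0: 0, 1: 1, 2: 2, 3: 3}
print(is_pcf(adj, color), is_odd(adj, color))  # True True
```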
2208.08374
|
Pradyumna Tambwekar
|
Pradyumna Tambwekar, Nathan Vaska, Lakshita Dodeja, Matthew Gombolay
|
Commander's Intent: A Dataset and Modeling Approach for Human-AI Task
Specification in Strategic Play
|
12 Pages, 5 figures, 1 page appendix
| null | null | null |
cs.AI cs.CL cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective Human-AI teaming requires the ability to communicate the goals of
the team and the constraints under which the agent must operate. Providing
the ability to specify the shared intent or operation criteria of the team can
enable an AI agent to perform its primary function while still being able to
cater to the specific desires of the current team. While significant work has
been conducted to instruct an agent to perform a task, via language or
demonstrations, prior work lacks a focus on building agents which can operate
within the parameters specified by a team. Worse yet, there is a dearth of
research pertaining to enabling humans to provide their specifications through
unstructured, naturalistic language. In this paper, we propose the use of goals
and constraints as a scaffold to modulate and evaluate autonomous agents. We
contribute to this field by presenting a novel dataset, and an associated data
collection protocol, which maps language descriptions to goals and constraints
corresponding to specific strategies developed by human participants for the
board game Risk. Leveraging state-of-the-art language models and augmentation
procedures, we develop a machine learning framework which can be used to
identify goals and constraints from unstructured strategy descriptions. To
empirically validate our approach, we conduct a human-subjects study to
establish a human baseline for our dataset. Our results show that our machine
learning architecture is better able to interpret unstructured language
descriptions into strategy specifications than human raters tasked with
performing the same machine translation task (F(1,272.53) = 17.025, p < 0.001).
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 16:11:07 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Tambwekar",
"Pradyumna",
""
],
[
"Vaska",
"Nathan",
""
],
[
"Dodeja",
"Lakshita",
""
],
[
"Gombolay",
"Matthew",
""
]
] |
new_dataset
| 0.99946 |
2208.08439
|
Vladislav Golyanik
|
Zhi Li and Soshi Shimada and Bernt Schiele and Christian Theobalt and
Vladislav Golyanik
|
MoCapDeform: Monocular 3D Human Motion Capture in Deformable Scenes
|
11 pages, 8 figures, 3 tables; project page:
https://4dqv.mpi-inf.mpg.de/MoCapDeform/
|
International Conference on 3D Vision 2022 (Oral)
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D human motion capture from monocular RGB images respecting interactions of
a subject with complex and possibly deformable environments is a very
challenging, ill-posed and under-explored problem. Existing methods address it
only weakly and do not model possible surface deformations often occurring when
humans interact with scene surfaces. In contrast, this paper proposes
MoCapDeform, i.e., a new framework for monocular 3D human motion capture that
is the first to explicitly model non-rigid deformations of a 3D scene for
improved 3D human pose estimation and deformable environment reconstruction.
MoCapDeform accepts a monocular RGB video and a 3D scene mesh aligned in the
camera space. It first localises a subject in the input monocular video along
with dense contact labels using a new raycasting based strategy. Next, our
human-environment interaction constraints are leveraged to jointly optimise
global 3D human poses and non-rigid surface deformations. MoCapDeform achieves
higher accuracy than competing methods on several datasets, including our
newly recorded one with deforming background scenes.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 17:59:54 GMT"
}
] | 2022-08-18T00:00:00 |
[
[
"Li",
"Zhi",
""
],
[
"Shimada",
"Soshi",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Golyanik",
"Vladislav",
""
]
] |
new_dataset
| 0.999048 |
2112.07642
|
Siwei Zhang
|
Siwei Zhang, Qianli Ma, Yan Zhang, Zhiyin Qian, Taein Kwon, Marc
Pollefeys, Federica Bogo, Siyu Tang
|
EgoBody: Human Body Shape and Motion of Interacting People from
Head-Mounted Devices
|
Camera ready version for ECCV 2022, appendix included
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding social interactions from egocentric views is crucial for many
applications, ranging from assistive robotics to AR/VR. Key to reasoning about
interactions is to understand the body pose and motion of the interaction
partner from the egocentric view. However, research in this area is severely
hindered by the lack of datasets. Existing datasets are limited in terms of
either size, capture/annotation modalities, ground-truth quality, or
interaction diversity. We fill this gap by proposing EgoBody, a novel
large-scale dataset for human pose, shape and motion estimation from egocentric
views, during interactions in complex 3D scenes. We employ Microsoft HoloLens2
headsets to record rich egocentric data streams (including RGB, depth, eye
gaze, head and hand tracking). To obtain accurate 3D ground truth, we calibrate
the headset with a multi-Kinect rig and fit expressive SMPL-X body meshes to
multi-view RGB-D frames, reconstructing 3D human shapes and poses relative to
the scene, over time. We collect 125 sequences, spanning diverse interaction
scenarios, and propose the first benchmark for 3D full-body pose and shape
estimation of the social partner from egocentric views. We extensively evaluate
state-of-the-art methods, highlight their limitations in the egocentric
scenario, and address such limitations leveraging our high-quality annotations.
Data and code are available at
https://sanweiliti.github.io/egobody/egobody.html.
|
[
{
"version": "v1",
"created": "Tue, 14 Dec 2021 18:41:28 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Jul 2022 19:36:25 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Aug 2022 16:52:07 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Zhang",
"Siwei",
""
],
[
"Ma",
"Qianli",
""
],
[
"Zhang",
"Yan",
""
],
[
"Qian",
"Zhiyin",
""
],
[
"Kwon",
"Taein",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Bogo",
"Federica",
""
],
[
"Tang",
"Siyu",
""
]
] |
new_dataset
| 0.99936 |
2201.04788
|
Weiling Chen
|
Weiling Chen, Sheng Lun Benjamin Chua, Stefan Winkler, See-Kiong Ng
|
Trusted Media Challenge Dataset and User Study
| null | null |
10.1145/3511808.3557715
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The development of powerful deep learning technologies has also brought
negative effects to society and individuals. One such issue is the
emergence of fake media. To tackle the issue, we have organized the Trusted
Media Challenge (TMC) to explore how Artificial Intelligence (AI) technologies
could be leveraged to combat fake media. To enable further research, we are
releasing the dataset that we had prepared from the TMC challenge, consisting
of 4,380 fake and 2,563 real videos, with various video and/or audio
manipulation methods employed to produce different types of fake media. All the
videos in the TMC dataset are accompanied by audio and have a minimum
resolution of 360p. The videos have various durations, backgrounds, and
illumination conditions, and may contain perturbations that mimic transmission errors and
compression. We have also carried out a user study to demonstrate the quality
of the TMC dataset and to compare the performance of humans and AI models. The
results showed that the TMC dataset can fool human participants in many cases,
and the winning AI models of the Trusted Media Challenge outperformed humans.
The TMC dataset is available for research purposes upon request via
tmc-dataset@aisingapore.org.
|
[
{
"version": "v1",
"created": "Thu, 13 Jan 2022 04:32:52 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Mar 2022 12:23:23 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Aug 2022 06:40:07 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Chen",
"Weiling",
""
],
[
"Chua",
"Sheng Lun Benjamin",
""
],
[
"Winkler",
"Stefan",
""
],
[
"Ng",
"See-Kiong",
""
]
] |
new_dataset
| 0.973251 |
2202.11503
|
Fan Zhu
|
Fan Zhu, Ruixing Jia, Lei Yang, Youcan Yan, Zheng Wang, Jia Pan,
Wenping Wang
|
Visual-Tactile Sensing for Real-time Liquid Volume Estimation in
Grasping
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a deep visuo-tactile model for real-time estimation of the liquid
inside a deformable container in a proprioceptive way. We fuse two sensory
modalities, i.e., the raw visual inputs from the RGB camera and the tactile
cues from our specific tactile sensor, without any extra sensor calibration.
The robotic system is well controlled and adjusted based on the estimation
model in real time. The main contributions and novelties of our work are listed as
follows: 1) Explore a proprioceptive way for liquid volume estimation by
developing an end-to-end predictive model with multi-modal convolutional
networks, which achieves high precision with an error of around 2 ml in the
experimental validation. 2) Propose a multi-task learning architecture which
comprehensively considers the losses from both classification and regression
tasks, and comparatively evaluate the performance of each variant on the
collected data and actual robotic platform. 3) Utilize the proprioceptive
robotic system to accurately serve and control the requested volume of liquid,
which is continuously flowing into a deformable container in real time. 4)
Adaptively adjust the grasping plan to achieve more stable grasping and
manipulation according to the real-time liquid volume prediction.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 13:38:31 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 02:32:39 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Zhu",
"Fan",
""
],
[
"Jia",
"Ruixing",
""
],
[
"Yang",
"Lei",
""
],
[
"Yan",
"Youcan",
""
],
[
"Wang",
"Zheng",
""
],
[
"Pan",
"Jia",
""
],
[
"Wang",
"Wenping",
""
]
] |
new_dataset
| 0.984874 |
2202.13330
|
Xiaofeng Gao
|
Xiaofeng Gao, Qiaozi Gao, Ran Gong, Kaixiang Lin, Govind Thattai,
Gaurav S. Sukhatme
|
DialFRED: Dialogue-Enabled Agents for Embodied Instruction Following
|
8 pages, 5 figures, accepted by RA-L
|
IEEE Robotics and Automation Letters, vol. 7, no. 4, pp.
10049-10056, Oct. 2022
|
10.1109/LRA.2022.3193254
| null |
cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language-guided Embodied AI benchmarks requiring an agent to navigate an
environment and manipulate objects typically allow one-way communication: the
human user gives a natural language command to the agent, and the agent can
only follow the command passively. We present DialFRED, a dialogue-enabled
embodied instruction following benchmark based on the ALFRED benchmark.
DialFRED allows an agent to actively ask questions to the human user; the
additional information in the user's response is used by the agent to better
complete its task. We release a human-annotated dataset with 53K task-relevant
questions and answers and an oracle to answer questions. To solve DialFRED, we
propose a questioner-performer framework wherein the questioner is pre-trained
with the human-annotated data and fine-tuned with reinforcement learning. We
make DialFRED publicly available and encourage researchers to propose and
evaluate their solutions to building dialog-enabled embodied agents.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 09:50:45 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2022 19:42:54 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Gao",
"Xiaofeng",
""
],
[
"Gao",
"Qiaozi",
""
],
[
"Gong",
"Ran",
""
],
[
"Lin",
"Kaixiang",
""
],
[
"Thattai",
"Govind",
""
],
[
"Sukhatme",
"Gaurav S.",
""
]
] |
new_dataset
| 0.996341 |
2204.03864
|
Jing Li
|
Qidan Zhu, Jing Li, Fei Yuan, Quan Gan
|
Multi-scale temporal network for continuous sign language recognition
|
10 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continuous Sign Language Recognition (CSLR) is a challenging research task
due to the lack of accurate annotation on the temporal sequence of sign
language data. A popular recent approach to CSLR is a hybrid "CNN + RNN"
model. However, when extracting temporal features, most of these methods
use a fixed temporal receptive field and cannot extract the temporal features
well for each sign language word. In order to obtain more
accurate temporal features, this paper proposes a multi-scale temporal network
(MSTNet). The network mainly consists of three parts. A ResNet and two fully
connected (FC) layers constitute the frame-wise feature extraction part. The
time-wise feature extraction part performs temporal feature learning by first
extracting temporal receptive field features of different scales using the
proposed multi-scale temporal block (MST-block) to improve the temporal
modeling capability, and then further encoding the temporal features of
different scales by the transformers module to obtain more accurate temporal
features. Finally, the proposed multi-level Connectionist Temporal
Classification (CTC) loss part is used for training to obtain recognition
results. The multi-level CTC loss enables better learning and updating of the
shallow network parameters in the CNN; the method adds no parameters and
can be flexibly embedded in other models. Experimental results on two publicly
available datasets demonstrate that our method can effectively extract sign
language features in an end-to-end manner without any prior knowledge,
improving the accuracy of CSLR and achieving competitive results.
|
[
{
"version": "v1",
"created": "Fri, 8 Apr 2022 06:14:22 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 06:36:09 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Zhu",
"Qidan",
""
],
[
"Li",
"Jing",
""
],
[
"Yuan",
"Fei",
""
],
[
"Gan",
"Quan",
""
]
] |
new_dataset
| 0.98515 |
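As a rough illustration of the multi-level CTC objective described in the abstract above, a generic sketch follows; the shapes, the two-level setup, and all names are assumptions, not the authors' MSTNet code:

```python
# Generic sketch of a multi-level CTC objective: sum CTC losses over gloss
# predictions taken at several network depths, as the abstract describes.
# Shapes, the two-level setup, and names are assumptions, not MSTNet itself.
import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0, zero_infinity=True)

def multi_level_ctc_loss(level_logits, targets, input_lens, target_lens):
    """level_logits: list of (T, N, num_classes) tensors, one per level."""
    total = 0.0
    for logits in level_logits:
        total = total + ctc(logits.log_softmax(dim=-1),
                            targets, input_lens, target_lens)
    return total

# Toy example: two prediction levels, batch of 2, 50 frames, 100 gloss classes.
T, N, C, S = 50, 2, 100, 10
levels = [torch.randn(T, N, C) for _ in range(2)]
targets = torch.randint(1, C, (N, S))          # class 0 is the CTC blank
input_lens = torch.full((N,), T, dtype=torch.long)
target_lens = torch.full((N,), S, dtype=torch.long)
print(multi_level_ctc_loss(levels, targets, input_lens, target_lens))
```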
2205.01643
|
Jinze Yu
|
Jinze Yu, Jiaming Liu, Xiaobao Wei, Haoyi Zhou, Yohei Nakata, Denis
Gudovskiy, Tomoyuki Okuno, Jianxin Li, Kurt Keutzer, Shanghang Zhang
|
MTTrans: Cross-Domain Object Detection with Mean-Teacher Transformer
|
Accepted by ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, DEtection TRansformer (DETR), an end-to-end object detection
pipeline, has achieved promising performance. However, it requires large-scale
labeled data and suffers from domain shift, especially when no labeled data is
available in the target domain. To solve this problem, we propose an end-to-end
cross-domain detection Transformer based on the mean teacher framework,
MTTrans, which can fully exploit unlabeled target domain data in object
detection training and transfer knowledge between domains via pseudo labels. We
further propose the comprehensive multi-level feature alignment to improve the
pseudo labels generated by the mean teacher framework taking advantage of the
cross-scale self-attention mechanism in Deformable DETR. Image and object
features are aligned at the local, global, and instance levels with domain
query-based feature alignment (DQFA), bi-level graph-based prototype alignment
(BGPA), and token-wise image feature alignment (TIFA). On the other hand, the
unlabeled target domain data pseudo-labeled and available for the object
detection training by the mean teacher framework can lead to better feature
extraction and alignment. Thus, the mean teacher framework and the
comprehensive multi-level feature alignment can be optimized iteratively and
mutually based on the architecture of Transformers. Extensive experiments
demonstrate that our proposed method achieves state-of-the-art performance in
three domain adaptation scenarios; in particular, the result on the Sim10k to
Cityscapes scenario improves markedly from 52.6 mAP to 57.9 mAP. Code will
be released.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 17:11:55 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 09:55:23 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Yu",
"Jinze",
""
],
[
"Liu",
"Jiaming",
""
],
[
"Wei",
"Xiaobao",
""
],
[
"Zhou",
"Haoyi",
""
],
[
"Nakata",
"Yohei",
""
],
[
"Gudovskiy",
"Denis",
""
],
[
"Okuno",
"Tomoyuki",
""
],
[
"Li",
"Jianxin",
""
],
[
"Keutzer",
"Kurt",
""
],
[
"Zhang",
"Shanghang",
""
]
] |
new_dataset
| 0.999402 |
2205.14540
|
Feng Liang
|
Feng Liang, Yangguang Li, Diana Marculescu
|
SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners
|
Technical report. Codes are available
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, self-supervised Masked Autoencoders (MAE) have attracted
unprecedented attention for their impressive representation learning ability.
However, the pretext task, Masked Image Modeling (MIM), reconstructs the
missing local patches, lacking the global understanding of the image. This
paper extends MAE to a fully-supervised setting by adding a supervised
classification branch, thereby enabling MAE to effectively learn global
features from golden labels. The proposed Supervised MAE (SupMAE) only exploits
a visible subset of image patches for classification, unlike the standard
supervised pre-training where all image patches are used. Through experiments,
we demonstrate that not only is SupMAE more training-efficient, but it also
learns more robust and transferable features. Specifically, SupMAE achieves
comparable performance with MAE using only 30% of compute when evaluated on
ImageNet with the ViT-B/16 model. SupMAE's robustness on ImageNet variants and
transfer learning performance outperforms MAE and standard supervised
pre-training counterparts. Code will be made publicly available.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 23:05:03 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 17:49:32 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Liang",
"Feng",
""
],
[
"Li",
"Yangguang",
""
],
[
"Marculescu",
"Diana",
""
]
] |
new_dataset
| 0.995837 |
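A generic sketch of the supervised branch idea described above, classifying from visible-patch features only; the module shapes, the pooling choice, and the 75% mask ratio are assumptions for illustration, not the authors' implementation:

```python
# Sketch of the SupMAE idea described above: a classification head that
# operates only on encoded visible patches, alongside MAE reconstruction.
# Encoder/head shapes and the mask ratio are illustrative assumptions.
import torch
import torch.nn as nn

class SupervisedBranch(nn.Module):
    def __init__(self, embed_dim=768, num_classes=1000):
        super().__init__()
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, visible_tokens):
        # visible_tokens: (N, num_visible_patches, embed_dim) from the MAE
        # encoder, i.e., only the unmasked subset of image patches.
        pooled = self.norm(visible_tokens.mean(dim=1))  # global average pool
        return self.head(pooled)

branch = SupervisedBranch()
logits = branch(torch.randn(8, 49, 768))  # 49 visible of 196 patches (75% masked)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 1000, (8,)))
# The total objective would combine this with the MAE reconstruction loss.
```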
2207.01298
|
Jingyao Zhang
|
Jingyao Zhang, Hoda Naghibijouybari, Elaheh Sadredini
|
Sealer: In-SRAM AES for High-Performance and Low-Overhead Memory
Encryption
|
6 pages, ISLPED 2022
| null |
10.1145/3531437.3539699
| null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
To provide data and code confidentiality and reduce the risk of information
leakage from memory or the memory bus, computing systems are enhanced with an
encryption and decryption engine. Despite massive efforts in designing hardware
enhancements for data and code protection, existing solutions incur significant
performance overhead as the encryption/decryption is on the critical path. In
this paper, we present Sealer, a high-performance and low-overhead in-SRAM
memory encryption engine by exploiting the massive parallelism and bitline
computational capability of SRAM subarrays. Sealer encrypts data before sending
it off-chip and decrypts it upon receiving the memory blocks, thus, providing
data confidentiality. Our proposed solution requires only minimal modifications
to the existing SRAM peripheral circuitry. Sealer can achieve up to two orders
of magnitude throughput-per-area improvement while consuming 3x less energy
compared to the prior solutions.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 10:04:05 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 07:39:19 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Zhang",
"Jingyao",
""
],
[
"Naghibijouybari",
"Hoda",
""
],
[
"Sadredini",
"Elaheh",
""
]
] |
new_dataset
| 0.994454 |
2207.10733
|
Sebastian J\"ager
|
Sebastian J\"ager, Alexander Flick, Jessica Adriana Sanchez Garcia,
Kaspar von den Driesch, Karl Brendel, Felix Biessmann
|
GreenDB -- A Dataset and Benchmark for Extraction of Sustainability
Information of Consumer Goods
|
Presented at DataPerf Workshop at the 39th International Conference
on Machine Learning, Baltimore, Maryland, USA, 2022
| null | null | null |
cs.LG cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The production, shipping, usage, and disposal of consumer goods have a
substantial impact on greenhouse gas emissions and the depletion of resources.
Machine Learning (ML) can help to foster sustainable consumption patterns by
accounting for sustainability aspects in product search or recommendations of
modern retail platforms. However, the lack of large, high-quality, publicly
available product data with trustworthy sustainability information impedes the
development of ML technology that can help to reach our sustainability goals.
Here we present GreenDB, a database that collects products from European online
shops on a weekly basis. As a proxy for the products' sustainability, it relies
on sustainability labels, which are evaluated by experts. The GreenDB schema
extends the well-known schema.org Product definition and can be readily
integrated into existing product catalogs. We present initial results
demonstrating that ML models trained with our data can reliably (F1 score 96%)
predict the sustainability label of products. These contributions can help to
complement existing e-commerce experiences and ultimately encourage users
toward more sustainable consumption patterns.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 19:59:42 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 09:06:29 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Aug 2022 16:46:42 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Jäger",
"Sebastian",
""
],
[
"Flick",
"Alexander",
""
],
[
"Garcia",
"Jessica Adriana Sanchez",
""
],
[
"Driesch",
"Kaspar von den",
""
],
[
"Brendel",
"Karl",
""
],
[
"Biessmann",
"Felix",
""
]
] |
new_dataset
| 0.999794 |
2207.13591
|
Max Argus
|
Lukas Hermann, Max Argus, Adrian Roefer, Abhinav Valada, Thomas Brox
|
RobotIO: A Python Library for Robot Manipulation Experiments
|
6 pages, 3 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Setting up robot environments to quickly test newly developed algorithms is
still a difficult and time-consuming process. This presents a significant
hurdle to researchers interested in performing real-world robotic experiments.
RobotIO is a Python library designed to solve this problem. It focuses on
providing common, simple, and well-structured Python interfaces for robots,
grippers, cameras, and other devices, together with implementations of these
interfaces for common hardware. This enables code using RobotIO to be portable
be compatible with OpenAI gym environments, as well as ROS; examples of both of
these are provided. The library comes together with a number of helpful tools,
such as camera calibration scripts and episode recording functionality that
further support algorithm development.
|
[
{
"version": "v1",
"created": "Wed, 27 Jul 2022 15:46:13 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 10:54:37 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Hermann",
"Lukas",
""
],
[
"Argus",
"Max",
""
],
[
"Roefer",
"Adrian",
""
],
[
"Valada",
"Abhinav",
""
],
[
"Brox",
"Thomas",
""
]
] |
new_dataset
| 0.999215 |
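The interface-plus-implementations design described above might look roughly like the following; all class and function names here are hypothetical illustrations, not RobotIO's actual API:

```python
# Hypothetical sketch of the interface style the abstract describes; these
# class and method names are illustrative only, NOT RobotIO's actual API.
from abc import ABC, abstractmethod

class GripperInterface(ABC):
    """Common gripper interface; concrete subclasses wrap specific hardware."""

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def close(self) -> None: ...

class FakeGripper(GripperInterface):
    """Stand-in implementation, useful for testing policy code off-robot."""

    def open(self) -> None:
        print("gripper opened")

    def close(self) -> None:
        print("gripper closed")

def run_episode(gripper: GripperInterface) -> None:
    # Code written against the interface stays portable across robot setups.
    gripper.close()
    gripper.open()

run_episode(FakeGripper())
```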
2208.06555
|
Foivos Tsimpourlas
|
Foivos Tsimpourlas, Pavlos Petoumenos, Min Xu, Chris Cummins, Kim
Hazelwood, Ajitha Rajan and Hugh Leather
|
BenchPress: A Deep Active Benchmark Generator
|
To appear in PACT 2022
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We develop BenchPress, the first ML benchmark generator for compilers that is
steerable within feature space representations of source code. BenchPress
synthesizes compiling functions by adding new code in any part of an empty or
existing sequence by jointly observing its left and right context, achieving an
excellent compilation rate. BenchPress steers benchmark generation towards
desired target features that have been impossible for state-of-the-art
synthesizers (or indeed humans) to reach. It performs better in targeting the
features of Rodinia benchmarks in 3 different feature spaces compared with (a)
CLgen - a state of the art ML synthesizer, (b) CLSmith fuzzer, (c) SRCIROR
mutator or even (d) human-written code from GitHub. BenchPress is the first
generator to search the feature space with active learning in order to generate
benchmarks that will improve a downstream task. We show how, using BenchPress,
Grewe et al.'s CPU vs GPU heuristic model can obtain a higher speedup when
trained on BenchPress's benchmarks compared to other techniques. BenchPress is
a powerful code generator: Its generated samples compile at a rate of 86%,
compared to CLgen's 2.33%. Starting from an empty fixed input, BenchPress
produces 10x more unique, compiling OpenCL benchmarks than CLgen, which are
significantly larger and more feature diverse.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 03:00:50 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2022 00:40:44 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Tsimpourlas",
"Foivos",
""
],
[
"Petoumenos",
"Pavlos",
""
],
[
"Xu",
"Min",
""
],
[
"Cummins",
"Chris",
""
],
[
"Hazelwood",
"Kim",
""
],
[
"Rajan",
"Ajitha",
""
],
[
"Leather",
"Hugh",
""
]
] |
new_dataset
| 0.998511 |
2208.07368
|
Conrad Hougen
|
Conrad D. Hougen, Lance M. Kaplan, Magdalena Ivanovska, Federico
Cerutti, Kumar Vijay Mishra and Alfred O. Hero III
|
SOLBP: Second-Order Loopy Belief Propagation for Inference in Uncertain
Bayesian Networks
|
8 pages, appeared at FUSION 2022: 25th International Conference on
Information Fusion
| null | null | null |
cs.AI cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
In second-order uncertain Bayesian networks, the conditional probabilities
are only known within distributions, i.e., probabilities over probabilities.
The delta-method has been applied to extend exact first-order inference methods
to propagate both means and variances through sum-product networks derived from
Bayesian networks, thereby characterizing epistemic uncertainty, or the
uncertainty in the model itself. Alternatively, second-order belief propagation
has been demonstrated for polytrees but not for general directed acyclic graph
structures. In this work, we extend Loopy Belief Propagation to the setting of
second-order Bayesian networks, giving rise to Second-Order Loopy Belief
Propagation (SOLBP). For second-order Bayesian networks, SOLBP generates
inferences consistent with those generated by sum-product networks, while being
more computationally efficient and scalable.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 07:44:15 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Hougen",
"Conrad D.",
""
],
[
"Kaplan",
"Lance M.",
""
],
[
"Ivanovska",
"Magdalena",
""
],
[
"Cerutti",
"Federico",
""
],
[
"Mishra",
"Kumar Vijay",
""
],
[
"Hero",
"Alfred O.",
"III"
]
] |
new_dataset
| 0.969618 |
2208.07461
|
David Bieber
|
David Bieber, Kensen Shi, Petros Maniatis, Charles Sutton, Vincent
Hellendoorn, Daniel Johnson, Daniel Tarlow
|
A Library for Representing Python Programs as Graphs for Machine
Learning
|
21 pages, 14 figures
| null | null | null |
cs.LG cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Graph representations of programs are commonly a central element of machine
learning for code research. We introduce an open source Python library
python_graphs that applies static analysis to construct graph representations
of Python programs suitable for training machine learning models. Our library
admits the construction of control-flow graphs, data-flow graphs, and composite
``program graphs'' that combine control-flow, data-flow, syntactic, and lexical
information about a program. We present the capabilities and limitations of the
library, perform a case study applying the library to millions of competitive
programming submissions, and showcase the library's utility for machine
learning research.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 22:36:17 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Bieber",
"David",
""
],
[
"Shi",
"Kensen",
""
],
[
"Maniatis",
"Petros",
""
],
[
"Sutton",
"Charles",
""
],
[
"Hellendoorn",
"Vincent",
""
],
[
"Johnson",
"Daniel",
""
],
[
"Tarlow",
"Daniel",
""
]
] |
new_dataset
| 0.990221 |
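A minimal usage sketch of the library described above; the entry points follow the project's README, but treat the exact names and signatures as an assumption here:

```python
# Minimal usage sketch for python_graphs; entry points follow the project's
# README, but treat exact names/signatures as an assumption here.
from python_graphs import control_flow, program_graph

def example(x):
    if x > 0:
        return x
    return -x

# Control-flow graph of the function.
cfg = control_flow.get_control_flow_graph(example)

# Composite "program graph" combining control-flow, data-flow, syntactic,
# and lexical information, as the abstract describes.
pg = program_graph.get_program_graph(example)
print(type(cfg).__name__, type(pg).__name__)
```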
2208.07524
|
Saurav Agarwal
|
Saurav Agarwal and Srinivas Akella
|
The Correlated Arc Orienteering Problem
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the correlated arc orienteering problem (CAOP), where
the task is to find routes for a team of robots to maximize the collection of
rewards associated with features in the environment. These features can be
one-dimensional or points in the environment, and can have spatial correlation,
i.e., visiting a feature in the environment may provide a portion of the reward
associated with a correlated feature. A robot incurs costs as it traverses the
environment, and the total cost for its route is limited by a resource
constraint such as battery life or operation time. As environments are often
large, we permit multiple depots where the robots must start and end their
routes. The CAOP generalizes the correlated orienteering problem (COP), where
the rewards are only associated with point features, and the arc orienteering
problem (AOP), where the rewards are not spatially correlated. We formulate a
mixed integer quadratic program (MIQP) that formalizes the problem and gives
optimal solutions. However, the problem is NP-hard, and therefore we develop an
efficient greedy constructive algorithm. We illustrate the problem with two
different applications: informative path planning for methane gas leak
detection and coverage of road networks.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 04:02:22 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Agarwal",
"Saurav",
""
],
[
"Akella",
"Srinivas",
""
]
] |
new_dataset
| 0.970557 |
2208.07567
|
Sándor Kisfaludi-Bak
|
Antonios Antoniadis, Mark de Berg, Sándor Kisfaludi-Bak, Antonis
Skarlatos
|
Computing Smallest Convex Intersecting Polygons
|
Accepted to ESA 2022
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
A polygon C is an intersecting polygon for a set O of objects in the plane if
C intersects each object in O, where the polygon includes its interior. We
study the problem of computing the minimum-perimeter intersecting polygon and
the minimum-area convex intersecting polygon for a given set O of objects. We
present an FPTAS for both problems for the case where O is a set of possibly
intersecting convex polygons in the plane of total complexity n.
Furthermore, we present an exact polynomial-time algorithm for the
minimum-perimeter intersecting polygon for the case where O is a set of n
possibly intersecting segments in the plane. So far, polynomial-time exact
algorithms were only known for the minimum perimeter intersecting polygon of
lines or of disjoint segments.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 07:15:30 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Antoniadis",
"Antonios",
""
],
[
"de Berg",
"Mark",
""
],
[
"Kisfaludi-Bak",
"Sándor",
""
],
[
"Skarlatos",
"Antonis",
""
]
] |
new_dataset
| 0.999487 |
2208.07574
|
Valeria Pontillo
|
Valeria Pontillo, Dario Amoroso d'Aragona, Fabiano Pecorelli, Dario Di
Nucci, Filomena Ferrucci, Fabio Palomba
|
Machine Learning-Based Test Smell Detection
|
8 pages, 1 table, 38th IEEE International Conference on Software
Maintenance and Evolution (ICSME) - Registered Report
| null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Context: Test smells are symptoms of sub-optimal design choices adopted when
developing test cases. Previous studies have proved their harmfulness for test
code maintainability and effectiveness. Therefore, researchers have been
proposing automated, heuristic-based techniques to detect them. However, the
performance of such detectors is still limited and dependent on thresholds to
be tuned.
Objective: We propose the design and experimentation of a novel test smell
detection approach based on machine learning to detect four test smells.
Method: We plan to develop the largest dataset of manually-validated test
smells. This dataset will be leveraged to train six machine learners and assess
their capabilities in within- and cross-project scenarios. Finally, we plan to
compare our approach with state-of-the-art heuristic-based techniques.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 07:33:15 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Pontillo",
"Valeria",
""
],
[
"d'Aragona",
"Dario Amoroso",
""
],
[
"Pecorelli",
"Fabiano",
""
],
[
"Di Nucci",
"Dario",
""
],
[
"Ferrucci",
"Filomena",
""
],
[
"Palomba",
"Fabio",
""
]
] |
new_dataset
| 0.990846 |
2208.07582
|
Federico Fusco
|
Paul D\"utting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard,
Morteza Zadimoghaddam
|
Deletion Robust Non-Monotone Submodular Maximization over Matroids
|
Preliminary versions of this work appeared as arXiv:2201.13128 and in
ICML'22. The main difference with respect to these versions consists in
extending our results to non-monotone submodular functions
| null | null | null |
cs.DS cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Maximizing a submodular function is a fundamental task in machine learning
and in this paper we study the deletion-robust version of the problem under the
classic matroid constraint. Here the goal is to extract a small-size summary
of the dataset that contains a high-value independent set even after an
adversary deleted some elements. We present constant-factor approximation
algorithms, whose space complexity depends on the rank $k$ of the matroid and
the number $d$ of deleted elements. In the centralized setting we present a
$(4.597+O(\varepsilon))$-approximation algorithm with summary size $O(
\frac{k+d}{\varepsilon^2}\log \frac{k}{\varepsilon})$ that is improved to a
$(3.582+O(\varepsilon))$-approximation with $O(k + \frac{d}{\varepsilon^2}\log
\frac{k}{\varepsilon})$ summary size when the objective is monotone. In the
streaming setting we provide a $(9.435 + O(\varepsilon))$-approximation
algorithm with summary size and memory $O(k + \frac{d}{\varepsilon^2}\log
\frac{k}{\varepsilon})$; the approximation factor is then improved to
$(5.582+O(\varepsilon))$ in the monotone case.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 07:51:58 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Dütting",
"Paul",
""
],
[
"Fusco",
"Federico",
""
],
[
"Lattanzi",
"Silvio",
""
],
[
"Norouzi-Fard",
"Ashkan",
""
],
[
"Zadimoghaddam",
"Morteza",
""
]
] |
new_dataset
| 0.994607 |
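For background on the problem setting, a plain greedy baseline for monotone submodular maximization under a matroid constraint is sketched below; this is generic context only, not the paper's deletion-robust centralized or streaming algorithms:

```python
# Plain greedy baseline for monotone submodular maximization subject to a
# matroid constraint -- background for the abstract above, not the paper's
# deletion-robust centralized or streaming algorithms.

def greedy_matroid(ground_set, f, is_independent):
    """f: set -> value (monotone submodular); is_independent: set -> bool."""
    S = set()
    candidates = set(ground_set)
    while candidates:
        # Take the feasible element with the largest marginal gain.
        best, best_gain = None, 0.0
        for e in candidates:
            if is_independent(S | {e}):
                gain = f(S | {e}) - f(S)
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:
            break
        S.add(best)
        # Keep only elements that can still extend S independently.
        candidates = {e for e in candidates - {best} if is_independent(S | {e})}
    return S

# Toy instance: a coverage function under a rank-2 uniform matroid.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
cover = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_matroid(sets, cover, lambda S: len(S) <= 2))
```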
2208.07652
|
Jiangui Chen
|
Jiangui Chen, Ruqing Zhang, Jiafeng Guo, Yiqun Liu, Yixing Fan, Xueqi
Cheng
|
CorpusBrain: Pre-train a Generative Retrieval Model for
Knowledge-Intensive Language Tasks
|
Accepted by CIKM 2022
| null |
10.1145/3511808.3557271
| null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge-intensive language tasks (KILT) usually require a large body of
information to provide correct answers. A popular paradigm to solve this
problem is to combine a search system with a machine reader, where the former
retrieves supporting evidences and the latter examines them to produce answers.
Recently, the reader component has witnessed significant advances with the help
of large-scale pre-trained generative models. Meanwhile, most existing solutions
in the search component rely on the traditional ``index-retrieve-then-rank''
pipeline, which suffers from a large memory footprint and difficulty in
end-to-end optimization. Inspired by recent efforts in constructing model-based
IR models, we propose to replace the traditional multi-step search pipeline
with a novel single-step generative model, which can dramatically simplify the
search process and be optimized in an end-to-end manner. We show that a strong
generative retrieval model can be learned with a set of adequately designed
pre-training tasks, and be adopted to improve a variety of downstream KILT
tasks with further fine-tuning. We name the pre-trained generative retrieval
model CorpusBrain, as all information about the corpus is encoded in its
parameters without the need to construct an additional index. Empirical results
show that CorpusBrain can significantly outperform strong baselines for the
retrieval task on the KILT benchmark and establish new state-of-the-art
downstream performances. We also show that CorpusBrain works well under zero-
and low-resource settings.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 10:22:49 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Chen",
"Jiangui",
""
],
[
"Zhang",
"Ruqing",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Liu",
"Yiqun",
""
],
[
"Fan",
"Yixing",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
new_dataset
| 0.990945 |
2208.07665
|
Christian Rondanini
|
Simone Bottoni, Anwitaman Datta, Federico Franzoni, Emanuele Ragnoli,
Roberto Ripamonti, Christian Rondanini, Gokhan Sagirlar, Alberto Trombetta
|
QPQ 1DLT: A system for the rapid deployment of secure and efficient
EVM-based blockchains
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Limited scalability and transaction costs are, among others, some of the
critical issues that hamper a wider adoption of distributed ledger technologies
(DLT). That is particularly true for the Ethereum blockchain, which, so far,
has been the ecosystem with the highest adoption rate. Quite a few solutions,
especially on the Ethereum side of things, have been attempted in the last few
years. Most of them adopt the approach of offloading transactions from the
blockchain mainnet, a.k.a. Level 1 (L1), to a separate network. Such systems
are collectively known as Level 2 (L2) systems. While mitigating the
scalability issue, the adoption of L2 introduces additional drawbacks: users
have to trust that the L2 system has correctly performed transactions or,
conversely, high computational power is required to prove transactions
correctness. In addition, significant technical knowledge is needed to set up
and manage such an L2 system. To tackle such limitations, we propose 1DLT: a
novel system that enables rapid and trustless deployment of an Ethereum Virtual
Machine based blockchain that overcomes those drawbacks.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 11:04:56 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Bottoni",
"Simone",
""
],
[
"Datta",
"Anwitaman",
""
],
[
"Franzoni",
"Federico",
""
],
[
"Ragnoli",
"Emanuele",
""
],
[
"Ripamonti",
"Roberto",
""
],
[
"Rondanini",
"Christian",
""
],
[
"Sagirlar",
"Gokhan",
""
],
[
"Trombetta",
"Alberto",
""
]
] |
new_dataset
| 0.987759 |
2208.07682
|
Silvia Cascianelli PhD
|
Silvia Cascianelli, Vittorio Pippi, Martin Maarand, Marcella Cornia,
Lorenzo Baraldi, Christopher Kermorvant, Rita Cucchiara
|
The LAM Dataset: A Novel Benchmark for Line-Level Handwritten Text
Recognition
|
Accepted at ICPR 2022
| null | null | null |
cs.CV cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Handwritten Text Recognition (HTR) is an open problem at the intersection of
Computer Vision and Natural Language Processing. The main challenges, when
dealing with historical manuscripts, are due to the preservation of the paper
support, the variability of the handwriting -- even of the same author over a
wide time-span -- and the scarcity of data from ancient, poorly represented
languages. With the aim of fostering the research on this topic, in this paper
we present the Ludovico Antonio Muratori (LAM) dataset, a large line-level HTR
dataset of Italian ancient manuscripts edited by a single author over 60 years.
The dataset comes in two configurations: a basic splitting and a date-based
splitting which takes into account the age of the author. The first setting is
intended to study HTR on ancient documents in Italian, while the second focuses
on the ability of HTR systems to recognize text written by the same writer in
time periods for which training data are not available. For both
configurations, we analyze quantitative and qualitative characteristics, also
with respect to other line-level HTR benchmarks, and present the recognition
performance of state-of-the-art HTR architectures. The dataset is available for
download at \url{https://aimagelab.ing.unimore.it/go/lam}.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 11:44:16 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Cascianelli",
"Silvia",
""
],
[
"Pippi",
"Vittorio",
""
],
[
"Maarand",
"Martin",
""
],
[
"Cornia",
"Marcella",
""
],
[
"Baraldi",
"Lorenzo",
""
],
[
"Kermorvant",
"Christopher",
""
],
[
"Cucchiara",
"Rita",
""
]
] |
new_dataset
| 0.999826 |
2208.07699
|
Anurag Sarkar
|
Anurag Sarkar, Seth Cooper
|
tile2tile: Learning Game Filters for Platformer Style Transfer
|
Accepted to AIIDE 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present tile2tile, an approach for style transfer between levels of
tile-based platformer games. Our method involves training models that translate
levels from a lower-resolution sketch representation based on tile affordances
to the original tile representation for a given game. This enables these
models, which we refer to as filters, to translate level sketches into the
style of a specific game. Moreover, by converting a level of one game into
sketch form and then translating the resulting sketch into the tiles of another
game, we obtain a method of style transfer between two games. We use Markov
random fields and autoencoders for learning the game filters and apply them to
demonstrate style transfer between levels of Super Mario Bros, Kid Icarus, Mega
Man and Metroid.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 15:19:10 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Sarkar",
"Anurag",
""
],
[
"Cooper",
"Seth",
""
]
] |
new_dataset
| 0.991857 |
2208.07702
|
Pino Caballero-Gil
|
Iván Santos-González, Pino Caballero-Gil, Alexandra
Rivero-García, Cándido Caballero-Gil
|
Priority and collision avoidance system for traffic lights
| null |
Ad Hoc Networks 94(2):101931. 2019
|
10.1016/j.adhoc.2019.101931
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, a collision avoidance system is presented to detect red light
running and warn nearby vehicles and pedestrians in real time in order to
prevent possible accidents. No complex infrastructure-based solution, such as
those based on radars or cameras, is required here. Instead, a new solution
based on smartphones carried by drivers and pedestrians is proposed, so that
the device inside the vehicle violating a traffic light is the one that
self-reports the offence in order to generate alerts and warn nearby vehicles
and pedestrians to prevent accidents. The proposal could also be used by road
authorities to collect data on traffic lights that are most frequently violated
in order to define an action plan to investigate causes and look for solutions.
It includes a classifier for learning and estimating driver behaviour based on
collected data, which is used to predict whether a driver is about to run a red
light or detect whether that has already happened. In the first case, the
system broadcasts warnings directly to close vehicles and pedestrians through
Wi-Fi, while in the second case, the proposal warns vehicles and pedestrians in
the neighbourhood through a server. The solution also includes a prioritization
system based on changing traffic lights at intersections according to the needs
and characteristics of the traffic at all times, giving the top priority to
emergency vehicles. Furthermore, the proposal involves the use of cryptographic
schemes to protect authenticity and integrity of messages sent from traffic
lights, smartphones and servers, and privacy and anonymity to promote the use
of the system. A beta version with some parts of the proposal has been
implemented and the obtained results are promising.
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 11:53:07 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Santos-González",
"Iván",
""
],
[
"Caballero-Gil",
"Pino",
""
],
[
"Rivero-García",
"Alexandra",
""
],
[
"Caballero-Gil",
"Cándido",
""
]
] |
new_dataset
| 0.979948 |
2208.07704
|
Chaoyun Zhang
|
Chaoyun Zhang, Kai Wang, Hao Chen, Ge Fan, Yingjie Li, Lifang Wu,
Bingchao Zheng
|
QuickSkill: Novice Skill Estimation in Online Multiplayer Games
|
Accepted by CIKM 2022 Applied Research Track
| null |
10.1145/3511808.3557070
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Matchmaking systems are vital for creating fair matches in online multiplayer
games, which directly affects players' satisfaction and game experience. Most
matchmaking systems rely largely on precise estimation of players' game
skills to construct equitable games. However, the skill rating of a novice is
usually inaccurate, as current matchmaking rating algorithms require a
considerable number of games for learning the true skill of a new player. Using
these unreliable skill scores at early stages for matchmaking usually leads to
disparities in terms of team performance, which causes negative game
experience. This is known as the ''cold-start'' problem for matchmaking rating
algorithms.
To overcome this conundrum, this paper proposes QuickSkill, a deep
learning-based novice skill estimation framework to quickly probe the abilities
of new players in online multiplayer games. QuickSkill extracts sequential performance
features from initial few games of a player to predict his/her future skill
rating with a dedicated neural network, thus delivering accurate skill
estimation at the player's early game stage. By employing QuickSkill for
matchmaking, game fairness can be dramatically improved in the initial
cold-start period. We conduct experiments in a popular mobile multiplayer game
in both offline and online scenarios. Results obtained with two real-world
anonymized gaming datasets demonstrate that the proposed QuickSkill delivers
precise estimation of game skills for novices, leading to significantly lower
team skill disparities and better player game experience. To the best of our
knowledge, the proposed QuickSkill is the first framework that tackles the
cold-start problem for traditional skill rating algorithms.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 11:59:05 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Zhang",
"Chaoyun",
""
],
[
"Wang",
"Kai",
""
],
[
"Chen",
"Hao",
""
],
[
"Fan",
"Ge",
""
],
[
"Li",
"Yingjie",
""
],
[
"Wu",
"Lifang",
""
],
[
"Zheng",
"Bingchao",
""
]
] |
new_dataset
| 0.990727 |
2208.07755
|
Wentao Jiang
|
Wentao Jiang, Sheng Jin, Wentao Liu, Chen Qian, Ping Luo, Si Liu
|
PoseTrans: A Simple Yet Effective Pose Transformation Augmentation for
Human Pose Estimation
|
Accepted by ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human pose estimation aims to accurately estimate a wide variety of human
poses. However, existing datasets often follow a long-tailed distribution in
which unusual poses occupy only a small portion, which further leads to a lack
of diversity among rare poses. These issues result in the inferior generalization
ability of current pose estimators. In this paper, we present a simple yet
effective data augmentation method, termed Pose Transformation (PoseTrans), to
alleviate the aforementioned problems. Specifically, we propose Pose
Transformation Module (PTM) to create new training samples that have diverse
poses and adopt a pose discriminator to ensure the plausibility of the
augmented poses. Besides, we propose Pose Clustering Module (PCM) to measure
the pose rarity and select the "rarest" poses to help balance the long-tailed
distribution. Extensive experiments on three benchmark datasets demonstrate the
effectiveness of our method, especially on rare poses. Also, our method is
efficient and simple to implement, which can be easily integrated into the
training pipeline of existing pose estimation models.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 14:03:01 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Jiang",
"Wentao",
""
],
[
"Jin",
"Sheng",
""
],
[
"Liu",
"Wentao",
""
],
[
"Qian",
"Chen",
""
],
[
"Luo",
"Ping",
""
],
[
"Liu",
"Si",
""
]
] |
new_dataset
| 0.967848 |
2208.07810
|
Nirmalya Thakur
|
Nirmalya Thakur
|
A Large-Scale Dataset of Twitter Chatter about Online Learning during
the Current COVID-19 Omicron Wave
| null | null | null | null |
cs.SI cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The COVID-19 Omicron variant, reported to be the most immune evasive variant
of COVID-19, is resulting in a surge of COVID-19 cases globally. This has
caused schools, colleges, and universities in different parts of the world to
transition to online learning. As a result, social media platforms such as
Twitter are seeing an increase in conversations related to online learning in
the form of tweets. Mining such tweets to develop a dataset can serve as a data
resource for different applications and use-cases related to the analysis of
interest, views, opinions, perspectives, attitudes, and feedback towards online
learning during the current surge of COVID-19 cases caused by the Omicron
variant. Therefore, this work presents a large-scale open-access Twitter
dataset of conversations about online learning from different parts of the
world since the first detected case of the COVID-19 Omicron variant in November
2021. The dataset is compliant with the privacy policy, developer agreement,
and guidelines for content redistribution of Twitter, as well as with the FAIR
(Findability, Accessibility, Interoperability, and Reusability)
principles for scientific data management. The paper also briefly outlines some
potential applications in the fields of Big Data, Data Mining, Natural Language
Processing, and their related disciplines, with a specific focus on online
learning during this Omicron wave that may be studied, explored, and
investigated by using this dataset.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 18:01:18 GMT"
}
] | 2022-08-17T00:00:00 |
[
[
"Thakur",
"Nirmalya",
""
]
] |
new_dataset
| 0.998891 |
2103.10596
|
Xiaohong Liu
|
Xiaohong Liu, Yaojie Liu, Jun Chen, Xiaoming Liu
|
PSCC-Net: Progressive Spatio-Channel Correlation Network for Image
Manipulation Detection and Localization
|
Published in IEEE Transactions on Circuits and Systems for Video
Technology. Codes and models are available at
https://github.com/proteus1991/PSCC-Net
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To defend against manipulation of image content, such as splicing, copy-move,
and removal, we develop a Progressive Spatio-Channel Correlation Network
(PSCC-Net) to detect and localize image manipulations. PSCC-Net processes the
image in a two-path procedure: a top-down path that extracts local and global
features and a bottom-up path that detects whether the input image is
manipulated, and estimates its manipulation masks at multiple scales, where
each mask is conditioned on the previous one. Different from the conventional
encoder-decoder and no-pooling structures, PSCC-Net leverages features at
different scales with dense cross-connections to produce manipulation masks in
a coarse-to-fine fashion. Moreover, a Spatio-Channel Correlation Module (SCCM)
captures both spatial and channel-wise correlations in the bottom-up path,
which endows features with holistic cues, enabling the network to cope with a
wide range of manipulation attacks. Thanks to the light-weight backbone and
progressive mechanism, PSCC-Net can process 1080p images at 50+ FPS. Extensive
experiments demonstrate the superiority of PSCC-Net over the state-of-the-art
methods on both detection and localization.
|
[
{
"version": "v1",
"created": "Fri, 19 Mar 2021 02:22:53 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Aug 2022 12:28:50 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Liu",
"Xiaohong",
""
],
[
"Liu",
"Yaojie",
""
],
[
"Chen",
"Jun",
""
],
[
"Liu",
"Xiaoming",
""
]
] |
new_dataset
| 0.998371 |
2109.06810
|
Akash Patel
|
Akash Patel, Avijit Banerjee, Bjorn Lindqvist, Christoforos
Kanellakis, George Nikolakopoulos
|
Design and Model Predictive Control of Mars Coaxial Quadrotor
| null | null |
10.1109/AERO53065.2022.9843799
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mars has been a prime candidate for planetary exploration of the solar system
because of the science discoveries that support chances of future habitation on
this planet. Martian caves and lava-tube-like terrains, which consist of
uneven ground, poor visibility, and confined space, make it impossible for
wheel-based rovers to navigate through these areas. In order to address these
limitations and advance the exploration capability in a Martian terrain, this
article presents the design and control of a novel coaxial quadrotor Micro
Aerial Vehicle (MAV). As will be presented, the key contribution of the design
and control architecture of the proposed Mars coaxial quadrotor is the
introduction of a concept that is, from a control point of view, more enhanced
in terms of autonomy when compared to Ingenuity. Based on the
presented design, the article will introduce the mathematical modelling and
automatic control framework of the vehicle that will consist of a linearised
model of a co-axial quadrotor and a corresponding Model Predictive Controller
(MPC) for the trajectory tracking. Among the many models, proposed for the
aerial flight on Mars, a reliable control architecture lacks in the related
state of the art. The MPC based closed loop responses of the proposed MAV will
be verified in different conditions during the flight with additional
disturbances, induced to replicate a real flight scenario. In order to further
validate the proposed control architecture and prove the efficacy of the
suggested design, the introduced Mars coaxial quadrotor and the MPC scheme will
be compared to a PID-type controller, similar to the Ingenuity helicopter's
control architecture for the position and the heading.
|
[
{
"version": "v1",
"created": "Tue, 14 Sep 2021 16:45:10 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Oct 2021 11:01:58 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Patel",
"Akash",
""
],
[
"Banerjee",
"Avijit",
""
],
[
"Lindqvist",
"Bjorn",
""
],
[
"Kanellakis",
"Christoforos",
""
],
[
"Nikolakopoulos",
"George",
""
]
] |
new_dataset
| 0.998682 |
2110.03101
|
Tae Ha Park
|
Tae Ha Park, Marcus M\"artens, Gurvan Lecuyer, Dario Izzo, Simone
D'Amico
|
SPEED+: Next-Generation Dataset for Spacecraft Pose Estimation across
Domain Gap
| null |
2022 IEEE Aerospace Conference (AERO), 2022
|
10.1109/AERO53065.2022.9843439
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Autonomous vision-based spaceborne navigation is an enabling technology for
future on-orbit servicing and space logistics missions. While computer vision
in general has benefited from Machine Learning (ML), training and validating
spaceborne ML models are extremely challenging due to the impracticality of
acquiring a large-scale labeled dataset of images of the intended target in the
space environment. Existing datasets, such as Spacecraft PosE Estimation
Dataset (SPEED), have so far mostly relied on synthetic images for both
training and validation, which are easy to mass-produce but fail to resemble
the visual features and illumination variability inherent to the target
spaceborne images. In order to bridge the gap between the current practices and
the intended applications in future space missions, this paper introduces
SPEED+: the next generation spacecraft pose estimation dataset with specific
emphasis on domain gap. In addition to 60,000 synthetic images for training,
SPEED+ includes 9,531 hardware-in-the-loop images of a spacecraft mockup model
captured from the Testbed for Rendezvous and Optical Navigation (TRON)
facility. TRON is a first-of-a-kind robotic testbed capable of capturing an
arbitrary number of target images with accurate and maximally diverse pose
labels and high-fidelity spaceborne illumination conditions. SPEED+ is used in
the second international Satellite Pose Estimation Challenge co-hosted by SLAB
and the Advanced Concepts Team of the European Space Agency to evaluate and
compare the robustness of spaceborne ML models trained on synthetic images.
|
[
{
"version": "v1",
"created": "Wed, 6 Oct 2021 23:22:24 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Dec 2021 22:17:12 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Park",
"Tae Ha",
""
],
[
"Märtens",
"Marcus",
""
],
[
"Lecuyer",
"Gurvan",
""
],
[
"Izzo",
"Dario",
""
],
[
"D'Amico",
"Simone",
""
]
] |
new_dataset
| 0.99978 |
2202.02312
|
Andrea Burns
|
Andrea Burns, Deniz Arsan, Sanjna Agrawal, Ranjitha Kumar, Kate
Saenko, Bryan A. Plummer
|
A Dataset for Interactive Vision-Language Navigation with Unknown
Command Feasibility
|
Accepted at the European Conference on Computer Vision (ECCV) 2022.
This is a new version of the paper with additional experimental results and a
few prior implementation bugs fixed
| null | null | null |
cs.CL cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Vision-language navigation (VLN), in which an agent follows language
instruction in a visual environment, has been studied under the premise that
the input command is fully feasible in the environment. Yet in practice, a
request may not be possible due to language ambiguity or environment changes.
To study VLN with unknown command feasibility, we introduce a new dataset
Mobile app Tasks with Iterative Feedback (MoTIF), where the goal is to complete
a natural language command in a mobile app. Mobile apps provide a scalable
domain to study real downstream uses of VLN methods. Moreover, mobile app
commands provide instruction for interactive navigation, as they result in
action sequences with state changes via clicking, typing, or swiping. MoTIF is
the first to include feasibility annotations, containing both binary
feasibility labels and fine-grained labels for why tasks are unsatisfiable. We
further collect follow-up questions for ambiguous queries to enable research on
task uncertainty resolution. Equipped with our dataset, we propose the new
problem of feasibility prediction, in which a natural language instruction and
multimodal app environment are used to predict command feasibility. MoTIF
provides a more realistic app dataset as it contains many diverse environments,
high-level goals, and longer action sequences than prior work. We evaluate
interactive VLN methods using MoTIF, quantify the generalization ability of
current approaches to new app environments, and measure the effect of task
feasibility on navigation performance.
|
[
{
"version": "v1",
"created": "Fri, 4 Feb 2022 18:51:50 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Jul 2022 23:19:57 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Aug 2022 00:24:24 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Burns",
"Andrea",
""
],
[
"Arsan",
"Deniz",
""
],
[
"Agrawal",
"Sanjna",
""
],
[
"Kumar",
"Ranjitha",
""
],
[
"Saenko",
"Kate",
""
],
[
"Plummer",
"Bryan A.",
""
]
] |
new_dataset
| 0.999777 |
2203.00292
|
Ling Gao
|
Ling Gao, Laurent Kneip
|
FP-Loc: Lightweight and Drift-free Floor Plan-assisted LiDAR
Localization
| null |
IEEE International Conference on Robotics and Automation (ICRA),
2022
|
10.1109/ICRA46639.2022.9812361
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel framework for floor plan-based, full six degree-of-freedom
LiDAR localization. Our approach relies on robust ceiling and ground plane
detection, which solves part of the pose and supports the segmentation of
vertical structure elements such as walls and pillars. Our core contribution is
a novel nearest neighbour data structure for an efficient look-up of nearest
vertical structure elements from the floor plan. The registration is realized
as a pair-wise regularized windowed pose graph optimization. Highly efficient,
accurate and drift-free long-term localization is demonstrated on multiple
scenes.
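To make the nearest-neighbour look-up concrete, here is a hedged stand-in: the paper's bespoke data structure is not reproduced, so a k-d tree over densely sampled floor-plan wall points illustrates the query pattern; the segments and scan points are toy values.
```python
# A stand-in, NOT the paper's structure: k-d tree over densely sampled
# floor-plan wall points; segments and scan points are toy values.
import numpy as np
from scipy.spatial import cKDTree

walls = [((0.0, 0.0), (10.0, 0.0)), ((10.0, 0.0), (10.0, 8.0))]
samples = np.vstack([np.linspace(p, q, 200) for p, q in walls])
tree = cKDTree(samples)                   # index of vertical structure points

scan_xy = np.array([[9.7, 3.1], [4.2, 0.3]])  # wall hits from one LiDAR scan
dist, idx = tree.query(scan_xy)               # nearest plan element per point
print(dist.round(2), samples[idx].round(2))
```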
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 08:49:37 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Gao",
"Ling",
""
],
[
"Kneip",
"Laurent",
""
]
] |
new_dataset
| 0.99851 |
2203.03373
|
Zhanhao Hu
|
Zhanhao Hu, Siyuan Huang, Xiaopei Zhu, Fuchun Sun, Bo Zhang, Xiaolin
Hu
|
Adversarial Texture for Fooling Person Detectors in the Physical World
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, cameras equipped with AI systems can capture and analyze images to
detect people automatically. However, the AI system can make mistakes when
receiving deliberately designed patterns in the real world, i.e., physical
adversarial examples. Prior works have shown that it is possible to print
adversarial patches on clothes to evade DNN-based person detectors. However,
these adversarial examples could have catastrophic drops in the attack success
rate when the viewing angle (i.e., the camera's angle towards the object)
changes. To perform a multi-angle attack, we propose Adversarial Texture
(AdvTexture). AdvTexture can cover clothes with arbitrary shapes so that people
wearing such clothes can hide from person detectors from different viewing
angles. We propose a generative method, named Toroidal-Cropping-based
Expandable Generative Attack (TC-EGA), to craft AdvTexture with repetitive
structures. We printed several pieces of cloth with AdvTexture and then made
T-shirts, skirts, and dresses in the physical world. Experiments showed that
these clothes could fool person detectors in the physical world.
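The repetitive, expandable structure rests on toroidal cropping: any window cut from a tileable texture with wrap-around indexing is itself seamlessly tileable. The sketch below shows that operation in isolation; the texture is a random stand-in for a learned adversarial patch and the sizes are arbitrary.
```python
# Toroidal (wrap-around) cropping in isolation; the random texture is a
# stand-in for a learned adversarial patch, sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
texture = rng.random((64, 64, 3))           # stand-in for a tileable patch

def toroidal_crop(tex, top, left, h, w):
    rows = np.arange(top, top + h) % tex.shape[0]
    cols = np.arange(left, left + w) % tex.shape[1]
    return tex[np.ix_(rows, cols)]          # wraps across both borders

crop = toroidal_crop(texture, top=50, left=58, h=32, w=32)
print(crop.shape)                           # (32, 32, 3)
```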
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 13:22:25 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Mar 2022 14:29:07 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Mar 2022 06:47:05 GMT"
},
{
"version": "v4",
"created": "Sat, 13 Aug 2022 17:21:34 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Hu",
"Zhanhao",
""
],
[
"Huang",
"Siyuan",
""
],
[
"Zhu",
"Xiaopei",
""
],
[
"Sun",
"Fuchun",
""
],
[
"Zhang",
"Bo",
""
],
[
"Hu",
"Xiaolin",
""
]
] |
new_dataset
| 0.989913 |
2203.06925
|
Qiang Hu
|
Renjie Zhou, Qiang Hu, Jian Wan, Jilin Zhang, Qiang Liu, Tianxiang Hu,
Jianjun Li
|
WCL-BBCD: A Contrastive Learning and Knowledge Graph Approach to Named
Entity Recognition
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Named Entity Recognition task is one of the core tasks of information
extraction. Word ambiguity and word abbreviation are important reasons for the
low recognition rate of named entities. In this paper, we propose a novel named
entity recognition model WCL-BBCD (Word Contrastive Learning with
BERT-BiLSTM-CRF-DBpedia), which incorporates the idea of contrastive learning.
The model first trains on the sentence pairs in the text, calculates the
similarity between sentence pairs, and fine-tunes the BERT model used for the
named entity recognition task according to the similarity, so as to alleviate
word ambiguity. Then, the fine-tuned BERT is combined with BiLSTM-CRF to
perform the named entity recognition task. Finally, the recognition results
are corrected in combination with prior knowledge such as knowledge graphs, so
as to alleviate the low-recognition-rate problem caused by word abbreviations.
The results of experiments conducted on the CoNLL-2003 English dataset and the
OntoNotes V5 English dataset show that our model outperforms other similar
models.
|
[
{
"version": "v1",
"created": "Mon, 14 Mar 2022 08:29:58 GMT"
},
{
"version": "v2",
"created": "Sun, 29 May 2022 10:41:35 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Jun 2022 07:17:43 GMT"
},
{
"version": "v4",
"created": "Sat, 11 Jun 2022 05:08:59 GMT"
},
{
"version": "v5",
"created": "Mon, 15 Aug 2022 12:28:13 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Zhou",
"Renjie",
""
],
[
"Hu",
"Qiang",
""
],
[
"Wan",
"Jian",
""
],
[
"Zhang",
"Jilin",
""
],
[
"Liu",
"Qiang",
""
],
[
"Hu",
"Tianxiang",
""
],
[
"Li",
"Jianjun",
""
]
] |
new_dataset
| 0.971948 |
2204.09138
|
Bing Wang
|
Bing Wang, Zhengdi Yu, Bo Yang, Jie Qin, Toby Breckon, Ling Shao, Niki
Trigoni, Andrew Markham
|
RangeUDF: Semantic Surface Reconstruction from 3D Point Clouds
| null | null | null | null |
cs.CV cs.AI cs.GR cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present RangeUDF, a new implicit representation based framework to recover
the geometry and semantics of continuous 3D scene surfaces from point clouds.
Unlike occupancy fields or signed distance fields which can only model closed
3D surfaces, our approach is not restricted to any type of topology. Being
different from the existing unsigned distance fields, our framework does not
suffer from any surface ambiguity. In addition, our RangeUDF can jointly
estimate precise semantics for continuous surfaces. The key to our approach is
a range-aware unsigned distance function together with a surface-oriented
semantic segmentation module. Extensive experiments show that RangeUDF clearly
surpasses state-of-the-art approaches for surface reconstruction on four point
cloud datasets. Moreover, RangeUDF demonstrates superior generalization
capability across multiple unseen datasets, which is nearly impossible for all
existing approaches.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 21:39:45 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Wang",
"Bing",
""
],
[
"Yu",
"Zhengdi",
""
],
[
"Yang",
"Bo",
""
],
[
"Qin",
"Jie",
""
],
[
"Breckon",
"Toby",
""
],
[
"Shao",
"Ling",
""
],
[
"Trigoni",
"Niki",
""
],
[
"Markham",
"Andrew",
""
]
] |
new_dataset
| 0.990417 |
2207.00319
|
Kepeng Xu
|
Gang He, Kepeng Xu, Li Xu, Chang Wu, Ming Sun, Xing Wen, Yu-Wing Tai
|
SDRTV-to-HDRTV via Hierarchical Dynamic Context Feature Mapping
|
9 pages
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we address the task of converting SDR videos to HDR videos
(SDRTV-to-HDRTV). Previous approaches use global feature modulation for
SDRTV-to-HDRTV. Feature modulation scales and shifts the features in the
original feature space, which has limited mapping capability. In addition,
global image mapping cannot restore detail in HDR frames due to the luminance
differences across regions of SDR frames. To resolve these issues, we
propose a two-stage solution. The first stage is a hierarchical Dynamic Context
feature mapping (HDCFM) model. HDCFM learns the SDR frame to HDR frame mapping
function via hierarchical feature modulation (HME and HM) modules and a dynamic
context feature transformation (DCT) module. The HME estimates the feature
modulation vector; the HM performs hierarchical feature modulation, consisting
of global feature modulation in series with local feature modulation, and is
capable of adaptively mapping local image features. The DCT module constructs
a feature transformation module in conjunction with the context, which is
capable of adaptively generating a feature transformation matrix for feature
mapping. Compared with simple feature scaling and shifting, the DCT module can
map features into a new feature space and thus has a more excellent feature
mapping capability. In the second stage, we introduce a patch
discriminator-based context generation model PDCG to obtain subjective quality
enhancement of over-exposed regions. PDCG can solve the problem that the model
is challenging to train due to the proportion of overexposed regions of the
image. The proposed method can achieve state-of-the-art objective and
subjective quality results. Specifically, HDCFM achieves a PSNR gain of 0.81 dB
with only about 100K parameters, 1/14th of the
previous state-of-the-art methods. The test code will be released soon.
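As a rough illustration of hierarchical feature modulation, the following PyTorch sketch chains a global per-channel scale/shift with a spatially varying local one. The layer shapes are placeholders, and the actual HME/HM/DCT internals and the PDCG stage are not reproduced here.
```python
# A hedged sketch of hierarchical feature modulation: global per-channel
# scale/shift in series with a per-pixel (local) scale/shift.
import torch
import torch.nn as nn

class HierarchicalModulation(nn.Module):
    def __init__(self, c=64):
        super().__init__()
        self.global_head = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                         nn.Conv2d(c, 2 * c, 1))   # gamma, beta
        self.local_head = nn.Conv2d(c, 2 * c, 3, padding=1)        # per-pixel

    def forward(self, feat):
        g_gamma, g_beta = self.global_head(feat).chunk(2, dim=1)
        feat = feat * (1 + g_gamma) + g_beta          # global modulation
        l_gamma, l_beta = self.local_head(feat).chunk(2, dim=1)
        return feat * (1 + l_gamma) + l_beta          # local modulation

x = torch.randn(1, 64, 32, 32)
print(HierarchicalModulation()(x).shape)              # (1, 64, 32, 32)
```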
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2022 10:12:59 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2022 10:39:30 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"He",
"Gang",
""
],
[
"Xu",
"Kepeng",
""
],
[
"Xu",
"Li",
""
],
[
"Wu",
"Chang",
""
],
[
"Sun",
"Ming",
""
],
[
"Wen",
"Xing",
""
],
[
"Tai",
"Yu-Wing",
""
]
] |
new_dataset
| 0.997097 |
2207.01044
|
Paul Guerrero
|
Paul Guerrero, Milo\v{s} Ha\v{s}an, Kalyan Sunkavalli, Radom\'ir
M\v{e}ch, Tamy Boubekeur, Niloy J. Mitra
|
MatFormer: A Generative Model for Procedural Materials
| null |
ACM Transactions on Graphics, Volume 41, Issue 4 (Proceedings of
Siggraph 2022)
|
10.1145/3528223.3530173
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Procedural material graphs are a compact, parametric, and
resolution-independent representation that is a popular choice for material
authoring. However, designing procedural materials requires significant
expertise and publicly accessible libraries contain only a few thousand such
graphs. We present MatFormer, a generative model that can produce a diverse set
of high-quality procedural materials with complex spatial patterns and
appearance. While procedural materials can be modeled as directed (operation)
graphs, they contain arbitrary numbers of heterogeneous nodes with
unstructured, often long-range node connections, and functional constraints on
node parameters and connections. MatFormer addresses these challenges with a
multi-stage transformer-based model that sequentially generates nodes, node
parameters, and edges, while ensuring the semantic validity of the graph. In
addition to generation, MatFormer can be used for the auto-completion and
exploration of partial material graphs. We qualitatively and quantitatively
demonstrate that our method outperforms alternative approaches, in both
generated graph and material quality.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 13:41:29 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2022 15:17:47 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Guerrero",
"Paul",
""
],
[
"Hašan",
"Miloš",
""
],
[
"Sunkavalli",
"Kalyan",
""
],
[
"Měch",
"Radomír",
""
],
[
"Boubekeur",
"Tamy",
""
],
[
"Mitra",
"Niloy J.",
""
]
] |
new_dataset
| 0.994165 |
2207.11166
|
Jeremy Irvin
|
Bryan Zhu, Nicholas Lui, Jeremy Irvin, Jimmy Le, Sahil Tadwalkar,
Chenghao Wang, Zutao Ouyang, Frankie Y. Liu, Andrew Y. Ng, Robert B. Jackson
|
METER-ML: A Multi-Sensor Earth Observation Benchmark for Automated
Methane Source Mapping
|
Workshop on Complex Data Challenges in Earth Observation at
IJCAI-ECAI 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Reducing methane emissions is essential for mitigating global warming. To
attribute methane emissions to their sources, a comprehensive dataset of
methane source infrastructure is necessary. Recent advancements with deep
learning on remotely sensed imagery have the potential to identify the
locations and characteristics of methane sources, but there is a substantial
lack of publicly available data to enable machine learning researchers and
practitioners to build automated mapping approaches. To help fill this gap, we
construct a multi-sensor dataset called METER-ML containing 86,599
georeferenced NAIP, Sentinel-1, and Sentinel-2 images in the U.S. labeled for
the presence or absence of methane source facilities including concentrated
animal feeding operations, coal mines, landfills, natural gas processing
plants, oil refineries and petroleum terminals, and wastewater treatment
plants. We experiment with a variety of models that leverage different spatial
resolutions, spatial footprints, image products, and spectral bands. We find
that our best model achieves an area under the precision recall curve of 0.915
for identifying concentrated animal feeding operations and 0.821 for oil
refineries and petroleum terminals on an expert-labeled test set, suggesting
the potential for large-scale mapping. We make METER-ML freely available at
https://stanfordmlgroup.github.io/projects/meter-ml/ to support future work on
automated methane source mapping.
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 16:12:07 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2022 04:37:26 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Zhu",
"Bryan",
""
],
[
"Lui",
"Nicholas",
""
],
[
"Irvin",
"Jeremy",
""
],
[
"Le",
"Jimmy",
""
],
[
"Tadwalkar",
"Sahil",
""
],
[
"Wang",
"Chenghao",
""
],
[
"Ouyang",
"Zutao",
""
],
[
"Liu",
"Frankie Y.",
""
],
[
"Ng",
"Andrew Y.",
""
],
[
"Jackson",
"Robert B.",
""
]
] |
new_dataset
| 0.999771 |
2208.06413
|
Fl\'avio Coutinho
|
Fl\'avio Coutinho, Luiz Chaimowicz
|
Generating Pixel Art Character Sprites using GANs
|
This article has been submitted to SBGames 2022
| null | null | null |
cs.GR cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Iterating on creating pixel art character sprite sheets is essential to the
game development process. However, it can take a lot of effort until the final
versions containing different poses and animation clips are achieved. This
paper investigates using conditional generative adversarial networks to aid the
designers in creating such sprite sheets. We propose an architecture based on
Pix2Pix to generate images of characters facing a target side (e.g., right)
given sprites of them in a source pose (e.g., front). Experiments with small
pixel art datasets yielded promising results, resulting in models with varying
degrees of generalization, sometimes capable of generating images very close to
the ground truth. We analyze the results through visual inspection and
quantitatively with FID.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 14:14:19 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Coutinho",
"Flávio",
""
],
[
"Chaimowicz",
"Luiz",
""
]
] |
new_dataset
| 0.972885 |
2208.06437
|
Tommaso Tedeschi
|
Tommaso Tedeschi, Diego Ciangottini, Marco Baioletti, Valentina
Poggioni, Daniele Spiga, Loriano Storchi, Mirco Tracolli
|
Smart caching in a Data Lake for High Energy Physics analysis
| null | null | null | null |
cs.DC cs.DB cs.LG cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
The continuous growth of data production in almost all scientific areas
raises new problems in data access and management, especially in a scenario
where the end-users, as well as the resources that they can access, are
distributed worldwide. This work is focused on data caching management in a
Data Lake infrastructure in the context of the High Energy Physics field. We
are proposing an autonomous method, based on Reinforcement Learning techniques,
to improve the user experience and to contain the maintenance costs of the
infrastructure.
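As a toy illustration of the reinforcement-learning idea, the sketch below trains a tabular Q-learning admission policy on a synthetic, popularity-skewed request stream. The state, reward, and eviction rule are simplifying assumptions, not the method proposed in the paper.
```python
# Toy RL cache admission: the agent learns to admit popular files.
import random
from collections import defaultdict

Q = defaultdict(float)                        # Q[(file, action)] table
cache, capacity = set(), 3
alpha, gamma, eps = 0.1, 0.9, 0.1
files, weights = "abcdef", [6, 5, 4, 3, 2, 1]  # skewed popularity (synthetic)

for _ in range(20_000):
    f = random.choices(files, weights)[0]
    reward = 1.0 if f in cache else 0.0       # a hit saves a remote fetch
    a = random.choice((0, 1)) if random.random() < eps \
        else max((0, 1), key=lambda act: Q[(f, act)])
    if a == 1 and f not in cache:
        if len(cache) >= capacity:
            cache.discard(random.choice(sorted(cache)))  # naive eviction
        cache.add(f)
    Q[(f, a)] += alpha * (reward + gamma * max(Q[(f, 0)], Q[(f, 1)]) - Q[(f, a)])

print(sorted(cache))                          # tends toward the popular files
```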
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 13:32:12 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Tedeschi",
"Tommaso",
""
],
[
"Ciangottini",
"Diego",
""
],
[
"Baioletti",
"Marco",
""
],
[
"Poggioni",
"Valentina",
""
],
[
"Spiga",
"Daniele",
""
],
[
"Storchi",
"Loriano",
""
],
[
"Tracolli",
"Mirco",
""
]
] |
new_dataset
| 0.98472 |
2208.06456
|
Oscar Fontanelli
|
Oscar Fontanelli, Dulce I. Valdivia, Guillermo Romero, Oliver Medina,
Wentian Li, Maribel Hern\'andez-Rosales
|
Human mobility patterns in Mexico City and their links with
socioeconomic variables during the COVID-19 pandemic
|
21 pages, 8 figures
| null | null | null |
cs.SI stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
The availability of cellphone geolocation data provides a remarkable
opportunity to study human mobility patterns and how these patterns are
affected by the recent pandemic. Two simple centrality metrics allow us to
measure two different aspects of mobility in origin-destination networks
constructed with this type of data: variety of places connected to a certain
node (degree) and number of people that travel to or from a given node
(strength). In this contribution, we present an analysis of node degree and
strength in daily origin-destination networks for Greater Mexico City during
2020. Unlike what is observed in many complex networks, these
origin-destination networks are not scale free. Instead, there is a
characteristic scale defined by the distribution peak; centrality distributions
exhibit a skewed two-tail distribution with power law decay on each side of the
peak. We found that high mobility areas tend to be closer to the city center,
have higher population and better socioeconomic conditions. Areas with
anomalous behavior are almost always on the periphery of the city, where we can
also observe qualitative difference in mobility patterns between east and west.
Finally, we study the effect of mobility restrictions due to the outbreak of
the COVID-19 pandemic on these mobility patterns.
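The two centrality metrics discussed are straightforward to compute on a weighted origin-destination digraph; a minimal sketch with networkx, on made-up nodes and trip counts:
```python
# Degree (variety of connected places) vs. strength (people travelling),
# computed on a toy weighted origin-destination digraph.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([          # (origin, destination, trips)
    ("centro", "norte", 120), ("norte", "centro", 95),
    ("centro", "este", 40), ("oeste", "centro", 60),
])

degree = dict(G.degree())                       # number of connected places
strength = dict(G.degree(weight="weight"))      # total trips to/from a node
print(degree, strength)
```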
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 18:51:59 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Fontanelli",
"Oscar",
""
],
[
"Valdivia",
"Dulce I.",
""
],
[
"Romero",
"Guillermo",
""
],
[
"Medina",
"Oliver",
""
],
[
"Li",
"Wentian",
""
],
[
"Hernández-Rosales",
"Maribel",
""
]
] |
new_dataset
| 0.998002 |
2208.06461
|
Hadi Ghahremannezhad
|
Hadi Ghahremannezhad, Hang Shi, Chengjun Liu
|
Real-Time Accident Detection in Traffic Surveillance Using Deep Learning
|
link to IEEE: https://ieeexplore.ieee.org/abstract/document/9827736
|
IEEE International Conference on Imaging Systems and Techniques
(IST), pages 1-6, 2022
|
10.1109/IST55454.2022.9827736
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic detection of traffic accidents is an important emerging topic in
traffic monitoring systems. Nowadays many urban intersections are equipped with
surveillance cameras connected to traffic management systems. Therefore,
computer vision techniques can be viable tools for automatic accident
detection. This paper presents a new efficient framework for accident detection
at intersections for traffic surveillance applications. The proposed framework
consists of three hierarchical steps, including efficient and accurate object
detection based on the state-of-the-art YOLOv4 method, object tracking based on
Kalman filter coupled with the Hungarian algorithm for association, and
accident detection by trajectory conflict analysis. A new cost function is
applied for object association to accommodate occlusion, overlapping
objects, and shape changes in the object tracking step. The object trajectories
are analyzed in terms of velocity, angle, and distance in order to detect
different types of trajectory conflicts including vehicle-to-vehicle,
vehicle-to-pedestrian, and vehicle-to-bicycle. Experimental results using real
traffic video data show the feasibility of the proposed method in real-time
applications of traffic surveillance. In particular, trajectory conflicts,
including near-accidents and accidents occurring at urban intersections, are
detected with a low false alarm rate and a high detection rate. The robustness
of the proposed framework is evaluated using video sequences collected from
YouTube with diverse illumination conditions. The dataset is publicly available
at: http://github.com/hadi-ghnd/AccidentDetection.
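The association step can be illustrated with the Hungarian algorithm via scipy; a plain centroid-distance cost stands in for the paper's occlusion- and shape-aware cost function, and the gating threshold is an assumption.
```python
# Hungarian assignment of detections to tracks under a toy cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

tracks = np.array([[100.0, 50.0], [300.0, 80.0]])      # predicted centroids
detections = np.array([[305.0, 78.0], [98.0, 55.0], [500.0, 500.0]])

cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
for t, d in zip(rows, cols):
    if cost[t, d] < 50.0:                              # gating threshold (assumed)
        print(f"track {t} <- detection {d} (cost {cost[t, d]:.1f})")
```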
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 19:07:20 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Ghahremannezhad",
"Hadi",
""
],
[
"Shi",
"Hang",
""
],
[
"Liu",
"Chengjun",
""
]
] |
new_dataset
| 0.995964 |
2208.06496
|
Vasily Zadorozhnyy
|
Edison Mucllari, Vasily Zadorozhnyy, Cole Pospisil, Duc Nguyen, Qiang
Ye
|
Orthogonal Gated Recurrent Unit with Neumann-Cayley Transformation
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, using orthogonal matrices has been shown to be a promising
approach to improving Recurrent Neural Networks (RNNs) in terms of training,
stability, and convergence, particularly for controlling gradients. While Gated
Recurrent Unit (GRU) and Long Short Term Memory (LSTM) architectures address
the vanishing gradient problem by using a variety of gates and memory cells,
they are still prone to the exploding gradient problem. In this work, we
analyze the gradients in GRU and propose the usage of orthogonal matrices to
prevent the exploding gradient problem and enhance long-term memory. We study
where to use orthogonal matrices and we propose a Neumann series-based Scaled
Cayley transformation for training orthogonal matrices in GRU, which we call
Neumann-Cayley Orthogonal GRU, or simply NC-GRU. We present detailed
experiments of our model on several synthetic and real-world tasks, which show
that NC-GRU significantly outperforms GRU as well as several other RNNs.
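For intuition, the following numpy sketch builds an approximately orthogonal matrix with a Neumann-series-approximated scaled Cayley transform, W = (I + A)^{-1}(I - A)D for skew-symmetric A and a diagonal +/-1 scaling D. The matrix size, scaling, and series order are illustrative, and the sketch omits how the transform is embedded in the GRU.
```python
# Neumann-series-approximated scaled Cayley transform (illustrative sizes).
import numpy as np

n, order = 6, 12
rng = np.random.default_rng(0)
M = 0.02 * rng.standard_normal((n, n))
A = M - M.T                               # skew-symmetric parameter
D = np.diag(rng.choice([-1.0, 1.0], n))   # diagonal +/-1 scaling

inv_approx, term = np.zeros((n, n)), np.eye(n)
for _ in range(order):                    # Neumann series for (I + A)^{-1}
    inv_approx += term
    term = term @ (-A)

W = inv_approx @ (np.eye(n) - A) @ D      # scaled Cayley transform
print(np.allclose(W.T @ W, np.eye(n), atol=1e-8))  # ~orthogonal
```
The Neumann series converges when the spectral norm of A is below one, which is why the parameter matrix is kept small here; avoiding an explicit inverse is what makes the transform cheap to differentiate through during training.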
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 20:50:09 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Mucllari",
"Edison",
""
],
[
"Zadorozhnyy",
"Vasily",
""
],
[
"Pospisil",
"Cole",
""
],
[
"Nguyen",
"Duc",
""
],
[
"Ye",
"Qiang",
""
]
] |
new_dataset
| 0.971927 |
2208.06569
|
Emon Dey
|
Emon Dey, Jumman Hossain, Nirmalya Roy, Carl Busart
|
SynchroSim: An Integrated Co-simulation Middleware for Heterogeneous
Multi-robot System
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
With the advancement of modern robotics, autonomous agents are now capable of
hosting sophisticated algorithms, which enables them to make intelligent
decisions. But developing and testing such algorithms directly in real-world
systems is tedious and may result in the wastage of valuable resources.
Especially for heterogeneous multi-agent systems in battlefield environments
where communication is critical in determining the system's behavior and
usability. Due to the necessity of simulators of separate paradigms
(co-simulation) to simulate such scenarios before deploying, synchronization
between those simulators is vital. Existing works aimed at resolving this issue
fall short of addressing diversity among deployed agents. In this work, we
propose \textit{SynchroSim}, an integrated co-simulation middleware to simulate
a heterogeneous multi-robot system. Here we propose a velocity
difference-driven adjustable window size approach with a view to reducing
packet loss probability. It takes into account the respective velocities of
deployed agents to calculate a suitable window size before transmitting data
between them. We consider our algorithm simulator-agnostic, but for the
sake of implementation results, we have used Gazebo as a Physics simulator and
NS-3 as a network simulator. Also, we design our algorithm considering the
Perception-Action loop inside a closed communication channel, which is one of
the essential factors in a contested scenario with the requirement of high
fidelity in terms of data transmission. We validate our approach empirically at
both the simulation and system level for both line-of-sight (LOS) and
non-line-of-sight (NLOS) scenarios. Our approach achieves a noticeable
improvement in terms of reducing packet loss probability ($\approx$11\%), and
average packet delay ($\approx$10\%) compared to the fixed window size-based
synchronization approach.
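A hedged sketch of the velocity-difference-driven window idea: a larger relative speed between a pair of agents yields a smaller synchronization window. The mapping and constants below are assumptions for illustration, not the paper's exact rule.
```python
# Assumed mapping from relative speed to synchronization window size.
def window_size(v_a, v_b, w_max=100, w_min=10, k=5.0):
    """Return a transmission window (in simulation steps) for an agent pair."""
    dv = abs(v_a - v_b)                  # relative speed in m/s
    w = w_max / (1.0 + k * dv)           # larger velocity gap -> smaller window
    return max(w_min, min(w_max, int(w)))

for pair in [(0.5, 0.6), (1.0, 3.0), (0.0, 8.0)]:
    print(pair, "->", window_size(*pair), "steps")
```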
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 04:34:06 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Dey",
"Emon",
""
],
[
"Hossain",
"Jumman",
""
],
[
"Roy",
"Nirmalya",
""
],
[
"Busart",
"Carl",
""
]
] |
new_dataset
| 0.996601 |
2208.06594
|
Pino Caballero-Gil
|
V Mora-Afonso, Pino Caballero-Gil
|
Using identity-based cryptography in mobile applications
|
arXiv admin note: substantial text overlap with arXiv:2208.03541
|
International Joint Conference SOCO CISIS ICEUTE, 527-536, 2014
|
10.1007/978-3-319-01854-6_54
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work includes a review of two case studies of mobile applications that
use Identity-Based Cryptography (IBC) to protect communications. It also
describes a proposal of a new mobile application that combines the use of IBC
for Wi-Fi or Bluetooth communication between smartphones, with the promising
Near Field Communication (NFC) technology for secure authentication. The
proposed scheme involves NFC pairing to establish as public key a piece of
information linked to the device, such as the phone number, so that this
information is then used in an IBC scheme for peer-to-peer communication. This
is a work in progress, so the implementation of a prototype based on
smartphones is still being improved.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 08:23:12 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Mora-Afonso",
"V",
""
],
[
"Caballero-Gil",
"Pino",
""
]
] |
new_dataset
| 0.970168 |
2208.06658
|
Jiazhi Li
|
Jiazhi Li, Tingting Zhou, Yunnong Chen, Yanfang Chang, Yankun Zhen,
Lingyun Sun and Liuqing Chen
|
ULDGNN: A Fragmented UI Layer Detector Based on Graph Neural Networks
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While some works attempt to generate front-end code intelligently from UI
screenshots, it may be more convenient to utilize UI design drafts in Sketch,
a popular UI design tool, because we can directly access multimodal UI
information such as layer type, position, size, and visual images.
However, fragmented layers could degrade the code quality without being merged
into a whole part if all of them are involved in the code generation. In this
paper, we propose a pipeline to merge fragmented layers automatically. We first
construct a graph representation for the layer tree of a UI draft and detect
all fragmented layers based on the visual features and graph neural networks.
Then a rule-based algorithm is designed to merge fragmented layers. Through
experiments on a newly constructed dataset, our approach retrieves most
fragmented layers in UI design drafts and achieves 87% accuracy in the
detection task, and a post-processing algorithm is developed to cluster
associative layers under simple and general circumstances.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 14:14:37 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Li",
"Jiazhi",
""
],
[
"Zhou",
"Tingting",
""
],
[
"Chen",
"Yunnong",
""
],
[
"Chang",
"Yanfang",
""
],
[
"Zhen",
"Yankun",
""
],
[
"Sun",
"Lingyun",
""
],
[
"Chen",
"Liuqing",
""
]
] |
new_dataset
| 0.998282 |
2208.06692
|
Giuseppe Antonio Di Luna
|
Fiorella Artuso, Marco Mormando, Giuseppe A. Di Luna, Leonardo
Querzoni
|
BinBert: Binary Code Understanding with a Fine-tunable and
Execution-aware Transformer
| null | null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A recent trend in binary code analysis promotes the use of neural solutions
based on instruction embedding models. An instruction embedding model is a
neural network that transforms sequences of assembly instructions into
embedding vectors. If the embedding network is trained such that the
translation from code to vectors partially preserves the semantic, the network
effectively represents an assembly code model.
In this paper we present BinBert, a novel assembly code model. BinBert is
built on a transformer pre-trained on a huge dataset of both assembly
instruction sequences and symbolic execution information. BinBert can be
applied to assembly instruction sequences and it is fine-tunable, i.e. it can
be re-trained as part of a neural architecture on task-specific data. Through
fine-tuning, BinBert learns how to apply the general knowledge acquired with
pre-training to the specific task.
We evaluated BinBert on a multi-task benchmark that we specifically designed
to test the understanding of assembly code. The benchmark is composed of
several tasks, some taken from the literature, and a few novel tasks that we
designed, with a mix of intrinsic and downstream tasks.
Our results show that BinBert outperforms state-of-the-art models for binary
instruction embedding, raising the bar for binary code understanding.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 17:48:52 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Artuso",
"Fiorella",
""
],
[
"Mormando",
"Marco",
""
],
[
"Di Luna",
"Giuseppe A.",
""
],
[
"Querzoni",
"Leonardo",
""
]
] |
new_dataset
| 0.990013 |
2208.06697
|
Jobish John
|
Md. Noor-A-Rahim, Jobish John, Fadhil Firyaguna, Dimitrios Zorbas,
Hafiz Husnain Raza Sherazi, Sergii Kushch, Eoin O Connell, Dirk Pesch,
Brendan O Flynn, Martin Hayes, and Eddie Armstrong
|
Wireless Communications for Smart Manufacturing and Industrial IoT:
Existing Technologies, 5G, and Beyond
|
The manuscript has been submitted to IEEE for possible publication
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart manufacturing is a vision and major driver for change in industrial
environments. The goal of smart manufacturing is to optimize manufacturing
processes through constantly monitoring and adapting processes towards more
efficient and personalised manufacturing. This requires and relies on
technologies for connected machines incorporating a variety of computation,
sensing, actuation, and machine-to-machine communication modalities. As such,
understanding the change towards smart manufacturing requires knowledge of the
enabling technologies, their applications in real world scenarios and the
communications protocols that they rely on. This paper presents an extensive
review of wireless machine-to-machine communication protocols currently applied
in manufacturing environments and provides a comprehensive review of the
associated use cases whilst defining their expected impact on the future of
smart manufacturing. Based on the review, we point out a number of open
challenges and directions for future research.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 18:07:05 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Noor-A-Rahim",
"Md.",
""
],
[
"John",
"Jobish",
""
],
[
"Firyaguna",
"Fadhil",
""
],
[
"Zorbas",
"Dimitrios",
""
],
[
"Sherazi",
"Hafiz Husnain Raza",
""
],
[
"Kushch",
"Sergii",
""
],
[
"Connell",
"Eoin O",
""
],
[
"Pesch",
"Dirk",
""
],
[
"Flynn",
"Brendan O",
""
],
[
"Hayes",
"Martin",
""
],
[
"Armstrong",
"Eddie",
""
]
] |
new_dataset
| 0.998376 |
2208.06702
|
Mahieyin Rahmun
|
Mahieyin Rahmun, Tonmoay Deb, Shahriar Ali Bijoy, Mayamin Hamid Raha
|
UAV-CROWD: Violent and non-violent crowd activity simulator from the
perspective of UAV
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Unmanned Aerial Vehicles (UAVs) have gained significant traction in recent
years, particularly in the context of surveillance. However, video datasets
that capture violent and non-violent human activity from an aerial point of
view are scarce. To address this issue, we propose a novel, baseline simulator which is
capable of generating sequences of photo-realistic synthetic images of crowds
engaging in various activities that can be categorized as violent or
non-violent. The crowd groups are annotated with bounding boxes that are
automatically computed using semantic segmentation. Our simulator is capable of
generating large, randomized urban environments and is able to maintain an
average of 25 frames per second on a mid-range computer with 150 concurrent
crowd agents interacting with each other. We also show that when synthetic data
from the proposed simulator is augmented with real world data, binary video
classification accuracy is improved by 5% on average across two different
models.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 18:28:37 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Rahmun",
"Mahieyin",
""
],
[
"Deb",
"Tonmoay",
""
],
[
"Bijoy",
"Shahriar Ali",
""
],
[
"Raha",
"Mayamin Hamid",
""
]
] |
new_dataset
| 0.999195 |
2208.06734
|
Endri Kacupaj
|
Endri Kacupaj, Kuldeep Singh, Maria Maleshkova, Jens Lehmann
|
An Answer Verbalization Dataset for Conversational Question Answerings
over Knowledge Graphs
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new dataset for conversational question answering over
Knowledge Graphs (KGs) with verbalized answers. Question answering over KGs is
currently focused on answer generation for single-turn questions (KGQA) or
multi-turn conversational question answering (ConvQA). However, in a
real-world scenario (e.g., voice assistants such as Siri, Alexa, and Google
Assistant), users prefer verbalized answers. This paper contributes to the
state-of-the-art by extending an existing ConvQA dataset with multiple
paraphrased verbalized answers. We perform experiments with five
sequence-to-sequence models on generating answer responses while maintaining
grammatical correctness. We additionally perform an error analysis that details
the rates of models' mispredictions in specified categories. Our proposed
dataset extended with answer verbalization is publicly available with detailed
documentation on its usage for wider utility.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 21:21:28 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Kacupaj",
"Endri",
""
],
[
"Singh",
"Kuldeep",
""
],
[
"Maleshkova",
"Maria",
""
],
[
"Lehmann",
"Jens",
""
]
] |
new_dataset
| 0.953386 |
2208.06761
|
Pengyu Chen
|
Pengyu Chen, Junyu Gao, Yuan Yuan, Qi Wang
|
MAFNet: A Multi-Attention Fusion Network for RGB-T Crowd Counting
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RGB-Thermal (RGB-T) crowd counting is a challenging task, which uses thermal
images as complementary information to RGB images to deal with the decreased
performance of unimodal RGB-based methods in scenes with low-illumination or
similar backgrounds. Most existing methods propose well-designed structures for
cross-modal fusion in RGB-T crowd counting. However, these methods have
difficulty in encoding cross-modal contextual semantic information in RGB-T
image pairs. Considering the aforementioned problem, we propose a two-stream
RGB-T crowd counting network called Multi-Attention Fusion Network (MAFNet),
which aims to fully capture long-range contextual information from the RGB and
thermal modalities based on the attention mechanism. Specifically, in the
encoder part, a Multi-Attention Fusion (MAF) module is embedded into different
stages of the two modality-specific branches for cross-modal fusion at the
global level. In addition, a Multi-modal Multi-scale Aggregation (MMA)
regression head is introduced to make full use of the multi-scale and
contextual information across modalities to generate high-quality crowd density
maps. Extensive experiments on two popular datasets show that the proposed
MAFNet is effective for RGB-T crowd counting and achieves the state-of-the-art
performance.
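As a rough sketch of attention-based cross-modal fusion, the PyTorch snippet below lets RGB tokens attend to thermal tokens with standard multi-head attention and a residual connection; the actual MAF wiring and the MMA head are not reproduced, and all sizes are placeholders.
```python
# Cross-modal fusion sketch: RGB features attend to thermal features.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb, thermal):          # (batch, tokens, dim) each
        fused, _ = self.attn(query=rgb, key=thermal, value=thermal)
        return self.norm(rgb + fused)         # residual fusion of modalities

rgb = torch.randn(2, 256, 64)                 # 16x16 feature map, flattened
thermal = torch.randn(2, 256, 64)
print(CrossModalFusion()(rgb, thermal).shape) # (2, 256, 64)
```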
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 02:42:09 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Chen",
"Pengyu",
""
],
[
"Gao",
"Junyu",
""
],
[
"Yuan",
"Yuan",
""
],
[
"Wang",
"Qi",
""
]
] |
new_dataset
| 0.984097 |
2208.06773
|
Medhini Narasimhan
|
Medhini Narasimhan, Arsha Nagrani, Chen Sun, Michael Rubinstein,
Trevor Darrell, Anna Rohrbach, Cordelia Schmid
|
TL;DW? Summarizing Instructional Videos with Task Relevance &
Cross-Modal Saliency
|
Accepted to ECCV 2022. Website: https://medhini.github.io/ivsum/
| null | null | null |
cs.CV cs.IR cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
YouTube users looking for instructions for a specific task may spend a long
time browsing content trying to find the right video that matches their needs.
Creating a visual summary (abridged version of a video) provides viewers with a
quick overview and massively reduces search time. In this work, we focus on
summarizing instructional videos, an under-explored area of video
summarization. In comparison to generic videos, instructional videos can be
parsed into semantically meaningful segments that correspond to important steps
of the demonstrated task. Existing video summarization datasets rely on manual
frame-level annotations, making them subjective and limited in size. To
overcome this, we first automatically generate pseudo summaries for a corpus of
instructional videos by exploiting two key assumptions: (i) relevant steps are
likely to appear in multiple videos of the same task (Task Relevance), and (ii)
they are more likely to be described by the demonstrator verbally (Cross-Modal
Saliency). We propose an instructional video summarization network that
combines a context-aware temporal video encoder and a segment scoring
transformer. Using pseudo summaries as weak supervision, our network constructs
a visual summary for an instructional video given only video and transcribed
speech. To evaluate our model, we collect a high-quality test set, WikiHow
Summaries, by scraping WikiHow articles that contain video demonstrations and
visual depictions of steps allowing us to obtain the ground-truth summaries. We
outperform several baselines and a state-of-the-art video summarization model
on this new benchmark.
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 04:07:40 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Narasimhan",
"Medhini",
""
],
[
"Nagrani",
"Arsha",
""
],
[
"Sun",
"Chen",
""
],
[
"Rubinstein",
"Michael",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Rohrbach",
"Anna",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.988552 |
2208.06802
|
Mrinal Rawat
|
Mrinal Rawat, Victor Barres
|
Real-time Caller Intent Detection In Human-Human Customer Support Spoken
Conversations
| null | null | null |
Accepted in Communication in Human-AI Interaction, IJCAI'22
|
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Agent assistance during human-human customer support spoken interactions
requires triggering workflows based on the caller's intent (reason for call).
Timeliness of prediction is essential for a good user experience. The goal is
for a system to detect the caller's intent at the time the agent would have
been able to detect it (Intent Boundary). Some approaches focus on predicting
the output offline, i.e. once the full spoken input (e.g. the whole
conversational turn) has been processed by the ASR system. This introduces an
undesirable latency in the prediction each time the intent could have been
detected earlier in the turn. Recent work on voice assistants has used
incremental real-time predictions at a word-by-word level to detect intent
before the end of a command. Human-directed and machine-directed speech however
have very different characteristics. In this work, we propose to apply a method
developed in the context of voice assistants to the problem of online real-time
caller intent detection in human-human spoken interactions. We use a dual
architecture in which two LSTMs are jointly trained: one predicting the Intent
Boundary (IB) and the other predicting the intent class at the IB. We conduct
our experiments on our private dataset comprising transcripts of human-human
telephone conversations from the telecom customer support domain. We report
results analyzing both the accuracy of our system as well as the impact of
different architectures on the trade-off between overall accuracy and
prediction latency.
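A minimal sketch of the dual architecture: one LSTM head scores every word as a potential Intent Boundary while the other classifies the intent at that word. Vocabulary, layer sizes, and the joint training loss are assumptions here.
```python
# Dual-LSTM sketch: per-word Intent Boundary scoring + intent classification.
import torch
import torch.nn as nn

class DualIntentModel(nn.Module):
    def __init__(self, vocab=5000, emb=64, hid=128, n_intents=20):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.ib_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.cls_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.ib_head = nn.Linear(hid, 2)           # boundary / not, per word
        self.cls_head = nn.Linear(hid, n_intents)  # intent class, per word

    def forward(self, tokens):                     # tokens: (batch, seq)
        e = self.emb(tokens)
        ib_logits = self.ib_head(self.ib_lstm(e)[0])
        cls_logits = self.cls_head(self.cls_lstm(e)[0])
        return ib_logits, cls_logits               # read intent where IB fires

ib, cls = DualIntentModel()(torch.randint(0, 5000, (2, 12)))
print(ib.shape, cls.shape)                         # (2, 12, 2) (2, 12, 20)
```
At inference the model runs word by word on the ASR stream, so the intent can be emitted as soon as the boundary head fires rather than at the end of the turn.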
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 07:50:23 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Rawat",
"Mrinal",
""
],
[
"Barres",
"Victor",
""
]
] |
new_dataset
| 0.960162 |
2208.06804
|
Shruti Praveen Jain
|
Neetigya Poddar, Shruti Jain
|
Light Weight Character and Shape Recognition for Autonomous Drones
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been an extensive use of Unmanned Aerial Vehicles in search and
rescue missions to distribute first aid kits and food packets. It is important
that these UAVs are able to identify and distinguish the markers from one
another for effective distribution. One of the common ways to mark the
locations is via the use of characters superimposed on shapes of various colors
which gives rise to a wide variety of markers based on combinations of different
shapes, characters, and their respective colors.
In this paper, we propose an object detection and classification pipeline
which prevents false positives and minimizes misclassification of alphanumeric
characters and shapes in aerial images. Our method makes use of traditional
computer vision techniques and unsupervised machine learning methods for
identifying region proposals, segmenting the image targets and removing false
positives. We make use of a computationally light model for classification,
making it easy to be deployed on any aerial vehicle.
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 08:22:41 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Poddar",
"Neetigya",
""
],
[
"Jain",
"Shruti",
""
]
] |
new_dataset
| 0.995071 |
2208.06823
|
Kacper Sokol
|
Peter Flach and Kacper Sokol
|
Simply Logical -- Intelligent Reasoning by Example (Fully Interactive
Online Edition)
|
The online edition is available at https://book.simply-logical.space/
| null |
10.5281/zenodo.1156977
| null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
"Simply Logical -- Intelligent Reasoning by Example" by Peter Flach was first
published by John Wiley in 1994. It could be purchased as book-only or with a
3.5 inch diskette containing the SWI-Prolog programmes printed in the book (for
various operating systems). In 2007 the copyright reverted to the author
at which point the book and programmes were made freely available online; the
print version is no longer distributed through John Wiley publishers. In 2015,
as a pilot, we ported most of the original book into an online, interactive
website using SWI-Prolog's SWISH platform. Since then, we launched the Simply
Logical open source organisation committed to maintaining a suite of freely
available interactive online educational resources about Artificial
Intelligence and Logic Programming with Prolog. With the advent of new
educational technologies we were inspired to rebuild the book from the ground
up using the Jupyter Book platform enhanced with a collection of bespoke
plugins that implement, among other things, interactive SWI-Prolog code blocks
that can be executed directly in a web browser. This new version is more
modular, easier to maintain, and can be split into custom teaching modules, in
addition to being modern-looking, visually appealing, and compatible with a
range of (mobile) devices of varying screen sizes.
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 10:32:13 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Flach",
"Peter",
""
],
[
"Sokol",
"Kacper",
""
]
] |
new_dataset
| 0.952242 |
2208.06827
|
Molla Rashied Hussein
|
Ayman Hasib, Saqib Sizan Khan, Jannatul Ferdous Eva, Mst. Nipa Khatun,
Ashraful Haque, Nishat Shahrin, Rashik Rahman, Hasan Murad, Md. Rajibul
Islam, Molla Rashied Hussein
|
BDSL 49: A Comprehensive Dataset of Bangla Sign Language
|
16 pages; 6 figures; Submitted to Data in Brief, a multidisciplinary,
open-access and peer-reviewed journal for reviewing
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Language is a method by which individuals express their thoughts. Each
language has its own set of alphabetic and numeric characters. People can
communicate with one another through either oral or written communication.
However, each language has a sign language counterpart. Individuals who are
deaf and/or mute communicate through sign language. The Bangla language also
has a sign language, which is called BDSL. The dataset consists of Bangla hand
sign images covering the 49 individual letters of the Bangla alphabet in
sign language. BDSL49 consists of 29,490 images with 49
labels. Images of 14 different adult individuals, each with a distinct
background and appearance, have been recorded during data collection. Several
strategies have been used to eliminate noise from datasets during preparation.
This dataset is available to researchers for free. They can develop automated
systems using machine learning, computer vision, and deep learning techniques.
In addition, two models were used in this dataset. The first is for detection,
while the second is for recognition.
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 10:54:49 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Hasib",
"Ayman",
""
],
[
"Khan",
"Saqib Sizan",
""
],
[
"Eva",
"Jannatul Ferdous",
""
],
[
"Khatun",
"Mst. Nipa",
""
],
[
"Haque",
"Ashraful",
""
],
[
"Shahrin",
"Nishat",
""
],
[
"Rahman",
"Rashik",
""
],
[
"Murad",
"Hasan",
""
],
[
"Islam",
"Md. Rajibul",
""
],
[
"Hussein",
"Molla Rashied",
""
]
] |
new_dataset
| 0.999841 |
2208.06832
|
Nuh Aydin
|
Nuh Aydin, Yiang Lu, Vishad R. Onta
|
An Updated Database of $\mathbb{Z}_4$ Codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Research on codes over finite rings has intensified since the discovery in
1994 of the fact that some best binary non-linear codes can be obtained as
images of $\mathbb{Z}_4$-linear codes. Codes over many different finite rings
have been a subject of much research in coding theory since this discovery. Many
of these rings are extensions of $\mathbb{Z}_4$. As a result, an online
database of $\mathbb{Z}_4$ codes was created in 2008. The URL of the original
database on $\mathbb{Z}_4$ codes has recently changed. The purpose of this
paper is to introduce the new, updated database of $\mathbb{Z}_4$ codes. We
have made major updates to the database by adding 8701 new linear codes over
$\mathbb{Z}_4$. These codes have been found through exhaustive computer
searches on cyclic codes and by an implementation of the ASR search algorithm
that has been remarkably fruitful to obtain new linear codes from the class of
quasi-cyclic (QC) and quasi-twisted (QT) codes over finite fields. We made
modifications to the ASR algorithm to make it work over $\mathbb{Z}_4$. The
initial database contained few codes that were not free. We have added a large
number of non-free codes. In fact, of the 8701 codes we have added, 7631 of
them are non-free.
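For readers new to codes over $\mathbb{Z}_4$, a small worked example: enumerate the codewords generated by a toy generator matrix modulo 4 and compute the minimum Lee weight (the Lee weights of 0, 1, 2, 3 are 0, 1, 2, 1). The matrix below is illustrative, not a database entry.
```python
# Enumerate a toy linear code over Z4 and report its minimum Lee weight.
import itertools
import numpy as np

G = np.array([[1, 0, 1, 2],
              [0, 1, 3, 1]])                     # toy generator matrix over Z4
LEE = np.array([0, 1, 2, 1])                     # Lee weights of 0,1,2,3

codewords = {tuple((np.array(m) @ G) % 4)
             for m in itertools.product(range(4), repeat=2)}
d_lee = min(int(LEE[np.array(c)].sum()) for c in codewords if any(c))
print(len(codewords), "codewords, minimum Lee weight", d_lee)
```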
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 11:33:47 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Aydin",
"Nuh",
""
],
[
"Lu",
"Yiang",
""
],
[
"Onta",
"Vishad R.",
""
]
] |
new_dataset
| 0.998453 |
2208.06888
|
Mubashir Noman
|
Mubashir Noman, Wafa Al Ghallabi, Daniya Najiha, Christoph Mayer,
Akshay Dudhane, Martin Danelljan, Hisham Cholakkal, Salman Khan, Luc Van
Gool, Fahad Shahbaz Khan
|
AVisT: A Benchmark for Visual Object Tracking in Adverse Visibility
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
One of the key factors behind the recent success in visual tracking is the
availability of dedicated benchmarks. While greatly benefiting tracking
research, existing benchmarks no longer pose the same difficulty as before,
with recent trackers achieving higher performance mainly due to (i) the
introduction of more sophisticated transformer-based methods and (ii) the lack
of diverse scenarios with adverse visibility, such as severe weather
conditions, camouflage, and imaging effects.
We introduce AVisT, a dedicated benchmark for visual tracking in diverse
scenarios with adverse visibility. AVisT comprises 120 challenging sequences
with 80k annotated frames, spanning 18 diverse scenarios broadly grouped into
five attributes with 42 object categories. The key contribution of AVisT is
diverse and challenging scenarios covering severe weather conditions such as
dense fog, heavy rain, and sandstorm; obstruction effects including fire, sun
glare, and splashing water; adverse imaging effects such as low light; and
target effects including small targets and distractor objects, along with camouflage.
We further benchmark 17 popular and recent trackers on AVisT with detailed
analysis of their tracking performance across attributes, demonstrating a big
room for improvement in performance. We believe that AVisT can greatly benefit
the tracking community by complementing the existing benchmarks, in developing
new creative tracking solutions in order to continue pushing the boundaries of
the state-of-the-art. Our dataset along with the complete tracking performance
evaluation is available at: https://github.com/visionml/pytracking
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 17:49:37 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Noman",
"Mubashir",
""
],
[
"Ghallabi",
"Wafa Al",
""
],
[
"Najiha",
"Daniya",
""
],
[
"Mayer",
"Christoph",
""
],
[
"Dudhane",
"Akshay",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Khan",
"Salman",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Khan",
"Fahad Shahbaz",
""
]
] |
new_dataset
| 0.999696 |
2208.06936
|
Sophia Althammer
|
Sophia Althammer, Sebastian Hofst\"atter, Suzan Verberne, Allan
Hanbury
|
TripJudge: A Relevance Judgement Test Collection for TripClick Health
Retrieval
|
To be published at CIKM 2022 as resource paper
| null |
10.1145/3511808.3557714
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Robust test collections are crucial for Information Retrieval research.
Recently there is a growing interest in evaluating retrieval systems for
domain-specific retrieval tasks, however these tasks often lack a reliable test
collection with human-annotated relevance assessments following the Cranfield
paradigm. In the medical domain, the TripClick collection was recently
proposed, which contains click log data from the Trip search engine and
includes two click-based test sets. However, the clicks are biased toward the
retrieval model used, which remains unknown, and a previous study shows that
the test sets have a low judgement coverage for the Top-10 results of lexical
and neural retrieval models. In this paper we present the novel, relevance
judgement test collection TripJudge for TripClick health retrieval. We collect
relevance judgements in an annotation campaign and ensure the quality and
reusability of TripJudge by a variety of ranking methods for pool creation, by
multiple judgements per query-document pair and by an at least moderate
inter-annotator agreement. We compare system evaluation with TripJudge and
TripClick and find that click- and judgement-based evaluation can lead to
substantially different system rankings.
|
[
{
"version": "v1",
"created": "Sun, 14 Aug 2022 23:11:25 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Althammer",
"Sophia",
""
],
[
"Hofstätter",
"Sebastian",
""
],
[
"Verberne",
"Suzan",
""
],
[
"Hanbury",
"Allan",
""
]
] |
new_dataset
| 0.979169 |
2208.06962
|
Bingqing Zhang
|
Yaxian Li, Bingqing Zhang, Guoping Zhao, Mingyu Zhang, Jiajun Liu,
Ziwei Wang, and Jirong Wen
|
InvisibiliTee: Angle-agnostic Cloaking from Person-Tracking Systems with
a Tee
|
12 pages, 10 figures and the ICANN 2022 accpeted paper
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
After a survey of person-tracking system-induced privacy concerns, we
propose a black-box adversarial attack method on state-of-the-art human
detection models called InvisibiliTee. The method learns printable adversarial
patterns for T-shirts that cloak wearers in the physical world in front of
person-tracking systems. We design an angle-agnostic learning scheme which
utilizes segmentation of the fashion dataset and a geometric warping process so
the adversarial patterns generated are effective in fooling person detectors
from all camera angles and for unseen black-box detection models. Empirical
results in both digital and physical environments show that with the
InvisibiliTee on, person-tracking systems' ability to detect the wearer drops
significantly.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 01:32:09 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Li",
"Yaxian",
""
],
[
"Zhang",
"Bingqing",
""
],
[
"Zhao",
"Guoping",
""
],
[
"Zhang",
"Mingyu",
""
],
[
"Liu",
"Jiajun",
""
],
[
"Wang",
"Ziwei",
""
],
[
"Wen",
"Jirong",
""
]
] |
new_dataset
| 0.998806 |
2208.07094
|
Martin Aleksandrov D
|
Martin Damyanov Aleksandrov
|
Fair Division meets Vehicle Routing: Fairness for Drivers with Monotone
Profits
|
13 pages, 3 figures, IV 2022
| null |
10.1109/IV51971.2022.9827432
| null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new model for fair division and vehicle routing, where drivers
have monotone profit preferences, and their vehicles have feasibility
constraints, for customer requests. For this model, we design two new axiomatic
notions for fairness for drivers: FEQ1 and FEF1. FEQ1 encodes driver pairwise
bounded equitability. FEF1 encodes driver pairwise bounded envy freeness. We
compare FEQ1 and FEF1 with popular fair division notions such as EQ1 and EF1.
We also give algorithms for guaranteeing FEQ1 and FEF1, respectively.
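Since FEQ1 and FEF1 are new notions defined in the paper, the sketch below instead shows the classic EF1 check they are modeled on, assuming additive profits for simplicity (the paper allows general monotone profit preferences).

```python
def ef1(bundles, profit):
    """bundles: list of item lists per driver; profit[i][g]: driver i's
    profit for item g (additive simplification)."""
    for i, own in enumerate(bundles):
        v_own = sum(profit[i][g] for g in own)
        for j, other in enumerate(bundles):
            if i == j or not other:
                continue
            v_other = sum(profit[i][g] for g in other)
            # envy must vanish after dropping some single item from j's bundle
            if v_own < v_other and all(v_own < v_other - profit[i][g]
                                       for g in other):
                return False
    return True

print(ef1([[0, 2], [1]], profit=[[5, 1, 3], [2, 4, 0]]))  # -> True
```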
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 09:53:50 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Aleksandrov",
"Martin Damyanov",
""
]
] |
new_dataset
| 0.978837 |
2208.07167
|
Carole Sudre
|
Carole H. Sudre, Kimberlin Van Wijnen, Florian Dubost, Hieab Adams,
David Atkinson, Frederik Barkhof, Mahlet A. Birhanu, Esther E. Bron, Robin
Camarasa, Nish Chaturvedi, Yuan Chen, Zihao Chen, Shuai Chen, Qi Dou, Tavia
Evans, Ivan Ezhov, Haojun Gao, Marta Girones Sanguesa, Juan Domingo Gispert,
Beatriz Gomez Anson, Alun D. Hughes, M. Arfan Ikram, Silvia Ingala, H. Rolf
Jaeger, Florian Kofler, Hugo J. Kuijf, Denis Kutnar, Minho Lee, Bo Li, Luigi
Lorenzini, Bjoern Menze, Jose Luis Molinuevo, Yiwei Pan, Elodie Puybareau,
Rafael Rehwald, Ruisheng Su, Pengcheng Shi, Lorna Smith, Therese Tillin,
Guillaume Tochon, Helene Urien, Bas H.M. van der Velden, Isabelle F. van der
Velpen, Benedikt Wiestler, Frank J. Wolters, Pinar Yilmaz, Marius de Groot,
Meike W. Vernooij, Marleen de Bruijne (for the ALFA study)
|
Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at
MICCAI 2021
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Imaging markers of cerebral small vessel disease provide valuable information
on brain health, but their manual assessment is time-consuming and hampered by
substantial intra- and interrater variability. Automated rating may benefit
biomedical research, as well as clinical assessment, but diagnostic reliability
of existing algorithms is unknown. Here, we present the results of the
\textit{VAscular Lesions DetectiOn and Segmentation} (\textit{Where is VALDO?})
challenge that was run as a satellite event at the international conference on
Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This
challenge aimed to promote the development of methods for automated detection
and segmentation of small and sparse imaging markers of cerebral small vessel
disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral
microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3) while
leveraging weak and noisy labels. Overall, 12 teams participated in the
challenge proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for
Task 2 - Microbleeds and 6 for Task 3 - Lacunes). Multi-cohort data was used in
both training and evaluation. Results showed a large variability in performance
both across teams and across tasks, with promising results notably for Task 1 -
EPVS and Task 2 - Microbleeds and not practically useful results yet for Task 3
- Lacunes. It also highlighted the performance inconsistency across cases that
may deter use at an individual level, while still proving useful at a
population level.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 13:09:38 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Sudre",
"Carole H.",
"",
"for the ALFA study"
],
[
"Van Wijnen",
"Kimberlin",
"",
"for the ALFA study"
],
[
"Dubost",
"Florian",
"",
"for the ALFA study"
],
[
"Adams",
"Hieab",
"",
"for the ALFA study"
],
[
"Atkinson",
"David",
"",
"for the ALFA study"
],
[
"Barkhof",
"Frederik",
"",
"for the ALFA study"
],
[
"Birhanu",
"Mahlet A.",
"",
"for the ALFA study"
],
[
"Bron",
"Esther E.",
"",
"for the ALFA study"
],
[
"Camarasa",
"Robin",
"",
"for the ALFA study"
],
[
"Chaturvedi",
"Nish",
"",
"for the ALFA study"
],
[
"Chen",
"Yuan",
"",
"for the ALFA study"
],
[
"Chen",
"Zihao",
"",
"for the ALFA study"
],
[
"Chen",
"Shuai",
"",
"for the ALFA study"
],
[
"Dou",
"Qi",
"",
"for the ALFA study"
],
[
"Evans",
"Tavia",
"",
"for the ALFA study"
],
[
"Ezhov",
"Ivan",
"",
"for the ALFA study"
],
[
"Gao",
"Haojun",
"",
"for the ALFA study"
],
[
"Sanguesa",
"Marta Girones",
"",
"for the ALFA study"
],
[
"Gispert",
"Juan Domingo",
"",
"for the ALFA study"
],
[
"Anson",
"Beatriz Gomez",
"",
"for the ALFA study"
],
[
"Hughes",
"Alun D.",
"",
"for the ALFA study"
],
[
"Ikram",
"M. Arfan",
"",
"for the ALFA study"
],
[
"Ingala",
"Silvia",
"",
"for the ALFA study"
],
[
"Jaeger",
"H. Rolf",
"",
"for the ALFA study"
],
[
"Kofler",
"Florian",
"",
"for the ALFA study"
],
[
"Kuijf",
"Hugo J.",
"",
"for the ALFA study"
],
[
"Kutnar",
"Denis",
"",
"for the ALFA study"
],
[
"Lee",
"Minho",
"",
"for the ALFA study"
],
[
"Li",
"Bo",
"",
"for the ALFA study"
],
[
"Lorenzini",
"Luigi",
"",
"for the ALFA study"
],
[
"Menze",
"Bjoern",
"",
"for the ALFA study"
],
[
"Molinuevo",
"Jose Luis",
"",
"for the ALFA study"
],
[
"Pan",
"Yiwei",
"",
"for the ALFA study"
],
[
"Puybareau",
"Elodie",
"",
"for the ALFA study"
],
[
"Rehwald",
"Rafael",
"",
"for the ALFA study"
],
[
"Su",
"Ruisheng",
"",
"for the ALFA study"
],
[
"Shi",
"Pengcheng",
"",
"for the ALFA study"
],
[
"Smith",
"Lorna",
"",
"for the ALFA study"
],
[
"Tillin",
"Therese",
"",
"for the ALFA study"
],
[
"Tochon",
"Guillaume",
"",
"for the ALFA study"
],
[
"Urien",
"Helene",
"",
"for the ALFA study"
],
[
"van der Velden",
"Bas H. M.",
"",
"for the ALFA study"
],
[
"van der Velpen",
"Isabelle F.",
"",
"for the ALFA study"
],
[
"Wiestler",
"Benedikt",
"",
"for the ALFA study"
],
[
"Wolters",
"Frank J.",
"",
"for the ALFA study"
],
[
"Yilmaz",
"Pinar",
"",
"for the ALFA study"
],
[
"de Groot",
"Marius",
"",
"for the ALFA study"
],
[
"Vernooij",
"Meike W.",
"",
"for the ALFA study"
],
[
"de Bruijne",
"Marleen",
"",
"for the ALFA study"
]
] |
new_dataset
| 0.960312 |
2208.07250
|
Mark Stamp
|
Eric Liang and Mark Stamp
|
Predicting Pedestrian Crosswalk Behavior Using Convolutional Neural
Networks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A common yet potentially dangerous task is the act of crossing the street.
Pedestrian accidents contribute a significant amount to the high number of
annual traffic casualties, which is why it is crucial for pedestrians to use
safety measures such as a crosswalk. However, people often forget to activate a
crosswalk light or are unable to do so -- such as those who are visually
impaired or have occupied hands. Other pedestrians are simply careless and find
the crosswalk signals a hassle, which can result in an accident where a car
hits them. In this paper, we consider an improvement to the crosswalk system by
designing a system that can detect pedestrians and trigger the crosswalk
signal automatically. We collect a dataset of images that we then use to train
a convolutional neural network to distinguish between pedestrians (including
bicycle riders) and various false alarms. The resulting system can capture and
evaluate images in real time, and the result can be used to automatically
activate a crosswalk light. After extensive testing of our system in
real-world environments, we conclude that it is feasible as a back-up system
that can complement existing crosswalk buttons, and thereby improve the overall
safety of crossing the street.
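The paper's exact architecture is not given here, so the following is an illustrative PyTorch sketch of a small binary CNN of the kind described (pedestrian vs. false alarm).

```python
import torch.nn as nn

class CrosswalkNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # pedestrian vs. false alarm

    def forward(self, x):                   # x: (B, 3, H, W) camera frames
        return self.classifier(self.features(x).flatten(1))
```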
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 22:48:22 GMT"
}
] | 2022-08-16T00:00:00 |
[
[
"Liang",
"Eric",
""
],
[
"Stamp",
"Mark",
""
]
] |
new_dataset
| 0.987867 |
1802.09575
|
David K\"ugler
|
David K\"ugler, Jannik Sehring, Andrei Stefanov, Igor Stenin, Julia
Kristin, Thomas Klenzner, J\"org Schipper, Anirban Mukhopadhyay
|
i3PosNet: Instrument Pose Estimation from X-Ray in temporal bone surgery
|
Accepted at International journal of computer assisted radiology and
surgery pending publication
| null |
10.1007/s11548-020-02157-4
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: Accurate estimation of the position and orientation (pose) of
surgical instruments is crucial for delicate minimally invasive temporal bone
surgery. Current techniques either lack accuracy or impose line-of-sight
constraints (conventional tracking systems), or expose the patient to
prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the
instrument with a c-arm at irregular intervals and recover the pose from the
image.
Methods: i3PosNet infers the position and orientation of instruments from
images using a pose estimation network. Said framework considers localized
patches and outputs pseudo-landmarks. The pose is reconstructed from
pseudo-landmarks by geometric considerations.
Results: We show i3PosNet reaches errors less than 0.05mm. It outperforms
conventional image registration-based approaches reducing average and maximum
errors by at least two thirds. i3PosNet trained on synthetic images generalizes
to real x-rays without any further adaptation.
Conclusion: The translation of Deep Learning based methods to surgical
applications is difficult, because large representative datasets for training
and testing are not available. This work empirically shows sub-millimeter pose
estimation trained solely based on synthetic training data.
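A minimal sketch of the geometric step, assuming two pseudo-landmarks along the instrument axis; i3PosNet's actual reconstruction uses more landmarks and is specified in the paper.

```python
import numpy as np

def pose_from_landmarks(tip, tail):
    """tip, tail: 2D pseudo-landmarks along the instrument axis (pixels)."""
    direction = tip - tail
    angle = np.degrees(np.arctan2(direction[1], direction[0]))
    return tip, angle                       # position and in-plane orientation

pos, ang = pose_from_landmarks(np.array([412.3, 250.1]),
                               np.array([398.7, 242.4]))
```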
|
[
{
"version": "v1",
"created": "Mon, 26 Feb 2018 20:00:40 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Mar 2020 18:51:15 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Kügler",
"David",
""
],
[
"Sehring",
"Jannik",
""
],
[
"Stefanov",
"Andrei",
""
],
[
"Stenin",
"Igor",
""
],
[
"Kristin",
"Julia",
""
],
[
"Klenzner",
"Thomas",
""
],
[
"Schipper",
"Jörg",
""
],
[
"Mukhopadhyay",
"Anirban",
""
]
] |
new_dataset
| 0.989436 |
2012.06506
|
Mike Papadakis
|
Ahmed Khanfir, Anil Koyuncu, Mike Papadakis, Maxime Cordy, Tegawend\'e
F. Bissyand\'e, Jacques Klein, Yves Le Traon
|
IBIR: Bug Report driven Fault Injection
| null | null |
10.1145/3542946
| null |
cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Much research on software engineering and software testing relies on
experimental studies based on fault injection. Fault injection, however, often
fails to emulate real-world software faults since it "blindly" injects
large numbers of faults. Indeed, it remains challenging to inject few but
realistic faults that target a particular functionality in a program. In this
work, we introduce IBIR, a fault injection tool that addresses this challenge
by exploring change patterns associated with user-reported faults. To inject
realistic faults, we create mutants by retargeting a bug report driven
automated program repair system, i.e., reversing its code transformation
templates. IBIR is further appealing in practice since it requires deep
knowledge of neither the code nor the tests, but just of the program's
relevant bug reports. Thus, our approach focuses the fault injection on the
feature targeted by the bug report. We assess IBIR by considering the Defects4J
dataset. Experimental results show that our approach outperforms the fault
injection performed by traditional mutation testing in terms of semantic
similarity with the original bug, when applied at either system or class levels
of granularity, and provides better, statistically significant, estimations of
test effectiveness (fault detection). Additionally, when injecting 100 faults,
IBIR injects faults that couple with the real ones in 36% of the cases, while
mutants from mutation testing inject less than 1%. Overall, IBIR targets real
functionality and injects realistic and diverse faults.
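As a toy illustration of "reversing" a repair template, the snippet below undoes a common automated-program-repair pattern (wrapping a statement in a null check) to inject a fault; real IBIR templates operate on ASTs rather than regexes.

```python
import re

def remove_null_check(java_src: str) -> str:
    """Inverse of the APR template 'wrap statement in a null check'.
    Assumption: only handles the simple single-statement pattern."""
    pattern = r"if\s*\(\s*\w+\s*!=\s*null\s*\)\s*\{\s*([^{}]*?)\s*\}"
    return re.sub(pattern, r"\1", java_src)

print(remove_null_check("if (user != null) { user.save(); }"))
# -> "user.save();"
```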
|
[
{
"version": "v1",
"created": "Fri, 11 Dec 2020 17:19:18 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Khanfir",
"Ahmed",
""
],
[
"Koyuncu",
"Anil",
""
],
[
"Papadakis",
"Mike",
""
],
[
"Cordy",
"Maxime",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
],
[
"Klein",
"Jacques",
""
],
[
"Traon",
"Yves Le",
""
]
] |
new_dataset
| 0.999306 |
2103.11742
|
Daniel Schleich
|
Daniel Schleich, Marius Beul, Jan Quenzel, Sven Behnke
|
Autonomous Flight in Unknown GNSS-denied Environments for Disaster
Examination
| null |
In Proceedings of International Conference on Unmanned Aircraft
Systems (ICUAS), Athens, Greece, June 2021
|
10.1109/ICUAS51884.2021.9476790
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Micro aerial vehicles (MAVs) have high potential for information gathering
tasks to support situation awareness in search and rescue scenarios. Manually
controlling MAVs in such scenarios requires experienced pilots and is
error-prone, especially in stressful situations of real emergencies. The
conditions of disaster scenarios are also challenging for autonomous MAV
systems. The environment is usually not known in advance and GNSS might not
always be available.
We present a system for autonomous MAV flights in unknown environments which
does not rely on global positioning systems. The method is evaluated in
multiple search and rescue scenarios and allows for safe autonomous flights,
even when transitioning between indoor and outdoor areas.
|
[
{
"version": "v1",
"created": "Mon, 22 Mar 2021 11:49:27 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Aug 2022 08:45:07 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Schleich",
"Daniel",
""
],
[
"Beul",
"Marius",
""
],
[
"Quenzel",
"Jan",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.997189 |
2111.02706
|
Flip van Spaendonck Msc.
|
J.F. Groote, M. Laveaux, P.H.M. van Spaendonck (Eindhoven University
of Technology)
|
A thread-safe Term Library
| null | null | null | null |
cs.DC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Terms are one of the fundamental mathematical concepts in computing; e.g.,
every expression characterisable by a context-free grammar is a term. We
developed a thread-safe Term Library. The biggest challenge is to implement
hyper-efficient multi-reader/single-writer mutual exclusion for which we
designed the new busy-forbidden protocol. Model checking is used to show both
the correctness of the protocol and the Term Library. Benchmarks show this Term
Library has little overhead compared to sequential versions and outperforms
them already on two processors. Using the new library in an existing state
space generation tool, very substantial speed-ups can be obtained.
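For contrast with the paper's specialised busy-forbidden protocol, here is a textbook multi-reader/single-writer lock in Python; it only illustrates the access pattern the Term Library must support, not the protocol itself.

```python
import threading

class RWLock:
    """Classic readers-preference RW lock (writers may starve)."""
    def __init__(self):
        self._readers = 0
        self._lock = threading.Lock()        # guards the reader count
        self._writer = threading.Lock()      # held while writing

    def acquire_read(self):
        with self._lock:
            self._readers += 1
            if self._readers == 1:
                self._writer.acquire()       # first reader blocks writers

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer.release()       # last reader admits writers

    def acquire_write(self):
        self._writer.acquire()

    def release_write(self):
        self._writer.release()
```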
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 09:30:27 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Aug 2022 16:27:21 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Groote",
"J. F.",
"",
"Eindhoven University\n of Technology"
],
[
"Laveaux",
"M.",
"",
"Eindhoven University\n of Technology"
],
[
"van Spaendonck",
"P. H. M.",
"",
"Eindhoven University\n of Technology"
]
] |
new_dataset
| 0.988347 |
2111.11813
|
Mohamad Hejazi Dinan
|
Mohamad H. Dinan, Nemanja Stefan Perovic, and Mark F. Flanagan
|
RIS-Assisted Receive Quadrature Space-Shift Keying: A New Paradigm and
Performance Analysis
|
16 pages (double column), 6 figures
| null |
10.1109/TCOMM.2022.3198117
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surfaces (RISs) represent a promising candidate
for sixth-generation (6G) wireless networks, as the RIS technology provides a
new solution to control the propagation channel in order to improve the
efficiency of a wireless link through enhancing the received signal power. In
this paper, we propose RIS-assisted receive quadrature space-shift keying
(RIS-RQSSK), which enhances the spectral efficiency of an RIS-based index
modulation (IM) system by using the real and imaginary dimensions independently
for the purpose of IM. Therefore, the error rate performance of the system is
improved as all RIS elements reflect the incident transmit signal toward both
selected receive antennas. At the receiver, a low-complexity but effective
greedy detector (GD) can be employed which determines the maximum energy per
dimension at the receive antennas. A max-min optimization problem is defined to
maximize the received signal-to-noise ratio (SNR) components at both selected
receive antennas; an analytical solution is provided based on Lagrange duality.
In particular, the multi-variable optimization problem is shown to reduce to
the solution of a single-variable equation, which results in a very simple
design procedure. In addition, we investigate the average bit error probability
(ABEP) of the proposed RIS-RQSSK system and derive a closed-form approximate
upper bound on the ABEP. We also provide extensive numerical simulations to
validate our derivations. Numerical results show that the proposed RIS-RQSSK
scheme substantially outperforms recent prominent benchmark schemes. This
enhancement considerably increases with an increasing number of receive
antennas.
|
[
{
"version": "v1",
"created": "Tue, 23 Nov 2021 12:06:05 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Aug 2022 09:34:40 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Dinan",
"Mohamad H.",
""
],
[
"Perovic",
"Nemanja Stefan",
""
],
[
"Flanagan",
"Mark F.",
""
]
] |
new_dataset
| 0.957122 |
2201.01364
|
Luciana Ferrer
|
Luciana Ferrer, Diego Castan, Mitchell McLaren, Aaron Lawson
|
A Discriminative Hierarchical PLDA-based Model for Spoken Language
Recognition
| null |
IEEE/ACM Transactions on Audio, Speech, and Language Processing,
vol. 30, pp. 2396-2410, 2022
|
10.1109/TASLP.2022.3190736
| null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spoken language recognition (SLR) refers to the automatic process used to
determine the language present in a speech sample. SLR is an important task in
its own right, for example, as a tool to analyze or categorize large amounts of
multi-lingual data. Further, it is also an essential tool for selecting
downstream applications in a workflow, for example, to choose appropriate
speech recognition or machine translation models. SLR systems are usually
composed of two stages, one where an embedding representing the audio sample is
extracted and a second one which computes the final scores for each language.
In this work, we approach the SLR task as a detection problem and implement the
second stage as a probabilistic linear discriminant analysis (PLDA) model. We
show that discriminative training of the PLDA parameters gives large gains with
respect to the usual generative training. Further, we propose a novel
hierarchical approach where two PLDA models are trained, one to generate scores
for clusters of highly-related languages and a second one to generate scores
conditional to each cluster. The final language detection scores are computed
as a combination of these two sets of scores. The complete model is trained
discriminatively to optimize a cross-entropy objective. We show that this
hierarchical approach consistently outperforms the non-hierarchical one for
detection of highly related languages, in many cases by large margins. We train
our systems on a collection of datasets including over 100 languages, and test
them both on matched and mismatched conditions, showing that the gains are
robust to condition mismatch.
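A schematic of the hierarchical score combination, assuming the final log-likelihood ratio is the sum of a cluster score and a within-cluster conditional score; the paper's discriminatively trained combination may differ.

```python
import numpy as np

def hierarchical_scores(cluster_llrs, cond_llrs, lang_to_cluster):
    """cluster_llrs: one score per language cluster; cond_llrs: one score per
    language, conditional on its own cluster."""
    return np.array([cluster_llrs[lang_to_cluster[l]] + cond_llrs[l]
                     for l in range(len(cond_llrs))])

# two clusters of highly related languages, four languages in total
final = hierarchical_scores(np.array([1.2, -0.3]),
                            np.array([0.5, -0.1, 0.9, -1.4]),
                            {0: 0, 1: 0, 2: 1, 3: 1})
```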
|
[
{
"version": "v1",
"created": "Tue, 4 Jan 2022 22:10:36 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Aug 2022 21:21:22 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Ferrer",
"Luciana",
""
],
[
"Castan",
"Diego",
""
],
[
"McLaren",
"Mitchell",
""
],
[
"Lawson",
"Aaron",
""
]
] |
new_dataset
| 0.981035 |
2207.04040
|
Yunus Can G\"ultekin
|
Yunus Can G\"ultekin, Frans M. J. Willems, Alex Alvarado
|
Log-CCDM: Distribution Matching via Multiplication-free Arithmetic
Coding
|
6 pages, 4 figures, presented at the ISIT 2022
| null |
10.1109/ISIT50566.2022.9834834
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Recent years have seen renewed attention to arithmetic coding (AC). This is
thanks to the use of AC for distribution matching (DM) to control the channel
input distribution in probabilistic amplitude shaping. There are two main
problems inherent to AC: (1) its required arithmetic precision grows linearly
with the input length, and (2) high-precision multiplications and divisions are
required. Here, we introduce a multiplication-free AC-based DM technique via
three lookup tables (LUTs) which solves both problems above. These LUTs are
used to approximate the high-precision multiplications and divisions by
additions and subtractions. The required precision of our approach is shown to
grow logarithmically with the input length. We prove that this approximate
technique maintains the invertibility of DM. At an input length of 1024
symbols, the proposed technique achieves negligible rate loss ($<0.01$ bit/sym)
against the full-precision DM, while requiring less than 4 kilobytes of
storage.
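The core trick can be sketched as follows: replace a high-precision multiplication by table lookups and an addition in the log domain. Table sizes and quantisation below are illustrative assumptions, not the paper's three-LUT design inside the arithmetic coder.

```python
import math

N = 1 << 12                                  # LUT resolution (assumption)
LOG = [0.0] + [math.log2(i) for i in range(1, N)]   # LOG[0] is a dummy guard
EXP = [2.0 ** (i / 256.0) for i in range(N)]        # fixed-point exponent grid

def approx_mul(a: int, b: int) -> float:
    """Approximate a * b (a, b >= 1) with two lookups and one addition."""
    idx = int((LOG[a] + LOG[b]) * 256.0)
    return EXP[idx] if idx < N else 2.0 ** (idx / 256.0)

print(approx_mul(37, 85), 37 * 85)           # ~3141 vs. 3145
```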
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 17:55:19 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Gültekin",
"Yunus Can",
""
],
[
"Willems",
"Frans M. J.",
""
],
[
"Alvarado",
"Alex",
""
]
] |
new_dataset
| 0.998615 |
2207.09814
|
Chenfei Wu
|
Chenfei Wu, Jian Liang, Xiaowei Hu, Zhe Gan, Jianfeng Wang, Lijuan
Wang, Zicheng Liu, Yuejian Fang, Nan Duan
|
NUWA-Infinity: Autoregressive over Autoregressive Generation for
Infinite Visual Synthesis
|
24 pages, 19 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present NUWA-Infinity, a generative model for infinite
visual synthesis, which is defined as the task of generating arbitrarily-sized
high-resolution images or long-duration videos. An autoregressive over
autoregressive generation mechanism is proposed to deal with this variable-size
generation task, where a global patch-level autoregressive model considers the
dependencies between patches, and a local token-level autoregressive model
considers dependencies between visual tokens within each patch. A Nearby
Context Pool (NCP) is introduced to cache related patches already generated as
the context for the current patch being generated, which can significantly save
computation costs without sacrificing patch-level dependency modeling. An
Arbitrary Direction Controller (ADC) is used to decide suitable generation
orders for different visual synthesis tasks and learn order-aware positional
embeddings. Compared to DALL-E, Imagen and Parti, NUWA-Infinity can generate
high-resolution images with arbitrary sizes and support long-duration video
generation additionally. Compared to NUWA, which also covers images and videos,
NUWA-Infinity has superior visual synthesis capabilities in terms of resolution
and variable-size generation. The GitHub link is
https://github.com/microsoft/NUWA. The homepage link is
https://nuwa-infinity.microsoft.com.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 10:55:55 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Aug 2022 04:41:05 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Wu",
"Chenfei",
""
],
[
"Liang",
"Jian",
""
],
[
"Hu",
"Xiaowei",
""
],
[
"Gan",
"Zhe",
""
],
[
"Wang",
"Jianfeng",
""
],
[
"Wang",
"Lijuan",
""
],
[
"Liu",
"Zicheng",
""
],
[
"Fang",
"Yuejian",
""
],
[
"Duan",
"Nan",
""
]
] |
new_dataset
| 0.99061 |
2208.03869
|
Jonathan Zong
|
Jonathan Zong, Josh Pollock, Dylan Wootton, Arvind Satyanarayan
|
Animated Vega-Lite: Unifying Animation with a Grammar of Interactive
Graphics
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Animated Vega-Lite, a set of extensions to Vega-Lite that model
animated visualizations as time-varying data queries. In contrast to alternate
approaches for specifying animated visualizations, which prize a highly
expressive design space, Animated Vega-Lite prioritizes unifying animation with
the language's existing abstractions for static and interactive visualizations
to enable authors to smoothly move between or combine these modalities. Thus,
to compose animation with static visualizations, we represent time as an
encoding channel. Time encodings map a data field to animation keyframes,
providing a lightweight specification for animations without interaction. To
compose animation and interaction, we also represent time as an event stream;
Vega-Lite selections, which provide dynamic data queries, are now driven not
only by input events but by timer ticks as well. We evaluate the expressiveness
of our approach through a gallery of diverse examples that demonstrate coverage
over taxonomies of both interaction and animation. We also critically reflect
on the conceptual affordances and limitations of our contribution by
interviewing five expert developers of existing animation grammars. These
reflections highlight the key motivating role of in-the-wild examples, and
identify three central tradeoffs: the language design process, the types of
animated transitions supported, and how the systems model keyframes.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 02:00:07 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Aug 2022 15:00:46 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Zong",
"Jonathan",
""
],
[
"Pollock",
"Josh",
""
],
[
"Wootton",
"Dylan",
""
],
[
"Satyanarayan",
"Arvind",
""
]
] |
new_dataset
| 0.997793 |
2208.06004
|
N. Annamalai
|
N. Annamalai
|
On Zero-Divisor Graph of the ring $\mathbb{F}_p+u\mathbb{F}_p+u^2
\mathbb{F}_p$
| null | null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this article, we discuss the zero-divisor graph of the commutative ring
with identity $\mathbb{F}_p+u\mathbb{F}_p+u^2 \mathbb{F}_p$, where $u^3=0$ and
$p$ is an odd prime. We find the clique number, chromatic number, vertex
connectivity, edge connectivity, diameter and girth of the zero-divisor graph
associated with the ring. We find some topological indices and the main
parameters of the code derived from the incidence matrix of the zero-divisor
graph $\Gamma(R).$ Also, we find the eigenvalues, energy and spectral radius of
both adjacency and Laplacian matrices of $\Gamma(R).$
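A small computational sanity check of the ring's structure: in $R=\mathbb{F}_p+u\mathbb{F}_p+u^2\mathbb{F}_p$ with $u^3=0$, the zero divisors are exactly the nonzero elements with zero constant term, giving $p^2-1$ vertices for $\Gamma(R)$.

```python
p = 5  # any odd prime

def mul(x, y):
    """Product of a1+b1*u+c1*u^2 and a2+b2*u+c2*u^2 over F_p."""
    a1, b1, c1 = x; a2, b2, c2 = y
    return ((a1 * a2) % p,
            (a1 * b2 + b1 * a2) % p,
            (a1 * c2 + b1 * b2 + c1 * a2) % p)   # u^3 = 0 kills higher terms

elems = [(a, b, c) for a in range(p) for b in range(p) for c in range(p)]
zero = (0, 0, 0)
zdivs = [x for x in elems if x != zero and
         any(mul(x, y) == zero for y in elems if y != zero)]
assert len(zdivs) == p * p - 1 and all(x[0] == 0 for x in zdivs)
```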
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 18:27:03 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Annamalai",
"N.",
""
]
] |
new_dataset
| 0.998632 |
2208.06042
|
Ahmed Khanfir
|
Ahmed Khanfir, Matthieu Jimenez, Mike Papadakis and Yves Le Traon
|
CodeBERT-nt: code naturalness via CodeBERT
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Much of software-engineering research relies on the naturalness of code, the
fact that code, in small code snippets, is repetitive and can be predicted
using statistical language models like n-gram. Although powerful, training such
models on large code corpus is tedious, time-consuming and sensitive to code
patterns (and practices) encountered during training. Consequently, these
models are often trained on a small corpora and estimate the language
naturalness that is relative to a specific style of programming or type of
project. To overcome these issues, we propose using pre-trained language models
to infer code naturalness. Pre-trained models are often built on big data, are
easy to use in an out-of-the-box way and include powerful learning associations
mechanisms. Our key idea is to quantify code naturalness through its
predictability, by using state-of-the-art generative pre-trained language
models. To this end, we infer naturalness by masking (omitting) code tokens,
one at a time, of code-sequences, and checking the models' ability to predict
them. To this end, we evaluate three different predictability metrics; a)
measuring the number of exact matches of the predictions, b) computing the
embedding similarity between the original and predicted code, i.e., similarity
at the vector space, and c) computing the confidence of the model when doing
the token completion task irrespective of the outcome. We implement this
workflow, named CodeBERT-nt, and evaluate its capability to prioritize buggy
lines over non-buggy ones when ranking code based on its naturalness. Our
results, on 2510 buggy versions of 40 projects from the SmartShark dataset,
show that CodeBERT-nt outperforms both, random-uniform and complexity-based
ranking techniques, and yields comparable results (slightly better) than the
n-gram models.
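A minimal sketch of the masking-based predictability measurement with HuggingFace transformers, assuming the public CodeBERT MLM checkpoint; this implements only the exact-match flavour of the three metrics.

```python
from transformers import pipeline

# Assumed checkpoint: the masked-language-modelling variant of CodeBERT.
fill = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

def naturalness(tokens):
    hits = 0
    for i, tok in enumerate(tokens):
        masked = " ".join(tokens[:i] + [fill.tokenizer.mask_token]
                          + tokens[i + 1:])
        best = fill(masked, top_k=1)[0]["token_str"].strip()
        hits += (best == tok)               # exact-match predictability
    return hits / len(tokens)

print(naturalness("if ( x == null ) return ;".split()))
```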
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 21:22:18 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Khanfir",
"Ahmed",
""
],
[
"Jimenez",
"Matthieu",
""
],
[
"Papadakis",
"Mike",
""
],
[
"Traon",
"Yves Le",
""
]
] |
new_dataset
| 0.995531 |
2208.06092
|
Adeilson Silva
|
Adeilson Antonio da Silva and Mauricio Pamplona Segundo
|
On deceiving malware classification with section injection
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We investigate how to modify executable files to deceive malware
classification systems. This work's main contribution is a methodology to
inject bytes across a malware file randomly and use it both as an attack to
decrease classification accuracy but also as a defensive method, augmenting the
data available for training. It respects the operating system file format to
make sure the malware will still execute after our injection and will not
change its behavior. We reproduced five state-of-the-art malware classification
approaches to evaluate our injection scheme: one based on GIST+KNN, three CNN
variations and one Gated CNN. We performed our experiments on a public dataset
with 9,339 malware samples from 25 different families. Our results show that a
mere increase of 7% in the malware size causes an accuracy drop between 25% and
40% for malware family classification. They show that an automatic malware
classification system may not be as trustworthy as initially reported in the
literature. We also evaluate the use of modified malware samples alongside the
original ones to increase network robustness against the aforementioned attacks. Results show
that a combination of reordering malware sections and injecting random data can
improve overall performance of the classification. Code available at
https://github.com/adeilsonsilva/malware-injection.
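A deliberately simplified sketch of the random byte-injection idea; unlike the paper's method, it does not patch section headers, so the output would generally no longer execute.

```python
import os, random

def inject_random_bytes(payload: bytes, ratio: float = 0.07) -> bytes:
    """Insert ~ratio * len(payload) random bytes at a random offset
    (e.g. the ~7% size increase studied in the paper)."""
    n = int(len(payload) * ratio)
    pos = random.randrange(len(payload) + 1)
    return payload[:pos] + os.urandom(n) + payload[pos:]

with open("sample.bin", "rb") as f:          # path is a placeholder
    modified = inject_random_bytes(f.read())
```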
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 02:43:17 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"da Silva",
"Adeilson Antonio",
""
],
[
"Segundo",
"Mauricio Pamplona",
""
]
] |
new_dataset
| 0.993939 |
2208.06110
|
Khang Lam
|
Khang Nhut Lam and Feras Al Tarouti and Jugal Kalita
|
Automatically Creating a Large Number of New Bilingual Dictionaries
|
7 pages
|
Proceedings of the AAAI Conference on Artificial Intelligence,
vol. 29, no. 1. 2015
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper proposes approaches to automatically create a large number of new
bilingual dictionaries for low-resource languages, especially resource-poor and
endangered languages, from a single input bilingual dictionary. Our algorithms
produce translations of words in a source language into numerous target
languages using available Wordnets and a machine translator (MT). Since our
approaches rely on just one input dictionary, available Wordnets and an MT,
they are applicable to any bilingual dictionary as long as one of the two
languages is English or has a Wordnet linked to the Princeton Wordnet. Starting
with 5 available bilingual dictionaries, we create 48 new bilingual
dictionaries. Of these, 30 pairs of languages are not supported by the popular
MTs: Google and Bing.
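A pivot-through-Wordnet sketch using NLTK's Open Multilingual Wordnet; the paper's approach additionally filters candidates and falls back on a machine translator.

```python
# Requires: nltk.download("wordnet"); nltk.download("omw-1.4")
from nltk.corpus import wordnet as wn

def translate_via_wordnet(word: str, lang: str) -> set:
    targets = set()
    for synset in wn.synsets(word):          # senses of the English word
        targets.update(synset.lemma_names(lang))
    return targets

print(translate_via_wordnet("river", "jpn"))
```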
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 04:25:23 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Lam",
"Khang Nhut",
""
],
[
"Tarouti",
"Feras Al",
""
],
[
"Kalita",
"Jugal",
""
]
] |
new_dataset
| 0.996497 |
2208.06122
|
Debajyoti Mondal
|
Stephane Durocher, J. Mark Keil and Debajyoti Mondal
|
Minimum Ply Covering of Points with Unit Squares
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a set $P$ of points and a set $U$ of axis-parallel unit squares in the
Euclidean plane, a minimum ply cover of $P$ with $U$ is a subset of $U$ that
covers $P$ and minimizes the number of squares that share a common
intersection, called the minimum ply cover number of $P$ with $U$. Biedl et al.
[Comput. Geom., 94:101712, 2020] showed that determining the minimum ply cover
number for a set of points by a set of axis-parallel unit squares is NP-hard,
and gave a polynomial-time 2-approximation algorithm for instances in which the
minimum ply cover number is constant. The question of whether there exists a
polynomial-time approximation algorithm remained open when the minimum ply
cover number is $\omega(1)$. We settle this open question and present a
polynomial-time $(8+\varepsilon)$-approximation algorithm for the general
problem, for every fixed $\varepsilon>0$.
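To make the objective concrete, here is a sketch that computes the ply of a candidate cover of closed axis-parallel unit squares exactly, using the fact that the arrangement is refined by the grid of all edge coordinates.

```python
def ply(squares):                        # squares: lower-left corners (x, y)
    xs = [v for x, _ in squares for v in (x, x + 1)]
    ys = [v for _, y in squares for v in (y, y + 1)]

    def depth(px, py):
        return sum(x <= px <= x + 1 and y <= py <= y + 1 for x, y in squares)

    # every face of the arrangement has a corner on this O(n^2) grid
    return max(depth(px, py) for px in xs for py in ys)

print(ply([(0, 0.2), (0.3, 0.1), (0.1, 0), (0.2, 0.3)]))  # -> 4
```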
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 05:24:56 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Durocher",
"Stephane",
""
],
[
"Keil",
"J. Mark",
""
],
[
"Mondal",
"Debajyoti",
""
]
] |
new_dataset
| 0.973787 |
2208.06143
|
Brandon Yushan Feng
|
Brandon Yushan Feng, Yinda Zhang, Danhang Tang, Ruofei Du, Amitabh
Varshney
|
PRIF: Primary Ray-based Implicit Function
|
ECCV 2022. Project Page: https://augmentariumlab.github.io/PRIF/
| null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new implicit shape representation called Primary Ray-based
Implicit Function (PRIF). In contrast to most existing approaches based on the
signed distance function (SDF) which handles spatial locations, our
representation operates on oriented rays. Specifically, PRIF is formulated to
directly produce the surface hit point of a given input ray, without the
expensive sphere-tracing operations, hence enabling efficient shape extraction
and differentiable rendering. We demonstrate that neural networks trained to
encode PRIF achieve successes in various tasks including single shape
representation, category-wise shape generation, shape completion from sparse or
noisy observations, inverse rendering for camera pose estimation, and neural
rendering with color.
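A minimal PyTorch sketch of a PRIF-style network: an MLP mapping an oriented ray (origin and unit direction) directly to a surface hit point; layer sizes and the ray parameterisation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PRIFNet(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),            # predicted hit point (x, y, z)
        )

    def forward(self, origins, directions):  # both (B, 3)
        return self.mlp(torch.cat([origins, directions], dim=-1))
```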
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 07:23:45 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Feng",
"Brandon Yushan",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Tang",
"Danhang",
""
],
[
"Du",
"Ruofei",
""
],
[
"Varshney",
"Amitabh",
""
]
] |
new_dataset
| 0.99938 |
2208.06153
|
Pino Caballero-Gil
|
Pino Caballero-Gil, C\'andido Caballero-Gil, Jezabel Molina-Gil
|
How to build vehicular ad-hoc networks on smartphones
| null |
Journal of Systems Architecture 59 (10), 996-1004, 2013
|
10.1016/j.sysarc.2013.08.015
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vehicular ad-hoc networks have been defined in the literature as
communications networks that allow disseminating information among vehicles to
help reduce traffic accidents and congestion. The practical deployment of
such networks has been delayed mainly due to economic and technical issues.
This paper describes a new software application to detect traffic incidents and
exchange information about them, using only smartphones, without any central
authority or additional equipment. Both road safety and communication security
have been taken into account in the application design. On the one hand, the
interface has been designed to avoid distractions while driving because it
operates automatically and independently of the driver, through voice prompts.
On the other hand, communication security, which is essential in critical
wireless networks, is provided through the protection of attributes such as
authenticity, privacy, integrity and non-repudiation. All this is achieved
without increasing the price of vehicles and without requiring the integration
of new devices neither in vehicles nor on roads. The only prerequisite is to
have a smartphone equipped with Wi-Fi connectivity and GPS location in each
vehicle. The proposed application has been successfully validated both in
large-scale NS-2 simulations and in small-scale real tests to detect traffic
congestions and empty parking spaces.
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 07:50:27 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Caballero-Gil",
"Pino",
""
],
[
"Caballero-Gil",
"Cándido",
""
],
[
"Molina-Gil",
"Jezabel",
""
]
] |
new_dataset
| 0.956161 |
2208.06169
|
Franco Caspe
|
Franco Caspe, Andrew McPherson, Mark Sandler
|
DDX7: Differentiable FM Synthesis of Musical Instrument Sounds
|
Accepted to ISMIR 2022. See online supplement at
https://fcaspe.github.io/ddx7/
| null | null | null |
cs.SD cs.LG eess.AS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
FM Synthesis is a well-known algorithm used to generate complex timbre from a
compact set of design primitives. Since FM synthesizers typically feature a
MIDI interface, it is usually impractical to control them from an audio source. On the other hand,
Differentiable Digital Signal Processing (DDSP) has enabled nuanced audio
rendering by Deep Neural Networks (DNNs) that learn to control differentiable
synthesis layers from arbitrary sound inputs. The training process involves a
corpus of audio for supervision, and spectral reconstruction loss functions.
Such functions, while being great to match spectral amplitudes, present a lack
of pitch direction which can hinder the joint optimization of the parameters of
FM synthesizers. In this paper, we take steps towards enabling continuous
control of a well-established FM synthesis architecture from an audio input.
Firstly, we discuss a set of design constraints that ease spectral optimization
of a differentiable FM synthesizer via a standard reconstruction loss. Next, we
present Differentiable DX7 (DDX7), a lightweight architecture for neural FM
resynthesis of musical instrument sounds in terms of a compact set of
parameters. We train the model on instrument samples extracted from the URMP
dataset, and quantitatively demonstrate its comparable audio quality against
selected benchmarks.
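The underlying primitive is compact enough to state in a few lines: two-operator FM, $y(t)=\sin(2\pi f_c t + I\,\sin(2\pi f_m t))$, which DDX7 makes differentiable and drives with a neural network. Parameter values below are arbitrary.

```python
import numpy as np

sr, dur = 16000, 1.0
t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
fc, fm, index = 440.0, 220.0, 3.0      # carrier, modulator, FM index
y = np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))
```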
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 08:39:45 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Caspe",
"Franco",
""
],
[
"McPherson",
"Andrew",
""
],
[
"Sandler",
"Mark",
""
]
] |
new_dataset
| 0.997267 |
2208.06231
|
Pino Caballero-Gil
|
C\'andido Caballero-Gil, Pino Caballero-Gil, Jezabel Molina-Gil
|
Mutual authentication in self-organized VANETs
| null |
Computer Standards & Interfaces 36 (4), 704-710, 2014
|
10.1016/j.csi.2013.12.005
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The practical deployment of vehicular networks is still a pending issue. In
this paper we describe a new self-organized method of authentication for
VANETs, which allows their widespread, fast and secure implementation. Our
proposal does not involve any central certification authority because the nodes
themselves certify the validity of public keys of the other nodes. On the one
hand we propose an algorithm that each node must use to choose the public key
certificates for its local store. On the other hand, we also describe a new
node authentication method based on a cryptographic protocol including a
zero-knowledge proof that each node must use to convince another node on the
possession of certain secret without revealing anything about it, which allows
non-encrypted communication during authentication. Thanks to the combination of
the aforementioned tools, the cooperation among vehicles can be used for
developing several practical applications of VANETs, such as detection and
warning about abnormal traffic conditions. One of the most interesting aspects
of our proposal is that it only requires existing devices such as smartphones,
because the designed schemes are fully distributed and self-organized. In this
work we include an analysis of both an NS-2 simulation and a real device
implementation of the proposed algorithms, which enables us to extract
promising conclusions and several possible improvements and open questions for
further research.
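For intuition, here is the classic Schnorr-style zero-knowledge identification (illustrative, not the paper's exact protocol): the prover shows knowledge of $x$ with $y=g^x \bmod p$ without revealing it, so no encryption is needed during authentication.

```python
import secrets

p, q, g = 23, 11, 2                # toy group: q divides p - 1, g has order q

x = secrets.randbelow(q)           # prover's long-term secret
y = pow(g, x, p)                   # public key

r = secrets.randbelow(q)           # 1. prover commits
t = pow(g, r, p)
c = secrets.randbelow(q)           # 2. verifier picks a random challenge
s = (r + c * x) % q                # 3. prover responds

assert pow(g, s, p) == (t * pow(y, c, p)) % p   # 4. verifier accepts
```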
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 11:54:21 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Caballero-Gil",
"Cándido",
""
],
[
"Caballero-Gil",
"Pino",
""
],
[
"Molina-Gil",
"Jezabel",
""
]
] |
new_dataset
| 0.973394 |
2208.06296
|
Changyuan Liu
|
Changyuan Liu
|
Monte Carlo neutron transport using low power mobile GPU devices
|
This work is an English-translated version of an article submitted to
the CORPHY 2022 conference (http://corphy2022.org.cn), which has been postponed
to be held in Shanghai in 2023. The original text is in Chinese, and there may
be minor differences in the contents. This work is also an extension of the
published article:
https://www.sciencedirect.com/science/article/pii/S0306454922001852
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The use of GPUs for Monte Carlo particle transport lacks fair comparisons. This
work performs simulations on both the CPU and the GPU of the same low-power
mobile package, built with the same manufacturing process. The
experiment with simple pincell benchmark problems with fresh fuel gives
consistent results between CPU and GPU. Meanwhile, it finds that the
Apple M1 GPU is twice as capable as the M1 CPU, while enjoying a five-fold
advantage in power consumption. The particle sorting algorithm optimized for
the GPU improves computing efficiency by 28\%, while markedly reducing GPU power
consumption. This advantage of the sorting algorithm is expected to be greater
for depleted-fuel problems than for fresh-fuel ones. The kernel reconstruction
Doppler broadening algorithm designed for continuously varying materials is
demonstrated to produce Doppler coefficients consistent with the reference code,
and the algorithm can be efficiently implemented on the GPU. Compared with the
reference code using double-precision floating-point numbers, the testing codes
using single-precision floating-point numbers could underestimate the
k-effective values by about 500 pcm, although the Doppler coefficients of the
fuel are well reproduced. The conclusion may strengthen the argument that it
is helpful for high-performance computers to adopt GPUs in order to reduce gross
power consumption.
|
[
{
"version": "v1",
"created": "Sat, 18 Jun 2022 09:00:04 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Liu",
"Changyuan",
""
]
] |
new_dataset
| 0.997242 |
2208.06309
|
Shreyas Ramakrishna
|
Shreyas Ramakrishna, Baiting Luo, Christopher Kuhn, Gabor Karsai, and
Abhishek Dubey
|
ANTI-CARLA: An Adversarial Testing Framework for Autonomous Vehicles in
CARLA
|
Paper accepted at IEEE International Conference on Intelligent
Transportation Systems (IEEE ITSC 2022)
| null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite recent advances in autonomous driving systems, accidents such as the
fatal Uber crash in 2018 show these systems are still susceptible to edge
cases. Such systems must be thoroughly tested and validated before being
deployed in the real world to avoid such events. Testing in open-world
scenarios can be difficult, time-consuming, and expensive. These challenges can
be addressed by using driving simulators such as CARLA instead. A key part of
such tests is adversarial testing, in which the goal is to find scenarios that
lead to failures of the given system. While several independent efforts in
testing have been made, a well-established testing framework that enables
adversarial testing has yet to be made available for CARLA. We therefore
propose ANTI-CARLA, an automated testing framework in CARLA for simulating
adversarial weather conditions (e.g., heavy rain) and sensor faults (e.g.,
camera occlusion) that fail the system. The operating conditions in which a
given system should be tested are specified in a scenario description language.
The framework offers an efficient search mechanism that searches for
adversarial operating conditions that will fail the tested system. In this way,
ANTI-CARLA extends the CARLA simulator with the capability of performing
adversarial testing on any given driving pipeline. We use ANTI-CARLA to test
the driving pipeline trained with Learning By Cheating (LBC) approach. The
simulation results demonstrate that ANTI-CARLA can effectively and
automatically find a range of failure cases despite LBC reaching an accuracy of
100% in the CARLA benchmark.
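A schematic of the adversarial scenario search the framework automates; the parameter names and the scenario-driver hook below are assumptions for illustration.

```python
import random

def search_failures(run_scenario, budget=100):
    """Randomly sample operating conditions and keep those that fail the
    driving pipeline; parameter names are illustrative assumptions."""
    failures = []
    for _ in range(budget):
        scenario = {
            "rain_intensity": random.uniform(0.0, 1.0),
            "fog_density": random.uniform(0.0, 1.0),
            "camera_occlusion": random.random() < 0.2,
        }
        if run_scenario(scenario) == "failure":
            failures.append(scenario)
    return failures

# toy driver hook: "fails" under heavy rain with an occluded camera
demo = lambda s: ("failure" if s["rain_intensity"] > 0.8
                  and s["camera_occlusion"] else "ok")
print(len(search_failures(demo)))
```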
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 01:05:26 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Ramakrishna",
"Shreyas",
""
],
[
"Luo",
"Baiting",
""
],
[
"Kuhn",
"Christopher",
""
],
[
"Karsai",
"Gabor",
""
],
[
"Dubey",
"Abhishek",
""
]
] |
new_dataset
| 0.994122 |
2208.06344
|
Serdar Abut
|
Serdar Abut
|
Modelleme ve Simulasyon
|
22 pages, in Turkish language, 4 figures
|
Teorik ve Uygulamali Muhendislik Calismalari, IKSAD, 2021/12, 1,
135-156
| null | null |
cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Computer modeling and simulation is used to analyze system behavior and
evaluate operating strategies in descriptive or predictive modes. This part of
the book attempts to present the modeling and simulation approaches proposed
since the 1970s. It summarizes simulation models used in the social sciences,
risk management and cloud-based information systems, and provides information
about the agent-based modeling and simulation approach.
|
[
{
"version": "v1",
"created": "Fri, 27 May 2022 19:15:27 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Abut",
"Serdar",
""
]
] |
new_dataset
| 0.956827 |
2208.06350
|
Ryo Suzuki
|
Jian Liao, Adnan Karim, Shivesh Jadon, Rubaiat Habib Kazi, Ryo Suzuki
|
RealityTalk: Real-Time Speech-Driven Augmented Presentation for AR Live
Storytelling
|
UIST 2022; For the interactive gallery, see
https://ilab.ucalgary.ca/realitytalk/
| null |
10.1145/3526113.3545702
| null |
cs.HC cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present RealityTalk, a system that augments real-time live presentations
with speech-driven interactive virtual elements. Augmented presentations
leverage embedded visuals and animation for engaging and expressive
storytelling. However, existing tools for live presentations often lack
interactivity and improvisation, while creating such effects in video editing
tools requires significant time and expertise. RealityTalk enables users to
create live augmented presentations with real-time speech-driven interactions.
The user can interactively prompt, move, and manipulate graphical elements
through real-time speech and supporting modalities. Based on our analysis of
177 existing video-edited augmented presentations, we propose a novel set of
interaction techniques and then incorporated them into RealityTalk. We evaluate
our tool from a presenter's perspective to demonstrate the effectiveness of our
system.
|
[
{
"version": "v1",
"created": "Fri, 12 Aug 2022 16:12:00 GMT"
}
] | 2022-08-15T00:00:00 |
[
[
"Liao",
"Jian",
""
],
[
"Karim",
"Adnan",
""
],
[
"Jadon",
"Shivesh",
""
],
[
"Kazi",
"Rubaiat Habib",
""
],
[
"Suzuki",
"Ryo",
""
]
] |
new_dataset
| 0.998782 |
2108.07002
|
Zhuo Zheng
|
Zhuo Zheng, Ailong Ma, Liangpei Zhang, Yanfei Zhong
|
Change is Everywhere: Single-Temporal Supervised Object Change Detection
in Remote Sensing Imagery
|
Accepted by ICCV 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For high spatial resolution (HSR) remote sensing images, bitemporal
supervised learning always dominates change detection using many pairwise
labeled bitemporal images. However, it is very expensive and time-consuming to
pairwise label large-scale bitemporal HSR remote sensing images. In this paper,
we propose single-temporal supervised learning (STAR) for change detection from
a new perspective of exploiting object changes in unpaired images as
supervisory signals. STAR enables us to train a high-accuracy change detector
only using \textbf{unpaired} labeled images and generalize to real-world
bitemporal images. To evaluate the effectiveness of STAR, we design a simple
yet effective change detector called ChangeStar, which can reuse any deep
semantic segmentation architecture by the ChangeMixin module. The comprehensive
experimental results show that ChangeStar outperforms the baseline with a large
margin under single-temporal supervision and achieves superior performance
under bitemporal supervision. Code is available at
https://github.com/Z-Zheng/ChangeStar
|
[
{
"version": "v1",
"created": "Mon, 16 Aug 2021 10:25:15 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Aug 2022 07:31:15 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Zheng",
"Zhuo",
""
],
[
"Ma",
"Ailong",
""
],
[
"Zhang",
"Liangpei",
""
],
[
"Zhong",
"Yanfei",
""
]
] |
new_dataset
| 0.981005 |
2111.05319
|
Shubhendu Jena
|
Shubhendu Jena, Franck Multon, Adnane Boukhayma
|
Monocular Human Shape and Pose with Dense Mesh-borne Local Image
Features
|
FG 2021
| null |
10.1109/FG52635.2021.9666993
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose to improve on graph convolution based approaches for human shape
and pose estimation from monocular input, using pixel-aligned local image
features. Given a single input color image, existing graph convolutional
network (GCN) based techniques for human shape and pose estimation use a single
convolutional neural network (CNN) generated global image feature appended to
all mesh vertices equally to initialize the GCN stage, which transforms a
template T-posed mesh into the target pose. In contrast, we propose for the
first time the idea of using local image features per vertex. These features
are sampled from the CNN image feature maps by utilizing pixel-to-mesh
correspondences generated with DensePose. Our quantitative and qualitative
results on standard benchmarks show that using local features improves on
global ones and leads to competitive performances with respect to the
state-of-the-art.
|
[
{
"version": "v1",
"created": "Tue, 9 Nov 2021 18:43:18 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Nov 2021 02:00:05 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Nov 2021 08:38:08 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Jena",
"Shubhendu",
""
],
[
"Multon",
"Franck",
""
],
[
"Boukhayma",
"Adnane",
""
]
] |
new_dataset
| 0.958303 |
2208.05479
|
Xiaoling Hu
|
Xiaoling Hu, Chenxi Liu, Mugen Peng and Caijun Zhong
|
IRS-Based Integrated Location Sensing and Communication for mmWave SIMO
Systems
|
arXiv admin note: text overlap with arXiv:2208.05300
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we establish an integrated sensing and communication (ISAC)
system based on a distributed semi-passive intelligent reflecting surface
(IRS), which allows location sensing and data transmission to be carried out
simultaneously, sharing the same frequency and time resources. The detailed
working process of the proposed IRS-based ISAC system is designed, including
the transmission protocol, location sensing and beamforming optimization.
Specifically, each coherence block consists of two periods, the ISAC period
with two time blocks and the pure communication (PC) period. During each time
block of the ISAC period, data transmission and user positioning are carried
out simultaneously. The estimated user location in the first time block will be
used for beamforming design in the second time block. During the PC period,
only data transmission is conducted, by invoking the user location estimated in
the second time block of the ISAC period for beamforming design.
Simulation results show that a millimeter-level positioning
accuracy can be achieved by the proposed location sensing scheme, demonstrating
the advantage of the proposed IRS-based ISAC framework. Besides, the proposed
two beamforming schemes based on the estimated location information achieve
similar performance to the benchmark schemes assuming perfect channel state
information (CSI), which verifies the effectiveness of beamforming design using
sensed location information.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 13:21:07 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Hu",
"Xiaoling",
""
],
[
"Liu",
"Chenxi",
""
],
[
"Peng",
"Mugen",
""
],
[
"Zhong",
"Caijun",
""
]
] |
new_dataset
| 0.996696 |
2208.05597
|
Shion Fukuzawa
|
Gill Barequet, Shion Fukuzawa, Michael T. Goodrich, David M. Mount,
Martha C. Osegueda, Evrim Ozel
|
Diamonds are Forever in the Blockchain: Geometric Polyhedral Point-Set
Pattern Matching
|
8 pages, 5 figures, To appear in 34th Canadian Conference on
Computational Geometry
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by blockchain technology for supply-chain tracing of ethically
sourced diamonds, we study geometric polyhedral point-set pattern matching as
minimum-width polyhedral annulus problems under translations and rotations. We
provide two $(1 + \varepsilon)$-approximation schemes under translations with
$O(\varepsilon^{-d} n)$-time for $d$ dimensions and $O(n\log \varepsilon^{-1} +
\varepsilon^{-2})$-time for two dimensions, and we give an
$O(f^{d-1}\varepsilon^{1-2d}n)$-time algorithm when also allowing for
rotations, parameterized on $f$, which we define as the slimness of the point
set.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 00:32:54 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Barequet",
"Gill",
""
],
[
"Fukuzawa",
"Shion",
""
],
[
"Goodrich",
"Michael T.",
""
],
[
"Mount",
"David M.",
""
],
[
"Osegueda",
"Martha C.",
""
],
[
"Ozel",
"Evrim",
""
]
] |
new_dataset
| 0.994715 |
2208.05621
|
Xujie Zhang
|
Xujie Zhang, Yu Sha, Michael C. Kampffmeyer, Zhenyu Xie, Zequn Jie,
Chengwen Huang, Jianqing Peng, Xiaodan Liang
|
ARMANI: Part-level Garment-Text Alignment for Unified Cross-Modal
Fashion Design
|
Accepted by ACMMM22
| null |
10.1145/3503161.3548230
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-modal fashion image synthesis has emerged as one of the most promising
directions in the generation domain due to the vast untapped potential of
incorporating multiple modalities and the wide range of fashion image
applications. To facilitate accurate generation, cross-modal synthesis methods
typically rely on Contrastive Language-Image Pre-training (CLIP) to align
textual and garment information. In this work, we argue that simply aligning
texture and garment information is not sufficient to capture the semantics of
the visual information and therefore propose MaskCLIP. MaskCLIP decomposes the
garments into semantic parts, ensuring fine-grained and semantically accurate
alignment between the visual and text information. Building on MaskCLIP, we
propose ARMANI, a unified cross-modal fashion designer with part-level
garment-text alignment. ARMANI discretizes an image into uniform tokens based
on a learned cross-modal codebook in its first stage and uses a Transformer to
model the distribution of image tokens for a real image given the tokens of the
control signals in its second stage. Contrary to prior approaches that also
rely on two-stage paradigms, ARMANI introduces textual tokens into the
codebook, making it possible for the model to utilize fine-grained semantic
information to generate more realistic images. Further, by introducing a
cross-modal Transformer, ARMANI is versatile and can accomplish image synthesis
from various control signals, such as pure text, sketch images, and partial
images. Extensive experiments conducted on our newly collected cross-modal
fashion dataset demonstrate that ARMANI generates photo-realistic images in
diverse synthesis tasks and outperforms existing state-of-the-art cross-modal
image synthesis approaches. Our code is available at
https://github.com/Harvey594/ARMANI.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 03:44:02 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Zhang",
"Xujie",
""
],
[
"Sha",
"Yu",
""
],
[
"Kampffmeyer",
"Michael C.",
""
],
[
"Xie",
"Zhenyu",
""
],
[
"Jie",
"Zequn",
""
],
[
"Huang",
"Chengwen",
""
],
[
"Peng",
"Jianqing",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
new_dataset
| 0.999768 |
2208.05623
|
Kexin Yang
|
Kexin Yang, Dayiheng Liu, Wenqiang Lei, Baosong Yang, Qian Qu,
Jiancheng Lv
|
Draft, Command, and Edit: Controllable Text Editing in E-Commerce
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Product description generation is a challenging and under-explored task. Most
such work takes a set of product attributes as input and then generates a
description from scratch in a single pass. However, this widespread paradigm
may be limited when facing users' dynamic wishes to constrain the
description, such as deleting or adding the content of a user-specified
attribute based on the previous version. To address this challenge, we explore
a new draft-command-edit manner of description generation, leading to the
proposed new task: controllable text editing in E-commerce. More specifically,
we allow systems to receive a command (deleting or adding) from the user and
then generate a description by flexibly modifying the content based on the
previous version. It is easier and more practical to meet the new needs by
modifying previous versions than generating from scratch. Furthermore, we
design a data augmentation method to remedy the low resource challenge in this
task, which contains a model-based and a rule-based strategy to imitate the
edit by humans. To accompany this new task, we present a human-written
draft-command-edit dataset called E-cEdits and a new metric "Attribute Edit".
Our experimental results show that models trained with the new data
augmentation method outperform baselines by a clear margin in both automatic
and human evaluations.
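As a toy illustration of the draft-command-edit interaction, the rule-based editor below deletes or appends sentences mentioning a user-specified attribute. This is purely a hedged sketch: the paper's systems are learned models, and the command format here is an assumption rather than the E-cEdits schema.

```python
import re

def edit_description(draft: str, command: str, attribute: str, content: str = "") -> str:
    """Apply a 'delete' or 'add' command for one attribute to a draft description."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", draft.strip()) if s]
    if command == "delete":
        # Drop every sentence that mentions the attribute.
        sentences = [s for s in sentences if attribute.lower() not in s.lower()]
    elif command == "add":
        sentences.append(f"{attribute}: {content}.")
    return " ".join(sentences)

draft = "This jacket is warm. Material: polyester."
print(edit_description(draft, "delete", "material"))         # drops the material sentence
print(edit_description(draft, "add", "Color", "navy blue"))  # appends a color sentence
```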
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 03:48:08 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Yang",
"Kexin",
""
],
[
"Liu",
"Dayiheng",
""
],
[
"Lei",
"Wenqiang",
""
],
[
"Yang",
"Baosong",
""
],
[
"Qu",
"Qian",
""
],
[
"Lv",
"Jiancheng",
""
]
] |
new_dataset
| 0.998188 |
2208.05647
|
Zihan Ding
|
Zihan Ding, Zi-han Ding, Tianrui Hui, Junshi Huang, Xiaoming Wei,
Xiaolin Wei, Si Liu
|
PPMN: Pixel-Phrase Matching Network for One-Stage Panoptic Narrative
Grounding
|
Accepted by ACM MM 2022
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Panoptic Narrative Grounding (PNG) is an emerging task whose goal is to
segment visual objects of things and stuff categories described by dense
narrative captions of a still image. The previous two-stage approach first
extracts segmentation region proposals by an off-the-shelf panoptic
segmentation model, then conducts coarse region-phrase matching to ground the
candidate regions for each noun phrase. However, the two-stage pipeline usually
suffers from the performance limitation of low-quality proposals in the first
stage and the loss of spatial details caused by region feature pooling, as well
as complicated strategies designed for things and stuff categories separately.
To alleviate these drawbacks, we propose a one-stage end-to-end Pixel-Phrase
Matching Network (PPMN), which directly matches each phrase to its
corresponding pixels instead of region proposals and outputs panoptic
segmentation by simple combination. Thus, our model can exploit sufficient and
finer cross-modal semantic correspondence from the supervision of densely
annotated pixel-phrase pairs rather than sparse region-phrase pairs. In
addition, we also propose a Language-Compatible Pixel Aggregation (LCPA) module
to further enhance the discriminative ability of phrase features through
multi-round refinement, which selects the most compatible pixels for each
phrase to adaptively aggregate the corresponding visual context. Extensive
experiments show that our method achieves new state-of-the-art performance on
the PNG benchmark with 4.0 absolute Average Recall gains.
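The core pixel-phrase matching step can be sketched as a dense similarity between per-pixel features and phrase embeddings, thresholded into per-phrase masks. The feature shapes, cosine scoring, and threshold below are illustrative assumptions, not PPMN's trained modules.

```python
import numpy as np

def match_phrases_to_pixels(pixel_feats, phrase_feats, threshold=0.5):
    """pixel_feats: (H, W, D); phrase_feats: (P, D) -> (P, H, W) boolean masks."""
    pix = pixel_feats / np.linalg.norm(pixel_feats, axis=-1, keepdims=True)
    phr = phrase_feats / np.linalg.norm(phrase_feats, axis=-1, keepdims=True)
    scores = np.einsum("hwd,pd->phw", pix, phr)  # per-pixel cosine similarity
    return scores > threshold  # one mask per phrase; overlaps possible

masks = match_phrases_to_pixels(np.random.rand(8, 8, 32), np.random.rand(3, 32))
print(masks.shape)  # (3, 8, 8)
```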
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 05:42:12 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Ding",
"Zihan",
""
],
[
"Ding",
"Zi-han",
""
],
[
"Hui",
"Tianrui",
""
],
[
"Huang",
"Junshi",
""
],
[
"Wei",
"Xiaoming",
""
],
[
"Wei",
"Xiaolin",
""
],
[
"Liu",
"Si",
""
]
] |
new_dataset
| 0.997274 |
2208.05680
|
Mosarrat Jahan
|
Farhana Siddiqua, Mosarrat Jahan
|
A Trust-Based Malicious RSU Detection Mechanism in Edge-Enabled
Vehicular Ad Hoc Networks
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Edge-enabled Vehicular Ad Hoc Network (VANET) introduces real-time services
and storage, computation, and communication facilities to the vehicles through
Roadside Units (RSUs). Nevertheless, RSUs are often easy targets for security
assaults due to their placement in an open, unprotected environment and
resource-constrained nature. The malicious RSUs compromised by security attacks
impose threats to human safety by impeding the operations of VANETs. Hence, an
effective malevolent RSU detection mechanism is crucial for VANETs. Existing
trust-based detection mechanisms assign trust scores to RSUs based on their
interactions with moving vehicles where precise detection of rogue RSUs depends
on the accuracy of trust scores. However, the brief interaction between RSUs
and moving vehicles leaves inadequate time to estimate trust accurately. Besides,
current works use only vehicle speed and density in beacon messages to assess
trust without considering the sensor-detected data in the same messages.
Nonetheless, sensor data is useful for traffic management, and neglecting it
introduces inaccuracy into trust estimation. In this paper, we address these
limitations and propose a trust-based scheme to detect malicious RSUs that uses
stable and frequent RSU-to-RSU (R2R) interaction to precisely analyze the
behavior of an RSU. We also offer a mechanism to detect alteration of
sensor-detected data in beacon content and incorporate this scheme in the trust
calculation of RSUs. The experimental results show that the proposed solution
effectively detects approximately 92% of malicious RSUs, even in the presence of
hostile vehicles. Moreover, integrating the proposed solution with the VANET
routing protocols improves routing efficiency.
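In the spirit of the R2R trust estimation above, the toy loop below nudges an RSU's trust score after each interaction and flags RSUs that fall below a threshold. The exponential-moving-average update, weight, and threshold are illustrative assumptions, not the paper's exact formulas.

```python
def update_trust(trust: float, interaction_ok: bool, weight: float = 0.1) -> float:
    """Exponential moving average toward 1 on good interactions, 0 on bad ones."""
    target = 1.0 if interaction_ok else 0.0
    return (1.0 - weight) * trust + weight * target

trust = 0.8
for ok in [True, False, False, False, True, False]:
    trust = update_trust(trust, ok)
print(f"trust={trust:.3f}, flagged_malicious={trust < 0.5}")
```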
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 07:56:23 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Siddiqua",
"Farhana",
""
],
[
"Jahan",
"Mosarrat",
""
]
] |
new_dataset
| 0.998002 |
2208.05691
|
Xinrui Li
|
Xinrui Li, Haiquan Lu, Yong Zeng, Shi Jin, Rui Zhang
|
Modular Extremely Large-Scale Array Communication: Near-Field Modelling
and Performance Analysis
|
30 pages, 10 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates wireless communications based on a new antenna array
architecture, termed modular extremely large-scale array (XL-array), where an
extremely large number of antenna elements are regularly arranged on a common
platform in a modular manner. Each module consists of a flexible/moderate
number of antenna elements, and different modules are separated with an
inter-module spacing that is typically much larger than the inter-element
spacing/signal wavelength for ease of deployment. By properly modelling the
variations of signal phase, amplitude and projected aperture across different
array modules/elements, we develop the new channel model and analyze the
signal-to-noise ratio (SNR) performance of the modular XL-array based
communications. Under the practical non-uniform spherical wave (NUSW) model,
the closed-form expression of the maximum achievable SNR is derived in terms of
key geometric parameters, including the total planar array size, module
separation distances along each dimension, as well as the user's location in
the three-dimensional (3D) space. Besides, the asymptotic SNR scaling laws are
revealed as the number of modules along different dimensions goes to infinity.
Moreover, we show that our developed near-field modelling and performance
analysis include the existing ones for the collocated XL-array, the far-field
uniform plane wave (UPW) model, as well as the one-dimensional (1D) modular
extremely large-scale uniform linear array (XL-ULA) as special cases. Extensive
simulation results are provided to validate our obtained results.
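As a rough numerical illustration of near-field modelling, the snippet below accumulates per-element received power using element-specific distances to the user, for a modular layout with tight intra-module spacing and much larger inter-module gaps. It deliberately ignores phase and projected-aperture effects, so it is a simplification in the spirit of the NUSW model, not a reimplementation of the paper's analysis.

```python
import numpy as np

def nusw_snr(element_positions: np.ndarray, user: np.ndarray,
             p_per_element: float = 1.0, noise_power: float = 1.0) -> float:
    """Sum per-element received power with element-specific user distances."""
    dists = np.linalg.norm(element_positions - user, axis=1)
    return p_per_element * np.sum(1.0 / dists ** 2) / noise_power

# 2x2 modules of 4x4 elements: tight element spacing, larger module gaps.
elements = np.array([[mx * 2.0 + ex * 0.05, my * 2.0 + ey * 0.05, 0.0]
                     for mx in range(2) for my in range(2)
                     for ex in range(4) for ey in range(4)])
print(nusw_snr(elements, user=np.array([1.0, 1.0, 5.0])))
```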
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 08:23:51 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Li",
"Xinrui",
""
],
[
"Lu",
"Haiquan",
""
],
[
"Zeng",
"Yong",
""
],
[
"Jin",
"Shi",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.993769 |
2208.05699
|
Manabu Hagiwara
|
Manabu Hagiwara
|
Quantum Deletion Codes derived from Classical Deletion Codes (Extended
Abstract)
| null | null | null | null |
cs.IT cs.DM math.CO math.IT quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This manuscript is an extended abstract version of the paper entitled
``Quantum Deletion Codes derived from Classical Deletion Codes.'' The paper
contributes to the fundamental theory for quantum deletion error-correcting
codes. The paper proposes a code construction condition for a partition of
classical deletion error-correcting codes to derive quantum deletion
error-correcting codes. The construction methods in this paper give examples of
quantum codes that can correct a single quantum deletion error and have a code
rate arbitrarily close to 1, while previously known quantum deletion codes
have rates close to 0 for long lengths. This manuscript omits the proofs of the
statements in the paper.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 08:51:01 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Hagiwara",
"Manabu",
""
]
] |
new_dataset
| 0.999842 |
2208.05701
|
Christian Guckelsberger
|
Inan Evin, Perttu H\"am\"al\"ainen, Christian Guckelsberger
|
Cine-AI: Generating Video Game Cutscenes in the Style of Human Directors
|
23 pages, 6 figures, 4 tables. In Proceedings ACM Human-Computer
Interaction, Vol. 6, CHIPLAY, Article 223. Publication date: October 2022
| null |
10.1145/3549486
| null |
cs.HC cs.AI cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Cutscenes form an integral part of many video games, but their creation is
costly, time-consuming, and requires skills that many game developers lack.
While AI has been leveraged to semi-automate cutscene production, the results
typically lack the internal consistency and uniformity in style that is
characteristic of professional human directors. We overcome this shortcoming
with Cine-AI, an open-source procedural cinematography toolset capable of
generating in-game cutscenes in the style of eminent human directors.
Implemented in the popular game engine Unity, Cine-AI features a novel timeline
and storyboard interface for design-time manipulation, combined with runtime
cinematography automation. Via two user studies, each employing quantitative
and qualitative measures, we demonstrate that Cine-AI generates cutscenes that
people correctly associate with a target director, while providing
above-average usability. Our director imitation dataset is publicly available,
and can be extended by users and film enthusiasts.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 08:52:43 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Evin",
"Inan",
""
],
[
"Hämäläinen",
"Perttu",
""
],
[
"Guckelsberger",
"Christian",
""
]
] |
new_dataset
| 0.956293 |
2208.05721
|
EPTCS
|
Ido Benbaji (MIT), Omri Doron (MIT), Ad\`ele H\'enot-Mortier (MIT)
|
Word-Embeddings Distinguish Denominal and Root-Derived Verbs in Semitic
|
In Proceedings E2ECOMPVEC, arXiv:2208.05313
|
EPTCS 366, 2022, pp. 35-49
|
10.4204/EPTCS.366.6
| null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Proponents of the Distributed Morphology framework have posited the existence
of two levels of morphological word formation: a lower one, leading to loose
input-output semantic relationships; and an upper one, leading to tight
input-output semantic relationships. In this work, we propose to test the
validity of this assumption in the context of Hebrew word embeddings. If the
two-level hypothesis is borne out, we expect state-of-the-art Hebrew word
embeddings to encode (1) a noun, (2) a denominal derived from it (via an
upper-level operation), and (3) a verb related to the noun (via a lower-level
operation on the noun's root), in such a way that the denominal (2) should be
closer in the embedding space to the noun (1) than the related verb (3) is to
the same noun (1). We report that this hypothesis is verified by four embedding
models of Hebrew: fastText, GloVe, Word2Vec and AlephBERT. This suggests that
word embedding models are able to capture complex and fine-grained semantic
properties that are morphologically motivated.
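The prediction can be tested with a simple triple-wise comparison: the denominal should be closer (in cosine similarity) to its base noun than the root-derived verb is. The vectors below are random stand-ins; an actual replication would load the Hebrew fastText, GloVe, Word2Vec, or AlephBERT embeddings named above.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def two_level_hypothesis_holds(noun, denominal, verb) -> bool:
    # The denominal (upper-level derivation) should sit closer to the noun
    # than the root-derived verb (lower-level derivation) does.
    return cosine(noun, denominal) > cosine(noun, verb)

rng = np.random.default_rng(1)
noun, denominal, verb = rng.normal(size=(3, 300))  # random stand-in vectors
print(two_level_hypothesis_holds(noun, denominal, verb))
```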
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 09:31:37 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Benbaji",
"Ido",
"",
"MIT"
],
[
"Doron",
"Omri",
"",
"MIT"
],
[
"Hénot-Mortier",
"Adèle",
"",
"MIT"
]
] |
new_dataset
| 0.986038 |
2208.05734
|
Pino Caballero-Gil
|
Nayra Rodr\'iguez-P\'erez, Josu\'e Toledo-Castro, Pino Caballero-Gil,
Iv\'an Santos-Gonz\'alez, Candelaria Hern\'andez-Goya
|
Secure ambient intelligence prototype for airports
| null | null |
10.1007/s12652-020-01683-y
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Nowadays, many technological advances applied to the Internet of Things (IoT)
make it possible to introduce innovative sensors for deploying efficient
wireless sensor networks. In order to improve the environment and people's
lives, real-time analysis of certain environmental variables may help reduce
health risks related to the deterioration of air quality. In this respect, the
proposed system implements a particular prototype of IoT device
characterized by the assembly of ambient sensors capable of measuring pollutant
gases, temperature and humidity. For this purpose, Raspberry Pi and Arduino
platforms are used. Several security methods are introduced to ensure the
integrity of air quality data by implementing Merkle Trees on each IoT node and
on the Cloud server. Besides, the authenticity of IoT devices and the
confidentiality of communications are guaranteed by implementing HTTPS
requests. Finally, authentication tokens are used to identify system users, and
different security rules are applied to manage database operations.
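A Merkle root over sensor readings, of the kind used above for integrity checking, can be computed with the standard library alone. The leaf encoding and the odd-node duplication convention are assumptions; the prototype's concrete scheme may differ.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(readings: list[str]) -> str:
    """Hash leaf readings pairwise up to a single root (hex digest)."""
    level = [sha256(r.encode("utf-8")) for r in readings]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

print(merkle_root(["co2=412ppm", "temp=21.4C", "humidity=48%"]))
```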
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 10:00:14 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"Rodríguez-Pérez",
"Nayra",
""
],
[
"Toledo-Castro",
"Josué",
""
],
[
"Caballero-Gil",
"Pino",
""
],
[
"Santos-González",
"Iván",
""
],
[
"Hernández-Goya",
"Candelaria",
""
]
] |
new_dataset
| 0.998549 |
2208.05819
|
Alexandra Weinberger
|
Alfredo Garc\'ia, Javier Tejel, Birgit Vogtenhuber, and Alexandra
Weinberger
|
Empty Triangles in Generalized Twisted Drawings of $K_n$
|
Appears in the Proceedings of the 30th International Symposium on
Graph Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Simple drawings are drawings of graphs in the plane or on the sphere such
that vertices are distinct points, edges are Jordan arcs connecting their
endpoints, and edges intersect at most once (either in a proper crossing or in
a shared endpoint). Simple drawings are generalized twisted if there is a point
$O$ such that every ray emanating from $O$ crosses every edge of the drawing at
most once and there is a ray emanating from $O$ which crosses every edge
exactly once. We show that all generalized twisted drawings of $K_n$ contain
exactly $2n-4$ empty triangles, thereby making a substantial step towards
proving the conjecture that this is the case for every simple drawing of $K_n$.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 13:27:43 GMT"
}
] | 2022-08-12T00:00:00 |
[
[
"García",
"Alfredo",
""
],
[
"Tejel",
"Javier",
""
],
[
"Vogtenhuber",
"Birgit",
""
],
[
"Weinberger",
"Alexandra",
""
]
] |
new_dataset
| 0.998913 |