| id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2110.06679
|
Shidi Li
|
Shidi Li, Miaomiao Liu, Christian Walder
|
EditVAE: Unsupervised Part-Aware Controllable 3D Point Cloud Shape
Generation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper tackles the problem of parts-aware point cloud generation. Unlike
existing works which require the point cloud to be segmented into parts a
priori, our parts-aware editing and generation are performed in an unsupervised
manner. We achieve this with a simple modification of the Variational
Auto-Encoder which yields a joint model of the point cloud itself along with a
schematic representation of it as a combination of shape primitives. In
particular, we introduce a latent representation of the point cloud which can
be decomposed into a disentangled representation for each part of the shape.
These parts are in turn disentangled into both a shape primitive and a point
cloud representation, along with a standardising transformation to a canonical
coordinate system. The dependencies between our standardising transformations
preserve the spatial dependencies between the parts in a manner that allows
meaningful parts-aware point cloud generation and shape editing. In addition to
the flexibility afforded by our disentangled representation, the inductive bias
introduced by our joint modeling approach yields state-of-the-art experimental
results on the ShapeNet dataset.
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 12:38:01 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 07:55:19 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Li",
"Shidi",
""
],
[
"Liu",
"Miaomiao",
""
],
[
"Walder",
"Christian",
""
]
] |
new_dataset
| 0.995516 |
2110.08151
|
Ryokan Ri
|
Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka
|
mLUKE: The Power of Entity Representations in Multilingual Pretrained
Language Models
|
ACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent studies have shown that multilingual pretrained language models can be
effectively improved with cross-lingual alignment information from Wikipedia
entities. However, existing methods only exploit entity information in
pretraining and do not explicitly use entities in downstream tasks. In this
study, we explore the effectiveness of leveraging entity representations for
downstream cross-lingual tasks. We train a multilingual language model in 24
languages with entity representations and show that the model consistently
outperforms word-based pretrained models on various cross-lingual transfer
tasks. We also analyze the model; the key insight is that incorporating
entity representations into the input allows us to extract more
language-agnostic features. We further evaluate the model on a multilingual
cloze prompt task with the mLAMA dataset, showing that entity-based prompts
are more likely to elicit correct factual knowledge than word
representations alone. Our source code and pretrained models are available at
https://github.com/studio-ousia/luke.
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 15:28:38 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Mar 2022 13:26:39 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Mar 2022 14:27:20 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Ri",
"Ryokan",
""
],
[
"Yamada",
"Ikuya",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] |
new_dataset
| 0.982451 |
2111.09354
|
Aimee Goncalves
|
Aimee Goncalves, Naveen Kuppuswamy, Andrew Beaulieu, Avinash
Uttamchandani, Katherine M. Tsui, Alex Alspach
|
Punyo-1: Soft tactile-sensing upper-body robot for large object
manipulation and physical human interaction
|
Research done at Toyota Research Institute. Accepted to the 5th IEEE
International Conference on Soft Robotics (RoboSoft 2022). The supplemental
video is available publicly at https://youtu.be/G8ZYgPRV5LY
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The manipulation of large objects and safe operation in the vicinity of
humans are key capabilities of a general-purpose domestic robotic assistant. We
present the design of a soft, tactile-sensing humanoid upper-body robot and
demonstrate whole-body rich-contact manipulation strategies for handling large
objects. We demonstrate our hardware design philosophy for outfitting
off-the-shelf hard robot arms and other components with soft tactile-sensing
modules, including: (i) low-cost, cut-resistant, contact pressure localizing
coverings for the arms, (ii) paws based on TRI's Soft-bubble sensors for the
end effectors, and (iii) compliant force/geometry sensors for the coarse
geometry sensing chest. We leverage the mechanical intelligence and tactile
sensing of these modules to develop and demonstrate motion primitives for
whole-body grasping. We evaluate the hardware's effectiveness in achieving
grasps of varying strengths over a variety of large domestic objects. Our
results demonstrate the importance of exploiting softness and tactile sensing
in contact-rich manipulation strategies, as well as a path forward for
whole-body force-controlled interactions with the world. (The supplemental
video is available publicly at https://youtu.be/G8ZYgPRV5LY).
|
[
{
"version": "v1",
"created": "Wed, 17 Nov 2021 19:31:05 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 22:16:44 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Mar 2022 16:47:06 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Goncalves",
"Aimee",
""
],
[
"Kuppuswamy",
"Naveen",
""
],
[
"Beaulieu",
"Andrew",
""
],
[
"Uttamchandani",
"Avinash",
""
],
[
"Tsui",
"Katherine M.",
""
],
[
"Alspach",
"Alex",
""
]
] |
new_dataset
| 0.999747 |
2112.01526
|
Yanghao Li
|
Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong,
Jitendra Malik, Christoph Feichtenhofer
|
MViTv2: Improved Multiscale Vision Transformers for Classification and
Detection
|
CVPR 2022 Camera Ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study Multiscale Vision Transformers (MViTv2) as a unified
architecture for image and video classification, as well as object detection.
We present an improved version of MViT that incorporates decomposed relative
positional embeddings and residual pooling connections. We instantiate this
architecture in five sizes and evaluate it for ImageNet classification, COCO
detection and Kinetics video recognition where it outperforms prior work. We
further compare MViTv2s' pooling attention to window attention mechanisms where
it outperforms the latter in accuracy/compute. Without bells-and-whistles,
MViTv2 has state-of-the-art performance in 3 domains: 88.8% accuracy on
ImageNet classification, 58.7 boxAP on COCO object detection as well as 86.1%
on Kinetics-400 video classification. Code and models are available at
https://github.com/facebookresearch/mvit.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 18:59:57 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 17:56:37 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Li",
"Yanghao",
""
],
[
"Wu",
"Chao-Yuan",
""
],
[
"Fan",
"Haoqi",
""
],
[
"Mangalam",
"Karttikeya",
""
],
[
"Xiong",
"Bo",
""
],
[
"Malik",
"Jitendra",
""
],
[
"Feichtenhofer",
"Christoph",
""
]
] |
new_dataset
| 0.986712 |
2112.02194
|
Harsh Mehta
|
Harsh Mehta, Steffen Rendle, Walid Krichene, Li Zhang
|
ALX: Large Scale Matrix Factorization on TPUs
| null | null | null | null |
cs.LG cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We present ALX, an open-source library for distributed matrix factorization
using Alternating Least Squares, written in JAX. Our design allows for
efficient use of the TPU architecture and scales well to matrix factorization
problems of O(B) rows/columns by scaling the number of available TPU cores. In
order to spur future research on large scale matrix factorization methods and
to illustrate the scalability properties of our own implementation, we also
built a real world web link prediction dataset called WebGraph. This dataset
can be easily modeled as a matrix factorization problem. We created several
variants of this dataset based on locality and sparsity properties of
sub-graphs. The largest variant of WebGraph has around 365M nodes and training
a single epoch finishes in about 20 minutes with 256 TPU cores. We include
speed and performance numbers of ALX on all variants of WebGraph. Both the
framework code and the dataset are open-sourced.
|
[
{
"version": "v1",
"created": "Fri, 3 Dec 2021 23:35:42 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 20:43:42 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Mehta",
"Harsh",
""
],
[
"Rendle",
"Steffen",
""
],
[
"Krichene",
"Walid",
""
],
[
"Zhang",
"Li",
""
]
] |
new_dataset
| 0.958185 |
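The ALX record above describes distributed alternating least squares (ALS) in JAX. As a rough illustration of the core computation that ALX shards across TPU cores, here is a minimal single-machine, dense ALS sweep in plain NumPy; the function name and shapes are ours, not the ALX API.

```python
import numpy as np

def als_sweep(R, U, V, lam=0.1):
    """One alternating-least-squares sweep on a dense ratings matrix R (m x n).

    U: (m, k) row factors, V: (n, k) column factors, lam: L2 regularizer.
    A dense toy; ALX solves the same normal equations on sharded data.
    """
    I = lam * np.eye(U.shape[1])
    # Fix V and solve for U: U = R V (V^T V + lam I)^{-1}
    U = np.linalg.solve(V.T @ V + I, V.T @ R.T).T
    # Fix U and solve for V: V = R^T U (U^T U + lam I)^{-1}
    V = np.linalg.solve(U.T @ U + I, U.T @ R).T
    return U, V

rng = np.random.default_rng(0)
R = rng.normal(size=(100, 80))
U, V = rng.normal(size=(100, 8)), rng.normal(size=(80, 8))
for _ in range(10):
    U, V = als_sweep(R, U, V)
print(np.linalg.norm(R - U @ V.T))  # reconstruction error shrinks per sweep
```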
2112.02857
|
Zhipeng Luo
|
Changqing Zhou, Zhipeng Luo, Yueru Luo, Tianrui Liu, Liang Pan,
Zhongang Cai, Haiyu Zhao, Shijian Lu
|
PTTR: Relational 3D Point Cloud Object Tracking with Transformer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a point cloud sequence, 3D object tracking aims to predict the location
and orientation of an object in the current search point cloud given a template
point cloud. Motivated by the success of transformers, we propose Point
Tracking TRansformer (PTTR), which efficiently predicts high-quality 3D
tracking results in a coarse-to-fine manner with the help of transformer
operations. PTTR consists of three novel designs. 1) Instead of random
sampling, we design Relation-Aware Sampling to preserve points relevant to the
given template during subsampling. 2) Furthermore, we propose a Point Relation
Transformer (PRT) consisting of a self-attention and a cross-attention module.
The global self-attention operation captures long-range dependencies to enhance
encoded point features for the search area and the template, respectively.
Subsequently, we generate the coarse tracking results by matching the two sets
of point features via cross-attention. 3) Based on the coarse tracking results,
we employ a novel Prediction Refinement Module to obtain the final refined
prediction. In addition, we create a large-scale point cloud single object
tracking benchmark based on the Waymo Open Dataset. Extensive experiments show
that PTTR achieves superior point cloud tracking in both accuracy and
efficiency.
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 08:28:05 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Dec 2021 05:28:47 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Feb 2022 14:59:32 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Mar 2022 10:04:03 GMT"
},
{
"version": "v5",
"created": "Wed, 30 Mar 2022 05:25:15 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Zhou",
"Changqing",
""
],
[
"Luo",
"Zhipeng",
""
],
[
"Luo",
"Yueru",
""
],
[
"Liu",
"Tianrui",
""
],
[
"Pan",
"Liang",
""
],
[
"Cai",
"Zhongang",
""
],
[
"Zhao",
"Haiyu",
""
],
[
"Lu",
"Shijian",
""
]
] |
new_dataset
| 0.999841 |
2112.04482
|
Ronghang Hu
|
Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon,
Wojciech Galuba, Marcus Rohrbach, Douwe Kiela
|
FLAVA: A Foundational Language And Vision Alignment Model
|
CVPR 2022
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art vision and vision-and-language models rely on large-scale
visio-linguistic pretraining for obtaining good performance on a variety of
downstream tasks. Generally, such models are often either cross-modal
(contrastive) or multi-modal (with earlier fusion) but not both; and they often
only target specific modalities or tasks. A promising direction would be to use
a single holistic universal model, as a "foundation", that targets all
modalities at once -- a true vision and language foundation model should be
good at vision tasks, language tasks, and cross- and multi-modal vision and
language tasks. We introduce FLAVA as such a model and demonstrate impressive
performance on a wide range of 35 tasks spanning these target modalities.
|
[
{
"version": "v1",
"created": "Wed, 8 Dec 2021 18:59:16 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Feb 2022 04:55:27 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Mar 2022 18:22:08 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Singh",
"Amanpreet",
""
],
[
"Hu",
"Ronghang",
""
],
[
"Goswami",
"Vedanuj",
""
],
[
"Couairon",
"Guillaume",
""
],
[
"Galuba",
"Wojciech",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Kiela",
"Douwe",
""
]
] |
new_dataset
| 0.999505 |
2112.06390
|
Juil Koo
|
Juil Koo, Ian Huang, Panos Achlioptas, Leonidas Guibas, Minhyuk Sung
|
PartGlot: Learning Shape Part Segmentation from Language Reference Games
|
CVPR 2022 (Oral)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce PartGlot, a neural framework and associated architectures for
learning semantic part segmentation of 3D shape geometry, based solely on part
referential language. We exploit the fact that linguistic descriptions of a
shape can provide priors on the shape's parts -- as natural language has
evolved to reflect human perception of the compositional structure of objects,
essential to their recognition and use. For training, we use the paired
geometry / language data collected in the ShapeGlot work for their reference
game, where a speaker creates an utterance to differentiate a target shape from
two distractors and the listener has to find the target based on this
utterance. Our network is designed to solve this target discrimination problem,
carefully incorporating a Transformer-based attention module so that the output
attention can precisely highlight the semantic part or parts described in the
language. Furthermore, the network operates without any direct supervision on
the 3D geometry itself. Surprisingly, we further demonstrate that the learned
part information is generalizable to shape classes unseen during training. Our
approach opens the possibility of learning 3D shape parts from language alone,
without the need for large-scale part geometry annotations, thus facilitating
annotation acquisition.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 02:57:57 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 04:26:49 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Koo",
"Juil",
""
],
[
"Huang",
"Ian",
""
],
[
"Achlioptas",
"Panos",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Sung",
"Minhyuk",
""
]
] |
new_dataset
| 0.999155 |
2112.11427
|
Roy Or-El
|
Roy Or-El and Xuan Luo and Mengyi Shan and Eli Shechtman and Jeong
Joon Park and Ira Kemelmacher-Shlizerman
|
StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation
|
Camera-Ready version. Paper was accepted as oral to CVPR 2022. Added
discussions and figures from the rebuttal to the supplementary material
(sections C & F). Project Webpage: https://stylesdf.github.io/
| null | null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce a high resolution, 3D-consistent image and shape generation
technique which we call StyleSDF. Our method is trained on single-view RGB data
only, and stands on the shoulders of StyleGAN2 for image generation, while
solving two main challenges in 3D-aware GANs: 1) high-resolution,
view-consistent generation of the RGB images, and 2) detailed 3D shape. We
achieve this by merging an SDF-based 3D representation with a style-based 2D
generator. Our 3D implicit network renders low-resolution feature maps, from
which the style-based network generates view-consistent, 1024x1024 images.
Notably, our SDF-based 3D modeling defines detailed 3D surfaces, leading to
consistent volume rendering. Our method outperforms the state of the art in
terms of visual and geometric quality.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 18:45:45 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 01:00:39 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Or-El",
"Roy",
""
],
[
"Luo",
"Xuan",
""
],
[
"Shan",
"Mengyi",
""
],
[
"Shechtman",
"Eli",
""
],
[
"Park",
"Jeong Joon",
""
],
[
"Kemelmacher-Shlizerman",
"Ira",
""
]
] |
new_dataset
| 0.994443 |
2202.04606
|
Abdesslem Layeb
|
Abdesslem Layeb
|
New hard benchmark functions for global optimization
| null | null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this paper, we present some new unimodal, multimodal, and noise test
functions to assess the performance of global optimization algorithms. All the
test functions are multidimensional problems. The 2-dimension landscape of the
proposed functions has been graphically presented in 3D space to show their
geometry, however these functions are more complicated in dimensions greater
than 3. To show the hardness of these functions, we conducted an experimental
study with some powerful algorithms, such as the CEC competition winners
LSHADE, MadDe, and LSHADE-SPACMA. The novel Tangent search algorithm (TSA) and
its modified version (mTSA) were also used in the experimental study. The
results demonstrate the hardness of the proposed functions. The source code of
the proposed test functions is available on the MATLAB File Exchange website:
https://www.mathworks.com/matlabcentral/fileexchange/106450-new-hard-benchmark-functions-for-global-optimization?s_tid=srchtitle
|
[
{
"version": "v1",
"created": "Wed, 9 Feb 2022 17:54:44 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Feb 2022 09:11:26 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Mar 2022 13:26:06 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Layeb",
"Abdesslem",
""
]
] |
new_dataset
| 0.962068 |
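The abstract above points to MATLAB sources for the proposed functions, which are not reproduced here. For readers unfamiliar with the genre, the classic Rastrigin function below (not one of Layeb's new functions) illustrates what a multimodal benchmark for global optimization looks like in code.

```python
import numpy as np

def rastrigin(x):
    """Classic multimodal benchmark: global minimum f(0) = 0, surrounded by
    a regular lattice of local minima that traps local search."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

print(rastrigin(np.zeros(5)))      # 0.0 at the global optimum
print(rastrigin(np.full(5, 0.5)))  # 101.25, far from optimal
```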
2203.04356
|
Pavan Holur
|
Pavan Holur, Tianyi Wang, Shadi Shahsavari, Timothy Tangherlini, Vwani
Roychowdhury
|
Which side are you on? Insider-Outsider classification in
conspiracy-theoretic social media
|
ACL 2022: 60th Annual Meeting of the Association for Computational
Linguistics 8+4 pages, 6 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Social media is a breeding ground for threat narratives and related
conspiracy theories. In these, an outside group threatens the integrity of an
inside group, leading to the emergence of sharply defined group identities:
Insiders -- agents with whom the authors identify and Outsiders -- agents who
threaten the insiders. Inferring the members of these groups constitutes a
challenging new NLP task: (i) Information is distributed over many
poorly-constructed posts; (ii) Threats and threat agents are highly contextual,
with the same post potentially having multiple agents assigned to membership in
either group; (iii) An agent's identity is often implicit and transitive; and
(iv) Phrases used to imply Outsider status often do not follow common negative
sentiment patterns. To address these challenges, we define a novel
Insider-Outsider classification task. Because we are not aware of any
appropriate existing datasets or attendant models, we introduce a labeled
dataset (CT5K) and design a model (NP2IO) to address this task. NP2IO leverages
pretrained language modeling to classify Insiders and Outsiders. NP2IO is shown
to be robust, generalizing to noun phrases not seen during training, and
exceeding the performance of non-trivial baseline models by $20\%$.
|
[
{
"version": "v1",
"created": "Tue, 8 Mar 2022 19:29:53 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 07:20:37 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Holur",
"Pavan",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Shahsavari",
"Shadi",
""
],
[
"Tangherlini",
"Timothy",
""
],
[
"Roychowdhury",
"Vwani",
""
]
] |
new_dataset
| 0.987995 |
2203.09418
|
Yongzhi Su
|
Yongzhi Su, Mahdi Saleh, Torben Fetzer, Jason Rambach, Nassir Navab,
Benjamin Busam, Didier Stricker, Federico Tombari
|
ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose
Estimation
|
CVPR2022 camera ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Establishing correspondences from image to 3D has been a key task of 6DoF
object pose estimation for a long time. To predict pose more accurately, deeply
learned dense maps replaced sparse templates. Dense methods also improved pose
estimation in the presence of occlusion. More recently, researchers have shown
improvements by learning object fragments as segmentation. In this work, we
present a discrete descriptor, which can represent the object surface densely.
By incorporating a hierarchical binary grouping, we can encode the object
surface very efficiently. Moreover, we propose a coarse to fine training
strategy, which enables fine-grained correspondence prediction. Finally, by
matching predicted codes with the object surface and using a PnP solver, we
estimate the 6DoF pose. Results on the public LM-O and YCB-V datasets show
major improvement over the state of the art w.r.t. ADD(-S) metric, even
surpassing RGB-D based methods in some cases.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 16:16:24 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 18:23:03 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Su",
"Yongzhi",
""
],
[
"Saleh",
"Mahdi",
""
],
[
"Fetzer",
"Torben",
""
],
[
"Rambach",
"Jason",
""
],
[
"Navab",
"Nassir",
""
],
[
"Busam",
"Benjamin",
""
],
[
"Stricker",
"Didier",
""
],
[
"Tombari",
"Federico",
""
]
] |
new_dataset
| 0.998112 |
2203.14790
|
Barak Gahtan
|
Barak Gahtan
|
5G Routing Interfered Environment
| null | null | null | null |
cs.NI cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
5G is the next-generation cellular network technology, with the goal of
meeting the critical demand for bandwidth required to accommodate a high
density of users, which it supports through flexible architectures. 5G is
enabled by mmWave communication, which operates at frequencies ranging from 30
to 300 GHz. This paper describes the design of the 5G Routing Interfered
Environment (5GRIE), a Python-based environment built on Gym's methods. The
environment can run different algorithms to route packets between
source-destination pairs using a formulated interference model. Deep
Reinforcement Learning algorithms that use Stable-Baselines 3, as well as
heuristic-based algorithms such as random or greedy routing, can be run on it.
An algorithm named Profitable is also provided.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 14:25:45 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 18:12:09 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Gahtan",
"Barak",
""
]
] |
new_dataset
| 0.987861 |
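5GRIE itself is not shown in this record; as a hint of what a Gym-based routing environment looks like, here is a minimal toy sketch (hypothetical class, spaces, and reward; the real 5GRIE observation/action design and interference model differ).

```python
import gym
import numpy as np
from gym import spaces

class ToyRoutingEnv(gym.Env):
    """Toy packet-routing environment on a fully connected graph."""

    def __init__(self, n_nodes=6):
        super().__init__()
        self.n_nodes = n_nodes
        self.action_space = spaces.Discrete(n_nodes)               # next hop
        self.observation_space = spaces.MultiDiscrete([n_nodes, n_nodes])
        self.state = None

    def reset(self):
        # Classic (pre-0.26) Gym API: reset returns only the observation.
        self.state = np.array([0, self.n_nodes - 1])  # (current, destination)
        return self.state

    def step(self, action):
        self.state[0] = action
        done = bool(self.state[0] == self.state[1])
        # -1 per hop, +10 on delivery; 5GRIE would additionally score the
        # interference its model assigns to concurrently active links.
        reward = 10.0 if done else -1.0
        return self.state, reward, done, {}

env = ToyRoutingEnv()
obs = env.reset()
obs, reward, done, _ = env.step(env.action_space.sample())
```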
2203.14856
|
Nizhuan Wang
|
Haitong Tang, Shuang He, Lingbin Bian, Zhiming Cui, Nizhuan Wang
|
WSEBP: A Novel Width-depth Synchronous Extension-based Basis Pursuit
Algorithm for Multi-Layer Convolutional Sparse Coding
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The pursuit algorithms integrated in multi-layer convolutional sparse coding
(ML-CSC) can interpret convolutional neural networks (CNNs). However, many
current state-of-the-art (SOTA) pursuit algorithms require multiple iterations
to optimize the solution of ML-CSC, which limits their application to deeper
CNNs due to the high computational cost and the large amount of resources
needed for very small performance gains. In this study, we focus on the 0th
iteration of pursuit algorithms by introducing an effective initialization
strategy for each
layer, by which the solution for ML-CSC can be improved. Specifically, we first
propose a novel width-depth synchronous extension-based basis pursuit (WSEBP)
algorithm which solves the ML-CSC problem without the limitation of the number
of iterations compared to the SOTA algorithms and maximizes the performance by
an effective initialization in each layer. Then, we propose a simple and
unified ML-CSC-based classification network (ML-CSC-Net) which consists of an
ML-CSC-based feature encoder and a fully-connected layer to validate the
performance of WSEBP on image classification task. The experimental results
show that our proposed WSEBP outperforms SOTA algorithms in terms of accuracy
and resource consumption. In addition, WSEBP integrated into CNNs can
improve the performance of deeper CNNs and make them interpretable. Finally,
taking VGG as an example, we propose WSEBP-VGG13 to enhance the performance of
VGG13, which achieves competitive results on four public datasets, i.e., 87.79%
vs. 86.83% on Cifar-10 dataset, 58.01% vs. 54.60% on Cifar-100 dataset, 91.52%
vs. 89.58% on COVID-19 dataset, and 99.88% vs. 99.78% on Crack dataset,
respectively. The results show the effectiveness of the proposed WSEBP, the
improved performance of ML-CSC with WSEBP, and its value for interpreting
CNNs, including deeper ones.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 15:53:52 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 02:22:24 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Tang",
"Haitong",
""
],
[
"He",
"Shuang",
""
],
[
"Bian",
"Lingbin",
""
],
[
"Cui",
"Zhiming",
""
],
[
"Wang",
"Nizhuan",
""
]
] |
new_dataset
| 0.970856 |
2203.15110
|
Rohit Goswami MInstP
|
Laurence Kedward (1) and Balint Aradi (2) and Ondrej Certik (3) and
Milan Curcic (4) and Sebastian Ehlert (5) and Philipp Engel (6) and Rohit
Goswami (7 and 8) and Michael Hirsch (9) and Asdrubal Lozada-Blanco (10) and
Vincent Magnin (11) and Arjen Markus (12) and Emanuele Pagone (13) and Ivan
Pribec (14) and Brad Richardson (15) and Harris Snyder (16) and John Urban
(17) and Jeremie Vandenplas (18) ((1) Department of Aerospace Engineering,
University of Bristol, (2) Bremen Center for Computational Materials Science,
(3) Los Alamos National Laboratory, (4) University of Miami, (5) Mulliken
Center for Theoretical Chemistry, Institut fur Physikalische und Theoretische
Chemie Universitat Bonn, (6) Institut fur Geodasie und
Geoinformationstechnik, Technische Universitat Berlin, (7) Quansight Austin
USA, (8) Science Institute, University of Iceland, (9) Center for Space
Physics, Boston University, (10) Sao Carlos Institute of Physics, University
of Sao Paulo, (11) Univ. Lille, CNRS, Centrale Lille, Univ. Polytechnique
Hauts-de-France, IEMN, (12) Deltares Research Institute, The Netherlands,
(13) Cranfield University, Sustainable Manufacturing Systems Centre, School
of Aerospace Transport and Manufacturing, (14) Chair of Brewing and Beverage
Technology, Technical University of Munich, (15) Archaeologic, Inc., (16)
Structura Biotechnology Inc., Toronto, Canada, (17) HPC Consultant, USA, (18)
Animal Breeding and Genomics, Wageningen, The Netherlands)
|
The State of Fortran
|
12 pages, 2 figures, 1 table. Computing in Science & Engineering
(2022)
|
Computing in Science & Engineering
|
10.1109/MCSE.2022.3159862
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A community of developers has formed to modernize the Fortran ecosystem. In
this article, we describe the high-level features of Fortran that continue to
make it a good choice for scientists and engineers in the 21st century. Ongoing
efforts include the development of a Fortran standard library and package
manager, the fostering of a friendly and welcoming online community, improved
compiler support, and language feature development. The lessons learned are
common across contemporary programming languages and help reduce the learning
curve and increase adoption of Fortran.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 21:39:07 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 11:53:45 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Kedward",
"Laurence",
"",
"7 and 8"
],
[
"Aradi",
"Balint",
"",
"7 and 8"
],
[
"Certik",
"Ondrej",
"",
"7 and 8"
],
[
"Curcic",
"Milan",
"",
"7 and 8"
],
[
"Ehlert",
"Sebastian",
"",
"7 and 8"
],
[
"Engel",
"Philipp",
"",
"7 and 8"
],
[
"Goswami",
"Rohit",
"",
"7 and 8"
],
[
"Hirsch",
"Michael",
""
],
[
"Lozada-Blanco",
"Asdrubal",
""
],
[
"Magnin",
"Vincent",
""
],
[
"Markus",
"Arjen",
""
],
[
"Pagone",
"Emanuele",
""
],
[
"Pribec",
"Ivan",
""
],
[
"Richardson",
"Brad",
""
],
[
"Snyder",
"Harris",
""
],
[
"Urban",
"John",
""
],
[
"Vandenplas",
"Jeremie",
""
]
] |
new_dataset
| 0.999747 |
2203.15533
|
Ivan Shugurov
|
Ivan Shugurov, Fu Li, Benjamin Busam, Slobodan Ilic
|
OSOP: A Multi-Stage One Shot Object Pose Estimation Framework
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel one-shot method for object detection and 6 DoF pose
estimation that does not require training on target objects. At test time, it
takes as input a target image and a textured 3D query model. The core idea is
to represent a 3D model with a number of 2D templates rendered from different
viewpoints. This enables CNN-based direct dense feature extraction and
matching. The object is first localized in 2D, then its approximate viewpoint
is estimated, followed by dense 2D-3D correspondence prediction. The final pose
is computed with PnP. We evaluate the method on LineMOD, Occlusion, Homebrewed,
YCB-V and TLESS datasets and report very competitive performance in comparison
to the state-of-the-art methods trained on synthetic data, even though our
method is not trained on the object models used for testing.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 13:12:00 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 07:31:14 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Shugurov",
"Ivan",
""
],
[
"Li",
"Fu",
""
],
[
"Busam",
"Benjamin",
""
],
[
"Ilic",
"Slobodan",
""
]
] |
new_dataset
| 0.999532 |
2203.15829
|
Xiaotian Li
|
Xiaotian Li, Xiang Zhang, Huiyuan Yang, Wenna Duan, Weiying Dai and
Lijun Yin
|
An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic
Facial Actions for Emotion Analysis
| null |
FG2021(long Oral)
|
10.1109/FG47880.2020.00050
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Emotion is an experience associated with a particular pattern of
physiological activity along with different physiological, behavioral and
cognitive changes. One behavioral change is facial expression, which has been
studied extensively over the past few decades. Facial behavior varies with a
person's emotion according to differences in terms of culture, personality,
age, context, and environment. In recent years, physiological activities have
been used to study emotional responses. A typical signal is the
electroencephalogram (EEG), which measures brain activity. Most existing
EEG-based emotion analysis has overlooked the role of facial expression
changes. There exists little research on the relationship between facial
behavior and brain signals, due to the lack of datasets measuring both EEG and
facial action signals simultaneously. To address this problem, we propose to
develop a new database by collecting facial expressions, action units, and EEGs
simultaneously. We recorded the EEGs and face videos of both posed facial
actions and spontaneous expressions from 29 participants of different ages,
genders, and ethnic backgrounds. Differing from existing approaches, we designed a
protocol to capture the EEG signals by evoking participants' individual action
units explicitly. We also investigated the relation between the EEG signals and
facial action units. As a baseline, the database has been evaluated through the
experiments on both posed and spontaneous emotion recognition with images
alone, EEG alone, and EEG fused with images, respectively. The database will be
released to the research community to advance the state of the art for
automatic emotion recognition.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 18:02:12 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Li",
"Xiaotian",
""
],
[
"Zhang",
"Xiang",
""
],
[
"Yang",
"Huiyuan",
""
],
[
"Duan",
"Wenna",
""
],
[
"Dai",
"Weiying",
""
],
[
"Yin",
"Lijun",
""
]
] |
new_dataset
| 0.992926 |
2203.15841
|
Ulices Santa Cruz Leal
|
Ulices Santa Cruz and Yasser Shoukry
|
NNLander-VeriF: A Neural Network Formal Verification Framework for
Vision-Based Autonomous Aircraft Landing
|
18 pages
| null | null | null |
cs.LG cs.CV cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider the problem of formally verifying a Neural Network
(NN) based autonomous landing system. In such a system, an NN controller
processes images from a camera to guide the aircraft while approaching the
runway. A central challenge for the safety and liveness verification of
vision-based closed-loop systems is the lack of mathematical models that
capture the relation between the system states (e.g., position of the
aircraft) and the images processed by the vision-based NN controller. Another
challenge is the limited abilities of state-of-the-art NN model checkers. Such
model checkers can reason only about simple input-output robustness properties
of neural networks. This limitation creates a gap between the NN model checker
abilities and the need to verify a closed-loop system while considering the
aircraft dynamics, the perception components, and the NN controller. To this
end, this paper presents NNLander-VeriF, a framework to verify vision-based NN
controllers used for autonomous landing. NNLander-VeriF addresses the
challenges above by exploiting geometric models of perspective cameras to
obtain a mathematical model that captures the relation between the aircraft
states and the inputs to the NN controller. By converting this model into an NN
(with manually assigned weights) and composing it with the NN controller, one
can capture the relation between aircraft states and control actions using one
augmented NN. Such an augmented NN model leads to a natural encoding of the
closed-loop verification into several NN robustness queries, which
state-of-the-art NN model checkers can handle. Finally, we evaluate our
framework to formally verify the properties of a trained NN and we show its
efficiency.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 18:18:53 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Cruz",
"Ulices Santa",
""
],
[
"Shoukry",
"Yasser",
""
]
] |
new_dataset
| 0.990383 |
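The key trick in the abstract above is encoding a geometric camera model as a network layer. The pinhole projection it builds on is standard and easy to state in code; the intrinsics below are made-up values for illustration, not the paper's setup.

```python
import numpy as np

def project_pinhole(point_world, cam_pos, R, f=800.0, cx=320.0, cy=240.0):
    """Project a 3D world point to pixel coordinates with a pinhole camera.

    R is the 3x3 world-to-camera rotation; cam_pos the camera center.
    This state-to-pixel map is what NNLander-VeriF re-expresses as a NN
    with manually assigned weights.
    """
    x, y, z = R @ (np.asarray(point_world) - np.asarray(cam_pos))
    assert z > 0, "point must lie in front of the camera"
    return np.array([f * x / z + cx, f * y / z + cy])

# A runway corner 100 m ahead of a camera aligned with the world axes:
print(project_pinhole([10.0, -5.0, 100.0], [0.0, 0.0, 0.0], np.eye(3)))
```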
2203.15853
|
Xiangyu Zhang
|
Xiangyu Zhang, Peter I. Frazier
|
Near-optimality for infinite-horizon restless bandits with many arms
| null | null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Restless bandits are an important class of problems with applications in
recommender systems, active learning, revenue management and other areas. We
consider infinite-horizon discounted restless bandits with many arms where a
fixed proportion of arms may be pulled in each period and where arms share a
finite state space. Although an average-case-optimal policy can be computed via
stochastic dynamic programming, the computation required grows exponentially
with the number of arms $N$. Thus, it is important to find scalable policies
that can be computed efficiently for large $N$ and that are near optimal in
this regime, in the sense that the optimality gap (i.e. the loss of expected
performance against an optimal policy) per arm vanishes for large $N$. However,
the most popular approach, the Whittle index, requires a hard-to-verify
indexability condition to be well-defined and another hard-to-verify condition
to guarantee a $o(N)$ optimality gap. We present a method resolving these
difficulties. By replacing a global Lagrange multiplier used by the Whittle
index with a sequence of Lagrangian multipliers, one per time period up to a
finite truncation point, we derive a class of policies, called fluid-balance
policies, that have a $O(\sqrt{N})$ optimality gap. Unlike the Whittle index,
fluid-balance policies do not require indexability to be well-defined and their
$O(\sqrt{N})$ optimality gap bound holds universally without sufficient
conditions. We also demonstrate empirically that fluid-balance policies provide
state-of-the-art performance on specific problems.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 18:49:21 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Zhang",
"Xiangyu",
""
],
[
"Frazier",
"Peter I.",
""
]
] |
new_dataset
| 0.972293 |
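To make the relaxation in the abstract above concrete, the sketch below (our notation, not necessarily the paper's) shows the standard Lagrangian decoupling for restless bandits with discount $\gamma$ and pull budget $\alpha N$; fluid-balance policies replace the single multiplier $\lambda$ with per-period multipliers $\lambda_0, \lambda_1, \dots$ up to a truncation point.

```latex
% Relax "pull exactly alpha*N arms each period" to hold in expectation:
\max_{\pi}\;
  \mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^{t}\sum_{i=1}^{N} r(s_t^i, a_t^i)\Big]
\quad\text{s.t.}\quad
  \mathbb{E}\Big[\sum_{i=1}^{N} a_t^i\Big] = \alpha N \;\;\forall t.

% Dualizing with one multiplier decouples the N arms into identical
% single-arm problems (the Whittle viewpoint):
L(\pi,\lambda) \;=\; \sum_{i=1}^{N}
  \mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^{t}
    \big(r(s_t^i, a_t^i) - \lambda\, a_t^i\big)\Big]
  \;+\; \frac{\lambda\,\alpha N}{1-\gamma}.
```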
2203.15856
|
Luciano Oliveira
|
Bernardo Silva, La\'is Pinheiro, Brenda Sobrinho, Fernanda Lima, Bruna
Sobrinho, Kalyf Abdalla, Matheus Pithon, Patr\'icia Cury, Luciano Oliveira
|
OdontoAI: A human-in-the-loop labeled data set and an online platform to
boost research on dental panoramic radiographs
|
45 pages, 11 figures, journal preprint
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep learning has remarkably advanced in the last few years, supported by
large labeled data sets. These data sets are precious yet scarce because of the
time-consuming labeling procedures, discouraging researchers from producing
them. This scarcity is especially true in dentistry, where deep learning
applications are still in an embryonic stage. Motivated by this background, we
address in this study the construction of a public data set of dental panoramic
radiographs. Our objects of interest are the teeth, which are segmented and
numbered, as they are the primary targets for dentists when screening a
panoramic radiograph. We benefited from the human-in-the-loop (HITL) concept to
expedite the labeling procedure, using predictions from deep neural networks as
provisional labels, later verified by human annotators. All the gathering and
labeling procedures of this novel data set are thoroughly analyzed. The results
were consistent and behaved as expected: At each HITL iteration, the model
predictions improved. Our results demonstrated a 51% labeling time reduction
using HITL, saving us more than 390 continuous working hours. In a novel online
platform, called OdontoAI, created to work as task central for this novel data
set, we released 4,000 images, from which 2,000 have their labels publicly
available for model fitting. The labels of the other 2,000 images are private
and used for model evaluation considering instance and semantic segmentation
and numbering. To the best of our knowledge, this is the largest-scale publicly
available data set for panoramic radiographs, and OdontoAI is the first
platform of its kind in dentistry.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 18:57:23 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Silva",
"Bernardo",
""
],
[
"Pinheiro",
"Laís",
""
],
[
"Sobrinho",
"Brenda",
""
],
[
"Lima",
"Fernanda",
""
],
[
"Sobrinho",
"Bruna",
""
],
[
"Abdalla",
"Kalyf",
""
],
[
"Pithon",
"Matheus",
""
],
[
"Cury",
"Patrícia",
""
],
[
"Oliveira",
"Luciano",
""
]
] |
new_dataset
| 0.982853 |
2203.15862
|
Philipp Zschoche
|
Philipp Zschoche
|
Restless Temporal Path Parameterized Above Lower Bounds
| null | null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reachability questions are one of the most fundamental algorithmic primitives
in temporal graphs -- graphs whose edge set changes over discrete time steps. A
core problem here is the NP-hard Short Restless Temporal Path: given a temporal
graph $\mathcal G$, two distinct vertices $s$ and $z$, and two numbers $\delta$
and $k$, is there a $\delta$-restless temporal $s$-$z$ path of length at most
$k$? A temporal path is a path whose edges appear in chronological order and a
temporal path is $\delta$-restless if two consecutive path edges appear at most
$\delta$ time steps apart from each other. Among others, this problem has
applications in neuroscience and epidemiology. While Short Restless Temporal
Path is known to be computationally hard, e.g., it is NP-hard for only three
time steps and W[1]-hard when parameterized by the feedback vertex number of
the underlying graph, it is fixed-parameter tractable when parameterized by the
path length $k$. We improve on this by showing that Short Restless Temporal
Path can be solved in (randomized) $4^{k-d}|\mathcal G|^{O(1)}$ time, where $d$
is the minimum length of a temporal $s$-$z$ path.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 19:07:41 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Zschoche",
"Philipp",
""
]
] |
new_dataset
| 0.986898 |
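The problem definition in the abstract above is concrete enough to state as code. Below is a small brute-force search for a δ-restless temporal s-z path (assuming strictly increasing time labels along a path); it enforces vertex-distinctness, which is what makes the problem NP-hard, and it is emphatically not the paper's randomized 4^(k-d) algorithm.

```python
from collections import defaultdict

def has_restless_path(edges, s, z, delta, k):
    """Is there a delta-restless temporal s-z path (distinct vertices)
    of length at most k? Brute force, suitable for tiny instances only.

    edges: iterable of (u, v, t) undirected time-stamped edges. Consecutive
    path edges at times t, t' must satisfy t < t' <= t + delta.
    """
    adj = defaultdict(list)
    for u, v, t in edges:
        adj[u].append((v, t))
        adj[v].append((u, t))

    def dfs(v, arrival, visited, length):
        if v == z:
            return True
        if length == k:
            return False
        for w, t in adj[v]:
            ok_time = arrival is None or t > arrival
            ok_rest = arrival is None or t <= arrival + delta
            if ok_time and ok_rest and w not in visited:
                if dfs(w, t, visited | {w}, length + 1):
                    return True
        return False

    return dfs(s, None, {s}, 0)

E = [("s", "a", 1), ("a", "z", 5)]
print(has_restless_path(E, "s", "z", delta=1, k=2))  # False: waits too long at a
print(has_restless_path(E, "s", "z", delta=4, k=2))  # True
```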
2203.15926
|
Ayush Tewari
|
Ayush Tewari, Mallikarjun B R, Xingang Pan, Ohad Fried, Maneesh
Agrawala, Christian Theobalt
|
Disentangled3D: Learning a 3D Generative Model with Disentangled
Geometry and Appearance from Monocular Images
|
CVPR 2022
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Learning 3D generative models from a dataset of monocular images enables
self-supervised 3D reasoning and controllable synthesis. State-of-the-art 3D
generative models are GANs which use neural 3D volumetric representations for
synthesis. Images are synthesized by rendering the volumes from a given camera.
These models can disentangle the 3D scene from the camera viewpoint in any
generated image. However, most models do not disentangle other factors of image
formation, such as geometry and appearance. In this paper, we design a 3D GAN
which can learn a disentangled model of objects, just from monocular
observations. Our model can disentangle the geometry and appearance variations
in the scene, i.e., we can independently sample from the geometry and
appearance spaces of the generative model. This is achieved using a novel
non-rigid deformable scene formulation. A 3D volume which represents an object
instance is computed as a non-rigidly deformed canonical 3D volume. Our method
learns the canonical volume, as well as its deformations, jointly during
training. This formulation also helps us improve the disentanglement between
the 3D scene and the camera viewpoints using a novel pose regularization loss
defined on the 3D deformation field. In addition, we further model the inverse
deformations, enabling the computation of dense correspondences between images
generated by our model. Finally, we design an approach to embed real images
into the latent space of our disentangled generative model, enabling editing of
real images.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 22:03:18 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Tewari",
"Ayush",
""
],
[
"R",
"Mallikarjun B",
""
],
[
"Pan",
"Xingang",
""
],
[
"Fried",
"Ohad",
""
],
[
"Agrawala",
"Maneesh",
""
],
[
"Theobalt",
"Christian",
""
]
] |
new_dataset
| 0.999604 |
2203.15930
|
Julien Piet
|
Julien Piet, Jaiden Fairoze and Nicholas Weaver
|
Extracting Godl [sic] from the Salt Mines: Ethereum Miners Extracting
Value
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Cryptocurrency miners have great latitude in deciding which transactions they
accept, including their own, and the order in which they accept them. Ethereum
miners in particular use this flexibility to collect MEV-Miner Extractable
Value-by structuring transactions to extract additional revenue. Ethereum also
contains numerous bots that attempt to obtain MEV based on
public-but-not-yet-confirmed transactions. Private relays shelter operations
from these selfsame bots by directly submitting transactions to mining pools.
In this work, we develop an algorithm to detect MEV exploitation present in
previously mined blocks. We use our implementation of the detector to analyze
MEV usage and profit redistribution, finding that miners make the lion's share
of the profits, rather than independent users of the private relays. More
specifically, (i) 73% of private transactions hide trading activity or
re-distribute miner rewards, and 87.6% of MEV collection is accomplished with
privately submitted transactions, (ii) our algorithm finds more than $6M worth
of MEV profit in a period of 12 days, two thirds of which go directly to
miners, and (iii) MEV represents 9.2% of miners' profit from transaction fees.
Furthermore, in those 12 days, we also identify four blocks that contain
enough MEV profits to make time-bandit forking attacks economically viable for
large miners, undermining the security and stability of Ethereum as a whole.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 22:19:24 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Piet",
"Julien",
""
],
[
"Fairoze",
"Jaiden",
""
],
[
"Weaver",
"Nicholas",
""
]
] |
new_dataset
| 0.979346 |
2203.15941
|
Kevin Dai
|
Kevin Dai, Xinyu Wang, Allison M. Rojas, Evan Harber, Yu Tian,
Nicholas Paiva, Joseph Gnehm, Evan Schindewolf, Howie Choset, Victoria A.
Webster-Wood, Lu Li
|
Design of a Biomimetic Tactile Sensor for Material Classification
|
To be published in ICRA 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Tactile sensing typically involves active exploration of unknown surfaces and
objects, making it especially effective at processing the characteristics of
materials and textures. A key property extracted by human tactile perception is
surface roughness, which relies on measuring vibratory signals using the
multi-layered fingertip structure. Existing robotic systems lack tactile
sensors that are able to provide high dynamic sensing ranges, perceive material
properties, and maintain a low hardware cost. In this work, we introduce the
reference design and fabrication procedure of a miniature and low-cost tactile
sensor consisting of a biomimetic cutaneous structure, including the artificial
fingerprint, dermis, epidermis, and an embedded magnet-sensor structure which
serves as a mechanoreceptor for converting mechanical information to digital
signals. The presented sensor is capable of detecting high-resolution magnetic
field data through the Hall effect and creating high-dimensional time-frequency
domain features for material texture classification. Additionally, we
investigate the effects of different superficial sensor fingerprint patterns
for classifying materials through both simulation and physical experimentation.
After extracting time series and frequency domain features, we assess a
k-nearest neighbors classifier for distinguishing between different materials.
The results from our experiments show that our biomimetic tactile sensors with
fingerprint ridges can classify materials with more than 8% higher accuracy and
lower variability than ridge-less sensors. These results, along with the low
cost and customizability of our sensor, demonstrate high potential for lowering
the barrier to entry for a wide array of robotic applications, including
model-less tactile sensing for texture classification, material inspection, and
object recognition.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 22:51:17 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Dai",
"Kevin",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Rojas",
"Allison M.",
""
],
[
"Harber",
"Evan",
""
],
[
"Tian",
"Yu",
""
],
[
"Paiva",
"Nicholas",
""
],
[
"Gnehm",
"Joseph",
""
],
[
"Schindewolf",
"Evan",
""
],
[
"Choset",
"Howie",
""
],
[
"Webster-Wood",
"Victoria A.",
""
],
[
"Li",
"Lu",
""
]
] |
new_dataset
| 0.999127 |
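The classification pipeline the abstract above evaluates (time-frequency features fed to a k-nearest neighbors classifier) is simple to sketch. The following toy reproduces only the general shape of that pipeline on synthetic vibration signals; the feature choice and data are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def spectral_features(signal, n_bins=16):
    # Coarse magnitude spectrum: a stand-in for richer time-frequency features.
    mag = np.abs(np.fft.rfft(signal))
    return np.array([chunk.mean() for chunk in np.array_split(mag, n_bins)])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 512)
# Two synthetic "materials" whose textures excite different frequencies.
make = lambda f: np.sin(2 * np.pi * f * t) + 0.3 * rng.normal(size=t.size)
X = np.array([spectral_features(make(f)) for f in (20, 20, 20, 80, 80, 80)])
y = [0, 0, 0, 1, 1, 1]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([spectral_features(make(80))]))  # -> [1]
```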
2203.15979
|
Jerin Yasmin
|
Jerin Yasmin, Mohammad Sadegh Sheikhaei, Yuan Tian
|
A First Look at Duplicate and Near-duplicate Self-admitted Technical
Debt Comments
|
4 +1 pages
| null |
10.1145/3524610.3528387
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-admitted technical debt (SATD) refers to technical debt that is
intentionally introduced by developers and explicitly documented in code
comments or other software artifacts (e.g., issue reports) to annotate
sub-optimal decisions made by developers in the software development process.
In this work, we take the first look at the existence and characteristics of
duplicate and near-duplicate SATD comments in five popular Apache OSS projects,
i.e., JSPWiki, Helix, Jackrabbit, Archiva, and SystemML. We design a method to
automatically identify groups of duplicate and near-duplicate SATD comments and
track their evolution in the software system by mining the commit history of a
software project. Leveraging the proposed method, we identified 3,520 duplicate
and near-duplicate SATD comments from the target projects, which belong to
1,141 groups. We manually analyze the content and context of a sample of 1,505
SATD comments (by sampling 100 groups for each project) and identify if they
annotate the same root cause. We also investigate whether duplicate SATD
comments exist in code clones, whether they co-exist in the same file, and
whether they are introduced and removed simultaneously. Our preliminary study
reveals several surprising findings that would shed light on future studies
aiming to improve the management of duplicate SATD comments. For instance, only
48.5% of duplicate SATD comment groups with the same root cause exist in regular
code clones, and only 33.9% of the duplicate SATD comment pairs are introduced
in the same commit.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 01:36:18 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Yasmin",
"Jerin",
""
],
[
"Sheikhaei",
"Mohammad Sadegh",
""
],
[
"Tian",
"Yuan",
""
]
] |
new_dataset
| 0.982681 |
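The paper's detection method is not reproduced in this record; as a flavor of what grouping near-duplicate comments involves, here is a naive O(n^2) sketch using textual similarity. The threshold, normalization, and grouping strategy are assumptions, not the authors' design.

```python
from difflib import SequenceMatcher

def normalize(comment):
    # Lowercase and collapse whitespace so formatting differences don't count.
    return " ".join(comment.lower().split())

def near_duplicate_groups(comments, threshold=0.9):
    groups = []
    for c in comments:
        for g in groups:
            # Compare against the group's representative (its first member).
            if SequenceMatcher(None, normalize(c), normalize(g[0])).ratio() >= threshold:
                g.append(c)
                break
        else:
            groups.append([c])
    return groups

todos = ["// TODO: fix this hack", "// todo:  fix this hack!",
         "// FIXME: rewrite parser"]
print(near_duplicate_groups(todos))  # first two comments form one group
```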
2203.15987
|
Xuhui Yang
|
Xuhui Yang, Yaowei Wang, Ke Chen, Yong Xu, Yonghong Tian
|
Fine-Grained Object Classification via Self-Supervised Pose Alignment
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic patterns of fine-grained objects are determined by subtle appearance
difference of local parts, which thus inspires a number of part-based methods.
However, due to uncontrollable object poses in images, distinctive details
carried by local regions can be spatially distributed or even self-occluded,
leading to a large variation on object representation. For discounting pose
variations, this paper proposes to learn a novel graph-based object
representation to reveal a global configuration of local parts for
self-supervised pose alignment across classes, which is employed as an
auxiliary feature regularization on a deep representation learning network.
Moreover, a coarse-to-fine supervision together with the proposed
pose-insensitive constraint on shallow-to-deep sub-networks encourages
discriminative features in a curriculum learning manner. We evaluate our method
on three popular fine-grained object classification benchmarks, consistently
achieving the state-of-the-art performance. Source codes are available at
https://github.com/yangxh11/P2P-Net.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 01:46:19 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Yang",
"Xuhui",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Chen",
"Ke",
""
],
[
"Xu",
"Yong",
""
],
[
"Tian",
"Yonghong",
""
]
] |
new_dataset
| 0.9982 |
2203.15990
|
Raula Gaikovina Kula Dr
|
Gregorio Robles, Raula Gaikovina Kula, Chaiyong Ragkhitwetsagul,
Tattiya Sakulniwat, Kenichi Matsumoto, and Jesus M. Gonzalez-Barahona
|
pycefr: Python Competency Level through Code Analysis
|
Accepted at International Conference on Program Comprehension, 2022
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Python is known to be a versatile language, well suited both for beginners
and advanced users. Some elements of the language are easier to understand than
others: some are found in any kind of code, while some others are used only by
experienced programmers. The use of these elements leads to different ways of
coding, depending on experience with the language, knowledge of its
elements, general programming competence, programming skills, etc. In
this paper, we present pycefr, a tool that detects the use of the different
elements of the Python language, effectively measuring the level of Python
proficiency required to comprehend and deal with a fragment of Python code.
Following the well-known Common European Framework of Reference for Languages
(CEFR), widely used for natural languages, pycefr categorizes Python code in
six levels, depending on the proficiency required to create and understand it.
We also discuss different use cases for pycefr: identifying code snippets that
can be understood by developers with a certain proficiency, labeling code
examples in online resources such as Stack Overflow and GitHub to match them to
a certain level of competency, helping in the onboarding process of new
developers in Open Source Software projects, etc. A video shows availability
and usage of the tool: https://tinyurl.com/ypdt3fwe.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 01:54:26 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Robles",
"Gregorio",
""
],
[
"Kula",
"Raula Gaikovina",
""
],
[
"Ragkhitwetsagul",
"Chaiyong",
""
],
[
"Sakulniwat",
"Tattiya",
""
],
[
"Matsumoto",
"Kenichi",
""
],
[
"Gonzalez-Barahona",
"Jesus M.",
""
]
] |
new_dataset
| 0.999433 |
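Detecting "elements of the Python language" as the abstract describes maps naturally onto the standard-library ast module. The level table below is a tiny hypothetical stand-in; pycefr's actual element-to-CEFR mapping is far larger and is not reproduced here.

```python
import ast

# Hypothetical element-to-level table (pycefr's real mapping is much larger).
LEVELS = {ast.For: "A2", ast.FunctionDef: "A2", ast.Lambda: "B1",
          ast.ListComp: "B1", ast.GeneratorExp: "B2", ast.ClassDef: "B2"}

def estimate_level(source):
    """Return the highest level among detected elements; these level strings
    happen to sort correctly as plain text (A1 < A2 < B1 < B2)."""
    found = {lvl for node in ast.walk(ast.parse(source))
             for typ, lvl in LEVELS.items() if isinstance(node, typ)}
    return max(found, default="A1")

print(estimate_level("for i in range(3):\n    print(i)"))     # A2
print(estimate_level("squares = [i * i for i in range(3)]"))  # B1
```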
2203.16014
|
Junchi Chu
|
Junchi Chu, Xueyun Tang
|
ESNI: Domestic Robots Design for Elderly and Disabled People
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Our paper focuses on the possibility of speech-recognition intelligent agents
assisting elderly and disabled people, improving their quality of life by
utilizing cutting-edge technologies. After researching the attitudes of elderly
and disabled people toward household agents, we propose a design framework,
ESNI (Exploration, Segmentation, Navigation, Instruction), that applies to
mobile agents and achieves functionalities such as processing human commands,
picking up a specified object, and moving an object to another location. The
agent starts by exploring an unseen environment, stores each item's information
in grid cells in its memory, and analyzes the corresponding features of each
section. We divided our indoor environment into 6 sections: kitchen, living
room, bedroom, studio, bathroom, and balcony. The agent uses algorithms to
assign sections to each grid cell and then generates a navigation trajectory
based on the section segmentation. When the user gives a command to the agent,
feature words are extracted and processed into a sequence of sub-tasks.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 02:44:57 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Chu",
"Junchi",
""
],
[
"Tang",
"Xueyun",
""
]
] |
new_dataset
| 0.998512 |
2203.16015
|
Qiang Li Capasso
|
Wanfeng Zheng, Qiang Li, Guoxin Zhang, Pengfei Wan, Zhongyuan Wang
|
ITTR: Unpaired Image-to-Image Translation with Transformers
|
18 pages, 7 figures, 5 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unpaired image-to-image translation aims to translate an image from a source
domain to a target domain without paired training data. By utilizing CNNs to
extract local semantics, various techniques have been developed to improve
the translation performance. However, CNN-based generators lack the ability to
capture long-range dependency to well exploit global semantics. Recently,
Vision Transformers have been widely investigated for recognition tasks. Though
appealing, it is inappropriate to simply transfer a recognition-based vision
transformer to image-to-image translation due to the difficulty of generation
and computational limitations. In this paper, we propose an effective and
efficient architecture for unpaired Image-to-Image Translation with
Transformers (ITTR). It has two main designs: 1) hybrid perception block (HPB)
for token mixing from different receptive fields to utilize global semantics;
2) dual pruned self-attention (DPSA) to sharply reduce the computational
complexity. Our ITTR outperforms the state-of-the-arts for unpaired
image-to-image translation on six benchmark datasets.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 02:46:12 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Zheng",
"Wanfeng",
""
],
[
"Li",
"Qiang",
""
],
[
"Zhang",
"Guoxin",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Wang",
"Zhongyuan",
""
]
] |
new_dataset
| 0.985566 |
2203.16044
|
Satoshi Imamura
|
Satoshi Imamura, Masafumi Yamazaki, Takumi Honda, Akihiko Kasagi,
Akihiro Tabuchi, Hiroshi Nakao, Naoto Fukumoto, Kohta Nakashima
|
mpiQulacs: A Distributed Quantum Computer Simulator for A64FX-based
Cluster Systems
|
This preprint is related to the press release of Fujitsu LTD. in
https://www.fujitsu.com/global/about/resources/news/press-releases/2022/0330-01.html,
11 pages, 12 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum computer simulators running on classical computers are essential for
developing real quantum computers and emerging quantum applications. In
particular, state vector simulators, which store a full state vector in memory
and update it on every quantum operation, can simulate arbitrary quantum
circuits, debug quantum applications, and validate future quantum computers.
However, their time and space complexity grows exponentially with the number of
qubits and easily exceeds the capability of a single machine.
Therefore, we develop a distributed state vector simulator, $mpiQulacs$, that
is optimized for large-scale simulation on A64FX-based cluster systems. A64FX
is an ARM-based CPU that also powers Fugaku, the world's top supercomputer. We
evaluate the weak and strong scaling of mpiQulacs with up to 36 qubits on a new
64-node A64FX-based cluster system named $Todoroki$. By comparing mpiQulacs with
existing distributed state vector simulators, we show that mpiQulacs achieves
the highest performance for large-scale simulation on tens of nodes while
sustaining nearly ideal scalability. Besides, we define a new metric, the
$quantum B/F ratio$, and use it to demonstrate that mpiQulacs running on
Todoroki fits the requirements of distributed state vector simulation better
than the existing simulators running on general-purpose CPU-based or GPU-based
cluster systems.
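The exponential cost that motivates distribution can be seen in a minimal single-node sketch of state vector simulation (generic NumPy, not mpiQulacs's MPI implementation):

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector.

    The state holds 2**n_qubits amplitudes, so memory and time grow
    exponentially with qubit count -- the motivation for partitioning
    the vector across nodes as mpiQulacs does (the MPI partitioning
    itself is not shown here).
    """
    psi = state.reshape([2] * n_qubits)           # one axis per qubit
    psi = np.moveaxis(psi, target, 0)             # bring target axis first
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

# Example: Hadamard on qubit 0 of a 3-qubit |000> state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(2 ** 3, dtype=complex); state[0] = 1.0
state = apply_single_qubit_gate(state, H, target=0, n_qubits=3)
```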
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 04:25:41 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Imamura",
"Satoshi",
""
],
[
"Yamazaki",
"Masafumi",
""
],
[
"Honda",
"Takumi",
""
],
[
"Kasagi",
"Akihiko",
""
],
[
"Tabuchi",
"Akihiro",
""
],
[
"Nakao",
"Hiroshi",
""
],
[
"Fukumoto",
"Naoto",
""
],
[
"Nakashima",
"Kohta",
""
]
] |
new_dataset
| 0.999504 |
2203.16136
|
David Lusseau
|
David Lusseau and Rosie Baillie
|
Disparities in greenspace access during COVID-19 mobility restrictions
|
32 pages, 4 main figures, 13 additional figures, and 10 additional
tables. submitted
| null | null | null |
cs.SI physics.soc-ph q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
More than half of the human population lives in cities, meaning that most
people predominantly experience nature in urban greenspace. Nature exposure is
an important contributor to social, mental and physical health. As the world
faces a pandemic which threatens the physical and mental health of billions of
people, it is crucial to understand whether everyone has the opportunity to
access nature exposure to alleviate some of these challenges. Here, for the
first time, we integrate data from Facebook, Twitter, and Google Search users to
show that people looked for greenspace during COVID-19 mobility restrictions but
may not have always managed to reach it. People spent more time in areas with
greenspace when they could, and this depended on the level of multiple
deprivation in the neighbourhood in which the greenspace was embedded.
Importantly, while people sought greenspace throughout the first 20 months of
the pandemic, this preference intensified through the waves of lockdown. Living
in an affluent area conferred a greenspace advantage in London and Paris, but we
find that in Berlin residents of more deprived neighbourhoods sought greenspace
more, including outside their own neighbourhood. This highlights the need to
understand how greenspace access and deprivation interact to create more
sustainable communities.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 08:23:01 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Lusseau",
"David",
""
],
[
"Baillie",
"Rosie",
""
]
] |
new_dataset
| 0.989127 |
2203.16180
|
Jamie Blanche Ph.D.
|
Jamie Blanche, Shivoh Chirayil Nandakumar, Daniel Mitchell, Sam
Harper, Keir Groves, Andrew West, Barry Lennox, Simon Watson, David Flynn,
Ikuo Yamamoto
|
Millimeter-Wave Sensing for Avoidance of High-Risk Ground Conditions for
Mobile Robots
|
6 pages, 9 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile robot autonomy has made significant advances in recent years, with
navigation algorithms well developed and used commercially in certain
well-defined environments, such as warehouses. The common link in usage
scenarios is that the environments in which the robots are utilized have a high
degree of certainty. Operating environments are often designed to be
robot-friendly; for example, augmented reality markers are strategically placed,
and the ground is typically smooth, level, and clear of debris. For robots to be
useful in a wider range of environments, especially environments that are not
sanitized for their use, robots must be able to handle uncertainty. This
requires a robot to incorporate new sensors and sources of information, and to
be able to use this information to make decisions regarding navigation and the
overall mission. When using autonomous mobile robots in unstructured and poorly
defined environments, such as a natural disaster site or in a rural
environment, ground condition is of critical importance and is a common cause
of failure. Examples include loss of traction due to high levels of ground
water, hidden cavities, or material boundary failures. To evaluate a
non-contact sensing method for mitigating these risks, Frequency Modulated
Continuous Wave (FMCW) radar is integrated with an Unmanned Ground Vehicle
(UGV). This represents a novel application of FMCW radar to detect new
measurands for Robotic Autonomous Systems (RAS) navigation, informing on terrain
integrity and adding to the state of the art in sensing for optimized autonomous
path planning. In this paper, the FMCW radar is first evaluated in a desktop
setting to determine its performance under anticipated ground conditions. The
radar is then fixed to a UGV, and the sensor system is tested and validated in a
representative environment containing regions with significant levels of
groundwater saturation.
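For context, the basic FMCW ranging principle maps a target's range to the beat frequency between the transmitted and received chirps. A minimal sketch, with illustrative rather than paper-specific parameters:

```python
import numpy as np

def fmcw_range(beat_signal, fs, bandwidth, chirp_time, c=3e8):
    """Estimate target range from an FMCW beat signal.

    For a linear chirp sweeping `bandwidth` Hz in `chirp_time` s,
    a stationary target at range R produces a beat frequency
    f_b = 2 * R * bandwidth / (chirp_time * c), so
    R = f_b * chirp_time * c / (2 * bandwidth).
    """
    spectrum = np.abs(np.fft.rfft(beat_signal))
    freqs = np.fft.rfftfreq(len(beat_signal), d=1.0 / fs)
    f_beat = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return f_beat * chirp_time * c / (2 * bandwidth)
```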
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 10:02:11 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Blanche",
"Jamie",
""
],
[
"Nandakumar",
"Shivoh Chirayil",
""
],
[
"Mitchell",
"Daniel",
""
],
[
"Harper",
"Sam",
""
],
[
"Groves",
"Keir",
""
],
[
"West",
"Andrew",
""
],
[
"Lennox",
"Barry",
""
],
[
"Watson",
"Simon",
""
],
[
"Flynn",
"David",
""
],
[
"Yamamoto",
"Ikuo",
""
]
] |
new_dataset
| 0.999578 |
2203.16258
|
Corentin Sautier
|
Corentin Sautier, Gilles Puy, Spyros Gidaris, Alexandre Boulch, Andrei
Bursuc, Renaud Marlet
|
Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data
|
Accepted to CVPR2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segmenting and detecting objects in sparse Lidar point clouds are two
important tasks in autonomous driving that allow a vehicle to act safely in its
3D environment. The best performing methods in 3D semantic segmentation or
object detection rely on a large amount of annotated data. Yet annotating 3D
Lidar data for these tasks is tedious and costly. In this context, we propose a
self-supervised pre-training method for 3D perception models that is tailored
to autonomous driving data. Specifically, we leverage the availability of
synchronized and calibrated image and Lidar sensors in autonomous driving
setups for distilling self-supervised pre-trained image representations into 3D
models. Hence, our method does not require any point cloud or image
annotations. The key ingredient of our method is the use of superpixels to pool
3D point features and 2D pixel features over visually similar regions. We then
train a 3D network on the self-supervised task of matching these pooled point
features with the corresponding pooled image pixel features. The advantages of
contrasting regions obtained from superpixels are that: (1)
grouping together pixels and points of visually coherent regions leads to a
more meaningful contrastive task that produces features well adapted to 3D
semantic segmentation and 3D object detection; (2) all the different regions
have the same weight in the contrastive loss regardless of the number of 3D
points sampled in these regions; (3) it mitigates the noise produced by
incorrect matching of points and pixels due to occlusions between the different
sensors. Extensive experiments on autonomous driving datasets demonstrate the
ability of our image-to-Lidar distillation strategy to produce 3D
representations that transfer well on semantic segmentation and object
detection tasks.
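A minimal sketch of the superpixel-driven objective: pool point and pixel features per superpixel, then contrast matching pools with an InfoNCE loss. Mean pooling and the temperature value are illustrative assumptions, not necessarily the paper's exact choices:

```python
import torch
import torch.nn.functional as F

def superpixel_contrastive_loss(point_feats, pixel_feats,
                                point_sp, pixel_sp, n_sp, tau=0.07):
    """Sketch of a superpixel-pooled contrastive distillation loss.

    point_feats: (P, D) 3D features; pixel_feats: (Q, D) 2D features;
    point_sp / pixel_sp: superpixel index per point / pixel. Features
    are mean-pooled per superpixel, then the i-th 3D pool is matched
    to the i-th 2D pool. Details here are illustrative assumptions.
    """
    def pool(feats, idx):
        out = torch.zeros(n_sp, feats.shape[1], device=feats.device)
        cnt = torch.zeros(n_sp, device=feats.device)
        out.index_add_(0, idx, feats)
        cnt.index_add_(0, idx, torch.ones_like(idx, dtype=feats.dtype))
        return out / cnt.clamp(min=1).unsqueeze(1)

    z3d = F.normalize(pool(point_feats, point_sp), dim=1)
    z2d = F.normalize(pool(pixel_feats, pixel_sp), dim=1)
    logits = z3d @ z2d.t() / tau                     # (n_sp, n_sp)
    targets = torch.arange(n_sp, device=logits.device)
    return F.cross_entropy(logits, targets)
```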
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 12:40:30 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Sautier",
"Corentin",
""
],
[
"Puy",
"Gilles",
""
],
[
"Gidaris",
"Spyros",
""
],
[
"Boulch",
"Alexandre",
""
],
[
"Bursuc",
"Andrei",
""
],
[
"Marlet",
"Renaud",
""
]
] |
new_dataset
| 0.961836 |
2203.16274
|
Benjamin Horne
|
Matthew C. Childs, Cody Buntain, Milo Z. Trujillo, Benjamin D. Horne
|
Characterizing YouTube and BitChute Content and Mobilizers During U.S.
Election Fraud Discussions on Twitter
|
Published and Peer Reviewed at ACM WebSci 2022
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we characterize the cross-platform mobilization of YouTube and
BitChute videos on Twitter during the 2020 U.S. Election fraud discussions.
Specifically, we extend the VoterFraud2020 dataset to describe the prevalence
of content supplied by both platforms, the mobilizers of that content, the
suppliers of that content, and the content itself. We find that while BitChute
videos promoting election fraud claims were linked to and engaged with in the
Twitter discussion, they played a relatively small role compared to YouTube
videos promoting fraud claims. This core finding points to the continued need
for proactive, consistent, and collaborative content moderation solutions
rather than the reactive and inconsistent solutions currently being used.
Additionally, we find that cross-platform disinformation spread from video
platforms was driven not primarily by bot accounts or political elites, but
rather by average Twitter users. This finding supports past work arguing that research on
disinformation should move beyond a focus on bots and trolls to a focus on
participatory disinformation spread.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 13:10:40 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Childs",
"Matthew C.",
""
],
[
"Buntain",
"Cody",
""
],
[
"Trujillo",
"Milo Z.",
""
],
[
"Horne",
"Benjamin D.",
""
]
] |
new_dataset
| 0.978621 |
2203.16416
|
Niharika Thakuria
|
Niharika Thakuria, Reena Elangovan, Sandeep K Thirumala, Anand
Raghunathan, Sumeet K. Gupta
|
STeP-CiM: Strain-enabled Ternary Precision Computation-in-Memory based
on Non-Volatile 2D Piezoelectric Transistors
|
Under review at Frontiers of Nanotechnology
| null | null | null |
cs.ET cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a 2D Piezoelectric FET (PeFET) based compute-enabled non-volatile
memory for ternary deep neural networks (DNNs). PeFETs consist of a material
with ferroelectric and piezoelectric properties coupled with a Transition Metal
Dichalcogenide channel. We utilize (a) ferroelectricity to store binary bits
(0/1) in the form of polarization (-P/+P) and (b) polarization-dependent
piezoelectricity to read the stored state by means of strain-induced bandgap
change in the Transition Metal Dichalcogenide channel. The unique read mechanism
of PeFETs enables us to expand the traditional association of +P (-P) with low
(high) resistance states to their dual high (low) resistance states depending on
the read voltage. Specifically, we demonstrate that +P (-P) stored in PeFETs can
be dynamically configured in (a) a low (high) resistance state for positive read
voltages and (b) the dual high (low) resistance state for negative read
voltages, without causing a read disturb. Such a feature, which we name
Polarization Preserved Piezoelectric Effect Reversal with Dual Voltage Polarity
(PiER), is unique to PeFETs and has not been shown in hitherto explored
memories. We leverage PiER to propose a Strain-enabled Ternary Precision
Computation-in-Memory (STeP-CiM) cell with the capability of computing the
scalar product of the stored weight and the input, both of which are represented
with signed ternary precision. Further, using multi-word-line assertion of
STeP-CiM cells, we achieve massively parallel computation of dot products of
signed ternary inputs and weights. Our array-level analysis shows 91% lower
delay and improvements of 15% and 91% in energy for in-memory
multiply-and-accumulate operations compared to near-memory design approaches
based on SRAM and PeFET, respectively. STeP-CiM exhibits up to 8.91x improvement
in performance and 6.07x average improvement in energy over SRAM/PeFET-based
near-memory designs.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 15:58:00 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Thakuria",
"Niharika",
""
],
[
"Elangovan",
"Reena",
""
],
[
"Thirumala",
"Sandeep K",
""
],
[
"Raghunathan",
"Anand",
""
],
[
"Gupta",
"Sumeet K.",
""
]
] |
new_dataset
| 0.994115 |
2203.16421
|
Hanxiao Jiang
|
Hanxiao Jiang, Yongsen Mao, Manolis Savva, Angel X. Chang
|
OPD: Single-view 3D Openable Part Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the task of predicting which parts of an object can open and how
they move when they do so. The input is a single image of an object, and as
output we detect which parts of the object can open and predict the motion
parameters describing the articulation of each openable part. To tackle this
task, we create two datasets of 3D objects: OPDSynth, based on existing
synthetic objects, and OPDReal, based on RGBD reconstructions of real objects.
We then design OPDRCNN, a neural architecture that detects openable parts and
predicts their motion parameters. Our experiments show that this is a
challenging task, especially when considering generalization across object
categories and the limited amount of information in a single image. Our
architecture outperforms baselines and prior work, especially for RGB image
inputs. A short video summary is available at
https://www.youtube.com/watch?v=P85iCaD0rfc
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 16:02:19 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Jiang",
"Hanxiao",
""
],
[
"Mao",
"Yongsen",
""
],
[
"Savva",
"Manolis",
""
],
[
"Chang",
"Angel X.",
""
]
] |
new_dataset
| 0.999699 |
2203.16531
|
Shengyi Qian
|
Shengyi Qian, Linyi Jin, Chris Rockwell, Siyi Chen, David F. Fouhey
|
Understanding 3D Object Articulation in Internet Videos
|
CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose to investigate detecting and characterizing the 3D planar
articulation of objects from ordinary videos. While seemingly easy for humans,
this problem poses many challenges for computers. We propose to approach this
problem by combining a top-down detection system that finds planes that can be
articulated along with an optimization approach that solves for a 3D plane that
can explain a sequence of observed articulations. We show that this system can
be trained on a combination of videos and 3D scan datasets. When tested on a
dataset of challenging Internet videos and the Charades dataset, our approach
obtains strong performance. Project site:
https://jasonqsy.github.io/Articulation3D
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 17:59:46 GMT"
}
] | 2022-03-31T00:00:00 |
[
[
"Qian",
"Shengyi",
""
],
[
"Jin",
"Linyi",
""
],
[
"Rockwell",
"Chris",
""
],
[
"Chen",
"Siyi",
""
],
[
"Fouhey",
"David F.",
""
]
] |
new_dataset
| 0.985885 |
2011.06922
|
Yoav Shalev
|
Yoav Shalev, Lior Wolf
|
Image Animation with Perturbed Masks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel approach for image animation of a source image by a
driving video, both depicting the same type of object. We do not assume the
existence of pose models, and our method is able to animate arbitrary objects
without knowledge of the object's structure. Furthermore, both the driving
video and the source image are seen only at test time. Our method is based
on a shared mask generator, which separates the foreground object from its
background and captures the object's general pose and shape. To control the
source of the identity of the output frame, we employ perturbations to
disrupt the unwanted identity information in the driver's mask. A
mask-refinement module then replaces the identity of the driver with the
identity of the source. Conditioned on the source image, the transformed mask
is then decoded by a multi-scale generator that renders a realistic image, in
which the content of the source frame is animated by the pose in the driving
video. Due to the lack of fully supervised data, we train on the task of
reconstructing frames from the same video the source image is taken from. Our
method is shown to greatly outperform the state-of-the-art methods on multiple
benchmarks. Our code and samples are available at
https://github.com/itsyoavshalev/Image-Animation-with-Perturbed-Masks.
|
[
{
"version": "v1",
"created": "Fri, 13 Nov 2020 14:17:17 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Nov 2020 19:23:52 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Mar 2022 09:30:26 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Shalev",
"Yoav",
""
],
[
"Wolf",
"Lior",
""
]
] |
new_dataset
| 0.998756 |
2012.08510
|
Bo He
|
Bo He, Xitong Yang, Zuxuan Wu, Hao Chen, Ser-Nam Lim, Abhinav
Shrivastava
|
GTA: Global Temporal Attention for Video Action Understanding
|
Accepted to BMVC 2021
|
BMVC, 2021
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-attention learns pairwise interactions to model long-range dependencies,
yielding great improvements for video action recognition. In this paper, we
seek a deeper understanding of self-attention for temporal modeling in videos.
We first demonstrate that the entangled modeling of spatio-temporal information
by flattening all pixels is sub-optimal, failing to capture temporal
relationships among frames explicitly. To this end, we introduce Global
Temporal Attention (GTA), which performs global temporal attention on top of
spatial attention in a decoupled manner. We apply GTA on both pixels and
semantically similar regions to capture temporal relationships at different
levels of spatial granularity. Unlike conventional self-attention that computes
an instance-specific attention matrix, GTA directly learns a global attention
matrix that is intended to encode temporal structures that generalize across
different samples. We further augment GTA with a cross-channel multi-head
design to exploit channel interactions for better temporal modeling. Extensive
experiments on 2D and 3D networks demonstrate that our approach consistently
enhances temporal modeling and provides state-of-the-art performance on three
video action recognition datasets.
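The key departure from standard self-attention, a learned (T, T) attention matrix shared across samples, can be sketched minimally (multi-head and cross-channel details omitted):

```python
import torch
import torch.nn as nn

class GlobalTemporalAttention(nn.Module):
    """Sketch of globally learned temporal attention.

    Instead of computing attention from queries and keys per sample,
    a (T, T) matrix is a learned parameter shared by all inputs, so it
    can encode temporal structure that generalizes across samples.
    """
    def __init__(self, num_frames):
        super().__init__()
        self.attn = nn.Parameter(torch.eye(num_frames))  # (T, T), learned

    def forward(self, x):              # x: (B, T, C) frame features
        w = self.attn.softmax(dim=-1)  # rows attend over time steps
        return torch.einsum('ts,bsc->btc', w, x)
```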
|
[
{
"version": "v1",
"created": "Tue, 15 Dec 2020 18:58:21 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Apr 2021 18:16:52 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Nov 2021 23:10:10 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"He",
"Bo",
""
],
[
"Yang",
"Xitong",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Chen",
"Hao",
""
],
[
"Lim",
"Ser-Nam",
""
],
[
"Shrivastava",
"Abhinav",
""
]
] |
new_dataset
| 0.999638 |
2102.00499
|
Martin Bullinger
|
Felix Brandt and Martin Bullinger and Patrick Lederer
|
On the Indecisiveness of Kelly-Strategyproof Social Choice Functions
|
Appears in: Proceedings of the 20th International Conference on
Autonomous Agents and Multiagent Systems (AAMAS), 2021
|
Journal of Artificial Intelligence Research, 73:1093-1130 (2022)
| null | null |
cs.GT econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social choice functions (SCFs) map the preferences of a group of agents over
some set of alternatives to a non-empty subset of alternatives. The
Gibbard-Satterthwaite theorem has shown that only extremely restrictive SCFs
are strategyproof when there are more than two alternatives. For set-valued
SCFs, or so-called social choice correspondences, the situation is less clear.
There are miscellaneous - mostly negative - results using a variety of
strategyproofness notions and additional requirements. The simple and intuitive
notion of Kelly-strategyproofness has turned out to be particularly compelling
because it is weak enough to still allow for positive results. For example, the
Pareto rule is strategyproof even when preferences are weak, and a number of
attractive SCFs (such as the top cycle, the uncovered set, and the essential
set) are strategyproof for strict preferences. In this paper, we show that, for
weak preferences, only indecisive SCFs can satisfy strategyproofness. In
particular, (i) every strategyproof rank-based SCF violates Pareto-optimality,
(ii) every strategyproof support-based SCF (which generalize Fishburn's C2
SCFs) that satisfies Pareto-optimality returns at least one most preferred
alternative of every voter, and (iii) every strategyproof non-imposing SCF
returns the Condorcet loser in at least one profile. We also discuss the
consequences of these results for randomized social choice.
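For concreteness, the Pareto rule returns every alternative that no other alternative Pareto-dominates. A small sketch with weak preferences encoded as rank dictionaries (the encoding is illustrative):

```python
def pareto_optimal(profile, alternatives):
    """Return the alternatives that no other alternative Pareto-dominates.

    profile: one dict per voter mapping alternative -> rank, where lower
    is better and ties encode weak preferences. y dominates x if every
    voter ranks y at least as well as x and some voter ranks y strictly
    better.
    """
    def dominates(y, x):
        return (all(r[y] <= r[x] for r in profile)
                and any(r[y] < r[x] for r in profile))
    return [x for x in alternatives
            if not any(dominates(y, x) for y in alternatives if y != x)]

# Two voters with opposed preferences over a and b; c is unanimously
# worst, hence Pareto-dominated and excluded.
profile = [{"a": 1, "b": 2, "c": 3}, {"a": 2, "b": 1, "c": 3}]
print(pareto_optimal(profile, ["a", "b", "c"]))  # ['a', 'b']
```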
|
[
{
"version": "v1",
"created": "Sun, 31 Jan 2021 17:41:41 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 12:50:15 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Brandt",
"Felix",
""
],
[
"Bullinger",
"Martin",
""
],
[
"Lederer",
"Patrick",
""
]
] |
new_dataset
| 0.97317 |
2104.07611
|
Michelle Yuan
|
Michelle Yuan, Patrick Xia, Chandler May, Benjamin Van Durme, Jordan
Boyd-Graber
|
Adapting Coreference Resolution Models through Active Learning
|
Accepted at ACL 2022 Main Conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Neural coreference resolution models trained on one dataset may not transfer
to new, low-resource domains. Active learning mitigates this problem by
sampling a small subset of data for annotators to label. While active learning
is well-defined for classification tasks, its application to coreference
resolution is neither well-defined nor fully understood. This paper explores
how to actively label coreference, examining sources of model uncertainty and
document reading costs. We compare uncertainty sampling strategies and their
advantages through thorough error analysis. In both synthetic and human
experiments, labeling spans within the same document is more effective than
annotating spans across documents. The findings contribute to a more realistic
development of coreference resolution models.
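The generic core of uncertainty sampling, selecting the spans the model is least sure about, can be sketched as a simple entropy criterion (the paper's scoring and reading-cost modeling are richer than this):

```python
import numpy as np

def select_uncertain_spans(span_probs, budget):
    """Rank candidate spans by predictive entropy and take the top ones.

    span_probs: (N, K) model probabilities over K antecedent choices for
    each of N candidate spans. Document reading costs (e.g. preferring
    spans from the same document) are left out of this sketch.
    """
    p = np.clip(span_probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)   # uncertainty per span
    return np.argsort(-entropy)[:budget]     # indices to send to annotators
```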
|
[
{
"version": "v1",
"created": "Thu, 15 Apr 2021 17:21:51 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 02:19:54 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Yuan",
"Michelle",
""
],
[
"Xia",
"Patrick",
""
],
[
"May",
"Chandler",
""
],
[
"Van Durme",
"Benjamin",
""
],
[
"Boyd-Graber",
"Jordan",
""
]
] |
new_dataset
| 0.976525 |
2104.11348
|
Miguel Del Rio Fernandez
|
Miguel Del Rio, Natalie Delworth, Ryan Westerman, Michelle Huang,
Nishchal Bhandari, Joseph Palakapilly, Quinten McNamara, Joshua Dong, Piotr
Zelasko, Miguel Jette
|
Earnings-21: A Practical Benchmark for ASR in the Wild
|
Accepted to INTERSPEECH 2021. June 15 2021: Addressing the comments
of reviewers and updating the results of our internal ESPNet model. The
results do not change our conclusions. April 28th, 2021: We found and
resolved an issue in our experimental evaluation that scored the LibriSpeech
model at ~20% worse relative WER than the actual WER. The updated results do
not affect our conclusions
| null |
10.21437/Interspeech.2021-1915
| null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Commonly used speech corpora inadequately challenge academic and commercial
ASR systems. In particular, speech corpora lack metadata needed for detailed
analysis and WER measurement. In response, we present Earnings-21, a 39-hour
corpus of earnings calls containing entity-dense speech from nine different
financial sectors. This corpus is intended to benchmark ASR systems in the wild
with special attention towards named entity recognition. We benchmark four
commercial ASR models, two internal models built with open-source tools, and an
open-source LibriSpeech model and discuss their differences in performance on
Earnings-21. Using our recently released fstalign tool, we provide a candid
analysis of each model's recognition capabilities under different partitions.
Our analysis finds that ASR accuracy for certain NER categories is poor,
presenting a significant impediment to transcript comprehension and usage.
Earnings-21 bridges academic and commercial ASR system evaluation and enables
further research on entity modeling and WER on real-world audio.
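For reference, WER is the word-level edit distance normalized by reference length; a minimal self-contained implementation:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein dynamic program over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

assert word_error_rate("the cat sat", "the cat sat") == 0.0
```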
|
[
{
"version": "v1",
"created": "Thu, 22 Apr 2021 23:04:28 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Apr 2021 15:43:46 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Jun 2021 02:32:23 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Del Rio",
"Miguel",
""
],
[
"Delworth",
"Natalie",
""
],
[
"Westerman",
"Ryan",
""
],
[
"Huang",
"Michelle",
""
],
[
"Bhandari",
"Nishchal",
""
],
[
"Palakapilly",
"Joseph",
""
],
[
"McNamara",
"Quinten",
""
],
[
"Dong",
"Joshua",
""
],
[
"Zelasko",
"Piotr",
""
],
[
"Jette",
"Miguel",
""
]
] |
new_dataset
| 0.999605 |
2104.11934
|
Jun Chen
|
Jun Chen, Aniket Agarwal, Sherif Abdelkarim, Deyao Zhu, Mohamed
Elhoseiny
|
RelTransformer: A Transformer-Based Long-Tail Visual Relationship
Recognition
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The visual relationship recognition (VRR) task aims at understanding the
pairwise visual relationships between interacting objects in an image. These
relationships typically have a long-tail distribution due to their
compositional nature. This problem gets more severe when the vocabulary becomes
large, rendering this task very challenging. This paper shows that modeling an
effective message-passing flow through an attention mechanism can be critical
to tackling the compositionality and long-tail challenges in VRR. The method,
called RelTransformer, represents each image as a fully-connected scene graph
and restructures the whole scene into the relation-triplet and global-scene
contexts. It directly passes the message from each element in the
relation-triplet and global-scene contexts to the target relation via
self-attention. We also design a learnable memory to augment the long-tail
relation representation learning. Through extensive experiments, we find that
our model generalizes well on many VRR benchmarks. Our model outperforms the
best-performing models on two large-scale long-tail VRR benchmarks, VG8K-LT
(+2.0% overall acc) and GQA-LT (+26.0% overall acc), both having a highly
skewed distribution towards the tail. It also achieves strong results on the
VG200 relation detection task. Our code is available at
https://github.com/Vision-CAIR/RelTransformer.
|
[
{
"version": "v1",
"created": "Sat, 24 Apr 2021 12:04:04 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 14:47:44 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Chen",
"Jun",
""
],
[
"Agarwal",
"Aniket",
""
],
[
"Abdelkarim",
"Sherif",
""
],
[
"Zhu",
"Deyao",
""
],
[
"Elhoseiny",
"Mohamed",
""
]
] |
new_dataset
| 0.996755 |
2110.05064
|
Nicholas Gao
|
Nicholas Gao, Stephan G\"unnemann
|
Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave
Functions
|
Published as a conference paper at ICLR 2022
| null | null | null |
cs.LG physics.chem-ph physics.comp-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Solving the Schr\"odinger equation is key to many quantum mechanical
properties. However, an analytical solution is only tractable for
single-electron systems. Recently, neural networks succeeded at modeling wave
functions of many-electron systems. Together with the variational Monte-Carlo
(VMC) framework, this led to solutions on par with the best known classical
methods. Still, these neural methods require tremendous amounts of
computational resources as one has to train a separate model for each molecular
geometry. In this work, we combine a Graph Neural Network (GNN) with a neural
wave function to simultaneously solve the Schr\"odinger equation for multiple
geometries via VMC. This enables us to model continuous subsets of the
potential energy surface with a single training pass. Compared to existing
state-of-the-art networks, our Potential Energy Surface Network PESNet speeds
up training for multiple geometries by up to 40 times while matching or
surpassing their accuracy. This may open the path to accurate and orders of
magnitude cheaper quantum mechanical calculations.
|
[
{
"version": "v1",
"created": "Mon, 11 Oct 2021 07:58:31 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Nov 2021 08:28:58 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Mar 2022 07:21:24 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Gao",
"Nicholas",
""
],
[
"Günnemann",
"Stephan",
""
]
] |
new_dataset
| 0.976094 |
2110.07018
|
Yuxiang Peng
|
Yuxiang Peng, Mingsheng Ying, Xiaodi Wu
|
Algebraic Reasoning of Quantum Programs via Non-idempotent Kleene
Algebra
|
extended version, 23 pages, 6 figures, to appear at the 43rd ACM
SIGPLAN PLDI 2022
| null |
10.1145/3519939.3523713
| null |
cs.PL quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the algebraic reasoning of quantum programs inspired by the
success of classical program analysis based on Kleene algebra. One prominent
example is the famous Kleene Algebra with Tests (KAT), which has
furnished both theoretical insights and practical tools. The succinctness of
algebraic reasoning would be especially desirable for scalable analysis of
quantum programs, given the involvement of exponential-size matrices in most of
the existing methods. A few key features of KAT including the idempotent law
and the nice properties of classical tests, however, fail to hold in the
context of quantum programs due to their unique quantum features, especially in
branching. We propose Non-idempotent Kleene Algebra (NKA) as a natural
alternative and identify complete and sound semantic models for NKA as well as
their quantum interpretations. In light of applications of KAT, we demonstrate
algebraic proofs in NKA of quantum compiler optimization and the normal form of
quantum while-programs. Moreover, we extend NKA with Tests (i.e., NKAT), where
tests model quantum predicates following effect algebra, and illustrate how to
encode propositional quantum Hoare logic as NKAT theorems.
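As a simplified contrast (see the paper for the actual NKA axiomatization), the idempotent law that holds in Kleene algebra but is dropped in NKA can be written down:

```latex
% In a Kleene algebra, nondeterministic choice is idempotent,
%   p + p = p,
% and classical tests b satisfy b . b = b. For quantum programs,
% branching arises from measurement and carries side effects, so
% (informally) combining a program with itself need not collapse:
\[
  \underbrace{p + p = p}_{\text{Kleene algebra / KAT}}
  \qquad\text{vs.}\qquad
  \underbrace{p + p \neq p \ \text{(in general)}}_{\text{NKA}}
\]
```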
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 20:27:01 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 03:27:57 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Peng",
"Yuxiang",
""
],
[
"Ying",
"Mingsheng",
""
],
[
"Wu",
"Xiaodi",
""
]
] |
new_dataset
| 0.965952 |
2110.12509
|
Manuel Schultheiss
|
Manuel Schultheiss, Philipp Schmette, Thorsten Sellerer, Rafael
Schick, Kirsten Taphorn, Korbinian Mechlem, Lorenz Birnbacher, Bernhard
Renger, Marcus R. Makowski, Franz Pfeiffer, Daniela Pfeiffer
|
Per-Pixel Lung Thickness and Lung Capacity Estimation on Chest X-Rays
using Convolutional Neural Networks
|
v4: fixed simulation bug, improved text, various other improvements
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating the lung depth on x-ray images could both provide an accurate
opportunistic lung volume estimate during clinical routine and improve image
contrast in modern structural chest imaging techniques like x-ray dark-field
imaging. We present a method based on a convolutional neural network that
allows a per-pixel lung thickness estimation and subsequent total lung capacity
estimation. The network was trained and validated using 5250 simulated
radiographs generated from 525 real CT scans. The network was evaluated on a
test set of 131 synthetic radiographs, and a retrospective evaluation was
performed on another test set of 45 standard clinical radiographs. The standard
clinical radiographs were obtained from 45 patients who underwent a CT
examination between July 1, 2021 and September 1, 2021 and a chest x-ray six
months before or after the CT. For the 45 standard clinical radiographs, the
mean absolute error between the estimated lung volume and the ground-truth
volume was 0.75 liters with a positive correlation (r = 0.78). When accounting
for the patient diameter, the error decreased to 0.69 liters with a positive
correlation (r = 0.83). Additionally, we predicted the lung thicknesses on the
synthetic test set, where the mean absolute error between the total volumes was
0.19 liters with a positive correlation (r = 0.99). The results show that the
creation of lung thickness maps and the estimation of approximate total lung
volume are possible from standard clinical radiographs.
|
[
{
"version": "v1",
"created": "Sun, 24 Oct 2021 19:09:28 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Oct 2021 09:02:30 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jan 2022 13:56:17 GMT"
},
{
"version": "v4",
"created": "Tue, 29 Mar 2022 15:17:40 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Schultheiss",
"Manuel",
""
],
[
"Schmette",
"Philipp",
""
],
[
"Sellerer",
"Thorsten",
""
],
[
"Schick",
"Rafael",
""
],
[
"Taphorn",
"Kirsten",
""
],
[
"Mechlem",
"Korbinian",
""
],
[
"Birnbacher",
"Lorenz",
""
],
[
"Renger",
"Bernhard",
""
],
[
"Makowski",
"Marcus R.",
""
],
[
"Pfeiffer",
"Franz",
""
],
[
"Pfeiffer",
"Daniela",
""
]
] |
new_dataset
| 0.96466 |
2110.14566
|
Rahmad Mahendra
|
Rahmad Mahendra, Alham Fikri Aji, Samuel Louvan, Fahrurrozi Rahman,
and Clara Vania
|
IndoNLI: A Natural Language Inference Dataset for Indonesian
|
Accepted at EMNLP 2021 main conference
|
https://aclanthology.org/2021.emnlp-main.821/
|
10.18653/v1/2021.emnlp-main.821
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present IndoNLI, the first human-elicited NLI dataset for Indonesian. We
adapt the data collection protocol for MNLI and collect nearly 18K sentence
pairs annotated by crowd workers and experts. The expert-annotated data is used
exclusively as a test set. It is designed to provide a challenging test-bed for
Indonesian NLI by explicitly incorporating various linguistic phenomena such as
numerical reasoning, structural changes, idioms, or temporal and spatial
reasoning. Experiment results show that XLM-R outperforms other pre-trained
models in our data. The best performance on the expert-annotated data is still
far below human performance (13.4% accuracy gap), suggesting that this test set
is especially challenging. Furthermore, our analysis shows that our
expert-annotated data is more diverse and contains fewer annotation artifacts
than the crowd-annotated data. We hope this dataset can help accelerate
progress in Indonesian NLP research.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 16:37:13 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Mahendra",
"Rahmad",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Louvan",
"Samuel",
""
],
[
"Rahman",
"Fahrurrozi",
""
],
[
"Vania",
"Clara",
""
]
] |
new_dataset
| 0.999818 |
2111.04414
|
Tyler Menezes
|
Tyler Menezes, Alex Parra and Mingjie Jiang
|
Open-Source Internships With Industry Mentors
|
Will appear in Proceedings of the 27th ACM Conference on Innovation
and Technology in Computer Science Education
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internships help students connect what they have learned in the classroom to
the real world, and students with access to internships are more likely to
graduate and secure employment. However, many students are unable to find an
internship by the time they graduate. This experience report describes a
program where volunteer software engineers mentor students as they work on
open-source projects in the summer, offered as an alternative to a traditional
internship experience. We catalog the considerations involved in providing an
experience similar to a traditional internship, describe our program's design,
and provide two years' worth of participant evaluations and career outcomes as
a measure of efficacy. The program served mostly undergraduates from non-R1
schools who are underrepresented in technology, and achieved similar
educational outcomes to a traditional internship program. Most promisingly,
mentors were willing to serve as a professional reference for 80% of students
and the number of graduating seniors who secured full-time employment in
technology was 7 points higher than average (despite occurring during the
COVID-19 pandemic).
|
[
{
"version": "v1",
"created": "Wed, 3 Nov 2021 20:05:50 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 22:19:49 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Menezes",
"Tyler",
""
],
[
"Parra",
"Alex",
""
],
[
"Jiang",
"Mingjie",
""
]
] |
new_dataset
| 0.988172 |
2111.14562
|
Hyunmin Lee
|
Hyunmin Lee and Jaesik Park
|
Instance-wise Occlusion and Depth Orders in Natural Scenes
|
Accepted to CVPR 2022. Code is available at
https://github.com/POSTECH-CVLab/InstaOrder
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we introduce a new dataset, named InstaOrder, that can be used
to understand the geometrical relationships of instances in an image. The
dataset consists of 2.9M annotations of geometric orderings for class-labeled
instances in 101K natural scenes. The scenes were annotated by 3,659
crowd-workers regarding (1) occlusion order that identifies occluder/occludee
and (2) depth order that describes ordinal relations that consider relative
distance from the camera. The dataset provides joint annotation of two kinds of
orderings for the same instances, and we discover that the occlusion order and
depth order are complementary. We also introduce a geometric order prediction
network called InstaOrderNet, which is superior to state-of-the-art approaches.
Moreover, we propose a dense depth prediction network called InstaDepthNet that
uses auxiliary geometric order loss to boost the accuracy of the
state-of-the-art depth prediction approach, MiDaS [56].
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 14:45:07 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Nov 2021 07:55:21 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Mar 2022 10:30:00 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Lee",
"Hyunmin",
""
],
[
"Park",
"Jaesik",
""
]
] |
new_dataset
| 0.969978 |
2111.14755
|
Menghe Zhang
|
Menghe Zhang, Jurgen Schulze, and Dong Zhang
|
FaceAtlasAR: Atlas of Facial Acupuncture Points in Augmented Reality
| null |
Computer Science & Information Technology 2021
| null | null |
cs.GR cs.CV cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Acupuncture is a technique in which practitioners stimulate specific points
on the body. Those points, called acupuncture points (or acupoints),
anatomically define areas on the skin relative to specific landmarks on the
body. However, mapping the acupoints to individuals could be challenging for
inexperienced acupuncturists. In this project, we proposed a system to localize
and visualize facial acupoints for individuals in an augmented reality (AR)
context. This system combines a face alignment model and a hair segmentation
model to provide dense reference points for acupoint localization in real time
(60 FPS). The localization process uses the proportional bone (B-cun or
skeletal) measurement method, which is commonly employed by specialists;
however, in real practice, operators sometimes find it inaccurate due to
skill-related errors. With this system, users, even without any training, can
locate the facial acupoints as part of a self-training or self-treatment
process.
|
[
{
"version": "v1",
"created": "Mon, 29 Nov 2021 18:00:25 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 05:43:25 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Zhang",
"Menghe",
""
],
[
"Schulze",
"Jurgen",
""
],
[
"Zhang",
"Dong",
""
]
] |
new_dataset
| 0.999273 |
2112.02779
|
Kwonyoung Ryu
|
Wei Dong, Kwonyoung Ryu, Michael Kaess, Jaesik Park
|
Revisiting LiDAR Registration and Reconstruction: A Range Image
Perspective
|
14 pages, 9 figures. This paper is under review
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spinning LiDAR data are prevalent for 3D vision tasks. Since LiDAR data is
presented in the form of point clouds, expensive 3D operations are usually
required. This paper revisits spinning LiDAR scan formation and presents a
cylindrical range image representation with a ray-wise projection/unprojection
model. It is built upon raw scans and supports lossless conversion from 2D to
3D, allowing fast 2D operations, including 2D index-based neighbor search and
downsampling. We then propose, to the best of our knowledge, the first
multi-scale registration and dense signed distance function (SDF)
reconstruction system for LiDAR range images. We further collect a dataset of
indoor and outdoor LiDAR scenes in the posed range image format. A
comprehensive evaluation of registration and reconstruction is conducted on the
proposed dataset and the KITTI dataset. Experiments demonstrate that our
approach outperforms surface reconstruction baselines and achieves similar
performance to state-of-the-art LiDAR registration methods, including a modern
learning-based registration approach. Thanks to this simplicity, our
registration runs at 100 Hz and SDF reconstruction runs in real time. The
dataset and a modularized C++/Python toolbox will be released.
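The ray-wise cylindrical projection underlying the representation can be sketched by mapping each point to a (ring, azimuth-bin) cell; the even azimuth spacing and one-return-per-cell assumption below are idealizations of real scan patterns:

```python
import numpy as np

def points_to_range_image(points, ring, n_rings=64, width=2048):
    """Project spinning-LiDAR points into a cylindrical range image.

    points: (N, 3) xyz; ring: (N,) integer laser ring per point.
    Rows come from the ring index, columns from azimuth; storing the
    range per cell allows lossless 2D<->3D round trips when each cell
    holds at most one return (an idealization of real scan patterns).
    """
    x, y = points[:, 0], points[:, 1]
    azimuth = np.arctan2(y, x)                        # [-pi, pi)
    col = ((azimuth + np.pi) / (2 * np.pi) * width).astype(int) % width
    rng = np.linalg.norm(points, axis=1)
    image = np.zeros((n_rings, width), dtype=np.float32)
    image[ring, col] = rng
    return image
```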
|
[
{
"version": "v1",
"created": "Mon, 6 Dec 2021 04:28:32 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 22:38:28 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Dong",
"Wei",
""
],
[
"Ryu",
"Kwonyoung",
""
],
[
"Kaess",
"Michael",
""
],
[
"Park",
"Jaesik",
""
]
] |
new_dataset
| 0.951192 |
2112.03902
|
Rui Dai
|
Rui Dai, Srijan Das, Kumara Kahatapitiya, Michael S. Ryoo, Francois
Bremond
|
MS-TCT: Multi-Scale Temporal ConvTransformer for Action Detection
|
Accepted in CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action detection is an essential and challenging task, especially for densely
labelled datasets of untrimmed videos. Temporal relations in such datasets are
complex, including challenges like composite actions and co-occurring actions.
For detecting actions in these complex videos, efficiently capturing both
short-term and long-term temporal information in the video is critical. To
this end, we propose a novel ConvTransformer network for action detection. This
network comprises three main components: (1) a Temporal Encoder module that
extensively explores global and local temporal relations at multiple temporal
resolutions; (2) a Temporal Scale Mixer module that effectively fuses the
multi-scale features into a unified feature representation; and (3) a
Classification module that learns the instance center-relative position and
predicts the frame-level classification scores. The extensive experiments on
multiple datasets, including Charades, TSU and MultiTHUMOS, confirm the
effectiveness of our proposed method. Our network outperforms the
state-of-the-art methods on all three datasets.
|
[
{
"version": "v1",
"created": "Tue, 7 Dec 2021 18:57:37 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 13:02:47 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Dai",
"Rui",
""
],
[
"Das",
"Srijan",
""
],
[
"Kahatapitiya",
"Kumara",
""
],
[
"Ryoo",
"Michael S.",
""
],
[
"Bremond",
"Francois",
""
]
] |
new_dataset
| 0.999518 |
2112.10703
|
Haithem Turki
|
Haithem Turki, Deva Ramanan, Mahadev Satyanarayanan
|
Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual
Fly-Throughs
|
CVPR 2022 Project page: https://meganerf.cmusatyalab.org GitHub:
https://github.com/cmusatyalab/mega-nerf
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use neural radiance fields (NeRFs) to build interactive 3D environments
from large-scale visual captures spanning buildings or even multiple city
blocks collected primarily from drones. In contrast to single object scenes (on
which NeRFs are traditionally evaluated), our scale poses multiple challenges
including (1) the need to model thousands of images with varying lighting
conditions, each of which captures only a small subset of the scene, (2)
prohibitively large model capacities that make it infeasible to train on a
single GPU, and (3) significant challenges for fast rendering that would enable
interactive fly-throughs.
To address these challenges, we begin by analyzing visibility statistics for
large-scale scenes, motivating a sparse network structure where parameters are
specialized to different regions of the scene. We introduce a simple geometric
clustering algorithm for data parallelism that partitions training images (or
rather pixels) into different NeRF submodules that can be trained in parallel.
We evaluate our approach on existing datasets (Quad 6k and UrbanScene3D) as
well as against our own drone footage, improving training speed by 3x and PSNR
by 12%. We also evaluate recent NeRF fast renderers on top of Mega-NeRF and
introduce a novel method that exploits temporal coherence. Our technique
achieves a 40x speedup over conventional NeRF rendering while remaining within
0.8 dB in PSNR quality, exceeding the fidelity of existing fast renderers.
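A much-simplified sketch of the data-parallel idea: route each training ray to a spatial submodule. Mega-NeRF's actual partition uses visibility statistics and may assign pixels to multiple submodules; nearest-centroid assignment here is an assumption:

```python
import numpy as np

def assign_rays_to_submodules(ray_origins, centroids):
    """Route each training ray/pixel to one spatial NeRF submodule.

    ray_origins: (N, 3); centroids: (M, 3) cell centers. Assigning by
    nearest centroid lets the M submodules be trained in parallel on
    disjoint pixel sets; it is a simplification of the visibility-based
    partition described in the abstract.
    """
    d = np.linalg.norm(ray_origins[:, None] - centroids[None], axis=-1)
    return d.argmin(axis=1)          # (N,) submodule index per ray
```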
|
[
{
"version": "v1",
"created": "Mon, 20 Dec 2021 17:40:48 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 22:21:38 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Turki",
"Haithem",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Satyanarayanan",
"Mahadev",
""
]
] |
new_dataset
| 0.994547 |
2112.12785
|
Tony Ng
|
Tony Ng, Hyo Jin Kim, Vincent Lee, Daniel DeTone, Tsun-Yi Yang,
Tianwei Shen, Eddy Ilg, Vassileios Balntas, Krystian Mikolajczyk, Chris
Sweeney
|
NinjaDesc: Content-Concealing Visual Descriptors via Adversarial
Learning
|
Accepted at CVPR 2022. Supplementary material included after
references. 15 pages, 14 figures, 6 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In light of recent analyses on privacy-concerning scene revelation from
visual descriptors, we develop descriptors that conceal the input image
content. In particular, we propose an adversarial learning framework for
training visual descriptors that prevent image reconstruction, while
maintaining the matching accuracy. We let a feature encoding network and image
reconstruction network compete with each other, such that the feature encoder
tries to impede the image reconstruction with its generated descriptors, while
the reconstructor tries to recover the input image from the descriptors. The
experimental results demonstrate that the visual descriptors obtained with our
method significantly deteriorate the image reconstruction quality with minimal
impact on correspondence matching and camera localization performance.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 18:58:58 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 16:06:02 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Ng",
"Tony",
""
],
[
"Kim",
"Hyo Jin",
""
],
[
"Lee",
"Vincent",
""
],
[
"DeTone",
"Daniel",
""
],
[
"Yang",
"Tsun-Yi",
""
],
[
"Shen",
"Tianwei",
""
],
[
"Ilg",
"Eddy",
""
],
[
"Balntas",
"Vassileios",
""
],
[
"Mikolajczyk",
"Krystian",
""
],
[
"Sweeney",
"Chris",
""
]
] |
new_dataset
| 0.998905 |
2202.01279
|
Stephen Bach
|
Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin
Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault
Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik
Ben-David, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Alan Fries, Maged S.
Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang,
Dragomir Radev, Mike Tian-Jian Jiang, Alexander M. Rush
|
PromptSource: An Integrated Development Environment and Repository for
Natural Language Prompts
|
ACL 2022 Demo
| null | null | null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
PromptSource is a system for creating, sharing, and using natural language
prompts. Prompts are functions that map an example from a dataset to a natural
language input and target output. Using prompts to train and query language
models is an emerging area in NLP that requires new tools that let users
develop and refine these prompts collaboratively. PromptSource addresses the
emergent challenges in this new setting with (1) a templating language for
defining data-linked prompts, (2) an interface that lets users quickly iterate
on prompt development by observing outputs of their prompts on many examples,
and (3) a community-driven set of guidelines for contributing new prompts to a
common pool. Over 2,000 prompts for roughly 170 datasets are already available
in PromptSource. PromptSource is available at
https://github.com/bigscience-workshop/promptsource.
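A prompt in this sense is simply a function from a dataset example to an (input, target) text pair. A minimal illustration in the spirit of PromptSource's Jinja templates, where "|||" separates input from target (this particular template and its fields are made up for illustration):

```python
from jinja2 import Template

# A data-linked prompt: maps a dataset example to input/target text.
# The "|||" separator mirrors PromptSource's convention; the template
# itself is a hypothetical NLI-style example.
prompt = Template(
    "Premise: {{ premise }}\n"
    "Hypothesis: {{ hypothesis }}\n"
    "Does the premise entail the hypothesis? ||| {{ label }}"
)

example = {"premise": "A dog runs in the park.",
           "hypothesis": "An animal is outside.",
           "label": "yes"}
input_text, target_text = prompt.render(**example).split("|||")
print(input_text.strip(), "->", target_text.strip())
```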
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 20:48:54 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2022 14:54:01 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Mar 2022 16:37:47 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Bach",
"Stephen H.",
""
],
[
"Sanh",
"Victor",
""
],
[
"Yong",
"Zheng-Xin",
""
],
[
"Webson",
"Albert",
""
],
[
"Raffel",
"Colin",
""
],
[
"Nayak",
"Nihal V.",
""
],
[
"Sharma",
"Abheesht",
""
],
[
"Kim",
"Taewoon",
""
],
[
"Bari",
"M Saiful",
""
],
[
"Fevry",
"Thibault",
""
],
[
"Alyafeai",
"Zaid",
""
],
[
"Dey",
"Manan",
""
],
[
"Santilli",
"Andrea",
""
],
[
"Sun",
"Zhiqing",
""
],
[
"Ben-David",
"Srulik",
""
],
[
"Xu",
"Canwen",
""
],
[
"Chhablani",
"Gunjan",
""
],
[
"Wang",
"Han",
""
],
[
"Fries",
"Jason Alan",
""
],
[
"Al-shaibani",
"Maged S.",
""
],
[
"Sharma",
"Shanya",
""
],
[
"Thakker",
"Urmish",
""
],
[
"Almubarak",
"Khalid",
""
],
[
"Tang",
"Xiangru",
""
],
[
"Radev",
"Dragomir",
""
],
[
"Jiang",
"Mike Tian-Jian",
""
],
[
"Rush",
"Alexander M.",
""
]
] |
new_dataset
| 0.999899 |
2202.06546
|
Stefan Milius
|
Florian Frank and Stefan Milius and Henning Urbat
|
Coalgebraic Semantics for Nominal Automata
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper provides a coalgebraic approach to the language semantics of two
types of non-deterministic automata over nominal sets: non-deterministic
orbit-finite automata (NOFAs) and regular nominal non-deterministic automata
(RNNAs), which were introduced in previous work. While NOFAs are a
straightforward nominal version of non-deterministic automata, RNNAs feature
ordinary as well as name binding transitions. Correspondingly, words accepted
by RNNAs are strings formed by ordinary letters and name binding letters. Bar
languages are sets of such words modulo $\alpha$-equivalence, and to every
state of an RNNA one associates its accepted bar language. We show that the
semantics of NOFAs and RNNAs, respectively, arise both as an instance of the
Kleisli-style coalgebraic trace semantics as well as an instance of the
coalgebraic language semantics obtained via generalized determinization. Along
the way, we revisit coalgebraic trace semantics in general and give a new compact
proof for the main result in that theory stating that an initial algebra for a
functor yields the terminal coalgebra for the Kleisli extension of the functor.
Our proof requires fewer assumptions on the functor than all previous ones.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 08:36:34 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 16:26:32 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Frank",
"Florian",
""
],
[
"Milius",
"Stefan",
""
],
[
"Urbat",
"Henning",
""
]
] |
new_dataset
| 0.991019 |
2202.11426
|
Freddie Hong
|
Freddie Hong, Steve Hodges, Connor Myant, David Boyle
|
Open5x: Accessible 5-axis 3D printing and conformal slicing
|
6 pages, 7 Figures, Extended Abstracts of the 2022 CHI Conference on
Human Factors in Computing Systems
| null |
10.1145/3491101.3519782
| null |
cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The common layer-by-layer deposition of regular, 3-axis 3D printing
simplifies both the fabrication process and the 3D printer's mechanical design.
However, the resulting 3D printed objects have some unfavourable
characteristics, including visible layers, uneven structural strength, and the
need for support material. To overcome these, researchers have employed robotic arms and
multi-axis CNCs to deposit materials in conformal layers. Conformal deposition
improves the quality of the 3D printed parts through support-less printing and
curved layer deposition. However, such multi-axis 3D printing is inaccessible
to many individuals due to high costs and technical complexities. Furthermore,
the limited GUI support for conformal slicers creates an additional barrier for
users. To open multi-axis 3D printing up to more makers and researchers, we
present a cheap and accessible way to upgrade a regular 3D printer to 5 axes.
We have also developed a GUI-based conformal slicer, integrated within a
popular CAD package. Together, these deliver an accessible workflow for
designing, simulating and creating conformally-printed 3D models.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 11:14:24 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 16:06:20 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Hong",
"Freddie",
""
],
[
"Hodges",
"Steve",
""
],
[
"Myant",
"Connor",
""
],
[
"Boyle",
"David",
""
]
] |
new_dataset
| 0.999761 |
2203.09658
|
Yaroslav Golubev
|
Anna Vlasova, Maria Tigina, Ilya Vlasov, Anastasiia Birillo, Yaroslav
Golubev, Timofey Bryksin
|
Lupa: A Framework for Large Scale Analysis of the Programming Language
Usage
|
5 pages, 2 figures
| null | null | null |
cs.PL cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present Lupa, a framework for large-scale analysis of
programming language usage. Lupa is a command-line tool that uses the power of
the IntelliJ Platform under the hood, which gives it access to powerful static
analysis tools used in modern IDEs. The tool supports custom analyzers that
process the rich concrete syntax tree of the code and can calculate its various
features: the presence of entities, their dependencies, definition-usage
chains, etc. Currently, Lupa supports analyzing Python and Kotlin, but can be
extended to other languages supported by IntelliJ-based IDEs. We explain the
internals of the tool, show how it can be extended and customized, and describe
an example analysis that we carried out with its help: analyzing the syntax of
ranges in Kotlin.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 23:46:49 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 23:18:18 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Vlasova",
"Anna",
""
],
[
"Tigina",
"Maria",
""
],
[
"Vlasov",
"Ilya",
""
],
[
"Birillo",
"Anastasiia",
""
],
[
"Golubev",
"Yaroslav",
""
],
[
"Bryksin",
"Timofey",
""
]
] |
new_dataset
| 0.995697 |
2203.12082
|
Pan Ji
|
Jiachen Liu, Pan Ji, Nitin Bansal, Changjiang Cai, Qingan Yan, Xiaolei
Huang, Yi Xu
|
PlaneMVS: 3D Plane Reconstruction from Multi-View Stereo
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel framework named PlaneMVS for 3D plane reconstruction from
multiple input views with known camera poses. Most previous learning-based
plane reconstruction methods reconstruct 3D planes from single images, which
highly rely on single-view regression and suffer from depth scale ambiguity. In
contrast, we reconstruct 3D planes with a multi-view-stereo (MVS) pipeline that
takes advantage of multi-view geometry. We decouple plane reconstruction into a
semantic plane detection branch and a plane MVS branch. The semantic plane
detection branch is based on a single-view plane detection framework but with
differences. The plane MVS branch adopts a set of slanted plane hypotheses to
replace conventional depth hypotheses, performs a plane-sweeping strategy, and
finally learns pixel-level plane parameters and a planar depth map. We
present how the two branches are learned in a balanced way, and propose a
soft-pooling loss to associate the outputs of the two branches and make them
benefit from each other. Extensive experiments on various indoor datasets show
that PlaneMVS significantly outperforms state-of-the-art (SOTA) single-view
plane reconstruction methods on both plane detection and 3D geometry metrics.
Our method even outperforms a set of SOTA learning-based MVS methods thanks to
the learned plane priors. To the best of our knowledge, this is the first work
on 3D plane reconstruction within an end-to-end MVS framework.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 22:35:46 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 19:05:15 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Liu",
"Jiachen",
""
],
[
"Ji",
"Pan",
""
],
[
"Bansal",
"Nitin",
""
],
[
"Cai",
"Changjiang",
""
],
[
"Yan",
"Qingan",
""
],
[
"Huang",
"Xiaolei",
""
],
[
"Xu",
"Yi",
""
]
] |
new_dataset
| 0.999141 |
2203.12845
|
Didan Deng
|
Didan Deng
|
Multiple Emotion Descriptors Estimation at the ABAW3 Challenge
|
The technical report for our multi-task approach in the ABAW3
Challenge
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
To describe complex emotional states, psychologists have proposed multiple
emotion descriptors: sparse descriptors like facial action units; continuous
descriptors like valence and arousal; and discrete class descriptors like
happiness and anger. According to Ekman and Friesen, 1969, facial action units
are sign vehicles that convey the emotion message, while discrete or continuous
emotion descriptors are the messages perceived and expressed by humans.
  In this paper, we designed an architecture for multiple emotion descriptors
estimation as part of our participation in the ABAW3 Challenge. Based on the
theory of Ekman and Friesen, 1969, we designed distinct architectures to
measure the sign vehicles (i.e., facial action units) and the message (i.e.,
discrete emotions, valence and arousal) given their different properties. The
quantitative experiments on the ABAW3 challenge dataset have shown the
superior performance of our approach over two baseline models.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 04:55:21 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 11:16:04 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Deng",
"Didan",
""
]
] |
new_dataset
| 0.964896 |
2203.14331
|
Tao Zhang
|
Tao Zhang
|
SuperMVS: Non-Uniform Cost Volume For High-Resolution Multi-View Stereo
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most state-of-the-art (SOTA) algorithms use static, uniform sampling methods
with many hypothesis planes to obtain fine depth sampling. In this paper, we
propose a free-moving hypothesis plane method for dynamic and non-uniform
sampling over a wide depth range to build the cost volume, named Non-Uniform
Cost Volume; it not only greatly reduces the number of planes but also yields
finer sampling, both reducing computational cost and improving accuracy. We
present the SuperMVS network to implement Multi-View Stereo with the
Non-Uniform Cost Volume. SuperMVS is a coarse-to-fine framework with four
cascade stages that outputs high-resolution, accurate depth maps. Our SuperMVS
achieves SOTA results with low memory, low runtime, and fewer planes on the
DTU dataset and the Tanks \& Temples dataset.
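
To make the sampling idea concrete, here is a minimal PyTorch sketch that
places per-pixel depth hypotheses non-uniformly around a coarse estimate
instead of uniformly over the full range; the Gaussian-quantile placement rule
and all names are illustrative assumptions, not the paper's exact scheme:

    # Non-uniform depth hypothesis sampling around a coarse depth estimate.
    import torch

    def uniform_hypotheses(d_min, d_max, num_planes):
        return torch.linspace(d_min, d_max, num_planes)   # static baseline

    def nonuniform_hypotheses(coarse_depth, sigma, num_planes):
        """Per-pixel planes, denser near the coarse estimate.
        coarse_depth, sigma: (H, W); returns (num_planes, H, W)."""
        # Quantiles of a normal distribution concentrate samples at the mean.
        q = torch.linspace(0.02, 0.98, num_planes)
        offsets = torch.erfinv(2 * q - 1) * (2.0 ** 0.5)  # N(0,1) quantiles
        planes = coarse_depth[None] + offsets[:, None, None] * sigma[None]
        return planes.clamp(min=1e-3)

    coarse = torch.full((4, 4), 2.0)   # dummy coarse depth map (meters)
    sigma = torch.full((4, 4), 0.3)    # dummy per-pixel uncertainty
    planes = nonuniform_hypotheses(coarse, sigma, num_planes=8)
    print(planes[:, 0, 0])             # dense near 2.0, sparse at the tails

Compared with the uniform baseline, the same plane budget is spent where the
coarse stage says the surface is likely to be.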
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 15:40:06 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 14:19:59 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Zhang",
"Tao",
""
]
] |
new_dataset
| 0.998399 |
2203.14367
|
Jian Zhao
|
Jian Zhao and Hui Zhang
|
Thin-Plate Spline Motion Model for Image Animation
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Image animation brings life to the static object in the source image
according to the driving video. Recent works attempt to perform motion transfer
on arbitrary objects through unsupervised methods without using a priori
knowledge. However, it remains a significant challenge for current unsupervised
methods when there is a large pose gap between the objects in the source and
driving images. In this paper, a new end-to-end unsupervised motion transfer
framework is proposed to overcome this issue. Firstly, we propose thin-plate
spline motion estimation to produce a more flexible optical flow, which warps
the feature maps of the source image to the feature domain of the driving
image. Secondly, in order to restore the missing regions more realistically, we
leverage multi-resolution occlusion masks to achieve more effective feature
fusion. Finally, additional auxiliary loss functions are designed to ensure
that there is a clear division of labor in the network modules, encouraging the
network to generate high-quality images. Our method can animate a variety of
objects, including talking faces, human bodies, and pixel animations.
Experiments demonstrate that our method performs better on most benchmarks than
the state of the art with visible improvements in pose-related metrics.
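
For intuition, the following NumPy sketch fits a classical 2D thin-plate
spline between source and driving keypoints and evaluates the induced flow on
a grid; the keypoint counts, names, and shapes are illustrative assumptions,
not the paper's implementation:

    import numpy as np

    def tps_kernel(r2):
        # U(r) = r^2 log r^2, with U(0) defined as 0.
        return np.where(r2 == 0, 0.0, r2 * np.log(r2 + 1e-12))

    def fit_tps(src, dst):
        """Fit a TPS mapping src keypoints (N, 2) onto dst keypoints (N, 2)."""
        n = src.shape[0]
        d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        K = tps_kernel(d2)                     # (N, N) radial terms
        P = np.hstack([np.ones((n, 1)), src])  # (N, 3) affine terms
        L = np.zeros((n + 3, n + 3))
        L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
        Y = np.zeros((n + 3, 2))
        Y[:n] = dst
        return np.linalg.solve(L, Y)           # (N+3, 2) TPS parameters

    def warp_points(params, src, pts):
        """Apply the fitted TPS to arbitrary query points (M, 2)."""
        d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        U = tps_kernel(d2)                               # (M, N)
        P = np.hstack([np.ones((len(pts), 1)), pts])
        return U @ params[:-3] + P @ params[-3:]

    src = np.random.rand(5, 2)                 # keypoints, source frame
    dst = src + 0.05 * np.random.randn(5, 2)   # keypoints, driving frame
    params = fit_tps(src, dst)
    grid = np.stack(np.meshgrid(np.linspace(0, 1, 4),
                                np.linspace(0, 1, 4)), -1).reshape(-1, 2)
    flow = warp_points(params, src, grid) - grid  # dense flow sketch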
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 18:40:55 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2022 03:06:26 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Zhao",
"Jian",
""
],
[
"Zhang",
"Hui",
""
]
] |
new_dataset
| 0.999099 |
2203.15078
|
Saarthak Kapse
|
Saarthak Kapse, Srijan Das, Prateek Prasanna
|
CD-Net: Histopathology Representation Learning using Pyramidal
Context-Detail Network
|
Submitted to MICCAI 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Extracting rich phenotype information, such as cell density and arrangement,
from whole slide histology images (WSIs) requires analysis of a large field of
view, i.e., more contextual information. This can be achieved by analyzing
the digital slides at lower resolution. A potential drawback is missing out on
details present at a higher resolution. To jointly leverage complementary
information from multiple resolutions, we present a novel transformer based
Pyramidal Context-Detail Network (CD-Net). CD-Net exploits the WSI pyramidal
structure through co-training of proposed Context and Detail Modules, which
operate on inputs from multiple resolutions. The residual connections between
the modules enable the joint training paradigm while learning self-supervised
representation for WSIs. The efficacy of CD-Net is demonstrated in classifying
Lung Adenocarcinoma from Squamous cell carcinoma.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 20:33:39 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Kapse",
"Saarthak",
""
],
[
"Das",
"Srijan",
""
],
[
"Prasanna",
"Prateek",
""
]
] |
new_dataset
| 0.957518 |
2203.15086
|
Satya Krishna Gorti
|
Satya Krishna Gorti, Noel Vouitsis, Junwei Ma, Keyvan Golestan,
Maksims Volkovs, Animesh Garg, Guangwei Yu
|
X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval
|
CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In text-video retrieval, the objective is to learn a cross-modal similarity
function between a text and a video that ranks relevant text-video pairs higher
than irrelevant pairs. However, videos inherently express a much wider gamut of
information than texts. Instead, texts often capture sub-regions of entire
videos and are most semantically similar to certain frames within videos.
Therefore, for a given text, a retrieval model should focus on the text's most
semantically similar video sub-regions to make a more relevant comparison. Yet,
most existing works aggregate entire videos without directly considering text.
Common text-agnostic aggregation schemes include mean-pooling or
self-attention over the frames, but these are likely to encode misleading
visual information not described in the given text. To address this, we propose
a cross-modal attention model called X-Pool that reasons between a text and the
frames of a video. Our core mechanism is a scaled dot product attention for a
text to attend to its most semantically similar frames. We then generate an
aggregated video representation conditioned on the text's attention weights
over the frames. We evaluate our method on three benchmark datasets of MSR-VTT,
MSVD and LSMDC, achieving new state-of-the-art results by up to 12% in relative
improvement in Recall@1. Our findings thereby highlight the importance of joint
text-video reasoning to extract important visual cues according to text. Full
code and demo can be found at: https://layer6ai-labs.github.io/xpool/
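
As a rough sketch of the core mechanism, the snippet below lets a text
embedding attend over frame embeddings with scaled dot-product attention and
returns the attention-weighted video representation; the projection matrices
and dimensions are illustrative assumptions, not the released code:

    import torch
    import torch.nn.functional as F

    def text_conditioned_pool(text_emb, frame_embs, wq, wk, wv):
        """text_emb: (D,), frame_embs: (F, D); returns a (D,) video embedding."""
        q = text_emb @ wq                       # query from the text
        k = frame_embs @ wk                     # keys from the frames
        v = frame_embs @ wv                     # values from the frames
        scores = k @ q / (q.shape[-1] ** 0.5)   # (F,) text-frame relevance
        attn = F.softmax(scores, dim=0)         # weights over frames
        return attn @ v                         # attention-weighted pooling

    d = 32
    wq, wk, wv = (torch.randn(d, d) for _ in range(3))
    text = torch.randn(d)                       # dummy CLIP-style text embedding
    frames = torch.randn(12, d)                 # dummy CLIP-style frame embeddings
    video = text_conditioned_pool(text, frames, wq, wk, wv)
    score = F.cosine_similarity(text, video, dim=0)  # retrieval score sketch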
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 20:47:37 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Gorti",
"Satya Krishna",
""
],
[
"Vouitsis",
"Noel",
""
],
[
"Ma",
"Junwei",
""
],
[
"Golestan",
"Keyvan",
""
],
[
"Volkovs",
"Maksims",
""
],
[
"Garg",
"Animesh",
""
],
[
"Yu",
"Guangwei",
""
]
] |
new_dataset
| 0.993305 |
2203.15090
|
Mehmet Baygin
|
Mehmet Baygin, Turker Tuncer, Sengul Dogan
|
New pyramidal hybrid textural and deep features based automatic skin
cancer classification model: Ensemble DarkNet and textural feature extractor
|
22 pages, 7 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Skin cancer is one of the most widely seen cancers worldwide, and
automatic classification of skin cancer can benefit dermatology clinics with
an accurate diagnosis. Hence, a machine learning-based automatic skin cancer
detection model must be developed. Material and Method: This research aims to
solve the automatic skin cancer detection problem. A colored skin cancer image
dataset is used. This dataset contains 3297 images with two classes. An
automatic multilevel textural and deep features-based model is presented.
Multilevel fused feature generation using the discrete wavelet transform
(DWT), local phase quantization (LPQ), the local binary pattern (LBP), and
pre-trained DarkNet19 and DarkNet53 is utilized to generate features of the
skin cancer images; the top 1000 features are selected using threshold
value-based neighborhood component analysis (NCA). The chosen top 1000
features are classified using the 10-fold cross-validation technique. Results:
Using ten-fold cross-validation, a 91.54% classification accuracy is obtained
by the recommended pyramidal hybrid feature generator and NCA selector-based
model. Further, various training and testing separation ratios (90:10, 80:20,
70:30, 60:40, 50:50) are used, and the maximum classification rate is
calculated as 95.74% using the 90:10 separation ratio. Conclusions: The
findings and calculated accuracies indicate that this model can be used in
dermatology and pathology clinics to simplify the skin cancer detection
process and help physicians.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 20:53:09 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Baygin",
"Mehmet",
""
],
[
"Tuncer",
"Turker",
""
],
[
"Dogan",
"Sengul",
""
]
] |
new_dataset
| 0.980589 |
2203.15103
|
Alejandro Escontrela
|
Alejandro Escontrela, Xue Bin Peng, Wenhao Yu, Tingnan Zhang, Atil
Iscen, Ken Goldberg, and Pieter Abbeel
|
Adversarial Motion Priors Make Good Substitutes for Complex Reward
Functions
|
8 pages, 6 figures, 3 tables
| null | null | null |
cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Training a high-dimensional simulated agent with an under-specified reward
function often leads the agent to learn physically infeasible strategies that
are ineffective when deployed in the real world. To mitigate these unnatural
behaviors, reinforcement learning practitioners often utilize complex reward
functions that encourage physically plausible behaviors. However, a tedious
labor-intensive tuning process is often required to create hand-designed
rewards which might not easily generalize across platforms and tasks. We
propose substituting complex reward functions with "style rewards" learned from
a dataset of motion capture demonstrations. A learned style reward can be
combined with an arbitrary task reward to train policies that perform tasks
using naturalistic strategies. These natural strategies can also facilitate
transfer to the real world. We build upon Adversarial Motion Priors -- an
approach from the computer graphics domain that encodes a style reward from a
dataset of reference motions -- to demonstrate that an adversarial approach to
training policies can produce behaviors that transfer to a real quadrupedal
robot without requiring complex reward functions. We also demonstrate that an
effective style reward can be learned from a few seconds of motion capture data
gathered from a German Shepherd and leads to energy-efficient locomotion
strategies with natural gait transitions.
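
A minimal sketch of the reward combination described above follows; the
discriminator architecture, the squashing formula, and the weights are
illustrative assumptions rather than the paper's exact design:

    import torch
    import torch.nn as nn

    discriminator = nn.Sequential(        # scores state transitions (s, s')
        nn.Linear(2 * 16, 64), nn.ReLU(), nn.Linear(64, 1))

    def style_reward(s, s_next):
        d = discriminator(torch.cat([s, s_next], dim=-1))
        # Least-squares-GAN style squashing into [0, 1].
        return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

    def combined_reward(task_r, s, s_next, w_task=0.5, w_style=0.5):
        return w_task * task_r + w_style * style_reward(s, s_next).squeeze(-1)

    s = torch.randn(8, 16)                # batch of states (dim 16 is a dummy)
    s_next = torch.randn(8, 16)
    task_r = torch.randn(8)               # e.g., velocity-tracking task reward
    print(combined_reward(task_r, s, s_next))

The task reward says what to do; the learned style reward says how to do it
naturally, replacing hand-tuned shaping terms.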
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 21:17:36 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Escontrela",
"Alejandro",
""
],
[
"Peng",
"Xue Bin",
""
],
[
"Yu",
"Wenhao",
""
],
[
"Zhang",
"Tingnan",
""
],
[
"Iscen",
"Atil",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Abbeel",
"Pieter",
""
]
] |
new_dataset
| 0.988537 |
2203.15121
|
Mohannad Ismail
|
Mohannad Ismail, Andrew Quach, Christopher Jelesnianski, Yeongjin
Jang, Changwoo Min
|
Tightly Seal Your Sensitive Pointers with PACTight
|
Accepted for publication to USENIX Security 2022
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
ARM is becoming more popular in desktops and data centers, opening a new
realm in terms of security attacks against ARM. ARM has released Pointer
Authentication, a new hardware security feature that is intended to ensure
pointer integrity with cryptographic primitives. In this paper, we utilize
Pointer Authentication (PA) to build a novel scheme to completely prevent any
misuse of security-sensitive pointers. We propose PACTight to tightly seal
these pointers. PACTight utilizes a strong and unique modifier that addresses
the current issues with the state-of-the-art PA defense mechanisms. We
implement four defenses based on the PACTight mechanism. Our security and
performance evaluation results show that PACTight defenses are more efficient
and secure. Using real PA instructions, we evaluated PACTight on 30 different
applications, including NGINX web server, with an average performance overhead
of 4.07% even when enforcing our strongest defense. PACTight demonstrates its
effectiveness and efficiency with real PA instructions on real hardware.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 21:55:51 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Ismail",
"Mohannad",
""
],
[
"Quach",
"Andrew",
""
],
[
"Jelesnianski",
"Christopher",
""
],
[
"Jang",
"Yeongjin",
""
],
[
"Min",
"Changwoo",
""
]
] |
new_dataset
| 0.971646 |
2203.15145
|
Alexander K\"ubler
|
Pascal Auf der Maur, Betim Djambazi, Yves Haberth\"ur, Patricia
H\"ormann, Alexander K\"ubler, Michael Lustenberger, Samuel Sigrist, Oda
Vigen, Julian F\"orster, Florian Achermann, Elias Hampp, Robert K.
Katzschmann, and Roland Siegwart
|
RoBoa: Construction and Evaluation of a Steerable Vine Robot for Search
and Rescue Applications
|
6 pages, 5 figures, 2021 IEEE 4th International Conference on Soft
Robotics (RoboSoft). For associated video, see this
https://www.youtube.com/watch?v=W6wlZ9kaUvo
|
2021 IEEE 4th International Conference on Soft Robotics (RoboSoft)
|
10.1109/RoboSoft51838.2021.9479192
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RoBoa is a vine-like search and rescue robot that can explore narrow and
cluttered environments such as destroyed buildings. The robot assists rescue
teams in finding and communicating with trapped people. It employs the
principle of vine robots for locomotion, everting the tip of its tube to move
forward. Inside the tube, pneumatic actuators enable lateral movement. The head
carries sensors and is mounted outside at the tip of the tube. At the back, a
supply box contains the rolled up tube and provides pressurized air, power,
computation, as well as an interface for the user to interact with the system.
A decentralized control scheme was implemented that reduces the required number
of cables and takes care of the low-level control of the pneumatic actuators.
The design, characterization, and experimental evaluation of the system and its
crucial components is shown. The complete prototype is fully functional and was
evaluated in a realistic environment of a collapsed building where the
remote-controlled robot was able to repeatedly locate a trapped person after a
travel distance of about 10 m.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 23:40:46 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"der Maur",
"Pascal Auf",
""
],
[
"Djambazi",
"Betim",
""
],
[
"Haberthür",
"Yves",
""
],
[
"Hörmann",
"Patricia",
""
],
[
"Kübler",
"Alexander",
""
],
[
"Lustenberger",
"Michael",
""
],
[
"Sigrist",
"Samuel",
""
],
[
"Vigen",
"Oda",
""
],
[
"Förster",
"Julian",
""
],
[
"Achermann",
"Florian",
""
],
[
"Hampp",
"Elias",
""
],
[
"Katzschmann",
"Robert K.",
""
],
[
"Siegwart",
"Roland",
""
]
] |
new_dataset
| 0.999701 |
2203.15166
|
Evan Lowe
|
Evan Lowe, Levent G\"uven\c{c}
|
Autonomous Road Vehicle Emergency Obstacle Avoidance Maneuver Framework
at Highway Speeds
|
50 pages, 25 figures, 2 tables
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
An Autonomous Road Vehicle (ARV) can navigate various types of road networks
using inputs such as throttle (acceleration), braking (deceleration), and
steering (change of lateral direction). In most ARV driving scenarios that
involve normal vehicle traffic and encounters with vulnerable road users
(VRUs), ARVs are not required to take evasive action. This paper presents a
novel Emergency Obstacle Avoidance Maneuver (EOAM) methodology for ARVs
traveling at higher speeds and lower road surface friction, involving
time-critical maneuver determination and control. The proposed EOAM Framework
offers usage of the ARV's sensing, perception, control, and actuation system
abilities as one cohesive system, to accomplish avoidance of an on-road
obstacle, based first on performance feasibility and second on passenger
comfort, and is designed to be well-integrated within an ARV high-level system.
Co-simulation including the ARV EOAM logic in Simulink and a vehicle model in
CarSim is conducted with speeds ranging from 55 to 165 km/h and on road
surfaces with friction ranging from 1.0 to 0.1. The results are analyzed and
given in the context of an entire ARV system, with implications for future
work.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 01:08:37 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Lowe",
"Evan",
""
],
[
"Güvenç",
"Levent",
""
]
] |
new_dataset
| 0.963713 |
2203.15178
|
Natarajan Shankar
|
Natarajan Shankar, Devesh Bhatt, Michael Ernst, Minyoung Kim,
Srivatsan Varadarajan, Suzanne Millstein, Jorge Navas, Jason Biatek, Huascar
Sanchez, Anitha Murugesan, Hao Ren
|
DesCert: Design for Certification
|
142 pages, 63 figures
| null | null |
SRI-CSL-2022-1
|
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The goal of the DARPA Automated Rapid Certification Of Software (ARCOS)
program is to "automate the evaluation of software assurance evidence to enable
certifiers to determine rapidly that system risk is acceptable." As part of
this program, the DesCert project focuses on the assurance-driven development
of new software. The DesCert team consists of SRI International, Honeywell
Research, and the University of Washington. We have adopted a formal,
tool-based approach to the construction of software artifacts that are
supported by rigorous evidence. The DesCert workflow integrates evidence
generation into a design process that goes from requirements capture and
analysis to the decomposition of the high-level software requirements into
architecture properties and software components with assertional contracts, and
on to software that can be analyzed both dynamically and statically. The
generated evidence is organized by means of an assurance ontology and
integrated into the RACK knowledge base.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 01:40:32 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Shankar",
"Natarajan",
""
],
[
"Bhatt",
"Devesh",
""
],
[
"Ernst",
"Michael",
""
],
[
"Kim",
"Minyoung",
""
],
[
"Varadarajan",
"Srivatsan",
""
],
[
"Millstein",
"Suzanne",
""
],
[
"Navas",
"Jorge",
""
],
[
"Biatek",
"Jason",
""
],
[
"Sanchez",
"Huascar",
""
],
[
"Murugesan",
"Anitha",
""
],
[
"Ren",
"Hao",
""
]
] |
new_dataset
| 0.955346 |
2203.15190
|
Xin Wen
|
Xin Wen and Junsheng Zhou and Yu-Shen Liu and Zhen Dong and Zhizhong
Han
|
3D Shape Reconstruction from 2D Images with Disentangled Attribute Flow
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconstructing a 3D shape from a single 2D image is a challenging task that
requires estimating detailed 3D structures based on the semantic attributes of
the 2D image. So far, most previous methods still struggle to extract semantic
attributes for the 3D reconstruction task. Since the semantic attributes of a
single image are usually implicit and entangled with each other, it is still
challenging to reconstruct a 3D shape with the detailed semantic structures
represented by the input image. To address this problem, we propose 3DAttriFlow
to disentangle and extract semantic attributes through different semantic
levels in the input images. These disentangled semantic attributes are
integrated into the 3D shape reconstruction process, which can provide definite
guidance for the reconstruction of specific attributes of the 3D shape. As a
result, the 3D decoder can explicitly capture high-level semantic features at
the bottom of the network and utilize low-level features at the top of the
network, which allows it to reconstruct more accurate 3D shapes. Note that the
explicit disentangling is learned without extra labels, where the only
supervision used in our training is the input image and its corresponding 3D
shape. Our comprehensive experiments on ShapeNet dataset demonstrate that
3DAttriFlow outperforms the state-of-the-art shape reconstruction methods, and
we also validate its generalization ability on shape completion task.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 02:03:31 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Wen",
"Xin",
""
],
[
"Zhou",
"Junsheng",
""
],
[
"Liu",
"Yu-Shen",
""
],
[
"Dong",
"Zhen",
""
],
[
"Han",
"Zhizhong",
""
]
] |
new_dataset
| 0.992311 |
2203.15253
|
Shankhanil Ghosh
|
Shankhanil Ghosh (1), Chhanda Saha (1) and Naagamani Molakathaala (1)
((1) School of Computer and Information Sciences, University of Hyderabad,
Hyderabad, India)
|
NeuraGen-A Low-Resource Neural Network based approach for Gender
Classification
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The human voice is the source of several important pieces of information,
carried in the form of features. These features help in interpreting various
characteristics associated with the speaker and the speech. Speaker-dependent
research targets speaker identification, speaker verification, speaker
biometrics, forensics using features, and cross-modal matching via speech and
face images. In such research contexts, it is very difficult to come across a
clean, well-annotated, publicly available speech corpus as a dataset.
Acquiring volunteers to generate such a dataset is also very expensive, not to
mention the enormous amount of effort and time researchers spend to gather
such data. The present paper proposes NeuraGen, a low-resource ANN
architecture. The proposed tool is used to classify the gender of the speaker
from speech recordings. We have used speech recordings collected from the
ELSDSR and limited TIMIT datasets, from which we extracted 8 speech features,
which were pre-processed and then fed into NeuraGen to identify the gender.
NeuraGen has successfully achieved an accuracy of 90.7407% and an F1 score of
91.227% on the training and 20-fold cross-validation datasets.
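
A minimal sketch of such a low-resource feed-forward classifier over 8 speech
features is given below; the layer sizes and training details are illustrative
assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(8, 16), nn.ReLU(),     # 8 extracted speech features in
        nn.Linear(16, 8), nn.ReLU(),
        nn.Linear(8, 1), nn.Sigmoid())   # probability of one gender class

    x = torch.randn(32, 8)               # dummy batch of feature vectors
    y = torch.randint(0, 2, (32, 1)).float()
    loss_fn = nn.BCELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(10):                  # a few illustrative training steps
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()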
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 05:57:24 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Ghosh",
"Shankhanil",
""
],
[
"Saha",
"Chhanda",
""
],
[
"Molakathaala",
"Naagamani",
""
]
] |
new_dataset
| 0.999164 |
2203.15334
|
Jianxin Sun Mr
|
Jianxin Sun, Qiyao Deng, Qi Li, Muyi Sun, Min Ren, Zhenan Sun
|
AnyFace: Free-style Text-to-Face Synthesis and Manipulation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing text-to-image synthesis methods are generally only applicable to
words in the training dataset. However, human faces are too variable to be
described with a limited vocabulary. Therefore, this paper proposes the first
free-style text-to-face method, namely AnyFace, enabling much wider open-world
applications such as the metaverse, social media, cosmetics, forensics, etc.
AnyFace has a novel
two-stream framework for face image synthesis and manipulation given arbitrary
descriptions of the human face. Specifically, one stream performs text-to-face
generation and the other conducts face image reconstruction. Facial text and
image features are extracted using the CLIP (Contrastive Language-Image
Pre-training) encoders. And a collaborative Cross Modal Distillation (CMD)
module is designed to align the linguistic and visual features across these two
streams. Furthermore, a Diverse Triplet Loss (DT loss) is developed to model
fine-grained features and improve facial diversity. Extensive experiments on
Multi-modal CelebA-HQ and CelebAText-HQ demonstrate significant advantages of
AnyFace over state-of-the-art methods. AnyFace can achieve high-quality,
high-resolution, and high-diversity face synthesis and manipulation results
without any constraints on the number and content of input captions.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 08:27:38 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Sun",
"Jianxin",
""
],
[
"Deng",
"Qiyao",
""
],
[
"Li",
"Qi",
""
],
[
"Sun",
"Muyi",
""
],
[
"Ren",
"Min",
""
],
[
"Sun",
"Zhenan",
""
]
] |
new_dataset
| 0.999784 |
2203.15354
|
Ben Saunders
|
Ben Saunders, Necati Cihan Camgoz, Richard Bowden
|
Signing at Scale: Learning to Co-Articulate Signs for Large-Scale
Photo-Realistic Sign Language Production
|
arXiv admin note: text overlap with arXiv:2011.09846
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sign languages are visual languages, with vocabularies as rich as their
spoken language counterparts. However, current deep-learning based Sign
Language Production (SLP) models produce under-articulated skeleton pose
sequences from constrained vocabularies and this limits applicability. To be
understandable and accepted by the deaf, an automatic SLP system must be able
to generate co-articulated photo-realistic signing sequences for large domains
of discourse.
In this work, we tackle large-scale SLP by learning to co-articulate between
dictionary signs, a method capable of producing smooth signing while scaling to
unconstrained domains of discourse. To learn sign co-articulation, we propose a
novel Frame Selection Network (FS-Net) that improves the temporal alignment of
interpolated dictionary signs to continuous signing sequences. Additionally, we
propose SignGAN, a pose-conditioned human synthesis model that produces
photo-realistic sign language videos directly from skeleton pose. We propose a
novel keypoint-based loss function which improves the quality of synthesized
hand images.
We evaluate our SLP model on the large-scale meineDGS (mDGS) corpus,
conducting extensive user evaluation showing our FS-Net approach improves
co-articulation of interpolated dictionary signs. Additionally, we show that
SignGAN significantly outperforms all baseline methods for quantitative
metrics, human perceptual studies and native deaf signer comprehension.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 08:51:38 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Saunders",
"Ben",
""
],
[
"Camgoz",
"Necati Cihan",
""
],
[
"Bowden",
"Richard",
""
]
] |
new_dataset
| 0.987693 |
2203.15369
|
Stefano Zacchiroli
|
Davide Rossi, Stefano Zacchiroli (IP Paris, LTCI)
|
Geographic Diversity in Public Code Contributions
|
The 2022 Mining Software Repositories Conference, May 2022,
Pittsburgh, Pennsylvania, United States
| null |
10.1145/3524842.3528471
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We conduct an exploratory, large-scale, longitudinal study of 50 years of
commits to publicly available version control system repositories, in order to
characterize the geographic diversity of contributors to public code and its
evolution over time. We analyze in total 2.2 billion commits collected by
Software Heritage from 160 million projects and authored by 43 million authors
during the 1971-2021 time period. We geolocate developers to 12 world regions
derived from the United Nations geoscheme, using as signals email top-level
domains, author names compared with name distributions around the world, and
UTC offsets mined from commit metadata. We find evidence of the early dominance
of North America in open source software, later joined by Europe. After that
period, the geographic diversity in public code has been constantly increasing.
We also identify relevant historical shifts related to the UNIX wars, the
increase of coding literacy in Central and South Asia, and broader phenomena
like colonialism and people movement across countries (immigration/emigration).
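
To illustrate one of the geolocation signals, the sketch below maps the UTC
offset carried in a commit's author date to candidate world regions; the
offset-to-region table is a deliberately coarse, illustrative assumption:

    import re

    OFFSET_TO_REGIONS = {                 # coarse and intentionally partial
        "-0800": ["Northern America"],
        "-0500": ["Northern America", "South America"],
        "+0000": ["Europe", "Western Africa"],
        "+0100": ["Europe", "Middle Africa"],
        "+0530": ["Southern Asia"],
        "+0900": ["Eastern Asia"],
    }

    def regions_from_commit_date(author_date):
        """author_date: e.g. 'Tue Mar 29 09:07:43 2022 +0200'."""
        m = re.search(r"([+-]\d{4})$", author_date.strip())
        return OFFSET_TO_REGIONS.get(m.group(1), []) if m else []

    print(regions_from_commit_date("Tue Mar 29 09:07:43 2022 +0530"))
    # ['Southern Asia'] -- combined with email TLDs and name statistics

On its own the offset is ambiguous, which is why it is combined with the other
two signals.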
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 09:07:43 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Rossi",
"Davide",
"",
"IP Paris, LTCI"
],
[
"Zacchiroli",
"Stefano",
"",
"IP Paris, LTCI"
]
] |
new_dataset
| 0.999022 |
2203.15437
|
Manohara Pai
|
Girisha S, Ujjwal Verma, Manohara Pai M M and Radhika M Pai
|
Contextual Information Based Anomaly Detection for a Multi-Scene UAV
Aerial Videos
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
UAV based surveillance is gaining much interest worldwide due to its
extensive applications in monitoring wildlife, urban planning, disaster
management, campus security, etc. These videos are analyzed for
strange/odd/anomalous patterns which are essential aspects of surveillance. But
manual analysis of these videos is tedious and laborious. Hence, the
development of computer-aided systems for the analysis of UAV based
surveillance videos is crucial. Despite this interest, in the literature,
several computer-aided systems have been developed focusing only on CCTV-based
surveillance
videos. These methods are designed for single scene scenarios and lack
contextual knowledge which is required for multi-scene scenarios. Furthermore,
the lack of standard UAV based anomaly detection datasets limits the
development of these systems. In this regard, the present work aims at the
development of a Computer Aided Decision support system to analyse UAV based
surveillance videos. A new UAV based multi-scene anomaly detection dataset is
developed with frame-level annotations for the development of computer aided
systems. It holistically uses contextual, temporal and appearance features for
accurate detection of anomalies. Furthermore, a new inference strategy is
proposed that utilizes few anomalous samples along with normal samples to
identify better decision boundaries. The proposed method is extensively
evaluated on the UAV based anomaly detection dataset and performed
competitively with respect to state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 11:07:49 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"S",
"Girisha",
""
],
[
"Verma",
"Ujjwal",
""
],
[
"M",
"Manohara Pai M",
""
],
[
"Pai",
"Radhika M",
""
]
] |
new_dataset
| 0.970563 |
2203.15443
|
Marcos Faundez-Zanuy
|
Anna Esposito, Vincenzo Capuano, Jiri Mekyska, Marcos Faundez-Zanuy
|
A Naturalistic Database of Thermal Emotional Facial Expressions and
Effects of Induced Emotions on Memory
|
15 pages published in Esposito, A., Esposito, A.M., Vinciarelli, A.,
Hoffmann, R., M\"uller, V.C. (eds) Cognitive Behavioural Systems. Lecture
Notes in Computer Science, vol 7403. Springer, Berlin, Heidelberg
|
2012 Cognitive Behavioural Systems. Lecture Notes in Computer
Science, vol 7403. Springer, Berlin, Heidelberg
|
10.1007/978-3-642-34584-5_12
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This work defines a procedure for collecting naturally induced emotional
facial expressions through the vision of movie excerpts with high emotional
contents and reports experimental data ascertaining the effects of emotions on
memory word recognition tasks. The induced emotional states include the four
basic emotions of sadness, disgust, happiness, and surprise, as well as the
neutral emotional state. The resulting database contains both thermal and
visible emotional facial expressions, portrayed by forty Italian subjects and
simultaneously acquired by appropriately synchronizing a thermal and a standard
visible camera. Each subject's recording session lasted 45 minutes, allowing
for each mode (thermal or visible) to collect a minimum of 2000 facial
expressions from which a minimum of 400 were selected as highly expressive of
each emotion category. The database is available to the scientific community
and can be obtained by contacting one of the authors. For this pilot study, it was
found that emotions and/or emotion categories do not affect individual
performance on memory word recognition tasks and temperature changes in the
face or in some regions of it do not discriminate among emotional states.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 11:17:35 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Esposito",
"Anna",
""
],
[
"Capuano",
"Vincenzo",
""
],
[
"Mekyska",
"Jiri",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
]
] |
new_dataset
| 0.955479 |
2203.15480
|
Yuwen Deng
|
Yuwen Deng, Donghai Guan, Yanyu Chen, Weiwei Yuan, Jiemin Ji,
Mingqiang Wei
|
SAR-ShipNet: SAR-Ship Detection Neural Network via Bidirectional
Coordinate Attention and Multi-resolution Feature Fusion
|
This paper was accepted by the International Conference on Acoustics,
Speech, and Signal Processing(ICASSP) 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies a practically meaningful ship detection problem from
synthetic aperture radar (SAR) images using neural networks. We broadly extract
different types of SAR image features and raise the intriguing question of
whether these extracted features are beneficial to (1) suppress data variations
(e.g., complex land-sea backgrounds, scattered noise) of real-world SAR images,
and (2) enhance the features of ships that are small objects and have different
aspect (length-width) ratios, therefore resulting in the improvement of ship
detection. To answer this question, we propose a SAR-ship detection neural
network (call SAR-ShipNet for short), by newly developing Bidirectional
Coordinate Attention (BCA) and Multi-resolution Feature Fusion (MRF) based on
CenterNet. Moreover, considering the varying length-width ratio of arbitrary
ships, we adopt elliptical Gaussian probability distribution in CenterNet to
improve the performance of base detector models. Experimental results on the
public SAR-Ship dataset show that our SAR-ShipNet achieves competitive
advantages in both speed and accuracy.
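
For intuition, the following NumPy sketch renders an elliptical Gaussian
training target whose axes follow a ship's length-width ratio, in contrast to
CenterNet's circular Gaussian; the radius heuristic is an illustrative
assumption:

    import numpy as np

    def draw_elliptical_gaussian(heatmap, center, box_w, box_h, alpha=6.0):
        """Splat exp(-(dx^2/2sx^2 + dy^2/2sy^2)) at a ship's center point."""
        h, w = heatmap.shape
        cx, cy = center
        sx, sy = box_w / alpha, box_h / alpha   # axis-aligned std devs
        ys, xs = np.mgrid[0:h, 0:w]
        g = np.exp(-((xs - cx) ** 2 / (2 * sx ** 2) +
                     (ys - cy) ** 2 / (2 * sy ** 2)))
        np.maximum(heatmap, g, out=heatmap)     # keep max over overlapping ships
        return heatmap

    hm = np.zeros((64, 64), dtype=np.float32)
    draw_elliptical_gaussian(hm, center=(40, 20), box_w=30, box_h=8)
    print(hm.max(), hm[20, 40])                 # peak of 1.0 at the center

A long, thin ship thus gets a target that is tolerant along its length but
strict across its width.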
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 12:27:04 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Deng",
"Yuwen",
""
],
[
"Guan",
"Donghai",
""
],
[
"Chen",
"Yanyu",
""
],
[
"Yuan",
"Weiwei",
""
],
[
"Ji",
"Jiemin",
""
],
[
"Wei",
"Mingqiang",
""
]
] |
new_dataset
| 0.962606 |
2203.15498
|
Inderjeet Singh
|
Inderjeet Singh, Toshinori Araki, and Kazuya Kakizaki
|
Powerful Physical Adversarial Examples Against Practical Face
Recognition Systems
|
Accepted at IEEE/CVF WACV 2022 MAP
| null | null | null |
cs.CR cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well-known that most existing machine learning (ML)-based
safety-critical applications are vulnerable to carefully crafted input
instances called adversarial examples (AXs). An adversary can conveniently
attack these target systems from the digital as well as the physical world.
This paper aims at the generation of robust physical AXs against face
recognition systems.
We present a novel smoothness loss function and a patch-noise combo attack for
realizing powerful physical AXs. The smoothness loss interjects the concept of
delayed constraints during the attack generation process, thereby causing
better handling of optimization complexity and smoother AXs for the physical
domain. The patch-noise combo attack combines patch noise and imperceptibly
small noises from different distributions to generate powerful
registration-based physical AXs. An extensive experimental analysis found that
our smoothness loss results in robust and more transferable digital and
physical AXs than the conventional techniques. Notably, our smoothness loss
results in a 1.17 and 1.97 times better mean attack success rate (ASR) in
physical white-box and black-box attacks, respectively. Our patch-noise combo
attack furthers the performance gains and results in 2.39 and 4.74 times higher
mean ASR than the conventional technique in physical-world white-box and black-box
attacks, respectively.
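
A minimal sketch of a total-variation-style smoothness term, combined with the
delayed-constraint idea of ramping its weight up during optimization, is shown
below; the attack-loss stand-in and the schedule are illustrative assumptions,
not the paper's exact loss:

    import torch

    def tv_smoothness(patch):
        """patch: (C, H, W); penalizes abrupt neighboring-pixel changes."""
        dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
        dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
        return dh + dw

    patch = torch.rand(3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.01)
    for step in range(100):
        adv_loss = -patch.sum()                 # dummy stand-in for the attack loss
        w = min(1.0, step / 50)                 # delayed constraint: ramp up
        loss = adv_loss + w * 10.0 * tv_smoothness(patch)
        opt.zero_grad()
        loss.backward()
        opt.step()
        patch.data.clamp_(0, 1)                 # keep a printable patch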
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 18:29:44 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Singh",
"Inderjeet",
""
],
[
"Araki",
"Toshinori",
""
],
[
"Kakizaki",
"Kazuya",
""
]
] |
new_dataset
| 0.95383 |
2203.15509
|
Diptendu Chatterjee
|
Diptendu Chatterjee, Rishiraj Bhattacharyya
|
Firefighter Problem with Minimum Budget: Hardness and Approximation
Algorithm for Unit Disk Graphs
|
10 pages, 2 algorithms
| null | null | null |
cs.DS cs.CC cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unit disk graphs are the intersection graphs of unit disks in the plane. These
graphs are of great importance due to their structural similarity with
wireless communication networks. The firefighter problem on unit disk graphs
is interesting as it models virus spreading in a wireless network and asks for
a solution to stop it. In this paper, we
consider the MIN-BUDGET firefighter problem where the goal is to determine the
minimum number of firefighters required and the nodes to place them at each
time instant to save a given set of vertices of a given graph and a fire
breakout node. We show that the MIN-BUDGET firefighter problem in a unit disk
graph is NP-Hard. We also present a constant factor approximation algorithm.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 12:54:14 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Chatterjee",
"Diptendu",
""
],
[
"Bhattacharyya",
"Rishiraj",
""
]
] |
new_dataset
| 0.995397 |
2203.15558
|
Eduardo Rodrigues
|
Eduardo Rodrigues, Bianca Zadrozny, Campbell Watson
|
Wildfire risk forecast: An optimizable fire danger index
|
6 pages, 5 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Wildfire events have caused severe losses in many places around the world and
are expected to increase with climate change. Throughout the years many
technologies have been developed to identify fire events early on and to
simulate fire behavior once they have started. Another particularly helpful
technology is fire risk indices, which use weather forcing to make advanced
predictions of the risk of fire. Predictions of fire risk indices can be used,
for instance, to allocate resources in places with high risk. These indices
have been developed over the years as empirical models with parameters that
were estimated in lab experiments and field tests. These parameters, however,
may not fit well in all places where these models are used. In this paper we
propose a novel implementation of one index (NFDRS IC) as a differentiable
function in which one can optimize its internal parameters via gradient
descent. We leverage existing machine learning frameworks (PyTorch) to
construct our model. This approach has two benefits: (1) the NFDRS IC
parameters can be improved for each region using actual observed fire events,
and (2) the internal variables remain intact for interpretations by specialists
instead of meaningless hidden layers as in traditional neural networks. In this
paper we evaluate our strategy with actual fire events for locations in the USA
and Europe.
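
To make the idea concrete, here is a minimal PyTorch sketch of an empirical
index re-implemented as a differentiable module whose named constants are fit
to observed fire events by gradient descent; the toy formula is an
illustrative assumption, not the real NFDRS IC:

    import torch
    import torch.nn as nn

    class ToyFireIndex(nn.Module):
        def __init__(self):
            super().__init__()
            # Empirical constants become learnable but keep their meaning.
            self.temp_coef = nn.Parameter(torch.tensor(0.05))
            self.humidity_coef = nn.Parameter(torch.tensor(0.03))
            self.wind_coef = nn.Parameter(torch.tensor(0.10))

        def forward(self, temp, rel_humidity, wind):
            risk = (self.temp_coef * temp
                    - self.humidity_coef * rel_humidity
                    + self.wind_coef * wind)
            return torch.sigmoid(risk)          # ignition probability proxy

    model = ToyFireIndex()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    temp = torch.rand(64) * 40                  # dummy weather forcing
    rh, wind = torch.rand(64) * 100, torch.rand(64) * 20
    fires = torch.randint(0, 2, (64,)).float()  # dummy observed fire events
    for _ in range(100):                        # fit the constants per region
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy(model(temp, rh, wind), fires)
        loss.backward()
        opt.step()

Unlike a generic neural network, each tuned parameter retains its physical
interpretation for specialists.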
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 14:08:49 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Rodrigues",
"Eduardo",
""
],
[
"Zadrozny",
"Bianca",
""
],
[
"Watson",
"Campbell",
""
]
] |
new_dataset
| 0.983332 |
2203.15568
|
Theodoros Giannakopoulos
|
Maria Moutti, Sofia Eleftheriou, Panagiotis Koromilas, Theodoros
Giannakopoulos
|
A Dataset for Speech Emotion Recognition in Greek Theatrical Plays
| null | null | null | null |
cs.SD cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning methodologies can be adopted in cultural applications and
propose new ways to distribute or even present the cultural content to the
public. For instance, speech analytics can be adopted to automatically generate
subtitles in theatrical plays, in order to (among other purposes) help people
with hearing loss. Apart from a typical speech-to-text transcription with
Automatic Speech Recognition (ASR), Speech Emotion Recognition (SER) can be
used to automatically predict the underlying emotional content of speech
dialogues in theatrical plays, and thus to provide a deeper understanding how
the actors utter their lines. However, real-world datasets from theatrical
plays are not available in the literature. In this work we present GreThE, the
Greek Theatrical Emotion dataset, a new publicly available data collection for
speech emotion recognition in Greek theatrical plays. The dataset contains
utterances from various actors and plays, along with respective valence and
arousal annotations. Towards this end, multiple annotators have been asked to
provide their input for each speech recording and inter-annotator agreement is
taken into account in the final ground truth generation. In addition, we
discuss the results of some indicative experiments that have been conducted
with machine and deep learning frameworks, using the dataset, along with some
widely used databases in the field of speech emotion recognition.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 21:55:59 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Moutti",
"Maria",
""
],
[
"Eleftheriou",
"Sofia",
""
],
[
"Koromilas",
"Panagiotis",
""
],
[
"Giannakopoulos",
"Theodoros",
""
]
] |
new_dataset
| 0.999795 |
2203.15591
|
Miguel Del Rio Fernandez
|
Miguel Del Rio, Peter Ha, Quinten McNamara, Corey Miller, Shipra
Chandra
|
Earnings-22: A Practical Benchmark for Accents in the Wild
|
Submitted to Interspeech 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Modern automatic speech recognition (ASR) systems have achieved superhuman
Word Error Rate (WER) on many common corpora despite lacking adequate
performance on speech in the wild. Beyond that, there is a lack of real-world,
accented corpora to properly benchmark academic and commercial models. To
ensure this type of speech is represented in ASR benchmarking, we present
Earnings-22, a 125 file, 119 hour corpus of English-language earnings calls
gathered from global companies. We run a comparison across 4 commercial models
showing the variation in performance when taking country of origin into
consideration. Looking at hypothesis transcriptions, we explore errors common
to all ASR systems tested. By examining Individual Word Error Rate (IWER), we
find that key speech features impact model performance more for certain accents
than others. Earnings-22 provides a free-to-use benchmark of real-world,
accented audio to bridge academic and industrial research.
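
For reference, Word Error Rate, the metric used throughout, is the word-level
edit distance divided by the reference length; a minimal pure-Python sketch:

    def wer(reference, hypothesis):
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edits to turn the first i ref words into first j hyp words
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i                    # deletions
        for j in range(len(hyp) + 1):
            dp[0][j] = j                    # insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
                dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
        return dp[-1][-1] / max(len(ref), 1)

    print(wer("revenue grew five percent", "revenue grew nine percent"))  # 0.25

Individual Word Error Rate (IWER) instead tracks errors per word, which is
what surfaces the accent-specific effects discussed above.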
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 14:02:57 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Del Rio",
"Miguel",
""
],
[
"Ha",
"Peter",
""
],
[
"McNamara",
"Quinten",
""
],
[
"Miller",
"Corey",
""
],
[
"Chandra",
"Shipra",
""
]
] |
new_dataset
| 0.999466 |
2203.15625
|
Kehong Gong
|
Kehong Gong, Bingbing Li, Jianfeng Zhang, Tao Wang, Jing Huang,
Michael Bi Mi, Jiashi Feng, Xinchao Wang
|
PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and
Hallucination under Self-supervision
|
CVPR 2022 Oral Paper, code available:
https://github.com/Garfield-kh/PoseTriplet
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing self-supervised 3D human pose estimation schemes have largely relied
on weak supervisions like consistency loss to guide the learning, which,
inevitably, leads to inferior results in real-world scenarios with unseen
poses. In this paper, we propose a novel self-supervised approach that allows
us to explicitly generate 2D-3D pose pairs for augmenting supervision, through
a self-enhancing dual-loop learning framework. This is made possible via
introducing a reinforcement-learning-based imitator, which is learned jointly
with a pose estimator alongside a pose hallucinator; the three components form
two loops during the training process, complementing and strengthening one
another. Specifically, the pose estimator transforms an input 2D pose sequence
to a low-fidelity 3D output, which is then enhanced by the imitator that
enforces physical constraints. The refined 3D poses are subsequently fed to the
hallucinator for producing even more diverse data, which are, in turn,
strengthened by the imitator and further utilized to train the pose estimator.
Such a co-evolution scheme, in practice, enables training a pose estimator on
self-generated motion data without relying on any given 3D data. Extensive
experiments across various benchmarks demonstrate that our approach yields
encouraging results significantly outperforming the state of the art and, in
some cases, even on par with results of fully-supervised methods. Notably, it
achieves 89.1% 3D PCK on MPI-INF-3DHP under self-supervised cross-dataset
evaluation setup, improving upon the previous best self-supervised methods by
8.6%. Code can be found at: https://github.com/Garfield-kh/PoseTriplet
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 14:45:53 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Gong",
"Kehong",
""
],
[
"Li",
"Bingbing",
""
],
[
"Zhang",
"Jianfeng",
""
],
[
"Wang",
"Tao",
""
],
[
"Huang",
"Jing",
""
],
[
"Mi",
"Michael Bi",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Wang",
"Xinchao",
""
]
] |
new_dataset
| 0.998493 |
2203.15629
|
Shana Moothedath
|
Jiabin Lin, Xian Yeow Lee, Talukder Jubery, Shana Moothedath, Soumik
Sarkar, and Baskar Ganapathysubramanian
|
Stochastic Conservative Contextual Linear Bandits
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many physical systems have underlying safety considerations that require that
the strategy deployed ensures the satisfaction of a set of constraints.
Further, often we have only partial information on the state of the system. We
study the problem of safe real-time decision making under uncertainty. In this
paper, we formulate a conservative stochastic contextual bandit formulation for
real-time decision making when an adversary chooses a distribution on the set
of possible contexts and the learner is subject to certain safety/performance
constraints. The learner observes only the context distribution and the exact
context is unknown, and the goal is to develop an algorithm that selects a
sequence of optimal actions to maximize the cumulative reward without violating
the safety constraints at any time step. By leveraging the UCB algorithm for
this setting, we propose a conservative linear UCB algorithm for stochastic
bandits with context distribution. We prove an upper bound on the regret of the
algorithm and show that it can be decomposed into three terms: (i) an upper
bound for the regret of the standard linear UCB algorithm, (ii) a constant term
(independent of time horizon) that accounts for the loss of being conservative
in order to satisfy the safety constraint, and (iii) a constant term
(independent of time horizon) that accounts for the loss for the contexts being
unknown and only the distribution being known. To validate the performance of
our approach we perform extensive simulations on synthetic data and on
real-world maize data collected through the Genomes to Fields (G2F) initiative.
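
A minimal sketch of the conservative decision rule follows: the UCB-optimal
arm is played only when a pessimistic estimate of cumulative reward stays
within a (1 - alpha) fraction of a safe baseline; all symbols and the
bookkeeping below are illustrative assumptions about the setting:

    import numpy as np

    d, alpha, lam = 4, 0.2, 1.0
    A, b = lam * np.eye(d), np.zeros(d)    # ridge-regression statistics
    baseline_total = 0.0                   # cumulative baseline reward
    pessimistic_total = 0.0                # lower bound on earned reward

    def ucb_score(x, beta=1.0):
        theta = np.linalg.solve(A, b)                 # ridge estimate
        width = beta * np.sqrt(x @ np.linalg.solve(A, x))
        return x @ theta + width, x @ theta - width   # (UCB, LCB)

    def choose(arm_mean_features, baseline_arm, baseline_reward):
        """arm_mean_features: E[phi(context, a)] under the known context
        distribution, one vector per arm."""
        global pessimistic_total, baseline_total
        bounds = [ucb_score(x) for x in arm_mean_features]
        best = max(range(len(bounds)), key=lambda a: bounds[a][0])
        baseline_total += baseline_reward
        # Conservative check: stay within (1 - alpha) of the baseline.
        if pessimistic_total + bounds[best][1] >= (1 - alpha) * baseline_total:
            pessimistic_total += bounds[best][1]
            return best
        pessimistic_total += baseline_reward  # safe fallback
        return baseline_arm

    arms = [np.random.rand(d) for _ in range(3)]
    print(choose(arms, baseline_arm=0, baseline_reward=0.5))

Note the expected features stand in for the unknown exact context, matching
the distribution-only observation model described above.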
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 14:50:50 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Lin",
"Jiabin",
""
],
[
"Lee",
"Xian Yeow",
""
],
[
"Jubery",
"Talukder",
""
],
[
"Moothedath",
"Shana",
""
],
[
"Sarkar",
"Soumik",
""
],
[
"Ganapathysubramanian",
"Baskar",
""
]
] |
new_dataset
| 0.984715 |
2203.15704
|
Chris Thomas
|
Christopher Thomas and Yipeng Zhang and Shih-Fu Chang
|
Fine-Grained Visual Entailment
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual entailment is a recently proposed multimodal reasoning task where the
goal is to predict the logical relationship of a piece of text to an image. In
this paper, we propose an extension of this task, where the goal is to predict
the logical relationship of fine-grained knowledge elements within a piece of
text to an image. Unlike prior work, our method is inherently explainable and
makes logical predictions at different levels of granularity. Because we lack
fine-grained labels to train our method, we propose a novel multi-instance
learning approach which learns a fine-grained labeling using only sample-level
supervision. We also impose novel semantic structural constraints which ensure
that fine-grained predictions are internally semantically consistent. We
evaluate our method on a new dataset of manually annotated knowledge elements
and show that our method achieves 68.18\% accuracy at this challenging task
while significantly outperforming several strong baselines. Finally, we present
extensive qualitative results illustrating our method's predictions and the
visual evidence our method relied on. Our code and annotated dataset can be
found here: https://github.com/SkrighYZ/FGVE.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 16:09:38 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Thomas",
"Christopher",
""
],
[
"Zhang",
"Yipeng",
""
],
[
"Chang",
"Shih-Fu",
""
]
] |
new_dataset
| 0.991964 |
2203.15709
|
Lixin Yang
|
Lixin Yang, Kailin Li, Xinyu Zhan, Fei Wu, Anran Xu, Liu Liu, Cewu Lu
|
OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object
Interaction
|
Accepted by CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning how humans manipulate objects requires machines to acquire knowledge
from two perspectives: one for understanding object affordances and the other
for learning human's interactions based on the affordances. Even though these
two knowledge bases are crucial, we find that current databases lack a
comprehensive awareness of them. In this work, we propose a multi-modal and
rich-annotated knowledge repository, OakInk, for visual and cognitive
understanding of hand-object interactions. We start by collecting 1,800 common
household objects and annotate their affordances to construct the first
knowledge base: Oak. Given the affordance, we record rich human interactions
with 100 selected objects in Oak. Finally, we transfer the interactions on the
100 recorded objects to their virtual counterparts through a novel method:
Tink. The recorded and transferred hand-object interactions constitute the
second knowledge base: Ink. As a result, OakInk contains 50,000 distinct
affordance-aware and intent-oriented hand-object interactions. We benchmark
OakInk on pose estimation and grasp generation tasks. Moreover, we propose two
practical applications of OakInk: intent-based interaction generation and
handover generation. Our datasets and source code are publicly available at
https://github.com/lixiny/OakInk.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 16:13:07 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Yang",
"Lixin",
""
],
[
"Li",
"Kailin",
""
],
[
"Zhan",
"Xinyu",
""
],
[
"Wu",
"Fei",
""
],
[
"Xu",
"Anran",
""
],
[
"Liu",
"Liu",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.99787 |
2203.15726
|
Ghassan Samara
|
Ruzayn Quaddoura, Gassan Samara
|
Scheduling UET-UCT DAGs of Depth Two on Two Processors
|
6 pages
|
2021 22nd International Arab Conference on Information Technology
(ACIT) | 978-1-6654-1995-6/21 2021 IEEE
|
10.1109/ACIT53391.2021.9677100
| null |
cs.DS cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Given unit execution time (UET) tasks whose precedence constraints form a
directed acyclic graph (DAG), the arcs are associated with unit communication
time (UCT) delays. The problem is to schedule the tasks on two processors in
order to minimize the makespan. Several polynomial algorithms in the literature
are proposed for special classes of digraphs, but the complexity of solving
this problem in the general case is still a challenging open question. We propose in
this paper a linear time algorithm to compute an optimal schedule for the class
of DAGs of depth two.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 11:44:07 GMT"
}
] | 2022-03-30T00:00:00 |
[
[
"Quaddoura",
"Ruzayn",
""
],
[
"Samara",
"Gassan",
""
]
] |
new_dataset
| 0.96326 |
1704.01426
|
Daniele Pannone
|
Danilo Avola, Gian Luca Foresti, Niki Martinel, Daniele Pannone and
Claudio Piciarelli
|
The UMCD Dataset
|
3 pages, 5 figures
|
IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018
|
10.1109/TSMC.2018.2804766
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, the technological improvements of low-cost small-scale
Unmanned Aerial Vehicles (UAVs) are promoting an ever-increasing use of them in
different tasks. In particular, small-scale UAVs are useful in all those
low-altitude tasks in which common UAVs cannot be adopted, such as
recurrent comprehensive view of wide environments, frequent monitoring of
military areas, real-time classification of static and moving entities (e.g.,
people, cars, etc.). These tasks can be supported by mosaicking and change
detection algorithms performed at low altitude. Currently, public datasets for
testing these algorithms are not available. This paper presents the UMCD
dataset, the first collection of geo-referenced video sequences acquired at
low-altitude for mosaicking and change detection purposes. Five reference
scenarios are also reported.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2017 13:49:27 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Avola",
"Danilo",
""
],
[
"Foresti",
"Gian Luca",
""
],
[
"Martinel",
"Niki",
""
],
[
"Pannone",
"Daniele",
""
],
[
"Piciarelli",
"Claudio",
""
]
] |
new_dataset
| 0.999294 |
2004.06612
|
Markus Ryll
|
Markus Ryll, Davide Bicego, Mattia Giurato, Marco Lovera, Antonio
Franchi
|
FAST-Hex -- A Morphing Hexarotor: Design, Mechanical Implementation,
Control and Experimental Validation
| null | null |
10.1109/TMECH.2021.3099197
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FAST-Hex, a micro aerial hexarotor platform that can seamlessly
transition from an under-actuated to a fully-actuated configuration
with only one additional control input, a motor that synchronously tilts all
propellers. The FAST-Hex adapts between the more efficient but under-actuated
collinear multi-rotor configuration and the less efficient but
full-pose-tracking one attained by non-collinear multi-rotors. On the basis of
prior work on minimal-input configurable micro aerial vehicles, we
mainly stress three aspects: mechanical design, motion control and experimental
validation. Specifically, we present the lightweight mechanical structure of
the FAST-Hex that allows it to only use one additional input to achieve
configurability and full actuation in a vast state space. The motion controller
receives as input any reference pose in $\mathbb{R}^3\times \mathrm{SO}(3)$ (3D
position + 3D orientation). Full pose tracking is achieved if the reference
pose is feasible with respect to actuator constraints. In case of unfeasibility
a new feasible desired trajectory is generated online giving priority to the
position tracking over the orientation tracking. Finally we present a large set
of experimental results shedding light on all aspects of the control and pose
tracking of FAST-Hex.
|
[
{
"version": "v1",
"created": "Tue, 14 Apr 2020 15:52:42 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Ryll",
"Markus",
""
],
[
"Bicego",
"Davide",
""
],
[
"Giurato",
"Mattia",
""
],
[
"Lovera",
"Marco",
""
],
[
"Franchi",
"Antonio",
""
]
] |
new_dataset
| 0.99863 |
2010.14622
|
Runbing Zheng
|
Runbing Zheng, Vince Lyzinski, Carey E. Priebe and Minh Tang
|
Vertex nomination between graphs via spectral embedding and quadratic
programming
| null | null | null | null |
cs.SI stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a network and a subset of interesting vertices whose identities are
only partially known, the vertex nomination problem seeks to rank the remaining
vertices in such a way that the interesting vertices are ranked at the top of
the list. An important variant of this problem is vertex nomination in the
multi-graphs setting. Given two graphs $G_1, G_2$ with common vertices and a
vertex of interest $x \in G_1$, we wish to rank the vertices of $G_2$ such that
the vertices most similar to $x$ are ranked at the top of the list. The current
paper addresses this problem and proposes a method that first applies adjacency
spectral graph embedding to embed the graphs into a common Euclidean space, and
then solves a penalized linear assignment problem to obtain the nomination
lists. Since the spectral embeddings of the graphs are only unique up to
orthogonal transformations, we present two approaches to eliminate this
potential non-identifiability. One approach is based on orthogonal Procrustes
and is applicable when there are enough vertices with known correspondence
between the two graphs. Another approach uses adaptive point set registration
and is applicable when there are few or no vertices with known correspondence.
We show that our nomination scheme leads to accurate nomination under a
generative model for pairs of random graphs that are approximately low-rank and
possibly with pairwise edge correlations. We illustrate our algorithm's
performance through simulation studies on synthetic data as well as analysis of
a high-school friendship network and analysis of transition rates between web
pages on the Bing search engine.
|
[
{
"version": "v1",
"created": "Sat, 24 Oct 2020 10:50:29 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Feb 2021 17:50:13 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Oct 2021 05:58:21 GMT"
},
{
"version": "v4",
"created": "Sun, 27 Mar 2022 18:47:13 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Zheng",
"Runbing",
""
],
[
"Lyzinski",
"Vince",
""
],
[
"Priebe",
"Carey E.",
""
],
[
"Tang",
"Minh",
""
]
] |
new_dataset
| 0.997718 |
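As a rough illustration of the pipeline sketched in this abstract (spectral embedding, orthogonal alignment, distance-based ranking), here is a hedged Python sketch. It uses a plain eigendecomposition for the adjacency spectral embedding and seed-based orthogonal Procrustes; the paper's penalized linear assignment step is omitted, and the embedding dimension is illustrative.

```python
import numpy as np

def ase(adj, d):
    """Adjacency spectral embedding: top-d scaled eigenvectors.
    Assumes a symmetric (undirected) adjacency matrix."""
    vals, vecs = np.linalg.eigh(adj)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def nominate(adj1, adj2, x, seeds1, seeds2, d=2):
    """Rank vertices of graph 2 by embedding distance to vertex x of
    graph 1, after aligning the embeddings via orthogonal Procrustes
    on the seed vertices with known correspondence."""
    X1, X2 = ase(adj1, d), ase(adj2, d)
    U, _, Vt = np.linalg.svd(X1[seeds1].T @ X2[seeds2])
    X2_aligned = X2 @ (U @ Vt).T  # rotate graph-2 embedding onto graph 1
    dists = np.linalg.norm(X2_aligned - X1[x], axis=1)
    return np.argsort(dists)  # top of the list = most similar to x
```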
2012.00433
|
Yao Hu
|
Yao Hu, Guohua Geng, Kang Li, Wei Zhou
|
Unsupervised Segmentation for Terracotta Warrior Point Cloud (SRG-Net)
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The restoration of the terracotta warriors in the Emperor Qinshihuang
Mausoleum Site Museum is handcrafted by experts, and the growing number of
unearthed fragments makes it increasingly challenging for archaeologists to
restore the terracotta warriors efficiently. We hope to segment the 3D point
cloud data of the terracotta warriors automatically and store the fragment
data in a database to assist archaeologists in matching actual fragments with
the ones in the database, which could improve the repairing efficiency.
Moreover, existing 3D neural network research mainly focuses on supervised
classification, clustering, unsupervised representation, and reconstruction;
few studies concentrate on unsupervised point cloud part segmentation. In this
paper, we present SRG-Net for 3D point clouds of terracotta warriors to
address these problems. First, we adopt a customized seed-region-growing
algorithm to segment the point cloud coarsely. Then we present supervised
segmentation and unsupervised reconstruction networks to learn the
characteristics of 3D point clouds. Finally, we combine the SRG algorithm with
our improved CNN (convolutional neural network) using a refinement method.
This pipeline, called SRG-Net, aims at conducting segmentation tasks on the
terracotta warriors. Our proposed SRG-Net is evaluated on the terracotta
warrior data and the ShapeNet dataset by measuring accuracy and latency. The
experimental results show that our SRG-Net outperforms the state-of-the-art
methods. Our code is available at https://github.com/hyoau/SRG-Net.
|
[
{
"version": "v1",
"created": "Tue, 1 Dec 2020 12:02:55 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 10:23:29 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Hu",
"Yao",
""
],
[
"Geng",
"Guohua",
""
],
[
"Li",
"Kang",
""
],
[
"Zhou",
"Wei",
""
]
] |
new_dataset
| 0.990549 |
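To make the coarse stage concrete, here is a minimal seed-region-growing sketch in Python. It is not the authors' customized algorithm: it assumes per-point normals are given, and grows regions by normal similarity within a fixed radius; both thresholds are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def seed_region_growing(points, normals, radius=0.05, normal_thresh=0.9):
    """Grow regions from unvisited seeds, absorbing neighbors whose
    normals are similar enough; returns an integer label per point."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue  # already absorbed into an earlier region
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1 and abs(normals[i] @ normals[j]) > normal_thresh:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels
```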
2012.09424
|
Zelong Yang
|
Zelong Yang, Yan Wang, Piji Li, Shaobin Lin, Shuming Shi, Shao-Lun
Huang, Wei Bi
|
Predicting Events in MOBA Games: Prediction, Attribution, and Evaluation
| null | null |
10.1109/TG.2022.3159704
| null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Multiplayer online battle arena (MOBA) games have become increasingly
popular in recent years. Consequently, many efforts have been devoted to
providing pre-game or in-game predictions for them. However, these works are
limited in the following two aspects: 1) the lack of sufficient in-game
features; 2) the absence of interpretability in the prediction results. These
two limitations greatly restrict the practical performance and industrial
application of the current works. In this work, we collect and release a
large-scale dataset containing rich in-game features for the popular MOBA game
Honor of Kings. We then propose to predict four types of important events in an
interpretable way by attributing the predictions to the input features using
two gradient-based attribution methods: Integrated Gradients and SmoothGrad. To
evaluate the explanatory power of different models and attribution methods, a
fidelity-based evaluation metric is further proposed. Finally, we evaluate the
accuracy and fidelity of several competitive methods on the collected dataset
to assess how well machines predict events in MOBA games.
|
[
{
"version": "v1",
"created": "Thu, 17 Dec 2020 07:28:35 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Dec 2020 07:42:51 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Dec 2020 07:47:19 GMT"
},
{
"version": "v4",
"created": "Tue, 22 Mar 2022 06:54:14 GMT"
},
{
"version": "v5",
"created": "Mon, 28 Mar 2022 14:12:55 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Yang",
"Zelong",
""
],
[
"Wang",
"Yan",
""
],
[
"Li",
"Piji",
""
],
[
"Lin",
"Shaobin",
""
],
[
"Shi",
"Shuming",
""
],
[
"Huang",
"Shao-Lun",
""
],
[
"Bi",
"Wei",
""
]
] |
new_dataset
| 0.994929 |
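One of the two attribution methods named in this abstract, Integrated Gradients, is compact enough to sketch. The following is a generic PyTorch implementation of the published formula (Sundararajan et al., 2017), not the authors' exact code; the zero baseline and 50 steps are common defaults, not choices taken from the paper.

```python
import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    """Integrated Gradients: average the gradients of the model output
    along the straight path from the baseline to the input, scaled by
    (input - baseline)."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # common default, an assumption here
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)  # (steps, *x.shape)
    path.requires_grad_(True)
    grads = torch.autograd.grad(model(path).sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)
```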
2012.10821
|
Sebastiano Vascon Mr
|
Sebastiano Vascon, Sinem Aslan, Gianluca Bigaglia, Lorenzo Giudice,
Marcello Pelillo
|
Transductive Visual Verb Sense Disambiguation
|
Accepted at the IEEE Workshop on Application of Computer Vision 2021
| null |
10.1109/WACV48630.2021.00309
| null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Verb Sense Disambiguation is a well-known task in NLP whose aim is to find the
correct sense of a verb in a sentence. Recently, this problem has been
extended to a multimodal scenario by exploiting both textual and visual
features of ambiguous verbs, leading to a new problem: Visual Verb Sense
Disambiguation (VVSD). Here, the sense of a verb is assigned considering the content of an
image paired with it rather than a sentence in which the verb appears.
Annotating a dataset for this task is more complex than for textual disambiguation,
because assigning the correct sense to a pair of $<$image, verb$>$ requires
both non-trivial linguistic and visual skills. In this work, differently from
the literature, the VVSD task will be performed in a transductive
semi-supervised learning (SSL) setting, in which only a small amount of labeled
information is required, tremendously reducing the need for annotated data.
The disambiguation process is based on a graph-based label propagation method
which takes into account monomodal or multimodal representations for $<$image, verb$>$
pairs. Experiments have been carried out on the recently published dataset
VerSe, the only available dataset for this task. The achieved results
outperform the current state-of-the-art by a large margin while using only a
small fraction of labeled samples per sense. Code available:
https://github.com/GiBg1aN/TVVSD.
|
[
{
"version": "v1",
"created": "Sun, 20 Dec 2020 01:07:30 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Vascon",
"Sebastiano",
""
],
[
"Aslan",
"Sinem",
""
],
[
"Bigaglia",
"Gianluca",
""
],
[
"Giudice",
"Lorenzo",
""
],
[
"Pelillo",
"Marcello",
""
]
] |
new_dataset
| 0.999722 |
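For intuition, here is a generic graph-based label propagation sketch in the style of Zhou et al. (2004); the paper's actual propagation method may differ, and `W`, `Y`, `alpha`, and the iteration count are illustrative. Rows of `Y` are one-hot sense labels for the few labeled <image, verb> pairs and zeros elsewhere.

```python
import numpy as np

def label_propagation(W, Y, labeled, alpha=0.99, iters=100):
    """Spread the few labeled senses over a similarity graph W.
    Y holds one-hot rows for labeled pairs and zeros elsewhere."""
    D = W.sum(axis=1) + 1e-12                # degree, guarded against zeros
    S = W / np.sqrt(np.outer(D, D))          # symmetric normalization
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y  # propagate, then re-inject labels
    preds = F.argmax(axis=1)
    preds[labeled] = Y[labeled].argmax(axis=1)  # keep the given labels fixed
    return preds
```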
2103.17235
|
Debesh Jha
|
Nikhil Kumar Tomar, Debesh Jha, Michael A. Riegler, H{\aa}vard D.
Johansen, Dag Johansen, Jens Rittscher, P{\aa}l Halvorsen, and Sharib Ali
|
FANet: A Feedback Attention Network for Improved Biomedical Image
Segmentation
| null |
IEEE Transactions on Neural Networks and Learning Systems, 2022
| null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing availability of large clinical and experimental datasets has
led to many important contributions in the area of biomedical image analysis.
Image segmentation, which is crucial for any
quantitative analysis, has especially attracted attention. Recent hardware
advancement has led to the success of deep learning approaches. However,
although deep learning models are being trained on large datasets, existing
methods do not use the information from different learning epochs effectively.
In this work, we leverage the information of each training epoch to prune the
prediction maps of the subsequent epochs. We propose a novel architecture
called feedback attention network (FANet) that unifies the previous epoch mask
with the feature map of the current training epoch. The previous epoch mask is
then used to provide a hard attention to the learned feature maps at different
convolutional layers. The network also allows the predictions to be rectified
in an iterative fashion at test time. We show that our proposed
\textit{feedback attention} model provides a substantial improvement on most
segmentation metrics tested on seven publicly available biomedical imaging
datasets demonstrating the effectiveness of FANet. The source code is available
at \url{https://github.com/nikhilroxtomar/FANet}.
|
[
{
"version": "v1",
"created": "Wed, 31 Mar 2021 17:34:20 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jan 2022 03:25:47 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Mar 2022 18:17:11 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Tomar",
"Nikhil Kumar",
""
],
[
"Jha",
"Debesh",
""
],
[
"Riegler",
"Michael A.",
""
],
[
"Johansen",
"Håvard D.",
""
],
[
"Johansen",
"Dag",
""
],
[
"Rittscher",
"Jens",
""
],
[
"Halvorsen",
"Pål",
""
],
[
"Ali",
"Sharib",
""
]
] |
new_dataset
| 0.961408 |
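The feedback mechanism can be caricatured in a few lines. This is a hedged sketch of the idea, not the exact FANet operation: the previous epoch's binarized mask gates the current feature maps as hard attention, with a residual path so regions outside the mask are not silenced entirely.

```python
import torch

def feedback_hard_attention(features, prev_mask):
    """Gate current feature maps with the binarized mask from the
    previous epoch; the residual term keeps gradients flowing in
    regions the previous mask marked as background."""
    hard = (prev_mask > 0.5).float()   # (B, 1, H, W) -> hard attention map
    return features * (1.0 + hard)     # emphasize previously-masked regions
```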
2104.00501
|
Alexander Renz-Wieland
|
Alexander Renz-Wieland, Rainer Gemulla, Zoi Kaoudi, Volker Markl
|
NuPS: A Parameter Server for Machine Learning with Non-Uniform Parameter
Access
|
SIGMOD '22
| null |
10.1145/3514221.3517860
| null |
cs.DB cs.DC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Parameter servers (PSs) facilitate the implementation of distributed training
for large machine learning tasks. In this paper, we argue that existing PSs are
inefficient for tasks that exhibit non-uniform parameter access; their
performance may even fall behind that of single node baselines. We identify two
major sources of such non-uniform access: skew and sampling. Existing PSs are
ill-suited for managing skew because they uniformly apply the same parameter
management technique to all parameters. They are inefficient for sampling
because the PS is oblivious to the associated randomized accesses and cannot
exploit locality. To overcome these performance limitations, we introduce NuPS,
a novel PS architecture that (i) integrates multiple management techniques and
employs a suitable technique for each parameter and (ii) supports sampling
directly via suitable sampling primitives and sampling schemes that allow for a
controlled quality--efficiency trade-off. In our experimental study, NuPS
outperformed existing PSs by up to one order of magnitude and provided up to
linear scalability across multiple machine learning tasks.
|
[
{
"version": "v1",
"created": "Thu, 1 Apr 2021 14:52:32 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Dec 2021 09:30:36 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2022 07:36:27 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Renz-Wieland",
"Alexander",
""
],
[
"Gemulla",
"Rainer",
""
],
[
"Kaoudi",
"Zoi",
""
],
[
"Markl",
"Volker",
""
]
] |
new_dataset
| 0.998364 |
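A toy illustration of the "suitable technique per parameter" idea, not the NuPS implementation: classify keys by observed access frequency, replicating the hot head of the distribution and relocating the long tail. The cutoff fraction and technique names are invented for the example.

```python
from collections import Counter

def assign_techniques(access_log, hot_fraction=0.01):
    """Classify keys by access frequency: replicate the hot head of the
    distribution, relocate the long tail on demand."""
    counts = Counter(access_log)
    cutoff = max(1, int(len(counts) * hot_fraction))
    hot = {k for k, _ in counts.most_common(cutoff)}
    return {k: ("replicate" if k in hot else "relocate") for k in counts}

# Example: key "a" dominates the log, so it alone gets replicated.
plan = assign_techniques(["a"] * 98 + ["b", "c"], hot_fraction=0.34)
```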
2106.13264
|
Eli Chien
|
Eli Chien, Chao Pan, Jianhao Peng, Olgica Milenkovic
|
You are AllSet: A Multiset Function Framework for Hypergraph Neural
Networks
|
ICLR 2022
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Hypergraphs are used to model higher-order interactions amongst agents and
there exist many practically relevant instances of hypergraph datasets. To
enable efficient processing of hypergraph-structured data, several hypergraph
neural network platforms have been proposed for learning hypergraph properties
and structure, with a special focus on node classification. However, almost all
existing methods use heuristic propagation rules and offer suboptimal
performance on many datasets. We propose AllSet, a new hypergraph neural
network paradigm that represents a highly general framework for (hyper)graph
neural networks and for the first time implements hypergraph neural network
layers as compositions of two multiset functions that can be efficiently
learned for each task and each dataset. Furthermore, AllSet draws on new
connections between hypergraph neural networks and recent advances in deep
learning of multiset functions. In particular, the proposed architecture
utilizes Deep Sets and Set Transformer architectures that allow for significant
modeling flexibility and offer high expressive power. To evaluate the
performance of AllSet, we conduct the most extensive experiments to date
involving ten known benchmarking datasets and three newly curated datasets that
represent significant challenges for hypergraph node classification. The
results demonstrate that AllSet has the unique ability to consistently either
match or outperform all other hypergraph neural networks across the tested
datasets.
|
[
{
"version": "v1",
"created": "Thu, 24 Jun 2021 18:10:08 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Mar 2022 18:32:31 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Mar 2022 17:13:40 GMT"
},
{
"version": "v4",
"created": "Mon, 28 Mar 2022 15:39:11 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Chien",
"Eli",
""
],
[
"Pan",
"Chao",
""
],
[
"Peng",
"Jianhao",
""
],
[
"Milenkovic",
"Olgica",
""
]
] |
new_dataset
| 0.997179 |
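The two-multiset-function view is easy to sketch in PyTorch. This is a simplified caricature, not the paper's AllSet layers (which use Deep Sets and Set Transformer blocks): sum pooling over a 0/1 incidence matrix plays the role of the multiset aggregation in both directions, and the MLP widths are illustrative.

```python
import torch
import torch.nn as nn

class TwoMultisetLayer(nn.Module):
    """Hypergraph layer as two learned multiset functions: aggregate
    member nodes into each hyperedge, then aggregate incident
    hyperedges back into each node (sum pooling + MLP)."""
    def __init__(self, dim):
        super().__init__()
        self.f_v2e = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.f_e2v = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, X, H):
        # X: (num_nodes, dim) node features; H: (num_nodes, num_edges) 0/1 incidence.
        E = self.f_v2e(H.T @ X)   # multiset of member nodes -> edge features
        return self.f_e2v(H @ E)  # multiset of incident edges -> node features
```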
2107.01281
|
Jean-Baptiste Mouret
|
Luigi Penco, Jean-Baptiste Mouret, Serena Ivaldi
|
Prescient teleoperation of humanoid robots
|
Video: https://www.youtube.com/watch?v=N3u4ot3aIyQ
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humanoid robots could be versatile and intuitive human avatars that operate
remotely in inaccessible places: the robot could reproduce in the remote
location the movements of an operator equipped with a wearable motion capture
device while sending visual feedback to the operator. While substantial
progress has been made on transferring ("retargeting") human motions to
humanoid robots, a major problem preventing the deployment of such systems in
real applications is the presence of communication delays between the human
input and the feedback from the robot: even a few hundred milliseconds of delay
can irreversibly disturb the operator, let alone a few seconds. To overcome
these delays, we introduce a system in which a humanoid robot executes commands
before it actually receives them, so that the visual feedback appears to be
synchronized to the operator, whereas the robot executed the commands in the
past. To do so, the robot continuously predicts future commands by querying a
machine learning model that is trained on past trajectories and conditioned on
the last received commands. In our experiments, an operator was able to
successfully control a humanoid robot (32 degrees of freedom) with stochastic
delays up to 2 seconds in several whole-body manipulation tasks, including
reaching different targets, picking up a bottle, and placing a box at distinct
locations.
|
[
{
"version": "v1",
"created": "Fri, 2 Jul 2021 21:10:35 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jul 2021 15:26:57 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2022 16:14:51 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Penco",
"Luigi",
""
],
[
"Mouret",
"Jean-Baptiste",
""
],
[
"Ivaldi",
"Serena",
""
]
] |
new_dataset
| 0.986995 |
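A minimal sketch of the prediction component described in this abstract, assuming a GRU over the history of received commands; the architecture, the 32-DoF command vector, and the horizon length are illustrative, not the authors' model.

```python
import torch
import torch.nn as nn

class CommandForecaster(nn.Module):
    """Predict the next `horizon` command vectors from the history of
    (delayed) received commands, so the robot can act before the real
    commands arrive."""
    def __init__(self, dof=32, hidden=128, horizon=20):
        super().__init__()
        self.rnn = nn.GRU(dof, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dof * horizon)
        self.dof, self.horizon = dof, horizon

    def forward(self, past_commands):        # (batch, time, dof)
        _, h = self.rnn(past_commands)
        out = self.head(h[-1])               # (batch, dof * horizon)
        return out.view(-1, self.horizon, self.dof)
```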
2107.13167
|
Wei Zhou
|
Yao Hu, Guohua Geng, Kang Li, Wei Zhou, Xingxing Hao, Xin Cao
|
Unsupervised Segmentation for Terracotta Warrior with
Seed-Region-Growing CNN (SRG-Net)
|
arXiv admin note: substantial text overlap with arXiv:2012.00433
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The restoration of the terracotta warriors in the Emperor Qinshihuang
Mausoleum Site Museum is handcrafted by experts, and the growing number of
unearthed fragments makes it increasingly challenging for archaeologists to
restore the terracotta warriors efficiently. We hope to segment the 3D point
cloud data of the terracotta warriors automatically and store the fragment
data in a database to assist archaeologists in matching actual fragments with
the ones in the database, which could improve the repairing efficiency.
Moreover, existing 3D neural network research mainly focuses on supervised
classification, clustering, unsupervised representation, and reconstruction;
few studies concentrate on unsupervised point cloud part segmentation. In this
paper, we present SRG-Net for 3D point clouds of terracotta warriors to
address these problems. First, we adopt a customized seed-region-growing
algorithm to segment the point cloud coarsely. Then we present supervised
segmentation and unsupervised reconstruction networks to learn the
characteristics of 3D point clouds. Finally, we combine the SRG algorithm with
our improved CNN using a refinement method. This pipeline, called SRG-Net,
aims at conducting segmentation tasks on the terracotta warriors. Our proposed
SRG-Net is evaluated on the terracotta warrior data and the ShapeNet dataset
by measuring accuracy and latency. The experimental results show that our
SRG-Net outperforms the state-of-the-art methods. Our code is shown in Code File
1~\cite{Srgnet_2021}.
|
[
{
"version": "v1",
"created": "Wed, 28 Jul 2021 04:50:27 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Hu",
"Yao",
""
],
[
"Geng",
"Guohua",
""
],
[
"Li",
"Kang",
""
],
[
"Zhou",
"Wei",
""
],
[
"Hao",
"Xingxing",
""
],
[
"Cao",
"Xin",
""
]
] |
new_dataset
| 0.995474 |
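Complementing the coarse-stage sketch given for the earlier version of this work above, here is an illustrative refinement step, not the authors' exact method: within each coarse SRG region, the final part label is decided by a majority vote over the network's per-point predictions.

```python
import numpy as np

def refine_with_regions(srg_labels, net_logits):
    """Within each coarse SRG region, adopt the part label the network
    assigns most often (region-level majority vote)."""
    net_labels = net_logits.argmax(axis=1)
    refined = net_labels.copy()
    for region in np.unique(srg_labels):
        idx = srg_labels == region
        refined[idx] = np.bincount(net_labels[idx]).argmax()
    return refined
```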
2108.06712
|
Haoyu Dong
|
Zhoujun Cheng, Haoyu Dong, Zhiruo Wang, Ran Jia, Jiaqi Guo, Yan Gao,
Shi Han, Jian-Guang Lou, Dongmei Zhang
|
HiTab: A Hierarchical Table Dataset for Question Answering and Natural
Language Generation
|
ACL'22 main track
| null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tables are often created with hierarchies, but existing works on table
reasoning mainly focus on flat tables and neglect hierarchical tables.
Hierarchical tables challenge existing methods by hierarchical indexing, as
well as implicit relationships of calculation and semantics. This work presents
HiTab, a free and open dataset to study question answering (QA) and natural
language generation (NLG) over hierarchical tables. HiTab is a cross-domain
dataset constructed from a wealth of statistical reports (analyses) and
Wikipedia pages, and has unique characteristics: (1) nearly all tables are
hierarchical, and (2) both target sentences for NLG and questions for QA are
revised from original, meaningful, and diverse descriptive sentences authored
by analysts and professions of reports. (3) to reveal complex numerical
reasoning in statistical analyses, we provide fine-grained annotations of
entity and quantity alignment. HiTab provides 10,686 QA pairs and descriptive
sentences with well-annotated quantity and entity alignment on 3,597 tables
with broad coverage of table hierarchies and numerical reasoning types.
Targeting hierarchical structure, we devise a novel hierarchy-aware logical
form for symbolic reasoning over tables, which shows high effectiveness.
Targeting complex numerical reasoning, we propose partially supervised training
given annotations of entity and quantity alignment, which helps models to
largely reduce spurious predictions in the QA task. In the NLG task, we find
that entity and quantity alignment also helps NLG models to generate better
results in a conditional generation setting. Experiment results of
state-of-the-art baselines suggest that this dataset presents a strong
challenge and a valuable benchmark for future research.
|
[
{
"version": "v1",
"created": "Sun, 15 Aug 2021 10:14:21 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Aug 2021 10:27:30 GMT"
},
{
"version": "v3",
"created": "Sat, 26 Mar 2022 14:32:23 GMT"
}
] | 2022-03-29T00:00:00 |
[
[
"Cheng",
"Zhoujun",
""
],
[
"Dong",
"Haoyu",
""
],
[
"Wang",
"Zhiruo",
""
],
[
"Jia",
"Ran",
""
],
[
"Guo",
"Jiaqi",
""
],
[
"Gao",
"Yan",
""
],
[
"Han",
"Shi",
""
],
[
"Lou",
"Jian-Guang",
""
],
[
"Zhang",
"Dongmei",
""
]
] |
new_dataset
| 0.999674 |
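To ground the "hierarchical indexing" challenge named in this abstract, here is a minimal Python sketch of a hierarchical header tree and path resolution; the structure and names are illustrative and are not HiTab's actual annotation schema or hierarchy-aware logical form.

```python
from dataclasses import dataclass, field

@dataclass
class HeaderNode:
    """One cell in a hierarchical header: inner nodes group children,
    leaves own a row/column index in the value matrix."""
    name: str
    index: int = -1
    children: list = field(default_factory=list)

def resolve(node, path):
    """Follow a path of header names (e.g. ['2019', 'Q4']) to a leaf index."""
    if not path:
        return node.index
    for child in node.children:
        if child.name == path[0]:
            return resolve(child, path[1:])
    raise KeyError(path[0])

# Example: a two-level column header where '2019' -> 'Q4' is column 7.
root = HeaderNode("cols", children=[
    HeaderNode("2019", children=[HeaderNode("Q3", 6), HeaderNode("Q4", 7)])])
assert resolve(root, ["2019", "Q4"]) == 7
```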