id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2107.08178
|
Sunanda Thunder
|
Sunanda Thunder, Parthasarathi Pal, Yeong-Her Wang, Po-Tsang Huang
|
Ultra Low Power 3D-Embedded Convolutional Neural Network Cube Based on
$\alpha$-IGZO Nanosheet and Bi-Layer Resistive Memory
|
Accepted in ICICDT2021
| null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose and evaluate the performance of a 3D-embedded
neuromorphic computation block based on indium gallium zinc oxide
($\alpha$-IGZO) nanosheet transistors and bi-layer resistive memory
devices. We have fabricated bi-layer resistive random-access memory (RRAM)
devices with Ta$_2$O$_5$ and Al$_2$O$_3$ layers. The device has been
characterized and modeled. The compact models of RRAM and $\alpha$-IGZO based
embedded nanosheet structures have been used to evaluate the system-level
performance of 8 vertically stacked $\alpha$-IGZO based nanosheet layers with
RRAM for neuromorphic applications. The model considers the design space with
uniform bit line (BL), select line (SL), and word line (WL) resistance.
Finally, we have simulated the weighted sum operation with our proposed 8-layer
stacked nanosheet-based embedded memory and evaluated the performance of a
VGG-16 convolutional neural network (CNN) on Fashion-MNIST and CIFAR-10
recognition, which yielded 92% and 75% accuracy, respectively, with dropout
layers amid device variation.
|
[
{
"version": "v1",
"created": "Sat, 17 Jul 2021 04:20:13 GMT"
}
] | 2022-05-01T00:00:00 |
[
[
"Thunder",
"Sunanda",
""
],
[
"Pal",
"Parthasarathi",
""
],
[
"Wang",
"Yeong-Her",
""
],
[
"Huang",
"Po-Tsang",
""
]
] |
new_dataset
| 0.972274 |
2004.13316
|
Xue Yang
|
Xue Yang, Junchi Yan, Wenlong Liao, Xiaokang Yang, Jin Tang, Tao He
|
SCRDet++: Detecting Small, Cluttered and Rotated Objects via
Instance-Level Feature Denoising and Rotation Loss Smoothing
|
15 pages, 12 figures, 11 tables, accepted by TPAMI
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Small and cluttered objects are common in real-world scenes and are
challenging to detect. The difficulty is further pronounced when the objects
are rotated, as traditional detectors routinely locate objects with horizontal
bounding boxes, such that the region of interest is contaminated with
background or nearby interleaved objects. In this paper, we first introduce
the idea of denoising to object detection. Instance-level denoising on the
feature map is performed to enhance the detection of small and cluttered
objects. To handle rotation variation, we also add a novel IoU-constant factor
to the smooth L1 loss to address the long-standing boundary problem, which, by
our analysis, is mainly caused by the periodicity of angle (PoA) and
exchangeability of edges (EoE). Combining these two features, our proposed
detector is termed SCRDet++. Extensive experiments are performed on the large
public aerial image datasets DOTA, DIOR, and UCAS-AOD, as well as the natural
image dataset COCO, the scene text dataset ICDAR2015, the small traffic light
dataset BSTLD, and S$^2$TLD, released with this paper. The results show the
effectiveness of our approach. The released S$^2$TLD dataset is made publicly
available; it contains 5,786 images with 14,130 traffic light instances across
five categories.
|
[
{
"version": "v1",
"created": "Tue, 28 Apr 2020 06:03:54 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2022 07:24:19 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Yang",
"Xue",
""
],
[
"Yan",
"Junchi",
""
],
[
"Liao",
"Wenlong",
""
],
[
"Yang",
"Xiaokang",
""
],
[
"Tang",
"Jin",
""
],
[
"He",
"Tao",
""
]
] |
new_dataset
| 0.99953 |
2103.13439
|
Won Ik Cho
|
Won Ik Cho, Sangwhan Moon, Jong In Kim, Seok Min Kim, Nam Soo Kim
|
StyleKQC: A Style-Variant Paraphrase Corpus for Korean Questions and
Commands
|
LREC 2022 Camera-ready
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Paraphrasing is often performed with little concern for controlled style
conversion. Especially for questions and commands, style-variant paraphrasing
can be crucial for tone and manner, which also matters for industrial
applications such as dialog systems. In this paper, we address this issue with
a corpus construction scheme that simultaneously considers the core content and
style of directives, namely intent and formality, for the Korean language.
Utilizing manually generated natural language queries on six daily topics, we
expand the corpus to formal and informal sentences by human rewriting and
transferring. We verify the validity and industrial applicability of our
approach by checking the adequate classification and inference performance
achieved with conventional fine-tuning approaches, while at the same time
proposing a supervised formality transfer task.
|
[
{
"version": "v1",
"created": "Wed, 24 Mar 2021 18:38:53 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2022 01:57:39 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Cho",
"Won Ik",
""
],
[
"Moon",
"Sangwhan",
""
],
[
"Kim",
"Jong In",
""
],
[
"Kim",
"Seok Min",
""
],
[
"Kim",
"Nam Soo",
""
]
] |
new_dataset
| 0.999125 |
2107.09153
|
Santosh Kumar Singh
|
S. K. Singh, V. S. Borkar, G. S. Kasbekar
|
User Association in Dense mmWave Networks as Restless Bandits
|
11 pages, 7 figures
| null | null | null |
cs.IT cs.SY eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We study the problem of user association, i.e., determining which base
station (BS) a user should associate with, in a dense millimeter wave (mmWave)
network. In our system model, in each time slot, a user arrives with some
probability in a region with a relatively small geographical area served by a
dense mmWave network. Our goal is to devise an association policy under which,
in each time slot in which a user arrives, the user is assigned to exactly one
BS so as to minimize the weighted average amount of time that users spend in the
system. The above problem is a restless multi-armed bandit problem and is
provably hard to solve. We prove that the problem is Whittle indexable, and
based on this result, propose an association policy under which an arriving
user is associated with the BS having the smallest Whittle index. Using
simulations, we show that our proposed policy outperforms several user
association policies proposed in prior work.
|
[
{
"version": "v1",
"created": "Fri, 16 Jul 2021 13:03:31 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Dec 2021 06:34:05 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Apr 2022 11:08:01 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Singh",
"S. K.",
""
],
[
"Borkar",
"V. S.",
""
],
[
"Kasbekar",
"G. S.",
""
]
] |
new_dataset
| 0.952239 |
2109.03587
|
Yequan Wang
|
Yiyi Liu, Yequan Wang, Aixin Sun, Xuying Meng, Jing Li, Jiafeng Guo
|
A Dual-Channel Framework for Sarcasm Recognition by Detecting Sentiment
Conflict
|
Accepted by Findings of NAACL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Sarcasm employs ambivalence: one says something positive but actually means
something negative, and vice versa. The essence of sarcasm, which is also a
sufficient and necessary condition for it, is the conflict between the literal
and implied sentiments expressed in one sentence. However, it is difficult to
recognize such sentiment conflict because the sentiments are mixed or even
implicit. As a result, recognizing such sophisticated and obscure sentiment
poses a great challenge to sarcasm detection. In this paper, we propose a Dual-Channel
Framework by modeling both literal and implied sentiments separately. Based on
this dual-channel framework, we design the Dual-Channel Network~(DC-Net) to
recognize sentiment conflict. Experiments on political debate datasets (i.e., IAC-V1 and
IAC-V2) and Twitter datasets show that our proposed DC-Net achieves
state-of-the-art performance on sarcasm recognition. Our code is released to
support research.
|
[
{
"version": "v1",
"created": "Wed, 8 Sep 2021 12:33:19 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2022 08:14:33 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Liu",
"Yiyi",
""
],
[
"Wang",
"Yequan",
""
],
[
"Sun",
"Aixin",
""
],
[
"Meng",
"Xuying",
""
],
[
"Li",
"Jing",
""
],
[
"Guo",
"Jiafeng",
""
]
] |
new_dataset
| 0.997397 |
2109.14124
|
Ari Seff
|
Ari Seff, Wenda Zhou, Nick Richardson, Ryan P. Adams
|
Vitruvion: A Generative Model of Parametric CAD Sketches
|
ICLR camera ready
|
ICLR 2022
| null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parametric computer-aided design (CAD) tools are the predominant way that
engineers specify physical structures, from bicycle pedals to airplanes to
printed circuit boards. The key characteristic of parametric CAD is that design
intent is encoded not only via geometric primitives, but also by parameterized
constraints between the elements. This relational specification can be viewed
as the construction of a constraint program, allowing edits to coherently
propagate to other parts of the design. Machine learning offers the intriguing
possibility of accelerating the design process via generative modeling of these
structures, enabling new tools such as autocompletion, constraint inference,
and conditional synthesis. In this work, we present such an approach to
generative modeling of parametric CAD sketches, which constitute the basic
computational building blocks of modern mechanical design. Our model, trained
on real-world designs from the SketchGraphs dataset, autoregressively
synthesizes sketches as sequences of primitives, with initial coordinates, and
constraints that reference back to the sampled primitives. As samples from the
model match the constraint graph representation used in standard CAD software,
they may be directly imported, solved, and edited according to downstream
design tasks. In addition, we condition the model on various contexts,
including partial sketches (primers) and images of hand-drawn sketches.
Evaluation of the proposed approach demonstrates its ability to synthesize
realistic CAD sketches and its potential to aid the mechanical design workflow.
|
[
{
"version": "v1",
"created": "Wed, 29 Sep 2021 01:02:30 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2022 12:34:26 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Seff",
"Ari",
""
],
[
"Zhou",
"Wenda",
""
],
[
"Richardson",
"Nick",
""
],
[
"Adams",
"Ryan P.",
""
]
] |
new_dataset
| 0.999486 |
2110.03855
|
Tongguang Yu
|
Tongguang Yu, Yixin Xu, Shan Deng, Zijian Zhao, Nicolas Jao, You Sung
Kim, Stefan Duenkel, Sven Beyer, Kai Ni, Sumitha George, Vijaykrishnan
Narayanan
|
Hardware Functional Obfuscation With Ferroelectric Active Interconnects
| null |
Nat Commun 13, 2235 (2022)
|
10.1038/s41467-022-29795-3
| null |
cs.ET cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Camouflaging gate techniques are typically used in hardware security to
prevent reverse engineering. Layout level camouflaging by adding dummy contacts
ensures some level of protection against extracting the correct netlist.
Threshold voltage manipulation for multi-functional logic with identical
layouts has also been introduced for functional obfuscation. All these
techniques are implemented at the expense of circuit-complexity and with
significant area, energy, and delay penalty. In this paper, we propose an
efficient hardware encryption technique with minimal complexity and overheads
based on ferroelectric field-effect transistor (FeFET) active interconnects.
The active interconnect provides run-time reconfigurable inverter-buffer logic
by utilizing the threshold voltage programmability of the FeFETs. Our method
utilizes only two FeFETs and an inverter to realize the masking function
compared to recent reconfigurable logic gate implementations using several
FeFETs and complex differential logic. We fabricate the proposed circuit and
demonstrate its functionality. Judicious placement of the proposed logic in the
IC makes it act as a hardware encryption key and enables encoding and decoding
of the functional output without affecting the critical-path timing delay.
Also, we achieve comparable encryption probability with a limited number of
encryption units. In addition, we show a peripheral programming scheme for
reconfigurable logic by reusing the existing scan chain logic, hence obviating
the need for specialized programming logic and circuitry for keybit
distribution. Our analysis shows an average encryption probability of 97.43%,
with a delay increase of 2.24%/3.67% for the most critical path/the sum of 100
critical paths on ISCAS85 benchmarks.
|
[
{
"version": "v1",
"created": "Fri, 8 Oct 2021 01:53:27 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 19:28:49 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Yu",
"Tongguang",
""
],
[
"Xu",
"Yixin",
""
],
[
"Deng",
"Shan",
""
],
[
"Zhao",
"Zijian",
""
],
[
"Jao",
"Nicolas",
""
],
[
"Kim",
"You Sung",
""
],
[
"Duenkel",
"Stefan",
""
],
[
"Beyer",
"Sven",
""
],
[
"Ni",
"Kai",
""
],
[
"George",
"Sumitha",
""
],
[
"Narayanan",
"Vijaykrishnan",
""
]
] |
new_dataset
| 0.992294 |
2201.05123
|
Andrey Kutuzov
|
Andrey Kutuzov, Samia Touileb, Petter M{\ae}hlum, Tita Ranveig Enstad,
Alexandra Wittemann
|
NorDiaChange: Diachronic Semantic Change Dataset for Norwegian
|
LREC'2022 proceedings
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We describe NorDiaChange: the first diachronic semantic change dataset for
Norwegian. NorDiaChange comprises two novel subsets, covering about 80
Norwegian nouns manually annotated with graded semantic change over time. Both
datasets follow the same annotation procedure and can be used interchangeably
as train and test splits for each other. NorDiaChange covers the time periods
related to pre- and post-war events, oil and gas discovery in Norway, and
technological developments. The annotation was done using the DURel framework
and two large historical Norwegian corpora. NorDiaChange is published in full
under a permissive licence, complete with raw annotation data and inferred
diachronic word usage graphs (DWUGs).
|
[
{
"version": "v1",
"created": "Thu, 13 Jan 2022 18:27:33 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 22:29:23 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Kutuzov",
"Andrey",
""
],
[
"Touileb",
"Samia",
""
],
[
"Mæhlum",
"Petter",
""
],
[
"Enstad",
"Tita Ranveig",
""
],
[
"Wittemann",
"Alexandra",
""
]
] |
new_dataset
| 0.999733 |
2204.00951
|
Ioannis Papoutsis
|
Dimitrios Sykas, Maria Sdraka, Dimitrios Zografakis, Ioannis Papoutsis
|
A Sentinel-2 multi-year, multi-country benchmark dataset for crop
classification and segmentation with deep learning
|
This work has been accepted for publication in IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing. Copyright
may be transferred without notice, after which this version may no longer be
accessible
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this work we introduce Sen4AgriNet, a Sentinel-2 based, multi-country,
time-series benchmark dataset tailored for agricultural monitoring applications
with Machine and Deep Learning. The Sen4AgriNet dataset is annotated from
farmer declarations collected via the Land Parcel Identification System (LPIS),
harmonizing country-wide labels. These declarations have only recently been
made available as open data, allowing for the first time the labeling of
satellite imagery from ground truth data. We proceed to propose and standardise
a new crop type taxonomy across Europe that addresses Common Agriculture Policy
(CAP) needs, based on the Food and Agriculture Organization (FAO) Indicative
Crop Classification scheme. Sen4AgriNet is the only multi-country, multi-year
dataset that includes all spectral information. It is constructed to cover the
period 2016-2020 for Catalonia and France, while it can be extended to include
additional countries. Currently, it contains 42.5 million parcels, which makes
it significantly larger than other available archives. We extract two
sub-datasets to highlight its value for diverse Deep Learning applications; the
Object Aggregated Dataset (OAD) and the Patches Assembled Dataset (PAD). The
OAD capitalizes on the zonal statistics of each parcel, thus creating a
powerful label-to-features instance for classification algorithms. On the other
hand, the PAD structure generalizes the classification problem to parcel
extraction, semantic segmentation, and labeling. The PAD and OAD are examined under three
different scenarios to showcase and model the effects of spatial and temporal
variability across different years and different countries.
|
[
{
"version": "v1",
"created": "Sat, 2 Apr 2022 23:14:46 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 18:04:35 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Sykas",
"Dimitrios",
""
],
[
"Sdraka",
"Maria",
""
],
[
"Zografakis",
"Dimitrios",
""
],
[
"Papoutsis",
"Ioannis",
""
]
] |
new_dataset
| 0.999793 |
2204.08058
|
Songyang Zhang
|
Thomas Hayes, Songyang Zhang, Xi Yin, Guan Pang, Sasha Sheng, Harry
Yang, Songwei Ge, Qiyuan Hu, and Devi Parikh
|
MUGEN: A Playground for Video-Audio-Text Multimodal Understanding and
GENeration
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal video-audio-text understanding and generation can benefit from
datasets that are narrow but rich. The narrowness allows bite-sized challenges
that the research community can make progress on. The richness ensures we are
making progress along the core challenges. To this end, we present a
large-scale video-audio-text dataset MUGEN, collected using the open-sourced
platform game CoinRun [11]. We made substantial modifications to make the game
richer by introducing audio and enabling new interactions. We trained RL agents
with different objectives to navigate the game and interact with 13 objects and
characters. This allows us to automatically extract a large collection of
diverse videos and associated audio. We sample 375K video clips (3.2s each) and
collect text descriptions from human annotators. Each video has additional
annotations that are extracted automatically from the game engine, such as
accurate semantic maps for each frame and templated textual descriptions.
Altogether, MUGEN can help progress research in many tasks in multimodal
understanding and generation. We benchmark representative approaches on tasks
involving video-audio-text retrieval and generation. Our dataset and code are
released at: https://mugen-org.github.io/.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2022 17:59:09 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2022 19:32:57 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Apr 2022 14:32:18 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Hayes",
"Thomas",
""
],
[
"Zhang",
"Songyang",
""
],
[
"Yin",
"Xi",
""
],
[
"Pang",
"Guan",
""
],
[
"Sheng",
"Sasha",
""
],
[
"Yang",
"Harry",
""
],
[
"Ge",
"Songwei",
""
],
[
"Hu",
"Qiyuan",
""
],
[
"Parikh",
"Devi",
""
]
] |
new_dataset
| 0.998367 |
2204.11686
|
Arkadeep Narayan Chaudhury
|
Arkadeep Narayan Chaudhury, Timothy Man, Wenzhen Yuan and Christopher
G. Atkeson
|
Using Collocated Vision and Tactile Sensors for Visual Servoing and
Localization
|
This archival version of the manuscript is significantly different in
content from the reviewed and published version. The published version can be
accessed here: https://ieeexplore.ieee.org/document/9699405. Supplementary
materials can be accessed here:
https://arkadeepnc.github.io/projects/collocated_vision_touch/index.html
|
IEEE Robotics and Automation Letters, vol. 7, no. 2, pp.
3427-3434, April 2022
|
10.1109/LRA.2022.3146565
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Coordinating proximity and tactile imaging by collocating cameras with
tactile sensors can 1) provide useful information before contact such as object
pose estimates and visually servo a robot to a target with reduced occlusion
and higher resolution compared to head-mounted or external depth cameras, 2)
simplify the contact point and pose estimation problems and help tactile
sensing avoid erroneous matches when a surface does not have significant
texture or has repetitive texture with many possible matches, and 3) use
tactile imaging to further refine contact point and object pose estimation. We
demonstrate our results with objects that have more surface texture than most
objects in standard manipulation datasets. We learn that optic flow needs to be
integrated over a substantial amount of camera travel to be useful in
predicting movement direction. Most importantly, we also learn that state of
the art vision algorithms do not do a good job localizing tactile images on
object models, unless a reasonable prior can be provided from collocated
cameras.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 14:24:29 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 18:52:29 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Chaudhury",
"Arkadeep Narayan",
""
],
[
"Man",
"Timothy",
""
],
[
"Yuan",
"Wenzhen",
""
],
[
"Atkeson",
"Christopher G.",
""
]
] |
new_dataset
| 0.985263 |
2204.11920
|
Hai Dao
|
Dao Thanh Hai
|
Quo Vadis, Optical Network Architecture? Towards an
Optical-processing-enabled Paradigm
|
6 pages, 10 figures, to be submitted to a conference
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Among various aspects in optical network architectures, handling transit
traffic at intermediate nodes represents a defining characteristic for
classification. In this context, the transition from the first generation of
optical-electrical-optical (O-E-O) mode to the second generation of
optical-bypass marked a paradigm shift in redesigning optical transport
networks towards greater network efficiency. Optical-bypass operation has then
become the \textit{de facto} approach adopted by the majority of carriers in
both metro and backbone networks in the last two decades and has remained
basically unchanged. However, in optical-bypass networks, the fact that
in-transit lightpaths crossing a common intermediate node must be separated in
the time, frequency, or spatial domain to avoid adversarial interference
appears to be a critical shortcoming, as the interaction of such lightpaths in
the optical domain may yield efficient computing and/or signal processing
operations that save spectral resources. Inspired by the accelerated
progress in optical signal processing technologies and the integration of
computing and communications, we introduce in this paper a new architectural
paradigm for future optical networks and highlight how this new architecture
has the potential to shatter the \textit{status quo}. Indeed, our proposal is
centered on exploiting the superposition of in-transit lightpaths at
intermediate nodes to generate more spectrally efficient lightpaths and how to
harness this opportunity from network design perspectives. We present two case
studies featuring optical aggregation and optical XOR encoding to demonstrate
the merit of optical-processing-enabled operation compared to its counterpart,
optical-bypass. Numerical results on realistic network topologies are provided,
revealing that spectral savings of up to $30\%$ could be achieved by adopting
the optical-processing network paradigm.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 18:50:52 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2022 03:07:49 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Hai",
"Dao Thanh",
""
]
] |
new_dataset
| 0.982127 |
2204.13158
|
Mustafa Chasmai Ebrahim
|
Mustafa Ebrahim Chasmai and Tamajit Banerjee
|
Person Re-Identification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Person Re-Identification (Re-ID) is an important problem in computer
vision-based surveillance applications, in which one aims to identify a person
across surveillance photographs taken from different cameras with varying
orientations and fields of view. Due to the increasing demand for intelligent
video surveillance, Re-ID has gained significant interest in the computer
vision community. In this work, we experiment with existing Re-ID methods that
obtain state-of-the-art performance on some open benchmarks. We qualitatively
and quantitatively analyse their performance on a provided dataset, and then
propose methods to improve the results. This work was the report submitted for
the COL780 final project at IIT Delhi.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 19:37:42 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Chasmai",
"Mustafa Ebrahim",
""
],
[
"Banerjee",
"Tamajit",
""
]
] |
new_dataset
| 0.983827 |
2204.13172
|
Ehsan Nowroozi
|
Ehsan Nowroozi, Abhishek, Mohammadreza Mohammadi, Mauro Conti
|
An Adversarial Attack Analysis on Malicious Advertisement URL Detection
Framework
|
13
| null | null | null |
cs.LG cs.AI cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Malicious advertisement URLs pose a security risk since they are the source
of cyber-attacks, and the need to address this issue is growing in both
industry and academia. Generally, the attacker delivers an attack vector to the
user by means of an email, an advertisement link or any other means of
communication and directs them to a malicious website to steal sensitive
information and to defraud them. Existing malicious URL detection techniques
are limited in their ability to handle unseen features and to generalize to
test data. In this study, we extract a novel set of lexical and web-scraped
features and employ machine learning techniques to build a system for
fraudulent advertisement URL detection. The combined set of six different kinds
of features precisely overcomes the obfuscation in fraudulent URL
classification. Based on
different statistical properties, we use twelve different formatted datasets
for detection, prediction and classification task. We extend our prediction
analysis for mismatched and unlabelled datasets. For this framework, we analyze
the performance of four machine learning techniques: Random Forest, Gradient
Boost, XGBoost and AdaBoost in the detection part. With our proposed method, we
can achieve a false negative rate as low as 0.0037 while maintaining high
accuracy of 99.63%. Moreover, we devise a novel unsupervised technique for data
clustering using the K-Means algorithm for visual analysis. This paper
analyses the vulnerability of decision tree-based models using the limited
knowledge attack scenario. We considered the exploratory attack and implemented
Zeroth Order Optimization adversarial attack on the detection models.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 20:06:22 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Nowroozi",
"Ehsan",
""
],
[
"Abhishek",
"",
""
],
[
"Mohammadi",
"Mohammadreza",
""
],
[
"Conti",
"Mauro",
""
]
] |
new_dataset
| 0.999438 |
2204.13183
|
Keerthikumara Devarajegowda PhD
|
Endri Kaja, Nicolas Gerlin, Luis Rivas, Monideep Bora, Keerthikumara
Devarajegowda, Wolfgang Ecker
|
MetFI: Model-driven Fault Simulation Framework
| null | null | null | null |
cs.SE cs.AR cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Safety-critical designs need to ensure reliable operations under hostile
conditions with a certain degree of confidence. The continuously higher
complexity of these designs makes them more susceptible to the risk of failure.
ISO26262 recommends fault injection as the proper technique to verify and
measure the dependability of safety-critical designs. To cope with the
complexity, a lot of effort and a stringent verification flow are needed.
Moreover, many fault injection tools offer only a limited degree of
controllability.
We propose MetaFI, a model-driven simulator-independent fault simulation
framework that provides multi-purpose fault injection strategies such as
Statistical Fault Injection, Direct Fault Injection, Exhaustive Fault
Injection, and at the same time reduces manual efforts. The framework enables
injection of Stuck-at faults, Single-Event Transient faults, Single-Event Upset
faults as well as Timing faults. The fault simulation is performed at the
Register Transfer Level (RTL) of a design, in which parts of the design
targeted for fault simulation are represented with Gate-level (GL) granularity.
MetaFI is scalable with a full System-on-Chip (SoC) design and to demonstrate
the applicability of the framework, fault simulation was applied to various
components of two different SoCs. One SoC is running the Dhrystone application
and the other one is running a fingerprint calculation application. A minimal
effort of 2 person-days was required to run 38 different fault injection
campaigns on both designs. The framework provided significant data regarding
the failure rates of the components. The results concluded that the Prefetcher,
a component of the SoC processor, is more susceptible to failures than the
other targeted components on both SoCs, regardless of the running application.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 20:30:05 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Kaja",
"Endri",
""
],
[
"Gerlin",
"Nicolas",
""
],
[
"Rivas",
"Luis",
""
],
[
"Bora",
"Monideep",
""
],
[
"Devarajegowda",
"Keerthikumara",
""
],
[
"Ecker",
"Wolfgang",
""
]
] |
new_dataset
| 0.987581 |
2204.13243
|
Sharon Levy
|
Kai Nakamura, Sharon Levy, Yi-Lin Tuan, Wenhu Chen, William Yang Wang
|
HybriDialogue: An Information-Seeking Dialogue Dataset Grounded on
Tabular and Textual Data
|
Findings of ACL 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A pressing challenge in current dialogue systems is to successfully converse
with users on topics with information distributed across different modalities.
Previous work in multiturn dialogue systems has primarily focused on either
text or table information. In more realistic scenarios, having a joint
understanding of both is critical as knowledge is typically distributed over
both unstructured and structured forms. We present a new dialogue dataset,
HybriDialogue, which consists of crowdsourced natural conversations grounded on
both Wikipedia text and tables. The conversations are created through the
decomposition of complex multihop questions into simple, realistic multiturn
dialogue interactions. We propose retrieval, system state tracking, and
dialogue response generation tasks for our dataset and conduct baseline
experiments for each. Our results show that there is still ample opportunity
for improvement, demonstrating the importance of building stronger dialogue
systems that can reason over the complex setting of information-seeking
dialogue grounded on tables and text.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 00:52:16 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Nakamura",
"Kai",
""
],
[
"Levy",
"Sharon",
""
],
[
"Tuan",
"Yi-Lin",
""
],
[
"Chen",
"Wenhu",
""
],
[
"Wang",
"William Yang",
""
]
] |
new_dataset
| 0.991574 |
2204.13286
|
Guangwei Gao
|
Guangwei Gao, Zhengxue Wang, Juncheng Li, Wenjie Li, Yi Yu, Tieyong
Zeng
|
Lightweight Bimodal Network for Single-Image Super-Resolution via
Symmetric CNN and Recursive Transformer
|
Accepted by IJCAI2022, short oral presentation
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Single-image super-resolution (SISR) has achieved significant breakthroughs
with the development of deep learning. However, these methods are difficult to
be applied in real-world scenarios since they are inevitably accompanied by the
problems of computational and memory costs caused by the complex operations. To
solve this issue, we propose a Lightweight Bimodal Network (LBNet) for SISR.
Specifically, an effective Symmetric CNN is designed for local feature
extraction and coarse image reconstruction. Meanwhile, we propose a Recursive
Transformer to fully learn the long-term dependence of images, so that the global
information can be fully used to further refine texture details. Studies show
that the hybrid of CNN and Transformer can build a more efficient model.
Extensive experiments have proved that our LBNet achieves more prominent
performance than other state-of-the-art methods with a relatively low
computational cost and memory consumption. The code is available at
https://github.com/IVIPLab/LBNet.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 04:43:22 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Gao",
"Guangwei",
""
],
[
"Wang",
"Zhengxue",
""
],
[
"Li",
"Juncheng",
""
],
[
"Li",
"Wenjie",
""
],
[
"Yu",
"Yi",
""
],
[
"Zeng",
"Tieyong",
""
]
] |
new_dataset
| 0.993226 |
2204.13311
|
Nora Hollenstein
|
Nora Hollenstein, Maria Barrett, Marina Bj\"ornsd\'ottir
|
The Copenhagen Corpus of Eye Tracking Recordings from Natural Reading of
Danish Texts
|
accepted at LREC 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Eye movement recordings from reading are one of the richest signals of human
language processing. Corpora of eye movements during reading of contextualized
running text are a way of making such records available for natural language
processing purposes. Such corpora already exist in some languages. We present
CopCo, the Copenhagen Corpus of eye tracking recordings from natural reading of
Danish texts. It is the first eye tracking corpus of its kind for the Danish
language. CopCo includes 1,832 sentences with 34,897 tokens of Danish text
extracted from a collection of speech manuscripts. This first release of the
corpus contains eye tracking data from 22 participants. It will be extended
continuously with more participants and texts from other genres. We assess the
data quality of the recorded eye movements and find that the extracted features
are in line with related research. The dataset is available here:
https://osf.io/ud8s5/.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 07:13:00 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Hollenstein",
"Nora",
""
],
[
"Barrett",
"Maria",
""
],
[
"Björnsdóttir",
"Marina",
""
]
] |
new_dataset
| 0.99955 |
2204.13323
|
Huadong Li
|
Huadong Li, Yuefeng Wang, Ying Wei, Lin Wang, Li Ge
|
Discriminative-Region Attention and Orthogonal-View Generation Model for
Vehicle Re-Identification
|
28pages,12 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Vehicle re-identification (Re-ID) is urgently demanded to alleviate
the pressure caused by the increasingly onerous task of urban traffic
management. Multiple challenges hamper the applications of vision-based vehicle
Re-ID methods: (1) The appearances of different vehicles of the same
brand/model are often similar; however, (2) the appearances of the same vehicle
differ significantly from different viewpoints. Previous methods mainly use
manually annotated multi-attribute datasets to assist the network in getting
detailed cues and in inferencing multi-view to improve the vehicle Re-ID
performance. However, finely labeled vehicle datasets are usually unattainable
in real application scenarios. Hence, we propose a Discriminative-Region
Attention and Orthogonal-View Generation (DRA-OVG) model, which only requires
identity (ID) labels to conquer the multiple challenges of vehicle Re-ID. The
proposed DRA model can automatically extract the discriminative region
features, which can distinguish similar vehicles. And the OVG model can
generate multi-view features based on the input view features to reduce the
impact of viewpoint mismatches. Finally, the distance between vehicle
appearances is presented by the discriminative region features and multi-view
features together. Therefore, the significance of the pairwise distance measure
between vehicles is enhanced in a complete feature space. Extensive experiments
substantiate the effectiveness of each proposed ingredient, and experimental
results indicate that our approach achieves remarkable improvements over the
state-of-the-art vehicle Re-ID methods on VehicleID and VeRi-776 datasets.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 07:46:03 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Li",
"Huadong",
""
],
[
"Wang",
"Yuefeng",
""
],
[
"Wei",
"Ying",
""
],
[
"Wang",
"Lin",
""
],
[
"Ge",
"Li",
""
]
] |
new_dataset
| 0.98453 |
2204.13331
|
Federico Pigozzi Mr
|
Federico Pigozzi
|
Robots: the Century Past and the Century Ahead
|
Best essay from PhD student, ALife'21 conference, runner-up
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Let us reflect on the state of robotics. This year marks the $101$-st
anniversary of R.U.R., a play by the writer Karel \v{C}apek, often credited
with introducing the word "robot". The word used to refer to feudal forced
labourers in Slavic languages. Indeed, it points to one key characteristic of
robotic systems: they are mere slaves, have no rights, and execute our wills
instruction by instruction, without asking anything in return. The relationship
with us humans is commensalism; in biology, commensalism subsists between two
symbiotic species when one species benefits from it (robots boost productivity
for humans), while the other species neither benefits nor is harmed (can you
really argue that robots benefit from simply functioning?).
We then distinguish robots from "living machines", that is, machines infused
with life. If living machines should ever become a reality, we would need to
shift our relationship with them from commensalism to mutualism. The
distinction is not subtle: we experience it every day with domesticated
animals, which exchange serfdom for forage and protection. This is because life
has evolved to resist any attempt at enslaving it; it is stubborn.
In the path towards living machines, let us ask: what has been achieved by
robotics in the last $100$ years? What is left to accomplish in the next $100$
years? For us, the answers boil down to three words: juice, need (or death),
and embodiment, as we shall see in the following.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 08:01:43 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Pigozzi",
"Federico",
""
]
] |
new_dataset
| 0.990566 |
2204.13336
|
Sunwoo Kim
|
Sunwoo Kim, Maks Sorokin, Jehee Lee, Sehoon Ha
|
Human Motion Control of Quadrupedal Robots using Deep Reinforcement
Learning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A motion-based control interface promises flexible robot operations in
dangerous environments by combining user intuitions with the robot's motor
capabilities. However, designing a motion interface for non-humanoid robots,
such as quadrupeds or hexapods, is not straightforward because different
dynamics and control strategies govern their movements. We propose a novel
motion control system that allows a human user to operate various motor tasks
seamlessly on a quadrupedal robot. We first retarget the captured human motion
into the corresponding robot motion with proper semantics using supervised
learning and post-processing techniques. Then we apply the motion imitation
learning with curriculum learning to develop a control policy that can track
the given retargeted reference. We further improve the performance of both
motion retargeting and motion imitation by training a set of experts. As we
demonstrate, a user can execute various motor tasks using our system, including
standing, sitting, tilting, manipulating, walking, and turning, on simulated
and real quadrupeds. We also conduct a set of studies to analyze the
performance gain induced by each component.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 08:14:03 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Kim",
"Sunwoo",
""
],
[
"Sorokin",
"Maks",
""
],
[
"Lee",
"Jehee",
""
],
[
"Ha",
"Sehoon",
""
]
] |
new_dataset
| 0.998893 |
2204.13343
|
Alberto Gotta
|
Achilles Machumilane, Alberto Gotta, Pietro Cassar\`a, Claudio
Gennaro, and Giuseppe Amato
|
Actor-Critic Scheduling for Path-Aware Air-to-Ground Multipath
Multimedia Delivery
| null | null | null | null |
cs.NI cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement Learning (RL) has recently found wide applications in network
traffic management and control because some of its variants do not require
prior knowledge of network models. In this paper, we present a novel scheduler
for real-time multimedia delivery in multipath systems based on an Actor-Critic
(AC) RL algorithm. We focus on a challenging scenario of real-time video
streaming from an Unmanned Aerial Vehicle (UAV) using multiple wireless paths.
The scheduler acting as an RL agent learns in real-time the optimal policy for
path selection, path rate allocation and redundancy estimation for flow
protection. The scheduler, implemented as a module of the GStreamer framework,
can be used in real or simulated settings. The simulation results show that our
scheduler can target a very low loss rate at the receiver by dynamically
adapting in real-time the scheduling policy to the path conditions without
performing training or relying on prior knowledge of network channel models.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 08:28:25 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Machumilane",
"Achilles",
""
],
[
"Gotta",
"Alberto",
""
],
[
"Cassarà",
"Pietro",
""
],
[
"Gennaro",
"Claudio",
""
],
[
"Amato",
"Giuseppe",
""
]
] |
new_dataset
| 0.965292 |
2204.13426
|
Kyusun Cho
|
Mira Kim, Jaehoon Ko, Kyusun Cho, Junmyeong Choi, Daewon Choi,
Seungryong Kim
|
AE-NeRF: Auto-Encoding Neural Radiance Fields for 3D-Aware Object
Manipulation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel framework for 3D-aware object manipulation, called
Auto-Encoding Neural Radiance Fields (AE-NeRF). Our model, which is formulated
in an auto-encoder architecture, extracts disentangled 3D attributes such as 3D
shape, appearance, and camera pose from an image, and a high-quality image is
rendered from the attributes through disentangled generative Neural Radiance
Fields (NeRF). To improve the disentanglement ability, we present two losses,
global-local attribute consistency loss defined between input and output, and
swapped-attribute classification loss. Since training such auto-encoding
networks from scratch without ground-truth shape and appearance information is
non-trivial, we present a stage-wise training scheme, which dramatically helps
to boost the performance. We conduct experiments to demonstrate the
effectiveness of the proposed model over the latest methods and provide
extensive ablation studies.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 11:50:18 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Kim",
"Mira",
""
],
[
"Ko",
"Jaehoon",
""
],
[
"Cho",
"Kyusun",
""
],
[
"Choi",
"Junmyeong",
""
],
[
"Choi",
"Daewon",
""
],
[
"Kim",
"Seungryong",
""
]
] |
new_dataset
| 0.991621 |
2204.13493
|
Leroy Cronin Prof
|
Abhishek Sharma, Marcus Tze-Kiat Ng, Juan Manuel Parrilla Gutierrez,
Yibin Jiang and Leroy Cronin
|
A Probabilistic Chemical Programmable Computer
|
20 page manuscript, 6 figures, 112 page supplementary volume
| null | null | null |
cs.ET cs.CC nlin.CG physics.chem-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The exponential growth of the power of modern digital computers is based upon
the miniaturisation of vast nanoscale arrays of electronic switches, but this
will eventually be constrained by fabrication limits and power dissipation.
Chemical processes have the potential to scale beyond these limits performing
computations through chemical reactions, yet the lack of well-defined
programmability limits their scalability and performance. We present a hybrid
digitally programmable chemical array as a probabilistic computational machine
that uses chemical oscillators partitioned in interconnected cells as a
computational substrate. This hybrid architecture performs efficient
computation by distributing it between the chemical and digital domains together with
error correction. The efficiency is gained by combining digital with
probabilistic chemical logic based on nearest neighbour interactions and
hysteresis effects. We demonstrated the implementation of one- and
two-dimensional Chemical Cellular Automata and solutions to combinatorial
optimization problems.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 13:36:31 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Sharma",
"Abhishek",
""
],
[
"Ng",
"Marcus Tze-Kiat",
""
],
[
"Gutierrez",
"Juan Manuel Parrilla",
""
],
[
"Jiang",
"Yibin",
""
],
[
"Cronin",
"Leroy",
""
]
] |
new_dataset
| 0.9629 |
2204.13496
|
Georgios Spithourakis
|
Georgios P. Spithourakis, Ivan Vuli\'c, Micha{\l} Lis, I\~nigo
Casanueva, Pawe{\l} Budzianowski
|
EVI: Multilingual Spoken Dialogue Tasks and Dataset for Knowledge-Based
Enrolment, Verification, and Identification
|
13 pages, 7 figures, 7 tables. Accepted in NAACL 2022 (Findings)
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge-based authentication is crucial for task-oriented spoken dialogue
systems that offer personalised and privacy-focused services. Such systems
should be able to enrol (E), verify (V), and identify (I) new and recurring
users based on their personal information, e.g. postcode, name, and date of
birth. In this work, we formalise the three authentication tasks and their
evaluation protocols, and we present EVI, a challenging spoken multilingual
dataset with 5,506 dialogues in English, Polish, and French. Our proposed
models set the first competitive benchmarks, explore the challenges of
multilingual natural language processing of spoken dialogue, and set directions
for future research.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 13:39:24 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Spithourakis",
"Georgios P.",
""
],
[
"Vulić",
"Ivan",
""
],
[
"Lis",
"Michał",
""
],
[
"Casanueva",
"Iñigo",
""
],
[
"Budzianowski",
"Paweł",
""
]
] |
new_dataset
| 0.999665 |
2204.13511
|
Pieter Delobelle
|
Pieter Delobelle, Thomas Winters, Bettina Berendt
|
RobBERTje: a Distilled Dutch BERT Model
|
Published in CLIN journal
|
Computational Linguistics in the Netherlands Journal 2021
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained large-scale language models such as BERT have gained a lot of
attention thanks to their outstanding performance on a wide range of natural
language tasks. However, due to their large number of parameters, they are
resource-intensive both to deploy and to fine-tune. Researchers have created
several methods for distilling language models into smaller ones to increase
efficiency, with a small performance trade-off. In this paper, we create
several different distilled versions of the state-of-the-art Dutch RobBERT
model and call them RobBERTje. The distillations differ in their distillation
corpus, namely whether or not they are shuffled and whether they are merged
with subsequent sentences. We found that the performance of the models using
the shuffled versus non-shuffled datasets is similar for most tasks and that
randomly merging subsequent sentences in a corpus creates models that train
faster and perform better on tasks with long sequences. Upon comparing
distillation architectures, we found that the larger DistilBERT architecture
worked significantly better than the Bort hyperparametrization. Interestingly,
we also found that the distilled models exhibit less gender-stereotypical bias
than their teacher model. Since smaller architectures decrease the time to
fine-tune, these models allow for more efficient training and more lightweight
deployment of many Dutch downstream language tasks.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 14:02:13 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Delobelle",
"Pieter",
""
],
[
"Winters",
"Thomas",
""
],
[
"Berendt",
"Bettina",
""
]
] |
new_dataset
| 0.998991 |
2204.13514
|
Freddie Rawlins
|
Frederick Rawlins, Richard Baker, Ivan Martinovic
|
Death By A Thousand COTS: Disrupting Satellite Communications using Low
Earth Orbit Constellations
|
13 pages, 25 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Satellites in Geostationary Orbit (GEO) provide a number of commercial,
government, and military services around the world, offering everything from
surveillance and monitoring to video calls and internet access. However, a
dramatic lowering of the cost-per-kilogram to space has led to a recent
explosion of real and planned constellations of smaller satellites in Low Earth
Orbit (LEO). These constellations are managed remotely, and it is
important to consider a scenario in which an attacker gains control over the
constituent satellites. In this paper we aim to understand what damage this
attacker could cause, using the satellites to generate interference. To ground
our analysis, we simulate a number of existing and planned LEO constellations
against an example GEO constellation, and evaluate the relative effectiveness
of each. Our model shows that with conservative power estimates, both current
and planned constellations could disrupt GEO satellite services at every
groundstation considered, with effectiveness varying considerably between
locations. We analyse different patterns of interference, how they reflect the
structures of the constellations creating them, and how effective they might be
against a number of legitimate services. We found that real-time usage (e.g.
calls, streaming) would be most affected, with 3 constellation designs able to
generate thousands of outages of 30 seconds or longer over the course of the
day across all groundstations.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 14:04:46 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Rawlins",
"Frederick",
""
],
[
"Baker",
"Richard",
""
],
[
"Martinovic",
"Ivan",
""
]
] |
new_dataset
| 0.989044 |
2204.13546
|
Andrew MacFarlane Dr
|
Andrew MacFarlane, Marisela Gutierrez-Lopez, Stephann Makri, Tim
Atwell, Sondess Missaoui, Colin Porlezza, Glenda Cooper
|
DMINR: A Tool to Support Journalists Information Verification and
Exploration
|
9 pages, 8 figures
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Journalists are key information workers who have specific requirements from
information systems to support the verification and exploration of information.
We overview the DMINR tool that has been designed and developed to meet the
needs of journalists through the examination of journalists' information
behaviour in a newsroom. We outline our co-design process as well as the
design, implementation and deployment of the tool. We report a usability test
on the tool and conclude with details of how to develop the tool further.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 14:55:50 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"MacFarlane",
"Andrew",
""
],
[
"Gutierrez-Lopez",
"Marisela",
""
],
[
"Makri",
"Stephann",
""
],
[
"Atwell",
"Tim",
""
],
[
"Missaoui",
"Sondess",
""
],
[
"Porlezza",
"Colin",
""
],
[
"Cooper",
"Glenda",
""
]
] |
new_dataset
| 0.989916 |
2204.13571
|
Hatem Fakhruldeen
|
Hatem Fakhruldeen, Gabriella Pizzuto, Jakub Glowacki and Andrew Ian
Cooper
|
ARChemist: Autonomous Robotic Chemistry System Architecture
|
7 pages, 5 figures, accepted for presentation at 2022 International
Conference on Robotics and Automation (ICRA2022)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Automated laboratory experiments have the potential to propel new
discoveries, while increasing reproducibility and improving scientists' safety
when handling dangerous materials. However, many automated laboratory workflows
have not fully leveraged the remarkable advancements in robotics and digital
lab equipment. As a result, most robotic systems used in the labs are
programmed specifically for a single experiment, often relying on proprietary
architectures or using unconventional hardware. In this work, we tackle this
problem by proposing a novel robotic system architecture specifically designed
with and for chemists, which allows the scientist to easily reconfigure their
setup for new experiments. Specifically, the system's strength is its ability
to combine together heterogeneous robotic platforms with standard laboratory
equipment to create different experimental setups. Finally, we show how the
architecture can be used for specific laboratory experiments through case
studies such as solubility screening and crystallisation.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 15:34:09 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Fakhruldeen",
"Hatem",
""
],
[
"Pizzuto",
"Gabriella",
""
],
[
"Glowacki",
"Jakub",
""
],
[
"Cooper",
"Andrew Ian",
""
]
] |
new_dataset
| 0.996248 |
2204.13604
|
Xindi Wang
|
Xindi Wang, Robert E. Mercer, Frank Rudzicz
|
MeSHup: A Corpus for Full Text Biomedical Document Indexing
|
LREC 2022 main conference
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Medical Subject Heading (MeSH) indexing refers to the problem of assigning a
given biomedical document with the most relevant labels from an extremely large
set of MeSH terms. Currently, the vast number of biomedical articles in the
PubMed database are manually annotated by human curators, which is
time-consuming and costly; therefore, a computational system that can assist the
indexing is highly valuable. When developing supervised MeSH indexing systems,
the availability of a large-scale annotated text corpus is desirable. A
publicly available, large corpus that permits robust evaluation and comparison
of various systems is important to the research community. We release a
large-scale annotated MeSH indexing corpus, MeSHup, which contains 1,342,667 full
text articles in English, together with the associated MeSH labels and
metadata, authors, and publication venues that are collected from the MEDLINE
database. We train an end-to-end model that combines features from documents
and their associated labels on our corpus and report the new baseline.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 16:04:20 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Wang",
"Xindi",
""
],
[
"Mercer",
"Robert E.",
""
],
[
"Rudzicz",
"Frank",
""
]
] |
new_dataset
| 0.992974 |
2204.13656
|
Zekang Chen
|
Zekang Chen, Jia Wei and Rui Li
|
Unsupervised Multi-Modal Medical Image Registration via
Discriminator-Free Image-to-Image Translation
|
Accepted in IJCAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In clinical practice, well-aligned multi-modal images, such as Magnetic
Resonance (MR) and Computed Tomography (CT), together can provide complementary
information for image-guided therapies. Multi-modal image registration is
essential for the accurate alignment of these multi-modal images. However, it
remains a very challenging task due to complicated and unknown spatial
correspondence between different modalities. In this paper, we propose a novel
translation-based unsupervised deformable image registration approach to
convert the multi-modal registration problem to a mono-modal one. Specifically,
our approach incorporates a discriminator-free translation network to
facilitate the training of the registration network and a patchwise contrastive
loss to encourage the translation network to preserve object shapes.
Furthermore, we propose to replace an adversarial loss, which is widely used in
previous multi-modal image registration methods, with a pixel loss in order to
integrate the output of translation into the target modality. This leads to an
unsupervised method requiring no ground-truth deformation or pairs of aligned
images for training. We evaluate four variants of our approach on the public
Learn2Reg 2021 datasets \cite{hering2021learn2reg}. The experimental results
demonstrate that the proposed architecture achieves state-of-the-art
performance. Our code is available at https://github.com/heyblackC/DFMIR.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 17:18:21 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Chen",
"Zekang",
""
],
[
"Wei",
"Jia",
""
],
[
"Li",
"Rui",
""
]
] |
new_dataset
| 0.959232 |
2204.13666
|
Milo\v{s} Nikoli\'c
|
Milo\v{s} Nikoli\'c, Enrique Torres Sanchez, Jiahui Wang, Ali Hadi
Zadeh, Mostafa Mahmoud, Ameer Abdelhadi, Andreas Moshovos
|
Schr\"odinger's FP: Dynamic Adaptation of Floating-Point Containers for
Deep Learning Training
| null | null | null | null |
cs.LG cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce a software-hardware co-design approach to reduce memory traffic
and footprint during training with BFloat16 or FP32, boosting energy efficiency
and execution time performance. We introduce methods to dynamically adjust the
size and format of the floating-point containers used to store activations and
weights during training. The different value distributions lead us to different
approaches for exponents and mantissas. Gecko exploits the favourable exponent
distribution with a loss-less delta encoding approach to reduce the total
exponent footprint by up to $58\%$ in comparison to a 32 bit floating point
baseline. To contend with the noisy mantissa distributions, we present two
lossy methods to eliminate as many least significant bits as possible while not
affecting accuracy. Quantum Mantissa is a machine learning-first mantissa
compression method that taps into training's gradient descent algorithm to also
learn minimal mantissa bitlengths on a per-layer granularity, and obtain up to
$92\%$ reduction in total mantissa footprint. Alternatively, BitChop observes
changes in the loss function during training to adjust mantissa bit-length
network-wide, yielding a reduction of $81\%$ in footprint. Schr\"{o}dinger's FP
implements hardware encoders/decoders that, guided by Gecko/Quantum Mantissa or
Gecko/BitChop, transparently encode/decode values when transferring to/from
off-chip memory, boosting energy efficiency and reducing execution time.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 17:30:08 GMT"
}
] | 2022-04-29T00:00:00 |
[
[
"Nikolić",
"Miloš",
""
],
[
"Sanchez",
"Enrique Torres",
""
],
[
"Wang",
"Jiahui",
""
],
[
"Zadeh",
"Ali Hadi",
""
],
[
"Mahmoud",
"Mostafa",
""
],
[
"Abdelhadi",
"Ameer",
""
],
[
"Moshovos",
"Andreas",
""
]
] |
new_dataset
| 0.969714 |
2102.12579
|
Alexander Kulikov
|
Alexander S. Kulikov, Danila Pechenev, Nikita Slezkin
|
SAT-based Circuit Local Improvement
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Finding exact circuit size is a notorious optimization problem in practice.
Whereas modern computers and algorithmic techniques allow one to find a circuit of
size seven in the blink of an eye, it may take more than a week to search for a
circuit of size thirteen. One of the reasons for this behavior is that the
search space is enormous: the number of circuits of size $s$ is
$s^{\Theta(s)}$, and the number of Boolean functions on $n$ variables is $2^{2^n}$.
In this paper, we explore the following natural heuristic idea for decreasing
the size of a given circuit: go through all its subcircuits of moderate size
and check whether any of them can be improved by reducing to SAT. This may be
viewed as a local search approach: we search for a smaller circuit in a ball
around a given circuit. Through this approach, we prove new upper bounds on the
circuit size of various symmetric functions. We also demonstrate that some
upper bounds that were proved by hand decades ago can nowadays be found
automatically in a few seconds.
|
[
{
"version": "v1",
"created": "Fri, 19 Feb 2021 16:01:50 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2022 17:12:38 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Apr 2022 09:41:24 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Kulikov",
"Alexander S.",
""
],
[
"Pechenev",
"Danila",
""
],
[
"Slezkin",
"Nikita",
""
]
] |
new_dataset
| 0.984965 |
2103.12033
|
Khashayar Etemadi Someoliayi
|
Khashayar Etemadi, Nicolas Harrand, Simon Larsen, Haris Adzemovic,
Henry Luong Phu, Ashutosh Verma, Fernanda Madeiral, Douglas Wikstrom, Martin
Monperrus
|
Sorald: Automatic Patch Suggestions for SonarQube Static Analysis
Violations
| null |
IEEE Transactions on Dependable and Secure Computing, 2022
|
10.1109/TDSC.2022.3167316
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Previous work has shown that early resolution of issues detected by static
code analyzers can prevent major costs later on. However, developers often
ignore such issues for two main reasons. First, many issues should be
interpreted to determine if they correspond to actual flaws in the program.
Second, static analyzers often do not present the issues in a way that is
actionable. To address these problems, we present Sorald: a novel system that
devises metaprogramming templates to transform the abstract syntax trees of
programs and suggest fixes for static analysis warnings. Thus, the burden on
the developer is reduced from interpreting and fixing static issues, to
inspecting and approving full-fledged solutions. Sorald fixes violations of 10
rules from SonarJava, one of the most widely used static analyzers for Java. We
evaluate Sorald on a dataset of 161 popular repositories on GitHub. Our
analysis shows the effectiveness of Sorald as it fixes 65% (852/1,307) of the
violations that meet the repair preconditions. Overall, our experiments show
it is possible to automatically fix notable violations of the static analysis
rules produced by the state-of-the-art static analyzer SonarJava.
|
[
{
"version": "v1",
"created": "Mon, 22 Mar 2021 17:34:48 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Jan 2022 14:36:57 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Etemadi",
"Khashayar",
""
],
[
"Harrand",
"Nicolas",
""
],
[
"Larsen",
"Simon",
""
],
[
"Adzemovic",
"Haris",
""
],
[
"Phu",
"Henry Luong",
""
],
[
"Verma",
"Ashutosh",
""
],
[
"Madeiral",
"Fernanda",
""
],
[
"Wikstrom",
"Douglas",
""
],
[
"Monperrus",
"Martin",
""
]
] |
new_dataset
| 0.992423 |
2104.07955
|
Weiqi Shu
|
Weiqi Shu, Ling Wang, Bolong Liu, and Jie Liu
|
LAI Estimation of Cucumber Crop Based on Improved Fully Convolutional
Network
|
There are some problems with this paper
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LAI (Leaf Area Index) is of great importance for crop yield estimation in
agronomy. It is directly related to plant growth status, net assimilation rate,
plant photosynthesis, and carbon dioxide in the environment. How to measure LAI
accurately and efficiently is the key to the crop yield estimation problem.
Manual measurement consumes a lot of human resources and material resources.
Remote sensing technology is not suitable for near-Earth LAI measurement.
Besides, methods based on traditional digital image processing are greatly
affected by environmental noise and image exposure. Nowadays, deep learning is
widely used in many fields. The improved FCN (Fully Convolutional Network) is
proposed in our study for LAI measure task. Eighty-two cucumber images
collected from our greenhouse are labeled to fine-tuning the pre-trained model.
The result shows that the improved FCN model performs well on our dataset. Our
method's mean IoU can reach 0.908, which is 11% better than conventional
methods and 4.7% better than the basic FCN model.
|
[
{
"version": "v1",
"created": "Fri, 16 Apr 2021 08:12:06 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 01:58:02 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Shu",
"Weiqi",
""
],
[
"Wang",
"Ling",
""
],
[
"Liu",
"Bolong",
""
],
[
"Liu",
"Jie",
""
]
] |
new_dataset
| 0.992893 |
2105.13634
|
Eman Alashwali
|
Eman Alashwali and Fatimah Alashwali
|
Saudi Parents' Privacy Concerns about Their Children's Smart Device
Applications
|
This is the author's version of the accepted manuscript at the
International Journal of Child-Computer Interaction. This is version 3 which
is similar to version 2 in content (we only removed redundant png images).
Version 2 and 3 contain major changes over version 1
| null | null | null |
cs.CR cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we investigate Saudi parents' privacy concerns regarding their
children's smart device applications (apps). To this end, we conducted a survey
and analysed 119 responses. Our results show that Saudi parents expressed a
high level of concern regarding their children's privacy when using smart
device apps. However, they expressed higher concerns about apps' content than
privacy issues such as apps' requests to access sensitive data. Furthermore,
parents' concerns are not in line with most of the children's installed apps,
which contain apps inappropriate for their age, require parental guidance, and
request access to sensitive data such as location. We also discuss several
aspects of Saudi parents' practices and concerns compared to those reported by
Western (mainly from the UK) and Chinese parents in previous reports. We found
interesting patterns and established new relationships. For example, Saudi and
Western parents show higher levels of privacy concerns than Chinese parents.
Finally, we tested 14 privacy practices and concerns against high versus low
socioeconomic classes (parents' education, technical background, and income) to
find whether there are significant differences between high and low classes (we
denote these differences by "digital divide"). Out of 42 tests (14 properties x
3 classes) we found significant differences between high and low classes in 7
tests only. While this is a positive trend overall, it is important to work on
bridging these gaps. The results of this paper provide key findings to identify
areas of improvement and recommendations, especially for Saudis, which can be
used by parents, developers, researchers, regulators, and policy makers.
|
[
{
"version": "v1",
"created": "Fri, 28 May 2021 07:20:50 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 03:51:06 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Apr 2022 01:09:16 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Alashwali",
"Eman",
""
],
[
"Alashwali",
"Fatimah",
""
]
] |
new_dataset
| 0.995089 |
2106.10331
|
Nirmalya Thakur
|
Nirmalya Thakur and Chia Y. Han
|
Exoskeleton-Based Multimodal Action and Movement Recognition:
Identifying and Developing the Optimal Boosted Learning Approach
| null |
Journal of Advances in Artificial Intelligence and Machine
Learning. 2021; Volume 1, Issue 1, Article 4
| null | null |
cs.RO cs.CY cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper makes two scientific contributions to the field of
exoskeleton-based action and movement recognition. First, it presents a novel
machine learning and pattern recognition-based framework that can detect a wide
range of actions and movements - walking, walking upstairs, walking downstairs,
sitting, standing, lying, stand to sit, sit to stand, sit to lie, lie to sit,
stand to lie, and lie to stand, with an overall accuracy of 82.63%. Second, it
presents a comprehensive comparative study of different learning approaches -
Random Forest, Artificial Neural Network, Decision Tree, Multiway Decision
Tree, Support Vector Machine, k-NN, Gradient Boosted Trees, Decision Stump,
AutoMLP, Linear Regression, Vector Linear Regression, Random Tree, Na\"ive
Bayes, Na\"ive Bayes (Kernel), Linear Discriminant Analysis, Quadratic
Discriminant Analysis, and Deep Learning applied to this framework. The
performance of each of these learning approaches was boosted by using the
AdaBoost algorithm, and the Cross Validation approach was used for training and
testing. The results show that in boosted form, the k-NN classifier outperforms
all the other boosted learning approaches and is, therefore, the optimal
learning method for this purpose. The results presented and discussed uphold
the importance of this work to contribute towards augmenting the abilities of
exoskeleton-based assisted and independent living of the elderly in the future
of Internet of Things-based living environments, such as Smart Homes. As a
specific use case, we also discuss how the findings of our work are relevant
for augmenting the capabilities of the Hybrid Assistive Limb exoskeleton, a
highly functional lower limb exoskeleton.
|
[
{
"version": "v1",
"created": "Fri, 18 Jun 2021 19:43:54 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 16:28:26 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Thakur",
"Nirmalya",
""
],
[
"Han",
"Chia Y.",
""
]
] |
new_dataset
| 0.999212 |
2107.14122
|
Muhammad Cheema
|
Punam Biswas, Tanzima Hashem, Muhammad Aamir Cheema
|
Safest Nearby Neighbor Queries in Road Networks (Full Version)
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional route planning and k nearest neighbors queries only consider
distance or travel time and ignore road safety altogether. However, many
travellers prefer to avoid risky or unpleasant road conditions such as roads
with high crime rates (e.g., robberies, kidnapping, riots, etc.) and bumpy
roads. To facilitate safe travel, we introduce a novel query for road networks
called the k safest nearby neighbors (kSNN) query. Given a query location
$v_l$, a distance constraint $d_c$ and a point of interest $p_i$, we define the
safest path from $v_l$ to $p_i$ as the path with the highest path safety score
among all the paths from $v_l$ to $p_i$ with length less than $d_c$. The path
safety score is computed considering the road safety of each road segment on
the path. Given a query location $v_l$, a distance constraint $d_c$ and a set
of POIs P, a kSNN query returns k POIs with the k highest path safety scores in
P along with their respective safest paths from the query location. We develop
two novel indexing structures called Ct-tree and a safety score based Voronoi
diagram (SNVD). We propose two efficient query processing algorithms each
exploiting one of the proposed indexes to effectively refine the search space
using the properties of the index. Our extensive experimental study on real
datasets demonstrates that our solution is on average an order of magnitude
faster than the baselines.
|
[
{
"version": "v1",
"created": "Thu, 29 Jul 2021 15:48:12 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 03:36:48 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Biswas",
"Punam",
""
],
[
"Hashem",
"Tanzima",
""
],
[
"Cheema",
"Muhammad Aamir",
""
]
] |
new_dataset
| 0.99383 |
2110.14340
|
Kazuaki Matsumura
|
Kazuaki Matsumura, Simon Garcia De Gonzalo, Antonio J. Pe\~na
|
JACC: An OpenACC Runtime Framework with Kernel-Level and Multi-GPU
Parallelization
|
Extended version of a paper to appear in: Proceedings of the 28th
IEEE International Conference on High Performance Computing, Data, and
Analytics (HiPC), December 17-18, 2021
| null |
10.1109/HiPC53243.2021.00032
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development in computing technology has paved the way for
directive-based programming models towards a principal role in maintaining
software portability of performance-critical applications. Efforts on such
models involve a least engineering cost for enabling computational acceleration
on multiple architectures while programmers are only required to add meta
information upon sequential code. Optimizations for obtaining the best possible
efficiency, however, are often challenging. The insertions of directives by the
programmer can lead to side-effects that limit the compiler
optimizations possible, which could result in performance degradation. This is
exacerbated when targeting multi-GPU systems, as pragmas do not automatically
adapt to such systems, and require expensive and time-consuming code adjustment
by programmers.
This paper introduces JACC, an OpenACC runtime framework which enables the
dynamic extension of OpenACC programs by serving as a transparent layer between
the program and the compiler. We add a versatile code-translation method for
multi-device utilization by which manually-optimized applications can be
distributed automatically while keeping original code structure and
parallelism. We show in some cases nearly linear scaling on the part of kernel
execution with the NVIDIA V100 GPUs. While adaptively using multi-GPUs, the
resulting performance improvements amortize the latency of GPU-to-GPU
communications.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 10:43:48 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Dec 2021 14:05:57 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Apr 2022 12:43:52 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Matsumura",
"Kazuaki",
""
],
[
"De Gonzalo",
"Simon Garcia",
""
],
[
"Peña",
"Antonio J.",
""
]
] |
new_dataset
| 0.988474 |
2202.08758
|
Ziyin Ma
|
Ziyin Ma and Changjae Oh
|
A Wavelet-based Dual-stream Network for Underwater Image Enhancement
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a wavelet-based dual-stream network that addresses color cast and
blurry details in underwater images. We handle these artifacts separately by
decomposing an input image into multiple frequency bands using discrete wavelet
transform, which generates the downsampled structure image and detail images.
These sub-band images are used as input to our dual-stream network that
incorporates two sub-networks: the multi-color space fusion network and the
detail enhancement network. The multi-color space fusion network takes the
decomposed structure image as input and estimates the color corrected output by
employing the feature representations from diverse color spaces of the input.
The detail enhancement network addresses the blurriness of the original
underwater image by improving the image details from high-frequency sub-bands.
We validate the proposed method on both real-world and synthetic underwater
datasets and show the effectiveness of our model in color correction and blur
removal with low computational complexity.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 16:57:25 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 15:30:28 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Ma",
"Ziyin",
""
],
[
"Oh",
"Changjae",
""
]
] |
new_dataset
| 0.995995 |
2203.07736
|
Yi Cheng
|
Yi Cheng, Li Kuang
|
CSRS: Code Search with Relevance Matching and Semantic Matching
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Developers often search and reuse existing code snippets in the process of
software development. Code search aims to retrieve relevant code snippets from
a codebase according to natural language queries entered by the developer. Up
to now, researchers have already proposed information retrieval (IR) based
methods and deep learning (DL) based methods. The IR-based methods focus on
keyword matching, that is to rank codes by relevance between queries and code
snippets, while DL-based methods focus on capturing the semantic correlations.
However, the existing methods do not consider capturing two matching signals
simultaneously. Therefore, in this paper, we propose CSRS, a code search model
with relevance matching and semantic matching. CSRS comprises (1) an embedding
module containing convolution kernels of different sizes which can extract
n-gram embeddings of queries and codes, (2) a relevance matching module that
measures lexical matching signals, and (3) a co-attention based semantic
matching module to capture the semantic correlation. We train and evaluate CSRS
on a dataset with 18.22M and 10k code snippets. The experimental results
demonstrate that CSRS achieves an MRR of 0.614, which outperforms two
state-of-the-art models DeepCS and CARLCS-CNN by 33.77% and 18.53%
respectively. In addition, we also conducted several experiments to prove the
effectiveness of each component of CSRS.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 09:10:18 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Mar 2022 06:43:18 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Mar 2022 07:58:02 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Apr 2022 06:56:00 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Cheng",
"Yi",
""
],
[
"Kuang",
"Li",
""
]
] |
new_dataset
| 0.994882 |
2203.08075
|
Xiao Liu
|
Xiao Liu, Da Yin, Yansong Feng, Dongyan Zhao
|
Things not Written in Text: Exploring Spatial Commonsense from Visual
Signals
|
Accepted by ACL 2022 main conference
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Spatial commonsense, the knowledge about spatial position and relationship
between objects (like the relative size of a lion and a girl, and the position
of a boy relative to a bicycle when cycling), is an important part of
commonsense knowledge. Although pretrained language models (PLMs) succeed in
many NLP tasks, they are shown to be ineffective in spatial commonsense
reasoning. Starting from the observation that images are more likely to exhibit
spatial commonsense than texts, we explore whether models with visual signals
learn more spatial commonsense than text-based PLMs. We propose a spatial
commonsense benchmark that focuses on the relative scales of objects, and the
positional relationship between people and objects under different actions. We
probe PLMs and models with visual signals, including vision-language pretrained
models and image synthesis models, on this benchmark, and find that image
synthesis models are more capable of learning accurate and consistent spatial
knowledge than other models. The spatial knowledge from image synthesis models
also helps in natural language understanding tasks that require spatial
commonsense.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 17:02:30 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 08:01:45 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Liu",
"Xiao",
""
],
[
"Yin",
"Da",
""
],
[
"Feng",
"Yansong",
""
],
[
"Zhao",
"Dongyan",
""
]
] |
new_dataset
| 0.956431 |
2204.12575
|
Tiago Brito
|
Tiago Brito, Pedro Lopes, Nuno Santos and Jos\'e Fragoso Santos
|
Wasmati: An Efficient Static Vulnerability Scanner for WebAssembly
|
Computers & Security
| null |
10.1016/j.cose.2022.102745
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
WebAssembly is a new binary instruction format that allows targeted compiled
code written in high-level languages to be executed with near-native speed by
the browser's JavaScript engine. However, given that WebAssembly binaries can
be compiled from unsafe languages like C/C++, classical code vulnerabilities
such as buffer overflows or format strings can be transferred over from the
original programs down to the cross-compiled binaries. As a result, this
possibility of incorporating vulnerabilities in WebAssembly modules has widened
the attack surface of modern web applications. This paper presents Wasmati, a
static analysis tool for finding security vulnerabilities in WebAssembly
binaries. It is based on the generation of a code property graph (CPG), a
program representation previously adopted for detecting vulnerabilities in
various languages but hitherto unapplied to WebAssembly. We formalize the
definition of CPG for WebAssembly, introduce techniques to generate CPG for
complex WebAssembly, and present four different query specification languages
for finding vulnerabilities by traversing a program's CPG. We implemented ten
queries capturing different vulnerability types and extensively tested Wasmati
on four heterogeneous datasets. We show that Wasmati can scale the generation
of CPGs for large real-world applications and can efficiently find
vulnerabilities for all our query types. We have also tested our tool on
WebAssembly binaries collected in the wild and identified several potential
vulnerabilities, some of which we have manually confirmed to exist unless the
enclosing application properly sanitizes the interaction with such affected
binaries.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 20:26:35 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Brito",
"Tiago",
""
],
[
"Lopes",
"Pedro",
""
],
[
"Santos",
"Nuno",
""
],
[
"Santos",
"José Fragoso",
""
]
] |
new_dataset
| 0.984579 |
2204.12587
|
Mithun Das
|
Mithun Das and Somnath Banerjee and Animesh Mukherjee
|
hate-alert@DravidianLangTech-ACL2022: Ensembling Multi-Modalities for
Tamil TrollMeme Classification
|
Accepted at ACL 2022 DravidianLangTech Workshop
| null | null | null |
cs.MM cs.CL cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media platforms often act as breeding grounds for various forms of
trolling or malicious content targeting users or communities. One way of
trolling users is by creating memes, which in most cases unites an image with a
short piece of text embedded on top of it. The situation is more complex for
multilingual (e.g., Tamil) memes due to the lack of benchmark datasets and
models. We explore several models to detect Troll memes in Tamil based on the
shared task, "Troll Meme Classification in DravidianLangTech2022" at ACL-2022.
We observe that while the text-based model MURIL performs better for Non-troll meme
classification, the image-based model VGG16 performs better for Troll-meme
classification. Further, fusing these two modalities helps us achieve stable
outcomes in both classes. Our fusion model achieved a 0.561 weighted average F1
score and ranked second in this task.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 17:53:39 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Das",
"Mithun",
""
],
[
"Banerjee",
"Somnath",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.988511 |
2204.12605
|
Wei-Chi Chen
|
Shih-Chun Lin, Chia-Hung Lin, and Wei-Chi Chen
|
Zero-Touch Network on Industrial IoT: An End-to-End Machine Learning
Approach
|
Submitted for publication in the IEEE Network
| null | null | null |
cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Industry 4.0-enabled smart factory is expected to realize the next revolution
for manufacturers. Although artificial intelligence (AI) technologies have
improved productivity, current use cases belong to small-scale and single-task
operations. To unbound the potential of smart factory, this paper develops
zero-touch network systems for intelligent manufacturing and facilitates
distributed AI applications in both training and inferring stages in a
large-scale manner. The open radio access network (O-RAN) architecture is first
introduced for the zero-touch platform to enable globally controlling
communications and computation infrastructure capability in the field. The
designed serverless framework allows intelligent and efficient learning
assignments and resource allocations. Hence, requested learning tasks can be
assigned to appropriate robots, and the underlying infrastructure can be used
to support the learning tasks without expert knowledge. Moreover, due to the
proposed network system's flexibility, powerful AI-enabled networking
algorithms can be utilized to ensure service-level agreements and superior
performances for factory workloads. Finally, three open research directions of
backward compatibility, end-to-end enhancements, and cybersecurity are
discussed for zero-touch smart factory.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 21:41:43 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Lin",
"Shih-Chun",
""
],
[
"Lin",
"Chia-Hung",
""
],
[
"Chen",
"Wei-Chi",
""
]
] |
new_dataset
| 0.979687 |
2204.12617
|
Wendy Cano
|
Maria Barriga Beltran, Wendy Cano, Apichaya Chumsai, Haik Koyosan,
Debbie Lemus, Sandra Tenorio, Jongwook Woo
|
Spread of COVID-19: Adult Detention Facilities in LA County
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We analyze the spread of COVID-19 cases within adult detention facilities in
Los Angeles (LA) county. Throughout the analysis we review the data to explore
the range of positive cases in each center and see the percentage of people who
were positive for COVID-19 against the amount of people who were tested.
Additionally, we see if there is any correlation between the surrounding
community of each detention center and the number of positive cases in each
center and explore the protocols in place at each detention center. We use the
cloud visualization tool SAP Analytics Cloud (SAC) with the data from the
California government website through adult detention facilities in LA County.
We found that (1) the number of confirmed cases at the facilities and the
surrounding communities are not related, (2) the data does not represent all
positive cases at the facility, and (3) there are not enough tests at the
facilities.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 22:27:01 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Beltran",
"Maria Barriga",
""
],
[
"Cano",
"Wendy",
""
],
[
"Chumsai",
"Apichaya",
""
],
[
"Koyosan",
"Haik",
""
],
[
"Lemus",
"Debbie",
""
],
[
"Tenorio",
"Sandra",
""
],
[
"Woo",
"Jongwook",
""
]
] |
new_dataset
| 0.980523 |
2204.12633
|
Atul Kr. Ojha Dr
|
Mohit Raj, Shyam Ratan, Deepak Alok, Ritesh Kumar, Atul Kr. Ojha
|
Developing Universal Dependency Treebanks for Magahi and Braj
|
11 pages, Workshop on Parsing and its Applications for Indian
Languages (PAIL-2021) at ICON 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we discuss the development of treebanks for two low-resourced
Indian languages - Magahi and Braj based on the Universal Dependencies
framework. The Magahi treebank contains 945 sentences and the Braj treebank around
500 sentences marked with their lemmas, part-of-speech, morphological features
and universal dependencies. This paper gives a description of the different
dependency relationships found in the two languages and gives some statistics of
the two treebanks. The dataset will be made publicly available on Universal
Dependency (UD) repository
(https://github.com/UniversalDependencies/UD_Magahi-MGTB/tree/master) in the
next (v2.10) release.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 23:43:41 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Raj",
"Mohit",
""
],
[
"Ratan",
"Shyam",
""
],
[
"Alok",
"Deepak",
""
],
[
"Kumar",
"Ritesh",
""
],
[
"Ojha",
"Atul Kr.",
""
]
] |
new_dataset
| 0.999193 |
2204.12648
|
Spandan Garg
|
Roshanak Zilouchian Moghaddam, Spandan Garg, Colin B. Clement, Yevhen
Mohylevskyy, Neel Sundaresan
|
Generating Examples From CLI Usage: Can Transformers Help?
| null | null | null | null |
cs.SE cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Continuous evolution in modern software often causes documentation,
tutorials, and examples to be out of sync with changing interfaces and
frameworks. Relying on outdated documentation and examples can lead programs to
fail or be less efficient or even less secure. In response, programmers need to
regularly turn to other resources on the web such as StackOverflow for examples
to guide them in writing software. We recognize that this inconvenient,
error-prone, and expensive process can be improved by using machine learning
applied to software usage data. In this paper, we present our practical system
which uses machine learning on large-scale telemetry data and documentation
corpora, generating appropriate and complex examples that can be used to
improve documentation. We discuss both feature-based and transformer-based
machine learning approaches and demonstrate that our system achieves 100%
coverage for the used functionalities in the product, providing up-to-date
examples upon every release and reduces the numbers of PRs submitted by
software owners writing and editing documentation by >68%. We also share
valuable lessons learnt during the 3 years that our production quality system
has been deployed for Azure Cloud Command Line Interface (Azure CLI).
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 01:23:12 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Moghaddam",
"Roshanak Zilouchian",
""
],
[
"Garg",
"Spandan",
""
],
[
"Clement",
"Colin B.",
""
],
[
"Mohylevskyy",
"Yevhen",
""
],
[
"Sundaresan",
"Neel",
""
]
] |
new_dataset
| 0.98954 |
2204.12667
|
Inkyu Shin
|
Inkyu Shin, Yi-Hsuan Tsai, Bingbing Zhuang, Samuel Schulter, Buyu Liu,
Sparsh Garg, In So Kweon, Kuk-Jin Yoon
|
MM-TTA: Multi-Modal Test-Time Adaptation for 3D Semantic Segmentation
|
CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Test-time adaptation approaches have recently emerged as a practical solution
for handling domain shift without access to the source domain data. In this
paper, we propose and explore a new multi-modal extension of test-time
adaptation for 3D semantic segmentation. We find that directly applying
existing methods usually results in performance instability at test time
because multi-modal input is not considered jointly. To design a framework that
can take full advantage of multi-modality, where each modality provides
regularized self-supervisory signals to other modalities, we propose two
complementary modules within and across the modalities. First, Intra-modal
Pseudolabel Generation (Intra-PG) is introduced to obtain reliable pseudo
labels within each modality by aggregating information from two models that are
both pre-trained on source data but updated with target data at different
paces. Second, Inter-modal Pseudo-label Refinement (Inter-PR) adaptively
selects more reliable pseudo labels from different modalities based on a
proposed consistency scheme. Experiments demonstrate that our regularized
pseudo labels produce stable self-learning signals in numerous multi-modal
test-time adaptation scenarios for 3D semantic segmentation. Visit our project
website at https://www.nec-labs.com/~mas/MM-TTA.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 02:28:12 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Shin",
"Inkyu",
""
],
[
"Tsai",
"Yi-Hsuan",
""
],
[
"Zhuang",
"Bingbing",
""
],
[
"Schulter",
"Samuel",
""
],
[
"Liu",
"Buyu",
""
],
[
"Garg",
"Sparsh",
""
],
[
"Kweon",
"In So",
""
],
[
"Yoon",
"Kuk-Jin",
""
]
] |
new_dataset
| 0.999749 |
2204.12693
|
Lifeng Jin
|
Lifeng Jin, Kun Xu, Linfeng Song, Dong Yu
|
Distant finetuning with discourse relations for stance classification
|
NLPCC 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Approaches for the stance classification task, an important task for
understanding argumentation in debates and detecting fake news, have been
relying on models which deal with individual debate topics. In this paper, in
order to train a system independent from topics, we propose a new method to
extract data with silver labels from raw text to finetune a model for stance
classification. The extraction relies on specific discourse relation
information, which is shown as a reliable and accurate source for providing
stance information. We also propose a 3-stage training framework where the
noise level in the data used for finetuning decreases over different stages
going from the most noisy to the least noisy. Detailed experiments show that
the automatically annotated dataset as well as the 3-stage training help
improve model performance in stance classification. Our approach ranks 1st
among 26 competing teams in the stance classification track of the NLPCC 2021
shared task Argumentative Text Understanding for AI Debater, which confirms the
effectiveness of our approach.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 04:24:35 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Jin",
"Lifeng",
""
],
[
"Xu",
"Kun",
""
],
[
"Song",
"Linfeng",
""
],
[
"Yu",
"Dong",
""
]
] |
new_dataset
| 0.987018 |
2204.12701
|
Tyler Saxton
|
Tyler Saxton
|
Mapping suburban bicycle lanes using street scene images and deep
learning
|
77 pages, 24 figures. A minor thesis submitted in partial fulfilment
of the requirements for the degree of Master of Data Science
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
On-road bicycle lanes improve safety for cyclists, and encourage
participation in cycling for active transport and recreation. With many local
authorities responsible for portions of the infrastructure, official maps and
datasets of bicycle lanes may be out-of-date and incomplete. Even
"crowdsourced" databases may have significant gaps, especially outside popular
metropolitan areas. This thesis presents a method to create a map of bicycle
lanes in a survey area by taking sample street scene images from each road, and
then applying a deep learning model that has been trained to recognise bicycle
lane symbols. The list of coordinates where bicycle lane markings are detected
is then correlated to geospatial data about the road network to record bicycle
lane routes. The method was applied to successfully build a map for a survey
area in the outer suburbs of Melbourne. It was able to identify bicycle lanes
not previously recorded in the official state government dataset,
OpenStreetMap, or the "biking" layer of Google Maps.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 04:56:26 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Saxton",
"Tyler",
""
]
] |
new_dataset
| 0.999688 |
2204.12717
|
Toru Saito
|
Genya Ogawa (1), Toru Saito (1), Noriyuki Aoi (2) ((1) Subaru
Corporation, (2) Signate Inc.)
|
Dataset for Robust and Accurate Leading Vehicle Velocity Recognition
|
5 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognition of the surrounding environment using a camera is an important
technology in Advanced Driver-Assistance Systems and Autonomous Driving, and
recognition technology is often solved by machine learning approaches such as
deep learning in recent years. Machine learning requires datasets for learning
and evaluation. To develop robust recognition technology in the real world, in
addition to normal driving environment, data in environments that are difficult
for cameras such as rainy weather or nighttime are essential. We have
constructed a dataset with which one can benchmark the technology, targeting the
velocity recognition of the leading vehicle. This task is an important one for
the Advanced Driver-Assistance Systems and Autonomous Driving. The dataset is
available at https://signate.jp/competitions/657
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 06:06:54 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Ogawa",
"Genya",
""
],
[
"Saito",
"Toru",
""
],
[
"Aoi",
"Noriyuki",
""
]
] |
new_dataset
| 0.999839 |
2204.12750
|
Dongyoon Hwang
|
Hojoon Lee, Dongyoon Hwang, Hyunseung Kim, Byungkun Lee, Jaegul Choo
|
DraftRec: Personalized Draft Recommendation for Winning in Multi-Player
Online Battle Arena Games
|
Accepted to WWW 2022
| null |
10.1145/3485447.3512278
| null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a personalized character recommendation system for
Multiplayer Online Battle Arena (MOBA) games which are considered as one of the
most popular online video game genres around the world. When playing MOBA
games, players go through a draft stage, where they alternately select a
virtual character to play. When drafting, players select characters by not only
considering their character preferences, but also the synergy and competence of
their team's character combination. However, the complexity of drafting makes it
difficult for beginners to choose the appropriate characters based on the
characters of their team while considering their own champion preferences. To
alleviate this problem, we propose DraftRec, a novel hierarchical model which
recommends characters by considering each player's champion preferences and the
interaction between the players. DraftRec consists of two networks: the player
network and the match network. The player network captures the individual
player's champion preference, and the match network integrates the complex
relationship between the players and their respective champions. We train and
evaluate our model on a manually collected dataset of 280,000 matches of
League of Legends and a publicly available 50,000 matches of Dota2.
Empirically, our
method achieved state-of-the-art performance in character recommendation and
match outcome prediction task. Furthermore, a comprehensive user survey
confirms that DraftRec provides convincing and satisfying recommendations. Our
code and dataset are available at https://github.com/dojeon-ai/DraftRec.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 07:46:17 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Lee",
"Hojoon",
""
],
[
"Hwang",
"Dongyoon",
""
],
[
"Kim",
"Hyunseung",
""
],
[
"Lee",
"Byungkun",
""
],
[
"Choo",
"Jaegul",
""
]
] |
new_dataset
| 0.98863 |
2204.12802
|
Nan Wu
|
Nan Wu, Chaofan Wang
|
GTNet: A Tree-Based Deep Graph Learning Architecture
|
Submitted to IEEE Transactions on Neural Networks and Learning
Systems
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Graph Tree Networks (GTNets), a deep graph learning architecture
with a new general message passing scheme that originates from the tree
representation of graphs. In the tree representation, messages propagate upward
from the leaf nodes to the root node, and each node preserves its initial
information prior to receiving information from its child nodes (neighbors). We
formulate a general propagation rule following the nature of message passing in
the tree to update a node's feature by aggregating its initial feature and its
neighbor nodes' updated features. Two graph representation learning models are
proposed within this GTNet architecture - Graph Tree Attention Network (GTAN)
and Graph Tree Convolution Network (GTCN), with experimentally demonstrated
state-of-the-art performance on several popular benchmark datasets. Unlike the
vanilla Graph Attention Network (GAT) and Graph Convolution Network (GCN) which
have the "over-smoothing" issue, the proposed GTAN and GTCN models can go deep
as demonstrated by comprehensive experiments and rigorous theoretical analysis.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 09:43:14 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Wu",
"Nan",
""
],
[
"Wang",
"Chaofan",
""
]
] |
new_dataset
| 0.997448 |
2204.12811
|
Mike Zhang
|
Mike Zhang, Kristian N{\o}rgaard Jensen, Sif Dam Sonniks, Barbara
Plank
|
SkillSpan: Hard and Soft Skill Extraction from English Job Postings
|
Accepted to NAACL 2022 Main conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Skill Extraction (SE) is an important and widely-studied task useful to gain
insights into labor market dynamics. However, there is a lacuna of datasets and
annotation guidelines; available datasets are few and contain crowd-sourced
labels on the span-level or labels from a predefined skill inventory. To
address this gap, we introduce SKILLSPAN, a novel SE dataset consisting of
14.5K sentences and over 12.5K annotated spans. We release its respective
guidelines created over three different sources annotated for hard and soft
skills by domain experts. We introduce a BERT baseline (Devlin et al., 2019).
To improve upon this baseline, we experiment with language models that are
optimized for long spans (Joshi et al., 2020; Beltagy et al., 2020), continuous
pre-training on the job posting domain (Han and Eisenstein, 2019; Gururangan et
al., 2020), and multi-task learning (Caruana, 1997). Our results show that the
domain-adapted models significantly outperform their non-adapted counterparts,
and single-task outperforms multi-task learning.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 10:07:36 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Zhang",
"Mike",
""
],
[
"Jensen",
"Kristian Nørgaard",
""
],
[
"Sonniks",
"Sif Dam",
""
],
[
"Plank",
"Barbara",
""
]
] |
new_dataset
| 0.999333 |
2204.12817
|
Shan Zhang
|
Shan Zhang, Tianyi Wu, Sitong Wu, Guodong Guo
|
CATrans: Context and Affinity Transformer for Few-Shot Segmentation
|
Accepted by IJCAI 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Few-shot segmentation (FSS) aims to segment novel categories given scarce
annotated support images. The crux of FSS is how to aggregate dense
correlations between support and query images for query segmentation while
being robust to the large variations in appearance and context. To this end,
previous Transformer-based methods explore global consensus either on context
similarity or affinity map between support-query pairs. In this work, we
effectively integrate the context and affinity information via the proposed
novel Context and Affinity Transformer (CATrans) in a hierarchical
architecture. Specifically, the Relation-guided Context Transformer (RCT)
propagates context information from support to query images conditioned on more
informative support features. Based on the observation that a huge feature
distinction between support and query pairs brings barriers for context
knowledge transfer, the Relation-guided Affinity Transformer (RAT) measures
attention-aware affinity as auxiliary information for FSS, in which the
self-affinity is responsible for more reliable cross-affinity. We conduct
experiments to demonstrate the effectiveness of the proposed model,
outperforming the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 10:20:47 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Zhang",
"Shan",
""
],
[
"Wu",
"Tianyi",
""
],
[
"Wu",
"Sitong",
""
],
[
"Guo",
"Guodong",
""
]
] |
new_dataset
| 0.975777 |
2204.12935
|
Shuang Peng
|
Shuang Peng, Shuai Zhu, Minghui Yang, Haozhou Huang, Dan Liu, Zujie
Wen, Xuelian Li, Biao Fan
|
AdaCoach: A Virtual Coach for Training Customer Service Agents
|
5 pages
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of online business, customer service agents gradually
play a crucial role as an interface between the companies and their customers.
Most companies spend a lot of time and effort on hiring and training customer
service agents. To this end, we propose AdaCoach: A Virtual Coach for Training
Customer Service Agents, to promote the ability of newly hired service agents
before they get to work. AdaCoach is designed to simulate real customers who
seek help and actively initiate the dialogue with the customer service agents.
Besides, AdaCoach uses an automated dialogue evaluation model to score the
performance of the customer agent in the training process, which can provide
necessary assistance when the newly hired customer service agent encounters
problems. We apply recent NLP technologies to ensure efficient run-time
performance in the deployed system. To the best of our knowledge, this is the
first system that trains the customer service agent through human-computer
interaction. Until now, the system has already supported more than 500,000
simulation training sessions and cultivated over 1000 qualified customer
service agents.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 13:39:27 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Peng",
"Shuang",
""
],
[
"Zhu",
"Shuai",
""
],
[
"Yang",
"Minghui",
""
],
[
"Huang",
"Haozhou",
""
],
[
"Liu",
"Dan",
""
],
[
"Wen",
"Zujie",
""
],
[
"Li",
"Xuelian",
""
],
[
"Fan",
"Biao",
""
]
] |
new_dataset
| 0.999689 |
2204.12974
|
Peng Wang
|
Yiqi Gao, Xinglin Hou, Yuanmeng Zhang, Tiezheng Ge, Yuning Jiang, Peng
Wang
|
CapOnImage: Context-driven Dense-Captioning on Image
|
13 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing image captioning systems are dedicated to generating narrative
captions for images, which are spatially detached from the image in
presentation. However, texts can also be used as decorations on the image to
highlight the key points and increase the attractiveness of images. In this
work, we introduce a new task called captioning on image (CapOnImage), which
aims to generate dense captions at different locations of the image based on
contextual information. To fully exploit the surrounding visual context to
generate the most suitable caption for each location, we propose a multi-modal
pre-training model with multi-level pre-training tasks that progressively learn
the correspondence between texts and image locations from easy to difficult.
Since the model may generate redundant captions for nearby locations, we
further enhance the location embedding with neighbor locations as context. For
this new task, we also introduce a large-scale benchmark called CapOnImage2M,
which contains 2.1 million product images, each with an average of 4.8
spatially localized captions. Compared with other image captioning model
variants, our model achieves the best results in both captioning accuracy and
diversity aspects. We will make code and datasets public to facilitate future
research.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 14:40:31 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Gao",
"Yiqi",
""
],
[
"Hou",
"Xinglin",
""
],
[
"Zhang",
"Yuanmeng",
""
],
[
"Ge",
"Tiezheng",
""
],
[
"Jiang",
"Yuning",
""
],
[
"Wang",
"Peng",
""
]
] |
new_dataset
| 0.977859 |
2204.13006
|
Lucia Cascone
|
Lucia Cascone and Riccardo Distasi and Michele Nappi
|
Ollivier-Ricci Curvature For Head Pose Estimation From a Single Image
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Head pose estimation is a crucial challenge for many real-world applications,
such as attention and human behavior analysis. This paper aims to estimate head
pose from a single image by applying notions of network curvature. In the real
world, many complex networks have groups of nodes that are well connected to
each other with significant functional roles. Similarly, the interactions of
facial landmarks can be represented as complex dynamic systems modeled by
weighted graphs. The functionalities of such systems are therefore
intrinsically linked to the topology and geometry of the underlying graph. In
this work, using the geometric notion of Ollivier-Ricci curvature (ORC) on
weighted graphs as input to the XGBoost regression model, we show that the
intrinsic geometric basis of ORC offers a natural approach to discovering
underlying common structure within a pool of poses. Experiments on the BIWI,
AFLW2000 and Pointing'04 datasets show that the ORC_XGB method performs well
compared to state-of-the-art methods, both landmark-based and image-only.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 15:20:26 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Cascone",
"Lucia",
""
],
[
"Distasi",
"Riccardo",
""
],
[
"Nappi",
"Michele",
""
]
] |
new_dataset
| 0.994856 |
2204.13009
|
Hao Wang
|
Hao Wang
|
Extremal GloVe: Theoretically Accurate Distributed Word Embedding by
Tail Inference
|
ICCIP 2021
| null |
10.1145/3507971.3507972
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed word embeddings such as Word2Vec and GloVe have been widely
adopted in industrial context settings. Major technical applications of GloVe
include recommender systems and natural language processing. The fundamental
theory behind GloVe relies on the selection of a weighting function in the
weighted least squares formulation that computes the powered ratio of word
occurrence count and the maximum word count in the corpus. However, the initial
formulation of GloVe is not theoretically sound in two aspects: the selection
of the weighting function and of its power exponent is ad hoc. In this
paper, we utilize the theory of extreme value analysis and propose a
theoretically accurate version of GloVe. By reformulating the weighted least
squares loss function as the expected loss function and accurately choosing the
power exponent, we create a theoretically accurate version of GloVe. We
demonstrate the competitiveness of our algorithm and show that the initial
formulation of GloVe with the suggested optimal parameter can be viewed as a
special case of our paradigm.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 15:29:10 GMT"
}
] | 2022-04-28T00:00:00 |
[
[
"Wang",
"Hao",
""
]
] |
new_dataset
| 0.991024 |
2003.12841
|
Simone Fontana
|
Simone Fontana, Daniele Cattaneo, Augusto Luis Ballardini, Matteo
Vaghi and Domenico Giorgio Sorrenti
|
A Benchmark for Point Clouds Registration Algorithms
| null |
Robotics and Autonomous Systems, 2021, 140: 103734
|
10.1016/j.robot.2021.103734
| null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Point clouds registration is a fundamental step of many point clouds
processing pipelines; however, most algorithms are tested on data that are
collected ad-hoc and not shared with the research community. These data often
cover only a very limited set of use cases; therefore, the results cannot be
generalised. Public datasets proposed until now, taken individually, cover only
a few kinds of environment and mostly a single sensor. For these reasons, we
developed a benchmark, for localization and mapping applications, using
multiple publicly available datasets. In this way, we are able to cover many
kinds of environment and many kinds of sensor that can produce point clouds.
Furthermore, the ground truth has been thoroughly inspected and evaluated to
ensure its quality. For some of the datasets, the accuracy of the ground truth
measuring system was not reported by the original authors, therefore we
estimated it with our own novel method, based on an iterative registration
algorithm. Along with the data, we provide a broad set of registration
problems, chosen to cover different types of initial misalignment, various
degrees of overlap, and different kinds of registration problems. Lastly, we
propose a metric to measure the performances of registration algorithms: it
combines the commonly used rotation and translation errors together, to allow
an objective comparison of the alignments. This work aims at encouraging
authors to use a public and shared benchmark, instead of data collected ad-hoc,
to ensure objectivity and repeatability, two fundamental characteristics in any
scientific field.
|
[
{
"version": "v1",
"created": "Sat, 28 Mar 2020 17:02:26 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Apr 2020 09:11:23 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Apr 2022 12:23:52 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Fontana",
"Simone",
""
],
[
"Cattaneo",
"Daniele",
""
],
[
"Ballardini",
"Augusto Luis",
""
],
[
"Vaghi",
"Matteo",
""
],
[
"Sorrenti",
"Domenico Giorgio",
""
]
] |
new_dataset
| 0.997228 |
2007.13960
|
Wei Jing
|
En Yen Puang and Keng Peng Tee and Wei Jing
|
KOVIS: Keypoint-based Visual Servoing with Zero-Shot Sim-to-Real
Transfer for Robotics Manipulation
|
Accepted by IROS 2020
| null |
10.1109/IROS45743.2020.9341370
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present KOVIS, a novel learning-based, calibration-free visual servoing
method for fine robotic manipulation tasks with eye-in-hand stereo camera
system. We train the deep neural network only in the simulated environment; and
the trained model could be directly used for real-world visual servoing tasks.
KOVIS consists of two networks. The first keypoint network learns the keypoint
representation from the image using an autoencoder. Then the visual
servoing network learns the motion based on keypoints extracted from the camera
image. The two networks are trained end-to-end in the simulated environment by
self-supervised learning without manual data labeling. After training with data
augmentation, domain randomization, and adversarial examples, we are able to
achieve zero-shot sim-to-real transfer to real-world robotic manipulation
tasks. We demonstrate the effectiveness of the proposed method in both
simulated environment and real-world experiment with different robotic
manipulation tasks, including grasping, peg-in-hole insertion with 4mm
clearance, and M13 screw insertion. The demo video is available at
http://youtu.be/gfBJBR2tDzA
|
[
{
"version": "v1",
"created": "Tue, 28 Jul 2020 02:53:28 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Puang",
"En Yen",
""
],
[
"Tee",
"Keng Peng",
""
],
[
"Jing",
"Wei",
""
]
] |
new_dataset
| 0.998621 |
2008.03946
|
Yinhe Zheng Dr.
|
Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu,
and Minlie Huang
|
A Large-Scale Chinese Short-Text Conversation Dataset
|
Accepted by NLPCC 2020 (Best Student Paper)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The advancements of neural dialogue generation models show promising results
on modeling short-text conversations. However, training such models usually
needs a large-scale high-quality dialogue corpus, which is hard to access. In
this paper, we present a large-scale cleaned Chinese conversation dataset,
LCCC, which contains a base version (6.8 million dialogues) and a large version
(12.0 million dialogues). The quality of our dataset is ensured by a rigorous
data cleaning pipeline, which is built based on a set of rules and a classifier
that is trained on manually annotated 110K dialogue pairs. We also release
pre-training dialogue models which are trained on LCCC-base and LCCC-large
respectively. The cleaned dataset and the pre-training models will facilitate
the research of short-text conversation modeling. All the models and datasets
are available at https://github.com/thu-coai/CDial-GPT.
|
[
{
"version": "v1",
"created": "Mon, 10 Aug 2020 08:12:49 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 07:07:56 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Wang",
"Yida",
""
],
[
"Ke",
"Pei",
""
],
[
"Zheng",
"Yinhe",
""
],
[
"Huang",
"Kaili",
""
],
[
"Jiang",
"Yong",
""
],
[
"Zhu",
"Xiaoyan",
""
],
[
"Huang",
"Minlie",
""
]
] |
new_dataset
| 0.999423 |
2010.04894
|
Ahmad Esmaeili
|
Ahmad Esmaeili and John C. Gallagher and John A. Springer and Eric T.
Matson
|
HAMLET: A Hierarchical Agent-based Machine Learning Platform
| null | null |
10.1145/3530191
| null |
cs.LG cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hierarchical Multi-Agent Systems provide convenient and relevant ways to
analyze, model, and simulate complex systems composed of a large number of
entities that interact at different levels of abstraction. In this paper, we
introduce HAMLET (Hierarchical Agent-based Machine LEarning plaTform), a hybrid
machine learning platform based on hierarchical multi-agent systems, to
facilitate the research and democratization of geographically and/or locally
distributed machine learning entities. The proposed system models a machine
learning solution as a hypergraph and autonomously sets up a multi-level
structure of heterogeneous agents based on their innate capabilities and
learned skills. HAMLET aids the design and management of machine learning
systems and provides analytical capabilities for research communities to assess
the existing and/or new algorithms/datasets through flexible and customizable
queries. The proposed hybrid machine learning platform does not assume
restrictions on the type of learning algorithms/datasets and is theoretically
proven to be sound and complete with polynomial computational requirements.
Additionally, it is examined empirically on 120 training and four generalized
batch testing tasks performed on 24 machine learning algorithms and 9 standard
datasets. The provided experimental results not only establish confidence in
the platform's consistency and correctness but also demonstrate its testing and
analytical capacity.
|
[
{
"version": "v1",
"created": "Sat, 10 Oct 2020 03:46:59 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Nov 2021 01:59:11 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Esmaeili",
"Ahmad",
""
],
[
"Gallagher",
"John C.",
""
],
[
"Springer",
"John A.",
""
],
[
"Matson",
"Eric T.",
""
]
] |
new_dataset
| 0.95404 |
2108.00355
|
Mo Shan
|
Mo Shan, Qiaojun Feng, You-Yi Jau, Nikolay Atanasov
|
ELLIPSDF: Joint Object Pose and Shape Optimization with a Bi-level
Ellipsoid and Signed Distance Function Description
|
Accepted by ICCV 2021
|
2021 IEEE/CVF International Conference on Computer Vision (ICCV),
Montreal, QC, Canada, pp. 5926-5935
|
10.1109/ICCV48922.2021.00589
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous systems need to understand the semantics and geometry of their
surroundings in order to comprehend and safely execute object-level task
specifications. This paper proposes an expressive yet compact model for joint
object pose and shape optimization, and an associated optimization algorithm to
infer an object-level map from multi-view RGB-D camera observations. The model
is expressive because it captures the identities, positions, orientations, and
shapes of objects in the environment. It is compact because it relies on a
low-dimensional latent representation of implicit object shape, allowing
onboard storage of large multi-category object maps. Different from other works
that rely on a single object representation format, our approach has a bi-level
object model that captures both the coarse level scale as well as the fine
level shape details. Our approach is evaluated on the large-scale real-world
ScanNet dataset and compared against state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 1 Aug 2021 03:07:31 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Shan",
"Mo",
""
],
[
"Feng",
"Qiaojun",
""
],
[
"Jau",
"You-Yi",
""
],
[
"Atanasov",
"Nikolay",
""
]
] |
new_dataset
| 0.999713 |
2111.04473
|
Fanny Silavong
|
Fran Silavong, Sean Moran, Antonios Georgiadis, Rohan Saphal, Robert
Otter
|
Senatus -- A Fast and Accurate Code-to-Code Recommendation Engine
|
Accepted to MSR 2022
| null |
10.1145/3524842.3527947
| null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning on source code (MLOnCode) is a popular research field that
has been driven by the availability of large-scale code repositories and the
development of powerful probabilistic and deep learning models for mining
source code. Code-to-code recommendation is a task in MLOnCode that aims to
recommend relevant, diverse and concise code snippets that usefully extend the
code currently being written by a developer in their development environment
(IDE). Code-to-code recommendation engines hold the promise of increasing
developer productivity by reducing context switching from the IDE and
increasing code-reuse. Existing code-to-code recommendation engines do not
scale gracefully to large codebases, exhibiting a linear growth in query time
as the code repository increases in size. In addition, existing code-to-code
recommendation engines fail to account for the global statistics of code
repositories in the ranking function, such as the distribution of code snippet
lengths, leading to sub-optimal retrieval results. We address both of these
weaknesses with \emph{Senatus}, a new code-to-code recommendation engine. At
the core of Senatus is \emph{De-Skew} LSH a new locality sensitive hashing
(LSH) algorithm that indexes the data for fast (sub-linear time) retrieval
while also counteracting the skewness in the snippet length distribution using
novel abstract syntax tree-based feature scoring and selection algorithms. We
evaluate Senatus and find the recommendations to be of higher quality than
competing baselines, while achieving faster search. For example on the
CodeSearchNet dataset Senatus improves performance by 31.21\% F1 and
147.9\emph{x} faster query time compared to Facebook Aroma. Senatus also
outperforms standard MinHash LSH by 29.2\% F1 and 51.02\emph{x} faster query
time.
|
[
{
"version": "v1",
"created": "Fri, 5 Nov 2021 16:56:28 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 09:46:42 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Silavong",
"Fran",
""
],
[
"Moran",
"Sean",
""
],
[
"Georgiadis",
"Antonios",
""
],
[
"Saphal",
"Rohan",
""
],
[
"Otter",
"Robert",
""
]
] |
new_dataset
| 0.999572 |
2201.08093
|
Nitin Saini
|
Nitin Saini, Elia Bonetto, Eric Price, Aamir Ahmad and Michael J.
Black
|
AirPose: Multi-View Fusion Network for Aerial 3D Human Pose and Shape
Estimation
| null | null |
10.1109/LRA.2022.3145494
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this letter, we present a novel markerless 3D human motion capture (MoCap)
system for unstructured, outdoor environments that uses a team of autonomous
unmanned aerial vehicles (UAVs) with on-board RGB cameras and computation.
Existing methods are limited by calibrated cameras and off-line processing.
Thus, we present the first method (AirPose) to estimate human pose and shape
using images captured by multiple extrinsically uncalibrated flying cameras.
AirPose itself calibrates the cameras relative to the person instead of relying
on any pre-calibration. It uses distributed neural networks running on each UAV
that communicate viewpoint-independent information with each other about the
person (i.e., their 3D shape and articulated pose). The person's shape and pose
are parameterized using the SMPL-X body model, resulting in a compact
representation, that minimizes communication between the UAVs. The network is
trained using synthetic images of realistic virtual environments, and
fine-tuned on a small set of real images. We also introduce an
optimization-based post-processing method (AirPose$^{+}$) for offline
applications that require higher MoCap quality. We make our method's code and
data available for research at
https://github.com/robot-perception-group/AirPose. A video describing the
approach and results is available at https://youtu.be/xLYe1TNHsfs.
|
[
{
"version": "v1",
"created": "Thu, 20 Jan 2022 09:46:20 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 20:37:45 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Saini",
"Nitin",
""
],
[
"Bonetto",
"Elia",
""
],
[
"Price",
"Eric",
""
],
[
"Ahmad",
"Aamir",
""
],
[
"Black",
"Michael J.",
""
]
] |
new_dataset
| 0.964834 |
2201.09367
|
Bailin Deng
|
Zhi Deng, Yang Liu, Hao Pan, Wassim Jabi, Juyong Zhang, Bailin Deng
|
Sketch2PQ: Freeform Planar Quadrilateral Mesh Design via a Single Sketch
|
To appear in IEEE Transactions on Visualization and Computer Graphics
| null | null | null |
cs.GR cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The freeform architectural modeling process often involves two important
stages: concept design and digital modeling. In the first stage, architects
usually sketch the overall 3D shape and the panel layout on a physical or
digital paper briefly. In the second stage, a digital 3D model is created using
the sketch as a reference. The digital model needs to incorporate geometric
requirements for its components, such as the planarity of panels due to
consideration of construction costs, which can make the modeling process more
challenging. In this work, we present a novel sketch-based system to bridge the
concept design and digital modeling of freeform roof-like shapes represented as
planar quadrilateral (PQ) meshes. Our system allows the user to sketch the
surface boundary and contour lines under axonometric projection and supports
the sketching of occluded regions. In addition, the user can sketch feature
lines to provide directional guidance to the PQ mesh layout. Given the 2D
sketch input, we propose a deep neural network to infer in real-time the
underlying surface shape along with a dense conjugate direction field, both of
which are used to extract the final PQ mesh. To train and validate our network,
we generate a large synthetic dataset that mimics architect sketching of
freeform quadrilateral patches. The effectiveness and usability of our system
are demonstrated with quantitative and qualitative evaluation as well as user
studies.
|
[
{
"version": "v1",
"created": "Sun, 23 Jan 2022 21:09:59 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Apr 2022 10:55:44 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2022 21:32:05 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Apr 2022 22:12:13 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Deng",
"Zhi",
""
],
[
"Liu",
"Yang",
""
],
[
"Pan",
"Hao",
""
],
[
"Jabi",
"Wassim",
""
],
[
"Zhang",
"Juyong",
""
],
[
"Deng",
"Bailin",
""
]
] |
new_dataset
| 0.950534 |
2202.11902
|
Karnati Venkata Naga Sreenivas
|
Klaus Jansen, Arindam Khan, Marvin Lira and K. V. N. Sreenivas
|
A PTAS for Packing Hypercubes into a Knapsack
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We study the d-dimensional hypercube knapsack problem where we are given a
set of d-dimensional hypercubes with associated profits, and a knapsack which
is a unit d-dimensional hypercube. The goal is to find an axis-aligned
non-overlapping packing of a subset of hypercubes such that the profit of the
packed hypercubes is maximized. For this problem, Harren (ICALP'06) gave an
algorithm with an approximation ratio of (1+1/2^d+epsilon). For d=2, Jansen and
Solis-Oba (IPCO'08) showed that the problem admits a polynomial-time
approximation scheme (PTAS); Heydrich and Wiese (SODA'17) further improved the
running time and gave an efficient polynomial-time approximation scheme
(EPTAS). Both the results use structural properties of 2-D packing, which do
not generalize to higher dimensions. For d>2, it remains open to obtain a PTAS,
and in fact, there has been no improvement since Harren's result.
We settle the problem by providing a PTAS. Our main technical contribution is
a structural lemma which shows that any packing of hypercubes can be converted
into another structured packing such that a high profitable subset of
hypercubes is packed into a constant number of special hypercuboids, called
V-Boxes and N-Boxes. As a side result, we give an almost optimal algorithm for
a variant of the strip packing problem in higher dimensions. This might have
applications for other multidimensional geometric packing problems.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 05:03:43 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 11:05:34 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Jansen",
"Klaus",
""
],
[
"Khan",
"Arindam",
""
],
[
"Lira",
"Marvin",
""
],
[
"Sreenivas",
"K. V. N.",
""
]
] |
new_dataset
| 0.996129 |
2202.13898
|
Shiyi Kong
|
Shiyi Kong, Jun Ai, Minyan Lu, Shuguang Wang, W. Eric Wong
|
DistAD: Software Anomaly Detection Based on Execution Trace Distribution
|
needs modification; the experiment results need a careful check
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Modern software systems have become increasingly complex, which makes them
difficult to test and validate. Detecting software partial anomalies in complex
systems at runtime can assist with handling unintended software behaviors,
avoiding catastrophic software failures and improving software runtime
availability. These detection techniques aim to identify the manifestation of
faults (anomalies) before they ultimately lead to unavoidable failures, thus,
supporting the following runtime fault-tolerant techniques. In this work, we
propose a novel anomaly detection method named DistAD, which is based on the
distribution of software runtime dynamic execution traces. Unlike other
existing works using key performance indicators, the execution trace is
collected during runtime via intrusive instrumentation. Instrumentation is
controlled by a sampling mechanism to avoid excessive overheads.
Bi-directional Long Short-Term Memory (Bi-LSTM), an architecture of Recurrent
Neural Network (RNN) is used to achieve the anomaly detection. The whole
framework is constructed under a One-Class Neural Network (OCNN) learning mode
which helps mitigate the limitations of insufficient labeled samples and
data imbalance. A series of controlled experiments are conducted on
a widely used database system named Cassandra to prove the validity and
feasibility of the proposed method. Overheads brought about by the intrusive
probing are also evaluated. The results show that DistAD can achieve more than
70% accuracy and 90% recall (in normal states) with no more than twice the
overhead compared with unmonitored executions.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 15:46:13 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 13:24:21 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Kong",
"Shiyi",
""
],
[
"Ai",
"Jun",
""
],
[
"Lu",
"Minyan",
""
],
[
"Wang",
"Shuguang",
""
],
[
"Wong",
"W. Eric",
""
]
] |
new_dataset
| 0.993252 |
2203.13250
|
Xingyi Zhou
|
Xingyi Zhou, Tianwei Yin, Vladlen Koltun, Philipp Kr\"ahenb\"uhl
|
Global Tracking Transformers
|
CVPR 2022. Code is available at https://github.com/xingyizhou/GTR
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel transformer-based architecture for global multi-object
tracking. Our network takes a short sequence of frames as input and produces
global trajectories for all objects. The core component is a global tracking
transformer that operates on objects from all frames in the sequence. The
transformer encodes object features from all frames, and uses trajectory
queries to group them into trajectories. The trajectory queries are object
features from a single frame and naturally produce unique trajectories. Our
global tracking transformer does not require intermediate pairwise grouping or
combinatorial association, and can be jointly trained with an object detector.
It achieves competitive performance on the popular MOT17 benchmark, with 75.3
MOTA and 59.1 HOTA. More importantly, our framework seamlessly integrates into
state-of-the-art large-vocabulary detectors to track any objects. Experiments
on the challenging TAO dataset show that our framework consistently improves
upon baselines that are based on pairwise association, outperforming published
works by a significant 7.7 tracking mAP. Code is available at
https://github.com/xingyizhou/GTR.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 17:58:04 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 21:25:38 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Zhou",
"Xingyi",
""
],
[
"Yin",
"Tianwei",
""
],
[
"Koltun",
"Vladlen",
""
],
[
"Krähenbühl",
"Philipp",
""
]
] |
new_dataset
| 0.997763 |
2204.07462
|
Marco Calderini
|
Marco Calderini and Kangquan Li and Irene Villa
|
Two new families of bivariate APN functions
| null | null | null | null |
cs.IT math.IT math.NT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present two new families of quadratic APN functions. The
first one (F1) is constructed via biprojective polynomials. This family
includes one of the two APN families introduced by G\"olo\v{g}lu in 2022. Then,
following a similar approach as in Li \emph{et al.} (2022), we give another
family (F2) obtained by adding certain terms to F1. As a byproduct, this second
family includes one of the two families introduced by Li \emph{et al.} (2022).
Moreover, we show that for $n=12$, from our constructions, we can obtain APN
functions that are CCZ-inequivalent to any other known APN function over
$\mathbb{F}_{2^{12}}$.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 13:54:12 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 15:47:47 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Calderini",
"Marco",
""
],
[
"Li",
"Kangquan",
""
],
[
"Villa",
"Irene",
""
]
] |
new_dataset
| 0.993038 |
2204.07570
|
Abhishek Kumar Singh
|
Abhishek Kumar Singh and Kyle Jamieson
|
TreeStep: Tree Search for Vector Perturbation Precoding under
per-Antenna Power Constraint
|
Article under review for IEEE Globecom 22
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vector Perturbation Precoding (VPP) can speed up downlink data transmissions
in Large and Massive Multi-User MIMO systems but is known to be NP-hard. While
there are several algorithms in the literature for VPP under total power
constraint, they are not applicable for VPP under per-antenna power constraint.
This paper proposes a novel, parallel tree search algorithm for VPP under
per-antenna power constraint, called \emph{\textbf{TreeStep}}, to find good
quality solutions to the VPP problem with practical computational complexity.
We show that our method can provide huge performance gains over simple linear
precoding like Regularised Zero Forcing. We evaluate TreeStep for several large
MIMO ($16\times16$ and $24\times24$) and massive MIMO ($16\times32$ and
$24\times48$) configurations and demonstrate that TreeStep outperforms the
popular polynomial-time VPP algorithm, the Fixed Complexity Sphere Encoder, by
achieving the extremely low BER of $10^{-6}$ at a much lower SNR.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2022 17:47:18 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 15:52:14 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Singh",
"Abhishek Kumar",
""
],
[
"Jamieson",
"Kyle",
""
]
] |
new_dataset
| 0.950003 |
2204.08714
|
Xiaojie Chu
|
Xiaojie Chu, Liangyu Chen, Wenqing Yu
|
NAFSSR: Stereo Image Super-Resolution Using NAFNet
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stereo image super-resolution aims at enhancing the quality of
super-resolution results by utilizing the complementary information provided by
binocular systems. To obtain reasonable performance, most methods focus on
finely designing modules, loss functions, etc., to exploit information from
the other viewpoint. This has the side effect of increasing system complexity,
making it difficult for researchers to evaluate new ideas and compare methods.
This paper inherits a strong and simple image restoration model, NAFNet, for
single-view feature extraction and extends it by adding cross attention modules
to fuse features between views to adapt to binocular scenarios. The proposed
baseline for stereo image super-resolution is denoted NAFSSR. Furthermore,
training/testing strategies are proposed to fully exploit the performance of
NAFSSR. Extensive experiments demonstrate the effectiveness of our method. In
particular, NAFSSR outperforms the state-of-the-art methods on the KITTI 2012,
KITTI 2015, Middlebury, and Flickr1024 datasets. With NAFSSR, we won 1st place
in the NTIRE 2022 Stereo Image Super-resolution Challenge. Codes and models
will be released at https://github.com/megvii-research/NAFNet.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2022 07:38:10 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 07:04:33 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Chu",
"Xiaojie",
""
],
[
"Chen",
"Liangyu",
""
],
[
"Yu",
"Wenqing",
""
]
] |
new_dataset
| 0.995558 |
2204.11025
|
Iman Soltani Mohammadi
|
Iman Soltani Mohammadi, Mohammad Ghanbari, Mahmoud Reza Hashemi
|
GAMORRA: An API-Level Workload Model for Rasterization-based Graphics
Pipeline Architecture
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The performance of applications that require frame rendering time estimation
or dynamic frequency scaling relies on the accuracy of the workload model
utilized within these applications. Existing models lack sufficient accuracy
in their core model. Hence, they require changes to the target application or
the hardware to produce accurate results. This paper introduces a mathematical
workload model for a rasterization-based graphics Application Programming
Interface (API) pipeline, named GAMORRA, which works based on the load and
complexity of each stage of the pipeline. Firstly, GAMORRA models each stage of
the pipeline based on their operation complexity and the input data size. Then,
the calculated workloads of the stages are fed to a Multiple Linear Regression
(MLR) model as explanatory variables. A hybrid offline/online training scheme
is proposed as well to train the model. A suite of benchmarks is also designed
to tune the model parameters based on the performance of the target system. The
experiments were performed on Direct3D 11 and on two different rendering
platforms comparing GAMORRA to an AutoRegressive (AR) model, a Frame Complexity
Model (FCM) and a frequency-based (FRQ) model. The experiments show an average
of 1.27 ms frame rendering time estimation error (9.45%) compared to an average
of 1.87 ms error (13.23%) for FCM which is the best method among the three
chosen methods. However, this comes at the cost of 0.54 ms (4.58%) increase in
time complexity compared to FCM. Furthermore, GAMORRA reduces frame-time
underestimations by 1.1% compared to FCM.
|
[
{
"version": "v1",
"created": "Sat, 23 Apr 2022 08:55:45 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2022 14:57:52 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Mohammadi",
"Iman Soltani",
""
],
[
"Ghanbari",
"Mohammad",
""
],
[
"Hashemi",
"Mahmoud Reza",
""
]
] |
new_dataset
| 0.997514 |
2204.11188
|
Wenbin Song
|
Wenbin Song, Mingrui Zhang, Joseph G. Wallwork, Junpeng Gao, Zheng
Tian, Fanglei Sun, Matthew D. Piggott, Junqing Chen, Zuoqiang Shi, Xiang
Chen, Jun Wang
|
M2N: Mesh Movement Networks for PDE Solvers
| null | null | null | null |
cs.LG cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mainstream numerical Partial Differential Equation (PDE) solvers require
discretizing the physical domain using a mesh. Mesh movement methods aim to
improve the accuracy of the numerical solution by increasing mesh resolution
where the solution is not well-resolved, whilst reducing unnecessary resolution
elsewhere. However, mesh movement methods, such as the Monge-Ampere method,
require the solution of auxiliary equations, which can be extremely expensive
especially when the mesh is adapted frequently. In this paper, we propose, to
the best of our knowledge, the first learning-based end-to-end mesh movement
framework for PDE solvers. Key requirements of learning-based mesh movement
methods are avoiding mesh tangling, preserving boundary consistency, and
generalizing to meshes with different resolutions. To achieve these goals, we
introduce the neural
spline model and the graph attention network (GAT) into our models
respectively. While the Neural-Spline based model provides more flexibility for
large deformation, the GAT based model can handle domains with more complicated
shapes and is better at performing delicate local deformation. We validate our
methods on stationary and time-dependent, linear and non-linear equations, as
well as regularly and irregularly shaped domains. Compared to the traditional
Monge-Ampere method, our approach can greatly accelerate the mesh adaptation
process, whilst achieving comparable numerical error reduction.
|
[
{
"version": "v1",
"created": "Sun, 24 Apr 2022 04:23:31 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Song",
"Wenbin",
""
],
[
"Zhang",
"Mingrui",
""
],
[
"Wallwork",
"Joseph G.",
""
],
[
"Gao",
"Junpeng",
""
],
[
"Tian",
"Zheng",
""
],
[
"Sun",
"Fanglei",
""
],
[
"Piggott",
"Matthew D.",
""
],
[
"Chen",
"Junqing",
""
],
[
"Shi",
"Zuoqiang",
""
],
[
"Chen",
"Xiang",
""
],
[
"Wang",
"Jun",
""
]
] |
new_dataset
| 0.992022 |
2204.11918
|
Anthony Francis Jr
|
Laura Downs, Anthony Francis, Nate Koenig, Brandon Kinman, Ryan
Hickman, Krista Reymann, Thomas B. McHugh and Vincent Vanhoucke
|
Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household
Items
|
8 pages, 5 figures, 4 tables; to appear in the conference proceedings
of ICRA 2022
| null | null | null |
cs.RO cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive 3D simulations have enabled breakthroughs in robotics and
computer vision, but simulating the broad diversity of environments needed for
deep learning requires large corpora of photo-realistic 3D object models. To
address this need, we present Google Scanned Objects, an open-source collection
of over one thousand 3D-scanned household items released under a Creative
Commons license; these models are preprocessed for use in the Ignition Gazebo
and Bullet simulation platforms, but are easily adaptable to other simulators.
We describe our object scanning and curation pipeline, then provide statistics
about the contents of the dataset and its usage. We hope that the diversity,
quality, and flexibility of Google Scanned Objects will lead to advances in
interactive simulation, synthetic perception, and robotic learning.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 18:49:46 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Downs",
"Laura",
""
],
[
"Francis",
"Anthony",
""
],
[
"Koenig",
"Nate",
""
],
[
"Kinman",
"Brandon",
""
],
[
"Hickman",
"Ryan",
""
],
[
"Reymann",
"Krista",
""
],
[
"McHugh",
"Thomas B.",
""
],
[
"Vanhoucke",
"Vincent",
""
]
] |
new_dataset
| 0.999597 |
2204.11982
|
Juan Borrego Carazo
|
Juan Borrego-Carazo, Carles S\'anchez, David Castells-Rufas, Jordi
Carrabina, D\'ebora Gil
|
BronchoPose: an analysis of data and model configuration for
vision-based bronchoscopy pose estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vision-based bronchoscopy (VB) models require the registration of the virtual
lung model with the frames from the video bronchoscopy to provide effective
guidance during the biopsy. The registration can be achieved by either tracking
the position and orientation of the bronchoscopy camera or by calibrating its
deviation from the pose (position and orientation) simulated in the virtual
lung model. Recent advances in neural networks and temporal image processing
have provided new opportunities for guided bronchoscopy. However, such progress
has been hindered by the lack of comparative experimental conditions.
In the present paper, we share a novel synthetic dataset allowing for a fair
comparison of methods. Moreover, this paper investigates several neural network
architectures for the learning of temporal information at different levels of
subject personalization. In order to improve orientation measurement, we also
present a standardized comparison framework and a novel metric for camera
orientation learning. Results on the dataset show that the proposed metric and
architectures, as well as the standardized conditions, provide notable
improvements to current state-of-the-art camera pose estimation in video
bronchoscopy.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 22:17:50 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Borrego-Carazo",
"Juan",
""
],
[
"Sánchez",
"Carles",
""
],
[
"Castells-Rufas",
"David",
""
],
[
"Carrabina",
"Jordi",
""
],
[
"Gil",
"Débora",
""
]
] |
new_dataset
| 0.956708 |
2204.11985
|
Pieter-Jan Kindermans
|
Pieter-Jan Kindermans, Charles Staats
|
When adversarial examples are excusable
| null | null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural networks work remarkably well in practice and theoretically they can
be universal approximators. However, they still make mistakes, and a specific
type of them, called adversarial errors, seems inexcusable to humans. In this
work, we analyze both test errors and adversarial errors on a well-controlled
but highly non-linear visual classification problem. We find that, when
approximating training on infinite data, test errors tend to be close to the
ground truth decision boundary. Qualitatively speaking these are also more
difficult for a human. By contrast, adversarial examples can be found almost
everywhere and are often obvious mistakes. However, when we constrain
adversarial examples to the manifold, we observe a 90\% reduction in
adversarial errors. If we inflate the manifold by training with Gaussian noise
we observe a similar effect. In both cases, the remaining adversarial errors
tend to be close to the ground truth decision boundary. Qualitatively, the
remaining adversarial errors are similar to test errors on difficult examples.
They do not have the customary quality of being inexcusable mistakes.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2022 22:31:58 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Kindermans",
"Pieter-Jan",
""
],
[
"Staats",
"Charles",
""
]
] |
new_dataset
| 0.972097 |
2204.12027
|
Hai Dao
|
Dao Thanh Hai
|
On Routing, Wavelength, Network Coding Assignment and Protection
Configuration Problem in Optical-processing-enabled Networks
|
12 pages, 3 figures, accepted version to IEEE TNSM
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
In optical-processing-enabled networks, transitional lightpaths crossing the
same node could be optically encoded to each other to achieve greater spectral
efficiency. In this context, we present a new research problem, entitled,
routing, wavelength, network coding assignment and protection configuration
(RWNCA-PC) arisen in exploiting photonic network coding (NC) for dedicated path
protection in wavelength division multiplexing (WDM) networks with an extra
degree of freedom in the selection of protection triggering mechanism, that is,
network-side and client-side, tailoring to each connection. In order to
maximize the NC benefits, we thus provide a weighted multi-objective
optimization model for solving RWNCA-PC problem so as to minimize the
wavelength count as the strictly prioritized goal and the redundant resources
measured by the number of client-side connections as the secondary objective.
Numerical results on the realistic COST239 network reveal that a saving of up
to $25\%$ wavelength resources could be achieved thanks to the optimal use of
NC compared to the non-coding designs and among coding-aware designs, the use
of mixed protection configurations would be spectrally more efficient than the
design with only the network-side protection scheme. Our proposal yields the
highest spectrum efficiency compared to all reference designs and moreover,
features an average saving of more than $40\%$ transponder count compared with
its single objective counterpart.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 01:49:36 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Hai",
"Dao Thanh",
""
]
] |
new_dataset
| 0.993186 |
2204.12034
|
Ziling Heng
|
Xiaoru Li, Ziling Heng
|
A construction of optimal locally recoverable codes
|
arXiv admin note: substantial text overlap with arXiv:2204.11208
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Locally recoverable codes are widely used in distributed and cloud storage
systems. The objective of this paper is to present a construction of near MDS
codes with oval polynomials and then determine the locality of the codes. It
turns out that the near MDS codes and their duals are both distance-optimal and
dimension-optimal locally recoverable codes. The lengths of the locally
recoverable codes are different from known ones in the literature.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 02:14:08 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Li",
"Xiaoru",
""
],
[
"Heng",
"Ziling",
""
]
] |
new_dataset
| 0.998303 |
2204.12070
|
Viet Lai
|
Viet Dac Lai, Amir Pouran Ben Veyseh, Franck Dernoncourt, Thien Huu
Nguyen
|
Symlink: A New Dataset for Scientific Symbol-Description Linking
|
arXiv admin note: substantial text overlap with arXiv:2202.09695
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Mathematical symbols and descriptions appear in various forms across document
section boundaries without explicit markup. In this paper, we present a new
large-scale dataset that emphasizes extracting symbols and descriptions in
scientific documents. Symlink annotates scientific papers of 5 different
domains (i.e., computer science, biology, physics, mathematics, and economics).
Our experiments on Symlink demonstrate the challenges of the symbol-description
linking task for existing models and call for further research effort in this
area. We will publicly release Symlink to facilitate future research.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 04:36:14 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Lai",
"Viet Dac",
""
],
[
"Veyseh",
"Amir Pouran Ben",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Nguyen",
"Thien Huu",
""
]
] |
new_dataset
| 0.999812 |
2204.12084
|
Khay Boon Hong
|
Khay Boon Hong
|
U-Net with ResNet Backbone for Garment Landmarking Purpose
|
A draft for purpose of archive, not intended for official academic
uses
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We build a heatmap-based landmark detection model to locate important
landmarks on 2D RGB garment images. The main goal is to detect edges, corners,
and suitable interior regions of the garments. This lets us re-create 3D
garments in modern 3D editing software by combining the landmark detection
model with texture unwrapping. We use a U-Net architecture with a ResNet
backbone to build the model. With an appropriate loss function, we are able to
train a moderately
robust model.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 05:47:27 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Hong",
"Khay Boon",
""
]
] |
new_dataset
| 0.994903 |
2204.12184
|
Junwei Liao
|
Junwei Liao, Duyu Tang, Fan Zhang, Shuming Shi
|
SkillNet-NLG: General-Purpose Natural Language Generation with a
Sparsely Activated Approach
|
8 pages,3 figures
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SkillNet-NLG, a sparsely activated approach that handles many
natural language generation tasks with one model. Different from traditional
dense models that always activate all the parameters, SkillNet-NLG selectively
activates relevant parts of the parameters to accomplish a task, where the
relevance is controlled by a set of predefined skills. The strength of such
model design is that it provides an opportunity to precisely adapt relevant
skills to learn new tasks effectively. We evaluate SkillNet-NLG on Chinese
natural language generation tasks. Results show that, with only one model file,
SkillNet-NLG outperforms previous best-performing methods on four of five tasks.
SkillNet-NLG performs better than two multi-task learning baselines (a dense
model and a Mixture-of-Expert model) and achieves comparable performance to
task-specific models. Lastly, SkillNet-NLG surpasses baseline systems when
being adapted to new tasks.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 09:37:01 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Liao",
"Junwei",
""
],
[
"Tang",
"Duyu",
""
],
[
"Zhang",
"Fan",
""
],
[
"Shi",
"Shuming",
""
]
] |
new_dataset
| 0.996029 |
2204.12294
|
Ivan Srba
|
Ivan Srba, Branislav Pecher, Matus Tomlein, Robert Moro, Elena
Stefancova, Jakub Simko, Maria Bielikova
|
Monant Medical Misinformation Dataset: Mapping Articles to Fact-Checked
Claims
|
11 pages, 4 figures, SIGIR 2022 Resource paper track
|
ACM SIGIR Conference on Research and Development in Information
Retrieval (SIGIR 2022)
|
10.1145/3477495.3531726
| null |
cs.CL cs.CY cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
False information has a significant negative influence on individuals as well
as on the whole society. Especially in the current COVID-19 era, we witness an
unprecedented growth of medical misinformation. To help tackle this problem
with machine learning approaches, we are publishing a feature-rich dataset of
approx. 317k medical news articles/blogs and 3.5k fact-checked claims. It also
contains 573 manually and more than 51k automatically labelled mappings between
claims and articles. Mappings consist of claim presence, i.e., whether a claim
is contained in a given article, and article stance towards the claim. We
provide several baselines for these two tasks and evaluate them on the manually
labelled part of the dataset. The dataset enables a number of additional tasks
related to medical misinformation, such as misinformation characterisation
studies or studies of misinformation diffusion between sources.
|
[
{
"version": "v1",
"created": "Tue, 26 Apr 2022 13:18:27 GMT"
}
] | 2022-04-27T00:00:00 |
[
[
"Srba",
"Ivan",
""
],
[
"Pecher",
"Branislav",
""
],
[
"Tomlein",
"Matus",
""
],
[
"Moro",
"Robert",
""
],
[
"Stefancova",
"Elena",
""
],
[
"Simko",
"Jakub",
""
],
[
"Bielikova",
"Maria",
""
]
] |
new_dataset
| 0.999835 |
1911.03129
|
Zhengyu Wu
|
Manli Yuan, Liwei Lin, Zhengyu Wu, and Xiucai Ye
|
A Novel Sybil Attack Detection Scheme Based on Edge Computing for Mobile
IoT Environment
|
The detection scheme needs further improvements which may have
something with AI techniques
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Internet of things (IoT) connects all items to the Internet through
information-sensing devices to exchange information for intelligent
identification and management. Sybil attack is a famous and crippling attack in
IoT. Most of the previous methods of detecting Sybil attacks in IoT mainly
focus on static IoT, while very few are applicable to mobile IoT.
In this paper, a novel, lightweight, and distributed detection scheme based on
edge computing is proposed for detecting Sybil attacks in mobile IoT. In the
proposed scheme, a detection consists of two rounds. In each round, member
nodes are required to send packets to edge nodes. Edge nodes calculate a
possible interval of the received signal strength indication (RSSI) from the
first round and check whether the RSSI from the second round is in the interval
to detect Sybil attacks. Extensive experimental studies are included to show
that the presented approach outperforms many existing approaches in terms of
true detection and false detection rates. Moreover, experimental results show
that the fault tolerance design in the proposed approach greatly enhances the
detection scheme.
|
[
{
"version": "v1",
"created": "Fri, 8 Nov 2019 08:47:29 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Oct 2021 15:30:45 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Apr 2022 13:45:51 GMT"
},
{
"version": "v4",
"created": "Sun, 24 Apr 2022 10:42:04 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Yuan",
"Manli",
""
],
[
"Lin",
"Liwei",
""
],
[
"Wu",
"Zhengyu",
""
],
[
"Ye",
"Xiucai",
""
]
] |
new_dataset
| 0.992915 |
2003.12223
|
Masahito Hayashi
|
Masahito Hayashi and Ning Cai
|
Secure network code over one-hop relay network
| null |
Journal on Selected Areas in Information Theory vol. 2, no. 1, 296
- 305 (2021)
|
10.1109/JSAIT.2021.3053697
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When there exists a malicious attacker in the network, we need to consider
the possibilities of eavesdropping and contamination simultaneously. Under
an acyclic broadcast network, the optimality of linear codes was shown when Eve
is allowed to attack any $r$ edges. The optimality of linear codes is not shown
under a different assumption for Eve. As a typical example of an acyclic
unicast network, we focus on the one-hop relay network under the single
transmission scheme by assuming that Eve attacks only one edge in each level.
Surprisingly, we find that a non-linear code significantly improves the
performance on the one-hop relay network over linear codes. That is, a
non-linear code realizes imperfect security on this model that cannot be
realized by linear codes. This superiority of non-linear codes still holds
even when considering the effect of sequential error injection on information
leakage.
|
[
{
"version": "v1",
"created": "Fri, 27 Mar 2020 03:49:05 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Hayashi",
"Masahito",
""
],
[
"Cai",
"Ning",
""
]
] |
new_dataset
| 0.9935 |
2010.00475
|
Eduardo Fonseca
|
Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, Xavier
Serra
|
FSD50K: An Open Dataset of Human-Labeled Sound Events
|
Accepted version in TASLP. Main updates include: estimation of the
amount of label noise in FSD50K, SNR comparison between FSD50K and AudioSet,
improved description of evaluation metrics including equations, clarification
of experimental methodology and some results, some content moved to Appendix
for readability. https://ieeexplore.ieee.org/document/9645159
| null | null | null |
cs.SD cs.LG eess.AS stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Most existing datasets for sound event recognition (SER) are relatively small
and/or domain-specific, with the exception of AudioSet, based on over 2M tracks
from YouTube videos and encompassing over 500 sound classes. However, AudioSet
is not an open dataset as its official release consists of pre-computed audio
features. Downloading the original audio tracks can be problematic due to
YouTube videos gradually disappearing and usage rights issues. To provide an
alternative benchmark dataset and thus foster SER research, we introduce
FSD50K, an open dataset containing over 51k audio clips totalling over 100h of
audio manually labeled using 200 classes drawn from the AudioSet Ontology. The
audio clips are licensed under Creative Commons licenses, making the dataset
freely distributable (including waveforms). We provide a detailed description
of the FSD50K creation process, tailored to the particularities of Freesound
data, including challenges encountered and solutions adopted. We include a
comprehensive dataset characterization along with discussion of limitations and
key factors to allow its audio-informed usage. Finally, we conduct sound event
classification experiments to provide baseline systems as well as insight on
the main factors to consider when splitting Freesound audio data for SER. Our
goal is to develop a dataset to be widely adopted by the community as a new
open benchmark for SER research.
|
[
{
"version": "v1",
"created": "Thu, 1 Oct 2020 15:07:25 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Apr 2022 20:12:00 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Fonseca",
"Eduardo",
""
],
[
"Favory",
"Xavier",
""
],
[
"Pons",
"Jordi",
""
],
[
"Font",
"Frederic",
""
],
[
"Serra",
"Xavier",
""
]
] |
new_dataset
| 0.999885 |
2012.00968
|
K\"ur\c{s}at Tekb{\i}y{\i}k
|
K\"ur\c{s}at Tekb{\i}y{\i}k, G\"une\c{s} Karabulut Kurt, Ali R{\i}za
Ekti, Halim Yanikomeroglu
|
Reconfigurable Intelligent Surfaces in Action for Non-Terrestrial
Networks
|
to appear in IEEE Vehicular Technology Magazine
| null |
10.1109/MVT.2022.3168995
| null |
cs.IT cs.LG eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Next-generation communication technology will be made possible by cooperation
between terrestrial networks with non-terrestrial networks (NTN) comprised of
high-altitude platform stations and satellites. Further, as humanity embarks on
the long road to establish new habitats on other planets, cooperation between
NTN and deep-space networks (DSN) will be necessary. In this regard, we propose
the use of reconfigurable intelligent surfaces (RIS) to improve coordination
between these networks given that RIS perfectly match the size, weight, and
power restrictions of operating in space. A comprehensive framework of
RIS-assisted non-terrestrial and interplanetary communications is presented
that pinpoints challenges, use cases, and open issues. Furthermore, the
performance of RIS-assisted NTN under environmental effects such as solar
scintillation and satellite drag is discussed in light of simulation results.
|
[
{
"version": "v1",
"created": "Wed, 2 Dec 2020 05:11:51 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Apr 2021 19:36:58 GMT"
},
{
"version": "v3",
"created": "Wed, 24 Nov 2021 16:25:50 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Feb 2022 17:34:55 GMT"
},
{
"version": "v5",
"created": "Fri, 22 Apr 2022 21:45:49 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Tekbıyık",
"Kürşat",
""
],
[
"Kurt",
"Güneş Karabulut",
""
],
[
"Ekti",
"Ali Rıza",
""
],
[
"Yanikomeroglu",
"Halim",
""
]
] |
new_dataset
| 0.992232 |
2102.10252
|
Hadi Jahanshahi
|
Hadi Jahanshahi and Mustafa Gokce Baydogan
|
nTreeClus: a Tree-based Sequence Encoder for Clustering Categorical
Series
|
Published in Neurocomputing (Available online 22 April 2022)
| null |
10.1016/j.neucom.2022.04.076
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The overwhelming presence of categorical/sequential data in diverse domains
emphasizes the importance of sequence mining. The challenging nature of
sequences underscores the need for continued research to find a more accurate and
faster approach providing a better understanding of their (dis)similarities.
This paper proposes a new Model-based approach for clustering sequence data,
namely nTreeClus. The proposed method deploys Tree-based Learners, k-mers, and
autoregressive models for categorical time series, culminating with a novel
numerical representation of the categorical sequences. Adopting this new
representation, we cluster sequences, considering the inherent patterns in
categorical time series. Accordingly, the model showed robustness to its
parameter. Under different simulated scenarios, nTreeClus improved the baseline
methods for various internal and external cluster validation metrics for up to
10.7% and 2.7%, respectively. The empirical evaluation using synthetic and real
datasets, protein sequences, and categorical time series showed that nTreeClus
is competitive or superior to most state-of-the-art algorithms.
|
[
{
"version": "v1",
"created": "Sat, 20 Feb 2021 03:58:17 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Feb 2022 21:17:39 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Apr 2022 01:16:16 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Jahanshahi",
"Hadi",
""
],
[
"Baydogan",
"Mustafa Gokce",
""
]
] |
new_dataset
| 0.9761 |
2103.06911
|
Qiaojun Feng
|
Tianyu Zhao, Qiaojun Feng, Sai Jadhav, Nikolay Atanasov
|
CORSAIR: Convolutional Object Retrieval and Symmetry-AIded Registration
|
8 pages, 8 figures
|
2021 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Prague, Czech Republic, 2021, pp. 47-54
|
10.1109/IROS51168.2021.9636347
| null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers online object-level mapping using partial point-cloud
observations obtained online in an unknown environment. We develop an approach
for fully Convolutional Object Retrieval and Symmetry-AIded Registration
(CORSAIR). Our model extends the Fully Convolutional Geometric Features model
to learn a global object-shape embedding in addition to local point-wise
features from the point-cloud observations. The global feature is used to
retrieve a similar object from a category database, and the local features are
used for robust pose registration between the observed and the retrieved
object. Our formulation also leverages symmetries, present in the object
shapes, to obtain promising local-feature pairs from different symmetry classes
for matching. We present results from synthetic and real-world datasets with
different object categories to verify the robustness of our method.
|
[
{
"version": "v1",
"created": "Thu, 11 Mar 2021 19:12:48 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Aug 2021 23:22:06 GMT"
},
{
"version": "v3",
"created": "Sat, 4 Sep 2021 22:55:55 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Zhao",
"Tianyu",
""
],
[
"Feng",
"Qiaojun",
""
],
[
"Jadhav",
"Sai",
""
],
[
"Atanasov",
"Nikolay",
""
]
] |
new_dataset
| 0.950102 |
2105.03280
|
Tosin Adewumi
|
Tosin P. Adewumi, Roshanak Vadoodi, Aparajita Tripathy, Konstantina
Nikolaidou, Foteini Liwicki and Marcus Liwicki
|
Potential Idiomatic Expression (PIE)-English: Corpus for Classes of
Idioms
|
Accepted at the International Conference on Language Resources and
Evaluation (LREC) 2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a fairly large, Potential Idiomatic Expression (PIE) dataset for
Natural Language Processing (NLP) in English. The challenges with NLP systems
with regards to tasks such as Machine Translation (MT), word sense
disambiguation (WSD) and information retrieval make it imperative to have a
labelled idioms dataset with classes, such as the one in this work. To the best of
the authors' knowledge, this is the first idioms corpus with classes of idioms
beyond the literal and the general idioms classification. In particular, the
following classes are labelled in the dataset: metaphor, simile, euphemism,
parallelism, personification, oxymoron, paradox, hyperbole, irony and literal.
We obtain an overall inter-annotator agreement (IAA) score, between two
independent annotators, of 88.89%. Many past efforts have been limited in the
corpus size and classes of samples but this dataset contains over 20,100
samples with almost 1,200 cases of idioms (with their meanings) from 10 classes
(or senses). The corpus may also be extended by researchers to meet specific
needs. The corpus has part of speech (PoS) tagging from the NLTK library.
Classification experiments performed on the corpus to obtain a baseline and
comparison among three common models, including the BERT model, give good
results. We also make publicly available the corpus and the relevant codes for
working with it for NLP tasks.
|
[
{
"version": "v1",
"created": "Sun, 25 Apr 2021 13:05:29 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Apr 2022 09:56:03 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Adewumi",
"Tosin P.",
""
],
[
"Vadoodi",
"Roshanak",
""
],
[
"Tripathy",
"Aparajita",
""
],
[
"Nikolaidou",
"Konstantina",
""
],
[
"Liwicki",
"Foteini",
""
],
[
"Liwicki",
"Marcus",
""
]
] |
new_dataset
| 0.99976 |
2105.05332
|
Ryan Szeto
|
Ryan Szeto, Jason J. Corso
|
The DEVIL is in the Details: A Diagnostic Evaluation Benchmark for Video
Inpainting
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantitative evaluation has increased dramatically among recent video
inpainting work, but the video and mask content used to gauge performance has
received relatively little attention. Although attributes such as camera and
background scene motion inherently change the difficulty of the task and affect
methods differently, existing evaluation schemes fail to control for them,
thereby providing minimal insight into inpainting failure modes. To address
this gap, we propose the Diagnostic Evaluation of Video Inpainting on
Landscapes (DEVIL) benchmark, which consists of two contributions: (i) a novel
dataset of videos and masks labeled according to several key inpainting failure
modes, and (ii) an evaluation scheme that samples slices of the dataset
characterized by a fixed content attribute, and scores performance on each
slice according to reconstruction, realism, and temporal consistency quality.
By revealing systematic changes in performance induced by particular
characteristics of the input content, our challenging benchmark enables more
insightful analysis into video inpainting methods and serves as an invaluable
diagnostic tool for the field. Our code and data are available at
https://github.com/MichiganCOG/devil .
|
[
{
"version": "v1",
"created": "Tue, 11 May 2021 20:13:53 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 16:18:39 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Szeto",
"Ryan",
""
],
[
"Corso",
"Jason J.",
""
]
] |
new_dataset
| 0.997525 |
2105.09580
|
Nanqing Dong
|
Nanqing Dong, Michael Kampffmeyer, Irina Voiculescu, Eric Xing
|
Negational Symmetry of Quantum Neural Networks for Binary Pattern
Classification
|
Accepted by Pattern Recognition
| null | null | null |
cs.LG quant-ph stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Entanglement is a physical phenomenon, which has fueled recent successes of
quantum algorithms. Although quantum neural networks (QNNs) have shown
promising results in solving simple machine learning tasks recently, for the
time being, the effect of entanglement in QNNs and the behavior of QNNs in
binary pattern classification are still underexplored. In this work, we provide
some theoretical insight into the properties of QNNs by presenting and
analyzing a new form of invariance embedded in QNNs for both quantum binary
classification and quantum representation learning, which we term negational
symmetry. Given a quantum binary signal and its negational counterpart where a
bitwise NOT operation is applied to each quantum bit of the binary signal, a
QNN outputs the same logits. That is to say, QNNs cannot differentiate a
quantum binary signal and its negational counterpart in a binary classification
task. We further empirically evaluate the negational symmetry of QNNs in binary
pattern classification tasks using Google's quantum computing framework. The
theoretical and experimental results suggest that negational symmetry is a
fundamental property of QNNs, which is not shared by classical models. Our
findings also imply that negational symmetry is a double-edged sword in
practical quantum applications.
|
[
{
"version": "v1",
"created": "Thu, 20 May 2021 08:13:38 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 15:52:28 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Dong",
"Nanqing",
""
],
[
"Kampffmeyer",
"Michael",
""
],
[
"Voiculescu",
"Irina",
""
],
[
"Xing",
"Eric",
""
]
] |
new_dataset
| 0.994835 |
2106.01977
|
Alexandros Nikou PhD
|
Alexandros Nikou, Anusha Mujumdar, Vaishnavi Sundararajan, Marin
Orlic, Aneta Vulgarakis Feljan
|
Safe RAN control: A Symbolic Reinforcement Learning Approach
|
To appear in International Conference of Control and Automation
(ICCA) 2022
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a Symbolic Reinforcement Learning (SRL) based
architecture for safety control of Radio Access Network (RAN) applications. In
particular, we provide a purely automated procedure in which a user can specify
high-level logical safety specifications for a given cellular network topology
in order for the latter to execute optimal safe performance which is measured
through certain Key Performance Indicators (KPIs). The network consists of a
set of fixed Base Stations (BS) equipped with antennas, which one can control by
adjusting their vertical tilt angle. The aforementioned process is
called Remote Electrical Tilt (RET) optimization. Recent research has focused
on performing this RET optimization by employing Reinforcement Learning (RL)
strategies due to the fact that they have self-learning capabilities to adapt
in uncertain environments. The term safety refers to particular constraints
bounds of the network KPIs in order to guarantee that when the algorithms are
deployed in a live network, the performance is maintained. In our proposed
architecture the safety is ensured through model-checking techniques over
combined discrete system models (automata) that are abstracted through the
learning process. We introduce a user interface (UI) developed to help a user
set intent specifications to the system, and inspect the difference in agent
proposed actions, and those that are allowed and blocked according to the
safety specification.
|
[
{
"version": "v1",
"created": "Thu, 3 Jun 2021 16:45:40 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 09:29:18 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Nikou",
"Alexandros",
""
],
[
"Mujumdar",
"Anusha",
""
],
[
"Sundararajan",
"Vaishnavi",
""
],
[
"Orlic",
"Marin",
""
],
[
"Feljan",
"Aneta Vulgarakis",
""
]
] |
new_dataset
| 0.98415 |
2108.08621
|
Xieyuanli Chen
|
Hao Dong, Xieyuanli Chen, Cyrill Stachniss
|
Online Range Image-based Pole Extractor for Long-term LiDAR Localization
in Urban Environments
|
Accepted by ECMR 2021
| null |
10.1109/ECMR50962.2021.9568850
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Reliable and accurate localization is crucial for mobile autonomous systems.
Pole-like objects, such as traffic signs, poles, lamps, etc., are ideal
landmarks for localization in urban environments due to their local
distinctiveness and long-term stability. In this paper, we present a novel,
accurate, and fast pole extraction approach that runs online and has little
computational demands such that this information can be used for a localization
system. Our method performs all computations directly on range images generated
from 3D LiDAR scans, which avoids processing 3D point clouds explicitly and
enables fast pole extraction for each scan. We test the proposed pole
extraction and localization approach on different datasets with different LiDAR
scanners, weather conditions, routes, and seasonal changes. The experimental
results show that our approach outperforms other state-of-the-art approaches,
while running online without a GPU. Besides, we release our pole dataset to the
public for evaluating the performance of the pole extractor, as well as the
implementation of our approach.
|
[
{
"version": "v1",
"created": "Thu, 19 Aug 2021 11:16:54 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Dong",
"Hao",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Stachniss",
"Cyrill",
""
]
] |
new_dataset
| 0.992635 |
2109.13641
|
Weidong Mei
|
Weidong Mei, Beixiong Zheng, Changsheng You, Rui Zhang
|
Intelligent Reflecting Surface Aided Wireless Networks: From
Single-Reflection to Multi-Reflection Design and Optimization
|
Invited paper. Accepted for publication in the Proceedings of the
IEEE
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent reflecting surface (IRS) has emerged as a promising technique for
wireless communication networks. By dynamically tuning the reflection
amplitudes/phase shifts of a large number of passive elements, IRS enables
flexible wireless channel control and configuration, and thereby enhances the
wireless signal transmission rate and reliability significantly. Despite the
vast literature on designing and optimizing assorted IRS-aided wireless
systems, prior works have mainly focused on enhancing wireless links with
single signal reflection only by one or multiple IRSs, which may be
insufficient to boost the wireless link capacity under some harsh propagation
conditions (e.g., indoor environment with dense blockages/obstructions). This
issue can be tackled by employing two or more IRSs to assist each wireless link
and jointly exploiting their single as well as multiple signal reflections over
them. However, the resultant double-/multi-IRS aided wireless systems face more
complex design issues as well as new practical challenges for implementation as
compared to the conventional single-IRS counterpart, in terms of IRS reflection
optimization, channel acquisition, as well as IRS deployment and
association/selection. As such, a new paradigm for designing multi-IRS
cooperative passive beamforming and joint active/passive beam routing arises
which calls for innovative design approaches and optimization methods. In this
paper, we give a tutorial overview of multi-IRS aided wireless networks, with
an emphasis on addressing the new challenges due to multi-IRS signal reflection
and routing. Moreover, we point out important directions worthy of research and
investigation in the future.
|
[
{
"version": "v1",
"created": "Tue, 28 Sep 2021 11:58:59 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 12:54:06 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Mei",
"Weidong",
""
],
[
"Zheng",
"Beixiong",
""
],
[
"You",
"Changsheng",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.996709 |
2111.12772
|
Karl Willis
|
Karl D.D. Willis, Pradeep Kumar Jayaraman, Hang Chu, Yunsheng Tian,
Yifei Li, Daniele Grandi, Aditya Sanghi, Linh Tran, Joseph G. Lambourne,
Armando Solar-Lezama, Wojciech Matusik
|
JoinABLe: Learning Bottom-up Assembly of Parametric CAD Joints
|
CVPR 2022; code available at
https://github.com/AutodeskAILab/JoinABLe and data available at
https://github.com/AutodeskAILab/Fusion360GalleryDataset
| null | null | null |
cs.LG cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical products are often complex assemblies combining a multitude of 3D
parts modeled in computer-aided design (CAD) software. CAD designers build up
these assemblies by aligning individual parts to one another using constraints
called joints. In this paper we introduce JoinABLe, a learning-based method
that assembles parts together to form joints. JoinABLe uses the weak
supervision available in standard parametric CAD files without the help of
object class labels or human guidance. Our results show that by making network
predictions over a graph representation of solid models we can outperform
multiple baseline methods with an accuracy (79.53%) that approaches human
performance (80%). Finally, to support future research we release the Fusion
360 Gallery assembly dataset, containing assemblies with rich information on
joints, contact surfaces, holes, and the underlying assembly graph structure.
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 20:05:59 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2022 22:14:53 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Willis",
"Karl D. D.",
""
],
[
"Jayaraman",
"Pradeep Kumar",
""
],
[
"Chu",
"Hang",
""
],
[
"Tian",
"Yunsheng",
""
],
[
"Li",
"Yifei",
""
],
[
"Grandi",
"Daniele",
""
],
[
"Sanghi",
"Aditya",
""
],
[
"Tran",
"Linh",
""
],
[
"Lambourne",
"Joseph G.",
""
],
[
"Solar-Lezama",
"Armando",
""
],
[
"Matusik",
"Wojciech",
""
]
] |
new_dataset
| 0.981808 |
2112.07111
|
Peng Zhao
|
Peng Zhao, Chen Li, Md Mamunur Rahaman, Hao Xu, Pingli Ma, Hechen
Yang, Hongzan Sun, Tao Jiang, Ning Xu and Marcin Grzegorzek
|
EMDS-6: Environmental Microorganism Image Dataset Sixth Version for
Image Denoising, Segmentation, Feature Extraction, Classification and
Detection Methods Evaluation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Environmental microorganisms (EMs) are ubiquitous around us and have an
important impact on the survival and development of human society. However, the
high standards and strict requirements for the preparation of environmental
microorganism (EM) data have led to the insufficiency of existing related
databases, not to mention the databases with GT images. This problem seriously
affects the progress of related experiments. Therefore, this study develops the
Environmental Microorganism Dataset Sixth Version (EMDS-6), which contains 21
types of EMs. Each type of EM contains 40 original and 40 GT images, 1680 EM
images in total. In this study, in order to test the effectiveness of EMDS-6, we
choose classic image processing algorithms such as image denoising, image
segmentation and target detection. The experimental results show that EMDS-6
can be used to evaluate the performance of image denoising,
image segmentation, image feature extraction, image classification, and object
detection methods.
|
[
{
"version": "v1",
"created": "Tue, 14 Dec 2021 02:28:24 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2022 09:51:20 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Zhao",
"Peng",
""
],
[
"Li",
"Chen",
""
],
[
"Rahaman",
"Md Mamunur",
""
],
[
"Xu",
"Hao",
""
],
[
"Ma",
"Pingli",
""
],
[
"Yang",
"Hechen",
""
],
[
"Sun",
"Hongzan",
""
],
[
"Jiang",
"Tao",
""
],
[
"Xu",
"Ning",
""
],
[
"Grzegorzek",
"Marcin",
""
]
] |
new_dataset
| 0.999384 |
2112.12495
|
Kirill Ivanov
|
Kirill Ivanov, R\"udiger Urbanke
|
Polar Codes Do Not Have Many Affine Automorphisms
|
Accepted to ISIT 2022
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Polar coding solutions demonstrate excellent performance under the list
decoding that is challenging to implement in hardware due to the path sorting
operations. As a potential solution to this problem, permutation decoding
recently became a hot research topic. However, it imposes more constraints on
the code structure.
In this paper, we study the structural properties of Arikan's polar codes. It
is known that they are invariant under lower-triangular affine permutations
among others. However, those permutations are not useful in the context of
permutation decoding. We show that, unfortunately, the group of affine
automorphisms of Arikan's polar codes asymptotically cannot be much bigger than
the group of lower-triangular permutations.
|
[
{
"version": "v1",
"created": "Thu, 23 Dec 2021 12:41:41 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Jan 2022 13:12:30 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Apr 2022 09:56:54 GMT"
}
] | 2022-04-26T00:00:00 |
[
[
"Ivanov",
"Kirill",
""
],
[
"Urbanke",
"Rüdiger",
""
]
] |
new_dataset
| 0.975951 |