| id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2211.00832
|
Mohammed Abdelsadek
|
Mohammed Y. Abdelsadek, Gunes Karabulut Kurt, and Halim Yanikomeroglu
|
Distributed Massive MIMO for LEO Satellite Networks
|
arXiv admin note: text overlap with arXiv:2106.09837
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The ultra-dense deployment of interconnected satellites will characterize
future low Earth orbit (LEO) mega-constellations. Exploiting this towards a
more efficient satellite network (SatNet), this paper proposes a novel LEO
SatNet architecture based on distributed massive multiple-input multiple-output
(DM-MIMO) technology allowing ground user terminals to be connected to a
cluster of satellites. To this end, we investigate various aspects of
DM-MIMO-based satellite network design, the benefits of using this
architecture, the associated challenges, and the potential solutions. In
addition, we propose a distributed joint power allocation and handover
management (D-JPAHM) technique that jointly optimizes the power allocation and
handover management processes in a cross-layer manner. This framework aims to
maximize the network throughput and minimize the handover rate while
considering the quality-of-service (QoS) demands of user terminals and the
power capabilities of the satellites. Moreover, we devise an artificial
intelligence (AI)-based solution to efficiently implement the proposed D-JPAHM
framework in a manner suitable for real-time operation and the dynamic SatNet
environment. To the best of our knowledge, this is the first work to introduce
and study DM-MIMO technology in LEO SatNets. Extensive simulation results
reveal the superiority of the proposed architecture and solutions compared to
conventional approaches in the literature.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 02:21:59 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Abdelsadek",
"Mohammed Y.",
""
],
[
"Kurt",
"Gunes Karabulut",
""
],
[
"Yanikomeroglu",
"Halim",
""
]
] |
new_dataset
| 0.990213 |
2211.00869
|
Haolin Deng
|
Haolin Deng, Yanan Zhang, Yangfan Zhang, Wangyang Ying, Changlong Yu,
Jun Gao, Wei Wang, Xiaoling Bai, Nan Yang, Jin Ma, Xiang Chen, Tianhua Zhou
|
Title2Event: Benchmarking Open Event Extraction with a Large-scale
Chinese Title Dataset
|
EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
  Event extraction (EE) is crucial to downstream tasks such as news aggregation
and event knowledge graph construction. Most existing EE datasets manually
define fixed event types and design specific schema for each of them, failing
to cover diverse events emerging from the online text. Moreover, news titles,
an important source of event mentions, have not gained enough attention in
current EE research. In this paper, we present Title2Event, a large-scale
sentence-level dataset benchmarking Open Event Extraction without restricting
event types. Title2Event contains more than 42,000 news titles in 34 topics
collected from Chinese web pages. To the best of our knowledge, it is currently
the largest manually-annotated Chinese dataset for open event extraction. We
further conduct experiments on Title2Event with different models and show that
the characteristics of titles make event extraction challenging, underscoring
the significance of further study of this problem. The dataset and
baseline codes are available at https://open-event-hub.github.io/title2event.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 04:39:36 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Deng",
"Haolin",
""
],
[
"Zhang",
"Yanan",
""
],
[
"Zhang",
"Yangfan",
""
],
[
"Ying",
"Wangyang",
""
],
[
"Yu",
"Changlong",
""
],
[
"Gao",
"Jun",
""
],
[
"Wang",
"Wei",
""
],
[
"Bai",
"Xiaoling",
""
],
[
"Yang",
"Nan",
""
],
[
"Ma",
"Jin",
""
],
[
"Chen",
"Xiang",
""
],
[
"Zhou",
"Tianhua",
""
]
] |
new_dataset
| 0.999531 |
2211.00872
|
Hadi Jahanshahi
|
Hadi Jahanshahi, Mucahit Cevik, Kianoush Mousavi, Ayşe Başar
|
ADPTriage: Approximate Dynamic Programming for Bug Triage
| null | null | null | null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bug triaging is a critical task in any software development project. It
entails triagers going over a list of open bugs, deciding whether each is
required to be addressed, and, if so, which developer should fix it. However,
the manual bug assignment in issue tracking systems (ITS) offers only a limited
solution and might easily fail when triagers must handle a large number of bug
reports. During the automated assignment, there are multiple sources of
uncertainties in the ITS, which should be addressed meticulously. In this
study, we develop a Markov decision process (MDP) model for an online bug
triage task. In addition to an optimization-based myopic technique, we provide
an approximate dynamic programming (ADP) based bug triage solution, called
ADPTriage, which has the ability to
reflect the downstream uncertainty in the bug arrivals and developers'
timetables. Specifically, without placing any limits on the underlying
stochastic process, this technique enables real-time decision-making on bug
assignments while taking into consideration developers' expertise, bug type,
and bug fixing time. Our result shows a significant improvement over the myopic
approach in terms of assignment accuracy and fixing time. We also demonstrate
the empirical convergence of the model and conduct sensitivity analysis with
various model parameters. Accordingly, this work constitutes a significant step
forward in addressing the uncertainty in bug triage solutions.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 04:42:21 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Jahanshahi",
"Hadi",
""
],
[
"Cevik",
"Mucahit",
""
],
[
"Mousavi",
"Kianoush",
""
],
[
"Başar",
"Ayşe",
""
]
] |
new_dataset
| 0.997365 |
2211.00874
|
Zhifeng Tang
|
Zhifeng Tang, Zhuo Sun, Nan Yang, and Xiangyun Zhou
|
Age of Information of Multi-user Mobile Edge Computing Systems
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we analyze the average age of information (AoI) and the
average peak AoI (PAoI) of a multiuser mobile edge computing (MEC) system where
a base station (BS) generates and transmits computation-intensive packets to
user equipments (UEs). In this MEC system, we focus on three computing schemes:
(i) The local computing scheme where all computational tasks are computed by
the local server at the UE, (ii) The edge computing scheme where all
computational tasks are computed by the edge server at the BS, and (iii) The
partial computing scheme where computational tasks are partially allocated at
the edge server and the rest are computed by the local server. Considering
exponentially distributed transmission time and computation time and adopting
the first come first serve (FCFS) queuing policy, we derive closed-form
expressions for the average AoI and average PAoI. To address the complexity of
the average AoI expression, we derive simple upper and lower bounds on the
average AoI, which allow us to explicitly examine the dependence of the optimal
offloading decision on the MEC system parameters. Aided by simulation results,
we verify our analysis and illustrate the impact of system parameters on the
AoI performance.
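
To make this kind of closed-form analysis concrete, the sketch below evaluates the textbook average AoI and average peak AoI of an M/M/1 FCFS queue (Kaul et al., 2012). The paper's MEC-specific expressions differ, so the formulas here are illustrative assumptions, not the paper's results.

```python
# Sketch: textbook average AoI and peak AoI for an M/M/1 FCFS queue
# (Kaul-Yates-Gruteser 2012). The paper derives analogous closed forms
# for its MEC computing schemes; the exact expressions there differ.

def mm1_fcfs_aoi(lam: float, mu: float) -> tuple[float, float]:
    """Return (average AoI, average peak AoI) for arrival rate lam,
    service rate mu, assuming rho = lam/mu < 1."""
    rho = lam / mu
    assert 0 < rho < 1, "queue must be stable"
    avg_aoi = (1.0 / mu) * (1.0 + 1.0 / rho + rho**2 / (1.0 - rho))
    avg_paoi = 1.0 / lam + 1.0 / (mu - lam)  # mean interarrival + mean system time
    return avg_aoi, avg_paoi

for lam in (0.2, 0.5, 0.8):
    aoi, paoi = mm1_fcfs_aoi(lam, mu=1.0)
    print(f"lambda={lam:.1f}: avg AoI={aoi:.3f}, avg PAoI={paoi:.3f}")
```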
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 04:43:42 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Tang",
"Zhifeng",
""
],
[
"Sun",
"Zhuo",
""
],
[
"Yang",
"Nan",
""
],
[
"Zhou",
"Xiangyun",
""
]
] |
new_dataset
| 0.969906 |
2211.00891
|
Petr Lisonek
|
Reza Dastbasteh, Petr Lisonek
|
New quantum codes from self-dual codes over F_4
|
16 pages
| null | null | null |
cs.IT math.IT math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
We present new constructions of binary quantum codes from quaternary linear
Hermitian self-dual codes. Our main ingredients for these constructions are
nearly self-orthogonal cyclic or duadic codes over F_4. An infinite family of
$0$-dimensional binary quantum codes is provided. We give minimum distance
lower bounds for our quantum codes in terms of the minimum distance of their
ingredient linear codes. We also present new results on the minimum distance of
linear cyclic codes using their fixed subcodes. Finally, we list many new
record-breaking quantum codes obtained from our constructions.
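
As a flavor of the self-orthogonality computations underlying such constructions, here is a minimal sketch that checks Hermitian self-orthogonality over F_4. The arithmetic tables are standard; the toy generator matrix is our own illustration, not one of the paper's codes.

```python
# Sketch: checking Hermitian self-orthogonality over F_4 = {0, 1, w, w^2},
# encoded as integers 0..3 with w = 2, w^2 = 3. Addition is XOR; the
# multiplication table and Frobenius conjugation x -> x^2 are hardcoded.
# The generator matrix below is a toy example, not one of the paper's codes.

ADD = lambda a, b: a ^ b
MUL = [[0, 0, 0, 0],
       [0, 1, 2, 3],
       [0, 2, 3, 1],   # w*w = w^2, w*w^2 = 1
       [0, 3, 1, 2]]   # w^2*w^2 = w
CONJ = [0, 1, 3, 2]    # x -> x^2 swaps w and w^2

def hermitian_ip(x, y):
    """Hermitian inner product sum_i x_i * conj(y_i) over F_4."""
    acc = 0
    for a, b in zip(x, y):
        acc = ADD(acc, MUL[a][CONJ[b]])
    return acc

def is_hermitian_self_orthogonal(gen):
    return all(hermitian_ip(r, s) == 0 for r in gen for s in gen)

# Toy generator: the length-2 repetition code [1, 1] satisfies
# <g, g> = 1*1 + 1*1 = 0 in characteristic 2.
print(is_hermitian_self_orthogonal([[1, 1]]))  # True
```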
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 05:30:54 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Dastbasteh",
"Reza",
""
],
[
"Lisonek",
"Petr",
""
]
] |
new_dataset
| 0.999866 |
2211.00897
|
Petr Lisonek
|
Reza Dastbasteh, Petr Lisonek
|
On the equivalence of linear cyclic and constacyclic codes
|
18 pages
| null | null | null |
cs.IT math.IT math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce new sufficient conditions for permutation and monomial
equivalence of linear cyclic codes over various finite fields. We recall that
monomial equivalence and isometric equivalence are the same relation for linear
codes over finite fields. A necessary and sufficient condition for the monomial
equivalence of linear cyclic codes through a shift map on their defining set is
also given. Moreover, we provide new algebraic criteria for the monomial
equivalence of constacyclic codes over $\mathbb{F}_4$. Finally, we prove that
if $\gcd(3n,\phi(3n))=1$, then all permutation equivalent constacyclic codes of
length $n$ over $\mathbb{F}_4$ are given by the action of multipliers. The
results of this work allow us to prune the search algorithm for new linear
codes and discover record-breaking linear and quantum codes.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 05:43:38 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Dastbasteh",
"Reza",
""
],
[
"Lisonek",
"Petr",
""
]
] |
new_dataset
| 0.964965 |
2211.00937
|
Jincheng Dai
|
Ke Yang, Sixian Wang, Jincheng Dai, Kailin Tan, Kai Niu, Ping Zhang
|
WITT: A Wireless Image Transmission Transformer for Semantic
Communications
| null | null | null | null |
cs.CV cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we aim to redesign the vision Transformer (ViT) as a new
backbone to realize semantic image transmission, termed wireless image
transmission transformer (WITT). Previous works build upon convolutional neural
networks (CNNs), which are inefficient in capturing global dependencies,
resulting in degraded end-to-end transmission performance especially for
high-resolution images. To tackle this, the proposed WITT employs Swin
Transformers as a more capable backbone to extract long-range information.
Different from ViTs in image classification tasks, WITT is highly optimized for
image transmission while considering the effect of the wireless channel.
Specifically, we propose a spatial modulation module to scale the latent
representations according to channel state information, which enhances the
ability of a single model to deal with various channel conditions. As a result,
extensive experiments verify that our WITT attains better performance for
different image resolutions, distortion metrics, and channel conditions. The
code is available at https://github.com/KeYang8/WITT.
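
A minimal PyTorch sketch of the idea behind such a modulation module follows: latent features are gated channel-wise by a factor predicted from the SNR. The module name, layer sizes, and tensor shapes are assumptions, not the paper's exact architecture.

```python
# Sketch (PyTorch): scaling latent features by a factor predicted from the
# channel SNR, in the spirit of the paper's spatial modulation module.
# Shapes, layer sizes, and the name ChannelModulation are assumptions.
import torch
import torch.nn as nn

class ChannelModulation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Map scalar SNR (in dB) to one multiplicative gate per channel.
        self.mlp = nn.Sequential(
            nn.Linear(1, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor, snr_db: torch.Tensor) -> torch.Tensor:
        # z: (B, C, H, W) latent; snr_db: (B, 1) channel state information.
        scale = self.mlp(snr_db)               # (B, C)
        return z * scale[:, :, None, None]     # broadcast over spatial dims

z = torch.randn(4, 96, 16, 16)
snr = torch.full((4, 1), 10.0)                 # 10 dB channel
print(ChannelModulation(96)(z, snr).shape)     # torch.Size([4, 96, 16, 16])
```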
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 07:50:27 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Yang",
"Ke",
""
],
[
"Wang",
"Sixian",
""
],
[
"Dai",
"Jincheng",
""
],
[
"Tan",
"Kailin",
""
],
[
"Niu",
"Kai",
""
],
[
"Zhang",
"Ping",
""
]
] |
new_dataset
| 0.999748 |
2211.00941
|
Chengdong Liang
|
Chengdong Liang, Xiao-Lei Zhang, BinBin Zhang, Di Wu, Shengqiang Li,
Xingchen Song, Zhendong Peng, Fuping Pan
|
Fast-U2++: Fast and Accurate End-to-End Speech Recognition in Joint
CTC/Attention Frames
|
5 pages, 3 figures
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the unified streaming and non-streaming two-pass (U2/U2++)
end-to-end model for speech recognition has shown great performance in terms of
streaming capability, accuracy and latency. In this paper, we present
fast-U2++, an enhanced version of U2++ to further reduce partial latency. The
core idea of fast-U2++ is to output partial results of the bottom layers in its
encoder with a small chunk, while using a large chunk in the top layers of its
encoder to compensate for the performance degradation caused by the small chunk.
Moreover, we use a knowledge distillation method to reduce the token emission
latency. We present extensive experiments on Aishell-1 dataset. Experiments and
ablation studies show that compared to U2++, fast-U2++ reduces model latency
from 320ms to 80ms, and achieves a character error rate (CER) of 5.06% with a
streaming setup.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 08:01:52 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Liang",
"Chengdong",
""
],
[
"Zhang",
"Xiao-Lei",
""
],
[
"Zhang",
"BinBin",
""
],
[
"Wu",
"Di",
""
],
[
"Li",
"Shengqiang",
""
],
[
"Song",
"Xingchen",
""
],
[
"Peng",
"Zhendong",
""
],
[
"Pan",
"Fuping",
""
]
] |
new_dataset
| 0.994315 |
2211.00992
|
Hannaneh Barahouei Pasandi
|
Hannaneh Barahouei Pasandi, Asma Haghighat, Azin Moradbeikie, Ahmad
Keshavarz, Habib Rostami, Sara Paiva, Sergio Ivan Lopes
|
Low-Cost Traffic Sensing System Based on LoRaWAN for Urban Areas
|
7 pages, accepted to Emerging Topics in Wireless (EmergingWireless)
in CoNEXT 2022
| null |
10.1145/3565474.3569069
| null |
cs.NI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The advent of Low Power Wide Area Networks (LPWAN) has enabled the
feasibility of wireless sensor networks for environmental traffic sensing
across urban areas. In this study, we explore the usage of LoRaWAN end nodes as
traffic sensing sensors to offer a practical traffic management solution. The
monitored Received Signal Strength Indicator (RSSI) factor is reported and used
in the gateways to assess the traffic of the environment. Our technique
utilizes LoRaWAN as a long-range communication technology to provide a
large-scale system. In this work, we present a method of using LoRaWAN devices
to estimate traffic flows. LoRaWAN end devices then transmit their packets to
different gateways. Their RSSI will be affected by the number of cars present
on the roadway. We used SVM and clustering methods to classify the approximate
number of cars present. This paper details our experiences with the design and
real implementation of this system across an area that stretches for miles in
urban scenarios. We continuously measured and reported RSSI at different
gateways for weeks. Results show that if a LoRaWAN end node is placed in
an optimal position, up to 96% of environment traffic levels can be detected
correctly. Additionally, we share the lessons learned from this deployment.
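
As a sketch of the classification step, the snippet below trains an SVM to map gateway RSSI readings to a traffic level. The synthetic data, the 4 dB-per-level attenuation, and the three-gateway feature layout are assumptions for illustration only.

```python
# Sketch (scikit-learn): classifying traffic level from RSSI readings, as the
# paper does with SVM. The synthetic data and feature layout are assumptions;
# the real system uses RSSI reported at multiple LoRaWAN gateways.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
levels = rng.integers(0, 3, size=600)          # 0=low, 1=medium, 2=high traffic
# More cars -> more attenuation: mean RSSI drops ~4 dB per level (assumed).
rssi = -90.0 - 4.0 * levels[:, None] + rng.normal(0, 2.0, size=(600, 3))

X_tr, X_te, y_tr, y_te = train_test_split(rssi, levels, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"traffic-level accuracy: {clf.score(X_te, y_te):.2f}")
```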
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 09:54:16 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Pasandi",
"Hannaneh Barahouei",
""
],
[
"Haghighat",
"Asma",
""
],
[
"Moradbeikie",
"Azin",
""
],
[
"Keshavarz",
"Ahmad",
""
],
[
"Rostami",
"Habib",
""
],
[
"Paiva",
"Sara",
""
],
[
"Lopes",
"Sergio Ivan",
""
]
] |
new_dataset
| 0.998934 |
2211.01000
|
Jingfan Yu
|
Jingfan Yu, Mengqian Zhang, Xi Chen, Zhixuan Fang
|
SoK: Play-to-Earn Projects
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Play-to-earn is one of the prospective categories of decentralized
applications. The play-to-earn projects combine blockchain technology with
entertaining games and finance, attracting various participants. While huge
amounts of capital have been poured into these projects, the new crypto niche
is considered controversial, and the traditional gaming industry is hesitant to
embrace blockchain technology. In addition, there is little systematic research
on these projects. In this paper, we delineate play-to-earn projects in terms
of economic & governance models and implementation and analyze how blockchain
technology can benefit these projects by providing system robustness,
transparency, composability, and decentralized governance. We begin by
identifying the participants and characterizing the tokens, which are products
of composability. We then summarize the roadmap and governance model to show
that there is a transition from centralized to decentralized governance.
We also classify the implementation of the play-to-earn projects with different
extents of robustness and transparency. Finally, we discuss the security &
societal challenges for future research in terms of possible attacks, the
economics of tokens, and governance.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 10:01:09 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Yu",
"Jingfan",
""
],
[
"Zhang",
"Mengqian",
""
],
[
"Chen",
"Xi",
""
],
[
"Fang",
"Zhixuan",
""
]
] |
new_dataset
| 0.999195 |
2211.01070
|
Miguel Altamirano Cabrera
|
Sautenkov Oleg, Altamirano Cabrera Miguel, Rakhmatulin Viktor, and
Tsetserukou Dzmitry
|
CobotTouch: AR-based Interface with Fingertip-worn Tactile Display for
Immersive Operation/Control of Collaborative Robots
|
12 pages, 11 figures, Accepted paper in AsiaHaptics 2022
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
  Complex robotic tasks require human collaboration to benefit from humans' high
dexterity. Frequent human-robot interaction is mentally demanding and
time-consuming. Intuitive and easy-to-use robot control interfaces reduce the
negative influence on workers, especially inexperienced users. In this paper,
we present CobotTouch, a novel intuitive robot control interface with fingertip
haptic feedback. The proposed interface consists of a projected Graphical User
Interface on the robotic arm to control the position of the robot end-effector
based on gesture recognition, and a wearable haptic interface to deliver
tactile feedback on the user's fingertips. We evaluated the user's perception
of the designed tactile patterns presented by the haptic interface and the
intuitiveness of the proposed system for robot control in a use case. The
results revealed a high average recognition rate of 75.25\% for the tactile
patterns. The average NASA Task Load Index (TLX) indicated low mental and
temporal demands, demonstrating a high level of intuitiveness of CobotTouch for
interaction with collaborative robots.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 12:03:09 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Oleg",
"Sautenkov",
""
],
[
"Miguel",
"Altamirano Cabrera",
""
],
[
"Viktor",
"Rakhmatulin",
""
],
[
"Dzmitry",
"Tsetserukou",
""
]
] |
new_dataset
| 0.996532 |
2211.01145
|
Zheng Li
|
Zheng Li and Mauricio Pradena Miquel and Pedro Pinacho-Davidson
|
Safety-centric and Smart Outdoor Workplace: A New Research Direction and
Its Technical Challenges
|
14 pages
| null | null | null |
cs.HC cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
  Although the outdoors is becoming the new frontier beyond indoor workplaces,
a large amount of real-world work, such as road construction, still has to be
done through outdoor human activities in open areas. Given the promise of the smart
workplace in various aspects including productivity and safety, we decided to
employ smart workplace technologies for a collaborative outdoor project both to
improve the work efficiency and to reduce the worker injuries. Nevertheless,
our trials on smart workplace implementation have encountered a few problems
ranging from the theoretical confusion among different stakeholders, to the
technical difficulties in extending underground devices' lifespan. This
triggers our rethinking of and discussions about "smart workplace". Eventually,
considering the unique characteristics of outdoor work (e.g., more
sophisticated workflows and more safety-related situations than office work),
we argue that "safety-centric and smart outdoor workplace" deserves dedicated
research attention and effort under the umbrella discipline of smart
environment. In addition, the identified technical challenges can in turn drive
different research dimensions of such a distinguishing topic.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 14:22:31 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Li",
"Zheng",
""
],
[
"Miquel",
"Mauricio Pradena",
""
],
[
"Pinacho-Davidson",
"Pedro",
""
]
] |
new_dataset
| 0.999309 |
2211.01173
|
Max Sokolich
|
Max Sokolich, David Rivas, Markos Duey, Daniel
Borsykowsky, Sambeeta Das
|
ModMag: A Modular Magnetic Micro-Robotic Manipulation Device
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
  Electromagnetic systems have been used extensively for the control of
magnetically actuated objects, such as in microrheology and microrobotics
research. Therefore, optimizing the design of such systems is highly desired.
Some of the features that are lacking in most current designs are compactness,
portability, and versatility. Portability is especially relevant for biomedical
applications in which in vivo or in vitro testing may be conducted in locations
away from the laboratory microscope. This document describes the design,
fabrication, and implementation of a compact, low-cost, versatile, and
user-friendly device (the ModMag) capable of controlling multiple electromagnetic
setups, including a two-dimensional 4-coil traditional configuration, a
3-dimensional Helmholtz configuration, and a 3-dimensional magnetic tweezer
configuration. All electronics and circuitry for powering the systems are
contained in a compact 10"x6"x3" enclosure which includes a 10" touchscreen. A
graphical user interface provides additional ease of use. The system can also
be controlled remotely, allowing for more flexibility and the ability to
interface with other software running on the remote computer such as propriety
camera software. Aside from the software and circuitry, we also describe the
design of the electromagnetic coil setups and provide examples of the use of
the ModMag in experiments.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 15:26:05 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Sokolich",
"Max",
""
],
[
"Rivas",
"David",
""
],
[
"Duey",
"Markos",
""
],
[
"Borsykowsky",
"Daniel",
""
],
[
"Das",
"Sambeeta",
""
]
] |
new_dataset
| 0.995268 |
2211.01178
|
Michal Edelstein
|
Michal Edelstein, Hila Peleg, Shachar Itzhaky and Mirela Ben-Chen
|
AmiGo: Computational Design of Amigurumi Crochet Patterns
|
11 pages, 10 figures, SCF 2022
| null |
10.1145/3559400.3562005
| null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose an approach for generating crochet instructions (patterns) from an
input 3D model. We focus on Amigurumi, which are knitted stuffed toys. Given a
closed triangle mesh, and a single point specified by the user, we generate
crochet instructions, which when knitted and stuffed result in a toy similar to
the input geometry. Our approach relies on constructing the geometry and
connectivity of a Crochet Graph, which is then translated into a crochet
pattern. We segment the shape automatically into crochetable components, which
are connected using the join-as-you-go method, requiring no additional sewing.
We demonstrate that our method is applicable to a large variety of shapes and
geometries, and yields easily crochetable patterns.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 14:53:21 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Edelstein",
"Michal",
""
],
[
"Peleg",
"Hila",
""
],
[
"Itzhaky",
"Shachar",
""
],
[
"Ben-Chen",
"Mirela",
""
]
] |
new_dataset
| 0.999415 |
2211.01254
|
Ethan Nguyen
|
Ethan H. Nguyen, Haichun Yang, Zuhayr Asad, Ruining Deng, Agnes B.
Fogo, and Yuankai Huo
|
CircleSnake: Instance Segmentation with Circle Representation
|
Machine Learning in Medical Imaging Workshop for 2022 MICCAI
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Circle representation has recently been introduced as a medical imaging
optimized representation for more effective instance object detection on
ball-shaped medical objects. With its superior performance on instance
detection, it is appealing to extend the circle representation to instance
medical object segmentation. In this work, we propose CircleSnake, a simple
end-to-end circle contour deformation-based segmentation method for ball-shaped
medical objects. Compared to the prevalent DeepSnake method, our contribution
is three-fold: (1) We replace the complicated bounding box to octagon contour
transformation with a computation-free and consistent bounding circle to circle
contour adaptation for segmenting ball-shaped medical objects; (2) Circle
representation has fewer degrees of freedom (DoF=2) as compared with the
octagon representation (DoF=8), thus yielding a more robust segmentation
performance and better rotation consistency; (3) To the best of our knowledge,
the proposed CircleSnake method is the first end-to-end circle representation
deep segmentation pipeline method with consistent circle detection, circle
contour proposal, and circular convolution. The key innovation is to integrate
the circular graph convolution with circle detection into an end-to-end
instance segmentation framework, enabled by the proposed simple and consistent
circle contour representation. Glomeruli are used to evaluate the performance
of the benchmarks. From the results, CircleSnake increases the average
precision of glomerular detection from 0.559 to 0.614. The Dice score increases
from 0.804 to 0.849. The code has been released:
https://github.com/hrlblab/CircleSnake
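
As a sketch of the circle representation's low degrees of freedom, the snippet below builds the initial contour proposal from a detected circle by sampling points uniformly along it; the point count and the example circle are assumptions, not values from the paper.

```python
# Sketch (NumPy): the circle contour proposal that CircleSnake deforms.
# A detected circle (center, radius) has few degrees of freedom, and the
# initial contour is simply K points sampled uniformly on it; K = 128 and
# the example circle are assumptions for illustration.
import numpy as np

def circle_contour(cx: float, cy: float, r: float, k: int = 128) -> np.ndarray:
    """Return (k, 2) array of points on the circle, ordered by angle."""
    theta = np.linspace(0.0, 2.0 * np.pi, k, endpoint=False)
    return np.stack([cx + r * np.cos(theta), cy + r * np.sin(theta)], axis=1)

contour = circle_contour(cx=64.0, cy=64.0, r=20.0)
print(contour.shape)  # (128, 2): input vertices for contour deformation
```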
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 16:34:20 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Nguyen",
"Ethan H.",
""
],
[
"Yang",
"Haichun",
""
],
[
"Asad",
"Zuhayr",
""
],
[
"Deng",
"Ruining",
""
],
[
"Fogo",
"Agnes B.",
""
],
[
"Huo",
"Yuankai",
""
]
] |
new_dataset
| 0.995241 |
2211.01342
|
Zikang Leng
|
Zikang Leng, Yash Jain, Hyeokhyen Kwon, Thomas Plötz
|
Fine-grained Human Activity Recognition Using Virtual On-body
Acceleration Data
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous work has demonstrated that virtual accelerometry data, extracted
from videos using cross-modality transfer approaches like IMUTube, is
beneficial for training complex and effective human activity recognition (HAR)
models. Systems like IMUTube were originally designed to cover activities that
are based on substantial body (part) movements. Yet, life is complex, and a
range of activities of daily living is based on only rather subtle movements,
which raises the question to what extent systems like IMUTube are also of value
for fine-grained HAR, i.e., when does IMUTube break? In this work we first
introduce a measure to quantitatively assess the subtlety of human movements
that are underlying activities of interest--the motion subtlety index
(MSI)--which captures local pixel movements and pose changes in the vicinity of
target virtual sensor locations, and correlate it to the eventual activity
recognition accuracy. We then perform a "stress-test" on IMUTube and explore
for which activities with underlying subtle movements a cross-modality transfer
approach works, and for which not. As such, the work presented in this paper
allows us to map out the landscape for IMUTube applications in practical
scenarios.
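
A minimal sketch of the pixel-movement ingredient of such an index follows, using dense optical flow around a target virtual sensor location. The window size and the mean-magnitude aggregation are assumptions, and the actual MSI also incorporates pose changes.

```python
# Sketch (OpenCV): measuring local pixel movement around a target virtual
# sensor location, one ingredient of the paper's motion subtlety index (MSI).
# The window size and mean-magnitude aggregation are assumptions; the
# actual MSI also incorporates pose changes.
import cv2
import numpy as np

def local_motion_magnitude(prev_gray, next_gray, cx, cy, half_win=20):
    """Mean optical-flow magnitude in a window centred on (cx, cy)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    y0, y1 = max(cy - half_win, 0), cy + half_win
    x0, x1 = max(cx - half_win, 0), cx + half_win
    return float(mag[y0:y1, x0:x1].mean())

# Two synthetic frames: a bright square shifted by 1 pixel (a subtle movement).
a = np.zeros((128, 128), np.uint8); a[40:60, 40:60] = 255
b = np.zeros((128, 128), np.uint8); b[40:60, 41:61] = 255
print(local_motion_magnitude(a, b, cx=50, cy=50))
```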
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 17:51:56 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Leng",
"Zikang",
""
],
[
"Jain",
"Yash",
""
],
[
"Kwon",
"Hyeokhyen",
""
],
[
"Plötz",
"Thomas",
""
]
] |
new_dataset
| 0.993791 |
2211.01355
|
Xing Niu
|
  Anna Currey, Maria Nădejde, Raghavendra Pappagari, Mia Mayer,
Stanislas Lauly, Xing Niu, Benjamin Hsu, Georgiana Dinu
|
MT-GenEval: A Counterfactual and Contextual Dataset for Evaluating
Gender Accuracy in Machine Translation
|
Accepted at EMNLP 2022. Data and code:
https://github.com/amazon-research/machine-translation-gender-eval
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As generic machine translation (MT) quality has improved, the need for
targeted benchmarks that explore fine-grained aspects of quality has increased.
In particular, gender accuracy in translation can have implications in terms of
output fluency, translation accuracy, and ethics. In this paper, we introduce
MT-GenEval, a benchmark for evaluating gender accuracy in translation from
English into eight widely-spoken languages. MT-GenEval complements existing
benchmarks by providing realistic, gender-balanced, counterfactual data in
eight language pairs where the gender of individuals is unambiguous in the
input segment, including multi-sentence segments requiring inter-sentential
gender agreement. Our data and code are publicly available under a CC BY SA 3.0
license.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 17:55:43 GMT"
}
] | 2022-11-03T00:00:00 |
[
[
"Currey",
"Anna",
""
],
[
"Nădejde",
"Maria",
""
],
[
"Pappagari",
"Raghavendra",
""
],
[
"Mayer",
"Mia",
""
],
[
"Lauly",
"Stanislas",
""
],
[
"Niu",
"Xing",
""
],
[
"Hsu",
"Benjamin",
""
],
[
"Dinu",
"Georgiana",
""
]
] |
new_dataset
| 0.999849 |
2106.02393
|
Pierre Laforgue
|
Nicolò Cesa-Bianchi, Pierre Laforgue, Andrea Paudice, Massimiliano
Pontil
|
Multitask Online Mirror Descent
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce and analyze MT-OMD, a multitask generalization of Online Mirror
Descent (OMD) which operates by sharing updates between tasks. We prove that
the regret of MT-OMD is of order $\sqrt{1 + \sigma^2(N-1)}\sqrt{T}$, where
$\sigma^2$ is the task variance according to the geometry induced by the
regularizer, $N$ is the number of tasks, and $T$ is the time horizon. Whenever
tasks are similar, that is $\sigma^2 \le 1$, our method improves upon the
$\sqrt{NT}$ bound obtained by running independent OMDs on each task. We further
provide a matching lower bound, and show that our multitask extensions of
Online Gradient Descent and Exponentiated Gradient, two major instances of OMD,
enjoy closed-form updates, making them easy to use in practice. Finally, we
present experiments which support our theoretical findings.
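
The improvement promised by the bound is easy to quantify: the following sketch computes the ratio of the MT-OMD bound to the independent-OMD bound, ignoring the constants both bounds hide.

```python
# Sketch: comparing the MT-OMD regret bound sqrt(1 + sigma^2 (N-1)) sqrt(T)
# with the sqrt(N T) bound of N independent OMD runs. The constant factors
# hidden by the bounds are ignored here.
import math

def bound_ratio(sigma2: float, n: int) -> float:
    """MT-OMD bound divided by independent-OMD bound (smaller is better)."""
    return math.sqrt((1.0 + sigma2 * (n - 1)) / n)

for sigma2 in (0.0, 0.1, 0.5, 1.0):
    print(f"sigma^2={sigma2:.1f}: ratio={bound_ratio(sigma2, n=100):.3f}")
# At sigma^2 = 0 (identical tasks) the ratio is 0.1, a 10x regret saving
# for N = 100; at sigma^2 = 1 the two bounds essentially coincide.
```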
|
[
{
"version": "v1",
"created": "Fri, 4 Jun 2021 10:14:57 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Oct 2021 10:27:59 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Nov 2022 14:21:48 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Cesa-Bianchi",
"Nicolò",
""
],
[
"Laforgue",
"Pierre",
""
],
[
"Paudice",
"Andrea",
""
],
[
"Pontil",
"Massimiliano",
""
]
] |
new_dataset
| 0.983875 |
2106.08087
|
Ningyu Zhang
|
Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei Li, Xin Shang,
Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie,
Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan,
Hongying Zan, Kunli Zhang, Buzhou Tang, Qingcai Chen
|
CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark
|
Accepted by ACL 2022
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial Intelligence (AI), along with the recent progress in biomedical
language understanding, is gradually changing medical practice. With the
development of biomedical language understanding benchmarks, AI applications
are widely used in the medical field. However, most benchmarks are limited to
English, which makes it challenging to replicate many of the successes in
English for other languages. To facilitate research in this direction, we
collect real-world biomedical data and present the first Chinese Biomedical
Language Understanding Evaluation (CBLUE) benchmark: a collection of natural
language understanding tasks including named entity recognition, information
extraction, clinical diagnosis normalization, single-sentence/sentence-pair
classification, and an associated online platform for model evaluation,
comparison, and analysis. To establish evaluation on these tasks, we report
empirical results for 11 current pre-trained Chinese models, and
experimental results show that state-of-the-art neural models still perform far
worse than the human ceiling. Our benchmark is released at
\url{https://tianchi.aliyun.com/dataset/dataDetail?dataId=95414&lang=en-us}.
|
[
{
"version": "v1",
"created": "Tue, 15 Jun 2021 12:25:30 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jul 2021 09:51:13 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Jul 2021 12:25:56 GMT"
},
{
"version": "v4",
"created": "Tue, 24 Aug 2021 09:22:24 GMT"
},
{
"version": "v5",
"created": "Sat, 28 Aug 2021 12:04:42 GMT"
},
{
"version": "v6",
"created": "Mon, 7 Mar 2022 09:14:20 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Zhang",
"Ningyu",
""
],
[
"Chen",
"Mosha",
""
],
[
"Bi",
"Zhen",
""
],
[
"Liang",
"Xiaozhuan",
""
],
[
"Li",
"Lei",
""
],
[
"Shang",
"Xin",
""
],
[
"Yin",
"Kangping",
""
],
[
"Tan",
"Chuanqi",
""
],
[
"Xu",
"Jian",
""
],
[
"Huang",
"Fei",
""
],
[
"Si",
"Luo",
""
],
[
"Ni",
"Yuan",
""
],
[
"Xie",
"Guotong",
""
],
[
"Sui",
"Zhifang",
""
],
[
"Chang",
"Baobao",
""
],
[
"Zong",
"Hui",
""
],
[
"Yuan",
"Zheng",
""
],
[
"Li",
"Linfeng",
""
],
[
"Yan",
"Jun",
""
],
[
"Zan",
"Hongying",
""
],
[
"Zhang",
"Kunli",
""
],
[
"Tang",
"Buzhou",
""
],
[
"Chen",
"Qingcai",
""
]
] |
new_dataset
| 0.999685 |
2111.12938
|
Ayush Tripathi
|
Ayush Tripathi and Arnab Kumar Mondal and Lalan Kumar and Prathosh A.P
|
SCLAiR : Supervised Contrastive Learning for User and Device Independent
Airwriting Recognition
| null | null |
10.1109/LSENS.2021.3139473
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Airwriting Recognition is the problem of identifying letters written in free
space with finger movement. It is essentially a specialized case of gesture
recognition, wherein the vocabulary of gestures corresponds to letters as in a
particular language. With the wide adoption of smart wearables in the general
population, airwriting recognition using motion sensors from a smart-band can
be used as a medium of user input for applications in Human-Computer
Interaction. There has been limited work in the recognition of in-air
trajectories using motion sensors, and the performance of these techniques when
the device used to record signals is changed has not been explored hitherto.
Motivated by this, a new paradigm for device- and user-independent
airwriting recognition based on supervised contrastive learning is proposed. A
two-stage classification strategy is employed, the first of which involves
training an encoder network with supervised contrastive loss. In the subsequent
stage, a classification head is trained with the encoder weights kept frozen.
The efficacy of the proposed method is demonstrated through experiments on a
publicly available dataset and also with a dataset recorded in our lab using a
different device. Experiments have been performed in both supervised and
unsupervised settings and compared against several state-of-the-art domain
adaptation techniques. Data and the code for our implementation will be made
available at https://github.com/ayushayt/SCLAiR.
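
A minimal PyTorch sketch of the first-stage supervised contrastive loss, in the style of Khosla et al. (2020), is given below; the temperature and batch construction are assumptions rather than the paper's exact settings.

```python
# Sketch (PyTorch): supervised contrastive loss for the encoder stage,
# following the SupCon formulation. Temperature and batch layout are assumed.
import torch
import torch.nn.functional as F

def sup_con_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1):
    """z: (B, D) embeddings, labels: (B,) class ids."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                  # (B, B) similarities
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # Mean log-probability over each anchor's positives, then over anchors.
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()

z = torch.randn(16, 64, requires_grad=True)
labels = torch.randint(0, 4, (16,))
print(sup_con_loss(z, labels))
```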
|
[
{
"version": "v1",
"created": "Thu, 25 Nov 2021 06:35:40 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Dec 2021 08:49:51 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Tripathi",
"Ayush",
""
],
[
"Mondal",
"Arnab Kumar",
""
],
[
"Kumar",
"Lalan",
""
],
[
"P",
"Prathosh A.",
""
]
] |
new_dataset
| 0.99955 |
2203.17149
|
Daniel Gehrig
|
Simon Schaefer, Daniel Gehrig, and Davide Scaramuzza
|
AEGNN: Asynchronous Event-based Graph Neural Networks
|
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
New Orleans, 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The best performing learning algorithms devised for event cameras work by
first converting events into dense representations that are then processed
using standard CNNs. However, these steps discard both the sparsity and high
temporal resolution of events, leading to high computational burden and
latency. For this reason, recent works have adopted Graph Neural Networks
(GNNs), which process events as "static" spatio-temporal graphs, which are
inherently "sparse". We take this trend one step further by introducing
Asynchronous, Event-based Graph Neural Networks (AEGNNs), a novel
event-processing paradigm that generalizes standard GNNs to process events as
``evolving" spatio-temporal graphs. AEGNNs follow efficient update rules that
restrict recomputation of network activations only to the nodes affected by
each new event, thereby significantly reducing both computation and latency for
event-by-event processing. AEGNNs are easily trained on synchronous inputs and
can be converted to efficient, "asynchronous" networks at test time. We
thoroughly validate our method on object classification and detection tasks,
where we show up to an 11-fold reduction in computational complexity (FLOPs),
with similar or even better performance than state-of-the-art asynchronous
methods. This reduction in computation directly translates to an 8-fold
reduction in computational latency when compared to standard GNNs, which opens
the door to low-latency event-based processing.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 16:21:12 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2022 16:03:28 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Nov 2022 11:18:54 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Schaefer",
"Simon",
""
],
[
"Gehrig",
"Daniel",
""
],
[
"Scaramuzza",
"Davide",
""
]
] |
new_dataset
| 0.990414 |
2205.01904
|
Ayush Tripathi
|
Ayush Tripathi, Arnab Kumar Mondal, Lalan Kumar, Prathosh A.P
|
ImAiR: Airwriting Recognition framework using Image Representation of
IMU Signals
| null | null |
10.1109/LSENS.2022.3206307
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The problem of Airwriting Recognition is focused on identifying letters
written by movement of finger in free space. It is a type of gesture
recognition where the dictionary corresponds to letters in a specific language.
In particular, airwriting recognition using sensor data from wrist-worn devices
can be used as a medium of user input for applications in Human-Computer
Interaction (HCI). Recognition of in-air trajectories using such wrist-worn
devices is limited in literature and forms the basis of the current work. In
this paper, we propose an airwriting recognition framework by first encoding
the time-series data obtained from a wearable Inertial Measurement Unit (IMU)
on the wrist as images and then utilizing deep learning-based models for
identifying the written alphabets. The signals recorded from 3-axis
accelerometer and gyroscope in IMU are encoded as images using different
techniques such as Self Similarity Matrix (SSM), Gramian Angular Field (GAF)
and Markov Transition Field (MTF) to form two sets of 3-channel images. These
are then fed to two separate classification models and letter prediction is
made based on an average of the class conditional probabilities obtained from
the two models. Several standard model architectures for image classification
such as variants of ResNet, DenseNet, VGGNet, AlexNet and GoogleNet have been
utilized. Experiments performed on two publicly available datasets demonstrate
the efficacy of the proposed strategy. The code for our implementation will be
made available at https://github.com/ayushayt/ImAiR.
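
As a sketch of one of these encodings, the snippet below computes a Gramian Angular Summation Field (GASF) image from a single 1-D sensor axis; the min-max rescaling to [-1, 1] is the usual convention and assumed here.

```python
# Sketch (NumPy): Gramian Angular Summation Field (GASF) encoding of one
# 1-D sensor axis, one of the image encodings the paper feeds to CNNs.
import numpy as np

def gasf(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D series of length T as a (T, T) GASF image."""
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))                # polar angle
    return np.cos(phi[:, None] + phi[None, :])            # cos(phi_i + phi_j)

accel_x = np.sin(np.linspace(0, 4 * np.pi, 64))           # toy accelerometer axis
image = gasf(accel_x)
print(image.shape)  # (64, 64); stack three axes to form a 3-channel image
```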
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 06:10:34 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2022 07:58:14 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Tripathi",
"Ayush",
""
],
[
"Mondal",
"Arnab Kumar",
""
],
[
"Kumar",
"Lalan",
""
],
[
"P",
"Prathosh A.",
""
]
] |
new_dataset
| 0.997596 |
2205.07728
|
Albert Wu
|
Albert Wu, Thomas Lew, Kiril Solovey, Edward Schmerling, Marco Pavone
|
Robust-RRT: Probabilistically-Complete Motion Planning for Uncertain
Nonlinear Systems
|
16 pages of main text + 5 pages of appendix, 5 figures, submitted to
the 2022 International Symposium on Robotics Research
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robust motion planning entails computing a global motion plan that is safe
under all possible uncertainty realizations, be it in the system dynamics, the
robot's initial position, or with respect to external disturbances. Current
approaches for robust motion planning either lack theoretical guarantees, or
make restrictive assumptions on the system dynamics and uncertainty
distributions. In this paper, we address these limitations by proposing the
robust rapidly-exploring random-tree (Robust-RRT) algorithm, which integrates
forward reachability analysis directly into sampling-based control trajectory
synthesis. We prove that Robust-RRT is probabilistically complete (PC) for
nonlinear Lipschitz continuous dynamical systems with bounded uncertainty. In
other words, Robust-RRT eventually finds a robust motion plan that is feasible
under all possible uncertainty realizations assuming such a plan exists. Our
analysis applies even to unstable systems that admit only short-horizon
feasible plans; this is because we explicitly consider the time evolution of
reachable sets along control trajectories. Thanks to the explicit consideration
of time dependency in our analysis, PC applies to unstabilizable systems. To
the best of our knowledge, this is the most general PC proof for robust
sampling-based motion planning, in terms of the types of uncertainties and
dynamical systems it can handle. Considering that an exact computation of
reachable sets can be computationally expensive for some dynamical systems, we
incorporate sampling-based reachability analysis into Robust-RRT and
demonstrate our robust planner on nonlinear, underactuated, and hybrid systems.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 14:46:12 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Nov 2022 15:22:55 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Wu",
"Albert",
""
],
[
"Lew",
"Thomas",
""
],
[
"Solovey",
"Kiril",
""
],
[
"Schmerling",
"Edward",
""
],
[
"Pavone",
"Marco",
""
]
] |
new_dataset
| 0.955635 |
2207.03477
|
Florin Brad
|
Andrei Manolache, Florin Brad, Antonio Barbalau, Radu Tudor Ionescu,
Marius Popescu
|
VeriDark: A Large-Scale Benchmark for Authorship Verification on the
Dark Web
|
Accepted at the 36th Conference on Neural Information Processing
Systems (NeurIPS 2022) Track on Datasets and Benchmarks. 21 pages, 4 figures,
11 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
  The Dark Web represents a hotbed for illicit activity, where users communicate
on different market forums in order to exchange goods and services. Law
enforcement agencies benefit from forensic tools that perform authorship
analysis, in order to identify and profile users based on their textual
content. However, authorship analysis has been traditionally studied using
corpora featuring literary texts such as fragments from novels or fan fiction,
which may not be suitable in a cybercrime context. Moreover, the few works that
employ authorship analysis tools for cybercrime prevention usually employ
ad-hoc experimental setups and datasets. To address these issues, we release
VeriDark: a benchmark composed of three large-scale authorship verification
datasets and one authorship identification dataset obtained from user activity
from either Dark Web related Reddit communities or popular illicit Dark Web
market forums. We evaluate competitive NLP baselines on the three datasets and
perform an analysis of the predictions to better understand the limitations of
such approaches. We make the datasets and baselines publicly available at
https://github.com/bit-ml/VeriDark
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2022 17:57:11 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Nov 2022 11:22:30 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Manolache",
"Andrei",
""
],
[
"Brad",
"Florin",
""
],
[
"Barbalau",
"Antonio",
""
],
[
"Ionescu",
"Radu Tudor",
""
],
[
"Popescu",
"Marius",
""
]
] |
new_dataset
| 0.999874 |
2210.11637
|
Xiang Wang
|
Wei Zhang, Jiaxi Cao, Xiang Wang, Enqi Tian and Bin Li
|
Slippage-robust Gaze Tracking for Near-eye Display
|
7 pages, 8 figures
| null | null | null |
cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, head-mounted near-eye display devices have become the key
hardware foundation for virtual reality and augmented reality. Thus
head-mounted gaze tracking technology has received attention as an essential
part of human-computer interaction. However, unavoidable slippage of
head-mounted devices (HMDs) often results in higher gaze tracking errors and
hinders the practical usage of HMDs. To tackle this problem, we propose a
slippage-robust gaze tracking method for near-eye displays based on an aspheric
eyeball model that accurately computes the eyeball optical axis and rotation
center. We tested several methods on datasets with slippage, and the
experimental results show that the proposed method significantly outperforms
previous methods (performing almost twice as well as the second-best method).
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 23:47:56 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Nov 2022 17:52:05 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Zhang",
"Wei",
""
],
[
"Cao",
"Jiaxi",
""
],
[
"Wang",
"Xiang",
""
],
[
"Tian",
"Enqi",
""
],
[
"Li",
"Bin",
""
]
] |
new_dataset
| 0.994344 |
2210.14722
|
Michalis Xefteris
|
Evripidis Bampis, Bruno Escoffier, Niklas Hahn and Michalis Xefteris
|
Online TSP with Known Locations
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we consider the Online Traveling Salesperson Problem (OLTSP)
where the locations of the requests are known in advance, but not their arrival
times. We study both the open variant, in which the algorithm is not required
to return to the origin when all the requests are served, as well as the closed
variant, in which the algorithm has to return to the origin after serving all
the requests. Our aim is to measure the impact of the extra knowledge of the
locations on the competitiveness of the problem. We present an online
3/2-competitive algorithm for the general case and a matching lower bound for
both the open and the closed variant. Then, we focus on some interesting metric
spaces (ring, star, semi-line), providing both lower bounds and polynomial time
online algorithms for the problem.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 13:51:49 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Oct 2022 13:55:51 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Nov 2022 12:51:59 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Bampis",
"Evripidis",
""
],
[
"Escoffier",
"Bruno",
""
],
[
"Hahn",
"Niklas",
""
],
[
"Xefteris",
"Michalis",
""
]
] |
new_dataset
| 0.953592 |
2211.00005
|
Piotr Koniusz
|
Lei Wang and Piotr Koniusz
|
Uncertainty-DTW for Time Series and Sequences
|
Accepted as an oral paper at the 17th European Conference on Computer
Vision (ECCV 2022). arXiv admin note: text overlap with arXiv:2210.16820
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic Time Warping (DTW) is used for matching pairs of sequences and
celebrated in applications such as forecasting the evolution of time series,
clustering time series or even matching sequence pairs in few-shot action
recognition. The transportation plan of DTW contains a set of paths; each path
matches frames between two sequences under a varying degree of time warping, to
account for varying temporal intra-class dynamics of actions. However, as DTW
is the smallest distance among all paths, it may be affected by the feature
uncertainty which varies across time steps/frames. Thus, in this paper, we
propose to model the so-called aleatoric uncertainty of a differentiable (soft)
version of DTW. To this end, we model the heteroscedastic aleatoric uncertainty
of each path by the product of likelihoods from Normal distributions, each
capturing the variance of a pair of frames. (The path distance is the sum of base
distances between features of pairs of frames of the path.) The Maximum
Likelihood Estimation (MLE) applied to a path yields two terms: (i) a sum of
Euclidean distances weighted by the variance inverse, and (ii) a sum of
log-variance regularization terms. Thus, our uncertainty-DTW is the smallest
weighted path distance among all paths, and the regularization term (penalty
for the high uncertainty) is the aggregate of log-variances along the path. The
distance and the regularization term can be used in various objectives. We
showcase forecasting the evolution of time series, estimating the Fréchet
mean of time series, and supervised/unsupervised few-shot action recognition of
the articulated human 3D body joints.
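
A hard-min sketch of the resulting distance follows: each cell's cost is the inverse-variance-weighted base distance plus a log-variance penalty, matching the two MLE terms above. The paper's version uses a differentiable soft-min, and the constant variance in the demo is an assumption.

```python
# Sketch (NumPy): a hard-min variant of the uncertainty-weighted DTW distance.
# Per-cell cost = base distance / variance + log-variance penalty; the paper
# uses a differentiable soft-min, and the constant sigma here is an assumption.
import numpy as np

def uncertainty_dtw(x, y, sigma2):
    """x: (n, d), y: (m, d), sigma2: (n, m) per-pair variances."""
    n, m = len(x), len(y)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)     # squared distances
    cost = d2 / sigma2 + np.log(sigma2)                     # weighted + penalty
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

x = np.random.randn(10, 3); y = np.random.randn(12, 3)
print(uncertainty_dtw(x, y, sigma2=np.ones((10, 12))))
```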
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 17:06:55 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Wang",
"Lei",
""
],
[
"Koniusz",
"Piotr",
""
]
] |
new_dataset
| 0.992154 |
2211.00046
|
Everlyn Chimoto
|
Everlyn Asiko Chimoto and Bruce A. Bassett
|
Very Low Resource Sentence Alignment: Luhya and Swahili
|
Accepted to LoResMT 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Language-agnostic sentence embeddings generated by pre-trained models such as
LASER and LaBSE are attractive options for mining large datasets to produce
parallel corpora for low-resource machine translation. We test LASER and LaBSE
in extracting bitext for two related low-resource African languages: Luhya and
Swahili. For this work, we created a new parallel set of nearly 8000
Luhya-English sentences which allows a new zero-shot test of LASER and LaBSE.
We find that LaBSE significantly outperforms LASER on both languages. Both
LASER and LaBSE however perform poorly at zero-shot alignment on Luhya,
achieving just 1.5% and 22.0% successful alignments respectively (P@1 score).
We fine-tune the embeddings on a small set of parallel Luhya sentences and show
significant gains, improving the LaBSE alignment accuracy to 53.3%. Further,
restricting the dataset to sentence embedding pairs with cosine similarity
above 0.7 yielded alignments with over 85% accuracy.
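
A minimal sketch of this zero-shot mining setup with LaBSE is shown below, including the 0.7 cosine-similarity filter; the example sentences are placeholders rather than data from the new Luhya-English set.

```python
# Sketch (sentence-transformers): zero-shot bitext mining with LaBSE using the
# P@1 criterion and the 0.7 cosine threshold mentioned above. The sentences
# are placeholders; the paper mines Luhya-English pairs.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")
src = ["Habari ya asubuhi", "Ninapenda kusoma vitabu"]        # e.g. Swahili
tgt = ["I like reading books", "Good morning", "The sky is blue"]

e_src = model.encode(src, normalize_embeddings=True)
e_tgt = model.encode(tgt, normalize_embeddings=True)
sim = e_src @ e_tgt.T                                          # cosine similarities

best = sim.argmax(axis=1)                                      # P@1 candidate
for i, j in enumerate(best):
    if sim[i, j] > 0.7:                                        # high-precision filter
        print(f"{src[i]!r} <-> {tgt[j]!r} (cos={sim[i, j]:.2f})")
```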
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 18:01:13 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Chimoto",
"Everlyn Asiko",
""
],
[
"Bassett",
"Bruce A.",
""
]
] |
new_dataset
| 0.97216 |
2211.00062
|
Ruhul Amin
|
Afsana Rahman, Dr. Ruhul Amin
|
Technology and COVID-19: How Reliant is Society on Technology?
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Social media and messaging platforms have become a support system for those
in fear of COVID-19 while, at the same time, becoming the root cause of
spreading hate, inaccurate representations, and false realities. As technology
has morphed into a commodity for daily tasks and actions, this article may be
useful for people of all ages and backgrounds who are interested in
understanding the impact of technology on society.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 00:43:59 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Rahman",
"Afsana",
""
],
[
"Amin",
"Dr. Ruhul",
""
]
] |
new_dataset
| 0.95876 |
2211.00067
|
Ravi Yellavajjala
|
Braxton Rolle, and Ravi Kiran
|
COVID-19 Infection Exposure to Customers Shopping during Black Friday
|
22 pages, 11 tables, and 8 figures
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The outbreak of COVID-19 within the last two years has resulted in much
further investigation into the safety of large events that involve a gathering
of people. This study aims to investigate how COVID-19 can spread through a
large crowd of people shopping in a store with no safety precautions taken. The
event being investigated is Black Friday, where hundreds or thousands of
customers flood stores to hopefully receive the best deals on popular items. A
mock store was created, separated into several different shopping sections, and
represented using a 2-D grid where each square on the grid represented a 5 feet
by 5 feet area of the mock store. Customers were simulated to enter the store,
shop for certain items, check out, and then leave the store. A percentage of
customers were chosen to be infective when they entered the store, which means
that they could spread infection quantum to other customers. Four hours of time
was simulated with around 6,000 customers being included. The maximum distance
exposure could be spread (2-10 feet), the minimum time of exposure needed
to become infected (2-15 minutes), and the total percentage of customers who
started as infective (1% - 5%) were all changed and their effects on the number
of newly infected customers were measured. It was found that increasing the
maximum exposure distance by 2 feet resulted in between a 20% to 250% increase
in newly infected customers, depending on the distances being used. It was also
found that increasing the percentage of customers who started as infective from
1% to 2% and then to 5% resulted in a 200% to 300% increase in newly infected
customers.
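
A stripped-down sketch of such a grid simulation follows; the store size, random-walk mobility model, and one-minute time step are assumptions, and the real study's customer flow and shopping behavior are richer.

```python
# Sketch (NumPy): a stripped-down version of the grid simulation described
# above. Customers do a random walk on the store grid; a susceptible customer
# who spends MIN_EXPOSURE minutes within MAX_DIST of any infective customer
# becomes infected. Store size, walk model, and time step are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, STEPS = 200, 240                   # customers, 1-minute steps (4 hours)
MAX_DIST, MIN_EXPOSURE = 2.0, 5       # grid cells (5 ft each), minutes
GRID = 40                             # 40x40 cells = 200 ft x 200 ft store

pos = rng.integers(0, GRID, size=(N, 2)).astype(float)
infective = rng.random(N) < 0.02      # 2% enter infective
exposure = np.zeros(N)

for _ in range(STEPS):
    pos = np.clip(pos + rng.integers(-1, 2, size=(N, 2)), 0, GRID - 1)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    near = (d <= MAX_DIST) & infective[None, :]
    exposure += near.any(axis=1) & ~infective   # a minute of exposure

print("newly infected:", int(((exposure >= MIN_EXPOSURE) & ~infective).sum()))
```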
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 20:11:30 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Rolle",
"Braxton",
""
],
[
"Kiran",
"Ravi",
""
]
] |
new_dataset
| 0.974623 |
2211.00074
|
Md Sakib Ullah Sourav
|
A.T.M Mustafa Masud Chowdhury, Jeenat Sultana, Md Sakib Ullah Sourav
|
IoT-based Efficient Streetlight Controlling, Monitoring and Real-time
Error Detection System in Major Bangladeshi Cities
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A huge wastage of electricity can be seen in Bangladesh due to improper
street light management which leads to an enormous financial loss every year.
Many noteworthy works have been done by researchers from different parts of the
world in tackling this issue by using the Internet of Things yet very few in
Bangladeshi perspective. In this work, we propose an efficient Internet of
Things-based integrated streetlight framework that offers cloud-powered
monitoring, control through light dimming according to external lighting
conditions and traffic detection, and a fault-detecting system to ensure
low power and electricity consumption. We analyzed data from Dhaka North and
South City Corporation, Narayanganj City Corporation, and Chattogram City
Corporation where our proposed model demonstrates a reduction in energy cost of
up to approximately 60 percent relative to the existing system.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 17:21:59 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Chowdhury",
"A. T. M Mustafa Masud",
""
],
[
"Sultana",
"Jeenat",
""
],
[
"Sourav",
"Md Sakib Ullah",
""
]
] |
new_dataset
| 0.960998 |
2211.00083
|
Kunal Chawla
|
Raj Sanjay Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du,
Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, Diyi Yang
|
WHEN FLUE MEETS FLANG: Benchmarks and Large Pre-trained Language Model
for Financial Domain
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Pre-trained language models have shown impressive performance on a variety of
tasks and domains. Previous research on financial language models usually
employs a generic training scheme to train standard model architectures,
without completely leveraging the richness of the financial data. We propose a
novel domain specific Financial LANGuage model (FLANG) which uses financial
keywords and phrases for better masking, together with a span boundary objective
and an in-filling objective. Additionally, the evaluation benchmarks in the field
have been limited. To this end, we contribute the Financial Language
Understanding Evaluation (FLUE), an open-source comprehensive suite of
benchmarks for the financial domain. These include new benchmarks across 5 NLP
tasks in financial domain as well as common benchmarks used in the previous
research. Experiments on these benchmarks suggest that our model outperforms
those in prior literature on a variety of NLP tasks. Our models, code and
benchmark data are publicly available on Github and Huggingface.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 18:35:18 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Shah",
"Raj Sanjay",
""
],
[
"Chawla",
"Kunal",
""
],
[
"Eidnani",
"Dheeraj",
""
],
[
"Shah",
"Agam",
""
],
[
"Du",
"Wendi",
""
],
[
"Chava",
"Sudheer",
""
],
[
"Raman",
"Natraj",
""
],
[
"Smiley",
"Charese",
""
],
[
"Chen",
"Jiaao",
""
],
[
"Yang",
"Diyi",
""
]
] |
new_dataset
| 0.999649 |
2211.00091
|
Vung Pham
|
Vung Pham, Du Nguyen, Christopher Donan
|
Road Damages Detection and Classification with YOLOv7
|
8 pages, 5 tables, 9 figures, 17 references
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Maintaining the roadway infrastructure is one of the essential factors in
enabling a safe, economic, and sustainable transportation system. Manual
roadway damage data collection is laborious and unsafe for humans to perform.
This area is poised to benefit from the rapid advance and diffusion of
artificial intelligence technologies. Specifically, deep learning advancements
enable the detection of road damages automatically from the collected road
images. This work proposes to collect and label road damage data using Google
Street View and use YOLOv7 (You Only Look Once version 7) together with
coordinate attention and related accuracy fine-tuning techniques such as label
smoothing and ensemble method to train deep learning models for automatic road
damage detection and classification. The proposed approaches are applied to the
Crowdsensing-based Road Damage Detection Challenge (CRDDC2022), IEEE BigData
2022. The results show that the data collection from Google Street View is
efficient, and the proposed deep learning approach results in F1 scores of
81.7% on the road damage data collected from the United States using Google
Street View and 74.1% on all test images of this dataset.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 18:55:58 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Pham",
"Vung",
""
],
[
"Nguyen",
"Du",
""
],
[
"Donan",
"Christopher",
""
]
] |
new_dataset
| 0.995986 |
2211.00099
|
Shangchen Han
|
Shangchen Han, Po-chen Wu, Yubo Zhang, Beibei Liu, Linguang Zhang,
Zheng Wang, Weiguang Si, Peizhao Zhang, Yujun Cai, Tomas Hodan, Randi
Cabezas, Luan Tran, Muzaffer Akbay, Tsz-Ho Yu, Cem Keskin, Robert Wang
|
UmeTrack: Unified multi-view end-to-end hand tracking for VR
|
SIGGRAPH Asia 2022 Conference Papers, 8 pages
| null |
10.1145/3550469.3555378
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Real-time tracking of 3D hand pose in world space is a challenging problem
and plays an important role in VR interaction. Existing works in this space are
limited to either producing root-relative (versus world space) 3D pose or
relying on multiple stages such as generating heatmaps and kinematic
optimization to obtain 3D pose. Moreover, the typical VR scenario, which
involves multi-view tracking from wide field-of-view (FOV) cameras, is seldom
addressed by these methods. In
this paper, we present a unified end-to-end differentiable framework for
multi-view, multi-frame hand tracking that directly predicts 3D hand pose in
world space. We demonstrate the benefits of end-to-end differentiability by
extending our framework with downstream tasks such as jitter reduction and
pinch prediction. To demonstrate the efficacy of our model, we further present
a new large-scale egocentric hand pose dataset that consists of both real and
synthetic data. Experiments show that our system trained on this dataset
handles various challenging interactive motions, and has been successfully
applied to real-time VR applications.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 19:09:21 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Han",
"Shangchen",
""
],
[
"Wu",
"Po-chen",
""
],
[
"Zhang",
"Yubo",
""
],
[
"Liu",
"Beibei",
""
],
[
"Zhang",
"Linguang",
""
],
[
"Wang",
"Zheng",
""
],
[
"Si",
"Weiguang",
""
],
[
"Zhang",
"Peizhao",
""
],
[
"Cai",
"Yujun",
""
],
[
"Hodan",
"Tomas",
""
],
[
"Cabezas",
"Randi",
""
],
[
"Tran",
"Luan",
""
],
[
"Akbay",
"Muzaffer",
""
],
[
"Yu",
"Tsz-Ho",
""
],
[
"Keskin",
"Cem",
""
],
[
"Wang",
"Robert",
""
]
] |
new_dataset
| 0.999243 |
2211.00110
|
Th\'eo Morales
|
Th\'eo Morales and Gerard Lacey
|
A new benchmark for group distribution shifts in hand grasp regression
for object manipulation. Can meta-learning raise the bar?
|
Workshop on Distribution Shifts, 36th Conference on Neural
Information Processing Systems (NeurIPS 2022)
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding hand-object pose with computer vision opens the door to new
applications in mixed reality, assisted living or human-robot interaction. Most
methods are trained and evaluated on balanced datasets. This is of limited use
in real-world applications; how do these methods perform in the wild on unknown
objects? We propose a novel benchmark for object group distribution shifts in
hand and object pose regression. We then test the hypothesis that meta-learning
a baseline pose regression neural network can adapt to these shifts and
generalize better to unknown objects. Our results show measurable improvements
over the baseline, depending on the amount of prior knowledge. For the task of
joint hand-object pose regression, we observe optimization interference for the
meta-learner. To address this issue and improve the method further, we provide
a comprehensive analysis which should serve as a basis for future work on this
benchmark.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 19:32:14 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Morales",
"Théo",
""
],
[
"Lacey",
"Gerard",
""
]
] |
new_dataset
| 0.972319 |
2211.00142
|
Jan A. Botha
|
Sebastian Gehrmann, Sebastian Ruder, Vitaly Nikolaev, Jan A. Botha,
Michael Chavinda, Ankur Parikh, Clara Rivera
|
TaTa: A Multilingual Table-to-Text Dataset for African Languages
|
24 pages, 6 figures
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Existing data-to-text generation datasets are mostly limited to English. To
address this lack of data, we create Table-to-Text in African languages (TaTa),
the first large multilingual table-to-text dataset with a focus on African
languages. We created TaTa by transcribing figures and accompanying text in
bilingual reports by the Demographic and Health Surveys Program, followed by
professional translation to make the dataset fully parallel. TaTa includes
8,700 examples in nine languages including four African languages (Hausa, Igbo,
Swahili, and Yor\`ub\'a) and a zero-shot test language (Russian). We
additionally release screenshots of the original figures for future research on
multilingual multi-modal approaches. Through an in-depth human evaluation, we
show that TaTa is challenging for current models and that less than half the
outputs from an mT5-XXL-based model are understandable and attributable to the
source data. We further demonstrate that existing metrics perform poorly for
TaTa and introduce learned metrics that achieve a high correlation with human
judgments. We release all data and annotations at
https://github.com/google-research/url-nlp.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 21:05:42 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Gehrmann",
"Sebastian",
""
],
[
"Ruder",
"Sebastian",
""
],
[
"Nikolaev",
"Vitaly",
""
],
[
"Botha",
"Jan A.",
""
],
[
"Chavinda",
"Michael",
""
],
[
"Parikh",
"Ankur",
""
],
[
"Rivera",
"Clara",
""
]
] |
new_dataset
| 0.999895 |
2211.00198
|
Bernd Pfrommer
|
Bernd Pfrommer
|
Frequency Cam: Imaging Periodic Signals in Real-Time
|
13 pages, 16 figures, one table
| null | null | null |
cs.CV cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Due to their high temporal resolution and large dynamic range, event cameras
are uniquely suited for the analysis of time-periodic signals in an image. In
this work we present an efficient and fully asynchronous event camera algorithm
for detecting the fundamental frequency at which image pixels flicker. The
algorithm employs a second-order digital infinite impulse response (IIR) filter
to perform an approximate per-pixel brightness reconstruction and is more
robust to high-frequency noise than the baseline method we compare to. We
further demonstrate that using the falling edge of the signal leads to more
accurate period estimates than the rising edge, and that for certain signals
interpolating the zero-level crossings can further increase accuracy. Our
experiments find that the outstanding capabilities of the camera in detecting
frequencies up to 64kHz for a single pixel do not carry over to full sensor
imaging as readout bandwidth limitations become a serious obstacle. This
suggests that a hardware implementation closer to the sensor will allow for
greatly improved frequency imaging. We discuss the important design parameters
for full-sensor frequency imaging and present Frequency Cam, an open-source
implementation as a ROS node that can run on a single core of a laptop CPU at
more than 50 million events per second. It produces results that are
qualitatively very similar to those obtained from the closed source vibration
analysis module in Prophesee's Metavision Toolkit. The code for Frequency Cam
and a demonstration video can be found at
https://github.com/berndpfrommer/frequency_cam
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 00:08:35 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Pfrommer",
"Bernd",
""
]
] |
new_dataset
| 0.995542 |
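
A toy per-pixel version of the pipeline above helps fix ideas: a crude
brightness proxy from signed event polarities, a generic second-order IIR
section (placeholder coefficients, not the paper's reconstruction filter), and
a period estimate from linearly interpolated falling zero crossings.

import numpy as np

def estimate_period(times, polarities, b=(0.2, 0.4, 0.2), a=(1.0, -0.6, 0.2)):
    """Toy pipeline: cumulative polarity -> 2nd-order IIR -> median period
    between falling zero crossings (linearly interpolated)."""
    x = np.cumsum(polarities).astype(float)      # crude brightness proxy
    y = np.zeros_like(x)
    for n in range(len(x)):                      # direct-form IIR, stable poles
        y[n] = (b[0] * x[n]
                + (b[1] * x[n - 1] if n >= 1 else 0.0)
                + (b[2] * x[n - 2] if n >= 2 else 0.0)
                - (a[1] * y[n - 1] if n >= 1 else 0.0)
                - (a[2] * y[n - 2] if n >= 2 else 0.0))
    y -= y.mean()
    crossings = []
    for n in range(1, len(y)):                   # falling edge: + to -
        if y[n - 1] > 0.0 >= y[n]:
            frac = y[n - 1] / (y[n - 1] - y[n])
            crossings.append(times[n - 1] + frac * (times[n] - times[n - 1]))
    periods = np.diff(crossings)
    return float(np.median(periods)) if len(periods) else None

if __name__ == "__main__":
    t = np.linspace(0.0, 0.1, 4000)
    pol = np.where(np.sin(2 * np.pi * 100.0 * t) >= 0.0, 1, -1)  # 100 Hz flicker
    print(estimate_period(t, pol))               # ~0.01 s, i.e. 100 Hz
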
2211.00295
|
Abhilasha Ravichander
|
Abhilasha Ravichander, Matt Gardner, Ana Marasovi\'c
|
CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about
Negation
|
EMNLP 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The full power of human language-based communication cannot be realized
without negation. All human languages have some form of negation. Despite this,
negation remains a challenging phenomenon for current natural language
understanding systems. To facilitate the future development of models that can
process negation effectively, we present CONDAQA, the first English reading
comprehension dataset which requires reasoning about the implications of
negated statements in paragraphs. We collect paragraphs with diverse negation
cues, then have crowdworkers ask questions about the implications of the
negated statement in the passage. We also have workers make three kinds of
edits to the passage -- paraphrasing the negated statement, changing the scope
of the negation, and reversing the negation -- resulting in clusters of
question-answer pairs that are difficult for models to answer with spurious
shortcuts. CONDAQA features 14,182 question-answer pairs with over 200 unique
negation cues and is challenging for current state-of-the-art models. The best
performing model on CONDAQA (UnifiedQA-v2-3b) achieves only 42% on our
consistency metric, well below human performance which is 81%. We release our
dataset, along with fully-finetuned, few-shot, and zero-shot evaluations, to
facilitate the development of future NLP methods that work on negated language.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 06:10:26 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Ravichander",
"Abhilasha",
""
],
[
"Gardner",
"Matt",
""
],
[
"Marasović",
"Ana",
""
]
] |
new_dataset
| 0.99963 |
2211.00298
|
Denis Krotov
|
Minjia Shi, Denis S. Krotov, Ferruh \"Ozbudak
|
Constructing MRD codes by switching
| null | null | null | null |
cs.IT cs.DM math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MRD codes are maximum codes in the rank-distance metric space on $m$-by-$n$
matrices over the finite field of order $q$. They are diameter perfect and have
the cardinality $q^{m(n-d+1)}$ if $m\ge n$. We define switching in MRD codes as
replacing special MRD subcodes by other subcodes with the same parameters. We
consider constructions of MRD codes admitting such switching, including
punctured twisted Gabidulin codes and direct-product codes. Using switching, we
construct a huge class of MRD codes whose cardinality grows doubly
exponentially in $m$ if the other parameters ($n$, $q$, the code distance) are
fixed. Moreover, we construct MRD codes with different affine ranks and
aperiodic MRD codes.
Keywords: MRD codes, rank distance, bilinear forms graph, switching, diameter
perfect codes
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 06:26:19 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Shi",
"Minjia",
""
],
[
"Krotov",
"Denis S.",
""
],
[
"Özbudak",
"Ferruh",
""
]
] |
new_dataset
| 0.985303 |
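
For readers new to the rank metric: the distance between two $m$-by-$n$
matrices over GF$(q)$ is the rank of their difference. A self-contained Python
sketch for prime $q$ follows (Gaussian elimination with inverses via Fermat's
little theorem; extension fields would need polynomial arithmetic):

def rank_gfp(M, p):
    """Rank of an integer matrix over GF(p), p prime, by Gaussian elimination."""
    A = [[x % p for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)             # pivot inverse (Fermat)
        A[r] = [(v * inv) % p for v in A[r]]
        for i in range(rows):
            if i != r and A[i][c]:
                f = A[i][c]
                A[i] = [(x - f * y) % p for x, y in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return r

def rank_distance(A, B, p):
    """d_R(A, B) = rank(A - B) over GF(p)."""
    D = [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]
    return rank_gfp(D, p)

if __name__ == "__main__":
    I2 = [[1, 0], [0, 1]]
    print(rank_distance(I2, [[0, 0], [0, 0]], p=3))  # 2: full-rank difference
    print(rank_distance(I2, [[0, 1], [1, 0]], p=2))  # 1: difference is all-ones
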
2211.00360
|
Ayse Yilmazer-Metin
|
Ayse Yilmazer-Metin
|
sRSP: GPUlarda Asimetrik Senkronizasyon Icin Yeni Olceklenebilir Bir
Cozum
|
in Turkish language
| null | null | null |
cs.DC cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Asymmetric sharing is a dynamic sharing model where shared data is heavily
accessed by a (local) sharer and rarely accessed by other (remote) sharers. On
GPUs, without special support, asymmetric sharing requires heavyweight
synchronization on every access. With the introduction of Remote Scope
Promotion (RSP), the local sharer is allowed access with lightweight
synchronization, while heavyweight synchronization is only used for remote
accesses where it is rarely needed. RSP ensures data consistency by promoting
local synchronizations on remote accesses. Unfortunately, the first
implementation of RSP is not a scalable solution. We offer a more efficient and
scalable RSP implementation. This new design, which we call sRSP, is based on
the monitoring of the local sharer and the selective execution of heavyweight
synchronization operations. We evaluated sRSP with the timing-detailed
gem5-APU simulator, and the results show that sRSP improves performance by an
average of 29 percent on a GPU with 64 Compute Units.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 10:13:36 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Yilmazer-Metin",
"Ayse",
""
]
] |
new_dataset
| 0.999643 |
2211.00387
|
Larissa Shimomura
|
Larissa C. Shimomura, Nikolay Yakovets, George Fletcher
|
Reasoning on Property Graphs with Graph Generating Dependencies
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Graph Generating Dependencies (GGDs) informally express constraints between
two (possibly different) graph patterns which enforce relationships on both the
graph's data (via property value constraints) and its structure (via
topological constraints). GGDs can express
tuple- and equality-generating dependencies on property graphs, both of which
find broad application in graph data management. In this paper, we discuss the
reasoning behind GGDs. We propose algorithms to solve the satisfiability,
implication, and validation problems for GGDs and analyze their complexity. To
demonstrate the practical use of GGDs, we propose an algorithm which finds
inconsistencies in data through validation of GGDs. Our experiments show that
even though the validation of GGDs has high computational complexity, GGDs can
be used to find data inconsistencies in a feasible execution time on both
synthetic and real-world data.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 11:15:41 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Shimomura",
"Larissa C.",
""
],
[
"Yakovets",
"Nikolay",
""
],
[
"Fletcher",
"George",
""
]
] |
new_dataset
| 0.992322 |
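
The match-then-verify structure of GGD validation can be illustrated with a
toy rule in Python using networkx: every "person" with a "works_at" edge must
also have a "lives_in" edge. Real GGDs pair full source and target graph
patterns with property constraints; the node kinds, edge labels, and the rule
itself here are hypothetical.

import networkx as nx

def validate_rule(g: nx.DiGraph):
    """Return nodes matching the source side of the toy rule but violating
    the target side (no outgoing lives_in edge)."""
    violations = []
    for u, _, label in g.edges(data="label"):
        if label == "works_at" and g.nodes[u].get("kind") == "person":
            if not any(l == "lives_in"
                       for _, _, l in g.out_edges(u, data="label")):
                violations.append(u)
    return violations

if __name__ == "__main__":
    g = nx.DiGraph()
    g.add_node("alice", kind="person")
    g.add_node("acme", kind="company")
    g.add_edge("alice", "acme", label="works_at")
    print(validate_rule(g))  # ['alice'] -- inconsistent with the rule
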
2211.00426
|
Ran Xiaoqiong
|
Xiaoqiong Ran and Rong Luo
|
Two classes of subfield codes of linear codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, subfield codes of linear codes over GF$(q)$ with good parameters
were studied, and many optimal subfield codes were obtained. In this paper, our
main motivation is to generalize the results on the subfield codes of the
hyperoval in Ding and Heng (Finite Fields Their Appl. 56, 308-331 (2019)), and
to generalize the results on the two families of subfield codes in Xiang and
Yin (Cryptogr. Commun. 13(1), 117-127 (2021)) to the $p$-ary case where $p$ is
odd. We obtain the parameters and weight distributions of these subfield codes.
At the same time, the parameters of their dual codes are also studied. When
$m=1$, the dual codes of these subfield codes are almost MDS codes; when $m>1$
and $p$ is odd, these dual codes are dimension-optimal with respect to the
sphere-packing bound.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 12:42:04 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Ran",
"Xiaoqiong",
""
],
[
"Luo",
"Rong",
""
]
] |
new_dataset
| 0.990041 |
2211.00448
|
Joon Son Chung
|
Youngjoon Jang, Youngtaek Oh, Jae Won Cho, Dong-Jin Kim, Joon Son
Chung, In So Kweon
|
Signing Outside the Studio: Benchmarking Background Robustness for
Continuous Sign Language Recognition
|
Our dataset is available at
https://github.com/art-jang/Signing-Outside-the-Studio
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of this work is background-robust continuous sign language
recognition. Most existing Continuous Sign Language Recognition (CSLR)
benchmarks have fixed backgrounds and are filmed in studios with a static
monochromatic background. However, signing is not limited only to studios in
the real world. In order to analyze the robustness of CSLR models under
background shifts, we first evaluate existing state-of-the-art CSLR models on
diverse backgrounds. To synthesize the sign videos with a variety of
backgrounds, we propose a pipeline to automatically generate a benchmark
dataset utilizing existing CSLR benchmarks. Our newly constructed benchmark
dataset consists of diverse scenes to simulate a real-world environment. We
observe that even the most recent CSLR method cannot recognize glosses well on our
new dataset with changed backgrounds. In this regard, we also propose a simple
yet effective training scheme including (1) background randomization and (2)
feature disentanglement for CSLR models. The experimental results on our
dataset demonstrate that our method generalizes well to other unseen background
data with minimal additional training images.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 13:27:44 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Jang",
"Youngjoon",
""
],
[
"Oh",
"Youngtaek",
""
],
[
"Cho",
"Jae Won",
""
],
[
"Kim",
"Dong-Jin",
""
],
[
"Chung",
"Joon Son",
""
],
[
"Kweon",
"In So",
""
]
] |
new_dataset
| 0.998627 |
2211.00513
|
Keisuke Toyama
|
Keisuke Toyama, Katsuhito Sudoh, Satoshi Nakamura
|
E2E Refined Dataset
|
4 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Although the well-known MR-to-text E2E dataset has been used by many
researchers, its MR-text pairs include many deletion/insertion/substitution
errors. Since such errors affect the quality of MR-to-text systems, they must
be fixed as much as possible. Therefore, we developed a refined dataset and
some python programs that convert the original E2E dataset into a refined
dataset.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 15:01:20 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Toyama",
"Keisuke",
""
],
[
"Sudoh",
"Katsuhito",
""
],
[
"Nakamura",
"Satoshi",
""
]
] |
new_dataset
| 0.999714 |
2211.00549
|
Jose Vargas-Quiros
|
Jose Vargas-Quiros, Laura Cabrera-Quiros, Hayley Hung
|
No-audio speaking status detection in crowded settings via visual
pose-based filtering and wearable acceleration
| null | null | null | null |
cs.CV cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing who is speaking in a crowded scene is a key challenge towards the
understanding of the social interactions going on within. Detecting speaking
status from body movement alone opens the door for the analysis of social
scenes in which personal audio is not obtainable. Video and wearable sensors
make it possible to recognize speaking in an unobtrusive, privacy-preserving way.
When considering the video modality, in action recognition problems, a bounding
box is traditionally used to localize and segment out the target subject, to
then recognize the action taking place within it. However, cross-contamination,
occlusion, and the articulated nature of the human body, make this approach
challenging in a crowded scene. Here, we leverage articulated body poses for
subject localization and in the subsequent speech detection stage. We show that
the selection of local features around pose keypoints has a positive effect on
generalization performance while also significantly reducing the number of
local features considered, making for a more efficient method. Using two
in-the-wild datasets with different viewpoints of subjects, we investigate the
role of cross-contamination in this effect. We additionally make use of
acceleration measured through wearable sensors for the same task, and present a
multimodal approach combining both methods.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 15:55:48 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Vargas-Quiros",
"Jose",
""
],
[
"Cabrera-Quiros",
"Laura",
""
],
[
"Hung",
"Hayley",
""
]
] |
new_dataset
| 0.9909 |
2211.00555
|
John Paul Miranda
|
John Paul P. Miranda, Julieta M. Umali, Aileen P. de Leon
|
Datasets of Fire and Crime Incidents in Pampanga, Philippines
|
10 pages, 10 citations, 5 figures, 1 table, journal article,
peer-reviewed
|
International Journal of Computing Sciences Research, 2022
|
10.25147/ijcsr.2017.001.1.121
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The fire and crime incident datasets were requested and collected from two
Philippine regional agencies (i.e., the Bureau of Fire Protection and the
Philippine National Police). The datasets were used to initially analyze and
map both fire and crime incidents within the province of Pampanga for a
specific time frame. Several data preparation, normalization, and data cleaning
steps were implemented to properly map and identify patterns within the
datasets. The initial results indicate that the leading cause of fires is
rubbish, while the leading crimes are acts against property. Fires mostly occur
during the dry season in the province. Crime is particularly high during
December, and most of the fire and crime incidents occur during the time when
people are most active. The dataset was able to present the temporal
characteristics of the fire and crime incidents that occurred in the province
of Pampanga. Future work may merge this dataset with datasets from other
related agencies to get a bigger picture and produce more objective results
that could be used for decision-making.
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 16:06:55 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Miranda",
"John Paul P.",
""
],
[
"Umali",
"Julieta M.",
""
],
[
"de Leon",
"Aileen P.",
""
]
] |
new_dataset
| 0.99965 |
2211.00596
|
Ernesto Gomez
|
Ernesto Gomez (1), Keith E. Schubert (2), and Khalil Dajani (1) ((1)
California State University San Bernardino, School of Computer Science and
Engineering, (2) Baylor University, Department of Electrical and Computer
Engineering)
|
Algebra of N-event synchronization
|
9 pages, 2 figures
| null | null | null |
cs.DC cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
We have previously defined synchronization (Gomez, E. and K. Schubert 2011)
as a relation between the times at which a pair of events can happen, and
introduced an algebra that covers all possible relations for such pairs. In
this work we introduce the synchronization matrix, to make it easier to
calculate the properties and results of $N$ event synchronizations, such as are
commonly encountered in parallel execution of multiple processes. The
synchronization matrix leads to the definition of N-event synchronization
algebras as specific extensions to the original algebra. We derive general
properties of such synchronization, and we are able to analyze effects of
synchronization on the phase space of parallel execution introduced in (Gomez
E, Kai R, Schubert KE 2017).
|
[
{
"version": "v1",
"created": "Tue, 1 Nov 2022 17:09:58 GMT"
}
] | 2022-11-02T00:00:00 |
[
[
"Gomez",
"Ernesto",
""
],
[
"Schubert",
"Keith E.",
""
],
[
"Dajani",
"Khalil",
""
]
] |
new_dataset
| 0.965882 |
2009.02041
|
Woo-Ri Ko
|
Woo-Ri Ko, Minsu Jang, Jaeyeon Lee and Jaehong Kim
|
AIR-Act2Act: Human-human interaction dataset for teaching non-verbal
social behaviors to robots
|
6 pages, 6 figures, 2 tables, submitted to the International Journal
of Robotics Research (IJRR)
|
INT J ROBOT RES 40.4-5 (2021) 691-697
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To better interact with users, a social robot should understand the users'
behavior, infer the intention, and respond appropriately. Machine learning is
one way of implementing robot intelligence. It provides the ability to
automatically learn and improve from experience instead of explicitly telling
the robot what to do. Social skills can also be learned through watching
human-human interaction videos. However, human-human interaction datasets are
relatively scarce for learning interactions that occur in various situations.
Moreover, we aim to use service robots in the elderly-care domain; however,
there has been no interaction dataset collected for this domain. For this
reason, we introduce a human-human interaction dataset for teaching non-verbal
social behaviors to robots. It is the only interaction dataset that elderly
people have participated in as performers. We recruited 100 elderly people and
two college students to perform 10 interactions in an indoor environment. The
entire dataset has 5,000 interaction samples, each of which contains depth
maps, body indexes and 3D skeletal data that are captured with three Microsoft
Kinect v2 cameras. In addition, we provide the joint angles of a humanoid NAO
robot which are converted from the human behavior that robots need to learn.
The dataset and useful python scripts are available for download at
https://github.com/ai4r/AIR-Act2Act. It can be used to not only teach social
skills to robots but also benchmark action recognition algorithms.
|
[
{
"version": "v1",
"created": "Fri, 4 Sep 2020 07:48:04 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Ko",
"Woo-Ri",
""
],
[
"Jang",
"Minsu",
""
],
[
"Lee",
"Jaeyeon",
""
],
[
"Kim",
"Jaehong",
""
]
] |
new_dataset
| 0.999167 |
2012.15691
|
Meng Cao
|
Meng Cao
|
Quantum error-correcting codes from matrix-product codes related to
quasi-orthogonal and quasi-unitary matrices
| null | null | null | null |
cs.IT math.IT quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Matrix-product codes over finite fields are an important class of long linear
codes by combining several commensurate shorter linear codes with a defining
matrix over finite fields. The construction of matrix-product codes with
certain self-orthogonality over finite fields is an effective way to obtain
good $q$-ary quantum codes of large length. This article has two purposes: the
first is to summarize some results on this topic obtained by the author of this
article and his collaborators in [10-12]; the second is to add some new results
on quasi-orthogonal matrices (resp. quasi-unitary matrices), Euclidean
dual-containing (resp. Hermitian dual-containing) matrix-product codes and
$q$-ary quantum codes derived from these matrix-product codes.
|
[
{
"version": "v1",
"created": "Thu, 31 Dec 2020 16:17:37 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Oct 2022 16:33:30 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Cao",
"Meng",
""
]
] |
new_dataset
| 0.999568 |
2108.03517
|
Evan Yao
|
Negin Golrezaei and Evan Yao
|
Upfront Commitment in Online Resource Allocation with Patient Customers
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In many on-demand online platforms such as ride-sharing, grocery delivery, or
shipping, some arriving agents are patient and willing to wait a short amount
of time for the resource or service as long as there is an upfront guarantee
that service will be ultimately provided within a certain delay. Motivated by
this, we present a setting with patient and impatient agents who seek a
resource or service that replenishes periodically. Impatient agents demand the
resource immediately upon arrival while patient agents are willing to wait a
short period conditioned on an upfront commitment to receive the resource. We
study this setting under adversarial arrival models using a relaxed notion of
competitive ratio. We present a class of POLYtope-based Resource Allocation
(POLYRA) algorithms that achieve optimal or near-optimal competitive ratios.
Such POLYRA algorithms work by consulting a particular polytope and only making
decisions that guarantee the algorithm's state remains feasible in this
polytope. When the number of agent types is either two or three, POLYRA
algorithms can obtain the optimal competitive ratio. To design these polytopes,
we construct an upper bound on the competitive ratio of any algorithm, which is
characterized via a linear program (LP) that considers a collection of
overlapping worst-case input sequences. Our designed POLYRA algorithms then
mimic the optimal solution of this upper bound LP via its polytope's
definition, obtaining the optimal competitive ratio. When there are more than
three types, our overlapping worst-case input sequences do not necessarily
result in an attainable competitive ratio, and so we present a class of simple
and interpretable POLYRA algorithm which achieves at least 80% of the optimal
competitive ratio. We complement our theoretical studies with numerical
analysis which shows the efficiency of our algorithms beyond adversarial
arrivals.
|
[
{
"version": "v1",
"created": "Sat, 7 Aug 2021 20:28:00 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Oct 2022 16:21:59 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Golrezaei",
"Negin",
""
],
[
"Yao",
"Evan",
""
]
] |
new_dataset
| 0.99327 |
2109.00210
|
Ze Huang
|
Ze Huang, Li Sun, Cheng Zhao, Song Li, Songzhi Su
|
EventPoint: Self-Supervised Interest Point Detection and Description for
Event-based Camera
|
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a self-supervised learned local detector and descriptor,
called EventPoint, for event stream/camera tracking and registration.
Event-based cameras have grown in popularity because of their biological
inspiration and low power consumption. Despite this, applying local features
directly to the event stream is difficult due to its peculiar data structure.
We propose a new time-surface-like event stream representation method called
Tencode. From the event stream data processed by Tencode, the pixel-level
positions of interest points can be obtained while descriptors are
simultaneously extracted through a neural network. Instead of using costly and
unreliable manual
annotation, our network leverages the prior knowledge of local feature
extraction on color images and conducts self-supervised learning via
homographic and spatio-temporal adaptation. To the best of our knowledge, our
proposed method is the first work on event-based local feature learning using
a deep neural network. We provide comprehensive experiments of feature
point detection and matching, and three public datasets are used for evaluation
(i.e. DSEC, N-Caltech101, and HVGA ATIS Corner Dataset). The experimental
findings demonstrate that our method outperforms SOTA in terms of feature point
detection and description.
|
[
{
"version": "v1",
"created": "Wed, 1 Sep 2021 06:58:14 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Sep 2021 11:43:40 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Oct 2022 15:44:37 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Huang",
"Ze",
""
],
[
"Sun",
"Li",
""
],
[
"Zhao",
"Cheng",
""
],
[
"Li",
"Song",
""
],
[
"Su",
"Songzhi",
""
]
] |
new_dataset
| 0.999505 |
2109.05455
|
Gabriel Hartmann
|
Gabriel Hartmann, Zvi Shiller, Amos Azaria
|
Competitive Driving of Autonomous Vehicles
|
12 pages
|
IEEE Access, Volume: 10, Publication Date: 2022, On Pages:
111772-111783
|
10.1109/ACCESS.2022.3215984
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes Ariel Team's autonomous racing controller for the Indy
Autonomous Challenge (IAC) simulation race. IAC is the first multi-vehicle
autonomous head-to-head competition, reaching speeds of 300 km/h along an oval
track, modeled after the Indianapolis Motor Speedway (IMS). Our racing
controller attempts to maximize progress along the track while avoiding
collisions with opponent vehicles and obeying the race rules. To this end, the
racing controller first computes a race line offline. Then, it repeatedly
computes online a small set of dynamically feasible maneuver candidates, each
tested for collision with the opponent vehicles. Finally, it selects the
maneuver that maximizes progress along the track, taking into account the race
line. The maneuver candidates, as well as the predicted trajectories of the
opponent vehicles, are approximated using a point mass model. Despite the
simplicity of this racing controller, it managed to drive competitively and
with no collision with any of the opponent vehicles in the IAC final simulation
race.
|
[
{
"version": "v1",
"created": "Sun, 12 Sep 2021 08:02:48 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Oct 2022 08:46:45 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Hartmann",
"Gabriel",
""
],
[
"Shiller",
"Zvi",
""
],
[
"Azaria",
"Amos",
""
]
] |
new_dataset
| 0.963655 |
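
The generate-candidates, reject-collisions, pick-max-progress loop described
above is easy to picture in miniature. The sketch below rolls out a
constant-velocity point-mass model for the ego vehicle and the opponents,
drops candidates that violate a clearance radius, and keeps the one with the
most forward progress. Offsets, clearance, and speeds are made-up illustration
values, not the team's actual tuning.

import numpy as np

def rollout(p, v, horizon=2.0, dt=0.1):
    """Constant-velocity point-mass prediction: one position per time step."""
    steps = int(horizon / dt)
    return np.array([p + v * (k + 1) * dt for k in range(steps)])

def select_maneuver(ego_p, ego_speed, opponents, offsets=(-3.0, 0.0, 3.0)):
    best_offset, best_progress = None, -np.inf
    opp_paths = [rollout(p, v) for p, v in opponents]
    for off in offsets:                          # one candidate per lateral offset
        path = rollout(ego_p, np.array([ego_speed, off / 2.0]))
        safe = all(np.min(np.linalg.norm(path - op, axis=1)) > 4.0
                   for op in opp_paths)          # 4 m clearance (assumed)
        progress = path[-1, 0]                   # x-advance as progress proxy
        if safe and progress > best_progress:
            best_offset, best_progress = off, progress
    return best_offset                           # None if every candidate collides

if __name__ == "__main__":
    ego = np.array([0.0, 0.0])
    opponents = [(np.array([30.0, 0.0]), np.array([70.0, 0.0]))]  # slower car ahead
    print(select_maneuver(ego, ego_speed=83.0, opponents=opponents))  # picks -3.0
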
2109.15254
|
Mat\'u\v{s} Pikuliak
|
Mat\'u\v{s} Pikuliak, \v{S}tefan Grivalsk\'y, Martin Kon\^opka,
Miroslav Bl\v{s}t\'ak, Martin Tamajka, Viktor Bachrat\'y, Mari\'an \v{S}imko,
Pavol Bal\'a\v{z}ik, Michal Trnka, Filip Uhl\'arik
|
SlovakBERT: Slovak Masked Language Model
|
12 pages, 2 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new Slovak masked language model called SlovakBERT. To the best
of our knowledge, this is the first paper discussing Slovak transformer-based
language models. We evaluate our model on several NLP tasks and achieve
state-of-the-art results. This evaluation is likewise the first attempt to
establish a benchmark for Slovak language models. We publish the masked
language model, as well as the fine-tuned models for part-of-speech tagging,
sentiment analysis and semantic textual similarity.
|
[
{
"version": "v1",
"created": "Thu, 30 Sep 2021 16:36:49 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Oct 2022 19:41:06 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Pikuliak",
"Matúš",
""
],
[
"Grivalský",
"Štefan",
""
],
[
"Konôpka",
"Martin",
""
],
[
"Blšták",
"Miroslav",
""
],
[
"Tamajka",
"Martin",
""
],
[
"Bachratý",
"Viktor",
""
],
[
"Šimko",
"Marián",
""
],
[
"Balážik",
"Pavol",
""
],
[
"Trnka",
"Michal",
""
],
[
"Uhlárik",
"Filip",
""
]
] |
new_dataset
| 0.999108 |
2112.04017
|
Zachary Neal
|
Karl Godard, Zachary P. Neal
|
fastball: A fast algorithm to sample bipartite graphs with fixed degree
sequences
|
Journal of Complex Networks (2022)
| null |
10.1093/comnet/cnac049
| null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Many applications require randomly sampling bipartite graphs with fixed
degrees, or randomly sampling incidence matrices with fixed row and column
sums. Although several sampling algorithms exist, the ``curveball'' algorithm
is the most efficient with an asymptotic time complexity of O(n log n), and has
been proven to sample uniformly at random. In this paper, we introduce the
``fastball'' algorithm, which adopts a similar approach but has an asymptotic
time complexity of O(n). We show that a C++ implementation of fastball randomly
samples large bipartite graphs with fixed degrees faster than curveball, and
illustrate the value of this faster algorithm in the context of the fixed
degree sequence model for backbone extraction.
|
[
{
"version": "v1",
"created": "Tue, 7 Dec 2021 22:05:25 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Feb 2022 16:01:42 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jul 2022 17:21:17 GMT"
},
{
"version": "v4",
"created": "Mon, 29 Aug 2022 20:18:00 GMT"
},
{
"version": "v5",
"created": "Sun, 30 Oct 2022 15:22:30 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Godard",
"Karl",
""
],
[
"Neal",
"Zachary P.",
""
]
] |
new_dataset
| 0.97986 |
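
The trade step shared by curveball and fastball is compact enough to sketch.
Each row of the bipartite graph is a set of column indices; one trade
reshuffles the columns exclusive to two randomly chosen rows, which preserves
every row and column sum. fastball's contribution is a linear-time
implementation of this step; the set-based version below makes no such
complexity claim.

import random

def trade(row_i, row_j, rng):
    """One curveball-style trade between two rows (sets of column indices)."""
    only_i = list(row_i - row_j)                 # columns eligible for swapping
    only_j = list(row_j - row_i)
    pool = only_i + only_j
    rng.shuffle(pool)
    shared = row_i & row_j
    return shared | set(pool[:len(only_i)]), shared | set(pool[len(only_i):])

def sample(rows, n_trades, rng=None):
    rng = rng or random.Random()
    rows = [set(r) for r in rows]
    for _ in range(n_trades):
        i, j = rng.sample(range(len(rows)), 2)
        rows[i], rows[j] = trade(rows[i], rows[j], rng)
    return rows

if __name__ == "__main__":
    start = [{0, 1, 2}, {1, 3}, {0, 3}]
    print(sample(start, n_trades=10, rng=random.Random(1)))
    # row sizes (3, 2, 2) and all column sums are preserved
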
2202.08498
|
Fengze Li
|
Fengze Li, Jieming Ma, Zhongbei Tian, Ji Ge, Hai-Ning Liang, Yungang
Zhang and Tianxi Wen
|
Mirror-Yolo: An attention-based instance segmentation and detection
model for mirrors
| null | null |
10.1109/ICFSP55781.2022.9925001
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mirrors can degrade the performance of computer vision models; however,
accurately detecting mirrors in images remains challenging. YOLOv4 achieves
phenomenal results in both object detection accuracy and speed, nevertheless
the model often fails in detecting mirrors. In this paper, a novel mirror
detection method `Mirror-YOLO' is proposed, which specifically targets mirror
detection. Based on YOLOv4, the proposed model embeds an attention mechanism
for better feature acquisition, and a hypercolumn-stairstep approach for
feature map fusion. Mirror-YOLO can also produce accurate bounding polygons for
instance segmentation. The effectiveness of our proposed model is demonstrated
by our experiments: compared to the existing mirror detection methods, the
proposed Mirror-YOLO achieves better performance in detection accuracy on the
mirror image dataset.
|
[
{
"version": "v1",
"created": "Thu, 17 Feb 2022 08:03:48 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Li",
"Fengze",
""
],
[
"Ma",
"Jieming",
""
],
[
"Tian",
"Zhongbei",
""
],
[
"Ge",
"Ji",
""
],
[
"Liang",
"Hai-Ning",
""
],
[
"Zhang",
"Yungang",
""
],
[
"Wen",
"Tianxi",
""
]
] |
new_dataset
| 0.99814 |
2202.10793
|
Yixuan He
|
Yixuan He, Xitong Zhang, Junjie Huang, Benedek Rozemberczki, Mihai
Cucuringu, Gesine Reinert
|
PyTorch Geometric Signed Directed: A Software Package on Graph Neural
Networks for Signed and Directed Graphs
| null | null | null | null |
cs.LG cs.AI cs.SI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Networks are ubiquitous in many real-world applications (e.g., social
networks encoding trust/distrust relationships, correlation networks arising
from time series data). While many networks are signed or directed, or both,
there is a lack of unified software packages on graph neural networks (GNNs)
specially designed for signed and directed networks. In this paper, we present
PyTorch Geometric Signed Directed (PyGSD), a software package which fills this
gap. Along the way, we also provide a brief review surveying typical tasks,
loss functions and evaluation metrics in the analysis of signed and directed
networks, discuss data used in related experiments, provide an overview of
methods proposed, and evaluate the implemented methods with experiments. The
deep learning framework consists of easy-to-use GNN models, synthetic and
real-world data, as well as task-specific evaluation metrics and loss functions
for signed and directed networks. As an extension library for PyG, our proposed
software is maintained with open-source releases, detailed documentation,
continuous integration, unit tests and code coverage checks. Our code is
publicly available at
\url{https://github.com/SherylHYX/pytorch_geometric_signed_directed}.
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 10:25:59 GMT"
},
{
"version": "v2",
"created": "Sun, 15 May 2022 21:53:10 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Sep 2022 23:11:24 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Oct 2022 17:02:56 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"He",
"Yixuan",
""
],
[
"Zhang",
"Xitong",
""
],
[
"Huang",
"Junjie",
""
],
[
"Rozemberczki",
"Benedek",
""
],
[
"Cucuringu",
"Mihai",
""
],
[
"Reinert",
"Gesine",
""
]
] |
new_dataset
| 0.998728 |
2203.00592
|
Yuxuan Zhao
|
Yuxuan Zhao, Alexandru Uta
|
Tiny Autoscalers for Tiny Workloads: Dynamic CPU Allocation for
Serverless Functions
|
Published in 22nd IEEE International Symposium on Cluster, Cloud and
Internet Computing (CCGrid), 2022
| null |
10.1109/CCGrid54584.2022.00026
| null |
cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In serverless computing, applications are executed under lightweight
virtualization and isolation environments, such as containers or micro virtual
machines. Typically, their memory allocation is set by the user before
deployment. All other resources, such as CPU, are allocated by the provider
statically and proportionally to memory allocations. This contributes to either
under-utilization or throttling. The former significantly impacts the provider,
while the latter impacts the client. To solve this problem and accommodate both
clients and providers, a solution is dynamic CPU allocation achieved through
autoscaling. Autoscaling has been investigated for long-running applications
using history-based techniques and prediction. However, serverless applications
are short-running workloads, where such techniques are not well suited. In this
paper, we investigate tiny autoscalers and how dynamic CPU allocation
techniques perform for short-running serverless workloads. We experiment with
Kubernetes as the underlying platform and implement using its vertical pod
autoscaler several dynamic CPU rightsizing techniques. We compare these
techniques using state-of-the-art serverless workloads. Our experiments show
that dynamic CPU allocation for short-running serverless functions is feasible
and can be achieved with lightweight algorithms that offer good performance.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 16:27:29 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 10:46:22 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Zhao",
"Yuxuan",
""
],
[
"Uta",
"Alexandru",
""
]
] |
new_dataset
| 0.995394 |
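
As a flavor of what a "tiny" rightsizing policy can look like, here is a
minimal exponentially weighted estimator that proposes the next CPU request
with a fixed headroom factor. It is a generic sketch in the spirit of the
lightweight algorithms discussed above, not one of the specific techniques the
authors evaluate; all parameter values are illustrative.

class TinyAutoscaler:
    def __init__(self, alpha=0.5, headroom=1.2, floor=0.05):
        self.alpha = alpha        # weight on the newest usage sample
        self.headroom = headroom  # safety margin over the estimate
        self.floor = floor        # minimum CPU (cores) ever requested
        self.estimate = None

    def next_allocation(self, observed_cpu: float) -> float:
        """Update the usage estimate and return the next CPU request."""
        if self.estimate is None:
            self.estimate = observed_cpu
        else:
            self.estimate = (self.alpha * observed_cpu
                             + (1.0 - self.alpha) * self.estimate)
        return max(self.floor, self.headroom * self.estimate)

if __name__ == "__main__":
    scaler = TinyAutoscaler()
    for usage in [0.10, 0.40, 0.35, 0.05]:       # cores used per invocation
        print(round(scaler.next_allocation(usage), 3))
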
2203.08098
|
Sudeep Dasari
|
Sudeep Dasari, Jianren Wang, Joyce Hong, Shikhar Bahl, Yixin Lin,
Austin Wang, Abitha Thankaraj, Karanbir Chahal, Berk Calli, Saurabh Gupta,
David Held, Lerrel Pinto, Deepak Pathak, Vikash Kumar, Abhinav Gupta
|
RB2: Robotic Manipulation Benchmarking with a Twist
|
accepted at the NeurIPS 2021 Datasets and Benchmarks Track
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Benchmarks offer a scientific way to compare algorithms using objective
performance metrics. Good benchmarks have two features: (a) they should be
widely useful for many research groups; (b) and they should produce
reproducible findings. In robotic manipulation research, there is a trade-off
between reproducibility and broad accessibility. If the benchmark is kept
restrictive (fixed hardware, objects), the numbers are reproducible but the
setup becomes less general. On the other hand, a benchmark could be a loose set
of protocols (e.g. object sets) but the underlying variation in setups make the
results non-reproducible. In this paper, we re-imagine benchmarking for robotic
manipulation as state-of-the-art algorithmic implementations, alongside the
usual set of tasks and experimental protocols. The added baseline
implementations will provide a way to easily recreate SOTA numbers in a new
local robotic setup, thus providing credible relative rankings between existing
approaches and new work. However, these local rankings could vary between
different setups. To resolve this issue, we build a mechanism for pooling
experimental data between labs, and thus we establish a single global ranking
for existing (and proposed) SOTA algorithms. Our benchmark, called
Ranking-Based Robotics Benchmark (RB2), is evaluated on tasks that are inspired
from clinically validated Southampton Hand Assessment Procedures. Our benchmark
was run across two different labs and reveals several surprising findings. For
example, extremely simple baselines like open-loop behavior cloning outperform
more complicated models (e.g. closed loop, RNN, Offline-RL, etc.) that are
preferred by the field. We hope our fellow researchers will use RB2 to improve
their research's quality and rigor.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 17:25:59 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 03:19:06 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Dasari",
"Sudeep",
""
],
[
"Wang",
"Jianren",
""
],
[
"Hong",
"Joyce",
""
],
[
"Bahl",
"Shikhar",
""
],
[
"Lin",
"Yixin",
""
],
[
"Wang",
"Austin",
""
],
[
"Thankaraj",
"Abitha",
""
],
[
"Chahal",
"Karanbir",
""
],
[
"Calli",
"Berk",
""
],
[
"Gupta",
"Saurabh",
""
],
[
"Held",
"David",
""
],
[
"Pinto",
"Lerrel",
""
],
[
"Pathak",
"Deepak",
""
],
[
"Kumar",
"Vikash",
""
],
[
"Gupta",
"Abhinav",
""
]
] |
new_dataset
| 0.998496 |
2203.15065
|
Asaf Karnieli
|
Asaf Karnieli, Ohad Fried, Yacov Hel-Or
|
DeepShadow: Neural Shape from Shadow
|
ECCV 2022. Project page available at
https://asafkar.github.io/deepshadow/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents DeepShadow, a one-shot method for recovering the depth
map and surface normals from photometric stereo shadow maps. Previous works
that try to recover the surface normals from photometric stereo images treat
cast shadows as a disturbance. We show that the self and cast shadows not only
do not disturb 3D reconstruction, but can be used alone, as a strong learning
signal, to recover the depth map and surface normals. We demonstrate that 3D
reconstruction from shadows can even outperform shape-from-shading in certain
cases. To the best of our knowledge, our method is the first to reconstruct 3D
shape-from-shadows using neural networks. The method does not require any
pre-training or expensive labeled data, and is optimized during inference time.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 20:11:15 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Oct 2022 08:38:31 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Karnieli",
"Asaf",
""
],
[
"Fried",
"Ohad",
""
],
[
"Hel-Or",
"Yacov",
""
]
] |
new_dataset
| 0.982879 |
2203.15807
|
Charis Mesaritakis
|
K. Sozos, A. Bogris, P. Bienstman, G. Sarantoglou, S. Deligiannidis,
C. Mesaritakis
|
High Speed Photonic Neuromorphic Computing Using Recurrent Optical
Spectrum Slicing Neural Networks
|
9 pages including supplementary material
|
Nature Communication Engineering 2022
|
10.1038/s44172-022-00024-5
|
Commun Eng 1, 24 (2022)
|
cs.ET physics.comp-ph physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuromorphic Computing implemented in photonic hardware is one of the most
promising routes towards achieving machine learning processing at the
picosecond scale, with minimum power consumption. In this work, we present a
new concept for realizing photonic recurrent neural networks and reservoir
computing architectures with the use of recurrent optical spectrum slicing.
This is accomplished through simple optical filters placed in an loop, where
each filter processes a specific spectral slice of the incoming optical signal.
The synaptic weights in our scheme are equivalent to filters central
frequencies and bandwidths. This new method for implementing recurrent neural
processing in the photonic domain, which we call Recurrent Optical Spectrum
Slicing Neural Networks, is numerically evaluated on a demanding,
industry-relevant task such as high baud rate optical signal equalization 100
Gbaud, exhibiting ground-breaking performance. The performance enhancement
surpasses state-of-the-art digital processing techniques by doubling the reach
while minimizing complexity and power consumption by a factor of 10 compared to
state-of-the-art solutions. In this respect, ROSS-NNs can pave the way for the
implementation of ultra-efficient photonic hardware accelerators tailored for
processing high-bandwidth optical signals in optical communication and
high-speed imaging applications
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 09:13:00 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Sozos",
"K.",
""
],
[
"Bogris",
"A.",
""
],
[
"Bienstman",
"P.",
""
],
[
"Sarantoglou",
"G.",
""
],
[
"Deligiannidis",
"S.",
""
],
[
"Mesaritakis",
"C.",
""
]
] |
new_dataset
| 0.983148 |
2205.07598
|
In-Soo Kim
|
In-soo Kim, Mehdi Bennis, and Junil Choi
|
Cell-Free MmWave Massive MIMO Systems with Low-Capacity Fronthaul Links
and Low-Resolution ADC/DACs
|
to appear in IEEE Transactions on Vehicular Technology
|
IEEE Transactions on Vehicular Technology, vol. 71, no. 10, pp.
10512-10526, Oct. 2022
|
10.1109/TVT.2022.3184172
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider the uplink channel estimation phase and downlink
data transmission phase of cell-free millimeter wave (mmWave) massive
multiple-input multiple-output (MIMO) systems with low-capacity fronthaul links
and low-resolution analog-to-digital converters/digital-to-analog converters
(ADC/DACs). In cell-free massive MIMO, a control unit dictates the baseband
processing at a geographical scale, while the base stations communicate with
the control unit through fronthaul links. Unlike most of previous works in
cell-free massive MIMO with finite-capacity fronthaul links, we consider the
general case where the fronthaul capacity and ADC/DAC resolution are not
necessarily the same. In particular, the fronthaul compression and ADC/DAC
quantization occur independently where each one is modeled based on the
information theoretic argument and additive quantization noise model (AQNM).
Then, we address the codebook design problem that aims to minimize the channel
estimation error for the independent and identically distributed (i.i.d.) and
colored compression noise cases. Also, we propose an alternating optimization
(AO) method to tackle the max-min fairness problem. In essence, the AO method
alternates between two subproblems that correspond to the power allocation and
codebook design problems. The AO method proposed for the zero-forcing (ZF)
precoder is guaranteed to converge, whereas the one for the maximum ratio
transmission (MRT) precoder has no such guarantee. Finally, the performance of
the proposed schemes is evaluated by the simulation results in terms of both
energy and spectral efficiency. The numerical results show that the proposed
scheme for the ZF precoder yields spectral and energy efficiency 28% and 15%
higher than that of the best baseline.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 12:04:17 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jun 2022 08:37:42 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Kim",
"In-soo",
""
],
[
"Bennis",
"Mehdi",
""
],
[
"Choi",
"Junil",
""
]
] |
new_dataset
| 0.995654 |
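
For context, the AQNM mentioned above is the standard linearization that
replaces a $b$-bit quantizer by a scaled input plus uncorrelated additive
noise. In a common form (the constants depend on the resolution and input
statistics and are not taken from this paper):

\begin{align}
  \mathbf{y}_q &= \alpha\,\mathbf{y} + \mathbf{n}_q, \qquad \alpha = 1 - \beta_b, \\
  \mathbf{n}_q &\sim \mathcal{CN}\!\left(\mathbf{0},\;
    \alpha(1-\alpha)\,\mathrm{diag}\!\left(\mathbb{E}\!\left[\mathbf{y}\mathbf{y}^{\mathsf{H}}\right]\right)\right),
\end{align}

where $\beta_b$ is the inverse signal-to-quantization-noise ratio of the
quantizer (for a scalar non-uniform quantizer with Gaussian input,
$\beta_b \approx \frac{\pi\sqrt{3}}{2}\,2^{-2b}$ for large $b$).
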
2206.09682
|
Chejian Xu
|
Chejian Xu, Wenhao Ding, Weijie Lyu, Zuxin Liu, Shuai Wang, Yihan He,
Hanjiang Hu, Ding Zhao, Bo Li
|
SafeBench: A Benchmarking Platform for Safety Evaluation of Autonomous
Vehicles
|
Published as a conference paper at NeurIPS 2022 (Track on Datasets
and Benchmarks)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
As shown by recent studies, machine intelligence-enabled systems are
vulnerable to test cases resulting from either adversarial manipulation or
natural distribution shifts. This has raised great concerns about deploying
machine learning algorithms for real-world applications, especially in
safety-critical domains such as autonomous driving (AD). On the other hand,
traditional AD testing on naturalistic scenarios requires hundreds of millions
of driving miles due to the high dimensionality and rareness of the
safety-critical scenarios in the real world. As a result, several approaches
for autonomous driving evaluation have been explored, which are usually,
however, based on different simulation platforms, types of safety-critical
scenarios, scenario generation algorithms, and driving route variations. Thus,
despite a large amount of effort in autonomous driving testing, it is still
challenging to compare and understand the effectiveness and efficiency of
different testing scenario generation algorithms and testing mechanisms under
similar conditions. In this paper, we aim to provide the first unified platform
SafeBench to integrate different types of safety-critical testing scenarios,
scenario generation algorithms, and other variations such as driving routes and
environments. Meanwhile, we implement 4 deep reinforcement learning-based AD
algorithms with 4 types of input (e.g., bird's-eye view, camera) to perform
fair comparisons on SafeBench. We find our generated testing scenarios are
indeed more challenging and observe the trade-off between the performance of AD
agents under benign and safety-critical testing scenarios. We believe our
unified platform SafeBench for large-scale and effective autonomous driving
testing will motivate the development of new testing scenario generation and
safe AD algorithms. SafeBench is available at https://safebench.github.io.
|
[
{
"version": "v1",
"created": "Mon, 20 Jun 2022 09:50:30 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Jul 2022 01:37:05 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Oct 2022 20:55:17 GMT"
},
{
"version": "v4",
"created": "Sat, 29 Oct 2022 00:32:06 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Xu",
"Chejian",
""
],
[
"Ding",
"Wenhao",
""
],
[
"Lyu",
"Weijie",
""
],
[
"Liu",
"Zuxin",
""
],
[
"Wang",
"Shuai",
""
],
[
"He",
"Yihan",
""
],
[
"Hu",
"Hanjiang",
""
],
[
"Zhao",
"Ding",
""
],
[
"Li",
"Bo",
""
]
] |
new_dataset
| 0.997882 |
2206.10175
|
Xiujuan Zhu
|
Ying Hu, Xiujuan Zhu, Yunlong Li, Hao Huang, and Liang He
|
A Multi-grained based Attention Network for Semi-supervised Sound Event
Detection
| null |
INTERSPEECH 2022
|
10.21437/Interspeech.2022-767
| null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sound event detection (SED) is an interesting but challenging task due to the
scarcity of data and diverse sound events in real life. This paper presents a
multi-grained based attention network (MGA-Net) for semi-supervised sound event
detection. To obtain the feature representations related to sound events, a
residual hybrid convolution (RH-Conv) block is designed to boost the vanilla
convolution's ability to extract the time-frequency features. Moreover, a
multi-grained attention (MGA) module is designed to learn temporal resolution
features from coarse-level to fine-level. With the MGA module, the network could
capture the characteristics of target events with short- or long-duration,
resulting in more accurately determining the onset and offset of sound events.
Furthermore, to effectively boost the performance of the Mean Teacher (MT)
method, a spatial shift (SS) module as a data perturbation mechanism is
introduced to increase the diversity of data. Experimental results show that
the MGA-Net outperforms the published state-of-the-art competitors, achieving
53.27% and 56.96% event-based macro F1 (EB-F1) score, 0.709 and 0.739
polyphonic sound detection score (PSDS) on the validation and public set
respectively.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 08:15:27 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Hu",
"Ying",
""
],
[
"Zhu",
"Xiujuan",
""
],
[
"Li",
"Yunlong",
""
],
[
"Huang",
"Hao",
""
],
[
"He",
"Liang",
""
]
] |
new_dataset
| 0.951454 |
2206.11078
|
Meng-Ju Tsai
|
Meng-Ju Tsai, Zhiyong Cui, Hao Yang, Cole Kopca, Sophie Tien, and
Yinhai Wang
|
Traffic-Twitter Transformer: A Nature Language Processing-joined
Framework For Network-wide Traffic Forecasting
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With accurate and timely traffic forecasting, the impacted traffic conditions
can be predicted in advance to guide agencies and residents to respond to
changes in traffic patterns appropriately. However, existing works on traffic
forecasting mainly relied on historical traffic patterns, confining them to
short-term prediction (e.g., under 1 hour). To better manage future
roadway capacity and accommodate social and human impacts, it is crucial to
propose a flexible and comprehensive framework to predict physical-aware
long-term traffic conditions for public users and transportation agencies. In
this paper, the gap of robust long-term traffic forecasting was bridged by
taking social media features into consideration. A correlation study and a
linear regression model were first implemented to evaluate the significance of
the correlation between two time-series data, traffic intensity and Twitter
data intensity. Two time-series data were then fed into our proposed
social-aware framework, Traffic-Twitter Transformer, which integrated Natural
Language representations into time-series records for long-term traffic
prediction. Experimental results in the Great Seattle Area showed that our
proposed model outperformed baseline models in all evaluation metrics. This
NLP-joined social-aware framework can become a valuable instrument for
network-wide traffic prediction and management for traffic agencies.
|
[
{
"version": "v1",
"created": "Sun, 19 Jun 2022 20:17:15 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2022 04:24:21 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Oct 2022 00:23:37 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Tsai",
"Meng-Ju",
""
],
[
"Cui",
"Zhiyong",
""
],
[
"Yang",
"Hao",
""
],
[
"Kopca",
"Cole",
""
],
[
"Tien",
"Sophie",
""
],
[
"Wang",
"Yinhai",
""
]
] |
new_dataset
| 0.995802 |
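
The correlation-and-regression significance check described above reduces to a
few lines. The sketch below uses synthetic hourly counts with a planted linear
relation; the study itself uses Seattle traffic intensity and Twitter
intensity series.

import numpy as np

def correlate_and_fit(traffic, tweets):
    """Pearson correlation plus an ordinary least-squares line."""
    traffic = np.asarray(traffic, dtype=float)
    tweets = np.asarray(tweets, dtype=float)
    r = np.corrcoef(traffic, tweets)[0, 1]
    slope, intercept = np.polyfit(tweets, traffic, deg=1)
    return r, slope, intercept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tweets = rng.poisson(50, size=168).astype(float)          # one week, hourly
    traffic = 3.0 * tweets + rng.normal(0.0, 10.0, size=168)  # planted relation
    print(correlate_and_fit(traffic, tweets))  # r close to 1, slope close to 3
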
2207.04507
|
Nicole Wein
|
Aaron Bernstein, Nicole Wein
|
Closing the Gap Between Directed Hopsets and Shortcut Sets
|
Abstract shortened to meet arXiv requirements, v2: fixed a typo, v3:
implemented reviewer comments
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For an $n$-vertex directed graph $G = (V,E)$, a $\beta$-\emph{shortcut set} $H$
is a set of additional edges $H \subseteq V \times V$ such that $G \cup H$ has
the same transitive closure as $G$, and for every pair $u,v \in V$, there is a
$uv$-path in $G \cup H$ with at most $\beta$ edges. A natural generalization of
shortcut sets to distances is a $(\beta,\epsilon)$-\emph{hopset} $H \subseteq V
\times V$, where the requirement is that $H$ and $G \cup H$ have the same
shortest-path distances, and for every $u,v \in V$, there is a
$(1+\epsilon)$-approximate shortest path in $G \cup H$ with at most $\beta$
edges.
There is a large literature on the tradeoff between the size of a shortcut
set / hopset and the value of $\beta$. We highlight the most natural point on
this tradeoff: what is the minimum value of $\beta$, such that for any graph
$G$, there exists a $\beta$-shortcut set (or a $(\beta,\epsilon)$-hopset) with
$O(n)$ edges? Not only is this a natural structural question in its own right,
but shortcut sets / hopsets form the core of many distributed, parallel, and
dynamic algorithms for reachability / shortest paths. Until very recently the
best known upper bound was a folklore construction showing $\beta =
O(n^{1/2})$, but in a breakthrough result Kogan and Parter [SODA 2022] improve
this to $\beta = \tilde{O}(n^{1/3})$ for shortcut sets and $\tilde{O}(n^{2/5})$
for hopsets.
Our main result closes the gap between shortcut sets and hopsets. That is, we
show that for any graph $G$ and any fixed $\epsilon$ there is a
$(\tilde{O}(n^{1/3}),\epsilon)$ hopset with $O(n)$ edges. More generally, we
achieve a smooth tradeoff between hopset size and $\beta$ which exactly matches
the tradeoff of Kogan and Parter for shortcut sets (up to polylog factors).
Using a very recent black-box reduction of Kogan and Parter, our new hopset
implies improved bounds for approximate distance preservers.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2022 17:14:01 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2022 14:25:14 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Oct 2022 03:33:15 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Bernstein",
"Aaron",
""
],
[
"Wein",
"Nicole",
""
]
] |
new_dataset
| 0.979221 |
2207.09927
|
Vasileios Mezaris
|
Nikolaos Gkalelis, Dimitrios Daskalakis, Vasileios Mezaris
|
ViGAT: Bottom-up event recognition and explanation in video using
factorized graph attention network
| null |
IEEE Access, vol. 10, pp. 108797-108816, 2022
|
10.1109/ACCESS.2022.3213652
| null |
cs.CV cs.AI cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a pure-attention bottom-up approach, called ViGAT, that
utilizes an object detector together with a Vision Transformer (ViT) backbone
network to derive object and frame features, and a head network that processes
these features for the task of event recognition and explanation in video. The
ViGAT head consists of graph attention network (GAT) blocks
factorized along the spatial and temporal dimensions in order to capture
effectively both local and long-term dependencies between objects or frames.
Moreover, using the weighted in-degrees (WiDs) derived from the adjacency
matrices at the various GAT blocks, we show that the proposed architecture can
identify the most salient objects and frames that explain the decision of the
network. A comprehensive evaluation study is performed, demonstrating that the
proposed approach provides state-of-the-art results on three large, publicly
available video datasets (FCVID, Mini-Kinetics, ActivityNet).
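A minimal sketch of the weighted in-degree (WiD) computation used for
explanation; the attention-matrix convention (rows attend to columns) is an
assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 8))                 # adjacency/attention matrix of a GAT block
A /= A.sum(axis=1, keepdims=True)      # row-normalised, softmax-like weights

wid = A.sum(axis=0)                    # weighted in-degree per node (frame/object)
ranking = np.argsort(wid)[::-1]        # most salient nodes explain the decision
print("WiD:", np.round(wid, 2), "ranking:", ranking)
```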
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 14:12:05 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 12:44:11 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Gkalelis",
"Nikolaos",
""
],
[
"Daskalakis",
"Dimitrios",
""
],
[
"Mezaris",
"Vasileios",
""
]
] |
new_dataset
| 0.998261 |
2207.12644
|
Rohan Pratap Singh
|
Rohan Pratap Singh, Mehdi Benallegue, Mitsuharu Morisawa, Rafael
Cisneros, Fumio Kanehiro
|
Learning Bipedal Walking On Planned Footsteps For Humanoid Robots
|
GitHub code: https://github.com/rohanpsingh/LearningHumanoidWalking
| null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Deep reinforcement learning (RL) based controllers for legged robots have
demonstrated impressive robustness for walking in different environments for
several robot platforms. To enable the application of RL policies for humanoid
robots in real-world settings, it is crucial to build a system that can achieve
robust walking in any direction, on 2D and 3D terrains, and be controllable by
user commands. In this paper, we tackle this problem by learning a policy to
follow a given step sequence. The policy is trained with the help of a set of
procedurally generated step sequences (also called footstep plans). We show
that simply feeding the upcoming 2 steps to the policy is sufficient to achieve
omnidirectional walking, turning in place, standing, and climbing stairs. Our
method employs curriculum learning on the complexity of terrains, and
circumvents the need for reference motions or pre-trained weights. We
demonstrate the application of our proposed method to learn RL policies for two
new robot platforms - HRP5P and JVRC-1 - in the MuJoCo simulation environment.
The code for training and evaluation is available online.
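A minimal sketch of feeding the upcoming two footsteps to the policy as
described above; the proprioceptive layout and the (x, y, yaw) step encoding
are assumptions:

```python
import numpy as np

def build_observation(robot_state, footstep_plan, step_idx):
    # robot_state: proprioceptive vector; footstep_plan: (K, 3) array of
    # (x, y, yaw) step targets. Assumes step_idx < len(footstep_plan);
    # the last step is repeated when fewer than two steps remain.
    nxt = footstep_plan[step_idx:step_idx + 2]
    if len(nxt) < 2:
        nxt = np.vstack([nxt, nxt[-1]])
    return np.concatenate([robot_state, nxt.ravel()])

plan = np.array([[0.2, 0.1, 0.0], [0.4, -0.1, 0.0], [0.6, 0.1, 0.0]])
obs = build_observation(np.zeros(34), plan, 0)
print(obs.shape)  # (40,)
```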
|
[
{
"version": "v1",
"created": "Tue, 26 Jul 2022 04:16:00 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 09:48:32 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Singh",
"Rohan Pratap",
""
],
[
"Benallegue",
"Mehdi",
""
],
[
"Morisawa",
"Mitsuharu",
""
],
[
"Cisneros",
"Rafael",
""
],
[
"Kanehiro",
"Fumio",
""
]
] |
new_dataset
| 0.997435 |
2209.04427
|
Nelly Elsayed
|
Zag ElSayed, Murat Ozer, Nelly Elsayed, Magdy Bayoumi
|
Zydeco-Style Spike Sorting Low Power VLSI Architecture for IoT BCI
Implants
|
6 pages, 7 Figures
| null | null | null |
cs.AR cs.ET cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brain Computer Interface (BCI) has great potential for solving many brain
signal analysis limitations, mental disorder resolutions, and restoring missing
limb functionality via neural-controlled implants. However, no single safe
implant suitable for daily-life usage is available yet. Most of the
proposed implants have several implementation issues, such as infection hazards
and heat dissipation, which limits their usability and makes it more
challenging to pass regulations and quality control for production. The wireless
implant does not require a chronic wound in the skull. However, the current
complex clustering neuron identification algorithms inside the implant chip
consume a lot of power and bandwidth, causing higher heat dissipation issues
and draining the implant's battery. The spike sorting is the core unit of an
invasive BCI chip, which plays a significant role in power consumption,
accuracy, and area. Therefore, in this study, we propose a low-power, adaptive,
simplified VLSI architecture, "Zydeco-Style," for BCI spike sorting that is
computationally less complex and achieves higher accuracy, performing at up to
93.5% in the worst-case scenario. The architecture uses a low-power Bluetooth
wireless
communication module with external IoT medical ICU devices. The proposed
architecture was implemented and simulated in Verilog. In addition, we propose
a conceptual implant design.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 19:53:28 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2022 13:44:37 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Oct 2022 22:03:55 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"ElSayed",
"Zag",
""
],
[
"Ozer",
"Murat",
""
],
[
"Elsayed",
"Nelly",
""
],
[
"Bayoumi",
"Magdy",
""
]
] |
new_dataset
| 0.993418 |
2209.07025
|
Hung-Jui Guo
|
Hung-Jui Guo, Jonathan Z. Bakdash, Laura R. Marusich and Balakrishnan
Prabhakaran
|
Dynamic X-Ray Vision in Mixed Reality
| null | null |
10.1145/3562939.3565675
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
X-ray vision, a technique that allows users to see through walls and other
obstacles, is popular in Augmented Reality (AR) and Mixed Reality
(MR). In this paper, we demonstrate a dynamic X-ray vision window that is
rendered in real-time based on the user's current position and changes with
movement in the physical environment. Moreover, the location and transparency
of the window are also dynamically rendered based on the user's eye gaze. We
build this X-ray vision window for a current state-of-the-art MR Head-Mounted
Device (HMD) -- HoloLens 2 by integrating several different features: scene
understanding, eye tracking, and clipping primitive.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 03:32:10 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Guo",
"Hung-Jui",
""
],
[
"Bakdash",
"Jonathan Z.",
""
],
[
"Marusich",
"Laura R.",
""
],
[
"Prabhakaran",
"Balakrishnan",
""
]
] |
new_dataset
| 0.999429 |
2209.09116
|
Vinod Kumar Chauhan
|
Vinod Kumar Chauhan, Mark Bass, Ajith Kumar Parlikad and Alexandra
Brintrup
|
Trolley optimisation: An extension of bin packing to load PCB components
| null | null | null | null |
cs.CE math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A trolley is a container for loading printed circuit board (PCB) components
and a trolley optimisation problem (TOP) is an assignment of PCB components to
trolleys for use in the production of a set of PCBs in an assembly line. In
this paper, we introduce the TOP, a novel operations research application. To
formulate the TOP, we derive a novel extension of the bin packing problem. We
exploit the problem structure to decompose the TOP into two smaller, identical
and independent problems. Further, we develop a mixed integer linear
programming model to solve the TOP and prove that the TOP is an NP-complete
problem. A case study of an aerospace manufacturing company is used to
illustrate the TOP; the resulting solution successfully automated the company's
manual process and yielded significant cost reductions and greater flexibility
in the building process.
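A minimal sketch of the bin-packing core of such a formulation, assuming a
single slot capacity per trolley and omitting the PCB/assembly-line structure
of the full model (hypothetical data, solved with PuLP):

```python
import pulp

components = {"C1": 3, "C2": 5, "C3": 2, "C4": 4}   # slot requirements
trolleys = ["T1", "T2", "T3"]
capacity = 8

prob = pulp.LpProblem("trolley_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (components, trolleys), cat="Binary")
y = pulp.LpVariable.dicts("y", trolleys, cat="Binary")   # trolley used?

prob += pulp.lpSum(y[t] for t in trolleys)               # minimise trolleys used
for c in components:                                     # each component assigned once
    prob += pulp.lpSum(x[c][t] for t in trolleys) == 1
for t in trolleys:                                       # capacity per trolley
    prob += pulp.lpSum(components[c] * x[c][t] for c in components) <= capacity * y[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in trolleys:
    load = [c for c in components if x[c][t].value() == 1]
    print(t, load)
```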
|
[
{
"version": "v1",
"created": "Mon, 19 Sep 2022 15:38:14 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 12:34:19 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Chauhan",
"Vinod Kumar",
""
],
[
"Bass",
"Mark",
""
],
[
"Parlikad",
"Ajith Kumar",
""
],
[
"Brintrup",
"Alexandra",
""
]
] |
new_dataset
| 0.982702 |
2210.02643
|
Yanyan Zou
|
Peng Lin, Yanyan Zou, Lingfei Wu, Mian Ma, Zhuoye Ding, Bo Long
|
Automatic Scene-based Topic Channel Construction System for E-Commerce
|
EMNLP2022 Camera-ready
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Scene marketing that well demonstrates user interests within a certain
scenario has proved effective for offline shopping. To conduct scene marketing
for e-commerce platforms, this work presents a novel product form, scene-based
topic channel which typically consists of a list of diverse products belonging
to the same usage scenario and a topic title that describes the scenario with
marketing words. As manual construction of channels is time-consuming, given
billions of products as well as customers' dynamic and diverse interests, it is
necessary to leverage AI techniques to automatically construct channels for
certain usage scenarios and even discover novel topics. To be specific, we
first frame the channel construction task as a two-step problem, i.e.,
scene-based topic generation and product clustering, and propose an E-commerce
Scene-based Topic Channel construction system (i.e., ESTC) to achieve automated
production, consisting of scene-based topic generation model for the e-commerce
domain, product clustering on the basis of topic similarity, as well as quality
control based on automatic model filtering and human screening. Extensive
offline experiments and an online A/B test validate the effectiveness of such a
novel product form as well as the proposed system. In addition, we also
introduce the experience of deploying the proposed system on a real-world
e-commerce recommendation platform.
|
[
{
"version": "v1",
"created": "Thu, 6 Oct 2022 02:29:10 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Oct 2022 09:32:39 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Lin",
"Peng",
""
],
[
"Zou",
"Yanyan",
""
],
[
"Wu",
"Lingfei",
""
],
[
"Ma",
"Mian",
""
],
[
"Ding",
"Zhuoye",
""
],
[
"Long",
"Bo",
""
]
] |
new_dataset
| 0.995543 |
2210.07789
|
Pezhman Nasirifard
|
Pezhman Nasirifard, Hans-Arno Jacobsen
|
i13DR: A Real-Time Demand Response Infrastructure for Integrating
Renewable Energy Resources
| null | null | null | null |
cs.DC cs.SY eess.SP eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the ongoing integration of Renewable Energy Sources (RES), the
complexity of power grids is increasing. Due to the fluctuating nature of RES,
ensuring the reliability of power grids can be challenging. One possible
approach for addressing these challenges is Demand Response (DR) which is
described as matching the demand for electrical energy according to the changes
and the availability of supply. However, implementing a DR system to monitor
and control a broad set of electrical appliances in real-time introduces
several new complications, including ensuring the reliability and financial
feasibility of the system. In this work, we address these issues by designing
and implementing a distributed real-time DR infrastructure for laptops, which
estimates and controls the power consumption of a network of connected laptops
in response to the fast, irregular changes of RES. Furthermore, since our
approach is entirely software-based, we dramatically reduce the initial costs
of the demand-side participants. The results of our field experiments confirm
that our system successfully schedules and executes rapid and effective DR
events. However, the accuracy of the estimated power consumption of all
participating laptops is relatively low, a limitation directly caused by our
software-based approach.
|
[
{
"version": "v1",
"created": "Fri, 14 Oct 2022 13:19:20 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 09:27:55 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Nasirifard",
"Pezhman",
""
],
[
"Jacobsen",
"Hans-Arno",
""
]
] |
new_dataset
| 0.999588 |
2210.08993
|
Lei Zhang
|
Chuan Chen, Lei Zhang, Yihao Li, Tianchi Liao, Siran Zhao, Zibin
Zheng, Huawei Huang, Jiajing Wu
|
When Digital Economy Meets Web3.0: Applications and Challenges
|
14 pages, 5 figures
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the continuous development of web technology, Web3.0 has attracted a
considerable amount of attention due to its unique decentralized
characteristics. The digital economy is an important driver of high-quality
economic development and is currently in a rapid development stage. In the
digital economy scenario, the centralized nature of the Internet and other
characteristics usually bring about security issues such as infringement and
privacy leakage. Therefore, it is necessary to investigate how to use Web3.0
technologies to solve the pain points encountered in the development of the
digital economy by fully exploring the critical technologies of the digital
economy and Web3.0. In this paper, we discuss the aspects of Web3.0 that should
be integrated with the digital economy to better identify entry points for
solving these problems, by examining the latest advances of Web3.0 in machine
learning,
finance, and data management. We hope this research will inspire those involved
in both academia and industry, and ultimately help to build a favourable
ecosystem for the digital economy.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 13:32:04 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2022 07:28:55 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Oct 2022 06:53:51 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Chen",
"Chuan",
""
],
[
"Zhang",
"Lei",
""
],
[
"Li",
"Yihao",
""
],
[
"Liao",
"Tianchi",
""
],
[
"Zhao",
"Siran",
""
],
[
"Zheng",
"Zibin",
""
],
[
"Huang",
"Huawei",
""
],
[
"Wu",
"Jiajing",
""
]
] |
new_dataset
| 0.978056 |
2210.09946
|
Xuan Yang
|
Xuan Yang, Quanjin Tao, Xiao Feng, Donghong Cai, Xiang Ren, Yang Yang
|
MMGA: Multimodal Learning with Graph Alignment
|
Please contact xuany@zju.edu.cn for the dataset
| null | null | null |
cs.MM cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal pre-training breaks down the modality barriers and allows the
individual modalities to be mutually augmented with information, resulting in
significant advances in representation learning. However, the graph modality, a
very general and important form of data, cannot easily interact with other
modalities because of its non-regular nature. In this paper, we propose MMGA
(Multimodal learning with Graph Alignment), a novel multimodal pre-training
framework to incorporate information from graph (social network), image and
text modalities on social media to enhance user representation learning. In
MMGA, a multi-step graph alignment mechanism is proposed to add the
self-supervision from graph modality to optimize the image and text encoders,
while using the information from the image and text modalities to guide the
graph encoder learning. We conduct experiments on the dataset crawled from
Instagram. The experimental results show that MMGA works well on the dataset
and improves performance on the fan prediction task. To facilitate future
research, we release our dataset, the first multimodal social media dataset
with graph structure, covering 60,000 users labeled with specific topics based
on 2 million posts.
|
[
{
"version": "v1",
"created": "Tue, 18 Oct 2022 15:50:31 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 08:06:13 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Yang",
"Xuan",
""
],
[
"Tao",
"Quanjin",
""
],
[
"Feng",
"Xiao",
""
],
[
"Cai",
"Donghong",
""
],
[
"Ren",
"Xiang",
""
],
[
"Yang",
"Yang",
""
]
] |
new_dataset
| 0.996018 |
2210.10349
|
Botao Yu
|
Botao Yu, Peiling Lu, Rui Wang, Wei Hu, Xu Tan, Wei Ye, Shikun Zhang,
Tao Qin, Tie-Yan Liu
|
Museformer: Transformer with Fine- and Coarse-Grained Attention for
Music Generation
|
Accepted by the Thirty-sixth Conference on Neural Information
Processing Systems (NeurIPS 2022)
| null | null | null |
cs.SD cs.AI cs.CL cs.LG cs.MM eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Symbolic music generation aims to generate music scores automatically. A
recent trend is to use Transformer or its variants in music generation, which
is, however, suboptimal, because the full attention cannot efficiently model
the typically long music sequences (e.g., over 10,000 tokens), and the existing
models have shortcomings in generating musical repetition structures. In this
paper, we propose Museformer, a Transformer with a novel fine- and
coarse-grained attention for music generation. Specifically, with the
fine-grained attention, a token of a specific bar directly attends to all the
tokens of the bars that are most relevant to music structures (e.g., the
previous 1st, 2nd, 4th and 8th bars, selected via similarity statistics); with
the coarse-grained attention, a token only attends to the summarization of the
other bars rather than each token of them so as to reduce the computational
cost. The advantages are two-fold. First, it can capture both music
structure-related correlations via the fine-grained attention, and other
contextual information via the coarse-grained attention. Second, it is
efficient and can model over 3X longer music sequences compared to its
full-attention counterpart. Both objective and subjective experimental results
demonstrate its ability to generate long music sequences with high quality and
better structures.
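A minimal sketch of the bar-level fine-grained attention pattern described
above (each bar attends to itself and to the previous 1st, 2nd, 4th, and 8th
bars); token-level masking and the coarse-grained summary path are omitted:

```python
import numpy as np

def fine_grained_mask(num_bars: int, related=(1, 2, 4, 8)) -> np.ndarray:
    # mask[i, j] is True when bar i may attend to bar j: itself plus the
    # structure-related bars `related` positions earlier.
    mask = np.zeros((num_bars, num_bars), dtype=bool)
    for i in range(num_bars):
        mask[i, i] = True
        for r in related:
            if i - r >= 0:
                mask[i, i - r] = True
    return mask

print(fine_grained_mask(6).astype(int))
```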
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 07:31:56 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Oct 2022 03:50:25 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Yu",
"Botao",
""
],
[
"Lu",
"Peiling",
""
],
[
"Wang",
"Rui",
""
],
[
"Hu",
"Wei",
""
],
[
"Tan",
"Xu",
""
],
[
"Ye",
"Wei",
""
],
[
"Zhang",
"Shikun",
""
],
[
"Qin",
"Tao",
""
],
[
"Liu",
"Tie-Yan",
""
]
] |
new_dataset
| 0.997088 |
2210.11968
|
Haoyan Guan
|
Haoyan Guan, Michael Spratling
|
CobNet: Cross Attention on Object and Background for Few-Shot
Segmentation
|
Accepted to ICPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Few-shot segmentation aims to segment images containing objects from
previously unseen classes using only a few annotated samples. Most current
methods focus on using object information extracted, with the aid of human
annotations, from support images to identify the same objects in new query
images. However, background information can also be useful to distinguish
objects from their surroundings. Hence, some previous methods also extract
background information from the support images. In this paper, we argue that
such information is of limited utility, as the background in different images
can vary widely. To overcome this issue, we propose CobNet which utilises
information about the background that is extracted from the query images
without annotations of those images. Experiments show that our method achieves
a mean Intersection-over-Union score of 61.4% and 37.8% for 1-shot segmentation
on PASCAL-5i and COCO-20i respectively, outperforming previous methods. It is
also shown to produce state-of-the-art performance of 53.7% for
weakly-supervised few-shot segmentation, where no annotations are provided for
the support images.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 13:49:46 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Oct 2022 10:06:50 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Guan",
"Haoyan",
""
],
[
"Spratling",
"Michael",
""
]
] |
new_dataset
| 0.999084 |
2210.13832
|
Chen Zhang
|
Chen Zhang, Luis Fernando D'Haro, Qiquan Zhang, Thomas Friedrichs,
Haizhou Li
|
FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation
|
EMNLP-2022, 20 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent model-based reference-free metrics for open-domain dialogue evaluation
exhibit promising correlations with human judgment. However, they either
perform turn-level evaluation or look at a single dialogue quality dimension.
One would expect a good evaluation metric to assess multiple quality dimensions
at the dialogue level. To this end, we are motivated to propose a
multi-dimensional dialogue-level metric, which consists of three sub-metrics
with each targeting a specific dimension. The sub-metrics are trained with
novel self-supervised objectives and exhibit strong correlations with human
judgment for their respective dimensions. Moreover, we explore two approaches
to combine the sub-metrics: metric ensemble and multitask learning. Both
approaches yield a holistic metric that significantly outperforms individual
sub-metrics. Compared to the existing state-of-the-art metric, the combined
metrics achieve around 16% relative improvement on average across three
high-quality dialogue-level evaluation benchmarks.
|
[
{
"version": "v1",
"created": "Tue, 25 Oct 2022 08:26:03 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Oct 2022 07:05:37 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Zhang",
"Chen",
""
],
[
"D'Haro",
"Luis Fernando",
""
],
[
"Zhang",
"Qiquan",
""
],
[
"Friedrichs",
"Thomas",
""
],
[
"Li",
"Haizhou",
""
]
] |
new_dataset
| 0.990534 |
2210.14910
|
Alexandre Duval
|
Alexandre Duval, Anita Paas, Abdalwhab Abdalwhab and David St-Onge
|
The eyes and hearts of UAV pilots: observations of physiological
responses in real-life scenarios
| null | null | null | null |
cs.HC cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The drone industry is diversifying and the number of pilots is increasing
rapidly. In this context, flight schools need adapted tools to train pilots,
most importantly with regard to pilots' own awareness of their physiological and
cognitive limits. In civil and military aviation, pilots can train themselves
on realistic simulators to tune their reactions and reflexes, and also to gather
data on their piloting behavior and physiological states, which helps them
improve their performance. As opposed to cockpit scenarios, drone teleoperation
is conducted outdoors in the field, thus with only limited potential for
desktop simulation training. This work aims to provide a solution for gathering
pilots' behavior out in the field and helping them increase their performance. We
combined advanced object detection from a frontal camera with gaze and heart-rate
variability measurements. We observed pilots and analyzed their behavior over
three flight challenges. We believe this tool can support pilots both in their
training and in their regular flight tasks. A demonstration video is available
at https://www.youtube.com/watch?v=eePhjd2qNiI
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 14:16:56 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Duval",
"Alexandre",
""
],
[
"Paas",
"Anita",
""
],
[
"Abdalwhab",
"Abdalwhab",
""
],
[
"St-Onge",
"David",
""
]
] |
new_dataset
| 0.998751 |
2210.15491
|
Ekkasit Pinyoanuntapong
|
Ekkasit Pinyoanuntapong, Ayman Ali, Pu Wang, Minwoo Lee, Chen Chen
|
GaitMixer: Skeleton-based Gait Representation Learning via Wide-spectrum
Multi-axial Mixer
|
Submitted to ICASSP 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Most existing gait recognition methods are appearance-based, which rely on
the silhouettes extracted from the video data of human walking activities. The
less-investigated skeleton-based gait recognition methods directly learn the
gait dynamics from 2D/3D human skeleton sequences, which are theoretically more
robust solutions in the presence of appearance changes caused by clothes,
hairstyles, and carried objects. However, the performance of skeleton-based
solutions still lags far behind that of appearance-based ones. This paper aims
to close this performance gap by proposing a novel network model, GaitMixer, to
learn more discriminative gait representation from skeleton sequence data. In
particular, GaitMixer follows a heterogeneous multi-axial mixer architecture,
which exploits the spatial self-attention mixer followed by the temporal
large-kernel convolution mixer to learn rich multi-frequency signals in the
gait feature maps. Experiments on the widely used gait database, CASIA-B,
demonstrate that GaitMixer outperforms the previous SOTA skeleton-based methods
by a large margin while achieving a competitive performance compared with the
representative appearance-based solutions. Code will be available at
https://github.com/exitudio/gaitmixer
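A minimal sketch of a heterogeneous multi-axial mixer block in this spirit --
spatial self-attention over joints followed by a temporal large-kernel
depthwise convolution; dimensions, normalization, and residual placement are
assumptions, not the released architecture:

```python
import torch
import torch.nn as nn

class MultiAxialMixer(nn.Module):
    def __init__(self, dim: int = 64, joints: int = 17, kernel: int = 31):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        # depthwise conv with a large kernel mixes along the time axis
        self.tconv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, joints, dim)
        b, t, j, d = x.shape
        s = x.reshape(b * t, j, d)                        # mix across joints
        s = s + self.attn(self.norm1(s), self.norm1(s), self.norm1(s))[0]
        x = s.reshape(b, t, j, d)
        tm = x.permute(0, 2, 3, 1).reshape(b * j, d, t)   # mix across time
        tm = tm + self.tconv(tm)
        x2 = tm.reshape(b, j, d, t).permute(0, 3, 1, 2)
        return self.norm2(x2)

out = MultiAxialMixer()(torch.randn(2, 60, 17, 64))
print(out.shape)  # torch.Size([2, 60, 17, 64])
```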
|
[
{
"version": "v1",
"created": "Thu, 27 Oct 2022 14:30:52 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Oct 2022 02:38:31 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Pinyoanuntapong",
"Ekkasit",
""
],
[
"Ali",
"Ayman",
""
],
[
"Wang",
"Pu",
""
],
[
"Lee",
"Minwoo",
""
],
[
"Chen",
"Chen",
""
]
] |
new_dataset
| 0.956589 |
2210.16381
|
Srishti Gupta
|
Srishti Gupta, Chun-Hua Tsai, John M. Carroll
|
Not Another Day Zero: Design Hackathons for Community-Based Water
Quality Monitoring
|
21 pages, 3 figures, 3 tables
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This study looks at water quality monitoring and management as a new form of
community engagement. Through a series of `design hackathons', a unique
research method, we engaged with a hyperlocal community of citizens who are
actively involved in the monitoring and management of their local watershed. These
design hackathons sought to understand the motivation, practices, collaboration
and experiences of these citizens. Qualitative analysis of the data revealed the
nature of the complex stakeholder network, workflow practices, initiatives to
engage with a larger community, current state of technological infrastructure
being used, and innovative design scenarios proposed by the hackathon
participants. Based on this comprehensive analysis, we conceptualize water
quality monitoring and management as community-based monitoring and management,
and water data as community data. Such a conceptualization sheds light on how
these practices can help in preempting water crises by empowering citizens
through increased awareness, active participation and informal learning of
water data and resources.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 19:46:24 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Gupta",
"Srishti",
""
],
[
"Tsai",
"Chun-Hua",
""
],
[
"Carroll",
"John M.",
""
]
] |
new_dataset
| 0.988829 |
2210.16398
|
Dananajy Srinivas
|
Marie Grace, Xajavion "Jay" Seabrum, Dananjay Srinivas, Alexis Palmer
|
System Demo: Tool and Infrastructure for Offensive Language Error
Analysis (OLEA) in English
|
Source code and library download available on PyPI :
https://pypi.org/project/olea/
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The automatic detection of offensive language is a pressing societal need.
Many systems perform well on explicit offensive language but struggle to detect
more complex, nuanced, or implicit cases of offensive and hateful language.
OLEA is an open-source Python library that provides easy-to-use tools for error
analysis in the context of detecting offensive language in English. OLEA also
provides an infrastructure for re-distribution of new datasets and analysis
methods requiring very little coding.
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 20:38:34 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Grace",
"Marie",
""
],
[
"Seabrum",
"Xajavion \"Jay\"",
""
],
[
"Srinivas",
"Dananjay",
""
],
[
"Palmer",
"Alexis",
""
]
] |
new_dataset
| 0.999035 |
2210.16407
|
Yuling Gu
|
Yuling Gu, Yao Fu, Valentina Pyatkin, Ian Magnusson, Bhavana Dalvi
Mishra and Peter Clark
|
Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE
|
Accepted at The Third Workshop on Figurative Language Processing @
EMNLP 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Figurative language (e.g., "he flew like the wind") is challenging to
understand, as it is hard to tell what implicit information is being conveyed
from the surface form alone. We hypothesize that to perform this task well, the
reader needs to mentally elaborate the scene being described to identify a
sensible meaning of the language. We present DREAM-FLUTE, a figurative language
understanding system that does this, first forming a "mental model" of
situations described in a premise and hypothesis before making an
entailment/contradiction decision and generating an explanation. DREAM-FLUTE
uses an existing scene elaboration model, DREAM, for constructing its "mental
model." In the FigLang2022 Shared Task evaluation, DREAM-FLUTE achieved (joint)
first place (Acc@60=63.3%), and can perform even better with ensemble
techniques, demonstrating the effectiveness of this approach. More generally,
this work suggests that adding a reflective component to pretrained language
models can improve their performance beyond standard fine-tuning (3.3%
improvement in Acc@60).
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 21:14:23 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Gu",
"Yuling",
""
],
[
"Fu",
"Yao",
""
],
[
"Pyatkin",
"Valentina",
""
],
[
"Magnusson",
"Ian",
""
],
[
"Mishra",
"Bhavana Dalvi",
""
],
[
"Clark",
"Peter",
""
]
] |
new_dataset
| 0.997984 |
2210.16431
|
Fenglin Liu
|
Fenglin Liu, Xian Wu, Shen Ge, Xuancheng Ren, Wei Fan, Xu Sun, Yuexian
Zou
|
DiMBERT: Learning Vision-Language Grounded Representations with
Disentangled Multimodal-Attention
|
Published in ACM TKDD2022 (ACM Transactions on Knowledge Discovery
from Data)
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Vision-and-language (V-L) tasks require the system to understand both vision
content and natural language, thus learning fine-grained joint representations
of vision and language (a.k.a. V-L representations) is of paramount importance.
Recently, various pre-trained V-L models have been proposed to learn V-L
representations, achieving improved results in many tasks. However, the
mainstream models process both vision and language inputs with the same set of
attention matrices. As a result, the generated V-L representations are
entangled in one common latent space. To tackle this problem, we propose
DiMBERT (short for Disentangled Multimodal-Attention BERT), which is a novel
framework that applies separated attention spaces for vision and language, and
the representations of multi-modalities can thus be disentangled explicitly. To
enhance the correlation between vision and language in disentangled spaces, we
introduce the visual concepts to DiMBERT which represent visual information in
textual format. In this manner, visual concepts help to bridge the gap between
the two modalities. We pre-train DiMBERT on a large amount of image-sentence
pairs on two tasks: bidirectional language modeling and sequence-to-sequence
language modeling. After pre-training, DiMBERT is further fine-tuned for the
downstream tasks. Experiments show that DiMBERT sets new state-of-the-art
performance on three tasks (over four datasets), including both generation
tasks (image captioning and visual storytelling) and classification tasks
(referring expressions). The proposed DiM (short for Disentangled
Multimodal-Attention) module can be easily incorporated into existing
pre-trained V-L models to boost their performance, up to a 5% increase on the
representative task. Finally, we conduct a systematic analysis and demonstrate
the effectiveness of our DiM and the introduced visual concepts.
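A minimal sketch of the separated-attention idea, assuming one attention module
per modality over a shared token sequence; this is a schematic reading of DiM,
not the released implementation:

```python
import torch
import torch.nn as nn

class DisentangledAttention(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # each modality gets its own projection matrices, keeping the two
        # latent spaces disentangled instead of sharing one attention set
        self.vis_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor):
        ctx = torch.cat([vis, txt], dim=1)   # both modalities as context
        v_out, _ = self.vis_attn(vis, ctx, ctx)
        t_out, _ = self.txt_attn(txt, ctx, ctx)
        return v_out, t_out

v, t = DisentangledAttention()(torch.randn(2, 10, 64), torch.randn(2, 6, 64))
print(v.shape, t.shape)
```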
|
[
{
"version": "v1",
"created": "Fri, 28 Oct 2022 23:00:40 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Liu",
"Fenglin",
""
],
[
"Wu",
"Xian",
""
],
[
"Ge",
"Shen",
""
],
[
"Ren",
"Xuancheng",
""
],
[
"Fan",
"Wei",
""
],
[
"Sun",
"Xu",
""
],
[
"Zou",
"Yuexian",
""
]
] |
new_dataset
| 0.993769 |
2210.16453
|
Neelanjan Bhowmik
|
Neelanjan Bhowmik, Toby P. Breckon
|
Joint Sub-component Level Segmentation and Classification for Anomaly
Detection within Dual-Energy X-Ray Security Imagery
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
X-ray baggage security screening is in widespread use and crucial to
maintaining transport security for threat/anomaly detection tasks. The
automatic detection of anomalies concealed within cluttered and complex
electronics/electrical items using 2D X-ray imagery has been of primary interest
in recent years. We address this task by introducing a joint object sub-component
level segmentation and classification strategy using a deep Convolutional Neural
Network architecture. The performance is evaluated over a dataset of cluttered
X-ray baggage security imagery, consisting of consumer electrical and
electronics items using variants of dual-energy X-ray imagery (pseudo-colour,
high, low, and effective-Z). The proposed joint sub-component level
segmentation and classification approach achieves ~99% true positives and ~5%
false positives for the anomaly detection task.
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 00:44:50 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Bhowmik",
"Neelanjan",
""
],
[
"Breckon",
"Toby P.",
""
]
] |
new_dataset
| 0.951034 |
2210.16457
|
Yi Cui
|
Yi Cui, Yao Li, Jayson R. Miedema, Sherif Farag, J.S. Marron, Nancy E.
Thomas
|
Region of Interest Detection in Melanocytic Skin Tumor Whole Slide
Images
|
Accepted to MedNeurIPS 2022
| null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automated region of interest detection in histopathological image analysis is
a challenging and important topic with tremendous potential impact on clinical
practice. The deep-learning methods used in computational pathology help us to
reduce costs and increase the speed and accuracy of regions of interest
detection and cancer diagnosis. In this work, we propose a patch-based region
of interest detection method for melanocytic skin tumor whole-slide images. We
work with a dataset that contains 165 Hematoxylin and Eosin whole-slide images
of primary melanomas and nevi, and we build a deep-learning method. The proposed
method performs well on a hold-out test data set including five TCGA-SKCM
slides (accuracy of 93.94\% in slide classification task and intersection over
union rate of 41.27\% in the region of interest detection task), showing the
outstanding performance of our model on melanocytic skin tumors. Even though
the experiments are conducted on the skin tumor dataset, our work could also be
extended to other medical image detection problems, such as the classification
and prediction of various tumors, to benefit the clinical evaluation and
diagnosis of different tumors.
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 01:12:08 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Cui",
"Yi",
""
],
[
"Li",
"Yao",
""
],
[
"Miedema",
"Jayson R.",
""
],
[
"Farag",
"Sherif",
""
],
[
"Marron",
"J. S.",
""
],
[
"Thomas",
"Nancy E.",
""
]
] |
new_dataset
| 0.999784 |
2210.16510
|
Kohei Honda
|
Kohei Honda, Kenji Koide, Masashi Yokozuka, Shuji Oishi, and Atsuhiko
Banno
|
Generalized LOAM: LiDAR Odometry Estimation with Trainable Local
Geometric Features
|
8 pages, 7 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a LiDAR odometry estimation framework called Generalized
LOAM. Our proposed method is generalized in that it can seamlessly fuse various
local geometric shapes around points to improve the position estimation
accuracy compared to the conventional LiDAR odometry and mapping (LOAM) method.
To utilize continuous geometric features for LiDAR odometry estimation, we
incorporate tiny neural networks into a generalized iterative closest point
(GICP) algorithm. These neural networks improve the data association metric and
the matching cost function using local geometric features. Experiments with the
KITTI benchmark demonstrate that our proposed method reduces relative
trajectory errors compared to the other LiDAR odometry estimation methods.
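A minimal sketch of how a tiny network could weight GICP-style residuals from
local geometric features; the descriptor layout, network size, and cost form
are assumptions:

```python
import torch
import torch.nn as nn

# A tiny MLP maps a local geometric descriptor (e.g., eigenvalue-based shape
# features of a point's neighbourhood) to a positive per-correspondence weight
# that scales residuals inside a GICP-style cost. Sizes are hypothetical.
weight_net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                           nn.Linear(16, 1), nn.Softplus())

features = torch.rand(1024, 8)     # one descriptor per matched point
residuals = torch.randn(1024, 3)   # point-to-point residuals after alignment
w = weight_net(features)           # (1024, 1) learned weights
cost = (w * residuals.pow(2).sum(dim=1, keepdim=True)).mean()
print(float(cost))
```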
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 06:39:12 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Honda",
"Kohei",
""
],
[
"Koide",
"Kenji",
""
],
[
"Yokozuka",
"Masashi",
""
],
[
"Oishi",
"Shuji",
""
],
[
"Banno",
"Atsuhiko",
""
]
] |
new_dataset
| 0.98098 |
2210.16572
|
Yu-Ju Tsai
|
Zhong-Min Tsai, Yu-Ju Tsai, Chien-Yao Wang, Hong-Yuan Liao, Youn-Long
Lin, Yung-Yu Chuang
|
SearchTrack: Multiple Object Tracking with Object-Customized Search and
Motion-Aware Features
|
BMVC 2022. Code: https://github.com/qa276390/SearchTrack
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper presents a new method, SearchTrack, for multiple object tracking
and segmentation (MOTS). To address the association problem between detected
objects, SearchTrack proposes object-customized search and motion-aware
features. By maintaining a Kalman filter for each object, we encode the
predicted motion into the motion-aware feature, which includes both motion and
appearance cues. For each object, SearchTrack creates a customized fully
convolutional search engine by learning a set of weights for dynamic
convolutions specific to that object. Experiments demonstrate that our
SearchTrack method outperforms competitive methods on both MOTS and MOT tasks,
particularly in terms of association accuracy. Our method achieves 71.5 HOTA
(car) and 57.6 HOTA (pedestrian) on the KITTI MOTS and 53.4 HOTA on MOT17. In
terms of association accuracy, our method achieves state-of-the-art performance
among 2D online methods on the KITTI MOTS. Our code is available at
https://github.com/qa276390/SearchTrack.
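A minimal sketch of the per-object Kalman filter whose prediction feeds the
motion-aware feature; a constant-velocity state [cx, cy, vx, vy] is assumed:

```python
import numpy as np

class ConstantVelocityKF:
    def __init__(self, cx: float, cy: float):
        self.x = np.array([cx, cy, 0.0, 0.0])    # state: position + velocity
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.array([[1, 0, 1, 0],          # constant-velocity transition
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.eye(2, 4)                     # we observe (cx, cy) only
        self.Q = np.eye(4) * 0.01                 # process noise
        self.R = np.eye(2) * 1.0                  # measurement noise

    def predict(self) -> np.ndarray:
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]          # predicted centre -> motion-aware feature

    def update(self, z: np.ndarray) -> None:
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = ConstantVelocityKF(100, 50)
kf.update(np.array([102.0, 51.0]))
print(kf.predict())
```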
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 11:17:53 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Tsai",
"Zhong-Min",
""
],
[
"Tsai",
"Yu-Ju",
""
],
[
"Wang",
"Chien-Yao",
""
],
[
"Liao",
"Hong-Yuan",
""
],
[
"Lin",
"Youn-Long",
""
],
[
"Chuang",
"Yung-Yu",
""
]
] |
new_dataset
| 0.9894 |
2210.16595
|
Xianwang Xie
|
Xianwang Xie, Bin Wu, Botao Hou
|
BEPHAP: A Blockchain-Based Efficient Privacy-Preserving Handover
Authentication Protocol with Key Agreement for Internet of Vehicles
|
14 pages, 7 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet of Vehicles (IoV) can significantly improve transportation
efficiency and ensure traffic safety. Authentication is regarded as the
fundamental defense line against attacks in IoV. However, the state-of-the-art
approaches suffer from several drawbacks, including bottlenecks of the single
cloud server model, high computational overhead of operations, excessive trust
in cloud servers and roadside units (RSUs), and leakage of vehicle trajectory
privacy. In this paper, BEPHAP, a Blockchain-based Efficient Privacy-preserving
Handover Authentication Protocol with key agreement for internet of vehicles,
is introduced to address these problems. BEPHAP achieves anonymous cross-domain
mutual handover authentication with key agreement based on the tamper-proof
blockchain, symmetric cryptography, and the chameleon hash function under a
security model that cloud servers and RSUs may launch attacks. BEPHAP is
particularly well suited for IoV since vehicles only need to perform
lightweight cryptographic operations during the authentication phase. BEPHAP
also achieves data confidentiality, unlinkability, traceability,
non-repudiation, non-frameability, and key escrow freeness. Formal verification
based on ProVerif and formal security proofs based on the BAN logic indicates
that BEPHAP is resistant to various typical attacks, such as man-in-the-middle
attacks, impersonation attacks, and replay attacks. Performance analysis
demonstrates that BEPHAP surpasses existing works in both computation and
communication efficiencies. Moreover, the message loss rate remains 0 at 5000
requests per second, which meets the requirements of IoV.
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 13:29:38 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Xie",
"Xianwang",
""
],
[
"Wu",
"Bin",
""
],
[
"Hou",
"Botao",
""
]
] |
new_dataset
| 0.997377 |
2210.16627
|
Kaiyuan Tan
|
Huimin Xiong, Kunle Li, Kaiyuan Tan, Yang Feng, Joey Tianyi Zhou, Jin
Hao, Zuozhu Liu
|
TFormer: 3D Tooth Segmentation in Mesh Scans with Geometry Guided
Transformer
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Optical Intra-oral Scanners (IOS) are widely used in digital dentistry,
providing 3-Dimensional (3D) and high-resolution geometrical information of
dental crowns and the gingiva. Accurate 3D tooth segmentation, which aims to
precisely delineate the tooth and gingiva instances in IOS, plays a critical
role in a variety of dental applications. However, previous methods are
error-prone at complicated tooth-tooth or tooth-gingiva boundaries and usually
exhibit unsatisfactory results across various patients, while their clinical
applicability has not been verified on large-scale datasets. In
this paper, we propose a novel method based on 3D transformer architectures
that is evaluated with large-scale and high-resolution 3D IOS datasets. Our
method, termed TFormer, captures both local and global dependencies among
different teeth to distinguish various types of teeth with divergent anatomical
structures and confusing boundaries. Moreover, we design a geometry guided loss
based on a novel point curvature to exploit boundary geometric features, which
helps refine the boundary predictions for more accurate and smooth
segmentation. We further employ a multi-task learning scheme, where an
additional teeth-gingiva segmentation head is introduced to improve the
performance. Extensive experimental results on a large-scale dataset with
16,000 IOS, the largest IOS dataset to the best of our knowledge, demonstrate
that our TFormer surpasses existing state-of-the-art baselines by a large margin,
with its utility in real-world scenarios verified by a clinical applicability
test.
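A minimal sketch of a geometry-guided, curvature-weighted loss in this spirit;
the paper's actual point-curvature definition and weighting scheme are not
reproduced:

```python
import torch
import torch.nn.functional as F

def curvature_weighted_loss(logits, labels, curvature, alpha: float = 2.0):
    # logits: (N, C) per-point class scores; labels: (N,); curvature: (N,) >= 0.
    # Per-point cross-entropy is scaled up where curvature is high, which
    # concentrates the loss on tooth-tooth / tooth-gingiva boundaries.
    ce = F.cross_entropy(logits, labels, reduction="none")
    w = 1.0 + alpha * curvature / (curvature.max() + 1e-8)
    return (w * ce).mean()

loss = curvature_weighted_loss(torch.randn(100, 17),
                               torch.randint(0, 17, (100,)),
                               torch.rand(100))
print(float(loss))
```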
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 15:20:54 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Xiong",
"Huimin",
""
],
[
"Li",
"Kunle",
""
],
[
"Tan",
"Kaiyuan",
""
],
[
"Feng",
"Yang",
""
],
[
"Zhou",
"Joey Tianyi",
""
],
[
"Hao",
"Jin",
""
],
[
"Liu",
"Zuozhu",
""
]
] |
new_dataset
| 0.988709 |
2210.16639
|
Yihua Cheng
|
Yihua Cheng, Anton Arapin, Ziyi Zhang, Qizheng Zhang, Hanchen Li, Nick
Feamster, Junchen Jiang
|
GRACE: Loss-Resilient Real-Time Video Communication Using Data-Scalable
Autoencoder
| null | null | null | null |
cs.MM cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Across many real-time video applications, we see a growing need (especially
under long delays and dynamic bandwidth) to allow clients to decode each frame
once any (non-empty) subset of its packets is received and to improve quality
with each new packet. We call this data-scalable delivery. Unfortunately,
existing techniques (e.g., FEC, RS and Fountain Codes) fall short: they require
delivery of a minimum number of packets to decode frames, and/or pad video data
with redundancy in anticipation of packet losses, which hurts video quality if
no packets get lost. This work explores a new approach, inspired by recent
advances of neural-network autoencoders, which make data-scalable delivery
possible. We present Grace, a concrete data-scalable real-time video system.
With the same video encoding, Grace's quality is slightly lower than that of a
traditional codec without redundancy when no packet is lost, but with each
missed packet, its quality degrades much more gracefully than existing
solutions, allowing clients to flexibly trade between frame delay and video
quality. Grace makes two contributions: (1) it trains new custom autoencoders
to balance compression efficiency and resilience against a wide range of packet
losses; and (2) it uses a new transmission scheme to deliver autoencoder-coded
frames as individually decodable packets. We test Grace (and traditional
loss-resilient schemes and codecs) on real network traces and videos, and show
that while Grace's compression efficiency is slightly worse than heavily
engineered video codecs, it significantly reduces tail video frame delay (by
2$\times$ at the 95th percentile) with only marginally lowered video quality.
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 16:02:48 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Cheng",
"Yihua",
""
],
[
"Arapin",
"Anton",
""
],
[
"Zhang",
"Ziyi",
""
],
[
"Zhang",
"Qizheng",
""
],
[
"Li",
"Hanchen",
""
],
[
"Feamster",
"Nick",
""
],
[
"Jiang",
"Junchen",
""
]
] |
new_dataset
| 0.990642 |
2210.16644
|
Anchit Gupta
|
Darshan Singh S, Anchit Gupta, C. V. Jawahar, Makarand Tapaswi
|
Unsupervised Audio-Visual Lecture Segmentation
|
17 pages, 14 figures, 14 tables, Accepted to WACV 2023. Project page:
https://cvit.iiit.ac.in/research/projects/cvit-projects/avlectures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Over the last decade, online lecture videos have become increasingly popular
and have experienced a meteoric rise during the pandemic. However,
video-language research has primarily focused on instructional videos or
movies, and tools to help students navigate the growing online lectures are
lacking. Our first contribution is to facilitate research in the educational
domain, by introducing AVLectures, a large-scale dataset consisting of 86
courses with over 2,350 lectures covering various STEM subjects. Each course
contains video lectures, transcripts, OCR outputs for lecture frames, and
optionally lecture notes, slides, assignments, and related educational content
that can inspire a variety of tasks. Our second contribution is introducing
video lecture segmentation that splits lectures into bite-sized topics that
show promise in improving learner engagement. We formulate lecture segmentation
as an unsupervised task that leverages visual, textual, and OCR cues from the
lecture, while clip representations are fine-tuned on a pretext self-supervised
task of matching the narration with the temporally aligned visual content. We
use these representations to generate segments using a temporally consistent
1-nearest neighbor algorithm, TW-FINCH. We evaluate our method on 15 courses
and compare it against various visual and textual baselines, outperforming all
of them. Our comprehensive ablation studies also identify the key factors
driving the success of our approach.
|
[
{
"version": "v1",
"created": "Sat, 29 Oct 2022 16:26:34 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"S",
"Darshan Singh",
""
],
[
"Gupta",
"Anchit",
""
],
[
"Jawahar",
"C. V.",
""
],
[
"Tapaswi",
"Makarand",
""
]
] |
new_dataset
| 0.999889 |
2210.16776
|
Veysel Kocaman Vk
|
Veysel Kocaman, Ofer M. Shir, Thomas B\"ack, Ahmed Nabil Belbachir
|
Saliency Can Be All You Need In Contrastive Self-Supervised Learning
|
Accepted for the 17th International Symposium on Visual Computing
(ISVC 2022)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose an augmentation policy for Contrastive Self-Supervised Learning
(SSL) in the form of an already established Salient Image Segmentation
technique entitled Global Contrast based Salient Region Detection. This
detection technique, which had been devised for unrelated Computer Vision
tasks, was empirically observed to play the role of an augmentation facilitator
within the SSL protocol. This observation is rooted in our practical attempts
to learn, by SSL-fashion, aerial imagery of solar panels, which exhibit
challenging boundary patterns. Upon the successful integration of this
technique on our problem domain, we formulated a generalized procedure and
conducted a comprehensive, systematic performance assessment with various
Contrastive SSL algorithms subject to standard augmentation techniques. This
evaluation, which was conducted across multiple datasets, indicated that the
proposed technique indeed contributes to SSL. We hypothesize that salient
image segmentation may suffice as the only augmentation policy in Contrastive
SSL when treating downstream segmentation tasks.
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 08:47:53 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Kocaman",
"Veysel",
""
],
[
"Shir",
"Ofer M.",
""
],
[
"Bäck",
"Thomas",
""
],
[
"Belbachir",
"Ahmed Nabil",
""
]
] |
new_dataset
| 0.995215 |
2210.16785
|
Andrew Huard
|
Andrew Huard, Mengyu Chen, Misha Sra
|
CardsVR: A Two-Person VR Experience with Passive Haptic Feedback from a
Deck of Playing Cards
| null | null |
10.1109/ISMAR55827.2022.00070
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Presence in virtual reality (VR) is meaningful for remotely connecting with
others and facilitating social interactions despite great distance while
providing a sense of "being there." This work presents CardsVR, a two-person VR
experience that allows remote participants to play a game of cards together. An
entire deck of tracked cards is used to recreate the sense of playing cards
in person. Prior work in VR commonly provides passive haptic feedback either
through a single object or through static objects in the environment. CardsVR
is novel in providing passive haptic feedback through multiple cards that are
individually tracked and represented in the virtual environment. Participants
interact with the physical cards by picking them up, holding them, playing
them, or moving them on the physical table. Our participant study (N=23) shows
that passive haptic feedback provides significant improvement in three standard
measures of presence: Possibility to Act, Realism, and Haptics.
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 09:27:37 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Huard",
"Andrew",
""
],
[
"Chen",
"Mengyu",
""
],
[
"Sra",
"Misha",
""
]
] |
new_dataset
| 0.99901 |
2210.16807
|
Stefano Berretti
|
F. Principi, S. Berretti, C. Ferrari, N. Otberdout, M. Daoudi, A. Del
Bimbo
|
The Florence 4D Facial Expression Dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human facial expressions change dynamically, so their recognition / analysis
should be conducted by accounting for the temporal evolution of face
deformations either in 2D or 3D. While abundant 2D video data do exist, this is
not the case in 3D, where few 3D dynamic (4D) datasets have been released for public
use. The negative consequence of this scarcity of data is amplified by current
deep learning based-methods for facial expression analysis that require large
quantities of variegated samples to be effectively trained. With the aim of
mitigating such limitations, in this paper we propose a large dataset, named
Florence 4D, composed of dynamic sequences of 3D face models, where a
combination of synthetic and real identities exhibits an unprecedented variety
of 4D facial expressions, with variations that include the classical
neutral-apex transition but generalize to expression-to-expression transitions. All these
characteristics are not exposed by any of the existing 4D datasets and they
cannot even be obtained by combining more than one dataset. We strongly believe
that making such a data corpus publicly available to the community will allow
the design of and experimentation with new applications that were not possible to
investigate until now. To show to some extent the difficulty of our data in
terms of different identities and varying expressions, we also report an
experimentation on the proposed dataset that can serve as a baseline.
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 10:45:21 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Principi",
"F.",
""
],
[
"Berretti",
"S.",
""
],
[
"Ferrari",
"C.",
""
],
[
"Otberdout",
"N.",
""
],
[
"Daoudi",
"M.",
""
],
[
"Del Bimbo",
"A.",
""
]
] |
new_dataset
| 0.999253 |
2210.16847
|
Liu Zhuang Mr.
|
Zhuang Liu, Zhichao Zhao, Ye Yuan, Zhi Qiao, Jinfeng Bai and Zhilong
Ji
|
1st Place Solutions for UG2+ Challenge 2022 ATMOSPHERIC TURBULENCE
MITIGATION
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this technical report, we briefly introduce the solution of our team
''summer'' for Atmospheric Turbulence Mitigation in the UG$^2$+ Challenge at CVPR
2022. For this task, we propose a unified end-to-end framework to reconstruct a
high-quality image from distorted frames, which mainly consists of a
Restormer-based image reconstruction module and a NIMA-based image quality
assessment module. Our framework is efficient and generic, being adapted to
both hot-air images and text patterns. Moreover, we elaborately synthesize more
than 10 thousand images to simulate atmospheric turbulence, and these
images improve the robustness of the model. Finally, we achieve an average
accuracy of 98.53\% on the reconstruction result of the text patterns, ranking
1st on the final leaderboard.
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 14:11:36 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Liu",
"Zhuang",
""
],
[
"Zhao",
"Zhichao",
""
],
[
"Yuan",
"Ye",
""
],
[
"Qiao",
"Zhi",
""
],
[
"Bai",
"Jinfeng",
""
],
[
"Ji",
"Zhilong",
""
]
] |
new_dataset
| 0.986238 |
2210.16849
|
Yiwen Wang
|
Yiwen Wang, Zijian Lan, Xihong Wu, Tianshu Qu
|
TT-Net: Dual-path transformer based sound field translation in the
spherical harmonic domain
|
Submitted to ICASSP 2023
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In current methods for sound field translation tasks based on
spherical harmonic (SH) analysis, the solution based on the addition theorem
usually faces the problem of singular values caused by large matrix condition
numbers. The influence of different distances and frequencies of the spherical
radial function on the stability of the translation matrix affects the
accuracy of the SH coefficients at the selected point. Due to the problems
mentioned above, we propose a neural network scheme based on the dual-path
transformer. More specifically, the dual-path network is constructed with
self-attention modules along the two dimensions of the frequency and order axes.
The transform-average-concatenate layer and upscaling layer are introduced in
the network, which provides solutions for multiple sampling points and
upscaling. Numerical simulation results indicate that both the working
frequency range and the distance range of the translation are extended. More
accurate higher-order SH coefficients are obtained with the proposed dual-path
network.
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 14:16:48 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Wang",
"Yiwen",
""
],
[
"Lan",
"Zijian",
""
],
[
"Wu",
"Xihong",
""
],
[
"Qu",
"Tianshu",
""
]
] |
new_dataset
| 0.965105 |
2210.16901
|
Xin Zhong
|
Travis Munyer, Daniel Brinkman, Xin Zhong, Chenyu Huang, Iason
Konstantzos
|
Foreign Object Debris Detection for Airport Pavement Images based on
Self-supervised Localization and Vision Transformer
|
This paper has been accepted for publication by the 2022
International Conference on Computational Science & Computational
Intelligence (CSCI'22), Research Track on Signal & Image Processing, Computer
Vision & Pattern Recognition
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised object detection methods perform poorly when applied to Foreign
Object Debris (FOD) detection because FOD can be arbitrary objects
according to the Federal Aviation Administration (FAA) specification. Current
supervised object detection algorithms require datasets that contain annotated
examples of every to-be-detected object. While a large and expensive dataset
could be developed to include common FOD examples, it is infeasible to cover
all possible FOD examples in a dataset because of the open-ended nature of FOD.
These dataset limitations can cause FOD detection systems driven by supervised
algorithms to miss certain FOD, which can endanger airport operations. To this
end, this paper presents a self-supervised FOD localization method that learns
to predict the runway images, avoiding the enumeration of annotated FOD
examples. The localization method utilizes the Vision Transformer (ViT) to
improve localization performance. The experiments show that the method
successfully detects arbitrary FOD in real-world runway situations. The paper
also extends the localization result to perform classification, a feature that
can be useful to downstream tasks. To train the localization model, this paper
also presents a simple and realistic dataset creation framework that only
requires collecting clean runway images. The training and testing data for this
method are collected at a local airport using unmanned aircraft systems (UAS).
Additionally, the developed dataset is released for public use and further
studies.
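
To make the self-supervised idea concrete, here is a minimal sketch (not the
paper's implementation) of reconstruction-based localization: a model trained
only on clean runway images is asked to predict the input, and patches with
large prediction error are flagged as potential FOD. The `vit_autoencoder`
callable and the threshold value are placeholders.

```python
# Minimal reconstruction-error localization sketch; the model and threshold
# are illustrative placeholders, not the paper's actual components.
import torch
import torch.nn.functional as F

def localize_fod(image, vit_autoencoder, patch=16, err_thresh=0.05):
    """image: (1, 3, H, W) runway image; returns a boolean patch-level mask."""
    with torch.no_grad():
        recon = vit_autoencoder(image)                       # predicted clean image
    err = (image - recon).pow(2).mean(dim=1, keepdim=True)   # per-pixel error
    # Pool the error map to patch resolution and threshold it.
    patch_err = F.avg_pool2d(err, kernel_size=patch, stride=patch)
    return patch_err.squeeze() > err_thresh                  # True = suspected FOD
```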
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 17:48:57 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Munyer",
"Travis",
""
],
[
"Brinkman",
"Daniel",
""
],
[
"Zhong",
"Xin",
""
],
[
"Huang",
"Chenyu",
""
],
[
"Konstantzos",
"Iason",
""
]
] |
new_dataset
| 0.999331 |
2210.16923
|
Ali Imran
|
Megan Heath, Ali Imran, David St-Onge
|
See as a Bee: UV Sensor for Aerial Strawberry Crop Monitoring
|
The video reference is: https://www.youtube.com/watch?v=ZSasfgOsjAY.
This paper has been submitted to ICRA 2023
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Precision agriculture aims to use technological tools in the agro-food
sector to increase productivity, cut labor costs, and reduce the use of
resources. This work takes inspiration from bees' vision to design a remote
sensing system tailored to incorporate UV reflectance into a flower detector.
We demonstrate how this approach can provide feature-rich images for deep
learning strawberry flower detection, and we apply it to a scalable yet
cost-effective aerial monitoring robotic system in the field. We also compare
the performance of our UV-G-B image detector with that of a similar work that
utilizes RGB images.
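
A purely illustrative sketch of composing the UV-G-B input referred to above:
replace the red channel of a co-registered RGB frame with the UV-reflectance
channel before feeding a detector. The channel ordering is an assumption on
our part, not a detail from the paper.

```python
# Illustrative only: build a UV-G-B frame from RGB plus a UV channel.
import numpy as np

def make_uvgb(rgb, uv):
    """rgb: (H, W, 3) uint8 frame; uv: (H, W) co-registered UV reflectance."""
    uvgb = rgb.copy()
    uvgb[..., 0] = uv  # channel order assumed to be UV, G, B
    return uvgb
```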
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 18:56:24 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Heath",
"Megan",
""
],
[
"Imran",
"Ali",
""
],
[
"St-Onge",
"David",
""
]
] |
new_dataset
| 0.995358 |
2210.16924
|
Samyak Prajapati
|
Samyak Prajapati, Amrit Raj, Yash Chaudhari, Akhilesh Nandwal, Japman
Singh Monga
|
OGInfra: Geolocating Oil & Gas Infrastructure using Remote Sensing based
Active Fire Data
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Remote sensing has become a crucial part of our daily lives, whether it be
triangulating our location using GPS or providing us with a weather forecast.
It has applications in military, socio-economic, and commercial domains, and
even in supporting humanitarian efforts. This work proposes a novel technique
for the automated geolocation of Oil & Gas infrastructure using Active Fire
Data from the NASA FIRMS data repository and deep learning techniques,
achieving a top accuracy of 90.68% with ResNet101.
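
For illustration only, a minimal sketch of the classification step implied
above: a ResNet101 fine-tuned to decide whether an imagery tile around a FIRMS
active-fire detection corresponds to Oil & Gas infrastructure. The two-class
head and the input layout are our assumptions, not details from the paper.

```python
# Illustrative ResNet101 classifier over fire-detection imagery tiles.
import torch
import torch.nn as nn
from torchvision.models import resnet101

model = resnet101(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, 2)   # flare site vs. other fire

def classify_fire_patch(patch):
    """patch: (1, 3, 224, 224) imagery tile around a FIRMS fire detection."""
    model.eval()
    with torch.no_grad():
        return model(patch).argmax(dim=1)       # 1 = likely O&G infrastructure
```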
|
[
{
"version": "v1",
"created": "Sun, 30 Oct 2022 18:58:15 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Prajapati",
"Samyak",
""
],
[
"Raj",
"Amrit",
""
],
[
"Chaudhari",
"Yash",
""
],
[
"Nandwal",
"Akhilesh",
""
],
[
"Monga",
"Japman Singh",
""
]
] |
new_dataset
| 0.99937 |
2210.16986
|
Jun Zhou
|
Jun Zhou, Feng Qi, Zhigang Hua, Daohong Jian, Ziqi Liu, Hua Wu,
Xingwen Zhang, Shuang Yang
|
A Practical Distributed ADMM Solver for Billion-Scale Generalized
Assignment Problems
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Assigning items to owners is a common problem found in various real-world
applications, for example, audience-channel matching in marketing campaigns,
borrower-lender matching in loan management, and shopper-merchant matching in
e-commerce. Given an objective and multiple constraints, an assignment problem
can be formulated as a constrained optimization problem. Such assignment
problems are usually NP-hard, so when the number of items or the number of
owners is large, solving for exact solutions becomes challenging. In this
paper, we are interested in solving constrained assignment problems with
hundreds of millions of items. Thus, with just tens of owners, the number of
decision variables is at billion-scale. This scale is usually seen in the
internet industry, which makes decisions for large groups of users. We relax
any integer constraints and formulate a general optimization problem
that covers commonly seen assignment problems. Its objective function is
convex, and its constraints are either linear, or convex and separable by item.
We solve our generalized assignment problems within the Bregman Alternating
Direction Method of Multipliers (BADMM) framework, where we exploit the Bregman
divergence to transform the augmented Lagrangian into a separable form and
solve many subproblems in parallel. The entire solution can thus be implemented
with a MapReduce-style distributed computation framework. We present
experimental results on both synthetic and real-world datasets to verify its
accuracy and scalability.
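
A minimal sketch, under strong simplifying assumptions, of the item-separable
update structure described above: with a KL (entropic) Bregman divergence, each
item's subproblem has a closed-form multiplicative update, so rows can be
processed independently (e.g., by mappers in a MapReduce job). This is
illustrative only, not the paper's solver; the capacity-scaling step and all
parameter names are our own assumptions.

```python
# Illustrative entropic BADMM-style solver for a soft assignment problem:
# maximize total value subject to each item summing to 1 across owners and
# soft per-owner capacity limits. Not the paper's implementation.
import numpy as np

def badmm_assign(value, capacity, rho=1.0, iters=200):
    """value: (n_items, n_owners) payoff; capacity: (n_owners,) soft limits."""
    n, m = value.shape
    x = np.full((n, m), 1.0 / m)          # primal copy (item-separable)
    z = x.copy()                          # consensus copy (owner constraints)
    lam = np.zeros((n, m))                # scaled dual variables
    for _ in range(iters):
        # Item-wise update: each row is independent => trivially parallel.
        logits = (value - lam) / rho + np.log(z + 1e-12)
        x = np.exp(logits - logits.max(axis=1, keepdims=True))
        x /= x.sum(axis=1, keepdims=True)            # each item sums to 1
        # Owner-wise update: scale columns down to respect capacities.
        col = x.sum(axis=0)
        z = x * np.minimum(1.0, capacity / (col + 1e-12))
        lam += rho * (x - z)                          # dual ascent step
    return x
```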
|
[
{
"version": "v1",
"created": "Mon, 24 Oct 2022 06:08:52 GMT"
}
] | 2022-11-01T00:00:00 |
[
[
"Zhou",
"Jun",
""
],
[
"Qi",
"Feng",
""
],
[
"Hua",
"Zhigang",
""
],
[
"Jian",
"Daohong",
""
],
[
"Liu",
"Ziqi",
""
],
[
"Wu",
"Hua",
""
],
[
"Zhang",
"Xingwen",
""
],
[
"Yang",
"Shuang",
""
]
] |
new_dataset
| 0.997286 |