| column | dtype | details |
|---|---|---|
| id | string | lengths 9–10 |
| submitter | string | lengths 2–52, nullable (⌀) |
| authors | string | lengths 4–6.51k |
| title | string | lengths 4–246 |
| comments | string | lengths 1–523, nullable (⌀) |
| journal-ref | string | lengths 4–345, nullable (⌀) |
| doi | string | lengths 11–120, nullable (⌀) |
| report-no | string | lengths 2–243, nullable (⌀) |
| categories | string | lengths 5–98 |
| license | string | 9 classes |
| abstract | string | lengths 33–3.33k |
| versions | list | |
| update_date | timestamp[s] | |
| authors_parsed | list | |
| prediction | string | 1 class |
| probability | float64 | 0.95–1 |
2211.06074
|
Stefano Scanzio
|
Gianluca Cena, Stefano Scanzio, Adriano Valenzano
|
SDMAC: A Software-Defined MAC for Wi-Fi to Ease Implementation of Soft
Real-time Applications
|
preprint, 11 pages
|
IEEE Transactions on Industrial Informatics, vol. 15, no. 6, pp.
3143-3154, June 2019
|
10.1109/TII.2018.2873205
| null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In distributed control systems where devices are connected through Wi-Fi,
direct access to low-level MAC operations may help applications to meet their
timing constraints. In particular, the ability to timely control single
transmission attempts on air, by means of software programs running at the user
space level, eases the implementation of mechanisms aimed at improving
communication timeliness and reliability. Relevant examples are deterministic
traffic scheduling, seamless channel redundancy, rate adaptation algorithms,
and so on. In this paper, we define a novel architecture, called SDMAC, which
in its current embodiment relies on conventional Linux PCs equipped with
commercial Wi-Fi adapters. A preliminary SDMAC implementation on a real testbed
and its experimental evaluation showed that integrating this paradigm in
existing protocol stacks constitutes a viable option, whose performance suits a
wide range of applications characterized by soft real-time requirements.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 09:09:22 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Cena",
"Gianluca",
""
],
[
"Scanzio",
"Stefano",
""
],
[
"Valenzano",
"Adriano",
""
]
] |
new_dataset
| 0.999571 |
2211.06109
|
Rafael Kiesel
|
Rafael Kiesel, André Schidler
|
A Dynamic MaxSAT-based Approach to Directed Feedback Vertex Sets
|
17 pages + 5 pages of appendix
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new approach to the Directed Feedback Vertex Set Problem
(DFVSP), where the input is a directed graph and the solution is a minimum set
of vertices whose removal makes the graph acyclic.
Our approach, implemented in the solver DAGer, is based on two novel
contributions: Firstly, we add a wide range of data reductions that are
partially inspired by reductions for the similar vertex cover problem. For
this, we give a theoretical basis for lifting reductions from vertex cover to
DFVSP but also incorporate novel ideas into strictly more general and new DFVSP
reductions.
Secondly, we propose dynamically encoding DFVSP in propositional logic using
cycle propagation for improved performance. Cycle propagation builds on the
idea that a limited number of the constraints in a propositional encoding is
usually already sufficient for finding an optimal solution. Our algorithm,
therefore, starts with a small number of constraints and cycle propagation adds
additional constraints when necessary. We propose an efficient integration of
cycle propagation into the workflow of MaxSAT solvers, further improving the
performance of our algorithm.
Our extensive experimental evaluation shows that DAGer significantly
outperforms the state-of-the-art solvers and that our data reductions alone
directly solve many of the instances.
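As a hedged illustration of the cycle propagation idea described above, the sketch below materializes a violated cycle constraint only when the current solution still leaves a cycle intact. The greedy hitting-set step is a hypothetical stand-in for the MaxSAT solver DAGer actually uses, and networkx is assumed for cycle detection.

```python
import networkx as nx

def lazy_dfvs(g: nx.DiGraph):
    """Toy sketch of lazy cycle constraints for DFVSP: start with no
    constraints and add one only when the current solution leaves a
    cycle intact. A greedy choice stands in for the MaxSAT solver."""
    removed = set()
    while True:
        try:
            cycle = nx.find_cycle(g.subgraph(g.nodes - removed))
        except nx.NetworkXNoCycle:
            return removed  # remaining graph is acyclic
        # Violated constraint: at least one vertex of `cycle` must go.
        nodes = {u for u, v in cycle}
        removed.add(max(nodes, key=g.degree))  # greedy stand-in

g = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4), (4, 2)])
print(lazy_dfvs(g))  # a feasible (not necessarily minimum) solution
```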
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 10:25:37 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Kiesel",
"Rafael",
""
],
[
"Schidler",
"André",
""
]
] |
new_dataset
| 0.993095 |
2211.06195
|
Changhwa Lee
|
Changhwa Lee, Junuk Cha, Hansol Lee, Seongyeong Lee, Donguk Kim,
Seungryul Baek
|
HOReeNet: 3D-aware Hand-Object Grasping Reenactment
|
5 pages, 5 figures
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present HOReeNet, which tackles the novel task of manipulating images
involving hands, objects, and their interactions. Especially, we are interested
in transferring objects of source images to target images and manipulating 3D
hand postures to tightly grasp the transferred objects. Furthermore, the
manipulation needs to be reflected in the 2D image space. In our reenactment
scenario involving hand-object interactions, 3D reconstruction becomes
essential as 3D contact reasoning between hands and objects is required to
achieve a tight grasp. At the same time, to obtain high-quality 2D images from
3D space, well-designed 3D-to-2D projection and image refinement are required.
Our HOReeNet is the first fully differentiable framework proposed for such a
task. On hand-object interaction datasets, we compared our HOReeNet to the
conventional image translation and reenactment algorithms. We demonstrated
that our approach achieves state-of-the-art performance on the proposed task.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 13:35:27 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Lee",
"Changhwa",
""
],
[
"Cha",
"Junuk",
""
],
[
"Lee",
"Hansol",
""
],
[
"Lee",
"Seongyeong",
""
],
[
"Kim",
"Donguk",
""
],
[
"Baek",
"Seungryul",
""
]
] |
new_dataset
| 0.999569 |
2211.06235
|
Zhang Kaiduo
|
Kaiduo Zhang, Muyi Sun, Jianxin Sun, Binghao Zhao, Kunbo Zhang, Zhenan
Sun, Tieniu Tan
|
HumanDiffusion: a Coarse-to-Fine Alignment Diffusion Framework for
Controllable Text-Driven Person Image Generation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-driven person image generation is an emerging and challenging task in
cross-modality image generation. Controllable person image generation promotes
a wide range of applications such as digital human interaction and virtual
try-on. However, previous methods mostly employ single-modality information as
the prior condition (e.g. pose-guided person image generation), or utilize the
preset words for text-driven human synthesis. Introducing a sentence composed
of free words with an editable semantic pose map to describe person appearance
is a more user-friendly way. In this paper, we propose HumanDiffusion, a
coarse-to-fine alignment diffusion framework, for text-driven person image
generation. Specifically, two collaborative modules are proposed, the Stylized
Memory Retrieval (SMR) module for fine-grained feature distillation in data
processing and the Multi-scale Cross-modality Alignment (MCA) module for
coarse-to-fine feature alignment in diffusion. These two modules guarantee the
alignment quality of the text and image, from image-level to feature-level,
from low-resolution to high-resolution. As a result, HumanDiffusion realizes
open-vocabulary person image generation with desired semantic poses. Extensive
experiments conducted on DeepFashion demonstrate the superiority of our method
compared with previous approaches. Moreover, better results could be obtained
for complicated person images with various details and uncommon poses.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 14:30:34 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Zhang",
"Kaiduo",
""
],
[
"Sun",
"Muyi",
""
],
[
"Sun",
"Jianxin",
""
],
[
"Zhao",
"Binghao",
""
],
[
"Zhang",
"Kunbo",
""
],
[
"Sun",
"Zhenan",
""
],
[
"Tan",
"Tieniu",
""
]
] |
new_dataset
| 0.9998 |
2211.06241
|
Matias Valdenegro-Toro
|
Lokesh Veeramacheneni and Matias Valdenegro-Toro
|
A Benchmark for Out of Distribution Detection in Point Cloud 3D Semantic
Segmentation
|
4 pages, Robot Learning Workshop @ NeurIPS 2022
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safety-critical applications like autonomous driving use Deep Neural Networks
(DNNs) for object detection and segmentation. DNNs can fail unpredictably when
they observe an Out-of-Distribution (OOD) input, leading to catastrophic
consequences. Existing OOD detection methods have been studied extensively for
image inputs but have not been explored much for LiDAR inputs. In this study,
we therefore propose two datasets for benchmarking OOD detection in 3D semantic
segmentation. We use Maximum Softmax Probability and Entropy scores, generated
using Deep Ensembles and Flipout versions of RandLA-Net, as OOD scores. We
observe that Deep Ensembles outperform the Flipout model in OOD detection, with
greater AUROC scores on both datasets.
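Both OOD scores named here are standard and easy to state precisely; a minimal sketch, assuming per-point logits from any segmentation model (RandLA-Net with Deep Ensembles or Flipout in the paper's setting):

```python
import numpy as np

def ood_scores(logits):
    """Maximum Softmax Probability (high = in-distribution) and
    predictive entropy (high = OOD), computed per point from logits."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    msp = p.max(axis=-1)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return msp, entropy

msp, ent = ood_scores(np.array([[2.0, 0.1, -1.0], [0.3, 0.2, 0.1]]))
print(msp, ent)
```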
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 14:33:51 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Veeramacheneni",
"Lokesh",
""
],
[
"Valdenegro-Toro",
"Matias",
""
]
] |
new_dataset
| 0.972502 |
2211.06267
|
Nikhil Kumar
|
Tobias Friedrich, Davis Issac, Nikhil Kumar, Nadym Mallek, Ziena Zeif
|
Approximate Max-Flow Min-Multicut Theorem for Graphs of Bounded
Treewidth
| null | null | null | null |
cs.DS cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
We prove an approximate max-multiflow min-multicut theorem for bounded
treewidth graphs. In particular, we show the following: Given a treewidth-$r$
graph, there exists a (fractional) multicommodity flow of value $f$, and a
multicut of capacity $c$ such that $ f \leq c \leq \mathcal{O}(\ln (r+1)) \cdot
f$. It is well known that the multiflow-multicut gap on an $r$-vertex (constant
degree) expander graph can be $\Omega(\ln r)$, and hence our result is tight up
to constant factors. Our proof is constructive, and we also obtain a polynomial
time $\mathcal{O}(\ln (r+1))$-approximation algorithm for the minimum multicut
problem on treewidth-$r$ graphs. Our algorithm proceeds by rounding the optimal
fractional solution to the natural linear programming relaxation of the
multicut problem. We introduce novel modifications to the well-known region
growing algorithm to facilitate the rounding while guaranteeing at most a
logarithmic factor loss in the treewidth.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 15:20:23 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Friedrich",
"Tobias",
""
],
[
"Issac",
"Davis",
""
],
[
"Kumar",
"Nikhil",
""
],
[
"Mallek",
"Nadym",
""
],
[
"Zeif",
"Ziena",
""
]
] |
new_dataset
| 0.998149 |
2211.06292
|
Claudia Vanea
|
Claudia Vanea, Jonathan Campbell, Omri Dodi, Liis Salumäe, Karen
Meir, Drorith Hochner-Celnikier, Hagit Hochner, Triin Laisk, Linda M. Ernst,
Cecilia M. Lindgren and Christoffer Nellåker
|
A New Graph Node Classification Benchmark: Learning Structure from
Histology Cell Graphs
|
Last two authors contributed equally. To be published at New
Frontiers In Graph Learning, Neurips 2022
| null | null | null |
cs.LG cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a new benchmark dataset, Placenta, for node classification in an
underexplored domain: predicting microanatomical tissue structures from cell
graphs in placenta histology whole slide images. This problem is uniquely
challenging for graph learning for a few reasons. Cell graphs are large (>1
million nodes per image), node features are varied (64-dimensions of 11 types
of cells), class labels are imbalanced (9 classes ranging from 0.21% of the
data to 40.0%), and cellular communities cluster into heterogeneously
distributed tissues of widely varying sizes (from 11 nodes to 44,671 nodes for
a single structure). Here, we release a dataset consisting of two cell graphs
from two placenta histology images totalling 2,395,747 nodes, 799,745 of which
have ground truth labels. We present inductive benchmark results for 7 scalable
models and show how the unique qualities of cell graphs can help drive the
development of novel graph neural network architectures.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 16:02:29 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Vanea",
"Claudia",
""
],
[
"Campbell",
"Jonathan",
""
],
[
"Dodi",
"Omri",
""
],
[
"Salumäe",
"Liis",
""
],
[
"Meir",
"Karen",
""
],
[
"Hochner-Celnikier",
"Drorith",
""
],
[
"Hochner",
"Hagit",
""
],
[
"Laisk",
"Triin",
""
],
[
"Ernst",
"Linda M.",
""
],
[
"Lindgren",
"Cecilia M.",
""
],
[
"Nellåker",
"Christoffer",
""
]
] |
new_dataset
| 0.99964 |
2211.06305
|
Shahad Al-Khalifa
|
Shahad Al-Khalifa
|
CryptoHalal: An Intelligent Decision-System for Identifying Halal and
Haram Cryptocurrencies
|
36 pages, 7 Figures, and 3 Tables
| null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this research, we discuss a rising issue for Muslims in today's world that
involves a financial and technical innovation, namely cryptocurrencies. We
found through a questionnaire that many Muslims have a hard time finding the
jurisprudence rulings on certain cryptocurrencies. Therefore, the
objective of this research is to investigate and identify features that play a
part in determining the jurisprudence rulings on cryptocurrencies. We have
collected a dataset containing 106 cryptocurrencies classified into 56 Halal
and 50 Haram cryptocurrencies, and used 20 handcrafted features. Moreover,
based on these identified features, we designed an intelligent system that
contains a Machine Learning model for classifying cryptocurrencies into Halal
and Haram.
|
[
{
"version": "v1",
"created": "Fri, 4 Nov 2022 17:34:09 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Al-Khalifa",
"Shahad",
""
]
] |
new_dataset
| 0.999743 |
2211.06323
|
Fabian Offert
|
Fabian Offert and Thao Phan
|
A Sign That Spells: DALL-E 2, Invisual Images and The Racial Politics of
Feature Space
| null | null | null | null |
cs.CY cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we examine how generative machine learning systems produce a
new politics of visual culture. We focus on DALL-E 2 and related models as an
emergent approach to image-making that operates through the cultural techniques
of feature extraction and semantic compression. These techniques, we argue, are
inhuman, invisual, and opaque, yet are still caught in a paradox that is
ironically all too human: the consistent reproduction of whiteness as a latent
feature of dominant visual culture. We use Open AI's failed efforts to 'debias'
their system as a critical opening to interrogate how systems like DALL-E 2
dissolve and reconstitute politically salient human concepts like race. This
example vividly illustrates the stakes of this moment of transformation, when
so-called foundation models reconfigure the boundaries of visual culture and
when 'doing' anti-racism means deploying quick technical fixes to mitigate
personal discomfort, or more importantly, potential commercial loss.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 17:49:17 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Offert",
"Fabian",
""
],
[
"Phan",
"Thao",
""
]
] |
new_dataset
| 0.995646 |
2211.06331
|
Egor Dmitriev
|
E. Dmitriev, M. W. Chekol and S. Wang
|
MGTCOM: Community Detection in Multimodal Graphs
|
10 pages, 4 figures
| null | null | null |
cs.SI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Community detection is the task of discovering groups of nodes sharing
similar patterns within a network. With recent advancements in deep learning,
methods utilizing graph representation learning and deep clustering have shown
great results in community detection. However, these methods often rely on the
topology of networks, (i) ignoring important features such as network
heterogeneity, temporality, multimodality, and other possibly relevant
features. Besides, (ii) the number of communities is not known a priori and is
often left to model selection. In addition, (iii) in multimodal networks all
nodes are assumed to be symmetrical in their features; while this is true for
homogeneous networks, most real-world networks are heterogeneous, and feature
availability often varies. In this paper, we propose a novel framework
(named MGTCOM) that overcomes the above challenges (i)--(iii). MGTCOM
identifies communities through multimodal feature learning by leveraging a new
sampling technique for unsupervised learning of temporal embeddings.
Importantly, MGTCOM is an end-to-end framework optimizing network embeddings,
communities, and the number of communities in tandem. In order to assess its
performance, we carried out an extensive evaluation on a number of multimodal
networks. We found out that our method is competitive against state-of-the-art
and performs well in inductive inference.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 16:11:03 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Dmitriev",
"E.",
""
],
[
"Chekol",
"M. W.",
""
],
[
"Wang",
"S.",
""
]
] |
new_dataset
| 0.977701 |
2211.06332
|
Joshua Springer
|
Joshua Springer
|
Autonomous Multirotor Landing on Landing Pads and Lava Flows
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Landing is a challenging part of autonomous drone flight and a great research
opportunity. This PhD project proposes to improve fiducial-based autonomous
landing algorithms by making them more flexible. Further, it leverages its location,
Iceland, to develop a method for landing on lava flows in cooperation with
analog Mars exploration missions taking place in Iceland now - and potentially
for future Mars landings.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 16:31:14 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Springer",
"Joshua",
""
]
] |
new_dataset
| 0.999052 |
2211.06335
|
Sheena Panthaplackel
|
Sheena Panthaplackel, Milos Gligoric, Junyi Jessy Li, Raymond J.
Mooney
|
Using Developer Discussions to Guide Fixing Bugs in Software
|
Accepted in the Findings of EMNLP 2022
| null | null | null |
cs.SE cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Automatically fixing software bugs is a challenging task. While recent work
showed that natural language context is useful in guiding bug-fixing models,
the approach required prompting developers to provide this context, which was
simulated through commit messages written after the bug-fixing code changes
were made. We instead propose using bug report discussions, which are available
before the task is performed and are also naturally occurring, avoiding the
need for any additional information from developers. For this, we augment
standard bug-fixing datasets with bug report discussions. Using these newly
compiled datasets, we demonstrate that various forms of natural language
context derived from such discussions can aid bug-fixing, even leading to
improved performance over using commit messages corresponding to the oracle
bug-fixing commits.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 16:37:33 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Panthaplackel",
"Sheena",
""
],
[
"Gligoric",
"Milos",
""
],
[
"Li",
"Junyi Jessy",
""
],
[
"Mooney",
"Raymond J.",
""
]
] |
new_dataset
| 0.997795 |
2211.06385
|
Md Vasimuddin
|
Md Vasimuddin, Ramanarayan Mohanty, Sanchit Misra, Sasikanth Avancha
|
DistGNN-MB: Distributed Large-Scale Graph Neural Network Training on x86
via Minibatch Sampling
| null | null | null | null |
cs.LG cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Training Graph Neural Networks at scale, on graphs containing billions of
vertices and edges, using minibatch sampling poses a key challenge:
strong-scaling graphs and training examples results in lower compute and higher
communication volume, with potential performance loss. DistGNN-MB employs a novel Historical
Embedding Cache combined with compute-communication overlap to address this
challenge. On a 32-node (64-socket) cluster of $3^{rd}$ generation Intel Xeon
Scalable Processors with 36 cores per socket, DistGNN-MB trains 3-layer
GraphSAGE and GAT models on OGBN-Papers100M to convergence with epoch times of
2 seconds and 4.9 seconds, respectively, on 32 compute nodes. At this scale,
DistGNN-MB trains GraphSAGE 5.2x faster than the widely-used DistDGL.
DistGNN-MB trains GraphSAGE and GAT 10x and 17.2x faster, respectively, as
compute nodes scale from 2 to 32.
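The historical embedding cache can be pictured as a staleness-bounded lookup for remote neighbors' embeddings; a minimal sketch, assuming an age-based reuse policy (the dict storage and eviction rule are illustrative, not DistGNN-MB's actual data structure):

```python
class HistoricalEmbeddingCache:
    """Serve a slightly stale embedding for a remote node when it is
    fresh enough; otherwise signal the caller to fetch and store it."""
    def __init__(self, max_age=5):
        self.store = {}       # node id -> (epoch written, embedding)
        self.max_age = max_age

    def get(self, node, epoch):
        hit = self.store.get(node)
        if hit is not None and epoch - hit[0] <= self.max_age:
            return hit[1]     # cache hit: communication avoided
        return None           # miss: fetch remotely, then call put()

    def put(self, node, epoch, emb):
        self.store[node] = (epoch, emb)

cache = HistoricalEmbeddingCache(max_age=2)
cache.put(42, epoch=0, emb=[0.1, 0.2])
print(cache.get(42, epoch=1), cache.get(42, epoch=5))  # hit, then miss
```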
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 18:07:33 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Vasimuddin",
"Md",
""
],
[
"Mohanty",
"Ramanarayan",
""
],
[
"Misra",
"Sanchit",
""
],
[
"Avancha",
"Sasikanth",
""
]
] |
new_dataset
| 0.99686 |
2211.06390
|
Mark Wyse
|
Mark Wyse, Daniel Petrisko, Farzam Gilani, Yuan-Mao Chueh, Paul Gao,
Dai Cheol Jung, Sripathi Muralitharan, Shashank Vijaya Ranga, Mark Oskin,
Michael Taylor
|
The BlackParrot BedRock Cache Coherence System
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents BP-BedRock, the open-source cache coherence protocol and
system implemented within the BlackParrot 64-bit RISC-V multicore processor.
BP-BedRock implements the BedRock directory-based MOESIF cache coherence
protocol and includes two different open-source coherence protocol engines, one
FSM-based and the other microcode programmable. Both coherence engines support
coherent uncacheable access to cacheable memory and L1-based atomic
read-modify-write operations.
Fitted within the BlackParrot multicore, BP-BedRock has been silicon
validated in a GlobalFoundries 12nm FinFET process and FPGA validated with both
coherence engines in 8-core configurations, booting Linux and running off the
shelf benchmarks. After describing BP-BedRock and the design of the two
coherence engines, we study their performance by analyzing processing occupancy
and running the Splash-3 benchmarks on the 8-core FPGA implementations. Careful
design and coherence-specific ISA extensions enable the programmable controller
to achieve performance within 1% of the fixed-function FSM controller on
average (2.3% worst-case) as demonstrated on our FPGA test system. Analysis
shows that the programmable coherence engine increases die area by only 4% in
an ASIC process and increases logic utilization by only 6.3% on FPGA with one
additional block RAM added per core.
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 18:21:44 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Wyse",
"Mark",
""
],
[
"Petrisko",
"Daniel",
""
],
[
"Gilani",
"Farzam",
""
],
[
"Chueh",
"Yuan-Mao",
""
],
[
"Gao",
"Paul",
""
],
[
"Jung",
"Dai Cheol",
""
],
[
"Muralitharan",
"Sripathi",
""
],
[
"Ranga",
"Shashank Vijaya",
""
],
[
"Oskin",
"Mark",
""
],
[
"Taylor",
"Michael",
""
]
] |
new_dataset
| 0.99609 |
2211.06408
|
Yunqi Miao
|
Yunqi Miao, Alexandros Lattas, Jiankang Deng, Jungong Han, Stefanos
Zafeiriou
|
Physically-Based Face Rendering for NIR-VIS Face Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the
significant domain gaps as well as a lack of sufficient data for cross-modality
model training. To overcome this problem, we propose a novel method for paired
NIR-VIS facial image generation. Specifically, we reconstruct 3D face shape and
reflectance from a large 2D facial dataset and introduce a novel method of
transforming the VIS reflectance to NIR reflectance. We then use a
physically-based renderer to generate a vast, high-resolution and
photorealistic dataset consisting of various poses and identities in the NIR
and VIS spectra. Moreover, to facilitate the identity feature learning, we
propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss, which not
only reduces the modality gap between NIR and VIS images at the domain level
but encourages the network to focus on the identity features instead of facial
details, such as poses and accessories. Extensive experiments conducted on four
challenging NIR-VIS face recognition benchmarks demonstrate that the proposed
method can achieve comparable performance with the state-of-the-art (SOTA)
methods without requiring any existing NIR-VIS face recognition datasets. With
slightly fine-tuning on the target NIR-VIS face recognition datasets, our
method can significantly surpass the SOTA performance. Code and pretrained
models are released under the insightface
(https://github.com/deepinsight/insightface/tree/master/recognition).
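The ID-MMD loss is identity-based and its exact form is not given in the abstract; as a hedged sketch, a generic maximum mean discrepancy with an RBF kernel between NIR and VIS feature sets looks like this (the kernel choice and bandwidth are assumptions):

```python
import numpy as np

def mmd_rbf(x, y, sigma=1.0):
    """Biased MMD estimate between two feature sets with an RBF kernel;
    a generic stand-in for the paper's identity-based ID-MMD."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
nir = rng.normal(size=(8, 16))           # NIR identity features
vis = rng.normal(loc=0.5, size=(8, 16))  # VIS identity features
print(mmd_rbf(nir, vis))
```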
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 18:48:16 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Miao",
"Yunqi",
""
],
[
"Lattas",
"Alexandros",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Han",
"Jungong",
""
],
[
"Zafeiriou",
"Stefanos",
""
]
] |
new_dataset
| 0.99769 |
2211.06420
|
Tiago Pimentel
|
Tiago Pimentel, Josef Valvoda, Niklas Stoehr, Ryan Cotterell
|
The Architectural Bottleneck Principle
|
Accepted at EMNLP 2022. Tiago Pimentel and Josef Valvoda contributed
equally to this work. Code available in
https://github.com/rycolab/attentional-probe
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we seek to measure how much information a component in a
neural network could extract from the representations fed into it. Our work
stands in contrast to prior probing work, most of which investigates how much
information a model's representations contain. This shift in perspective leads
us to propose a new principle for probing, the architectural bottleneck
principle: In order to estimate how much information a given component could
extract, a probe should look exactly like the component. Relying on this
principle, we estimate how much syntactic information is available to
transformers through our attentional probe, a probe that exactly resembles a
transformer's self-attention head. Experimentally, we find that, in three
models (BERT, ALBERT, and RoBERTa), a sentence's syntax tree is mostly
extractable by our probe, suggesting these models have access to syntactic
information while composing their contextual representations. Whether this
information is actually used by these models, however, remains an open
question.
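The principle that "a probe should look exactly like the component" invites a concrete picture: a probe shaped like one self-attention head is just a query/key bilinear scorer over frozen representations. A minimal sketch, with illustrative dimensions and a simple argmax decoding that are assumptions rather than the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head, n_tokens = 768, 64, 12

# Frozen contextual representations stand in for BERT-style outputs;
# Wq and Wk would be the probe's only trained parameters.
H = rng.normal(size=(n_tokens, d_model))
Wq = 0.02 * rng.normal(size=(d_model, d_head))
Wk = 0.02 * rng.normal(size=(d_model, d_head))

# Score token pairs exactly as a self-attention head would.
scores = (H @ Wq) @ (H @ Wk).T / np.sqrt(d_head)
print(scores.argmax(axis=1))  # one candidate syntactic head per token
```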
|
[
{
"version": "v1",
"created": "Fri, 11 Nov 2022 18:58:08 GMT"
}
] | 2022-11-14T00:00:00 |
[
[
"Pimentel",
"Tiago",
""
],
[
"Valvoda",
"Josef",
""
],
[
"Stoehr",
"Niklas",
""
],
[
"Cotterell",
"Ryan",
""
]
] |
new_dataset
| 0.987795 |
2111.12309
|
Yufei Xu
|
Yufei Xu, Qiming Zhang, Jing Zhang, Dacheng Tao
|
RegionCL: Can Simple Region Swapping Contribute to Contrastive Learning?
|
ECCV2022, 15 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning (SSL) methods have achieved significant success via
maximizing the mutual information between two augmented views, where cropping
is a popular augmentation technique. Cropped regions are widely used to
construct positive pairs, while the leftover regions after cropping have rarely
been explored in existing methods, although they together constitute the same
image instance and both contribute to the description of the category. In this
paper, we make the first attempt to demonstrate the importance of both regions
in cropping from a complete perspective and propose a simple yet effective
pretext task called Region Contrastive Learning (RegionCL). Specifically, given
two different images, we randomly crop a region (called the paste view) from
each image with the same size and swap them to compose two new images together
with the leftover regions (called the canvas view), respectively. Then, contrastive
pairs can be efficiently constructed according to the following simple
criteria, i.e., each view is (1) positive with views augmented from the same
original image and (2) negative with views augmented from other images. With
minor modifications to popular SSL methods, RegionCL exploits those abundant
pairs and helps the model distinguish the region features from both canvas and
paste views, therefore learning better visual representations. Experiments on
ImageNet, MS COCO, and Cityscapes demonstrate that RegionCL improves MoCo v2,
DenseCL, and SimSiam by large margins and achieves state-of-the-art performance
on classification, detection, and segmentation tasks. The code will be
available at https://github.com/Annbless/RegionCL.git.
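The core augmentation is simple enough to sketch: cut a same-sized region from two images and exchange it, producing a paste view embedded in a canvas view. The fixed region size and single shared location below are simplifying assumptions; the linked repository has the exact recipe.

```python
import numpy as np

def region_swap(img_a, img_b, size=56, seed=0):
    """Swap one randomly placed size x size region between two images,
    returning two composites (paste view inside a canvas view)."""
    h, w = img_a.shape[:2]
    rng = np.random.default_rng(seed)
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    a, b = img_a.copy(), img_b.copy()
    a[y:y+size, x:x+size] = img_b[y:y+size, x:x+size]
    b[y:y+size, x:x+size] = img_a[y:y+size, x:x+size]
    return a, b

a, b = region_swap(np.zeros((224, 224, 3)), np.ones((224, 224, 3)))
print(a.sum(), b.sum())  # each composite carries 56*56*3 swapped pixels
```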
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 07:19:46 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 09:03:00 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Xu",
"Yufei",
""
],
[
"Zhang",
"Qiming",
""
],
[
"Zhang",
"Jing",
""
],
[
"Tao",
"Dacheng",
""
]
] |
new_dataset
| 0.989984 |
2204.08009
|
Tatiana Shavrina
|
Dina Pisarevskaya, Tatiana Shavrina
|
WikiOmnia: generative QA corpus on the whole Russian Wikipedia
|
Accepted to GEM Workshop, EMNLP 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The general QA field has been developing its methodology referencing the
Stanford Question Answering Dataset (SQuAD) as the key benchmark.
However, compiling factual questions is accompanied by time- and
labour-consuming annotation, limiting the training data's potential size. We
present the WikiOmnia dataset, a new publicly available set of QA-pairs and
corresponding Russian Wikipedia article summary sections, composed with a fully
automated generative pipeline. The dataset includes every available article
from Wikipedia for the Russian language. The WikiOmnia pipeline is available
open-source and is also tested for creating SQuAD-formatted QA on other
domains, like news texts, fiction, and social media. The resulting dataset
includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs
with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for
ruT5-large) and cleaned data with strict automatic verification (over 160,000
QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with
paragraphs for ruT5-large).
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2022 12:59:36 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Apr 2022 12:36:01 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Nov 2022 20:25:12 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Pisarevskaya",
"Dina",
""
],
[
"Shavrina",
"Tatiana",
""
]
] |
new_dataset
| 0.999757 |
2204.08524
|
Sugandha Doda
|
Sugandha Doda, Yuanyuan Wang, Matthias Kahl, Eike Jens Hoffmann, Kim
Ouan, Hannes Taubenböck, Xiao Xiang Zhu
|
So2Sat POP -- A Curated Benchmark Data Set for Population Estimation
from Space on a Continental Scale
| null | null | null | null |
cs.LG cs.AI cs.CY stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Obtaining a dynamic population distribution is key to many decision-making
processes such as urban planning, disaster management and most importantly
helping the government to better allocate socio-technical supply. For the
aspiration of these objectives, good population data is essential. The
traditional method of collecting population data through the census is
expensive and tedious. In recent years, statistical and machine learning
methods have been developed to estimate population distribution. Most of the
methods use data sets that are either developed on a small scale or not
publicly available yet. Thus, the development and evaluation of new methods
become challenging. We fill this gap by providing a comprehensive data set for
population estimation in 98 European cities. The data set comprises a digital
elevation model, local climate zone, land use proportions, nighttime lights in
combination with multi-spectral Sentinel-2 imagery, and data from the Open
Street Map initiative. We anticipate that it would be a valuable addition to
the research community for the development of sophisticated approaches in the
field of population estimation.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 07:30:43 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 07:25:37 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Doda",
"Sugandha",
""
],
[
"Wang",
"Yuanyuan",
""
],
[
"Kahl",
"Matthias",
""
],
[
"Hoffmann",
"Eike Jens",
""
],
[
"Ouan",
"Kim",
""
],
[
"Taubenböck",
"Hannes",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
new_dataset
| 0.994705 |
2204.12674
|
Yuqi Chen
|
Yuqi Chen, Keming Chen, Xian Sun, Zequn Zhang
|
A Span-level Bidirectional Network for Aspect Sentiment Triplet
Extraction
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aspect Sentiment Triplet Extraction (ASTE) is a new fine-grained sentiment
analysis task that aims to extract triplets of aspect terms, sentiments, and
opinion terms from review sentences. Recently, span-level models have achieved
gratifying results on the ASTE task by taking advantage of the predictions of
all possible spans. Since enumerating all possible spans significantly increases the number of
potential aspect and opinion candidates, it is crucial and challenging to
efficiently extract the triplet elements among them. In this paper, we present
a span-level bidirectional network which utilizes all possible spans as input
and extracts triplets from spans bidirectionally. Specifically, we devise both
the aspect decoder and opinion decoder to decode the span representations and
extract triplets from the aspect-to-opinion and opinion-to-aspect directions. With
these two decoders complementing each other, the whole network can extract
triplets from spans more comprehensively. Moreover, considering that mutual
exclusion cannot be guaranteed between the spans, we design a similar span
separation loss to facilitate the downstream task of distinguishing the correct
span by expanding the KL divergence of similar spans during the training
process; in the inference process, we adopt an inference strategy to remove
conflicting triplets from the results based on their confidence scores.
Experimental results show that our framework not only significantly outperforms
state-of-the-art methods, but also achieves better performance in predicting
triplets with multi-token entities and extracting triplets from sentences that
contain multiple triplets.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 02:55:43 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 10:27:27 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Chen",
"Yuqi",
""
],
[
"Chen",
"Keming",
""
],
[
"Sun",
"Xian",
""
],
[
"Zhang",
"Zequn",
""
]
] |
new_dataset
| 0.970148 |
2204.13384
|
Jan Philip Wahle
|
Jan Philip Wahle and Terry Ruas and Saif M. Mohammad and Bela Gipp
|
D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of
Computer Science Research
| null | null | null | null |
cs.DL cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
DBLP is the largest open-access repository of scientific articles on computer
science and provides metadata associated with publications, authors, and
venues. We retrieved more than 6 million publications from DBLP and extracted
pertinent metadata (e.g., abstracts, author affiliations, citations) from the
publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to
identify trends in research activity, productivity, focus, bias, accessibility,
and impact of computer science research. We present an initial analysis focused
on the volume of computer science research (e.g., number of papers, authors,
research activity), trends in topics of interest, and citation patterns. Our
findings show that computer science is a growing research field (approx. 15%
annually), with an active and collaborative researcher community. While papers
in recent years present more bibliographical entries in comparison to previous
decades, the average number of citations has been declining. Investigating
papers' abstracts reveals that recent topic trends are clearly reflected in D3.
Finally, we list further applications of D3 and pose supplemental research
questions. The D3 dataset, our findings, and source code are publicly available
for research purposes.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 09:59:52 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2022 15:07:17 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Nov 2022 15:03:09 GMT"
},
{
"version": "v4",
"created": "Thu, 10 Nov 2022 10:55:39 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Wahle",
"Jan Philip",
""
],
[
"Ruas",
"Terry",
""
],
[
"Mohammad",
"Saif M.",
""
],
[
"Gipp",
"Bela",
""
]
] |
new_dataset
| 0.99957 |
2205.01663
|
Daniel M. Ziegler
|
Daniel M. Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter
Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Ben Weinstein-Raun,
Daniel de Haas, Buck Shlegeris, Nate Thomas
|
Adversarial Training for High-Stakes Reliability
|
30 pages, 7 figures, NeurIPS camera-ready
| null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In the future, powerful AI systems may be deployed in high-stakes settings,
where a single failure could be catastrophic. One technique for improving AI
safety in high-stakes settings is adversarial training, which uses an adversary
to generate examples to train on in order to achieve better worst-case
performance.
In this work, we used a safe language generation task (``avoid injuries'') as
a testbed for achieving high reliability through adversarial training. We
created a series of adversarial training techniques -- including a tool that
assists human adversaries -- to find and eliminate failures in a classifier
that filters text completions suggested by a generator. In our task, we
determined that we can set very conservative classifier thresholds without
significantly impacting the quality of the filtered outputs. We found that
adversarial training increased robustness to the adversarial attacks that we
trained on -- doubling the time for our contractors to find adversarial
examples both with our tool (from 13 to 26 minutes) and without (from 20 to 44
minutes) -- without affecting in-distribution performance.
We hope to see further work in the high-stakes reliability setting, including
more powerful tools for enhancing human adversaries and better ways to measure
high levels of reliability, until we can confidently rule out the possibility
of catastrophic deployment-time failures of powerful models.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2022 17:50:06 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 17:58:20 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Sep 2022 17:36:48 GMT"
},
{
"version": "v4",
"created": "Fri, 7 Oct 2022 01:30:53 GMT"
},
{
"version": "v5",
"created": "Thu, 10 Nov 2022 01:02:29 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Ziegler",
"Daniel M.",
""
],
[
"Nix",
"Seraphina",
""
],
[
"Chan",
"Lawrence",
""
],
[
"Bauman",
"Tim",
""
],
[
"Schmidt-Nielsen",
"Peter",
""
],
[
"Lin",
"Tao",
""
],
[
"Scherlis",
"Adam",
""
],
[
"Nabeshima",
"Noa",
""
],
[
"Weinstein-Raun",
"Ben",
""
],
[
"de Haas",
"Daniel",
""
],
[
"Shlegeris",
"Buck",
""
],
[
"Thomas",
"Nate",
""
]
] |
new_dataset
| 0.988067 |
2207.11876
|
Kohei Yamashita
|
Kohei Yamashita, Yuto Enyo, Shohei Nobuhara, Ko Nishino
|
nLMVS-Net: Deep Non-Lambertian Multi-View Stereo
|
Accepted to WACV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel multi-view stereo (MVS) method that can simultaneously
recover not just per-pixel depth but also surface normals, together with the
reflectance of textureless, complex non-Lambertian surfaces captured under
known but natural illumination. Our key idea is to formulate MVS as an
end-to-end learnable network, which we refer to as nLMVS-Net, that seamlessly
integrates radiometric cues to leverage surface normals as view-independent
surface features for learned cost volume construction and filtering. It first
estimates surface normals as pixel-wise probability densities for each view
with a novel shape-from-shading network. These per-pixel surface normal
densities and the input multi-view images are then input to a novel cost volume
filtering network that learns to recover per-pixel depth and surface normal.
The reflectance is also explicitly estimated by alternating with geometry
reconstruction. Extensive quantitative evaluations on newly established
synthetic and real-world datasets show that nLMVS-Net can robustly and
accurately recover the shape and reflectance of complex objects in natural
settings.
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 02:20:21 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 09:00:36 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Yamashita",
"Kohei",
""
],
[
"Enyo",
"Yuto",
""
],
[
"Nobuhara",
"Shohei",
""
],
[
"Nishino",
"Ko",
""
]
] |
new_dataset
| 0.966891 |
2208.07339
|
Tim Dettmers
|
Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer
|
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale
|
Published at NeurIPS 2022. Camera-ready version
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models have been widely adopted but require significant GPU
memory for inference. We develop a procedure for Int8 matrix multiplication for
feed-forward and attention projection layers in transformers, which cuts the
memory needed for inference by half while retaining full precision performance.
With our method, a 175B parameter 16/32-bit checkpoint can be loaded, converted
to Int8, and used immediately without performance degradation. This is made
possible by understanding and working around properties of highly systematic
emergent features in transformer language models that dominate attention and
transformer predictive performance. To cope with these features, we develop a
two-part quantization procedure, LLM.int8(). We first use vector-wise
quantization with separate normalization constants for each inner product in
the matrix multiplication, to quantize most of the features. However, for the
emergent outliers, we also include a new mixed-precision decomposition scheme,
which isolates the outlier feature dimensions into a 16-bit matrix
multiplication, while more than 99.9% of values are still multiplied in 8-bit.
Using LLM.int8(), we show empirically it is possible to perform inference in
LLMs with up to 175B parameters without any performance degradation. This
result makes such models much more accessible, for example making it possible
to use OPT-175B/BLOOM on a single server with consumer GPUs. We open-source our
software.
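The two-part procedure reads directly as code: vector-wise scales (one per row of X and one per column of W, so each inner product gets its own pair of normalization constants) for most dimensions, and a higher-precision path for the emergent outlier dimensions. The sketch below, in plain NumPy with an illustrative threshold, mirrors that description but is not the authors' released kernels.

```python
import numpy as np

def llm_int8_matmul(X, W, threshold=6.0):
    """Vector-wise int8 matmul with mixed-precision decomposition:
    outlier feature dimensions bypass quantization entirely."""
    cols = np.arange(X.shape[1])
    outlier = cols[np.abs(X).max(axis=0) >= threshold]
    regular = np.setdiff1d(cols, outlier)

    Xr, Wr = X[:, regular], W[regular, :]
    sx = np.abs(Xr).max(axis=1, keepdims=True) / 127.0 + 1e-12
    sw = np.abs(Wr).max(axis=0, keepdims=True) / 127.0 + 1e-12
    Xq = np.round(Xr / sx).astype(np.int8)
    Wq = np.round(Wr / sw).astype(np.int8)
    out = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)
    return out + X[:, outlier] @ W[outlier, :]  # fp16 path in the paper

rng = np.random.default_rng(0)
X, W = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))
X[:, 2] *= 10  # simulate one emergent outlier feature dimension
print(np.abs(llm_int8_matmul(X, W) - X @ W).max())  # small error
```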
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 17:08:50 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 18:14:31 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Dettmers",
"Tim",
""
],
[
"Lewis",
"Mike",
""
],
[
"Belkada",
"Younes",
""
],
[
"Zettlemoyer",
"Luke",
""
]
] |
new_dataset
| 0.999376 |
2209.03561
|
Ghanta Sai Krishna
|
Sanskar Singh, Shivaibhav Dewangan, Ghanta Sai Krishna, Vandit Tyagi,
Sainath Reddy, Prathistith Raj Medi
|
Video Vision Transformers for Violence Detection
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Law enforcement and city safety are significantly impacted by detecting
violent incidents in surveillance systems. Although modern (smart) cameras are
widely available and affordable, such technological solutions are ineffective
in most instances. Furthermore, personnel monitoring CCTV recordings frequently
react belatedly, potentially resulting in catastrophe to people and property.
Automated detection of violence that enables swift action is therefore crucial.
The proposed solution uses a novel end-to-end deep learning-based video vision
transformer (ViViT) that can proficiently discern fights, hostile movements,
and violent events in video sequences. The study presents a data augmentation
strategy to overcome the weaker inductive bias of vision transformers when
trained on smaller datasets. The evaluated results can subsequently be sent to
the local concerned authority, and the captured video can be analyzed. In
comparison to state-of-the-art (SOTA) approaches, the proposed method achieved
auspicious performance on several challenging benchmark datasets.
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2022 04:44:01 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 12:29:44 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Singh",
"Sanskar",
""
],
[
"Dewangan",
"Shivaibhav",
""
],
[
"Krishna",
"Ghanta Sai",
""
],
[
"Tyagi",
"Vandit",
""
],
[
"Reddy",
"Sainath",
""
],
[
"Medi",
"Prathistith Raj",
""
]
] |
new_dataset
| 0.999041 |
2211.00974
|
Ilias Chalkidis
|
Dimitris Mamakas, Petros Tsotsi, Ion Androutsopoulos, Ilias Chalkidis
|
Processing Long Legal Documents with Pre-trained Transformers: Modding
LegalBERT and Longformer
|
9 pages, long paper at NLLP Workshop 2022 proceedings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained Transformers currently dominate most NLP tasks. They impose,
however, limits on the maximum input length (512 sub-words in BERT), which are
too restrictive in the legal domain. Even sparse-attention models, such as
Longformer and BigBird, which increase the maximum input length to 4,096
sub-words, severely truncate texts in three of the six datasets of LexGLUE.
Simpler linear classifiers with TF-IDF features can handle texts of any length,
require far less resources to train and deploy, but are usually outperformed by
pre-trained Transformers. We explore two directions to cope with long legal
texts: (i) modifying a Longformer warm-started from LegalBERT to handle even
longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use
TF-IDF representations. The first approach is the best in terms of performance,
surpassing a hierarchical version of LegalBERT, which was the previous state of
the art in LexGLUE. The second approach leads to computationally more efficient
models at the expense of lower performance, but the resulting models still
outperform overall a linear SVM with TF-IDF features in long legal document
classification.
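The linear TF-IDF baseline that the Transformers are compared against takes only a few lines of scikit-learn; the corpus, labels, and `max_features` value here are placeholders, not LexGLUE data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus standing in for long legal documents.
docs = ["first legal document ...", "second legal document ..."]
labels = [0, 1]

model = make_pipeline(TfidfVectorizer(max_features=50_000), LinearSVC())
model.fit(docs, labels)  # handles texts of any length, trains cheaply
print(model.predict(["an unseen legal document ..."]))
```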
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 09:27:01 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 16:10:57 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Mamakas",
"Dimitris",
""
],
[
"Tsotsi",
"Petros",
""
],
[
"Androutsopoulos",
"Ion",
""
],
[
"Chalkidis",
"Ilias",
""
]
] |
new_dataset
| 0.966006 |
2211.02369
|
Tatsuya Chuman
|
Tatsuya Chuman and Hitoshi Kiya
|
A Jigsaw Puzzle Solver-based Attack on Block-wise Image Encryption for
Privacy-preserving DNNs
|
To be appeared in IWAIT2023
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Privacy-preserving deep neural networks (DNNs) have been proposed for
protecting data privacy on cloud servers. Although several encryption schemes
for visual protection have been proposed for privacy-preserving DNNs, several
attacks can restore visual information from encrypted images. On
the other hand, it has been confirmed that the block-wise image encryption
scheme which utilizes block and pixel shuffling is robust against several
attacks. In this paper, we propose a jigsaw puzzle solver-based attack to
restore visual information from encrypted images including block and pixel
shuffling. In experiments, images encrypted by using the block-wise image
encryption are mostly restored by using the proposed attack.
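For intuition, the block-shuffling step the attack targets can be sketched as a keyed permutation of image blocks; real schemes also shuffle pixels within blocks, and the image sides are assumed divisible by the block size here.

```python
import numpy as np

def block_shuffle(img, block=16, seed=0):
    """Permute non-overlapping blocks of an image with a seeded RNG
    (the seed plays the role of the secret key in this toy version)."""
    h, w = img.shape[:2]
    rows, cols = h // block, w // block
    blocks = [img[y*block:(y+1)*block, x*block:(x+1)*block]
              for y in range(rows) for x in range(cols)]
    perm = np.random.default_rng(seed).permutation(len(blocks))
    out = np.empty_like(img)
    for i, j in enumerate(perm):
        y, x = divmod(i, cols)
        out[y*block:(y+1)*block, x*block:(x+1)*block] = blocks[j]
    return out

img = np.arange(32 * 32).reshape(32, 32)
print(block_shuffle(img).shape)  # (32, 32), blocks rearranged
```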
|
[
{
"version": "v1",
"created": "Fri, 4 Nov 2022 10:54:21 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 12:09:28 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Chuman",
"Tatsuya",
""
],
[
"Kiya",
"Hitoshi",
""
]
] |
new_dataset
| 0.997755 |
2211.03995
|
Takuya Mieno
|
Takuya Mieno, Mitsuru Funakoshi, and Shunsuke Inenaga
|
Computing palindromes on a trie in linear time
|
accepted to ISAAC 2022
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A trie $\mathcal{T}$ is a rooted tree such that each edge is labeled by a
single character from the alphabet, and the labels of out-going edges from the
same node are mutually distinct. Given a trie $\mathcal{T}$ with $n$ edges, we
show how to compute all distinct palindromes and all maximal palindromes on
$\mathcal{T}$ in $O(n)$ time, in the case of integer alphabets of size
polynomial in $n$. This improves the state-of-the-art $O(n \log h)$-time
algorithms by Funakoshi et al. [PCS 2019], where $h$ is the height of
$\mathcal{T}$. Using our new algorithms, the eertree with suffix links for a
given trie $\mathcal{T}$ can readily be obtained in $O(n)$ time. Further, our
trie-based $O(n)$-space data structure allows us to report all distinct
palindromes and maximal palindromes in a query string represented in the trie
$\mathcal{T}$, in output optimal time. This is an improvement over an existing
(na\"ive) solution that precomputes and stores all distinct palindromes and
maximal palindromes for each and every string in the trie $\mathcal{T}$
separately, using a total $O(n^2)$ preprocessing time and space, and reports
them in output optimal time upon query.
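For intuition about the objects being computed, the special case of a path-shaped trie (a single string) admits a naive center-expansion baseline; the paper's contribution is achieving O(n) over an entire trie.

```python
def maximal_palindromes(s):
    """Naive O(n^2) baseline: report the maximal palindrome around each
    of the 2n-1 centers of a single string by direct expansion."""
    out, n = [], len(s)
    for c in range(2 * n - 1):         # n odd + (n-1) even centers
        l, r = c // 2, (c + 1) // 2
        while l >= 0 and r < n and s[l] == s[r]:
            l, r = l - 1, r + 1
        if r - l - 1 > 0:
            out.append(s[l + 1:r])     # maximal palindrome at center c
    return out

print(maximal_palindromes("abacaba"))
```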
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 04:24:53 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 04:37:24 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Mieno",
"Takuya",
""
],
[
"Funakoshi",
"Mitsuru",
""
],
[
"Inenaga",
"Shunsuke",
""
]
] |
new_dataset
| 0.996618 |
2211.04656
|
Daniel Davila
|
Daniel Davila, Dawei Du, Bryon Lewis, Christopher Funk, Joseph Van
Pelt, Roderick Collins, Kellie Corona, Matt Brown, Scott McCloskey, Anthony
Hoogs, Brian Clipp
|
MEVID: Multi-view Extended Videos with Identities for Video Person
Re-Identification
|
This paper was accepted to WACV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present the Multi-view Extended Videos with Identities
(MEVID) dataset for large-scale, video person re-identification (ReID) in the
wild. To our knowledge, MEVID represents the most-varied video person ReID
dataset, spanning an extensive indoor and outdoor environment across nine
unique dates in a 73-day window, various camera viewpoints, and entity clothing
changes. Specifically, we label the identities of 158 unique people wearing 598
outfits taken from 8,092 tracklets, with an average length of about 590 frames, seen
in 33 camera views from the very large-scale MEVA person activities dataset.
While other datasets have more unique identities, MEVID emphasizes a richer set
of information about each individual, such as: 4 outfits/identity vs. 2
outfits/identity in CCVID, 33 viewpoints across 17 locations vs. 6 in 5
simulated locations for MTA, and 10 million frames vs. 3 million for LS-VID.
Being based on the MEVA video dataset, we also inherit data that is
intentionally demographically balanced to the continental United States. To
accelerate the annotation process, we developed a semi-automatic annotation
framework and GUI that combines state-of-the-art real-time models for object
detection, pose estimation, person ReID, and multi-object tracking. We evaluate
several state-of-the-art methods on MEVID challenge problems and
comprehensively quantify their robustness in terms of changes of outfit, scale,
and background location. Our quantitative analysis on the realistic, unique
aspects of MEVID shows that there are significant remaining challenges in video
person ReID and indicates important directions for future research.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 03:07:31 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 14:35:24 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Davila",
"Daniel",
""
],
[
"Du",
"Dawei",
""
],
[
"Lewis",
"Bryon",
""
],
[
"Funk",
"Christopher",
""
],
[
"Van Pelt",
"Joseph",
""
],
[
"Collins",
"Roderick",
""
],
[
"Corona",
"Kellie",
""
],
[
"Brown",
"Matt",
""
],
[
"McCloskey",
"Scott",
""
],
[
"Hoogs",
"Anthony",
""
],
[
"Clipp",
"Brian",
""
]
] |
new_dataset
| 0.999791 |
2211.04971
|
Michele Cafagna
|
Michele Cafagna, Kees van Deemter, Albert Gatt
|
Understanding Cross-modal Interactions in V&L Models that Generate Scene
Descriptions
| null | null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Image captioning models tend to describe images in an object-centric way,
emphasising visible objects. But image descriptions can also abstract away from
objects and describe the type of scene depicted. In this paper, we explore the
potential of a state-of-the-art Vision and Language model, VinVL, to caption
images at the scene level using (1) a novel dataset which pairs images with
both object-centric and scene descriptions. Through (2) an in-depth analysis of
the effect of the fine-tuning, we show (3) that a small amount of curated data
suffices to generate scene descriptions without losing the capability to
identify object-level concepts in the scene; the model acquires a more holistic
view of the image compared to when object-centric descriptions are generated.
We discuss the parallels between these results and insights from computational
and cognitive science research on scene perception.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 15:33:51 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2022 16:49:37 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Cafagna",
"Michele",
""
],
[
"van Deemter",
"Kees",
""
],
[
"Gatt",
"Albert",
""
]
] |
new_dataset
| 0.993863 |
2211.05123
|
Luca Schaller
|
Luca Schaller
|
Up to 58 Tets/Hex to untangle Hex meshes
|
34 pages, 17 figures, Bachelorthesis
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
The request for high-quality solutions continually grows in a world where
more and more tasks are executed through computers. This also holds for fields
such as engineering, computer graphics, etc., which use meshes to solve their
problems. A mesh is a combination of some elementary elements, for which
hexahedral elements are a good choice thanks to their superior numerical
features. The solutions reached using these meshes depend on the quality of the
elements making up the mesh. The problem is that these individual elements can
take on a shape which prevents accurate computations. Such elements are
considered to be invalid. To allow users to get accurate results, the shape of
these elements must therefore be changed to be considered valid. In this work,
we combine the results of two papers to scan a mesh, identify possible invalid
elements and then change the shape of these elements to make them valid. With
this combination, we end up with a working algorithm. But there is room for
improvement, which is why we introduce multiple improvements to speed up the
algorithm as well as make it more robust. We then test our algorithm and
compare it to another approach. This work, therefore, introduces a new
efficient and robust approach to untangle invalid meshes.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 08:38:07 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Schaller",
"Luca",
""
]
] |
new_dataset
| 0.968499 |
2211.05125
|
David Kouřil
|
Matúš Talčík, Filip Opálený, Tereza Clarence,
Katarína Furmanová, Jan Byška, Barbora Kozlíková, David
Kouřil
|
ChromoSkein: Untangling Three-Dimensional Chromatin Fiber With a
Web-Based Visualization Framework
| null | null | null | null |
cs.HC cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We present ChromoSkein, a web-based tool for visualizing three-dimensional
chromatin models. The spatial organization of chromatin is essential to its
function. Experimental methods, namely Hi-C, reveal the spatial conformation
but output a 2D matrix representation. Biologists leverage simulation to bring
this information back to 3D, assembling a 3D chromatin shape prediction using
the 2D matrices as constraints. Our overview of existing chromatin
visualization software shows that the available tools limit the utility of 3D
through ineffective shading and a lack of advanced interactions. We designed
ChromoSkein to encourage analytical work directly with the 3D representation.
Our tool features a 3D view that supports understanding the shape of the highly
tangled chromatin fiber and the spatial relationships of its parts. Users can
explore and filter the 3D model using two interactions. First, they can manage
occlusion both by toggling the visibility of semantic parts and by adding
cutting planes. Second, they can segment the model through the creation of
custom selections. To complement the 3D view, we link the spatial
representation with non-spatial genomic data, such as 2D Hi-C maps and 1D
genomic signals. We demonstrate the utility of ChromoSkein in two exemplary use
cases that examine functional genomic loci in the spatial context of
chromosomes and the whole genome.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 12:37:52 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Talčík",
"Matúš",
""
],
[
"Opálený",
"Filip",
""
],
[
"Clarence",
"Tereza",
""
],
[
"Furmanová",
"Katarína",
""
],
[
"Byška",
"Jan",
""
],
[
"Kozlíková",
"Barbora",
""
],
[
"Kouřil",
"David",
""
]
] |
new_dataset
| 0.999374 |
2211.05229
|
Rajdeep Adak
|
Rajdeep Adak, Abhishek Kumbhar, Rajas Pathare, Sagar Gowda
|
Automatic Number Plate Recognition (ANPR) with YOLOv3-CNN
|
29 pages, 4 figures, 2 tables
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a YOLOv3-CNN pipeline for detecting vehicles, segregating number
plates, and locally storing the final recognized characters. Vehicle
identification is performed under various image correction schemes to determine
the effect of environmental factors (angle of perception, luminosity,
motion-blurring, and multi-line custom font etc.). A YOLOv3 object detection
model was trained to identify vehicles from a dataset of traffic images. A
second YOLOv3 layer was trained to identify number plates from vehicle images.
Based upon correction schemes, individual characters were segregated and
verified against real-time data to calculate accuracy of this approach. While
characters under direct view were recognized accurately, some numberplates
affected by environmental factors had reduced levels of accuracy. We summarize
the results under various environmental factors against real-time data and
produce an overall accuracy of the pipeline model.
|
[
{
"version": "v1",
"created": "Mon, 7 Nov 2022 12:59:01 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Adak",
"Rajdeep",
""
],
[
"Kumbhar",
"Abhishek",
""
],
[
"Pathare",
"Rajas",
""
],
[
"Gowda",
"Sagar",
""
]
] |
new_dataset
| 0.999857 |
2211.05232
|
Moran Beladev
|
Fengjun Wang, Sarai Mizrachi, Moran Beladev, Guy Nadav, Gil Amsalem,
Karen Lastmann Assaraf, Hadas Harush Boker
|
MuMIC -- Multimodal Embedding for Multi-label Image Classification with
Tempered Sigmoid
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-label image classification is a foundational topic in various domains.
Multimodal learning approaches have recently achieved outstanding results in
image representation and single-label image classification. For instance,
Contrastive Language-Image Pretraining (CLIP) demonstrates impressive
image-text representation learning abilities and is robust to natural
distribution shifts. This success inspires us to leverage multimodal learning
for multi-label classification tasks, and benefit from contrastively learnt
pretrained models. We propose the Multimodal Multi-label Image Classification
(MuMIC) framework, which utilizes a hardness-aware tempered sigmoid-based
Binary Cross Entropy (BCE) loss function, thus enabling optimization on
multi-label objectives and transfer learning on CLIP. MuMIC is capable of
providing high classification performance, handling real-world noisy data,
supporting zero-shot predictions, and producing domain-specific image
embeddings. In this study, a total of 120 image classes are defined, and more
than 140K positive annotations are collected on approximately 60K Booking.com
images. The final MuMIC model is deployed on Booking.com Content Intelligence
Platform, and it outperforms other state-of-the-art models with 85.6% GAP@10
and 83.8% GAP on all 120 classes, as well as a 90.1% macro mAP score across 32
majority classes. We summarize the modeling choices which are extensively
tested through ablation studies. To the best of our knowledge, we are the first
to adapt contrastively learnt multimodal pretraining for real-world multi-label
image classification problems, and the innovation can be transferred to other
domains.
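
The exact form of MuMIC's hardness-aware tempered sigmoid BCE is not given
here, so the following minimal numpy sketch only combines a temperature-scaled
sigmoid with a per-class binary cross-entropy; the `temperature` value and the
omitted hardness weighting are assumptions, not the authors' implementation.

```python
import numpy as np

def tempered_sigmoid(z, temperature=2.0):
    # A sigmoid whose slope is controlled by a temperature parameter;
    # temperature > 1 flattens the curve, bounding the per-example
    # gradient, which helps with noisy (hard) labels.
    return 1.0 / (1.0 + np.exp(-z / temperature))

def tempered_bce(logits, targets, temperature=2.0, eps=1e-7):
    # Multi-label BCE computed independently per class on tempered
    # sigmoid probabilities; targets is a {0,1} matrix of shape
    # (batch, num_classes), matching the multi-label setting.
    p = np.clip(tempered_sigmoid(logits, temperature), eps, 1.0 - eps)
    loss = -(targets * np.log(p) + (1.0 - targets) * np.log(1.0 - p))
    return loss.mean()

logits = np.array([[2.5, -1.0, 0.3], [-0.7, 3.1, -2.2]])
targets = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
print(tempered_bce(logits, targets))  # scalar loss over 2 samples, 3 classes
```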
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 17:29:35 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Wang",
"Fengjun",
""
],
[
"Mizrachi",
"Sarai",
""
],
[
"Beladev",
"Moran",
""
],
[
"Nadav",
"Guy",
""
],
[
"Amsalem",
"Gil",
""
],
[
"Assaraf",
"Karen Lastmann",
""
],
[
"Boker",
"Hadas Harush",
""
]
] |
new_dataset
| 0.99955 |
2211.05237
|
Minahil Raza
|
Minahil Raza, Hanna Prokopova, Samir Huseynzade, Sepinoud Azimi and
Sebastien Lafond
|
SimuShips -- A High Resolution Simulation Dataset for Ship Detection
with Precise Annotations
| null | null | null | null |
cs.CV cs.AI cs.LG cs.RO eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Obstacle detection is a fundamental capability of an autonomous maritime
surface vessel (AMSV). State-of-the-art obstacle detection algorithms are based
on convolutional neural networks (CNNs). While CNNs provide higher detection
accuracy and fast detection speed, they require enormous amounts of data for
their training. In particular, the availability of domain-specific datasets is
a challenge for obstacle detection. The difficulty in conducting onsite
experiments limits the collection of maritime datasets. Owing to the logistic
cost of conducting on-site operations, simulation tools provide a safe and
cost-efficient alternative for data collection. In this work, we introduce
SimuShips, a publicly available simulation-based dataset for maritime
environments. Our dataset consists of 9471 high-resolution (1920x1080) images
which include a wide range of obstacle types, atmospheric and illumination
conditions along with occlusion, scale and visible proportion variations. We
provide annotations in the form of bounding boxes. In addition, we conduct
experiments with YOLOv5 to test the viability of simulation data. Our
experiments indicate that the combination of real and simulated images improves
the recall for all classes by 2.9%.
|
[
{
"version": "v1",
"created": "Thu, 22 Sep 2022 07:33:31 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Raza",
"Minahil",
""
],
[
"Prokopova",
"Hanna",
""
],
[
"Huseynzade",
"Samir",
""
],
[
"Azimi",
"Sepinoud",
""
],
[
"Lafond",
"Sebastien",
""
]
] |
new_dataset
| 0.999352 |
2211.05278
|
Praveen Kumar
|
Praveen Kumar
|
Network Security Roadmap
| null | null | null | null |
cs.CR cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Users may already have some perception of provided security based on
experience with earlier generations. To maintain the stability and coherent
integration of 5G services, it is imperative that security and privacy features
prevalent in earlier generations are also present in 5G. However, it is not
sufficient just to provide the same security features as in the legacy systems
due to the new threat model introduced by the integration of new technologies
like SDN, virtualization and SBA. 5G systems are expected to be more
service-oriented. This suggests there will be an additional emphasis on
security and privacy requirements that spawn from the new dimension of
service-oriented security architecture.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 01:05:58 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Kumar",
"Praveen",
""
]
] |
new_dataset
| 0.996248 |
2211.05344
|
Yiming Cui
|
Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu
|
LERT: A Linguistically-motivated Pre-trained Language Model
|
11 pages
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Pre-trained Language Model (PLM) has become a representative foundation model
in the natural language processing field. Most PLMs are trained with
linguistically-agnostic pre-training tasks on the surface form of the text, such as
the masked language model (MLM). To further empower the PLMs with richer
linguistic features, in this paper, we aim to propose a simple but effective
way to learn linguistic features for pre-trained language models. We propose
LERT, a pre-trained language model that is trained on three types of linguistic
features along with the original MLM pre-training task, using a
linguistically-informed pre-training (LIP) strategy. We carried out extensive
experiments on ten Chinese NLU tasks, and the experimental results show that
LERT could bring significant improvements over various comparable baselines.
Furthermore, we also conduct analytical experiments in various linguistic
aspects, and the results prove that the design of LERT is valid and effective.
Resources are available at https://github.com/ymcui/LERT
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 05:09:16 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Cui",
"Yiming",
""
],
[
"Che",
"Wanxiang",
""
],
[
"Wang",
"Shijin",
""
],
[
"Liu",
"Ting",
""
]
] |
new_dataset
| 0.968284 |
2211.05352
|
Qian Wu
|
Rui Deng, Qian Wu, Yuke Li
|
3D-CSL: self-supervised 3D context similarity learning for
Near-Duplicate Video Retrieval
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce 3D-CSL, a compact pipeline for Near-Duplicate
Video Retrieval (NDVR), and explore a novel self-supervised learning strategy
for video similarity learning. Most previous methods only extract spatial
features from frames separately and then design various complex mechanisms to
learn the temporal correlations among frame features. However, part of the
spatiotemporal dependencies has already been lost by then. To address this, our
3D-CSL extracts global spatiotemporal dependencies in videos end-to-end with a
3D transformer and finds a good balance between efficiency and effectiveness by
matching on the clip level. Furthermore, we propose a two-stage self-supervised
similarity learning strategy to optimize the entire network. First, we propose
PredMAE to pretrain the 3D transformer with a video prediction task; second,
ShotMix, a novel video-specific augmentation, and FCS loss, a novel triplet
loss, are proposed to further promote the similarity learning results. The
experiments on FIVR-200K and CC_WEB_VIDEO demonstrate the superiority and
reliability of our method, which achieves state-of-the-art performance on
clip-level NDVR.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 05:51:08 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Deng",
"Rui",
""
],
[
"Wu",
"Qian",
""
],
[
"Li",
"Yuke",
""
]
] |
new_dataset
| 0.985702 |
2211.05416
|
Phuc Nguyen Tri
|
Phuc Nguyen, Hideaki Takeda
|
Wikidata-lite for Knowledge Extraction and Exploration
|
3 pages, workshop paper
| null | null | null |
cs.DB
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Wikidata is the largest collaborative general knowledge graph supported by a
worldwide community. It includes many helpful topics for knowledge exploration
and data science applications. However, due to the enormous size of Wikidata,
it is challenging to retrieve a large amount of data with millions of results,
make complex queries requiring large aggregation operations, or access many
statement references. This paper introduces our preliminary work on
Wikidata-lite, a toolkit that builds an offline database for knowledge
extraction and exploration, e.g., retrieving item information, statements, and
provenances, or searching entities by their keywords and attributes.
Wikidata-lite offers high performance and memory efficiency, running much
faster than the official Wikidata SPARQL endpoint for big queries. The
Wikidata-lite repository is available at
https://github.com/phucty/wikidb.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 08:46:47 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Nguyen",
"Phuc",
""
],
[
"Takeda",
"Hideaki",
""
]
] |
new_dataset
| 0.988215 |
2211.05429
|
Ravi Kiran Sarvadevabhatla
|
Nikhil Bansal, Kartik Gupta, Kiruthika Kannan, Sivani Pentapati, Ravi
Kiran Sarvadevabhatla
|
DrawMon: A Distributed System for Detection of Atypical Sketch Content
in Concurrent Pictionary Games
|
Presented at ACM Multimedia 2022. For project page and dataset, visit
https://drawm0n.github.io
| null |
10.1145/3503161.3547747
| null |
cs.CV cs.GR cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Pictionary, the popular sketch-based guessing game, provides an opportunity
to analyze shared goal cooperative game play in restricted communication
settings. However, some players occasionally draw atypical sketch content.
While such content is occasionally relevant in the game context, it sometimes
represents a rule violation and impairs the game experience. To address such
situations in a timely and scalable manner, we introduce DrawMon, a novel
distributed framework for automatic detection of atypical sketch content in
concurrently occurring Pictionary game sessions. We build specialized online
interfaces to collect game session data and annotate atypical sketch content,
resulting in AtyPict, the first ever atypical sketch content dataset. We use
AtyPict to train CanvasNet, a deep neural atypical content detection network.
We utilize CanvasNet as a core component of DrawMon. Our analysis of post
deployment game session data indicates DrawMon's effectiveness for scalable
monitoring and atypical sketch content detection. Beyond Pictionary, our
contributions also serve as a design guide for customized atypical content
response systems involving shared and interactive whiteboards. Code and
datasets are available at https://drawm0n.github.io.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 09:09:41 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Bansal",
"Nikhil",
""
],
[
"Gupta",
"Kartik",
""
],
[
"Kannan",
"Kiruthika",
""
],
[
"Pentapati",
"Sivani",
""
],
[
"Sarvadevabhatla",
"Ravi Kiran",
""
]
] |
new_dataset
| 0.999492 |
2211.05486
|
Fei Shen
|
Fei Shen, Mengwan Wei, and Junchi Ren
|
HSGNet: Object Re-identification with Hierarchical Similarity Graph
Network
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Object re-identification methods are made up of a backbone network, feature
aggregation, and a loss function. However, most backbone networks lack a
special mechanism to handle rich scale variations and mine discriminative
feature representations. In this paper, we first design a hierarchical
similarity graph module (HSGM) to reduce the conflict between the backbone and
re-identification networks. The designed HSGM builds a rich hierarchical graph
to mine the mapping relationships between global-local and local-local
features. Second, we divide the feature map along the spatial and channel
directions in each hierarchical graph. The HSGM uses the spatial features and
channel features extracted from different locations as nodes, respectively, and
utilizes the similarity scores between nodes to construct spatial and channel
similarity graphs. During the learning process of HSGM, we utilize a learnable
parameter to re-optimize the importance of each position, as well as evaluate
the correlation between different nodes. Third, we develop a novel hierarchical
similarity graph network (HSGNet) by embedding the HSGM in the backbone
network. Furthermore, HSGM can be easily embedded into backbone networks of any
depth to improve object re-identification ability. Finally, extensive
experiments on three large-scale object datasets demonstrate that the proposed
HSGNet is superior to state-of-the-art object re-identification approaches.
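
The abstract does not spell out HSGM's exact node construction or its learnable
re-weighting, so the sketch below shows only the generic core step under that
caveat: spatial positions of a feature map become graph nodes and pairwise
cosine similarities become edge weights. A channel graph would swap the roles
of the two axes.

```python
import numpy as np

def spatial_similarity_graph(feature_map, eps=1e-8):
    # feature_map: (C, H, W) output of a backbone stage. Each spatial
    # location becomes one node described by its C-dim feature vector;
    # edge weights are pairwise cosine similarities between nodes.
    c, h, w = feature_map.shape
    nodes = feature_map.reshape(c, h * w).T              # (H*W, C)
    nodes = nodes / (np.linalg.norm(nodes, axis=1, keepdims=True) + eps)
    return nodes @ nodes.T                               # (H*W, H*W) adjacency

fmap = np.random.rand(64, 8, 4).astype(np.float32)       # toy backbone output
print(spatial_similarity_graph(fmap).shape)              # (32, 32)
```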
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 11:02:40 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Shen",
"Fei",
""
],
[
"Wei",
"Mengwan",
""
],
[
"Ren",
"Junchi",
""
]
] |
new_dataset
| 0.99786 |
2211.05499
|
Azade Farshad
|
Azade Farshad, Yousef Yeganeh, Helisa Dhamo, Federico Tombari, Nassir
Navab
|
DisPositioNet: Disentangled Pose and Identity in Semantic Image
Manipulation
|
Accepted to BMVC 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph representation of objects and their relations in a scene, known as a
scene graph, provides a precise and discernible interface to manipulate a scene
by modifying the nodes or the edges in the graph. Although existing works have
shown promising results in modifying the placement and pose of objects, scene
manipulation often leads to losing some visual characteristics like the
appearance or identity of objects. In this work, we propose DisPositioNet, a
model that learns a disentangled representation for each object for the task of
image manipulation using scene graphs in a self-supervised manner. Our
framework enables the disentanglement of the variational latent embeddings as
well as the feature representation in the graph. In addition to producing more
realistic images due to the decomposition of features like pose and identity,
our method takes advantage of the probabilistic sampling in the intermediate
features to generate more diverse images in object replacement or addition
tasks. The results of our experiments show that disentangling the feature
representations in the latent manifold of the model outperforms the previous
works qualitatively and quantitatively on two public benchmarks. Project Page:
https://scenegenie.github.io/DispositioNet/
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 11:47:37 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Farshad",
"Azade",
""
],
[
"Yeganeh",
"Yousef",
""
],
[
"Dhamo",
"Helisa",
""
],
[
"Tombari",
"Federico",
""
],
[
"Navab",
"Nassir",
""
]
] |
new_dataset
| 0.998195 |
2211.05580
|
Fanhang Yang
|
Jigang Tong, Fanhang Yang, Sen Yang, Enzeng Dong, Shengzhi Du, Xing
Wang, Xianlin Yi
|
Hyperbolic Cosine Transformer for LiDAR 3D Object Detection
|
8 pages, 5 figures and 3 tables. This paper may be published in
IEEE Robotics and Automation Letters
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the Transformer has achieved great success in computer vision.
However, it is constrained in 3D object detection applications because its
spatial and temporal complexity grows quadratically with the large number of
points. Previous point-wise methods suffer from high time consumption and
limited receptive fields for capturing information among points. In this
paper, we propose a two-stage hyperbolic cosine transformer (ChTR3D) for 3D
object detection from LiDAR point clouds. The proposed ChTR3D refines proposals
by applying cosh-attention with linear computational complexity to encode rich
contextual relationships among points. The cosh-attention module reduces the
space and time complexity of the attention operation. The traditional softmax
operation is replaced by a non-negative ReLU activation and a
hyperbolic-cosine-based operator with a re-weighting mechanism. Extensive
experiments on the widely used KITTI dataset demonstrate that, compared with
vanilla attention, the cosh-attention significantly improves the inference
speed with competitive performance. Experiment results show that, among
two-stage state-of-the-art methods using point-level features, the proposed
ChTR3D is the fastest one.
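
Since the exact cosh-attention operator is not given here, the sketch below is
only one plausible instantiation of the described recipe: a linear-complexity
attention where softmax is dropped for a non-negative ReLU feature map and the
keys receive a hyperbolic-cosine-based re-weighting. The clipping bound and the
exact weighting form are assumptions, not the paper's operator.

```python
import numpy as np

def linear_cosh_attention(q, k, v, eps=1e-6):
    # Linear attention: feature maps replace softmax, so attention can be
    # computed as (q @ (k.T @ v)) instead of the O(n^2) score matrix.
    q = np.maximum(q, 0.0)                  # non-negative ReLU activation
    k = np.maximum(k, 0.0)
    k = k * np.cosh(np.clip(k, 0.0, 5.0))   # cosh-based re-weighting (assumed)
    kv = k.T @ v                            # (d, d_v): cost O(n d^2), not O(n^2)
    z = k.sum(axis=0)                       # (d,) normaliser
    return (q @ kv) / (q @ z[:, None] + eps)

n, d = 128, 16                              # n points with d-dim features
q, k, v = (np.random.randn(n, d) for _ in range(3))
out = linear_cosh_attention(q, k, v)
print(out.shape)                            # (128, 16)
```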
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 13:54:49 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Tong",
"Jigang",
""
],
[
"Yang",
"Fanhang",
""
],
[
"Yang",
"Sen",
""
],
[
"Dong",
"Enzeng",
""
],
[
"Du",
"Shengzhi",
""
],
[
"Wang",
"Xing",
""
],
[
"Yi",
"Xianlin",
""
]
] |
new_dataset
| 0.998166 |
2211.05673
|
Ivan P Yamshchikov
|
Ivan P. Yamshchikov and Alexey Tikhonov and Yorgos Pantis and
Charlotte Schubert and J\"urgen Jost
|
BERT in Plutarch's Shadows
| null | null | null | null |
cs.CL cs.AI cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The extensive surviving corpus of the ancient scholar Plutarch of Chaeronea
(ca. 45-120 CE) also contains several texts which, according to current
scholarly opinion, did not originate with him and are therefore attributed to
an anonymous author Pseudo-Plutarch. These include, in particular, the work
Placita Philosophorum (Quotations and Opinions of the Ancient Philosophers),
which is extremely important for the history of ancient philosophy. Little is
known about the identity of that anonymous author and their relation to other
authors from the same period. This paper presents a BERT language model for
Ancient Greek. The model discovers previously unknown statistical properties
relevant to these literary, philosophical, and historical problems and can shed
new light on this authorship question. In particular, the Placita
Philosophorum, together with one of the other Pseudo-Plutarch texts, shows
similarities with the texts written by authors from an Alexandrian context
(2nd/3rd century CE).
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 16:21:42 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Yamshchikov",
"Ivan P.",
""
],
[
"Tikhonov",
"Alexey",
""
],
[
"Pantis",
"Yorgos",
""
],
[
"Schubert",
"Charlotte",
""
],
[
"Jost",
"Jürgen",
""
]
] |
new_dataset
| 0.998947 |
2211.05709
|
Li Siyao
|
Li Siyao, Yuhang Li, Bo Li, Chao Dong, Ziwei Liu, Chen Change Loy
|
AnimeRun: 2D Animation Visual Correspondence from Open Source 3D Movies
|
Accepted by NeurIPS 2022 Track on Dataset and Benchmark
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Existing correspondence datasets for two-dimensional (2D) cartoons suffer from
simple frame composition and monotonic movements, making them insufficient to
simulate real animations. In this work, we present a new 2D animation visual
correspondence dataset, AnimeRun, by converting open source three-dimensional
(3D) movies to full scenes in 2D style, including simultaneous moving
background and interactions of multiple subjects. Our analyses show that the
proposed dataset not only resembles real anime more in image composition, but
also possesses richer and more complex motion patterns compared to existing
datasets. With this dataset, we establish a comprehensive benchmark by
evaluating several existing optical flow and segment matching methods, and
analyze shortcomings of these methods on animation data. Data, code and other
supplementary materials are available at
https://lisiyao21.github.io/projects/AnimeRun.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 17:26:21 GMT"
}
] | 2022-11-11T00:00:00 |
[
[
"Siyao",
"Li",
""
],
[
"Li",
"Yuhang",
""
],
[
"Li",
"Bo",
""
],
[
"Dong",
"Chao",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Loy",
"Chen Change",
""
]
] |
new_dataset
| 0.999859 |
2112.05597
|
Mauro Martini
|
Andrea Eirale, Mauro Martini, Luigi Tagliavini, Dario Gandini,
Marcello Chiaberge, Giuseppe Quaglia
|
Marvin: an Innovative Omni-Directional Robotic Assistant for Domestic
Environments
|
20 pages, 9 figures, 3 table
|
Sensors 2022, 22(14), 5261
|
10.3390/s22145261
| null |
cs.RO cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Population ageing and recent pandemics have been shown to cause the isolation
of elderly people in their homes, generating the need for a reliable assistive
figure. Robotic assistants are the new frontier of innovation for domestic
welfare, and elderly monitoring is one of the services a robot can handle for
collective well-being. Despite these emerging needs, in the current landscape
of robotic assistants there is no platform that successfully combines reliable
mobility in cluttered domestic spaces with lightweight and offline Artificial
Intelligence (AI) solutions for perception and interaction. In this work, we
present Marvin, a novel assistive robotic platform we developed with a modular
layer-based architecture, merging a flexible mechanical design with
cutting-edge AI for perception and vocal control. We focus the design of Marvin
on three target service functions: monitoring of elderly and reduced-mobility
subjects, remote presence and connectivity, and night assistance. Compared to
previous works, we propose a tiny omnidirectional platform, which enables agile
mobility and effective obstacle avoidance. Moreover, we design a controllable
positioning device, which easily allows the user to access the interface for
connectivity and extends the visual range of the camera sensor. Moreover, we
carefully consider the privacy issues arising from private data collection on
cloud services, a critical aspect of commercial AI-based assistants. To this
end, we demonstrate how lightweight deep learning solutions for visual
perception and vocal command can be adopted, completely running offline on the
embedded hardware of the robot.
|
[
{
"version": "v1",
"created": "Fri, 10 Dec 2021 15:27:53 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2022 16:57:21 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Jul 2022 11:17:47 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Eirale",
"Andrea",
""
],
[
"Martini",
"Mauro",
""
],
[
"Tagliavini",
"Luigi",
""
],
[
"Gandini",
"Dario",
""
],
[
"Chiaberge",
"Marcello",
""
],
[
"Quaglia",
"Giuseppe",
""
]
] |
new_dataset
| 0.995343 |
2204.00057
|
Ceyhun Onur
|
Ceyhun Onur, Arda Yurdakul
|
ElectAnon: A Blockchain-Based, Anonymous, Robust and Scalable
Ranked-Choice Voting Protocol
| null | null | null | null |
cs.CR cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Remote voting has become more critical in recent years, especially after the
Covid-19 outbreak. Blockchain technology and its benefits like
decentralization, security, and transparency have encouraged remote voting
systems to use blockchains. Analysis of existing solutions reveals that
anonymity, robustness, and scalability are common problems in blockchain-based
election systems. In this work, we propose ElectAnon, a blockchain-based,
ranked-choice election protocol focusing on anonymity, robustness, and
scalability. ElectAnon achieves anonymity by enabling voters to cast their
votes via zero-knowledge proofs. Robustness is realized by removing
the authorities' direct control over the voting process through
timed-state machines. Results show that ElectAnon scales better than existing
works, reducing gas consumption by up to 89% compared to previous works.
The proposed protocol includes a candidate proposal system and swappable
tallying libraries. An extension is also proposed to minimize the trust
assumption on election authorities. Our code is available on
https://github.com/ceyonur/electanon.
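
ElectAnon itself is implemented as smart contracts (see the linked repository);
purely to illustrate the timed-state machine idea, the following Python sketch
advances election phases by elapsed time alone, so no authority action can
stall or skip a phase. The phase names and durations are hypothetical.

```python
import time

# Election phases advance by elapsed time alone; illustrative values only.
PHASES = [("registration", 60), ("commit", 120), ("reveal", 120), ("tally", 60)]

def current_phase(start_time, now=None):
    # Walk the phase schedule, subtracting each phase's duration until
    # the remaining elapsed time falls inside one of them.
    elapsed = (now if now is not None else time.time()) - start_time
    for name, duration in PHASES:
        if elapsed < duration:
            return name
        elapsed -= duration
    return "closed"

start = time.time()
print(current_phase(start))               # "registration"
print(current_phase(start, start + 200))  # "reveal"
```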
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 19:46:27 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 19:13:42 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Onur",
"Ceyhun",
""
],
[
"Yurdakul",
"Arda",
""
]
] |
new_dataset
| 0.998563 |
2204.13041
|
Peter Selinger
|
Peng Fu, Kohei Kishida, Neil J. Ross, Peter Selinger
|
Proto-Quipper with dynamic lifting
| null | null | null | null |
cs.PL math.CT quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quipper is a functional programming language for quantum computing.
Proto-Quipper is a family of languages aiming to provide a formal foundation
for Quipper. In this paper, we extend Proto-Quipper-M with a construct called
dynamic lifting, which is present in Quipper. By virtue of being a circuit
description language, Proto-Quipper has two separate runtimes: circuit
generation time and circuit execution time. Values that are known at circuit
generation time are called parameters, and values that are known at circuit
execution time are called states. Dynamic lifting is an operation that enables
a state, such as the result of a measurement, to be lifted to a parameter,
where it can influence the generation of the next portion of the circuit. As a
result, dynamic lifting enables Proto-Quipper programs to interleave classical
and quantum computation. We describe the syntax of a language we call
Proto-Quipper-Dyn. Its type system uses a system of modalities to keep track of
the use of dynamic lifting. We also provide an operational semantics, as well
as an abstract categorical semantics for dynamic lifting based on enriched
category theory. We prove that both the type system and the operational
semantics are sound with respect to our categorical semantics. Finally, we give
some examples of Proto-Quipper-Dyn programs that make essential use of dynamic
lifting.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 16:34:15 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 20:56:01 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Fu",
"Peng",
""
],
[
"Kishida",
"Kohei",
""
],
[
"Ross",
"Neil J.",
""
],
[
"Selinger",
"Peter",
""
]
] |
new_dataset
| 0.999519 |
2207.09521
|
Jeroen Bertels
|
Sofie Tilborghs, Jeroen Bertels, David Robben, Dirk Vandermeulen,
Frederik Maes
|
The Dice loss in the context of missing or empty labels: Introducing
$\Phi$ and $\epsilon$
|
8 pages, 3 figures, 1 table, International Conference on Medical
Image Computing and Computer Assisted Intervention (MICCAI) 2022
|
Medical Image Computing and Computer Assisted Intervention
(MICCAI) 2022. Lecture Notes in Computer Science, vol 13435. Springer, Cham
|
10.1007/978-3-031-16443-9_51
| null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Albeit the Dice loss is one of the dominant loss functions in medical image
segmentation, most research omits a closer look at its derivative, i.e. the
real motor of the optimization when using gradient descent. In this paper, we
highlight the peculiar action of the Dice loss in the presence of missing or
empty labels. First, we formulate a theoretical basis that gives a general
description of the Dice loss and its derivative. It turns out that the choice
of the reduction dimensions $\Phi$ and the smoothing term $\epsilon$ is
non-trivial and greatly influences its behavior. We find and propose heuristic
combinations of $\Phi$ and $\epsilon$ that work in a segmentation setting with
either missing or empty labels. Second, we empirically validate these findings
in a binary and multiclass segmentation setting using two publicly available
datasets. We confirm that the choice of $\Phi$ and $\epsilon$ is indeed
pivotal. With $\Phi$ chosen such that the reductions happen over a single batch
(and class) element and with a negligible $\epsilon$, the Dice loss deals with
missing labels naturally and performs similarly compared to recent adaptations
specific for missing labels. With $\Phi$ chosen such that the reductions happen
over multiple batch elements or with a heuristic value for $\epsilon$, the Dice
loss handles empty labels correctly. We believe that this work highlights some
essential perspectives and hope that it encourages researchers to better
describe their exact implementation of the Dice loss in future work.
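
A minimal numpy sketch of the soft Dice loss with the two quantities discussed
above made explicit: the reduction dimensions $\Phi$ (here `reduce_axes`) and
the smoothing term $\epsilon$. The paper's specific heuristic combinations are
not reproduced; this only shows where the two choices enter the formula.

```python
import numpy as np

def dice_loss(pred, target, reduce_axes=(2,), eps=1e-6):
    # Soft Dice loss; pred/target have shape (batch, classes, voxels).
    # reduce_axes=(2,) reduces per batch-and-class element, while
    # reduce_axes=(0, 2) also reduces over the batch; the paper shows
    # these behave differently for missing and empty labels.
    inter = (pred * target).sum(axis=reduce_axes)
    denom = pred.sum(axis=reduce_axes) + target.sum(axis=reduce_axes)
    dice = (2.0 * inter + eps) / (denom + eps)
    return 1.0 - dice.mean()

pred = np.random.rand(4, 3, 1000)                          # predicted probabilities
target = np.zeros((4, 3, 1000)); target[:, 0, :500] = 1.0  # classes 1 and 2 empty
print(dice_loss(pred, target, reduce_axes=(2,)))           # per-element Phi
print(dice_loss(pred, target, reduce_axes=(0, 2)))         # batch-reduced Phi
```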
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 19:20:06 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Nov 2022 10:31:51 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Tilborghs",
"Sofie",
""
],
[
"Bertels",
"Jeroen",
""
],
[
"Robben",
"David",
""
],
[
"Vandermeulen",
"Dirk",
""
],
[
"Maes",
"Frederik",
""
]
] |
new_dataset
| 0.959528 |
2208.08566
|
Alexander Lavin
|
Erik Peterson, Alexander Lavin
|
Physical Computing for Materials Acceleration Platforms
| null |
MATTER, VOLUME 5, ISSUE 11, P3586-3596, NOVEMBER 02, 2022
|
10.1016/j.matt.2022.09.022
| null |
cs.AI cs.AR cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
A ''technology lottery'' describes a research idea or technology succeeding
over others because it is suited to the available software and hardware, not
necessarily because it is superior to alternative directions--examples abound,
from the synergies of deep learning and GPUs to the disconnect of urban design
and autonomous vehicles. The nascent field of Self-Driving Laboratories (SDL),
particularly those implemented as Materials Acceleration Platforms (MAPs), is
at risk of an analogous pitfall: the next logical step for building MAPs is to
take existing lab equipment and workflows and mix in some AI and automation. In
this whitepaper, we argue that the same simulation and AI tools that will
accelerate the search for new materials, as part of the MAPs research program,
also make possible the design of fundamentally new computing mediums. We need
not be constrained by existing biases in science, mechatronics, and
general-purpose computing, but rather we can pursue new vectors of engineering
physics with advances in cyber-physical learning and closed-loop,
self-optimizing systems. Here we outline a simulation-based MAP program to
design computers that use physics itself to solve optimization problems. Such
systems mitigate the hardware-software-substrate-user information losses
present in every other class of MAPs, and they perfect the alignment between
computing problems and computing mediums, eliminating any technology lottery. We
offer concrete steps toward early ''Physical Computing (PC) -MAP'' advances and
the longer term cyber-physical R&D which we expect to introduce a new era of
innovative collaboration between materials researchers and computer scientists.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 23:03:54 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Peterson",
"Erik",
""
],
[
"Lavin",
"Alexander",
""
]
] |
new_dataset
| 0.980747 |
2211.03900
|
Thien-Minh Nguyen
|
Thien-Minh Nguyen, Daniel Duberg, Patric Jensfelt, Shenghai Yuan,
Lihua Xie
|
SLICT: Multi-input Multi-scale Surfel-Based Lidar-Inertial
Continuous-Time Odometry and Mapping
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While feature association to a global map has significant benefits, to keep
the computations from growing exponentially, most lidar-based odometry and
mapping methods opt to associate features with local maps at one voxel scale.
Taking advantage of the fact that surfels (surface elements) at different voxel
scales can be organized in a tree-like structure, we propose an octree-based
global map of multi-scale surfels that can be updated incrementally. This
alleviates the need for recalculating, for example, a k-d tree of the whole map
repeatedly. The system can also take input from a single sensor or a number of
sensors, reinforcing the robustness in degenerate cases. We also propose a
point-to-surfel (PTS) association scheme, continuous-time optimization on PTS
and IMU preintegration factors, along with loop closure and bundle adjustment,
making a complete framework for Lidar-Inertial continuous-time odometry and
mapping. Experiments on public and in-house datasets demonstrate the advantages
of our system compared to other state-of-the-art methods. To benefit the
community, we release the source code and dataset at
https://github.com/brytsknguyen/slict.
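
SLICT's actual implementation is the linked C++ code; as a toy illustration of
the octree-of-surfels idea only, the Python sketch below nests surfel nodes
across voxel scales and updates them incrementally on insertion. The octant
rule and the omitted plane-fit moments are simplifications.

```python
from dataclasses import dataclass, field

@dataclass
class SurfelNode:
    # One octree cell: a surfel (plane fit) summarising the points inside
    # it, with up to eight children at the next, finer voxel scale.
    centroid: tuple = (0.0, 0.0, 0.0)
    normal: tuple = (0.0, 0.0, 1.0)
    num_points: int = 0
    children: dict = field(default_factory=dict)  # octant index -> SurfelNode

    def insert(self, point, depth=0, max_depth=3):
        # Incremental update: new points refine running statistics instead
        # of forcing a rebuild (e.g., of a whole-map k-d tree).
        self.num_points += 1
        if depth >= max_depth:
            return
        # Toy octant choice by coordinate sign; a real map would offset by
        # the child cell centre at each level.
        octant = sum(1 << i for i, c in enumerate(point) if c >= 0.0)
        self.children.setdefault(octant, SurfelNode()).insert(
            point, depth + 1, max_depth)

root = SurfelNode()
root.insert((0.2, -0.4, 0.9))
print(root.num_points, list(root.children))  # 1 [5]
```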
|
[
{
"version": "v1",
"created": "Mon, 7 Nov 2022 23:17:05 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Nov 2022 17:22:17 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Nguyen",
"Thien-Minh",
""
],
[
"Duberg",
"Daniel",
""
],
[
"Jensfelt",
"Patric",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"Xie",
"Lihua",
""
]
] |
new_dataset
| 0.99642 |
2211.04508
|
Hongyu Gong
|
Paul-Ambroise Duquenne, Hongyu Gong, Ning Dong, Jingfei Du, Ann Lee,
Vedanuj Goswani, Changhan Wang, Juan Pino, Beno\^it Sagot, Holger Schwenk
|
SpeechMatrix: A Large-Scale Mined Corpus of Multilingual
Speech-to-Speech Translations
|
18 pages
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present SpeechMatrix, a large-scale multilingual corpus of
speech-to-speech translations mined from real speech of European Parliament
recordings. It contains speech alignments in 136 language pairs with a total of
418 thousand hours of speech. To evaluate the quality of this parallel speech,
we train bilingual speech-to-speech translation models on mined data only and
establish extensive baseline results on EuroParl-ST, VoxPopuli and FLEURS test
sets. Enabled by the multilinguality of SpeechMatrix, we also explore
multilingual speech-to-speech translation, a topic which was addressed by few
other works. We also demonstrate that model pre-training and sparse scaling
using Mixture-of-Experts bring large gains to translation performance. The
mined data and models are freely available.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 19:09:27 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Duquenne",
"Paul-Ambroise",
""
],
[
"Gong",
"Hongyu",
""
],
[
"Dong",
"Ning",
""
],
[
"Du",
"Jingfei",
""
],
[
"Lee",
"Ann",
""
],
[
"Goswani",
"Vedanuj",
""
],
[
"Wang",
"Changhan",
""
],
[
"Pino",
"Juan",
""
],
[
"Sagot",
"Benoît",
""
],
[
"Schwenk",
"Holger",
""
]
] |
new_dataset
| 0.995278 |
2211.04534
|
Alessandro Suglia
|
Alessandro Suglia, Jos\'e Lopes, Emanuele Bastianelli, Andrea Vanzo,
Shubham Agarwal, Malvina Nikandrou, Lu Yu, Ioannis Konstas, Verena Rieser
|
Going for GOAL: A Resource for Grounded Football Commentaries
|
Preprint formatted using the ACM Multimedia template (8 pages +
appendix)
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent video+language datasets cover domains where the interaction is highly
structured, such as instructional videos, or where the interaction is scripted,
such as TV shows. Both of these properties can lead to spurious cues to be
exploited by models rather than learning to ground language. In this paper, we
present GrOunded footbAlL commentaries (GOAL), a novel dataset of football (or
`soccer') highlights videos with transcribed live commentaries in English. As
the course of a game is unpredictable, so are commentaries, which makes them a
unique resource to investigate dynamic language grounding. We also provide
state-of-the-art baselines for the following tasks: frame reordering, moment
retrieval, live commentary retrieval and play-by-play live commentary
generation. Results show that SOTA models perform reasonably well in most
tasks. We discuss the implications of these results and suggest new tasks for
which GOAL can be used. Our codebase is available at:
https://gitlab.com/grounded-sport-convai/goal-baselines.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 20:04:27 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Suglia",
"Alessandro",
""
],
[
"Lopes",
"José",
""
],
[
"Bastianelli",
"Emanuele",
""
],
[
"Vanzo",
"Andrea",
""
],
[
"Agarwal",
"Shubham",
""
],
[
"Nikandrou",
"Malvina",
""
],
[
"Yu",
"Lu",
""
],
[
"Konstas",
"Ioannis",
""
],
[
"Rieser",
"Verena",
""
]
] |
new_dataset
| 0.996565 |
2211.04630
|
Marek Gagolewski
|
Marek Gagolewski
|
Minimalist Data Wrangling with Python
|
Release: v1.0.2.9001 (2022-11-09T12:17:50+1100)
| null |
10.5281/zenodo.6451068
| null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Minimalist Data Wrangling with Python is envisaged as a student's first
introduction to data science, providing a high-level overview as well as
discussing key concepts in detail. We explore methods for cleaning data
gathered from different sources, transforming, selecting, and extracting
features, performing exploratory data analysis and dimensionality reduction,
identifying naturally occurring data clusters, modelling patterns in data,
comparing data between groups, and reporting the results. This textbook is a
non-profit project. Its online and PDF versions are freely available at
https://datawranglingpy.gagolewski.com/.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 01:24:39 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Gagolewski",
"Marek",
""
]
] |
new_dataset
| 0.997161 |
2211.04741
|
Mohit Bhasi Thazhath
|
Mohit Bhasi Thazhath, Jan Michalak, Thang Hoang
|
Harpocrates: Privacy-Preserving and Immutable Audit Log for Sensitive
Data Operations
|
To appear at IEEE 4th International Conference on Trust, Privacy and
Security in Intelligent Systems, and Applications (TPS-ISA) 2022
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The audit log is a crucial component to monitor fine-grained operations over
sensitive data (e.g., personal, health) for security inspection and assurance.
Since such data operations can be highly sensitive, it is vital to ensure that
the audit log achieves not only validity and immutability, but also
confidentiality against active threats, in compliance with standard data
regulations (e.g., HIPAA). Despite these critical needs, state-of-the-art
privacy-preserving audit log schemes (e.g., Ghostor (NSDI '20), Calypso (VLDB
'19)) do not fully obtain a high level of privacy, integrity, and immutability
simultaneously, in which certain information (e.g., user identities) is still
leaked in the log.
In this paper, we propose Harpocrates, a new privacy-preserving and immutable
audit log scheme. Harpocrates permits data store, share, and access operations
to be recorded in the audit log without leaking sensitive information (e.g.,
data identifier, user identity), while permitting the validity of data
operations to be publicly verifiable. Harpocrates makes use of blockchain
techniques to achieve immutability and avoid a single point of failure, while
cryptographic zero-knowledge proofs are harnessed for confidentiality and
public verifiability. We analyze the security of our proposed technique and
prove that it achieves non-malleability and indistinguishability. We fully
implemented Harpocrates and evaluated its performance on a real blockchain
system (i.e., Hyperledger Fabric) deployed on a commodity platform (i.e.,
Amazon EC2). Experimental results demonstrated that Harpocrates is highly
scalable and achieves practical performance.
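
The sketch below illustrates just the tamper-evidence ingredient under strong
simplifications: each log entry commits to the sensitive fields and chains the
hash of the previous entry. The real system stores entries on Hyperledger
Fabric and proves operation validity with zero-knowledge proofs, neither of
which is shown here.

```python
import hashlib, json, time

def append_entry(log, operation, commitment):
    # Each entry stores the operation type, a commitment hiding the
    # sensitive fields (data identifier, user identity), and the hash of
    # the previous entry, which makes the log tamper-evident.
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"op": operation, "commitment": commitment,
            "ts": time.time(), "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

log = []
append_entry(log, "share", hashlib.sha256(b"record-42|alice|salt1").hexdigest())
append_entry(log, "access", hashlib.sha256(b"record-42|bob|salt2").hexdigest())
print(log[1]["prev"] == log[0]["hash"])  # True: entries are chained
```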
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 08:27:05 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Thazhath",
"Mohit Bhasi",
""
],
[
"Michalak",
"Jan",
""
],
[
"Hoang",
"Thang",
""
]
] |
new_dataset
| 0.992055 |
2211.04753
|
MInsoo Lee
|
Gyumin Shim, Minsoo Lee and Jaegul Choo
|
ReFu: Refine and Fuse the Unobserved View for Detail-Preserving
Single-Image 3D Human Reconstruction
|
Accepted at ACM MM 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Single-image 3D human reconstruction aims to reconstruct the 3D textured
surface of the human body given a single image. While implicit function-based
methods have recently achieved reasonable reconstruction performance, they
still bear limitations, showing degraded quality in both surface geometry and texture
from an unobserved view. In response, to generate a realistic textured surface,
we propose ReFu, a coarse-to-fine approach that refines the projected backside
view image and fuses the refined image to predict the final human body. To
suppress the diffused occupancy that causes noise in projection images and
reconstructed meshes, we propose to train occupancy probability by
simultaneously utilizing 2D and 3D supervisions with occupancy-based volume
rendering. We also introduce a refinement architecture that generates
detail-preserving backside-view images with front-to-back warping. Extensive
experiments demonstrate that our method achieves state-of-the-art performance
in 3D human reconstruction from a single image, showing enhanced geometry and
texture quality from an unobserved view.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 09:14:11 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Shim",
"Gyumin",
""
],
[
"Lee",
"Minsoo",
""
],
[
"Choo",
"Jaegul",
""
]
] |
new_dataset
| 0.999381 |
2211.04785
|
Ying Peng
|
Jie Wu, Ying Peng, Shengming Zhang, Weigang Qi, Jian Zhang
|
Masked Vision-Language Transformers for Scene Text Recognition
|
The paper is accepted by the 33rd British Machine Vision Conference
(BMVC 2022)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Scene text recognition (STR) enables computers to recognize and read the text
in various real-world scenes. Recent STR models benefit from taking linguistic
information in addition to visual cues into consideration. We propose a novel
Masked Vision-Language Transformers (MVLT) to capture both the explicit and the
implicit linguistic information. Our encoder is a Vision Transformer, and our
decoder is a multi-modal Transformer. MVLT is trained in two stages: in the
first stage, we design a STR-tailored pretraining method based on a masking
strategy; in the second stage, we fine-tune our model and adopt an iterative
correction method to improve the performance. MVLT attains superior results
compared to state-of-the-art STR models on several benchmarks. Our code and
model are available at https://github.com/onealwj/MVLT.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 10:28:23 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Wu",
"Jie",
""
],
[
"Peng",
"Ying",
""
],
[
"Zhang",
"Shengming",
""
],
[
"Qi",
"Weigang",
""
],
[
"Zhang",
"Jian",
""
]
] |
new_dataset
| 0.998974 |
2211.04793
|
Soumen Basu
|
Soumen Basu, Mayank Gupta, Pratyaksha Rana, Pankaj Gupta, Chetan Arora
|
RadFormer: Transformers with Global-Local Attention for Interpretable
and Accurate Gallbladder Cancer Detection
|
To Appear in Elsevier Medical Image Analysis
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a novel deep neural network architecture to learn interpretable
representations for medical image analysis. Our architecture generates global
attention for the region of interest, and then learns bag-of-words-style deep
feature embeddings with local attention. The global and local feature maps are
combined using a contemporary transformer architecture for highly accurate
Gallbladder Cancer (GBC) detection from Ultrasound (USG) images. Our
experiments indicate that the detection accuracy of our model beats even human
radiologists, and advocates its use as a second reader for GBC diagnosis.
Bag-of-words embeddings allow our model to be probed for generating interpretable
explanations for GBC detection consistent with the ones reported in medical
literature. We show that the proposed model not only helps understand decisions
of neural network models but also aids in discovery of new visual features
relevant to the diagnosis of GBC. Source-code and model will be available at
https://github.com/sbasu276/RadFormer
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 10:40:35 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Basu",
"Soumen",
""
],
[
"Gupta",
"Mayank",
""
],
[
"Rana",
"Pratyaksha",
""
],
[
"Gupta",
"Pankaj",
""
],
[
"Arora",
"Chetan",
""
]
] |
new_dataset
| 0.995178 |
2211.04803
|
Usman Khalil Ph.D (Scholar)
|
Usman Khalil, Owais Ahmed Malik, Ong Wee Hong, Mueen Uddin (Sr. Member
IEEE)
|
DSCOT: An NFT-Based Blockchain Architecture for the Authentication of
IoT-Enabled Smart Devices in Smart Cities
|
18 pages, 15 figures, 5 tables, journal
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart city architecture brings all the underlying architectures, i.e.,
Internet of Things (IoT), Cyber-Physical Systems (CPSs), Internet of
Cyber-Physical Things (IoCPT), and Internet of Everything (IoE), together to
work as a system under its umbrella. The goal of smart city architecture is to
come up with a solution that may integrate all the real-time response
applications. However, the cyber-physical space poses threats that can
jeopardize the working of a smart city where all the data belonging to people,
systems, and processes will be at risk. Various architectures based on
centralized and distributed mechanisms support smart cities; however, the
security concerns regarding traceability, scalability, security services,
platform assistance, and resource management persist. In this paper, a private
blockchain-based architecture, Decentralized Smart City of Things (DSCoT), is
proposed. It actively utilizes fog computing for all the users and smart
devices connected to a fog node in a particular management system in a smart
city, i.e., a smart house or hospital, etc. Non-fungible tokens (NFTs) have
been utilized for representation to define smart device attributes. NFTs in the
proposed DSCoT architecture provide devices and user authentication (IoT)
functionality. DSCoT has been designed to provide a smart city solution that
ensures robust security features such as Confidentiality, Integrity,
Availability (CIA), and authorization by defining new attributes and functions
for Owner, User, Fog, and IoT devices authentication. The evaluation of the
proposed functions and components in terms of Gas consumption and time
complexity has shown promising results. Comparatively, the Gas consumption for
minting a DSCoT NFT was approximately 27% more efficient, and a DSCoT approve()
was approximately 11% more efficient, than the PUF-based NFT solution.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 10:55:20 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Khalil",
"Usman",
"",
"Sr. Member\n IEEE"
],
[
"Malik",
"Owais Ahmed",
"",
"Sr. Member\n IEEE"
],
[
"Hong",
"Ong Wee",
"",
"Sr. Member\n IEEE"
],
[
"Uddin",
"Mueen",
"",
"Sr. Member\n IEEE"
]
] |
new_dataset
| 0.999323 |
2211.04831
|
Liang Zhao
|
Liang Zhao, Xinyuan Zhao, Hailong Ma, Xinyu Zhang, Long Zeng
|
3DFill:Reference-guided Image Inpainting by Self-supervised 3D Image
Alignment
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing image inpainting algorithms are based on a single view,
struggling with large holes or holes containing complicated scenes. Some
reference-guided algorithms fill the hole by referring to another viewpoint
image and use 2D image alignment. Due to the camera imaging process, a simple
2D transformation can hardly achieve a satisfactory result. In this paper, we
propose 3DFill, a simple and efficient method for reference-guided image
inpainting. Given a target image with arbitrary hole regions and a reference
image from another viewpoint, the 3DFill first aligns the two images by a
two-stage method: 3D projection + 2D transformation, which has better results
than 2D image alignment. The 3D projection is an overall alignment between
images and the 2D transformation is a local alignment focused on the hole
region. The entire process of image alignment is self-supervised. We then fill
the hole in the target image with the contents of the aligned image. Finally,
we use a conditional generation network to refine the filled image to obtain
the inpainting result. 3DFill achieves state-of-the-art performance on image
inpainting across a variety of wide view shifts and has a faster inference
speed than other inpainting models.
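
A minimal sketch of the "3D projection" stage of such alignment, assuming known
depth, intrinsics K, and relative pose (R, t): reference pixels are lifted to
3D and re-projected into the target view. How 3DFill estimates these quantities
self-supervisedly is not shown.

```python
import numpy as np

def reproject(uv, depth, K, R, t):
    # Lift reference pixels to 3D with depth and intrinsics K, apply the
    # relative pose (R, t), and project into the target view.
    ones = np.ones((uv.shape[0], 1))
    rays = np.linalg.inv(K) @ np.hstack([uv, ones]).T  # (3, N) unit-depth rays
    pts = rays * depth                                 # (3, N) 3D points
    proj = K @ (R @ pts + t[:, None])                  # (3, N) in target camera
    return (proj[:2] / proj[2]).T                      # (N, 2) target pixels

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
uv = np.array([[320.0, 240.0], [100.0, 50.0]])         # reference pixels
depth = np.array([2.0, 3.0])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])            # small lateral shift
print(reproject(uv, depth, K, R, t))
```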
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 12:09:03 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Zhao",
"Liang",
""
],
[
"Zhao",
"Xinyuan",
""
],
[
"Ma",
"Hailong",
""
],
[
"Zhang",
"Xinyu",
""
],
[
"Zeng",
"Long",
""
]
] |
new_dataset
| 0.993629 |
2211.04972
|
Yuichiro Tanaka
|
Yutaro Ishida and Sansei Hori and Yuichiro Tanaka and Yuma Yoshimoto
and Kouhei Hashimoto and Gouki Iwamoto and Yoshiya Aratani and Kenya
Yamashita and Shinya Ishimoto and Kyosuke Hitaka and Fumiaki Yamaguchi and
Ryuhei Miyoshi and Kentaro Honda and Yushi Abe and Yoshitaka Kato and Takashi
Morie and Hakaru Tamukoh
|
Hibikino-Musashi@Home 2018 Team Description Paper
|
8 pages, 5 figures, RoboCup@Home
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our team, Hibikino-Musashi@Home (the shortened name is HMA), was founded in
2010. It is based in the Kitakyushu Science and Research Park, Japan. We have
participated in the RoboCup@Home Japan open competition open platform league
every year since 2010. Moreover, we participated in the RoboCup 2017 Nagoya as
open platform league and domestic standard platform league teams. Currently,
the Hibikino-Musashi@Home team has 20 members from seven different laboratories
based in the Kyushu Institute of Technology. In this paper, we introduce the
activities of our team and the technologies we use.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 15:36:24 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Ishida",
"Yutaro",
""
],
[
"Hori",
"Sansei",
""
],
[
"Tanaka",
"Yuichiro",
""
],
[
"Yoshimoto",
"Yuma",
""
],
[
"Hashimoto",
"Kouhei",
""
],
[
"Iwamoto",
"Gouki",
""
],
[
"Aratani",
"Yoshiya",
""
],
[
"Yamashita",
"Kenya",
""
],
[
"Ishimoto",
"Shinya",
""
],
[
"Hitaka",
"Kyosuke",
""
],
[
"Yamaguchi",
"Fumiaki",
""
],
[
"Miyoshi",
"Ryuhei",
""
],
[
"Honda",
"Kentaro",
""
],
[
"Abe",
"Yushi",
""
],
[
"Kato",
"Yoshitaka",
""
],
[
"Morie",
"Takashi",
""
],
[
"Tamukoh",
"Hakaru",
""
]
] |
new_dataset
| 0.999767 |
2211.04986
|
Nikita Koval
|
Nikita Koval, Dan Alistarh, Roman Elizarov
|
Fast and Scalable Channels in Kotlin Coroutines
| null | null | null | null |
cs.DS cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Asynchronous programming has gained significant popularity over the last
decade: support for this programming pattern is available in many popular
languages via libraries and native language implementations, typically in the
form of coroutines or the async/await construct. Instead of programming via
shared memory, this concept assumes implicit synchronization through message
passing. The key data structure enabling such communication is the rendezvous
channel. Roughly, a rendezvous channel is a blocking queue of size zero, so
both send(e) and receive() operations wait for each other, performing a
rendezvous when they meet. To optimize the message passing pattern, channels
are usually equipped with a fixed-size buffer, so send(e)-s do not suspend and
put elements into the buffer until its capacity is exceeded. This primitive is
known as a buffered channel.
This paper presents a fast and scalable algorithm for both rendezvous and
buffered channels. Similarly to modern queues, our solution is based on an
infinite array with two positional counters for send(e) and receive()
operations, leveraging the unconditional Fetch-And-Add instruction to update
them. Yet, the algorithm requires non-trivial modifications of this classic
pattern, in order to support the full channel semantics, such as buffering and
cancellation of waiting requests. We compare the performance of our solution to
that of the Kotlin implementation, as well as against other academic proposals,
showing up to 9.8x speedup. To showcase its expressiveness and performance, we
also integrated the proposed algorithm into the standard Kotlin Coroutines
library, replacing the previous channel implementations.
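
The paper's algorithm is lock-free (an infinite array plus two Fetch-And-Add
counters); the lock-based Python sketch below reproduces only the observable
rendezvous semantics, where send() completes only once a matching receive()
arrives, not the fast implementation.

```python
import threading, queue

class RendezvousChannel:
    # A size-zero blocking queue: send() completes only once a matching
    # receive() arrives, and vice versa.
    def __init__(self):
        self._items = queue.Queue(maxsize=1)
        self._receipts = queue.Queue(maxsize=1)

    def send(self, element):
        self._items.put(element)   # hand the element over...
        self._receipts.get()       # ...and wait for the rendezvous

    def receive(self):
        element = self._items.get()
        self._receipts.put(None)   # release the waiting sender
        return element

ch = RendezvousChannel()
threading.Thread(target=lambda: ch.send("hello")).start()
print(ch.receive())                # prints "hello" after both sides meet
```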
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 16:03:11 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Koval",
"Nikita",
""
],
[
"Alistarh",
"Dan",
""
],
[
"Elizarov",
"Roman",
""
]
] |
new_dataset
| 0.997814 |
2211.05030
|
Daphne Ippolito
|
Daphne Ippolito, Ann Yuan, Andy Coenen, Sehmon Burnam
|
Creative Writing with an AI-Powered Writing Assistant: Perspectives from
Professional Writers
| null | null | null | null |
cs.HC cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent developments in natural language generation (NLG) using neural
language models have brought us closer than ever to the goal of building
AI-powered creative writing tools. However, most prior work on human-AI
collaboration in the creative writing domain has evaluated new systems with
amateur writers, typically in contrived user studies of limited scope. In this
work, we commissioned 13 professional, published writers from a diverse set of
creative writing backgrounds to craft stories using Wordcraft, a text editor
with built-in AI-powered writing assistance tools. Using interviews and
participant journals, we discuss the potential of NLG to have significant
impact in the creative writing domain--especially with respect to
brainstorming, generation of story details, world-building, and research
assistance. Experienced writers, more so than amateurs, typically have
well-developed systems and methodologies for writing, as well as distinctive
voices and target audiences. Our work highlights the challenges in building for
these writers; NLG technologies struggle to preserve style and authorial voice,
and they lack deep understanding of story contents. In order for AI-powered
writing assistants to realize their full potential, it is essential that they
take into account the diverse goals and expertise of human writers.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 17:00:56 GMT"
}
] | 2022-11-10T00:00:00 |
[
[
"Ippolito",
"Daphne",
""
],
[
"Yuan",
"Ann",
""
],
[
"Coenen",
"Andy",
""
],
[
"Burnam",
"Sehmon",
""
]
] |
new_dataset
| 0.986556 |
2110.00976
|
Ilias Chalkidis
|
Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion
Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras
|
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English
|
9 pages, long paper at ACL 2022 proceedings. LexGLUE benchmark is
available at: https://huggingface.co/datasets/lex_glue. Code is available at:
https://github.com/coastalcph/lex-glue. Update TFIDF-SVM scores in the last
version
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Laws and their interpretations, legal arguments and agreements are typically
expressed in writing, leading to the production of vast corpora of legal text.
Their analysis, which is at the center of legal practice, becomes increasingly
elaborate as these collections grow in size. Natural language understanding
(NLU) technologies can be a valuable tool to support legal practitioners in
these endeavors. Their usefulness, however, largely depends on whether current
state-of-the-art models can generalize across various tasks in the legal
domain. To answer this currently open question, we introduce the Legal General
Language Understanding Evaluation (LexGLUE) benchmark, a collection of datasets
for evaluating model performance across a diverse set of legal NLU tasks in a
standardized way. We also provide an evaluation and analysis of several generic
and legal-oriented models demonstrating that the latter consistently offer
performance improvements across multiple tasks.
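
A minimal way to load one LexGLUE task with the Hugging Face `datasets`
library, using the hub location given in the comments field above; the
configuration name "ecthr_a" is one of the benchmark's tasks, and field names
should be checked on the dataset card rather than assumed.

```python
from datasets import load_dataset

# "ecthr_a" is one of the LexGLUE task configurations; see the dataset
# card for the complete list of tasks and their label spaces.
dataset = load_dataset("lex_glue", "ecthr_a")
example = dataset["train"][0]
print(sorted(example.keys()))      # inspect the fields rather than assume them
print(dataset["train"].num_rows)   # size of the training split
```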
|
[
{
"version": "v1",
"created": "Sun, 3 Oct 2021 10:50:51 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Oct 2021 17:50:57 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Mar 2022 16:11:17 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Nov 2022 12:14:57 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Chalkidis",
"Ilias",
""
],
[
"Jana",
"Abhik",
""
],
[
"Hartung",
"Dirk",
""
],
[
"Bommarito",
"Michael",
""
],
[
"Androutsopoulos",
"Ion",
""
],
[
"Katz",
"Daniel Martin",
""
],
[
"Aletras",
"Nikolaos",
""
]
] |
new_dataset
| 0.999856 |
2201.09006
|
Leonardo Iwaya
|
Leonardo Horn Iwaya, M. Ali Babar, Awais Rashid and Chamila
Wijayarathna
|
On the Privacy of Mental Health Apps: An Empirical Investigation and its
Implications for Apps Development
|
40 pages, 13 figures
| null |
10.1007/s10664-022-10236-0
| null |
cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An increasing number of mental health services are offered through mobile
systems, a paradigm called mHealth. Although there is an unprecedented growth
in the adoption of mHealth systems, partly due to the COVID-19 pandemic,
concerns about data privacy risks due to security breaches are also increasing.
Whilst some studies have analyzed mHealth apps from different angles, including
security, there is relatively little evidence for data privacy issues that may
exist in mHealth apps used for mental health services, whose recipients can be
particularly vulnerable. This paper reports an empirical study aimed at
systematically identifying and understanding data privacy incorporated in
mental health apps. We analyzed 27 top-ranked mental health apps from Google
Play Store. Our methodology enabled us to perform an in-depth privacy analysis
of the apps, covering static and dynamic analysis, data sharing behaviour,
server-side tests, privacy impact assessment requests, and privacy policy
evaluation. Furthermore, we mapped the findings to the LINDDUN threat taxonomy,
describing how threats manifest on the studied apps. The findings reveal
important data privacy issues such as unnecessary permissions, insecure
cryptography implementations, and leaks of personal data and credentials in
logs and web requests. There is also a high risk of user profiling as the apps'
development do not provide foolproof mechanisms against linkability,
detectability and identifiability. Data sharing among third parties and
advertisers in the current apps' ecosystem aggravates this situation. Based on
the empirical findings of this study, we provide recommendations to be
considered by different stakeholders of mHealth apps in general and apps
developers in particular. [...]
|
[
{
"version": "v1",
"created": "Sat, 22 Jan 2022 09:23:56 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Iwaya",
"Leonardo Horn",
""
],
[
"Babar",
"M. Ali",
""
],
[
"Rashid",
"Awais",
""
],
[
"Wijayarathna",
"Chamila",
""
]
] |
new_dataset
| 0.981015 |
2205.00158
|
Heng Fan
|
Libo Zhang, Junyuan Gao, Zhen Xiao, Heng Fan
|
AnimalTrack: A Benchmark for Multi-Animal Tracking in the Wild
|
Tech. report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-animal tracking (MAT), a multi-object tracking (MOT) problem, is
crucial for animal motion and behavior analysis and has many crucial
applications such as biology, ecology and animal conservation. Despite its
importance, MAT is largely under-explored compared to other MOT problems such
as multi-human tracking due to the scarcity of dedicated benchmarks. To address
this problem, we introduce AnimalTrack, a dedicated benchmark for multi-animal
tracking in the wild. Specifically, AnimalTrack consists of 58 sequences from a
diverse selection of 10 common animal categories. On average, each sequence
comprises 33 target objects for tracking. In order to ensure high quality,
every frame in AnimalTrack is manually labeled with careful inspection and
refinement. To the best of our knowledge, AnimalTrack is the first benchmark dedicated
to multi-animal tracking. In addition, to understand how existing MOT
algorithms perform on AnimalTrack and provide baselines for future comparison,
we extensively evaluate 14 state-of-the-art representative trackers. The
evaluation results demonstrate that, not surprisingly, most of these trackers
degrade due to the differences between pedestrians and animals in
various aspects (e.g., pose, motion, and appearance), and more efforts are
desired to improve multi-animal tracking. We hope that AnimalTrack together
with evaluation and analysis will foster further progress on multi-animal
tracking. The dataset and evaluation as well as our analysis will be made
available at https://hengfan2010.github.io/projects/AnimalTrack/.
|
[
{
"version": "v1",
"created": "Sat, 30 Apr 2022 04:23:59 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 15:50:07 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Zhang",
"Libo",
""
],
[
"Gao",
"Junyuan",
""
],
[
"Xiao",
"Zhen",
""
],
[
"Fan",
"Heng",
""
]
] |
new_dataset
| 0.99967 |
2205.04745
|
Hans-Peter Lehmann
|
Florian Kurpicz, Hans-Peter Lehmann, Peter Sanders
|
PaCHash: Packed and Compressed Hash Tables
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce PaCHash, a hash table that stores its objects contiguously in an
array without intervening space, even if the objects have variable size. In
particular, each object can be compressed using standard compression
techniques. A small search data structure allows locating the objects in
constant expected time. PaCHash is most naturally described as a static
external hash table where it needs a constant number of bits of internal memory
per block of external memory. Here, in some sense, PaCHash beats a lower bound
on the space consumption of k-perfect hashing. An implementation for fast SSDs
needs about 5 bits of internal memory per block of external memory, requires
only one disk access (of variable length) per search operation, and has small
internal search overhead compared to the disk access cost. Our experiments show
that it has lower space consumption than all previous approaches even when
considering objects of identical size.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 08:42:03 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 12:59:10 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Nov 2022 13:29:31 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Kurpicz",
"Florian",
""
],
[
"Lehmann",
"Hans-Peter",
""
],
[
"Sanders",
"Peter",
""
]
] |
new_dataset
| 0.999537 |
2205.12598
|
Soumya Sanyal
|
Soumya Sanyal, Zeyi Liao, Xiang Ren
|
RobustLR: Evaluating Robustness to Logical Perturbation in Deductive
Reasoning
|
  Accepted at EMNLP 2022, code available at
https://github.com/INK-USC/RobustLR
| null | null | null |
cs.CL cs.LG cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers have been shown to be able to perform deductive reasoning on a
logical rulebase containing rules and statements written in English natural
language. While the progress is promising, it is currently unclear if these
models indeed perform logical reasoning by understanding the underlying logical
semantics in the language. To this end, we propose RobustLR, a suite of
evaluation datasets that evaluate the robustness of these models to minimal
logical edits in rulebases and some standard logical equivalence conditions. In
our experiments with RoBERTa and T5, we find that the models trained in prior
works do not perform consistently on the different perturbations in RobustLR,
thus showing that the models are not robust to the proposed logical
perturbations. Further, we find that the models find it especially hard to
learn logical negation and disjunction operators. Overall, using our evaluation
sets, we demonstrate some shortcomings of the deductive reasoning-based
language models, which can eventually help towards designing better models for
logical reasoning over natural language. All the datasets and code base have
been made publicly available.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 09:23:50 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 06:14:13 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Sanyal",
"Soumya",
""
],
[
"Liao",
"Zeyi",
""
],
[
"Ren",
"Xiang",
""
]
] |
new_dataset
| 0.992587 |
2205.15670
|
Akash Patel
|
Akash Patel, Bj\"orn Lindqvist, Christoforos Kanellakis, Ali-akbar
Agha-mohammadi and George Nikolakopoulos
|
REF: A Rapid Exploration Framework for Deploying Autonomous MAVs in
Unknown Environments
| null |
Journal of Intelligent and Robotics System 2022
| null | null |
cs.RO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Exploration and mapping of unknown environments is a fundamental task in
applications for autonomous robots. In this article, we present a complete
framework for deploying MAVs in autonomous exploration missions in unknown
subterranean areas. The main goal of exploration algorithms is to determine
the next best frontier for the robot such that new ground can be covered in a
fast, safe, yet efficient manner. The proposed framework uses a novel frontier
selection method that also contributes to the safe navigation of autonomous
robots in obstructed areas such as subterranean caves, mines, and urban areas.
The framework presented in this work bifurcates the exploration problem into
local and global exploration. The proposed exploration framework is also
adaptable to the computational resources available onboard the robot, which
means a trade-off can be made between the speed of exploration and the quality
of the map. Such capability allows the proposed framework to be deployed in
subterranean exploration and mapping, as well as in fast search and rescue
scenarios. The overall system is considered a low-complexity and
baseline solution for navigation and object localization in tunnel-like
environments. The performance of the proposed framework is evaluated in
detailed simulation studies with comparisons made against a high-level
exploration-planning framework developed for the DARPA Sub-T challenge as it
will be presented in this article.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2022 10:23:02 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Jun 2022 10:04:25 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Jun 2022 13:17:52 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Nov 2022 06:59:28 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Patel",
"Akash",
""
],
[
"Lindqvist",
"Björn",
""
],
[
"Kanellakis",
"Christoforos",
""
],
[
"Agha-mohammadi",
"Ali-akbar",
""
],
[
"Nikolakopoulos",
"George",
""
]
] |
new_dataset
| 0.968431 |
2206.00897
|
Ritwik Gupta
|
Fernando Paolo, Tsu-ting Tim Lin, Ritwik Gupta, Bryce Goodman, Nirav
Patel, Daniel Kuster, David Kroodsma, Jared Dunnmon
|
xView3-SAR: Detecting Dark Fishing Activity Using Synthetic Aperture
Radar Imagery
|
Accepted to NeurIPS 2022. 10 pages (25 with references and
supplement)
| null | null | null |
cs.CV cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsustainable fishing practices worldwide pose a major threat to marine
resources and ecosystems. Identifying vessels that do not show up in
conventional monitoring systems -- known as ``dark vessels'' -- is key to
managing and securing the health of marine environments. With the rise of
satellite-based synthetic aperture radar (SAR) imaging and modern machine
learning (ML), it is now possible to automate detection of dark vessels day or
night, under all-weather conditions. SAR images, however, require a
domain-specific treatment and are not widely accessible to the ML community.
Maritime objects (vessels and offshore infrastructure) are relatively small and
sparse, challenging traditional computer vision approaches. We present the
largest labeled dataset for training ML models to detect and characterize
vessels and ocean structures in SAR imagery. xView3-SAR consists of nearly
1,000 analysis-ready SAR images from the Sentinel-1 mission that are, on
average, 29,400-by-24,400 pixels each. The images are annotated using a
combination of automated and manual analysis. Co-located bathymetry and wind
state rasters accompany every SAR image. We also provide an overview of the
xView3 Computer Vision Challenge, an international competition using xView3-SAR
for ship detection and characterization at large scale. We release the data
(\href{https://iuu.xview.us/}{https://iuu.xview.us/}) and code
(\href{https://github.com/DIUx-xView}{https://github.com/DIUx-xView}) to
support ongoing development and evaluation of ML approaches for this important
application.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 06:53:45 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 18:33:07 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Sep 2022 23:29:56 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Nov 2022 09:53:31 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Paolo",
"Fernando",
""
],
[
"Lin",
"Tsu-ting Tim",
""
],
[
"Gupta",
"Ritwik",
""
],
[
"Goodman",
"Bryce",
""
],
[
"Patel",
"Nirav",
""
],
[
"Kuster",
"Daniel",
""
],
[
"Kroodsma",
"David",
""
],
[
"Dunnmon",
"Jared",
""
]
] |
new_dataset
| 0.99958 |
2206.07348
|
Quanfeng Xu
|
Quanfeng Xu, Yi Tang, Yumei She
|
Unsupervised multi-branch Capsule for Hyperspectral and LiDAR
classification
|
10 pages
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
  With the convenient availability of remote sensing data, how to make models
interpret complex remote sensing data attracts wide attention. In remote
sensing data, hyperspectral images contain spectral information and LiDAR
contains elevation information. Hence, more explorations are warranted to
better fuse the features of different source data. In this paper, we introduce
semantic understanding to dynamically fuse data from two different sources,
extract features of HSI and LiDAR through different capsule network branches
and improve self-supervised loss and random rigid rotation in Canonical Capsule
to a high-dimensional situation. Canonical Capsule computes the capsule
decomposition of objects by permutation-equivariant attention and the process
is self-supervised by training pairs of randomly rotated objects. After fusing
the features of HSI and LiDAR with semantic understanding, the unsupervised
extraction of spectral-spatial-elevation fusion features is achieved. With two
real-world examples of HSI and LiDAR fused, the experimental results show that
the proposed multi-branch high-dimensional canonical capsule algorithm can be
effective for semantic understanding of HSI and LiDAR. It indicates that the
model can extract HSI and LiDAR data features effectively as opposed to
existing models for unsupervised extraction of multi-source RS data.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 07:57:58 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 05:48:26 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Xu",
"Quanfeng",
""
],
[
"Tang",
"Yi",
""
],
[
"She",
"Yumei",
""
]
] |
new_dataset
| 0.997879 |
2208.09577
|
Yuan Zhang
|
Xudong Gong, Qinlin Feng, Yuan Zhang, Jiangling Qin, Weijie Ding, Biao
Li, Peng Jiang, Kun Gai
|
Real-time Short Video Recommendation on Mobile Devices
|
Accepted by CIKM 2022, 10 pages
| null |
10.1145/3511808
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Short video applications have attracted billions of users in recent years,
fulfilling their various needs with diverse content. Users usually watch short
videos on many topics on mobile devices in a short period of time, and give
explicit or implicit feedback very quickly to the short videos they watch. The
recommender system needs to perceive users' preferences in real-time in order
to satisfy their changing interests. Traditionally, recommender systems
deployed at the server side return a ranked list of videos for each request
from the client, and thus cannot adjust the recommendation results according
to the user's real-time feedback before the next request. Due to client-server
transmission latency, they are also unable to make immediate use of users'
real-time feedback. However, as users continue to watch videos and give
feedback, the changing context renders the server-side ranking inaccurate. In
this paper, we propose to deploy a short video recommendation
framework on mobile devices to solve these problems. Specifically, we design
and deploy a tiny on-device ranking model to enable real-time re-ranking of
server-side recommendation results. We improve its prediction accuracy by
exploiting users' real-time feedback of watched videos and client-specific
real-time features. With more accurate predictions, we further consider
interactions among candidate videos, and propose a context-aware re-ranking
method based on adaptive beam search. The framework has been deployed on
Kuaishou, a billion-user scale short video application, and improved effective
view, like and follow by 1.28%, 8.22% and 13.6% respectively.
|
[
{
"version": "v1",
"created": "Sat, 20 Aug 2022 02:00:16 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 03:49:46 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Gong",
"Xudong",
""
],
[
"Feng",
"Qinlin",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Qin",
"Jiangling",
""
],
[
"Ding",
"Weijie",
""
],
[
"Li",
"Biao",
""
],
[
"Jiang",
"Peng",
""
],
[
"Gai",
"Kun",
""
]
] |
new_dataset
| 0.997256 |
2210.01560
|
Hans-Peter Lehmann
|
Hans-Peter Lehmann, Peter Sanders, Stefan Walzer
|
SicHash -- Small Irregular Cuckoo Tables for Perfect Hashing
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Perfect Hash Function (PHF) is a hash function that has no collisions on a
given input set. PHFs can be used for space efficient storage of data in an
array, or for determining a compact representative of each object in the set.
In this paper, we present the PHF construction algorithm SicHash - Small
Irregular Cuckoo Tables for Perfect Hashing. At its core, SicHash uses a known
technique: It places objects in a cuckoo hash table and then stores the final
hash function choice of each object in a retrieval data structure. We combine
the idea with irregular cuckoo hashing, where each object has a different
number of hash functions. Additionally, we use many small tables that we
overload beyond their asymptotic maximum load factor. The most space efficient
competitors often use brute force methods to determine the PHFs. SicHash
provides a more direct construction algorithm that only rarely needs to
recompute parts. Our implementation improves the state of the art in terms of
space usage versus construction time for a wide range of configurations. At the
same time, it provides very fast queries.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 12:31:47 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 16:32:57 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Lehmann",
"Hans-Peter",
""
],
[
"Sanders",
"Peter",
""
],
[
"Walzer",
"Stefan",
""
]
] |
new_dataset
| 0.999391 |
2211.00917
|
Tianqi Zhang
|
Tianqi Zhang, Tong Shen, Kai Yuan, Kaiwen Xue and Huihuan Qian
|
A Novel Autonomous Robotics System for Aquaculture Environment
Monitoring
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Implementing fully automatic unmanned surface vehicles (USVs) for water
quality monitoring is challenging, since it is hard to collect environmental
data effectively while keeping the platform stable and environmentally
friendly. To address this problem, we construct a USV that can automatically
navigate an efficient path to sample water quality parameters in order to
monitor the aquatic environment. The detection device needs to be stable enough
to withstand a hostile environment or climate, while an overly large hull
volume would disturb the aquaculture environment. Meanwhile, planning an
efficient path for information collection must deal with the contradiction
between the energy budget and the amount of information gathered in the
coverage region. To tackle these challenges, we provide a USV platform that
balances mobility, stability, and portability, attributed to its special
round-shape structure and redundant motion design. For informative planning,
we combine the TSP and CPP algorithms to construct a plan for collecting more
data within a certain range under energy restrictions. We designed a fish
existence prediction scenario to verify the novel system in both simulation
and field experiments. The novel aquaculture
environment monitoring system significantly reduces the burden of manual
operation in the fishery inspection field. Additionally, the simplicity of the
sensor setup and the minimal cost of the platform enables its other possible
applications in aquatic exploration and commercial utilization.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 07:00:15 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 02:09:49 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Zhang",
"Tianqi",
""
],
[
"Shen",
"Tong",
""
],
[
"Yuan",
"Kai",
""
],
[
"Xue",
"Kaiwen",
""
],
[
"Qian",
"Huihuan",
""
]
] |
new_dataset
| 0.997089 |
2211.03889
|
Samarth Sinha
|
Samarth Sinha, Roman Shapovalov, Jeremy Reizenstein, Ignacio Rocco,
Natalia Neverova, Andrea Vedaldi, David Novotny
|
Common Pets in 3D: Dynamic New-View Synthesis of Real-Life Deformable
Categories
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Obtaining photorealistic reconstructions of objects from sparse views is
inherently ambiguous and can only be achieved by learning suitable
reconstruction priors. Earlier works on sparse rigid object reconstruction
successfully learned such priors from large datasets such as CO3D. In this
paper, we extend this approach to dynamic objects. We use cats and dogs as a
representative example and introduce Common Pets in 3D (CoP3D), a collection of
crowd-sourced videos showing around 4,200 distinct pets. CoP3D is one of the
first large-scale datasets for benchmarking non-rigid 3D reconstruction "in the
wild". We also propose Tracker-NeRF, a method for learning 4D reconstruction
from our dataset. At test time, given a small number of video frames of an
unseen object, Tracker-NeRF predicts the trajectories of its 3D points and
generates new views, interpolating viewpoint and time. Results on CoP3D reveal
significantly better non-rigid new-view synthesis performance than existing
baselines.
|
[
{
"version": "v1",
"created": "Mon, 7 Nov 2022 22:42:42 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Sinha",
"Samarth",
""
],
[
"Shapovalov",
"Roman",
""
],
[
"Reizenstein",
"Jeremy",
""
],
[
"Rocco",
"Ignacio",
""
],
[
"Neverova",
"Natalia",
""
],
[
"Vedaldi",
"Andrea",
""
],
[
"Novotny",
"David",
""
]
] |
new_dataset
| 0.995305 |
2211.03977
|
Yunsheng Tian
|
Yunsheng Tian, Jie Xu, Yichen Li, Jieliang Luo, Shinjiro Sueda, Hui
Li, Karl D.D. Willis, Wojciech Matusik
|
Assemble Them All: Physics-Based Planning for Generalizable Assembly by
Disassembly
|
Accepted by SIGGRAPH Asia 2022. Project website:
http://assembly.csail.mit.edu/
| null |
10.1145/3550454.3555525
| null |
cs.RO cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assembly planning is the core of automating product assembly, maintenance,
and recycling for modern industrial manufacturing. Despite its importance and
long history of research, planning for mechanical assemblies when given the
final assembled state remains a challenging problem. This is due to the
complexity of dealing with arbitrary 3D shapes and the highly constrained
motion required for real-world assemblies. In this work, we propose a novel
method to efficiently plan physically plausible assembly motion and sequences
for real-world assemblies. Our method leverages the assembly-by-disassembly
principle and physics-based simulation to efficiently explore a reduced search
space. To evaluate the generality of our method, we define a large-scale
dataset consisting of thousands of physically valid industrial assemblies with
a variety of assembly motions required. Our experiments on this new benchmark
demonstrate we achieve a state-of-the-art success rate and the highest
computational efficiency compared to other baseline algorithms. Our method also
generalizes to rotational assemblies (e.g., screws and puzzles) and solves
80-part assemblies within several minutes.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 03:15:15 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Tian",
"Yunsheng",
""
],
[
"Xu",
"Jie",
""
],
[
"Li",
"Yichen",
""
],
[
"Luo",
"Jieliang",
""
],
[
"Sueda",
"Shinjiro",
""
],
[
"Li",
"Hui",
""
],
[
"Willis",
"Karl D. D.",
""
],
[
"Matusik",
"Wojciech",
""
]
] |
new_dataset
| 0.960402 |
2211.04002
|
Robin Hankin Dr
|
Robin K. S. Hankin
|
The free algebra in R
|
5 pages
| null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The free algebra is an interesting and useful algebraic object. Here I
introduce "freealg", an R package which furnishes computational support for
free algebras. The package uses the standard template library's "map" class for
efficiency, which uses the fact that the order of the terms is algebraically
immaterial. The package follows "disordR" discipline. I demonstrate some
properties of free algebra using the package, and showcase package idiom. The
package is available on CRAN at https://CRAN.R-project.org/package=freealg.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 04:54:39 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Hankin",
"Robin K. S.",
""
]
] |
new_dataset
| 0.996955 |
2211.04013
|
Arusarka Bose
|
Arusarka Bose (1), Zili Zhou (2), Guandong Xu (3) ((1) Indian
Institute of Technology Kharagpur, (2) University of Manchester, (3)
University of Technology Sydney)
|
COV19IR : COVID-19 Domain Literature Information Retrieval
| null | null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
  The increasing number of COVID-19 research publications creates new
challenges in effective literature screening and COVID-19 domain-knowledge-aware
information retrieval. To tackle these challenges, we demonstrate two tasks
along with their solutions: COVID-19 literature retrieval and question
answering. The COVID-19 literature retrieval task screens matching COVID-19
literature documents for a textual user query, and the COVID-19 question
answering task predicts proper text fragments from the text corpus as answers
to specific COVID-19-related questions. Based on transformer neural networks,
we provide solutions to implement the tasks on the CORD-19 dataset and display
some examples to show the effectiveness of our proposed solutions.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 05:12:37 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Bose",
"Arusarka",
""
],
[
"Zhou",
"Zili",
""
],
[
"Xu",
"Guandong",
""
]
] |
new_dataset
| 0.950662 |
2211.04062
|
Xu Chen
|
Xu Chen, Zhiyong Feng, Zhiqing Wei, J. Andrew Zhang, Xin Yuan, Ping
Zhang
|
Concurrent Downlink and Uplink Joint Communication and Sensing for 6G
Networks
|
  5 pages, 5 figures, submitted to IEEE Transactions on Vehicular
Technology correspondence
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Joint communication and sensing (JCAS) is a promising technology for 6th
Generation (6G) mobile networks, such as intelligent vehicular networks,
intelligent manufacturing, and so on. Equipped with two spatially separated
antenna arrays, the base station (BS) can perform downlink active JCAS in a
mono-static setup. This paper proposes a Concurrent Downlink and Uplink (CDU)
JCAS system where the BS can use the echo of transmitted dedicated signals for
sensing in the uplink timeslot, while performing reliable uplink communication.
A novel successive interference cancellation-based CDU JCAS processing method
is proposed to enable the estimation of uplink communication symbols and
downlink sensing parameters. Extensive simulation results verify the
feasibility of the CDU JCAS system, showing a performance improvement of more
than 10 dB compared to traditional JCAS methods while maintaining reliable
uplink communication.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 07:44:20 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Chen",
"Xu",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Wei",
"Zhiqing",
""
],
[
"Zhang",
"J. Andrew",
""
],
[
"Yuan",
"Xin",
""
],
[
"Zhang",
"Ping",
""
]
] |
new_dataset
| 0.979353 |
2211.04094
|
Xavier Granier Pr. Dr. Eng.
|
Sarah Tournon-Valiente, Vincent Baillet, Mehdi Chayani, Bruno
Dutailly, Xavier Granier, Valentin Grimaud
|
The French National 3D Data Repository for Humanities: Features,
Feedback and Open Questions
|
CAA 2021 - "Digital Crossroads" full paper version (in review)
| null | null | null |
cs.DL cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce the French National 3D Data Repository for Humanities designed
for the conservation and the publication of 3D research data in the field of
Humanities and Social Sciences. We present the choices made for the data
organization, metadata, standards and infrastructure towards a FAIR service.
With 437 references at the time of writing, we have gathered feedback on some
of the challenges of developing such a service and making it widely used. This
leads to
open questions and future developments.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 08:52:16 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Tournon-Valiente",
"Sarah",
""
],
[
"Baillet",
"Vincent",
""
],
[
"Chayani",
"Mehdi",
""
],
[
"Dutailly",
"Bruno",
""
],
[
"Granier",
"Xavier",
""
],
[
"Grimaud",
"Valentin",
""
]
] |
new_dataset
| 0.999126 |
2211.04108
|
Daan Bloembergen
|
Cl\'audia Fonseca Pinh\~ao, Chris Eijgenstein, Iva Gornishka, Shayla
Jansen, Diederik M. Roijers, Daan Bloembergen
|
Determining Accessible Sidewalk Width by Extracting Obstacle Information
from Point Clouds
|
4 pages, 9 figures. Presented at the workshop on "The Future of Urban
Accessibility" at ACM ASSETS'22. Code for this paper is available at
https://github.com/Amsterdam-AI-Team/Urban_PointCloud_Sidewalk_Width
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Obstacles on the sidewalk often block the path, limiting passage and
resulting in frustration and wasted time, especially for citizens and visitors
who use assistive devices (wheelchairs, walkers, strollers, canes, etc). To
enable equal participation and use of the city, all citizens should be able to
perform and complete their daily activities in a similar amount of time and
effort. Therefore, we aim to offer accessibility information regarding
sidewalks, so that citizens can better plan their routes, and to help city
officials identify the location of bottlenecks and act on them. In this paper
we propose a novel pipeline to estimate obstacle-free sidewalk widths based on
3D point cloud data of the city of Amsterdam, as the first step to offer a more
complete set of information regarding sidewalk accessibility.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 09:19:16 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Pinhão",
"Cláudia Fonseca",
""
],
[
"Eijgenstein",
"Chris",
""
],
[
"Gornishka",
"Iva",
""
],
[
"Jansen",
"Shayla",
""
],
[
"Roijers",
"Diederik M.",
""
],
[
"Bloembergen",
"Daan",
""
]
] |
new_dataset
| 0.996227 |
2211.04253
|
Chunzhuo Wang
|
Chunzhuo Wang, T. Sunil Kumar, Walter De Raedt, Guido Camps, Hans
Hallez, Bart Vanrumste
|
Eat-Radar: Continuous Fine-Grained Eating Gesture Detection Using FMCW
Radar and 3D Temporal Convolutional Network
| null | null | null | null |
cs.CV eess.IV eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unhealthy dietary habits are considered as the primary cause of multiple
chronic diseases such as obesity and diabetes. The automatic food intake
monitoring system has the potential to improve the quality of life (QoF) of
people with dietary related diseases through dietary assessment. In this work,
we propose a novel contact-less radar-based food intake monitoring approach.
Specifically, a Frequency Modulated Continuous Wave (FMCW) radar sensor is
employed to recognize fine-grained eating and drinking gestures. The
fine-grained eating/drinking gesture contains a series of movements, from
raising the hand to the mouth until moving the hand away from the mouth. A 3D
temporal
convolutional network (3D-TCN) is developed to detect and segment eating and
drinking gestures in meal sessions by processing the Range-Doppler Cube (RD
Cube). Unlike previous radar-based research, this work collects data in
continuous meal sessions. We create a public dataset that contains 48 meal
sessions (3121 eating gestures and 608 drinking gestures) from 48 participants
with a total duration of 783 minutes. Four eating styles (fork & knife,
chopsticks, spoon, hand) are included in this dataset. To validate the
performance of the proposed approach, an 8-fold cross-validation method is
applied. Experimental results show that our proposed 3D-TCN outperforms the
model that combines a convolutional neural network and a long-short-term-memory
network (CNN-LSTM), and also the CNN-Bidirectional LSTM model (CNN-BiLSTM) in
eating and drinking gesture detection. The 3D-TCN model achieves a segmental
F1-score of 0.887 and 0.844 for eating and drinking gestures, respectively. The
results of the proposed approach indicate the feasibility of using radar for
fine-grained eating and drinking gesture detection and segmentation in meal
sessions.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 14:03:44 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Wang",
"Chunzhuo",
""
],
[
"Kumar",
"T. Sunil",
""
],
[
"De Raedt",
"Walter",
""
],
[
"Camps",
"Guido",
""
],
[
"Hallez",
"Hans",
""
],
[
"Vanrumste",
"Bart",
""
]
] |
new_dataset
| 0.99853 |
2211.04269
|
Daniel Romero
|
Daniel Romero, Peter Gerstoft, Hadi Givehchian, Dinesh Bharadia
|
Spoofing Attack Detection in the Physical Layer with Commutative Neural
Networks
| null | null | null | null |
cs.LG cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a spoofing attack, an attacker impersonates a legitimate user to access or
tamper with data intended for or produced by the legitimate user. In wireless
communication systems, these attacks may be detected by relying on features of
the channel and transmitter radios. In this context, a popular approach is to
exploit the dependence of the received signal strength (RSS) at multiple
receivers or access points with respect to the spatial location of the
transmitter. Existing schemes rely on long-term estimates, which makes it
difficult to distinguish spoofing from movement of a legitimate user. This
limitation is here addressed by means of a deep neural network that implicitly
learns the distribution of pairs of short-term RSS vector estimates. The
adopted network architecture imposes the invariance to permutations of the
input (commutativity) that the decision problem exhibits. The merits of the
proposed algorithm are corroborated on a data set that we collected.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 14:20:58 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Romero",
"Daniel",
""
],
[
"Gerstoft",
"Peter",
""
],
[
"Givehchian",
"Hadi",
""
],
[
"Bharadia",
"Dinesh",
""
]
] |
new_dataset
| 0.951 |
2211.04458
|
Daniel Neuen
|
Akanksha Agrawal, D\'aniel Marx, Daniel Neuen, Jasper Slusallek
|
Computing Square Colorings on Bounded-Treewidth and Planar Graphs
|
72 pages, 15 figures, full version of a paper accepted at SODA 2023
| null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A square coloring of a graph $G$ is a coloring of the square $G^2$ of $G$,
that is, a coloring of the vertices of $G$ such that any two vertices that are
at distance at most $2$ in $G$ receive different colors. We investigate the
complexity of finding a square coloring with a given number of $q$ colors. We
show that the problem is polynomial-time solvable on graphs of bounded
treewidth by presenting an algorithm with running time $n^{2^{\operatorname{tw}
+ 4}+O(1)}$ for graphs of treewidth at most $\operatorname{tw}$. The somewhat
unusual exponent $2^{\operatorname{tw}}$ in the running time is essentially
optimal: we show that for any $\epsilon>0$, there is no algorithm with running
time $f(\operatorname{tw})n^{(2-\epsilon)^{\operatorname{tw}}}$ unless the
Exponential-Time Hypothesis (ETH) fails.
We also show that the square coloring problem is NP-hard on planar graphs for
any fixed number $q \ge 4$ of colors. Our main algorithmic result is showing
that the problem (when the number of colors $q$ is part of the input) can be
solved in subexponential time $2^{O(n^{2/3}\log n)}$ on planar graphs. The
result follows from the combination of two algorithms. If the number $q$ of
colors is small ($\le n^{1/3}$), then we can exploit a treewidth bound on the
square of the graph to solve the problem in time $2^{O(\sqrt{qn}\log n)}$. If
the number of colors is large ($\ge n^{1/3}$), then an algorithm based on
protrusion decompositions and building on our result for the bounded-treewidth
case solves the problem in time $2^{O(n\log n/q)}$.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 18:52:11 GMT"
}
] | 2022-11-09T00:00:00 |
[
[
"Agrawal",
"Akanksha",
""
],
[
"Marx",
"Dániel",
""
],
[
"Neuen",
"Daniel",
""
],
[
"Slusallek",
"Jasper",
""
]
] |
new_dataset
| 0.986682 |
2009.00514
|
Christophe Lecoutre
|
Fr\'ed\'eric Boussemart and Christophe Lecoutre and Gilles Audemard
and C\'edric Piette
|
XCSP3-core: A Format for Representing Constraint
Satisfaction/Optimization Problems
|
arXiv admin note: substantial text overlap with arXiv:1611.03398
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this document, we introduce XCSP3-core, a subset of XCSP3 that allows us
to represent constraint satisfaction/optimization problems. The interest of
XCSP3-core is multiple: (i) focusing on the most popular frameworks (CSP and
COP) and constraints, (ii) facilitating the parsing process by means of
dedicated XCSP3-core parsers written in Java and C++ (using callback
functions), and (iii) defining a core format for comparisons (competitions) of
constraint solvers.
|
[
{
"version": "v1",
"created": "Tue, 1 Sep 2020 15:24:49 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Jan 2021 12:00:45 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Nov 2022 11:05:36 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Boussemart",
"Frédéric",
""
],
[
"Lecoutre",
"Christophe",
""
],
[
"Audemard",
"Gilles",
""
],
[
"Piette",
"Cédric",
""
]
] |
new_dataset
| 0.988239 |
2109.02942
|
Yang Su Mr.
|
Yansong Gao, Yang Su, Surya Nepal, Damith C. Ranasinghe
|
NoisFre: Noise-Tolerant Memory Fingerprints from Commodity Devices for
Security Functions
|
Accepted to IEEE Transactions on Dependable and Secure Computing.
Yansong Gao and Yang Su contributed equally to the study and are co-first
authors in alphabetical order
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Building hardware security primitives with on-device memory fingerprints is a
compelling proposition given the ubiquity of memory in electronic devices,
especially for low-end Internet of Things devices for which cryptographic
modules are often unavailable. However, the use of fingerprints in security
functions is challenged by the small, but unpredictable variations in
fingerprint reproductions from the same device due to measurement noise. Our
study formulates a novel and pragmatic approach to achieve highly reliable
fingerprints from device memories. We investigate the transformation of raw
fingerprints into a noise-tolerant space where the generation of fingerprints
is intrinsically highly reliable. We derive formal performance bounds to
support practitioners to easily adopt our methods for applications.
Subsequently, we demonstrate the expressive power of our formalization by using
it to investigate the practicability of extracting noise-tolerant fingerprints
from commodity devices. Together with extensive simulations, we have employed
119 chips from five different manufacturers for extensive experimental
validations. Our results, including an end-to-end implementation demonstration
with a low-cost wearable Bluetooth inertial sensor capable of on-demand and
runtime key generation, show that key generators with failure rates less than
$10^{-6}$ can be efficiently obtained with noise-tolerant fingerprints with a
single fingerprint snapshot to support ease-of-enrollment.
|
[
{
"version": "v1",
"created": "Tue, 7 Sep 2021 08:49:03 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 04:43:04 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Gao",
"Yansong",
""
],
[
"Su",
"Yang",
""
],
[
"Nepal",
"Surya",
""
],
[
"Ranasinghe",
"Damith C.",
""
]
] |
new_dataset
| 0.998984 |
2110.08520
|
Neha Kennard
|
Neha Kennard, Tim O'Gorman, Rajarshi Das, Akshay Sharma, Chhandak
Bagchi, Matthew Clinton, Pranay Kumar Yelugam, Hamed Zamani, Andrew McCallum
|
DISAPERE: A Dataset for Discourse Structure in Peer Review Discussions
| null | null |
10.18653/v1/2022.naacl-main.89
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
At the foundation of scientific evaluation is the labor-intensive process of
peer review. This critical task requires participants to consume vast amounts
of highly technical text. Prior work has annotated different aspects of review
argumentation, but discourse relations between reviews and rebuttals have yet
to be examined. We present DISAPERE, a labeled dataset of 20k sentences
contained in 506 review-rebuttal pairs in English, annotated by experts.
DISAPERE synthesizes label sets from prior work and extends them to include
fine-grained annotation of the rebuttal sentences, characterizing their context
in the review and the authors' stance towards review arguments. Further, we
annotate every review and rebuttal sentence. We show that discourse cues from
rebuttals can shed light on the quality and interpretation of reviews. Further,
an understanding of the argumentative strategies employed by the reviewers and
authors provides useful signal for area chairs and other decision makers.
|
[
{
"version": "v1",
"created": "Sat, 16 Oct 2021 09:18:12 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 01:29:28 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Kennard",
"Neha",
""
],
[
"O'Gorman",
"Tim",
""
],
[
"Das",
"Rajarshi",
""
],
[
"Sharma",
"Akshay",
""
],
[
"Bagchi",
"Chhandak",
""
],
[
"Clinton",
"Matthew",
""
],
[
"Yelugam",
"Pranay Kumar",
""
],
[
"Zamani",
"Hamed",
""
],
[
"McCallum",
"Andrew",
""
]
] |
new_dataset
| 0.999651 |
2110.11316
|
Andreas F\"urst
|
Andreas F\"urst, Elisabeth Rumetshofer, Johannes Lehner, Viet Tran,
Fei Tang, Hubert Ramsauer, David Kreil, Michael Kopp, G\"unter Klambauer,
Angela Bitto-Nemling, Sepp Hochreiter
|
CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP
|
Published at NeurIPS 2022; Blog: https://ml-jku.github.io/cloob;
GitHub: https://github.com/ml-jku/cloob
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CLIP yielded impressive results on zero-shot transfer learning tasks and is
considered a foundation model like BERT or GPT-3. CLIP vision models that
have a rich representation are pre-trained using the InfoNCE objective and
natural language supervision before they are fine-tuned on particular tasks.
Though CLIP excels at zero-shot transfer learning, it suffers from an
explaining away problem, that is, it focuses on one or few features, while
neglecting other relevant features. This problem is caused by insufficiently
extracting the covariance structure in the original multi-modal data. We
suggest to use modern Hopfield networks to tackle the problem of explaining
away. Their retrieved embeddings have an enriched covariance structure derived
from co-occurrences of features in the stored embeddings. However, modern
Hopfield networks increase the saturation effect of the InfoNCE objective which
hampers learning. We propose to use the InfoLOOB objective to mitigate this
saturation effect. We introduce the novel "Contrastive Leave One Out Boost"
(CLOOB), which uses modern Hopfield networks for covariance enrichment together
with the InfoLOOB objective. In experiments we compare CLOOB to CLIP after
pre-training on the Conceptual Captions and the YFCC dataset with respect to
their zero-shot transfer learning performance on other datasets. CLOOB
consistently outperforms CLIP at zero-shot transfer learning across all
considered architectures and datasets.
|
[
{
"version": "v1",
"created": "Thu, 21 Oct 2021 17:50:48 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Feb 2022 09:49:52 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Jun 2022 06:54:47 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Nov 2022 13:57:43 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Fürst",
"Andreas",
""
],
[
"Rumetshofer",
"Elisabeth",
""
],
[
"Lehner",
"Johannes",
""
],
[
"Tran",
"Viet",
""
],
[
"Tang",
"Fei",
""
],
[
"Ramsauer",
"Hubert",
""
],
[
"Kreil",
"David",
""
],
[
"Kopp",
"Michael",
""
],
[
"Klambauer",
"Günter",
""
],
[
"Bitto-Nemling",
"Angela",
""
],
[
"Hochreiter",
"Sepp",
""
]
] |
new_dataset
| 0.998176 |
2203.06488
|
Daniel Rika
|
Daniel Rika, Dror Sholomon, Eli David, Nathan S. Netanyahu
|
TEN: Twin Embedding Networks for the Jigsaw Puzzle Problem with Eroded
Boundaries
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the novel CNN-based encoder Twin Embedding Network
(TEN), for the jigsaw puzzle problem (JPP), which represents a puzzle piece
with respect to its boundary in a latent embedding space. Combining this latent
representation with a simple distance measure, we demonstrate improved accuracy
levels of our newly proposed pairwise compatibility measure (CM), compared to
that of various classical methods, for degraded puzzles with eroded tile
boundaries. We focus on this problem instance for our case study, as it serves
as an appropriate testbed for real-world scenarios. Specifically, we
demonstrated an improvement of up to 8.5% and 16.8% in reconstruction accuracy,
for so-called Type-1 and Type-2 problem variants, respectively. Furthermore, we
also demonstrated that TEN is faster by a few orders of magnitude, on average,
than a typical deep neural network (NN) model, i.e., it is as fast as the
classical methods. In this regard, the paper makes a significant first attempt
at bridging the gap between the relatively low accuracy (of classical methods
and the intensive computational complexity (of NN models), for practical,
real-world puzzle-like problems.
|
[
{
"version": "v1",
"created": "Sat, 12 Mar 2022 17:18:47 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 18:19:35 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Rika",
"Daniel",
""
],
[
"Sholomon",
"Dror",
""
],
[
"David",
"Eli",
""
],
[
"Netanyahu",
"Nathan S.",
""
]
] |
new_dataset
| 0.995279 |
2203.11449
|
Chenyun Wu
|
Chenyun Wu and Subhransu Maji
|
How well does CLIP understand texture?
|
ECCV 2022 CVinW Workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We investigate how well CLIP understands texture in natural images described
by natural language. To this end, we analyze CLIP's ability to: (1) perform
zero-shot learning on various texture and material classification datasets; (2)
represent compositional properties of texture such as red dots or yellow
stripes on the Describable Texture in Detail (DTDD) dataset; and (3) aid
fine-grained categorization of birds in photographs described by color and
texture of their body parts.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 04:07:20 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Nov 2022 02:33:24 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Wu",
"Chenyun",
""
],
[
"Maji",
"Subhransu",
""
]
] |
new_dataset
| 0.999615 |
2203.14457
|
Shancong Mou
|
Shancong Mou, Meng Cao, Haoping Bai, Ping Huang, Jianjun Shi and
Jiulong Shan
|
PAEDID: Patch Autoencoder Based Deep Image Decomposition For Pixel-level
Defective Region Segmentation
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Unsupervised pixel-level defective region segmentation is an important task
in image-based anomaly detection for various industrial applications. The
state-of-the-art methods have their own advantages and limitations:
matrix-decomposition-based methods are robust to noise but lack complex
background image modeling capability; representation-based methods are good at
defective region localization but lack accuracy in defective region shape
contour extraction; reconstruction-based methods detected defective region
match well with the ground truth defective region shape contour but are noisy.
To combine the best of both worlds, we present an unsupervised patch
autoencoder based deep image decomposition (PAEDID) method for defective region
segmentation. In the training stage, we learn the common background as a deep
image prior by a patch autoencoder (PAE) network. In the inference stage, we
formulate anomaly detection as an image decomposition problem with the deep
image prior and domain-specific regularizations. By adopting the proposed
approach, the defective regions in the image can be accurately extracted in an
unsupervised fashion. We demonstrate the effectiveness of the PAEDID method in
simulation studies and an industrial dataset in the case study.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 02:50:06 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2022 03:17:11 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Nov 2022 16:27:01 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Mou",
"Shancong",
""
],
[
"Cao",
"Meng",
""
],
[
"Bai",
"Haoping",
""
],
[
"Huang",
"Ping",
""
],
[
"Shi",
"Jianjun",
""
],
[
"Shan",
"Jiulong",
""
]
] |
new_dataset
| 0.999522 |
2203.15643
|
Radityo Eko Prasojo
|
Rendi Chevi, Radityo Eko Prasojo, Alham Fikri Aji, Andros Tjandra,
Sakriani Sakti
|
Nix-TTS: Lightweight and End-to-End Text-to-Speech via Module-wise
Distillation
|
Accepted at SLT 2022 (https://slt2022.org/). Associated materials can
be seen in https://github.com/rendchevi/nix-tts
| null | null | null |
cs.SD cs.CL cs.LG cs.NE eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several solutions for lightweight TTS have shown promising results. Still,
they either rely on a hand-crafted design that reaches a non-optimal size or
use a neural architecture search that often incurs high training costs. We present
Nix-TTS, a lightweight TTS achieved via knowledge distillation to a
high-quality yet large-sized, non-autoregressive, and end-to-end (vocoder-free)
TTS teacher model. Specifically, we offer module-wise distillation, enabling
flexible and independent distillation to the encoder and decoder module. The
resulting Nix-TTS inherited the advantageous properties of being
non-autoregressive and end-to-end from the teacher, yet significantly smaller
in size, with only 5.23M parameters or up to 89.34% reduction of the teacher
model; it also achieves over 3.04x and 8.36x inference speedup on Intel-i7 CPU
and Raspberry Pi 3B respectively and still retains a fair voice naturalness and
intelligibility compared to the teacher model. We provide pretrained models and
audio samples of Nix-TTS.
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 15:04:26 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Nov 2022 12:43:44 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Chevi",
"Rendi",
""
],
[
"Prasojo",
"Radityo Eko",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Tjandra",
"Andros",
""
],
[
"Sakti",
"Sakriani",
""
]
] |
new_dataset
| 0.998744 |
2205.09586
|
Mo Zhou
|
Mo Zhou, Vishal M. Patel
|
On Trace of PGD-Like Adversarial Attacks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial attacks pose safety and security concerns to deep learning
applications, but their characteristics are under-explored. Although largely
imperceptible, a strong trace may have been left by PGD-like attacks in an
adversarial example. Recall that PGD-like attacks trigger the ``local
linearity'' of a network, which implies different extents of linearity for
benign or adversarial examples. Inspired by this, we construct an Adversarial
Response Characteristics (ARC) feature to reflect the model's gradient
consistency around the input to indicate the extent of linearity. Under certain
conditions, it qualitatively shows a gradually varying pattern from benign
example to adversarial example, as the latter leads to Sequel Attack Effect
(SAE). To quantitatively evaluate the effectiveness of ARC, we conduct
experiments on CIFAR-10 and ImageNet for attack detection and attack type
recognition in a challenging setting. The results suggest that SAE is an
effective and unique trace of PGD-like attacks reflected through the ARC
feature. The ARC feature is intuitive, light-weighted, non-intrusive, and
data-undemanding.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 14:26:50 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Nov 2022 03:09:55 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Zhou",
"Mo",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
new_dataset
| 0.968371 |
2205.10674
|
Shushan Arakelyan
|
Shushan Arakelyan, Anna Hakhverdyan, Miltiadis Allamanis, Luis Garcia,
Christophe Hauser and Xiang Ren
|
NS3: Neuro-Symbolic Semantic Code Search
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic code search is the task of retrieving a code snippet given a textual
description of its functionality. Recent work has been focused on using
similarity metrics between neural embeddings of text and code. However, current
language models are known to struggle with longer, compositional text, and
multi-step reasoning. To overcome this limitation, we propose supplementing the
query sentence with a layout of its semantic structure. The semantic layout is
used to break down the final reasoning decision into a series of lower-level
decisions. We use a Neural Module Network architecture to implement this idea.
We compare our model - NS3 (Neuro-Symbolic Semantic Search) - to a number of
baselines, including state-of-the-art semantic code retrieval methods, and
evaluate on two datasets - CodeSearchNet and Code Search and Question
Answering. We demonstrate that our approach results in more precise code
retrieval, and we study the effectiveness of our modular design when handling
compositional queries.
|
[
{
"version": "v1",
"created": "Sat, 21 May 2022 20:55:57 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 18:48:53 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Arakelyan",
"Shushan",
""
],
[
"Hakhverdyan",
"Anna",
""
],
[
"Allamanis",
"Miltiadis",
""
],
[
"Garcia",
"Luis",
""
],
[
"Hauser",
"Christophe",
""
],
[
"Ren",
"Xiang",
""
]
] |
new_dataset
| 0.994815 |
2205.11315
|
Younghoon Jeong
|
Younghoon Jeong, Juhyun Oh, Jaimeen Ahn, Jongwon Lee, Jihyung Moon,
Sungjoon Park, Alice Oh
|
KOLD: Korean Offensive Language Dataset
|
9 pages, 2 figures
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent directions for offensive language detection are hierarchical modeling,
identifying the type and the target of offensive language, and interpretability
with offensive span annotation and prediction. These improvements are focused
on English and do not transfer well to other languages because of cultural and
linguistic differences. In this paper, we present the Korean Offensive Language
Dataset (KOLD) comprising 40,429 comments, which are annotated hierarchically
with the type and the target of offensive language, accompanied by annotations
of the corresponding text spans. We collect the comments from the NAVER news
and YouTube platforms and provide the titles of the articles and videos as the
context information for the annotation process. We use these annotated comments
as training data for Korean BERT and RoBERTa models and find that they are
effective at offensiveness detection, target classification, and target span
detection while having room for improvement for target group classification and
offensive span detection. We discover that the target group distribution
differs drastically from the existing English datasets, and observe that
providing the context information improves the model performance in
offensiveness detection (+0.3), target classification (+1.5), and target group
classification (+13.1). We publicly release the dataset and baseline models.
|
[
{
"version": "v1",
"created": "Mon, 23 May 2022 13:58:45 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Nov 2022 01:36:35 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Jeong",
"Younghoon",
""
],
[
"Oh",
"Juhyun",
""
],
[
"Ahn",
"Jaimeen",
""
],
[
"Lee",
"Jongwon",
""
],
[
"Moon",
"Jihyung",
""
],
[
"Park",
"Sungjoon",
""
],
[
"Oh",
"Alice",
""
]
] |
new_dataset
| 0.999785 |
2206.09422
|
Nasif Imtiaz
|
Nasif Imtiaz and Laurie Williams
|
Are your dependencies code reviewed?: Measuring code review coverage in
dependency updates
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As modern software extensively uses free open source packages as
dependencies, developers have to regularly pull in new third-party code through
frequent updates. However, without a proper review of every incoming change,
vulnerable and malicious code can sneak into the codebase through these
dependencies. The goal of this study is to aid developers in securely accepting
dependency updates by measuring whether the code changes in an update have
passed through a code review process. We implement Depdive, an update audit
tool for packages in the Crates.io, npm, PyPI, and RubyGems registries. Depdive
first (i)
identifies the files and the code changes in an update that cannot be traced
back to the package's source repository, i.e., \textit{phantom artifacts}; and
then (ii) measures what portion of changes in the update, excluding the phantom
artifacts, has passed through a code review process, i.e., \textit{code review
coverage} (CRC).
Using Depdive, we present an empirical study across the latest ten updates of
the most downloaded 1000 packages in each of the four registries. We further
evaluate our results through a maintainer agreement survey. We find that the
updates are typically only partially code-reviewed (52.5\% of the time).
Further, only 9.0\% of the packages had all their updates in our data set fully
code-reviewed, indicating that even the most used packages can introduce
non-reviewed code in the software supply chain. We also observe that updates
tend to have either high or low \textit{CRC}, suggesting that packages at
opposite ends of the spectrum may require separate sets of treatments.
|
[
{
"version": "v1",
"created": "Sun, 19 Jun 2022 14:48:48 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 18:17:21 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Imtiaz",
"Nasif",
""
],
[
"Williams",
"Laurie",
""
]
] |
new_dataset
| 0.988176 |
2206.14786
|
Saleh Ashkboos
|
Saleh Ashkboos, Langwen Huang, Nikoli Dryden, Tal Ben-Nun, Peter
Dueben, Lukas Gianinazzi, Luca Kummer, Torsten Hoefler
|
ENS-10: A Dataset For Post-Processing Ensemble Weather Forecasts
|
Accepted version of the paper
| null | null | null |
cs.LG physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Post-processing ensemble prediction systems can improve the reliability of
weather forecasting, especially for extreme event prediction. In recent years,
different machine learning models have been developed to improve the quality of
weather post-processing. However, these models require a comprehensive dataset
of weather simulations to produce high-accuracy results, and generating such a
dataset comes at a high computational cost. This paper introduces the ENS-10
dataset,
consisting of ten ensemble members spanning 20 years (1998-2017). The ensemble
members are generated by perturbing numerical weather simulations to capture
the chaotic behavior of the Earth. To represent the three-dimensional state of
the atmosphere, ENS-10 provides the most relevant atmospheric variables at 11
distinct pressure levels and the surface at 0.5-degree resolution for forecast
lead times T=0, 24, and 48 hours (two data points per week). We propose the
ENS-10 prediction correction task for improving the forecast quality at a
48-hour lead time through ensemble post-processing. We provide a set of
baselines and compare their skill at correcting the predictions of three
important atmospheric variables. Moreover, we measure the baselines' skill at
improving predictions of extreme weather events using our dataset. The ENS-10
dataset is available under the Creative Commons Attribution 4.0 International
(CC BY 4.0) license.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 17:40:56 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 12:17:19 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Ashkboos",
"Saleh",
""
],
[
"Huang",
"Langwen",
""
],
[
"Dryden",
"Nikoli",
""
],
[
"Ben-Nun",
"Tal",
""
],
[
"Dueben",
"Peter",
""
],
[
"Gianinazzi",
"Lukas",
""
],
[
"Kummer",
"Luca",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
new_dataset
| 0.997708 |
2207.04706
|
Tom\'a\v{s} Bravenec
|
Tomas Bravenec, Joaqu\'in Torres-Sospedra, Michael Gould, Tomas Fryza
|
What Your Wearable Devices Revealed About You and Possibilities of
Non-Cooperative 802.11 Presence Detection During Your Last IPIN Visit
|
7 pages, 7 figures, submitted to IPIN2022 conference
| null |
10.1109/IPIN54987.2022.9918134
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The focus on privacy-related measures regarding wireless networks has grown in
the last couple of years. This is especially important for technologies like
Wi-Fi and Bluetooth, which are all around us: our smartphones use them not just
to connect to the internet or to other devices, but also for localization. In
this paper, we analyze and evaluate probe request frames of the 802.11 wireless
protocol captured during the 11th International Conference on Indoor
Positioning and Indoor Navigation (IPIN) 2021. We explore the temporal
occupancy of the conference space during the four days of the conference and
non-cooperatively track the presence of devices in the proximity of the session
rooms using 802.11 management frames, both with and without MAC address
randomization. We carried out this analysis without trying to identify or
reveal the identity of the users or in any way reverse the MAC address
randomization. As a result of the analysis, we detected that many devices still
do not adopt MAC randomization, either because it is not implemented or because
users have disabled it. In addition, many devices can be easily tracked despite
employing MAC randomization.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 08:37:54 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 11:11:46 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Bravenec",
"Tomas",
""
],
[
"Torres-Sospedra",
"Joaquín",
""
],
[
"Gould",
"Michael",
""
],
[
"Fryza",
"Tomas",
""
]
] |
new_dataset
| 0.998981 |
2209.03095
|
Donadel Denis
|
Alessandro Brighente, Mauro Conti, Denis Donadel, Federico Turrin
|
Hyperloop: A Cybersecurity Perspective
|
11 pages, 4 figures, 1 table
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We propose the first analysis of the cybersecurity challenges of the
Hyperloop technology. Based on known threats of similar Cyber-Physical Systems,
we identify the vulnerabilities of the Hyperloop infrastructure. We then
discuss possible countermeasures and future directions for the security of the
future Hyperloop design.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 12:10:36 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 13:22:31 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Brighente",
"Alessandro",
""
],
[
"Conti",
"Mauro",
""
],
[
"Donadel",
"Denis",
""
],
[
"Turrin",
"Federico",
""
]
] |
new_dataset
| 0.987497 |
2209.13659
|
Robin Hankin Dr
|
Robin K. S. Hankin
|
Clifford algebra in R
|
8 pages
| null | null | null |
cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Here I present the 'clifford' package for working with Clifford algebras in
the R programming language. The algebra is described and package idiom is
given.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 19:57:07 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Nov 2022 22:39:19 GMT"
}
] | 2022-11-08T00:00:00 |
[
[
"Hankin",
"Robin K. S.",
""
]
] |
new_dataset
| 0.998162 |