id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2211.16663
|
Joy Hsu
|
Joy Hsu, Jiajun Wu, Noah D. Goodman
|
Geoclidean: Few-Shot Generalization in Euclidean Geometry
|
To appear at NeurIPS 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Euclidean geometry is among the earliest forms of mathematical thinking.
While the geometric primitives underlying its constructions, such as perfect
lines and circles, do not often occur in the natural world, humans rarely
struggle to perceive and reason with them. Will computer vision models trained
on natural images show the same sensitivity to Euclidean geometry? Here we
explore this question by studying few-shot generalization in the universe of
Euclidean geometry constructions. We introduce Geoclidean, a domain-specific
language for Euclidean geometry, and use it to generate two datasets of
geometric concept learning tasks for benchmarking generalization judgements of
humans and machines. We find that humans are indeed sensitive to Euclidean
geometry and generalize strongly from a few visual examples of a geometric
concept. In contrast, low-level and high-level visual features from standard
computer vision models pretrained on natural images do not support correct
generalization. Thus Geoclidean represents a novel few-shot generalization
benchmark for geometric concept learning, on which the performance of humans and
of AI models diverges. The Geoclidean framework and dataset are publicly
available for download.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 01:17:22 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Hsu",
"Joy",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Goodman",
"Noah D.",
""
]
] |
new_dataset
| 0.99777 |
2211.16678
|
Achyuta Rajaram
|
Kyoungwan Woo, Achyuta Rajaram
|
FREDSR: Fourier Residual Efficient Diffusive GAN for Single Image Super
Resolution
|
8 pages, 7 figures
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
FREDSR is a GAN variant that aims to outperform traditional GAN models on
specific tasks such as Single Image Super Resolution, with extreme parameter
efficiency at the cost of per-dataset generalizability. FREDSR integrates fast
Fourier transformation, residual prediction, diffusive discriminators, and
other techniques to achieve strong performance in comparison to other models on
the UHDSR4K dataset for Single Image 3x Super Resolution from 360p and 720p
with only 37,000 parameters. The model follows the characteristics of the given
dataset, resulting in lower generalizability but higher performance on tasks
such as real-time up-scaling.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 01:58:52 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Woo",
"Kyoungwan",
""
],
[
"Rajaram",
"Achyuta",
""
]
] |
new_dataset
| 0.998231 |
2211.16746
|
Adit Magotra
|
Adit Magotra, Aagat Gedam, Tanush Savadi, Emily Li
|
ClaRet -- A CNN Architecture for Optical Coherence Tomography
|
Denotes equal contribution
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Optical Coherence Tomography (OCT) is a technique used to scan the retina of
the eye and check for tears. In this paper, we develop a convolutional neural
network architecture for OCT scan classification. The model is trained to
detect retinal tears from an OCT scan and classify the type of tear. We
designed a block-based approach to accompany a pre-trained VGG-19 using
transfer learning, writing customised layers in blocks for better feature
extraction. The approach achieved substantially better results than the
baseline we initially started with.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 05:28:47 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Magotra",
"Adit",
""
],
[
"Gedam",
"Aagat",
""
],
[
"Savadi",
"Tanush",
""
],
[
"Li",
"Emily",
""
]
] |
new_dataset
| 0.9975 |
2211.16751
|
Hussein Darir
|
Hussein Darir, Nikita Borisov, Geir Dullerud
|
DiProber: Using Dual Probing to Estimate Tor Relay Capacities in
Underloaded Networks
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Tor is the most popular anonymous communication network, with millions of
daily users seeking privacy while browsing the internet. It has thousands of
relays to route and anonymize the sources and destinations of users' packets.
To create a path, Tor authorities generate a probability distribution over
relays based on estimates of the relays' capacities. An incoming user then
samples this probability distribution and chooses three relays for their path.
The estimates are based on the bandwidths of observation probes the authority
assigns to each relay in the network. Thus, accurate estimates are necessary to
achieve better load balancing between users.
Unfortunately, the currently implemented estimation algorithm generates
inaccurate estimates, causing the network to be underutilized and its
capacity to be unfairly distributed among user paths. We propose DiProber, a
new relay capacity estimation algorithm. The algorithm introduces a new
measurement scheme in Tor consisting of two probes per relay and uses maximum
likelihood estimation to infer relay capacities. We show that the new technique
works better in the case of underutilized networks, where users tend to place
very low demand on the Tor network.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 05:33:46 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Darir",
"Hussein",
""
],
[
"Borisov",
"Nikita",
""
],
[
"Dullerud",
"Geir",
""
]
] |
new_dataset
| 0.99744 |
2211.16776
|
Jie Liu
|
Jie Liu, Chao Chen, Jie Tang, Gangshan Wu
|
From Coarse to Fine: Hierarchical Pixel Integration for Lightweight
Image Super-Resolution
|
SOTA lightweight image super-resolution. To appear at AAAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image super-resolution (SR) serves as a fundamental tool for the processing
and transmission of multimedia data. Recently, Transformer-based models have
achieved competitive performances in image SR. They divide images into
fixed-size patches and apply self-attention on these patches to model
long-range dependencies among pixels. However, this architecture originated in
high-level vision tasks and lacks design guidance from SR-specific knowledge.
In this paper, we aim to design a new attention block whose insights come from
interpreting the Local Attribution Map (LAM) of SR networks.
Specifically, LAM presents a hierarchical importance map where the most
important pixels are located in a fine area of a patch and some less important
pixels are spread in a coarse area of the whole image. To access pixels in the
coarse area, instead of using a very large patch size, we propose a lightweight
Global Pixel Access (GPA) module that applies cross-attention with the most
similar patch in an image. In the fine area, we use an Intra-Patch
Self-Attention (IPSA) module to model long-range pixel dependencies in a local
patch, and then a $3\times3$ convolution is applied to process the finest
details. In addition, a Cascaded Patch Division (CPD) strategy is proposed to
enhance perceptual quality of recovered images. Extensive experiments suggest
that our method outperforms state-of-the-art lightweight SR methods by a large
margin. Code is available at https://github.com/passerer/HPINet.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 06:32:34 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Liu",
"Jie",
""
],
[
"Chen",
"Chao",
""
],
[
"Tang",
"Jie",
""
],
[
"Wu",
"Gangshan",
""
]
] |
new_dataset
| 0.998395 |
2211.16785
|
Zeeshan Kaleem
|
Mahnoor Dil, Misha Urooj Khan, Muhammad Zeshan Alam, Farooq Alam
Orakazi, Zeeshan Kaleem, Chau Yuen
|
SafeSpace MFNet: Precise and Efficient MultiFeature Drone Detection
Network
|
Paper under review in IEEE TVT
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The popularity of unmanned aerial vehicles (UAVs) is on the rise, as they
enable services like traffic monitoring, emergency communications, deliveries,
and surveillance. However, the unauthorized usage of UAVs (a.k.a. drones) may violate
security and privacy protocols for security-sensitive national and
international institutions. The presented challenges require fast, efficient,
and precise detection of UAVs irrespective of harsh weather conditions, the
presence of different objects, and their size to enable SafeSpace. Recently,
there has been significant progress in using the latest deep learning models,
but those models have shortcomings in terms of computational complexity,
precision, and non-scalability. To overcome these limitations, we propose a
precise and efficient multiscale and multifeature UAV detection network for
SafeSpace, i.e., \textit{MultiFeatureNet} (\textit{MFNet}), an improved version
of the popular object detection algorithm YOLOv5s. In \textit{MFNet}, we
perform multiple changes in the backbone and neck of the YOLOv5s network to
focus on the various small and ignored features required for accurate and fast
UAV detection. To further improve the accuracy and focus on the specific
situation and multiscale UAVs, we classify the \textit{MFNet} into small (S),
medium (M), and large (L) versions: these are combinations of various filter
sizes in the convolution and BottleneckCSP layers, which reside in the backbone
and neck of the architecture. This classification helps to reduce the
computational cost by training the model on a specific feature map rather than
all the features. The dataset and code are available as an open source:
github.com/ZeeshanKaleem/MultiFeatureNet.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 06:56:39 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Dil",
"Mahnoor",
""
],
[
"Khan",
"Misha Urooj",
""
],
[
"Alam",
"Muhammad Zeshan",
""
],
[
"Orakazi",
"Farooq Alam",
""
],
[
"Kaleem",
"Zeeshan",
""
],
[
"Yuen",
"Chau",
""
]
] |
new_dataset
| 0.993231 |
2211.16807
|
Ossama Obeid
|
Ossama Obeid, Go Inoue, Nizar Habash
|
Camelira: An Arabic Multi-Dialect Morphological Disambiguator
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present Camelira, a web-based Arabic multi-dialect morphological
disambiguation tool that covers four major variants of Arabic: Modern Standard
Arabic, Egyptian, Gulf, and Levantine. Camelira offers a user-friendly web
interface that allows researchers and language learners to explore various
linguistic information, such as part-of-speech, morphological features, and
lemmas. Our system also provides an option to automatically choose an
appropriate dialect-specific disambiguator based on the prediction of a dialect
identification component. Camelira is publicly accessible at
http://camelira.camel-lab.com.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 08:02:11 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Obeid",
"Ossama",
""
],
[
"Inoue",
"Go",
""
],
[
"Habash",
"Nizar",
""
]
] |
new_dataset
| 0.999568 |
2211.16824
|
Petr \v{S}im\'anek
|
Ji\v{r}\'i Pihrt, Rudolf Raevskiy, Petr \v{S}im\'anek, Matej Choma
|
WeatherFusionNet: Predicting Precipitation from Satellite Data
|
NeurIPS 2022, Weather4Cast core challenge
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The short-term prediction of precipitation is critical in many areas of life.
Recently, a large body of work was devoted to forecasting radar reflectivity
images. The radar images are available only in areas with ground weather
radars. Thus, we aim to predict high-resolution precipitation from
lower-resolution satellite radiance images. A neural network called
WeatherFusionNet is employed to predict severe rain up to eight hours in
advance. WeatherFusionNet is a U-Net architecture that fuses three different
ways of processing the satellite data: predicting future satellite frames,
extracting rain information from the current frames, and using the input
sequence directly. Using the presented method, we achieved 1st place in the
NeurIPS 2022 Weather4Cast Core challenge. The code and trained parameters are
available at \url{https://github.com/Datalab-FIT-CTU/weather4cast-2022}.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 08:49:13 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Pihrt",
"Jiří",
""
],
[
"Raevskiy",
"Rudolf",
""
],
[
"Šimánek",
"Petr",
""
],
[
"Choma",
"Matej",
""
]
] |
new_dataset
| 0.993479 |
2211.16860
|
Teresa Anna Steiner
|
Philip Bille, Inge Li G{\o}rtz, Moshe Lewenstein, Solon P. Pissis, Eva
Rotenberg, Teresa Anna Steiner
|
Gapped String Indexing in Subquadratic Space and Sublinear Query Time
|
21 pages, 5 figures
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Gapped String Indexing, the goal is to compactly represent a string $S$ of
length $n$ such that given queries consisting of two strings $P_1$ and $P_2$,
called patterns, and an integer interval $[\alpha, \beta]$, called gap range,
we can quickly find occurrences of $P_1$ and $P_2$ in $S$ with distance in
$[\alpha, \beta]$. Due to the many applications of this fundamental problem in
computational biology and elsewhere, there is a great body of work for
restricted or parameterised variants of the problem. However, for the general
problem statement, no improvements upon the trivial $\mathcal{O}(n)$-space
$\mathcal{O}(n)$-query time or $\Omega(n^2)$-space $\mathcal{\tilde{O}}(|P_1| +
|P_2| + \mathrm{occ})$-query time solutions were known so far. We break this
barrier, obtaining interesting trade-offs with polynomially subquadratic space
and polynomially sublinear query time. In particular, we show that, for every
$0\leq \delta \leq 1$, there is a data structure for Gapped String Indexing
with either $\mathcal{\tilde{O}}(n^{2-\delta/3})$ or
$\mathcal{\tilde{O}}(n^{3-2\delta})$ space and $\mathcal{\tilde{O}}(|P_1| +
|P_2| + n^{\delta}\cdot (\mathrm{occ}+1))$ query time, where $\mathrm{occ}$ is
the number of reported occurrences. As a new fundamental tool towards obtaining
our main result, we introduce the Shifted Set Intersection problem: preprocess
a collection of sets $S_1, \ldots, S_k$ of integers such that given queries
consisting of three integers $i,j,s$, we can quickly output YES if and only if
there exist $a \in S_i$ and $b \in S_j$ with $a+s = b$. We start by showing
that the Shifted Set Intersection problem is equivalent to the indexing variant
of 3SUM (3SUM Indexing) [Golovnev et al., STOC 2020]. Via several steps of
reduction we then show that the Gapped String Indexing problem reduces to
polylogarithmically many instances of the Shifted Set Intersection problem.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 10:01:31 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Bille",
"Philip",
""
],
[
"Gørtz",
"Inge Li",
""
],
[
"Lewenstein",
"Moshe",
""
],
[
"Pissis",
"Solon P.",
""
],
[
"Rotenberg",
"Eva",
""
],
[
"Steiner",
"Teresa Anna",
""
]
] |
new_dataset
| 0.950536 |
2211.16883
|
Yekun Chai
|
Yaqian Han, Yekun Chai, Shuohuan Wang, Yu Sun, Hongyi Huang, Guanghao
Chen, Yitong Xu, Yang Yang
|
X-PuDu at SemEval-2022 Task 6: Multilingual Learning for English and
Arabic Sarcasm Detection
|
SemEval-2022 Task 6
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting sarcasm and verbal irony from people's subjective statements is
crucial to understanding their intended meanings and real sentiments and
positions in social scenarios. This paper describes the X-PuDu system that
participated in SemEval-2022 Task 6, iSarcasmEval - Intended Sarcasm Detection
in English and Arabic, which aims at detecting intended sarcasm in various
settings of natural language understanding. Our solution finetunes pre-trained
language models, such as ERNIE-M and DeBERTa, under the multilingual settings
to recognize irony in Arabic and English texts. Our system ranked second
out of 43, and ninth out of 32 in Task A: one-sentence detection in English and
Arabic; fifth out of 22 in Task B: binary multi-label classification in
English; first out of 16, and fifth out of 13 in Task C: sentence-pair
detection in English and Arabic.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 10:34:08 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Han",
"Yaqian",
""
],
[
"Chai",
"Yekun",
""
],
[
"Wang",
"Shuohuan",
""
],
[
"Sun",
"Yu",
""
],
[
"Huang",
"Hongyi",
""
],
[
"Chen",
"Guanghao",
""
],
[
"Xu",
"Yitong",
""
],
[
"Yang",
"Yang",
""
]
] |
new_dataset
| 0.999443 |
2211.16914
|
William Seymour
|
William Seymour
|
A Design Philosophy for Agents in the Smart Home
| null |
In Extended Abstracts of the 2020 CHI Conference on Human Factors
in Computing Systems
|
10.1145/3334480.3375032
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The home is often the most private space in people's lives, and not one in
which they expect to be surveilled. However, today's market for smart home
devices has quickly evolved to include products that monitor, automate, and
present themselves as human. After documenting some of the more unusual
emergent problems with contemporary devices, this body of work seeks to develop
a design philosophy for intelligent agents in the smart home that can act as an
alternative to the ways that these devices are currently built. This is then
applied to the design of privacy empowering technologies, representing the
first steps from the devices of the present towards a more respectful future.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 11:24:58 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Seymour",
"William",
""
]
] |
new_dataset
| 0.998338 |
2211.16934
|
Yihan Wu
|
Yihan Wu, Junliang Guo, Xu Tan, Chen Zhang, Bohan Li, Ruihua Song, Lei
He, Sheng Zhao, Arul Menezes, Jiang Bian
|
VideoDubber: Machine Translation with Speech-Aware Length Control for
Video Dubbing
|
AAAI 2023 camera-ready version
| null | null | null |
cs.CL cs.AI cs.LG cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video dubbing aims to translate the original speech in a film or television
program into the speech in a target language, which can be achieved with a
cascaded system consisting of speech recognition, machine translation and
speech synthesis. To ensure that the translated speech is well aligned with the
corresponding video, the length/duration of the translated speech should be as
close as possible to that of the original speech, which requires strict length
control. Previous works usually control the number of words or characters
generated by the machine translation model to be similar to the source
sentence, without considering the isochronicity of speech as the speech
duration of words/characters in different languages varies. In this paper, we
propose a machine translation system tailored for the task of video dubbing,
which directly considers the speech duration of each token in translation, to
match the length of source and target speech. Specifically, we control the
speech length of the generated sentence by guiding the prediction of each word with
the duration information, including the speech duration of itself as well as
how much duration is left for the remaining words. We design experiments on
four language directions (German -> English, Spanish -> English, Chinese <->
English), and the results show that the proposed method achieves better length
control ability on the generated speech than baseline methods. To make up for the
lack of real-world datasets, we also construct a real-world test set collected
from films to provide comprehensive evaluations on the video dubbing task.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 12:09:40 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Wu",
"Yihan",
""
],
[
"Guo",
"Junliang",
""
],
[
"Tan",
"Xu",
""
],
[
"Zhang",
"Chen",
""
],
[
"Li",
"Bohan",
""
],
[
"Song",
"Ruihua",
""
],
[
"He",
"Lei",
""
],
[
"Zhao",
"Sheng",
""
],
[
"Menezes",
"Arul",
""
],
[
"Bian",
"Jiang",
""
]
] |
new_dataset
| 0.992778 |
2211.17010
|
Sachin Kumar Gupta
|
Aman Desai, Shyamal Gandhi, Sachin Gupta, Manan Shah and Samir Patel
|
Carbon Emission Prediction on the World Bank Dataset for Canada
|
Submitted to Annals of Data Science, 2022 - Springer
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The continuous rise in CO2 emission into the environment is one of the most
crucial issues facing the whole world. Many countries are making crucial
decisions to control their carbon footprints to escape some of their
catastrophic outcomes. There has been a lot of research going on to project the
amount of carbon emissions in the future, which can help us to develop
innovative techniques to deal with it in advance. Machine learning is one of
the most advanced and efficient techniques for predicting the amount of carbon
emissions from current data. This paper provides the methods for predicting
carbon emissions (CO2 emissions) for the next few years. The predictions are
based on data from the past 50 years. The dataset, which is used for making the
prediction, is collected from World Bank datasets. This dataset contains CO2
emissions (metric tons per capita) of all the countries from 1960 to 2018. Our
method applies machine learning techniques to the dataset taken from the World
Bank's data repository to project what carbon emission measures will look like
over the next ten years. The purpose of
this research is to compare how different machine learning models (Decision
Tree, Linear Regression, Random Forest, and Support Vector Machine) perform on
a similar dataset and measure the difference between their predictions.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 07:04:52 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Desai",
"Aman",
""
],
[
"Gandhi",
"Shyamal",
""
],
[
"Gupta",
"Sachin",
""
],
[
"Shah",
"Manan",
""
],
[
"Patel",
"Samir",
""
]
] |
new_dataset
| 0.996893 |
2211.17019
|
Natarajan Venkatachalam
|
Foram P Shingala, Natarajan Venkatachalam, Selvagangai C, Hema Priya
S, Dillibabu S, Pooja Chandravanshi, Ravindra P. Singh
|
Real time QKD Post Processing based on Reconfigurable Hardware
Acceleration
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Key distillation is an essential component of every Quantum Key Distribution
(QKD) system because it compensates for the inherent transmission errors of the
quantum channel. However, the throughput and interoperability aspects of
post-processing engine design are often neglected, and existing solutions do
not provide any guarantees. In this paper, we propose a multi-protocol,
high-throughput key distillation framework implemented in a Field Programmable
Gate Array (FPGA) using High-Level Synthesis (HLS). The proposed design uses a
Hadoop framework with a map-reduce programming model to efficiently process
large chunks of raw data across the limited computing resources of an FPGA. We
present a novel hardware-efficient integrated post-processing architecture that
offers dynamic error correction, a side-channel-resistant authentication
scheme, and an inbuilt high-speed encryption application, which uses the key
for secure communication. We develop a semi-automated high-level synthesis
framework capable of handling different QKD protocols with promising speedup.
Overall, the experimental results show a significant improvement in performance
and compatibility with any discrete-variable QKD system.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 14:12:45 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Shingala",
"Foram P",
""
],
[
"Venkatachalam",
"Natarajan",
""
],
[
"C",
"Selvagangai",
""
],
[
"S",
"Hema Priya",
""
],
[
"S",
"Dillibabu",
""
],
[
"Chandravanshi",
"Pooja",
""
],
[
"Singh",
"Ravindra P.",
""
]
] |
new_dataset
| 0.99575 |
2211.17100
|
Tommy Nilsson
|
Anna Vock, Tommy Nilsson
|
Holistic Outpost Design for Lunar Lava Tubes
|
73rd International Astronautical Congress (IAC)
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As the space industry continues its rapid development, humanity is poised to
expand beyond Low Earth Orbit (LEO), seeking to establish permanent presence on
the Moon and beyond. While space travel has traditionally been the domain of a
small number of highly specialized professionals, a new era of human
exploration, involving non-space actors and stakeholders, is now becoming a
reality. In spite of this development, most space habitats are still designed
for a narrow target group. This paper seeks to address this deficit by
rethinking the established design approaches, typically limited to tackling
the engineering challenges of human space exploration (such as radiation or
hypogravity), by instead adopting an interdisciplinary "big picture"
perspective encompassing social, psychological and cultural aspects of future
space habitats. By elaborating and reflecting on our concept, this paper seeks
to demonstrate the importance of a trans-disciplinary approach to designing
thriving sustainable colonies beyond LEO. We demonstrate the potentially key
role of design as mediator in advancing macro-strategies promoting thriving
existence and sustainable growth. With this approach we tackle big-picture
questions about humanity's future and prospects amongst the stars.
|
[
{
"version": "v1",
"created": "Wed, 19 Oct 2022 09:30:02 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Vock",
"Anna",
""
],
[
"Nilsson",
"Tommy",
""
]
] |
new_dataset
| 0.978193 |
2211.17111
|
Junjie Huang
|
Junjie Huang and Guan Huang
|
BEVPoolv2: A Cutting-edge Implementation of BEVDet Toward Deployment
|
Technical report
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We release a new codebase version of BEVDet, dubbed branch dev2.0. With
dev2.0, we propose BEVPoolv2, which upgrades the view transformation process
from the perspective of engineering optimization, making it free from a huge burden in
both calculation and storage aspects. It achieves this by omitting the
calculation and preprocessing of the large frustum feature. As a result, it can
be processed within 0.82 ms even with a large input resolution of 640x1600,
which is 15.1 times the previous fastest implementation. Besides, it is also
less cache-consumptive than the previous implementation,
naturally, as it no longer needs to store the large frustum feature. Last but
not least, this also makes deployment to other backends handy. We offer
an example of deployment to the TensorRT backend in branch dev2.0 and show how
fast the BEVDet paradigm can be processed on it. Other than BEVPoolv2, we also
select and integrate some substantial progress that was proposed in the past
year. As an example configuration, BEVDet4D-R50-Depth-CBGS scores 52.3 NDS on
the NuScenes validation set and can be processed at a speed of 16.4 FPS with
the PyTorch backend. The code has been released to facilitate the study on
https://github.com/HuangJunJie2017/BEVDet/tree/dev2.0.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 15:55:38 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Huang",
"Junjie",
""
],
[
"Huang",
"Guan",
""
]
] |
new_dataset
| 0.968856 |
2211.17188
|
Pierre Le Pelletier De Woillemont
|
Pierre Le Pelletier de Woillemont, R\'emi Labory, Vincent Corruble
|
Automated Play-Testing Through RL Based Human-Like Play-Styles
Generation
|
Proceedings of the AAAI Conference on Artificial Intelligence and
Interactive Digital Entertainment, 18(1)
|
Vol. 18 No. 1 (2022): Eighteenth AAAI Conference on Artificial
Intelligence and Interactive Digital Entertainment
|
10.1609/aiide.v18i1.21958
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The increasing complexity of gameplay mechanisms in modern video games is
leading to the emergence of a wider range of ways to play games. The variety of
possible play-styles needs to be anticipated by designers, through automated
tests. Reinforcement learning is a promising answer to the need to automate
video game testing. To that effect, one needs to train an agent to play the
game while ensuring this agent will generate the same play-styles as the
players, in order to give meaningful feedback to the designers. We present
CARMI: a Configurable Agent with Relative Metrics as Input, an agent able to
emulate the players' play-styles, even on previously unseen levels. Unlike
current methods, it does not rely on having full trajectories, only summary
data. Moreover, it requires only a little human data and is thus compatible
with the constraints of modern video game production. This novel agent could be used to
investigate behaviors and balancing during the production of a video game with
a realistic amount of training time.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 14:17:20 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"de Woillemont",
"Pierre Le Pelletier",
""
],
[
"Labory",
"Rémi",
""
],
[
"Corruble",
"Vincent",
""
]
] |
new_dataset
| 0.981183 |
2211.17222
|
Karl-Ludwig Besser
|
Karl-Ludwig Besser, Eduard A. Jorswieck
|
QMKPy: A Python Testbed for the Quadratic Multiple Knapsack Problem
|
3 pages
|
Journal of Open Source Software, vol. 7, no. 79: 4718, Nov 2022
|
10.21105/joss.04718
| null |
cs.OH math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
QMKPy provides a Python framework for modeling and solving the quadratic
multiple knapsack problem (QMKP). It is primarily aimed at researchers who
develop new solution algorithms for the QMKP. QMKPy therefore mostly functions
as a testbed to quickly implement novel algorithms and compare their results
with existing ones. However, the package also already includes implementations
of established algorithms for those who only need to solve a QMKP as part of
their research.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 10:17:30 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Besser",
"Karl-Ludwig",
""
],
[
"Jorswieck",
"Eduard A.",
""
]
] |
new_dataset
| 0.999691 |
2211.17235
|
Yu Yin
|
Yu Yin, Kamran Ghasedi, HsiangTao Wu, Jiaolong Yang, Xin Tong, Yun Fu
|
NeRFInvertor: High Fidelity NeRF-GAN Inversion for Single-shot Real
Image Animation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
NeRF-based generative models have shown impressive capacity in generating
high-quality images with consistent 3D geometry. Despite successful synthesis
of fake identity images randomly sampled from latent space, adopting these
models for generating face images of real subjects is still a challenging task
due to its so-called inversion issue. In this paper, we propose a universal
method to surgically fine-tune these NeRF-GAN models in order to achieve
high-fidelity animation of real subjects from only a single image. Given the
optimized latent code for an out-of-domain real image, we employ 2D loss
functions on the rendered image to reduce the identity gap. Furthermore, our
method leverages explicit and implicit 3D regularizations using the in-domain
neighborhood samples around the optimized latent code to remove geometrical and
visual artifacts. Our experiments confirm the effectiveness of our method in
realistic, high-fidelity, and 3D consistent animation of real faces on multiple
NeRF-GAN models across different datasets.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 18:36:45 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Yin",
"Yu",
""
],
[
"Ghasedi",
"Kamran",
""
],
[
"Wu",
"HsiangTao",
""
],
[
"Yang",
"Jiaolong",
""
],
[
"Tong",
"Xin",
""
],
[
"Fu",
"Yun",
""
]
] |
new_dataset
| 0.979935 |
2211.17257
|
Xinyan Velocity Yu
|
Xinyan Velocity Yu, Sewon Min, Luke Zettlemoyer and Hannaneh
Hajishirzi
|
CREPE: Open-Domain Question Answering with False Presuppositions
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information-seeking users often pose questions with false presuppositions,
especially when asking about unfamiliar topics. Most existing question
answering (QA) datasets, in contrast, assume all questions have well defined
answers. We introduce CREPE, a QA dataset containing a natural distribution of
presupposition failures from online information-seeking forums. We find that
25% of questions contain false presuppositions, and provide annotations for
these presuppositions and their corrections. Through extensive baseline
experiments, we show that adaptations of existing open-domain QA models can
find presuppositions moderately well, but struggle when predicting whether a
presupposition is factually correct. This is in large part due to difficulty in
retrieving relevant evidence passages from a large text corpus. CREPE provides
a benchmark to study question answering in the wild, and our analyses provide
avenues for future work in better modeling and further studying the task.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 18:54:49 GMT"
}
] | 2022-12-01T00:00:00 |
[
[
"Yu",
"Xinyan Velocity",
""
],
[
"Min",
"Sewon",
""
],
[
"Zettlemoyer",
"Luke",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
new_dataset
| 0.999773 |
2103.12434
|
Manu Tom
|
Manu Tom and Tianyu Wu and Emmanuel Baltsavias and Konrad Schindler
|
Recent Ice Trends in Swiss Mountain Lakes: 20-year Analysis of MODIS
Imagery
|
This version of the article has been accepted for publication, after
peer review but is not the Version of Record and does not reflect
post-acceptance improvements, or any corrections. The Version of Record is
available online at: http://dx.doi.org/10.1007/s41064-022-00215-x
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Depleting lake ice is a climate change indicator, just like sea-level rise or
glacial retreat. Monitoring Lake Ice Phenology (LIP) is useful because
long-term freezing and thawing patterns serve as sentinels to understand
regional and global climate change. We report a study for the Oberengadin
region of Switzerland, where several small- and medium-sized mountain lakes are
located. We observe the LIP events, such as freeze-up, break-up and ice cover
duration, across two decades (2000-2020) from optical satellite images. We
analyse the time series of MODIS imagery by estimating spatially resolved maps
of lake ice for these Alpine lakes with supervised machine learning. To train
the classifier we rely on reference data annotated manually based on webcam
images. From the ice maps, we derive long-term LIP trends. Since the webcam
data are only available for two winters, we cross-check our results against the
operational MODIS and VIIRS snow products. We find a change in complete freeze
duration of -0.76 and -0.89 days per annum for lakes Sils and Silvaplana,
respectively. Furthermore, we observe plausible correlations of the LIP trends
with climate data measured at nearby meteorological stations. We notice that
mean winter air temperature has a negative correlation with the freeze duration
and break-up events and a positive correlation with the freeze-up events.
Additionally, we observe a strong negative correlation of sunshine during the
winter months with the freeze duration and break-up events.
|
[
{
"version": "v1",
"created": "Tue, 23 Mar 2021 10:25:02 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Aug 2021 21:30:46 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Jul 2022 09:43:47 GMT"
},
{
"version": "v4",
"created": "Tue, 29 Nov 2022 15:03:01 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Tom",
"Manu",
""
],
[
"Wu",
"Tianyu",
""
],
[
"Baltsavias",
"Emmanuel",
""
],
[
"Schindler",
"Konrad",
""
]
] |
new_dataset
| 0.976835 |
2112.12883
|
Liangkun Yu
|
Liangkun Yu, Xiang Sun, Sihua Shao, Yougan Chen, Rana Albelaihi
|
Backhaul-aware Drone Base Station Placement and Resource Management for
FSO-based Drone-assisted Mobile Networks
| null | null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
In drone-assisted mobile networks, Drone-mounted Base Stations (DBSs) are
responsively and flexibly deployed over any Places of Interest (PoI), such as
sporadic hotspots and disaster-struck areas, where the existing mobile network
infrastructure is unable to provide wireless coverage. In this paper, a DBS is
an aerial base station to relay traffic between a nearby Macro Base Station
(MBS) and the users. In addition, Free Space Optics (FSO) is applied as the
backhauling solution to significantly increase the capacity of the backhaul
link between an MBS and a DBS. Most of the existing DBS placement solutions
assume the FSO-based backhaul link provides sufficient link capacity, which may
not be true, especially when a DBS is placed far away from an MBS (e.g., > 10
km in disaster-struck areas) or in a bad weather condition. In this paper, we
formulate a problem to jointly optimize bandwidth allocation and DBS placement
by considering the FSO-based backhaul link capacity constraint. A Backhaul
awaRe bandwidth allOcAtion and DBS placement (BROAD) algorithm is designed to
efficiently solve the problem, and the performance of the algorithm is
demonstrated via extensive simulations.
|
[
{
"version": "v1",
"created": "Fri, 24 Dec 2021 00:12:24 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 20:44:23 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Yu",
"Liangkun",
""
],
[
"Sun",
"Xiang",
""
],
[
"Shao",
"Sihua",
""
],
[
"Chen",
"Yougan",
""
],
[
"Albelaihi",
"Rana",
""
]
] |
new_dataset
| 0.996658 |
2202.13388
|
Kailun Yang
|
Hao Shi, Yifan Zhou, Kailun Yang, Xiaoting Yin, Ze Wang, Yaozu Ye, Zhe
Yin, Shi Meng, Peng Li, Kaiwei Wang
|
PanoFlow: Learning 360{\deg} Optical Flow for Surrounding Temporal
Understanding
|
Code and dataset are publicly available at
https://github.com/MasterHow/PanoFlow
| null | null | null |
cs.CV cs.RO eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical flow estimation is a basic task in self-driving and robotics systems,
which enables temporal interpretation of traffic scenes. Autonomous vehicles
clearly benefit from the ultra-wide Field of View (FoV) offered by 360{\deg}
panoramic sensors. However, due to the unique imaging process of panoramic
cameras, models designed for pinhole images do not directly generalize
satisfactorily to 360{\deg} panoramic images. In this paper, we put forward a
novel network framework--PanoFlow, to learn optical flow for panoramic images.
To overcome the distortions introduced by equirectangular projection in
panoramic transformation, we design a Flow Distortion Augmentation (FDA)
method, which contains radial flow distortion (FDA-R) or equirectangular flow
distortion (FDA-E). We further look into the definition and properties of
cyclic optical flow for panoramic videos, and hereby propose a Cyclic Flow
Estimation (CFE) method by leveraging the cyclicity of spherical images to
infer 360{\deg} optical flow and converting large displacement to relatively
small displacement. PanoFlow is applicable to any existing flow estimation
method and benefits from the progress of narrow-FoV flow estimation. In
addition, we create and release a synthetic panoramic dataset FlowScape based
on CARLA to facilitate training and quantitative analysis. PanoFlow achieves
state-of-the-art performance on the public OmniFlowNet and the established
FlowScape benchmarks. Our proposed approach reduces the End-Point-Error (EPE)
on FlowScape by 27.3%. On OmniFlowNet, PanoFlow achieves a 55.5% error
reduction from the best published result. We also qualitatively validate our
method via a collection vehicle and a public real-world OmniPhotos dataset,
indicating strong potential and robustness for real-world navigation
applications. Code and dataset are publicly available at
https://github.com/MasterHow/PanoFlow.
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 16:03:38 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jul 2022 09:46:16 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Nov 2022 14:16:05 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Shi",
"Hao",
""
],
[
"Zhou",
"Yifan",
""
],
[
"Yang",
"Kailun",
""
],
[
"Yin",
"Xiaoting",
""
],
[
"Wang",
"Ze",
""
],
[
"Ye",
"Yaozu",
""
],
[
"Yin",
"Zhe",
""
],
[
"Meng",
"Shi",
""
],
[
"Li",
"Peng",
""
],
[
"Wang",
"Kaiwei",
""
]
] |
new_dataset
| 0.989518 |
2204.13004
|
Yong Man Ro
|
Youngjoon Yu, Hong Joo Lee, Hakmin Lee, and Yong Man Ro
|
Defending Person Detection Against Adversarial Patch Attack by using
Universal Defensive Frame
|
Accepted at IEEE Transactions on Image Processing (TIP), 2022
| null |
10.1109/TIP.2022.3217375
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Person detection has attracted great attention in the computer vision area
and is an imperative element in human-centric computer vision. Although the
predictive performances of person detection networks have been improved
dramatically, they are vulnerable to adversarial patch attacks. Changing the
pixels in a restricted region can easily fool the person detection network in
safety-critical applications such as autonomous driving and security systems.
Despite the necessity of countering adversarial patch attacks, very few efforts
have been dedicated to defending person detection against adversarial patch
attack. In this paper, we propose a novel defense strategy that defends against
an adversarial patch attack by optimizing a defensive frame for person
detection. The defensive frame alleviates the effect of the adversarial patch
while maintaining person detection performance on clean persons. The proposed
defensive frame in the person detection is generated with a competitive
learning algorithm which makes an iterative competition between detection
threatening module and detection shielding module in person detection.
Comprehensive experimental results demonstrate that the proposed method
effectively defends person detection against adversarial patch attacks.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 15:18:08 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Oct 2022 06:59:15 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Yu",
"Youngjoon",
""
],
[
"Lee",
"Hong Joo",
""
],
[
"Lee",
"Hakmin",
""
],
[
"Ro",
"Yong Man",
""
]
] |
new_dataset
| 0.997313 |
2205.12247
|
Da Yin
|
Da Yin, Hritik Bansal, Masoud Monajatipoor, Liunian Harold Li, Kai-Wei
Chang
|
GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained
Language Models
|
EMNLP 2022. Code and data are released at
https://github.com/WadeYin9712/GeoMLAMA/
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recent work has shown that Pre-trained Language Models (PLMs) store the
relational knowledge learned from data and utilize it for performing downstream
tasks. However, commonsense knowledge across different regions may vary. For
instance, the color of the bridal dress is white in American weddings whereas it is
red in Chinese weddings. In this paper, we introduce a benchmark dataset,
Geo-Diverse Commonsense Multilingual Language Models Analysis (GeoMLAMA), for
probing the diversity of the relational knowledge in multilingual PLMs.
GeoMLAMA contains 3,125 prompts in English, Chinese, Hindi, Persian, and
Swahili, with a wide coverage of concepts shared by people from American,
Chinese, Indian, Iranian and Kenyan cultures. We benchmark 11 standard
multilingual PLMs on GeoMLAMA. Interestingly, we find that 1) larger
multilingual PLM variants do not necessarily store geo-diverse concepts better
than their smaller variants; 2) multilingual PLMs are not intrinsically biased
towards knowledge from the Western countries (the United States); 3) the native
language of a country may not be the best language to probe its knowledge and
4) a language may better probe knowledge about a non-native country than its
native country. Code and data are released at
https://github.com/WadeYin9712/GeoMLAMA.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 17:54:50 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2022 18:37:59 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Yin",
"Da",
""
],
[
"Bansal",
"Hritik",
""
],
[
"Monajatipoor",
"Masoud",
""
],
[
"Li",
"Liunian Harold",
""
],
[
"Chang",
"Kai-Wei",
""
]
] |
new_dataset
| 0.99956 |
2209.15285
|
Sugyeong Eo
|
Sugyeong Eo, Chanjun Park, Hyeonseok Moon, Jaehyung Seo, Gyeongmin
Kim, Jungseob Lee, Heuiseok Lim
|
QUAK: A Synthetic Quality Estimation Dataset for Korean-English Neural
Machine Translation
| null |
COLING 2022
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
With the recent advance in neural machine translation demonstrating its
importance, research on quality estimation (QE) has been steadily progressing.
QE aims to automatically predict the quality of machine translation (MT) output
without reference sentences. Despite its high utility in the real world, there
remain several limitations concerning manual QE data creation: inevitably
incurred non-trivial costs due to the need for translation experts, and issues
with data scaling and language expansion. To tackle these limitations, we
present QUAK, a Korean-English synthetic QE dataset generated in a fully
automatic manner. It consists of three sub-QUAK datasets: QUAK-M, QUAK-P, and
QUAK-H, produced through three strategies that are relatively free from
language constraints. Since each strategy requires no human effort, which
facilitates scalability, we scale our data up to 1.58M for QUAK-P, H and 6.58M
for QUAK-M. As an experiment, we quantitatively analyze word-level QE results
in various ways while performing statistical analysis. Moreover, we show that
datasets scaled in an efficient way also contribute to performance improvements
by observing meaningful performance gains in QUAK-M, P when adding data up to
1.58M.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 07:47:44 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2022 12:05:07 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Eo",
"Sugyeong",
""
],
[
"Park",
"Chanjun",
""
],
[
"Moon",
"Hyeonseok",
""
],
[
"Seo",
"Jaehyung",
""
],
[
"Kim",
"Gyeongmin",
""
],
[
"Lee",
"Jungseob",
""
],
[
"Lim",
"Heuiseok",
""
]
] |
new_dataset
| 0.999509 |
2210.00975
|
Manish Nagaraj
|
Manish Nagaraj, Chamika Mihiranga Liyanagedera and Kaushik Roy
|
DOTIE - Detecting Objects through Temporal Isolation of Events using a
Spiking Architecture
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-based autonomous navigation systems rely on fast and accurate object
detection algorithms to avoid obstacles. Algorithms and sensors designed for
such systems need to be computationally efficient, due to the limited energy of
the hardware used for deployment. Biologically inspired event cameras are a
good candidate as a vision sensor for such systems due to their speed, energy
efficiency, and robustness to varying lighting conditions. However, traditional
computer vision algorithms fail to work on event-based outputs, as they lack
photometric features such as light intensity and texture. In this work, we
propose a novel technique that utilizes the temporal information inherently
present in the events to efficiently detect moving objects. Our technique
consists of a lightweight spiking neural architecture that is able to separate
events based on the speed of the corresponding objects. These separated events
are then further grouped spatially to determine object boundaries. This method
of object detection is both asynchronous and robust to camera noise. In
addition, it shows good performance in scenarios with events generated by
static objects in the background, where existing event-based algorithms fail.
We show that by utilizing our architecture, autonomous navigation systems can
have minimal latency and energy overheads for performing object detection.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 14:43:11 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Nagaraj",
"Manish",
""
],
[
"Liyanagedera",
"Chamika Mihiranga",
""
],
[
"Roy",
"Kaushik",
""
]
] |
new_dataset
| 0.988295 |
2211.04039
|
Nando Metzger
|
Nando Metzger, John E. Vargas-Mu\~noz, Rodrigo C. Daudt, Benjamin
Kellenberger, Thao Ton-That Whelan, Ferda Ofli, Muhammad Imran, Konrad
Schindler, Devis Tuia
|
Fine-grained Population Mapping from Coarse Census Counts and Open
Geodata
| null | null |
10.1038/s41598-022-24495-w
| null |
cs.LG cs.CV stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Fine-grained population maps are needed in several domains, like urban
planning, environmental monitoring, public health, and humanitarian operations.
Unfortunately, in many countries only aggregate census counts over large
spatial units are collected, moreover, these are not always up-to-date. We
present POMELO, a deep learning model that employs coarse census counts and
open geodata to estimate fine-grained population maps with 100m ground sampling
distance. Moreover, the model can also estimate population numbers when no
census counts at all are available, by generalizing across countries. In a
series of experiments for several countries in sub-Saharan Africa, the maps
produced with POMELO are in good agreement with the most detailed available
reference counts: disaggregation of coarse census counts reaches R2 values of
85-89%; unconstrained prediction in the absence of any counts reaches 48-69%.
|
[
{
"version": "v1",
"created": "Tue, 8 Nov 2022 06:43:52 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Metzger",
"Nando",
""
],
[
"Vargas-Muñoz",
"John E.",
""
],
[
"Daudt",
"Rodrigo C.",
""
],
[
"Kellenberger",
"Benjamin",
""
],
[
"Whelan",
"Thao Ton-That",
""
],
[
"Ofli",
"Ferda",
""
],
[
"Imran",
"Muhammad",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Tuia",
"Devis",
""
]
] |
new_dataset
| 0.999685 |
2211.12498
|
Fengyu Yang
|
Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan,
Andrew Owens
|
Touch and Go: Learning from Human-Collected Vision and Touch
|
Accepted by NeurIPS 2022 Track of Datasets and Benchmarks
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to associate touch with sight is essential for tasks that require
physically interacting with objects in the world. We propose a dataset with
paired visual and tactile data called Touch and Go, in which human data
collectors probe objects in natural environments using tactile sensors, while
simultaneously recording egocentric video. In contrast to previous efforts,
which have largely been confined to lab settings or simulated environments, our
dataset spans a large number of "in the wild" objects and scenes. To
demonstrate our dataset's effectiveness, we successfully apply it to a variety
of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven
image stylization, i.e., making the visual appearance of an object more
consistent with a given tactile signal, and 3) predicting future frames of a
tactile signal from visuo-tactile inputs.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 18:59:32 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2022 16:39:06 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Yang",
"Fengyu",
""
],
[
"Ma",
"Chenyang",
""
],
[
"Zhang",
"Jiacheng",
""
],
[
"Zhu",
"Jing",
""
],
[
"Yuan",
"Wenzhen",
""
],
[
"Owens",
"Andrew",
""
]
] |
new_dataset
| 0.99981 |
2211.13979
|
Zhen Wang
|
Zhen Wang, Zheng Feng, Yanjun Li, Bowen Li, Yongrui Wang, Chulin Sha,
Min He, Xiaolin Li
|
BatmanNet: Bi-branch Masked Graph Transformer Autoencoder for Molecular
Representation
|
11 pages, 3 figures
| null | null | null |
cs.LG q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although substantial efforts have been made using graph neural networks
(GNNs) for AI-driven drug discovery (AIDD), effective molecular representation
learning remains an open challenge, especially in the case of insufficient
labeled molecules. Recent studies suggest that big GNN models pre-trained by
self-supervised learning on unlabeled datasets enable better transfer
performance in downstream molecular property prediction tasks. However, they
often require large-scale datasets and considerable computational resources,
which is time-consuming, computationally expensive, and environmentally
unfriendly. To alleviate these limitations, we propose a novel pre-training
model for molecular representation learning, Bi-branch Masked Graph Transformer
Autoencoder (BatmanNet). BatmanNet features two tailored and complementary
graph autoencoders to reconstruct the missing nodes and edges from a masked
molecular graph. To our surprise, we discovered that a highly masked
proportion (60%) of the atoms and bonds achieves the best performance. We
further propose an asymmetric graph-based encoder-decoder architecture for
both nodes and edges, where a transformer-based encoder only takes the
visible subset of nodes or edges, and a lightweight decoder reconstructs the
original molecule from the latent representation and mask tokens. With this
simple yet effective asymmetrical design, our BatmanNet can learn efficiently
even from a much smaller-scale unlabeled molecular dataset to capture the
underlying structural and semantic information, overcoming a major limitation
of current deep neural networks for molecular representation learning. For
instance, using only 250K unlabelled molecules as pre-training data, our
BatmanNet with 2.575M parameters achieves a 0.5% improvement on the average AUC
compared with the current state-of-the-art method with 100M parameters
pre-trained on 11M molecules.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 09:44:28 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2022 07:00:24 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Wang",
"Zhen",
""
],
[
"Feng",
"Zheng",
""
],
[
"Li",
"Yanjun",
""
],
[
"Li",
"Bowen",
""
],
[
"Wang",
"Yongrui",
""
],
[
"Sha",
"Chulin",
""
],
[
"He",
"Min",
""
],
[
"Li",
"Xiaolin",
""
]
] |
new_dataset
| 0.968441 |
2211.15217
|
Micaela Jara
|
Micaela Jara Ten Kathen, Princy Johnson, Isabel Jurado Flores, Daniel
Guti\'errez Reina
|
AquaFeL-PSO: A Monitoring System for Water Resources using Autonomous
Surface Vehicles based on Multimodal PSO and Federated Learning
| null | null | null |
00-00
|
cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The preservation, monitoring, and control of water resources have been a major
challenge in recent decades. Water resources must be constantly monitored to
track their contamination levels. To meet this objective, this paper
proposes a water monitoring system using autonomous surface vehicles, equipped
with water quality sensors, based on a multimodal particle swarm optimization,
and the federated learning technique, with Gaussian process as a surrogate
model, the AquaFeL-PSO algorithm. The proposed monitoring system has two
phases, the exploration phase and the exploitation phase. In the exploration
phase, the vehicles examine the surface of the water resource, and with the
data acquired by the water quality sensors, a first water quality model is
estimated in the central server. In the exploitation phase, the area is divided
into action zones using the model estimated in the exploration phase for a
better exploitation of the contamination zones. To obtain the final water
quality model of the water resource, the models obtained in both phases are
combined. The results demonstrate the efficiency of the proposed path planner
in obtaining water quality models of the pollution zones, with a 14$\%$
improvement over the other path planners compared, and the entire water
resource, obtaining a 400$\%$ better model, as well as in detecting pollution
peaks, the improvement in this case study is 4,000$\%$. It was also proven that
the results obtained by applying the federated learning technique are very
similar to the results of a centralized system.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 10:56:12 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Kathen",
"Micaela Jara Ten",
""
],
[
"Johnson",
"Princy",
""
],
[
"Flores",
"Isabel Jurado",
""
],
[
"Reina",
"Daniel Gutiérrez",
""
]
] |
new_dataset
| 0.980375 |
2211.15735
|
Shreya Rajkumar
|
Shreya Rajkumar
|
Implementing Software Defined Load Balancer and Firewall
| null |
International Journal of Scientific Research and Engineering
Development-Volume 5 Issue 5, Sep-Oct 2022
| null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Software-defined networking (SDN) is an architecture that aims to make
networks fast and flexible. SDN's goal is to improve network control by
enabling service providers as well as enterprises to respond quickly to
changing business needs. In SDN, the administrator can shape traffic from a
centralized control console without having to modify any of the individual
switches belonging to the network. The SDN controller which is centralized
directs the switches to deliver network services wherever they are needed,
irrespective of the specific connections between a server and devices. This
methodology is a shift from traditional network architecture, in which
individual network devices make traffic decisions based on their configured
routing tables. In this paper, I built and tested an SDN load balancer and
firewall module using the Floodlight controller.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 19:33:14 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Rajkumar",
"Shreya",
""
]
] |
new_dataset
| 0.995346 |
2211.15739
|
Niranjan Hasabnis
|
Mohammad Hossain, Derssie Mebratu, Niranjan Hasabnis, Jun Jin, Gaurav
Chaudhary, Noah Shen
|
CWD: A Machine Learning based Approach to Detect Unknown Cloud Workloads
|
7 pages, 4 figures, Appeared at The MLSys'22 Workshop on Cloud
Intelligence(AIOps), In conjunction with the 5th Conference on Machine
Learning and Systems
| null | null | null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Workloads in modern cloud data centers are becoming increasingly complex. The
number of workloads running in cloud data centers has been growing
exponentially for the last few years, and cloud service providers (CSP) have
been supporting on-demand services in real-time. Realizing the growing
complexity of cloud environment and cloud workloads, hardware vendors such as
Intel and AMD are increasingly introducing cloud-specific workload acceleration
features in their CPU platforms. These features are typically targeted towards
popular and commonly-used cloud workloads. Nonetheless, uncommon,
customer-specific workloads (unknown workloads), if their characteristics are
different from common workloads (known workloads), may not realize the
potential of the underlying platform. To address this problem of realizing the
full potential of the underlying platform, we develop a machine learning based
technique to characterize, profile and predict workloads running in the cloud
environment. Experimental evaluation of our technique demonstrates good
prediction performance. We also develop techniques to analyze the performance
of the model in a standalone manner.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 19:41:56 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Hossain",
"Mohammad",
""
],
[
"Mebratu",
"Derssie",
""
],
[
"Hasabnis",
"Niranjan",
""
],
[
"Jin",
"Jun",
""
],
[
"Chaudhary",
"Gaurav",
""
],
[
"Shen",
"Noah",
""
]
] |
new_dataset
| 0.990407 |
2211.15894
|
Haoran Tang
|
Lukas Zhornyak, Zhengjie Xu, Haoran Tang, Jianbo Shi
|
HashEncoding: Autoencoding with Multiscale Coordinate Hashing
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present HashEncoding, a novel autoencoding architecture that leverages a
non-parametric multiscale coordinate hash function to facilitate a per-pixel
decoder without convolutions. By leveraging the space-folding behaviour of
hashing functions, HashEncoding allows for an inherently multiscale embedding
space that remains much smaller than the original image. As a result, the
decoder requires very few parameters compared with decoders in traditional
autoencoders, approaching a non-parametric reconstruction of the original image
and allowing for greater generalizability. Finally, by allowing backpropagation
directly to the coordinate space, we show that HashEncoding can be exploited
for geometric tasks such as optical flow.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 03:22:19 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Zhornyak",
"Lukas",
""
],
[
"Xu",
"Zhengjie",
""
],
[
"Tang",
"Haoran",
""
],
[
"Shi",
"Jianbo",
""
]
] |
new_dataset
| 0.977506 |
2211.16008
|
Joonhyung Kim
|
Joonhyung Kim, Kyeongho Lee and Jongsun Park
|
A Charge Domain P-8T SRAM Compute-In-Memory with Low-Cost DAC/ADC
Operation for 4-bit Input Processing
|
Presented at ISLPED 2022
|
in Proc. ACM/IEEE Int. Symp. Low Power Electron. and Design 6
(2022) 1-6
|
10.1145/3531437.3539718
|
ISLPED/2022/08
|
cs.AR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a low cost PMOS-based 8T (P-8T) SRAM Compute-In-Memory
(CIM) architecture that efficiently performs the multiply-accumulate (MAC)
operations between 4-bit input activations and 8-bit weights. First, a bit-line
(BL) charge-sharing technique is employed to design the low-cost and reliable
digital-to-analog conversion of 4-bit input activations in the proposed SRAM
CIM, where the charge domain analog computing provides variation-tolerant and
linear MAC outputs. The 16 local arrays are also effectively exploited to
implement the analog multiplication unit (AMU) that simultaneously produces 16
multiplication results between 4-bit input activations and 1-bit weights. For
the hardware cost reduction of the analog-to-digital converter (ADC) without
sacrificing DNN accuracy, hardware-aware system simulations are performed to
decide the ADC bit-resolutions and the number of activated rows in the proposed
CIM macro. In addition, for the ADC operation, the AMU-based reference columns
are utilized for generating ADC reference voltages, with which a low-cost 4-bit
coarse-fine flash ADC has been designed. The 256X80 P-8T SRAM CIM macro
implementation using a 28nm CMOS process shows that the proposed CIM achieves
accuracies of 91.46% and 66.67% on the CIFAR-10 and CIFAR-100 datasets,
respectively, with an energy efficiency of 50.07-TOPS/W.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 08:15:27 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Kim",
"Joonhyung",
""
],
[
"Lee",
"Kyeongho",
""
],
[
"Park",
"Jongsun",
""
]
] |
new_dataset
| 0.998538 |
2211.16016
|
Zixiang Zhou
|
Zixiang Zhou, Baoyuan Wang
|
UDE: A Unified Driving Engine for Human Motion Generation
|
14 pages, 10 figures, 6 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generating controllable and editable human motion sequences is a key
challenge in 3D Avatar generation. It has been labor-intensive to generate and
animate human motion for a long time until learning-based approaches have been
developed and applied recently. However, these approaches are still
task-specific or modality-specific\cite{ahuja2019language2pose}\cite{ghosh2021synthesis}\cite{ferreira2021learning}\cite{li2021ai}.
In this paper, we propose ``UDE", the first unified driving engine that enables
generating human motion sequences from natural language or audio sequences (see
Fig.~\ref{fig:teaser}). Specifically, UDE consists of the following key
components: 1) a motion quantization module based on VQVAE that represents
continuous motion sequence as discrete latent code\cite{van2017neural}, 2) a
modality-agnostic transformer encoder\cite{vaswani2017attention} that learns to
map modality-aware driving signals to a joint space, and 3) a unified token
transformer (GPT-like\cite{radford2019language}) network to predict the
quantized latent code index in an auto-regressive manner. 4) a diffusion motion
decoder that takes as input the motion tokens and decodes them into motion
sequences with high diversity. We evaluate our method on
HumanML3D\cite{Guo_2022_CVPR} and AIST++\cite{li2021learn} benchmarks, and the
experiment results demonstrate our method achieves state-of-the-art
performance. Project website: \url{https://github.com/zixiangzhou916/UDE/}
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 08:30:52 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Zhou",
"Zixiang",
""
],
[
"Wang",
"Baoyuan",
""
]
] |
new_dataset
| 0.997315 |
2211.16096
|
Siddhartha Siddhiprada Bhoi
|
Siddhartha Siddhiprada Bhoi, Paramapalli Udaya and Abhay Kumar Singh
|
Construction of Multiple Constrained DNA Codes
|
10 pages, 2 figures
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
DNA sequences are prone to creating secondary structures by folding back on
themselves by non-specific hybridization among its nucleotides. The formation
of secondary structures makes the sequences chemically inactive towards
synthesis and sequencing processes. In this letter, our goal is to tackle the
problems due to the creation of secondary structures in DNA sequences along
with constraints such as not having a large homopolymer run length. In this
paper, we have presented families of DNA codes with secondary structures of
stem length at most two and homopolymer run length at most four. By mapping the
error correcting codes over $\Z_{11}$ to DNA nucleotides, we obtained DNA codes
with rates $0.5765$ times the rate of corresponding code over $\Z_{11}$, which
include some new secondary structure free and better-performing codes for DNA
based data storage and DNA computing purposes.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 11:12:43 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Bhoi",
"Siddhartha Siddhiprada",
""
],
[
"Udaya",
"Paramapalli",
""
],
[
"Singh",
"Abhay Kumar",
""
]
] |
new_dataset
| 0.982091 |
2211.16122
|
Nivedita Bijlani
|
Nivedita Bijlani, Oscar Mendez Maldonado, Samaneh Kouchaki
|
G-CMP: Graph-enhanced Contextual Matrix Profile for unsupervised anomaly
detection in sensor-based remote health monitoring
|
12 pages, 7 figures, Accepted to British Machine Vision Conference
2022
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sensor-based remote health monitoring is used in industrial, urban and
healthcare settings to monitor ongoing operation of equipment and human health.
An important aim is to intervene early if anomalous events or adverse health is
detected. In the wild, these anomaly detection approaches are challenged by
noise, label scarcity, high dimensionality, explainability and wide variability
in operating environments. The Contextual Matrix Profile (CMP) is a
configurable 2-dimensional version of the Matrix Profile (MP) that uses the
distance matrix of all subsequences of a time series to discover patterns and
anomalies. The CMP is shown to enhance the effectiveness of the MP and other
SOTA methods at detecting, visualising and interpreting true anomalies in noisy
real world data from different domains. It excels at zooming out and
identifying temporal patterns at configurable time scales. However, the CMP
does not address cross-sensor information, and cannot scale to high dimensional
data. We propose a novel, self-supervised graph-based approach for temporal
anomaly detection that works on context graphs generated from the CMP distance
matrix. The learned graph embeddings encode the anomalous nature of a time
context. In addition, we evaluate other graph outlier algorithms for the same
task. Given our pipeline is modular, graph construction, generation of graph
embeddings, and pattern recognition logic can all be chosen based on the
specific pattern detection application. We verified the effectiveness of
graph-based anomaly detection and compared it with the CMP and 3
state-of-the-art methods on two real-world healthcare datasets with different anomalies. Our
proposed method demonstrated better recall, alert rate and generalisability.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 11:48:50 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Bijlani",
"Nivedita",
""
],
[
"Maldonado",
"Oscar Mendez",
""
],
[
"Kouchaki",
"Samaneh",
""
]
] |
new_dataset
| 0.987617 |
2211.16281
|
Ivica Kostric
|
Ivica Kostric, Krisztian Balog, T{\o}ll{\o}v Alexander Aresvik,
Nolwenn Bernard, Eyvinn Thu D{\o}rheim, Pholit Hantula, Sander
Havn-S{\o}rensen, Rune Henriksen, Hengameh Hosseini, Ekaterina Khlybova,
Weronika Lajewska, Sindre Ekrheim Mosand, Narmin Orujova
|
DAGFiNN: A Conversational Conference Assistant
| null | null |
10.1145/3523227.3551467
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
DAGFiNN is a conversational conference assistant that can be made available
for a given conference both as a chatbot on the website and as a Furhat robot
physically exhibited at the conference venue. Conference participants can
interact with the assistant to get advice on various questions, ranging from
where to eat in the city or how to get to the airport to which sessions we
recommend them to attend based on the information we have about them. The
overall objective is to provide a personalized and engaging experience and
allow users to ask a broad range of questions that naturally arise before and
during the conference.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 15:10:30 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Kostric",
"Ivica",
""
],
[
"Balog",
"Krisztian",
""
],
[
"Aresvik",
"Tølløv Alexander",
""
],
[
"Bernard",
"Nolwenn",
""
],
[
"Dørheim",
"Eyvinn Thu",
""
],
[
"Hantula",
"Pholit",
""
],
[
"Havn-Sørensen",
"Sander",
""
],
[
"Henriksen",
"Rune",
""
],
[
"Hosseini",
"Hengameh",
""
],
[
"Khlybova",
"Ekaterina",
""
],
[
"Lajewska",
"Weronika",
""
],
[
"Mosand",
"Sindre Ekrheim",
""
],
[
"Orujova",
"Narmin",
""
]
] |
new_dataset
| 0.99939 |
2211.16492
|
Yoav Artzi
|
Anya Ji and Noriyuki Kojima and Noah Rush and Alane Suhr and Wai Keen
Vong and Robert D. Hawkins and Yoav Artzi
|
Abstract Visual Reasoning with Tangram Shapes
|
EMNLP 2022 long paper
| null | null | null |
cs.CL cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce KiloGram, a resource for studying abstract visual reasoning in
humans and machines. Drawing on the history of tangram puzzles as stimuli in
cognitive science, we build a richly annotated dataset that, with >1k distinct
stimuli, is orders of magnitude larger and more diverse than prior resources.
It is both visually and linguistically richer, moving beyond whole shape
descriptions to include segmentation maps and part labels. We use this resource
to evaluate the abstract visual reasoning capacities of recent multi-modal
models. We observe that pre-trained weights demonstrate limited abstract
reasoning, which dramatically improves with fine-tuning. We also observe that
explicitly describing parts aids abstract reasoning for both humans and models,
especially when jointly encoding the linguistic and visual inputs. KiloGram is
available at https://lil.nlp.cornell.edu/kilogram .
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 18:57:06 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Ji",
"Anya",
""
],
[
"Kojima",
"Noriyuki",
""
],
[
"Rush",
"Noah",
""
],
[
"Suhr",
"Alane",
""
],
[
"Vong",
"Wai Keen",
""
],
[
"Hawkins",
"Robert D.",
""
],
[
"Artzi",
"Yoav",
""
]
] |
new_dataset
| 0.999782 |
2211.16496
|
Anirudh Srinivasan
|
Anirudh Srinivasan, Eunsol Choi
|
TyDiP: A Dataset for Politeness Classification in Nine Typologically
Diverse Languages
|
EMNLP 2022 Findings. 16 pages, 8 figures, 11 tables. The data and
code is publicly available at https://github.com/Genius1237/TyDiP
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study politeness phenomena in nine typologically diverse languages.
Politeness is an important facet of communication and is sometimes argued to be
culture-specific, yet existing computational linguistic study is limited to
English. We create TyDiP, a dataset containing three-way politeness annotations
for 500 examples in each language, totaling 4.5K examples. We evaluate how well
multilingual models can identify politeness levels -- they show a fairly robust
zero-shot transfer ability, yet fall short of estimated human accuracy
significantly. We further study mapping the English politeness strategy lexicon
into nine languages via automatic translation and lexicon induction, analyzing
whether each strategy's impact stays consistent across languages. Lastly, we
empirically study the complicated relationship between formality and politeness
through transfer experiments. We hope our dataset will support various research
questions and applications, from evaluating multilingual models to constructing
polite multilingual agents.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 18:58:15 GMT"
}
] | 2022-11-30T00:00:00 |
[
[
"Srinivasan",
"Anirudh",
""
],
[
"Choi",
"Eunsol",
""
]
] |
new_dataset
| 0.999733 |
1909.06988
|
Sidhanth Mohanty
|
Sidhanth Mohanty, Ryan O'Donnell, Pedro Paredes
|
Explicit near-Ramanujan graphs of every degree
|
26 pages
| null | null | null |
cs.DS cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For every constant $d \geq 3$ and $\epsilon > 0$, we give a deterministic
$\mathrm{poly}(n)$-time algorithm that outputs a $d$-regular graph on
$\Theta(n)$ vertices that is $\epsilon$-near-Ramanujan; i.e., its eigenvalues
are bounded in magnitude by $2\sqrt{d-1} + \epsilon$ (excluding the single
trivial eigenvalue of~$d$).
|
[
{
"version": "v1",
"created": "Mon, 16 Sep 2019 05:03:38 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Oct 2019 22:03:49 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Nov 2022 06:57:14 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Mohanty",
"Sidhanth",
""
],
[
"O'Donnell",
"Ryan",
""
],
[
"Paredes",
"Pedro",
""
]
] |
new_dataset
| 0.998716 |
2011.07946
|
C\'edric Beaulac
|
C\'edric Beaulac, Jeffrey S. Rosenthal
|
Introducing a new high-resolution handwritten digits data set with
writer characteristics
|
Data set available here :
https://drive.google.com/drive/folders/1f2o1kjXLvcxRgtmMMuDkA2PQ5Zato4Or?usp=sharing
|
SN COMPUT. SCI. 4, 66 (2023)
|
10.1007/s42979-022-01494-2
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The contributions in this article are two-fold. First, we introduce a new
handwritten digit data set that we collected. It contains
high-resolution images of handwritten digits together with various writer
characteristics which are not available in the well-known MNIST database. The
multiple writer characteristics gathered are a novelty of our data set and
create new research opportunities. The data set is publicly available online.
Second, we analyse this new data set. We begin with simple supervised tasks. We
assess the predictability of the writer characteristics gathered, the effect of
using some of those characteristics as predictors in classification task and
the effect of higher resolution images on classification accuracy. We also
explore semi-supervised applications; we can leverage the high quantity of
handwritten digits data sets already existing online to improve the accuracy of
various classification tasks with noticeable success. Finally, we also
demonstrate the generative perspective offered by this new data set; we are
able to generate images that mimic the writing style of specific writers. The
data set has unique and distinct features and our analysis establishes
benchmarks and showcases some of the new opportunities made possible with this
new data set.
|
[
{
"version": "v1",
"created": "Wed, 4 Nov 2020 18:18:43 GMT"
},
{
"version": "v2",
"created": "Wed, 12 May 2021 18:37:41 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Apr 2022 21:46:00 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Beaulac",
"Cédric",
""
],
[
"Rosenthal",
"Jeffrey S.",
""
]
] |
new_dataset
| 0.986068 |
2103.04423
|
Ricardo de Azambuja
|
Ricardo de Azambuja, Hassan Fouad, Yann Bouteiller, Charles Sol,
Giovanni Beltrame
|
When Being Soft Makes You Tough: A Collision-Resilient Quadcopter
Inspired by Arthropods' Exoskeletons
|
Author's version of the paper presented at ICRA 2022 (still waiting
for the conference proceedings to be online). Added acknowledgements that
were previously removed to fit 6+n ICRA 2022 page limits
|
2022 International Conference on Robotics and Automation (ICRA),
2022, pp. 7854-7860
|
10.1109/ICRA46639.2022.9811841
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Flying robots are usually rather delicate and require protective enclosures
when facing the risk of collision, while high complexity and reduced payload
are recurrent problems with collision-resilient flying robots. Inspired by
arthropods' exoskeletons, we design a simple, open source, easily manufactured,
semi-rigid structure with soft joints that can withstand high-velocity impacts.
With an exoskeleton, the protective shell becomes part of the main robot
structure, thereby minimizing its loss in payload capacity. Our design is
simple to build and customize using cheap components (e.g. bamboo skewers) and
consumer-grade 3D printers. The result is CogniFly, a sub-250g autonomous
quadcopter that survives multiple collisions at speeds up to 7m/s. In addition
to its collision-resiliency, CogniFly is easy to program using Python or Buzz,
carries sensors that allow it to fly for approx. 17min without the need of GPS
or an external motion capture system, has enough computing power to run deep
neural network models on-board and was designed to facilitate integration with
an automated battery swapping system. This structure becomes an ideal platform
for high-risk activities (such as flying in a cluttered environment or
reinforcement learning training) by dramatically reducing the risks of damaging
its own hardware or the environment. Source code, 3D files, instructions and
videos are available through the project's website
(https://thecognifly.github.io).
|
[
{
"version": "v1",
"created": "Sun, 7 Mar 2021 18:35:56 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Sep 2021 19:20:48 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Feb 2022 14:54:54 GMT"
},
{
"version": "v4",
"created": "Mon, 27 Jun 2022 13:41:20 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"de Azambuja",
"Ricardo",
""
],
[
"Fouad",
"Hassan",
""
],
[
"Bouteiller",
"Yann",
""
],
[
"Sol",
"Charles",
""
],
[
"Beltrame",
"Giovanni",
""
]
] |
new_dataset
| 0.986513 |
2108.04186
|
Haritha Thilakarathne
|
Haritha Thilakarathne, Aiden Nibali, Zhen He, Stuart Morgan
|
Pose is all you need: The pose only group activity recognition system
(POGARS)
|
12 pages, 7 figures
|
Machine Vision and Applications 33 (2022)
|
10.1007/s00138-022-01346-2
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce a novel deep learning based group activity recognition approach
called the Pose Only Group Activity Recognition System (POGARS), designed to
use only tracked poses of people to predict the performed group activity. In
contrast to existing approaches for group activity recognition, POGARS uses 1D
CNNs to learn spatiotemporal dynamics of individuals involved in a group
activity and forgo learning features from pixel data. The proposed model uses a
spatial and temporal attention mechanism to infer person-wise importance and
multi-task learning for simultaneously performing group and individual action
classification. Experimental results confirm that POGARS achieves highly
competitive results compared to state-of-the-art methods on a widely used
public volleyball dataset despite only using tracked pose as input. Further our
experiments show by using pose only as input, POGARS has better generalization
capabilities compared to methods that use RGB as input.
|
[
{
"version": "v1",
"created": "Mon, 9 Aug 2021 17:16:04 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Thilakarathne",
"Haritha",
""
],
[
"Nibali",
"Aiden",
""
],
[
"He",
"Zhen",
""
],
[
"Morgan",
"Stuart",
""
]
] |
new_dataset
| 0.956781 |
2108.13408
|
Kai-En Lin
|
Kai-En Lin and Guowei Yang and Lei Xiao and Feng Liu and Ravi
Ramamoorthi
|
View Synthesis of Dynamic Scenes based on Deep 3D Mask Volume
|
This is the extended version of the paper published at ICCV 2021.
Code and dataset available at:
https://cseweb.ucsd.edu//~viscomp/projects/ICCV21Deep/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image view synthesis has seen great success in reconstructing photorealistic
visuals, thanks to deep learning and various novel representations. The next
key step in immersive virtual experiences is view synthesis of dynamic scenes.
However, several challenges exist due to the lack of high-quality training
datasets, and the additional time dimension for videos of dynamic scenes. To
address this issue, we introduce a multi-view video dataset, captured with a
custom 10-camera rig in 120FPS. The dataset contains 96 high-quality scenes
showing various visual effects and human interactions in outdoor scenes. We
develop a new algorithm, Deep 3D Mask Volume, which enables temporally-stable
view extrapolation from binocular videos of dynamic scenes, captured by static
cameras. Our algorithm addresses the temporal inconsistency of disocclusions by
identifying the error-prone areas with a 3D mask volume, and replaces them with
static background observed throughout the video. Our method enables
manipulation in 3D space as opposed to simple 2D masks. We demonstrate better
temporal stability than frame-by-frame static view synthesis methods, or those
that use 2D masks. The resulting view synthesis videos show minimal flickering
artifacts and allow for larger translational movements.
|
[
{
"version": "v1",
"created": "Mon, 30 Aug 2021 17:55:28 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 18:22:49 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Lin",
"Kai-En",
""
],
[
"Yang",
"Guowei",
""
],
[
"Xiao",
"Lei",
""
],
[
"Liu",
"Feng",
""
],
[
"Ramamoorthi",
"Ravi",
""
]
] |
new_dataset
| 0.999746 |
2202.02567
|
Sukhendu Das PhD
|
Binoy Saha and Sukhendu Das
|
Catch Me if You Can: A Novel Task for Detection of Covert Geo-Locations
(CGL)
|
This is an updated version of our accepted paper in: fourth workshop
on Computer Vision Applications (WCVA), 12th Indian Conference on Computer
Vision, Graphics and Image Processing (ICVGIP 20-21), IIT Jodhpur, India,
December 2021. [work sponsored under IMPRINT grant]
| null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most visual scene understanding tasks in the field of computer vision involve
identification of the objects present in the scene. Image regions like
hideouts, turns, & other obscured regions of the scene also contain crucial
information, for specific surveillance tasks. Task proposed in this paper
involves the design of an intelligent visual aid for identification of such
locations in an image, which has either the potential to create an imminent
threat from an adversary or appear as the target zones needing further
investigation. Covert places (CGL) for hiding behind an occluding object are
concealed 3D locations, not detectable from the viewpoint (camera). Hence this
involves delineating specific image regions around the projections of outer
boundary of the occluding objects, as places to be accessed around the
potential hideouts. CGL detection finds applications in military
counter-insurgency operations, surveillance with path planning for an
exploratory robot. Given an RGB image, the goal is to identify all CGLs in the
2D scene. Identification of such regions would require knowledge about the 3D
boundaries of obscuring items (pillars, furniture), their spatial location with
respect to the neighboring regions of the scene. We propose this as a novel
task, termed Covert Geo-Location (CGL) Detection. Classification of any region
of an image as a CGL (as boundary sub-segments of an occluding object that
conceals the hideout) requires examining the 3D relation between boundaries of
occluding objects and their neighborhoods & surroundings. Our method
successfully extracts relevant depth features from a single RGB image and
quantitatively yields significant improvement over existing object detection
and segmentation models adapted and trained for CGL detection. We also
introduce a novel hand-annotated CGL detection dataset containing 1.5K
real-world images for experimentation.
|
[
{
"version": "v1",
"created": "Sat, 5 Feb 2022 14:40:14 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Saha",
"Binoy",
""
],
[
"Das",
"Sukhendu",
""
]
] |
new_dataset
| 0.996945 |
2203.01442
|
Xiangyu Gao
|
Gao Xiangyu, Ding Sihao, Vanas Karl, Dasari Harshavardhan Reddy,
Soderlund Henrik
|
Deformable Radar Polygon: A Lightweight and Predictable Occupancy
Representation for Short-range Collision Avoidance
|
8 pages
| null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Inferring the drivable area in a scene is a key capability for ensuring the
vehicle avoids obstacles and enabling safe autonomous driving. However, a
traditional occupancy grid map suffers from high memory consumption when
forming a fine-resolution grid for a large map. In this paper, we propose a
lightweight, accurate, and predictable occupancy representation for automotive
radars working for short-range applications that take interest in instantaneous
free space surrounding the sensor. This new occupancy format is a polygon
composed of a bunch of vertices selected from noisy radar measurements, which
covers free space inside and gives a Doppler moving velocity for each vertex.
It not only takes a very small memory and computing resources for storage and
updating at every timeslot but also has the predictable shape-change property
based on vertex's Doppler velocity. We name this kind of occupancy
representation 'deformable radar polygon'. Two formation algorithms for radar
polygon are introduced for both single timeslot and continuous inverse sensor
model update. To fit this new polygon representation, a matrix-form collision
detection method has been modeled as well. The radar polygon algorithms and
collision detection model have been validated via extensive experiments with
real collected data and simulations, showing that the deformable radar polygon
is very competitive in terms of its completeness, smoothness, accuracy,
lightweight as well as the shape-predictable property. Our codes will be made
available at
https://github.com/Xiangyu-Gao/deformable_radar_polygon_occupancy_representation.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 22:35:55 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Nov 2022 09:10:19 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Xiangyu",
"Gao",
""
],
[
"Sihao",
"Ding",
""
],
[
"Karl",
"Vanas",
""
],
[
"Reddy",
"Dasari Harshavardhan",
""
],
[
"Henrik",
"Soderlund",
""
]
] |
new_dataset
| 0.993814 |
2203.16265
|
Chaoyang Zhu
|
Chaoyang Zhu, Yiyi Zhou, Yunhang Shen, Gen Luo, Xingjia Pan, Mingbao
Lin, Chao Chen, Liujuan Cao, Xiaoshuai Sun, Rongrong Ji
|
SeqTR: A Simple yet Universal Network for Visual Grounding
|
21 pages, 8 figures
|
European Conference on Computer Vision, 2022
|
10.1007/978-3-031-19833-5_35
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a simple yet universal network termed SeqTR for
visual grounding tasks, e.g., phrase localization, referring expression
comprehension (REC) and segmentation (RES). The canonical paradigms for visual
grounding often require substantial expertise in designing network
architectures and loss functions, making them hard to generalize across tasks.
To simplify and unify the modeling, we cast visual grounding as a point
prediction problem conditioned on image and text inputs, where either the
bounding box or binary mask is represented as a sequence of discrete coordinate
tokens. Under this paradigm, visual grounding tasks are unified in our SeqTR
network without task-specific branches or heads, e.g., the convolutional mask
decoder for RES, which greatly reduces the complexity of multi-task modeling.
In addition, SeqTR also shares the same optimization objective for all tasks
with a simple cross-entropy loss, further reducing the complexity of deploying
hand-crafted loss functions. Experiments on five benchmark datasets demonstrate
that the proposed SeqTR outperforms (or is on par with) the existing
state-of-the-arts, proving that a simple yet universal approach for visual
grounding is indeed feasible. Source code is available at
https://github.com/sean-zhuh/SeqTR.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 12:52:46 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Jul 2022 02:13:37 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Zhu",
"Chaoyang",
""
],
[
"Zhou",
"Yiyi",
""
],
[
"Shen",
"Yunhang",
""
],
[
"Luo",
"Gen",
""
],
[
"Pan",
"Xingjia",
""
],
[
"Lin",
"Mingbao",
""
],
[
"Chen",
"Chao",
""
],
[
"Cao",
"Liujuan",
""
],
[
"Sun",
"Xiaoshuai",
""
],
[
"Ji",
"Rongrong",
""
]
] |
new_dataset
| 0.995733 |
2204.07889
|
Hayk Martiros
|
Hayk Martiros, Aaron Miller, Nathan Bucki, Bradley Solliday, Ryan
Kennedy, Jack Zhu, Tung Dang, Dominic Pattison, Harrison Zheng, Teo Tomic,
Peter Henry, Gareth Cross, Josiah VanderMey, Alvin Sun, Samuel Wang, Kristen
Holtz
|
SymForce: Symbolic Computation and Code Generation for Robotics
|
10 pages, 5 figures. RSS 2022
| null |
10.15607/RSS.2022.XVIII.041
| null |
cs.RO cs.CV cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SymForce, a library for fast symbolic computation, code
generation, and nonlinear optimization for robotics applications like computer
vision, motion planning, and controls. SymForce combines the development speed
and flexibility of symbolic math with the performance of autogenerated, highly
optimized code in C++ or any target runtime language. SymForce provides
geometry and camera types, Lie group operations, and branchless singularity
handling for creating and analyzing complex symbolic expressions in Python,
built on top of SymPy. Generated functions can be integrated as factors into
our tangent-space nonlinear optimizer, which is highly optimized for real-time
production use. We introduce novel methods to automatically compute
tangent-space Jacobians, eliminating the need for bug-prone handwritten
derivatives. This workflow enables faster runtime code, faster development
time, and fewer lines of handwritten code versus the state-of-the-art. Our
experiments demonstrate that our approach can yield order of magnitude speedups
on computational tasks core to robotics. Code is available at
https://github.com/symforce-org/symforce.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2022 00:15:10 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2022 17:15:46 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Martiros",
"Hayk",
""
],
[
"Miller",
"Aaron",
""
],
[
"Bucki",
"Nathan",
""
],
[
"Solliday",
"Bradley",
""
],
[
"Kennedy",
"Ryan",
""
],
[
"Zhu",
"Jack",
""
],
[
"Dang",
"Tung",
""
],
[
"Pattison",
"Dominic",
""
],
[
"Zheng",
"Harrison",
""
],
[
"Tomic",
"Teo",
""
],
[
"Henry",
"Peter",
""
],
[
"Cross",
"Gareth",
""
],
[
"VanderMey",
"Josiah",
""
],
[
"Sun",
"Alvin",
""
],
[
"Wang",
"Samuel",
""
],
[
"Holtz",
"Kristen",
""
]
] |
new_dataset
| 0.99838 |
2204.10619
|
Tomasz Kryjak
|
Mateusz Wasala and Tomasz Kryjak
|
Real-time HOG+SVM based object detection using SoC FPGA for a UHD video
stream
|
6 pages, accepted for the CPS & IoT 2022 conference
| null |
10.1109/MECO55406.2022.9797113
| null |
cs.CV cs.AR eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Object detection is an essential component of many vision systems. For
example, pedestrian detection is used in advanced driver assistance systems
(ADAS) and advanced video surveillance systems (AVSS). Currently, most
detectors use deep convolutional neural networks (e.g., the YOLO -- You Only
Look Once -- family), which, however, due to their high computational
complexity, are not able to process a very high-resolution video stream in
real-time, especially within a limited energy budget. In this paper we present
a hardware implementation of the well-known pedestrian detector with HOG
(Histogram of Oriented Gradients) feature extraction and SVM (Support Vector
Machine) classification. Our system running on AMD Xilinx Zynq UltraScale+
MPSoC (Multiprocessor System on Chip) device allows real-time processing of 4K
resolution (UHD -- Ultra High Definition, 3840 x 2160 pixels) video at 60
frames per second. The system is capable of detecting a pedestrian in a single
scale. The results obtained confirm the high suitability of reprogrammable
devices in the real-time implementation of embedded vision systems.
|
[
{
"version": "v1",
"created": "Fri, 22 Apr 2022 10:29:21 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Wasala",
"Mateusz",
""
],
[
"Kryjak",
"Tomasz",
""
]
] |
new_dataset
| 0.994375 |
2204.13613
|
Anas Gouda
|
Anas Gouda, Abraham Ghanem, Christopher Reining
|
DoPose-6D dataset for object segmentation and 6D pose estimation
|
accepted for IEEE ICMLA 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene understanding is essential in determining how intelligent robotic
grasping and manipulation could get. It is a problem that can be approached
using different techniques: seen object segmentation, unseen object
segmentation, or 6D pose estimation. These techniques can even be extended to
multi-view. Most of the work on these problems depends on synthetic datasets
due to the lack of real datasets that are big enough for training and merely
use the available real datasets for evaluation.
This encourages us to introduce a new dataset (called DoPose-6D). The dataset
contains annotations for 6D Pose estimation, object segmentation, and
multi-view annotations, which serve all the pre-mentioned techniques. The
dataset contains two types of scenes, bin picking and tabletop, with the primary
motive for this dataset collection being bin picking.
We illustrate the effect of this dataset in the context of unseen object
segmentation and provide some insights on mixing synthetic and real data for
the training. We train a Mask R-CNN model that is practical to be used in
industry and robotic grasping applications. Finally, we show how our dataset
boosted the performance of a Mask R-CNN model.
Our DoPose-6D dataset, trained network models, pipeline code, and ROS driver
are available online.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 16:17:48 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 12:30:29 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Gouda",
"Anas",
""
],
[
"Ghanem",
"Abraham",
""
],
[
"Reining",
"Christopher",
""
]
] |
new_dataset
| 0.99951 |
2205.07134
|
Shuming Liu
|
Shuming Liu, Mengmeng Xu, Chen Zhao, Xu Zhao, Bernard Ghanem
|
ETAD: Training Action Detection End to End on a Laptop
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal action detection (TAD) with end-to-end training often suffers from
the pain of huge demand for computing resources due to long video duration. In
this work, we propose an efficient temporal action detector (ETAD) that can
train directly from video frames with extremely low GPU memory consumption. Our
main idea is to minimize and balance the heavy computation among features and
gradients in each training iteration. We propose to sequentially forward the
snippet frame through the video encoder, and backward only a small necessary
portion of gradients to update the encoder. To further alleviate the
computational redundancy in training, we propose to dynamically sample only a
small subset of proposals during training. Moreover, various sampling
strategies and ratios are studied for both the encoder and detector. ETAD
achieves state-of-the-art performance on TAD benchmarks with remarkable
efficiency. On ActivityNet-1.3, training ETAD in 18 hours can reach 38.25%
average mAP with only 1.3 GB memory consumption per video under end-to-end
training. Our code will be publicly released.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2022 21:16:21 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 11:43:24 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Liu",
"Shuming",
""
],
[
"Xu",
"Mengmeng",
""
],
[
"Zhao",
"Chen",
""
],
[
"Zhao",
"Xu",
""
],
[
"Ghanem",
"Bernard",
""
]
] |
new_dataset
| 0.995638 |
2207.02108
|
Benjamin Marais
|
Benjamin Marais and Tony Quertier and St\'ephane Morucci
|
AI-based Malware and Ransomware Detection Models
| null | null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cybercrime is one of the major digital threats of this century. In
particular, ransomware attacks have significantly increased, resulting in
global damage costs of tens of billions of dollars. In this paper, we train and
test different Machine Learning and Deep Learning models for malware detection,
malware classification and ransomware detection. We introduce a novel and
flexible solution that combines two optimized models for malware and ransomware
detection. Our results demonstrate some improvements both in terms of detection
performances and flexibility. In particular, our combined models pave the way
for easier future enhancements using specialized and thus interchangeable
detection modules.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2022 15:22:13 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 10:17:21 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Marais",
"Benjamin",
""
],
[
"Quertier",
"Tony",
""
],
[
"Morucci",
"Stéphane",
""
]
] |
new_dataset
| 0.956356 |
2210.14523
|
Jie Cao
|
Jie Cao, Yin Zhang
|
OTSeq2Set: An Optimal Transport Enhanced Sequence-to-Set Model for
Extreme Multi-label Text Classification
|
EMNLP 2022
|
EMNLP 2022 (Proceedings of the 2022 Conference on Empirical
Methods in Natural Language Processing)
| null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extreme multi-label text classification (XMTC) is the task of finding the
most relevant subset labels from an extremely large-scale label collection.
Recently, some deep learning models have achieved state-of-the-art results in
XMTC tasks. These models commonly predict scores for all labels by a fully
connected layer as the last layer of the model. However, such models can't
predict a relatively complete and variable-length label subset for each
document, because they select positive labels relevant to the document by a
fixed threshold or take the top k labels in descending order of scores. A less
popular type of deep learning model, called sequence-to-sequence (Seq2Seq),
focuses on predicting variable-length positive labels in sequence style. However,
the labels in XMTC tasks are essentially an unordered set rather than an
ordered sequence, and the default order of labels restrains Seq2Seq models in
training. To address this limitation of Seq2Seq, we propose an autoregressive
sequence-to-set model for XMTC tasks named OTSeq2Set. Our model generates
predictions in student-forcing scheme and is trained by a loss function based
on bipartite matching which enables permutation-invariance. Meanwhile, we use
the optimal transport distance as a measurement to force the model to focus on
the closest labels in semantic label space. Experiments show that OTSeq2Set
outperforms other competitive baselines on 4 benchmark datasets. Especially, on
the Wikipedia dataset with 31k labels, it outperforms the state-of-the-art
Seq2Seq method by 16.34% in micro-F1 score. The code is available at
https://github.com/caojie54/OTSeq2Set.
|
[
{
"version": "v1",
"created": "Wed, 26 Oct 2022 07:25:18 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2022 14:24:17 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Cao",
"Jie",
""
],
[
"Zhang",
"Yin",
""
]
] |
new_dataset
| 0.999389 |
2211.01112
|
Amira Guesmi
|
Amira Guesmi, Ihsen Alouani
|
Adversarial Attack on Radar-based Environment Perception Systems
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to their robustness to degraded capturing conditions, radars are widely
used for environment perception, which is a critical task in applications like
autonomous vehicles. More specifically, Ultra-Wide Band (UWB) radars are
particularly efficient for short range settings as they carry rich information
on the environment. Recent UWB-based systems rely on Machine Learning (ML) to
exploit the rich signature of these sensors. However, ML classifiers are
susceptible to adversarial examples, which are created from raw data to fool
the classifier such that it assigns the input to the wrong class. These attacks
represent a serious threat to system integrity, especially for safety-critical
applications. In this work, we present a new adversarial attack on UWB radars
in which an adversary injects adversarial radio noise in the wireless channel
to cause an obstacle recognition failure. First, based on signals collected in
real-life environment, we show that conventional attacks fail to generate
robust noise under realistic conditions. We propose a-RNA, i.e., Adversarial
Radio Noise Attack to overcome these issues. Specifically, a-RNA generates an
adversarial noise that is efficient without synchronization between the input
signal and the noise. Moreover, a-RNA generated noise is, by-design, robust
against pre-processing countermeasures such as filtering-based defenses.
Moreover, in addition to the undetectability objective by limiting the noise
magnitude budget, a-RNA is also efficient in the presence of sophisticated
defenses in the spectral domain by introducing a frequency budget. We believe
this work should raise awareness of potentially critical adversarial attacks on
radar systems, which should be taken seriously.
|
[
{
"version": "v1",
"created": "Wed, 2 Nov 2022 13:39:25 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 07:04:02 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Guesmi",
"Amira",
""
],
[
"Alouani",
"Ihsen",
""
]
] |
new_dataset
| 0.975426 |
2211.02213
|
Huayi Zhou
|
Huayi Zhou, Fei Jiang, Hongtao Lu
|
SSDA-YOLO: Semi-supervised Domain Adaptive YOLO for Cross-Domain Object
Detection
|
submitted to CVIU
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Domain adaptive object detection (DAOD) aims to alleviate transfer
performance degradation caused by the cross-domain discrepancy. However, most
existing DAOD methods are dominated by outdated and computationally intensive
two-stage Faster R-CNN, which is not the first choice for industrial
applications. In this paper, we propose a novel semi-supervised domain adaptive
YOLO (SSDA-YOLO) based method to improve cross-domain detection performance by
integrating the compact one-stage stronger detector YOLOv5 with domain
adaptation. Specifically, we adapt the knowledge distillation framework with
the Mean Teacher model to assist the student model in obtaining instance-level
features of the unlabeled target domain. We also utilize the scene style
transfer to cross-generate pseudo images in different domains for remedying
image-level differences. In addition, an intuitive consistency loss is proposed
to further align cross-domain predictions. We evaluate SSDA-YOLO on public
benchmarks including PascalVOC, Clipart1k, Cityscapes, and Foggy Cityscapes.
Moreover, to verify its generalization, we conduct experiments on yawning
detection datasets collected from various real classrooms. The results show
considerable improvements of our method in these DAOD tasks, which reveals both
the effectiveness of proposed adaptive modules and the urgency of applying more
advanced detectors in DAOD. Our code is available on
\url{https://github.com/hnuzhy/SSDA-YOLO}.
|
[
{
"version": "v1",
"created": "Fri, 4 Nov 2022 01:50:13 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Nov 2022 09:23:35 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Zhou",
"Huayi",
""
],
[
"Jiang",
"Fei",
""
],
[
"Lu",
"Hongtao",
""
]
] |
new_dataset
| 0.995472 |
2211.12141
|
Weixuan Xiong
|
Weixuan Xiong, Xiaochen Sun
|
MGADN: A Multi-task Graph Anomaly Detection Network for Multivariate
Time Series
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anomaly detection for time series, especially multivariate time series (time
series with multiple sensors), has been a research focus for several years.
Though existing methods have achieved great progress, several challenging
problems remain to be solved. Firstly, existing methods, including neural
networks, concentrate only on relationships across timestamps; that is, they
model only how past data influence future data. However, one sensor sometimes
affects another: the speed of wind, for example, may cause a decrease in
temperature. Secondly, there exist two categories of model for time series
anomaly detection: prediction models and reconstruction models. Prediction
models are adept at learning temporal representations but lack capability when
faced with sparse anomalies; reconstruction models are the opposite. Therefore,
efficiently capturing relationships in terms of both timestamps and sensors
becomes our main topic. Our approach uses GAT, which originates from graph
neural networks, to obtain connections between sensors, and an LSTM to obtain
temporal relationships. Our approach is also designed to be double-headed,
calculating both a prediction loss and a reconstruction loss via a VAE
(Variational Auto-Encoder). To take advantage of both sorts of model, a
multi-task optimization algorithm is used.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 10:17:42 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Nov 2022 09:07:25 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Xiong",
"Weixuan",
""
],
[
"Sun",
"Xiaochen",
""
]
] |
new_dataset
| 0.994147 |
2211.13968
|
Tianpeng Bao
|
Tianpeng Bao, Jiadong Chen, Wei Li, Xiang Wang, Jingjing Fei, Liwei
Wu, Rui Zhao, Ye Zheng
|
MIAD: A Maintenance Inspection Dataset for Unsupervised Anomaly
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual anomaly detection plays a crucial role in not only manufacturing
inspection to find defects of products during manufacturing processes, but also
maintenance inspection to keep equipment in optimum working condition
particularly outdoors. Due to the scarcity of the defective samples,
unsupervised anomaly detection has attracted great attention in recent years.
However, existing datasets for unsupervised anomaly detection are biased
towards manufacturing inspection and do not consider maintenance inspection,
which is usually conducted in uncontrolled outdoor environments with varying
camera viewpoints, messy backgrounds, and degradation of object surfaces after
long-term operation. We focus on outdoor maintenance inspection and contribute a
comprehensive Maintenance Inspection Anomaly Detection (MIAD) dataset which
contains more than 100K high-resolution color images in various outdoor
industrial scenarios. This dataset is generated by a 3D graphics software and
covers both surface and logical anomalies with pixel-precise ground truth.
Extensive evaluations of representative algorithms for unsupervised anomaly
detection are conducted, and we expect MIAD and the corresponding experimental
results to inspire the research community in outdoor unsupervised anomaly
detection tasks. Worthwhile related future work can be spawned from our new
dataset.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 09:19:36 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 09:22:02 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Bao",
"Tianpeng",
""
],
[
"Chen",
"Jiadong",
""
],
[
"Li",
"Wei",
""
],
[
"Wang",
"Xiang",
""
],
[
"Fei",
"Jingjing",
""
],
[
"Wu",
"Liwei",
""
],
[
"Zhao",
"Rui",
""
],
[
"Zheng",
"Ye",
""
]
] |
new_dataset
| 0.999806 |
2211.14358
|
Zhixuan Zhou
|
Zhixuan Zhou, Jiao Sun, Jiaxin Pei, Nanyun Peng and Jinjun Xiong
|
A Moral- and Event- Centric Inspection of Gender Bias in Fairy Tales at
A Large Scale
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fairy tales are a common resource for young children to learn a language or
understand how a society works. However, gender bias, e.g., stereotypical
gender roles, in this literature may cause harm and skew children's world view.
Instead of decades of qualitative and manual analysis of gender bias in fairy
tales, we computationally analyze gender bias in a fairy tale dataset
containing 624 fairy tales from 7 different cultures. We specifically examine
gender difference in terms of moral foundations, which are measures of human
morality, and events, which reveal human activities associated with each
character. We find that the number of male characters is two times that of
female characters, showing a disproportionate gender representation. Our
analysis further reveals stereotypical portrayals of both male and female
characters in terms of moral foundations and events. Female characters turn out
to be more associated with care-, loyalty- and sanctity-related moral words, while
male characters are more associated with fairness- and authority-related moral
words. Female characters' events are often about emotion (e.g., weep),
appearance (e.g., comb), household (e.g., bake), etc.; while male characters'
events are more about profession (e.g., hunt), violence (e.g., destroy),
justice (e.g., judge), etc. Gender bias in terms of moral foundations shows an
obvious difference across cultures. For example, female characters are more
associated with care and sanctity in high uncertainty-avoidance cultures which
are less open to changes and unpredictability. Based on the results, we propose
implications for children's literature and early literacy research.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 19:38:09 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Zhou",
"Zhixuan",
""
],
[
"Sun",
"Jiao",
""
],
[
"Pei",
"Jiaxin",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Xiong",
"Jinjun",
""
]
] |
new_dataset
| 0.988895 |
2211.14364
|
Devansh Agrawal
|
Devansh R. Agrawal, Dimitra Panagou
|
Safe and Robust Observer-Controller Synthesis using Control Barrier
Functions
|
6 pages, 4 figures. Accepted at LCSS, CDC 2023
|
IEEE Control Systems Letters 7 (2022): 127-132
|
10.1109/LCSYS.2022.3185142
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the synthesis of safety-critical controllers using
estimate feedback. We propose an observer-controller interconnection to ensure
that the nonlinear system remains safe despite bounded disturbances on the
system dynamics and measurements that correspond to partial state information.
The co-design of observers and controllers is critical, since even in
undisturbed cases, observers and controllers designed independently may not
render the system safe. We propose two approaches to synthesize
observer-controller interconnections. The first approach utilizes
Input-to-State Stable observers, and the second uses Bounded Error observers.
Using these stability and boundedness properties of the observation error, we
construct novel Control Barrier Functions that impose inequality constraints on
the control inputs which, when satisfied, certifies safety. We propose
quadratic program-based controllers to satisfy these constraints, and prove
Lipschitz continuity of the derived controllers. Simulations and experiments on
a quadrotor demonstrate the efficacy of the proposed methods.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 20:14:31 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Agrawal",
"Devansh R.",
""
],
[
"Panagou",
"Dimitra",
""
]
] |
new_dataset
| 0.99388 |
2211.14369
|
Leonard Tang
|
Leonard Tang, Alexander Cai, Steve Li, Jason Wang
|
The Naughtyformer: A Transformer Understands Offensive Humor
|
AAAI-23 Student Abstract
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Jokes are intentionally written to be funny, but not all jokes are created
the same. Some jokes may be fit for a classroom of kindergarteners, but others
are best reserved for a more mature audience. While recent work has shown
impressive results on humor detection in text, here we instead investigate the
more nuanced task of detecting humor subtypes, especially of the less innocent
variety. To that end, we introduce a novel jokes dataset filtered from Reddit
and solve the subtype classification task using a finetuned Transformer dubbed
the Naughtyformer. Moreover, we show that our model is significantly better at
detecting offensiveness in jokes compared to state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 20:37:58 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Tang",
"Leonard",
""
],
[
"Cai",
"Alexander",
""
],
[
"Li",
"Steve",
""
],
[
"Wang",
"Jason",
""
]
] |
new_dataset
| 0.999722 |
2211.14385
|
Jacob Zietek
|
Jacob Zietek, Nicholas Wade, Cole Roberts, Aref Malek, Manish Pylla,
Will Xu, Sagar Patil
|
Pac-Man Pete: An extensible framework for building AI in VEX Robotics
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This technical report details VEX Robotics team BLRSAI's development of a
fully autonomous robot for VEX Robotics' Tipping Point AI Competition. We
identify and develop three separate critical components. This includes a Unity
simulation and reinforcement learning model training pipeline, a malleable
computer vision pipeline, and a data transfer pipeline to offload large
computations from the VEX V5 Brain/micro-controller to an external computer. We
give the community access to all of these components in hopes they can reuse
and improve upon them in the future, and that it will spark new ideas for
autonomy as well as the necessary infrastructure and programs for AI in
educational robotics.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 21:59:30 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Zietek",
"Jacob",
""
],
[
"Wade",
"Nicholas",
""
],
[
"Roberts",
"Cole",
""
],
[
"Malek",
"Aref",
""
],
[
"Pylla",
"Manish",
""
],
[
"Xu",
"Will",
""
],
[
"Patil",
"Sagar",
""
]
] |
new_dataset
| 0.996542 |
2211.14417
|
Oliver Neumann
|
Oliver Neumann, Marcel Schilling, Markus Reischl, Ralf Mikut
|
EasyMLServe: Easy Deployment of REST Machine Learning Services
| null |
Schulte, H. Proceedings - 32. Workshop Computational Intelligence:
Berlin, 1. - 2. Dezember 2022. KIT Scientific Publishing, 2022
|
10.5445/KSP/1000151141
| null |
cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Various research domains use machine learning approaches because they can
solve complex tasks by learning from data. Deploying machine learning models,
however, is not trivial and developers have to implement complete solutions
which are often installed locally and include Graphical User Interfaces (GUIs).
Distributing software to various users on-site has several problems. Therefore,
we propose a concept to deploy software in the cloud. There are several
frameworks available based on Representational State Transfer (REST) which can
be used to implement cloud-based machine learning services. However, machine
learning services for scientific users have special requirements that
state-of-the-art REST frameworks do not cover completely. We contribute an
EasyMLServe software framework to deploy machine learning services in the cloud
using REST interfaces and generic local or web-based GUIs. Furthermore, we
apply our framework to two real-world applications, i.e., energy time-series
forecasting and cell instance segmentation. The EasyMLServe framework and the
use cases are available on GitHub.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 00:42:51 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Neumann",
"Oliver",
""
],
[
"Schilling",
"Marcel",
""
],
[
"Reischl",
"Markus",
""
],
[
"Mikut",
"Ralf",
""
]
] |
new_dataset
| 0.988508 |
2211.14419
|
Xiang Li
|
Xiang Li, Haoyuan Cao, Shijie Zhao, Junlin Li, Li Zhang, Bhiksha Raj
|
Panoramic Video Salient Object Detection with Ambisonic Audio Guidance
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video salient object detection (VSOD), as a fundamental computer vision
problem, has been extensively discussed in the last decade. However, all
existing works focus on addressing the VSOD problem in 2D scenarios. With the
rapid development of VR devices, panoramic videos have been a promising
alternative to 2D videos to provide immersive feelings of the real world. In
this paper, we aim to tackle the video salient object detection problem for
panoramic videos, with their corresponding ambisonic audios. A multimodal
fusion module equipped with two pseudo-siamese audio-visual context fusion
(ACF) blocks is proposed to effectively conduct audio-visual interaction. The
ACF block equipped with spherical positional encoding enables the fusion in the
3D context to capture the spatial correspondence between pixels and sound
sources from the equirectangular frames and ambisonic audios. Experimental
results verify the effectiveness of our proposed components and demonstrate
that our method achieves state-of-the-art performance on the ASOD60K dataset.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 00:50:02 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Li",
"Xiang",
""
],
[
"Cao",
"Haoyuan",
""
],
[
"Zhao",
"Shijie",
""
],
[
"Li",
"Junlin",
""
],
[
"Zhang",
"Li",
""
],
[
"Raj",
"Bhiksha",
""
]
] |
new_dataset
| 0.997616 |
2211.14443
|
Vineet Kumar
|
Vineet Kumar and Suresh Sundaram
|
Siamese based Neural Network for Offline Writer Identification on word
level data
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Handwriting recognition is one of the desirable attributes of document
comprehension and analysis. It is concerned with a document's writing style
and the characteristics that distinguish its author. The diversity of text images,
notably in images with varying handwriting, makes the process of learning good
features difficult in cases where little data is available. In this paper, we
propose a novel scheme to identify the author of a document based on the input
word image. Our method is text independent and does not impose any constraint
on the size of the input image under examination. To begin with, we detect
crucial components in handwriting and extract regions surrounding them using
Scale Invariant Feature Transform (SIFT). These patches are designed to capture
individual writing features (including allographs, characters, or combinations
of characters) that are likely to be unique for an individual writer. These
features are then passed through a deep Convolutional Neural Network (CNN) in
which the weights are learned by applying the concept of Similarity learning
using a Siamese network. The Siamese network enhances the discrimination power
of the CNN by mapping the similarity between different pairs of input images.
Features learned at different scales of the extracted SIFT key-points are
encoded using Sparse PCA; each component of the Sparse PCA is assigned a
saliency score signifying its level of significance in discriminating different
writers effectively. Finally, the weighted Sparse PCA corresponding to each
SIFT key-point is combined to arrive at a final classification score for each
writer. The proposed algorithm was evaluated on two publicly available
databases (namely IAM and CVL) and achieves promising results when compared
with other deep-learning-based algorithms.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 10:01:46 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Kumar",
"Vineet",
""
],
[
"Sundaram",
"Suresh",
""
]
] |
new_dataset
| 0.958136 |
2211.14444
|
Javier A. Arroyo-Figueroa
|
Javier A. Arroyo-Figueroa
|
MiftyCoin (MFT): A Cryptocurrency Mined with Proof of Human Work
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present in this paper a cryptocurrency called Mobile Fungible Token (MFT)
or "MiftyCoin", which is mined with Proof of Human Work (PoH). Blocks in MFT's
blockchain are mined by users solving unique 24-tile puzzles autogenerated as a
function of block hash values. Each tile in the puzzle is a 4-sided domino-like
square, where the number of dots per square is a function of a subset of the
bits of a corresponding byte in the block's hash value. The objective is to
find a set of tile moves that end up in an arrangement with an optimal score;
where each matching tile side increases the score by one. The block with the
highest score gets a reward. More information about MiftyCoin is available at:
https://www.miftycoin.com.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 03:03:54 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Arroyo-Figueroa",
"Javier A.",
""
]
] |
new_dataset
| 0.999851 |
2211.14445
|
MANUEL DIAZ ZAPATA
|
Manuel Alejandro Diaz-Zapata (CHROMA), \"Ozg\"ur Erkent (CHROMA),
Christian Laugier (CHROMA), Jilles Dibangoye (CHROMA), David Sierra
Gonz\'alez (CHROMA)
|
LAPTNet: LiDAR-Aided Perspective Transform Network
|
ICARCV 2022 - 17th International Conference on Control, Automation,
Robotics and Vision, Dec 2022, Singapore, Singapore
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic grids are a useful representation of the environment around a robot.
They can be used in autonomous vehicles to concisely represent the scene around
the car, capturing vital information for downstream tasks like navigation or
collision assessment. Information from different sensors can be used to
generate these grids. Some methods rely only on RGB images, whereas others
choose to incorporate information from other sensors, such as radar or LiDAR.
In this paper, we present an architecture that fuses LiDAR and camera
information to generate semantic grids. By using the 3D information from a
LiDAR point cloud, the LiDAR-Aided Perspective Transform Network (LAPTNet) is
able to associate features in the camera plane to the bird's eye view without
having to predict any depth information about the scene. Compared to
state-of-the-art camera-only methods, LAPTNet achieves an improvement of up to
8.8 points (or 38.13%) over competing approaches for the classes
proposed in the NuScenes dataset validation split.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 18:56:02 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Diaz-Zapata",
"Manuel Alejandro",
"",
"CHROMA"
],
[
"Erkent",
"Özgür",
"",
"CHROMA"
],
[
"Laugier",
"Christian",
"",
"CHROMA"
],
[
"Dibangoye",
"Jilles",
"",
"CHROMA"
],
[
"González",
"David Sierra",
"",
"CHROMA"
]
] |
new_dataset
| 0.998816 |
2211.14447
|
Atra Akandeh
|
Atra Akandeh
|
Sentence-Level Sign Language Recognition Framework
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present two solutions to sentence-level SLR, which requires mapping videos
of sign language sentences to sequences of gloss labels.
Connectionist Temporal Classification (CTC) has been used as the classifier
level of both models. CTC is used to avoid pre-segmenting the sentences into
individual words. The first model is an LRCN-based model, and the second model
is a Multi-Cue Network. LRCN is a model in which a CNN as a feature extractor
is applied to each frame before feeding them into an LSTM. In the first
approach, no prior knowledge has been leveraged. Raw frames are fed into an
18-layer LRCN with a CTC on top. In the second approach, three main
characteristics (hand shape, hand position, and hand movement information)
associated with each sign have been extracted using Mediapipe. 2D landmarks of
hand shape have been used to create the skeleton of the hands and then are fed
to a CONV-LSTM model. Hand locations and hand positions as relative distance to
head are fed to separate LSTMs. All three sources of information have been then
integrated into a Multi-Cue network with a CTC classification layer. We
evaluated the performance of the proposed models on RWTH-PHOENIX-Weather.
After performing an extensive search over model hyper-parameters such as the
number of feature maps, input size, batch size, sequence length, LSTM memory
cells, regularization, and dropout, we were able to achieve a word error rate
(WER) of 35.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 01:45:41 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Akandeh",
"Atra",
""
]
] |
new_dataset
| 0.99379 |
2211.14451
|
V\'aclav Ko\v{s}a\v{r}
|
Vaclav Kosar, Anton\'in Hoskovec, Milan \v{S}ulc, Radek Bartyzal
|
GLAMI-1M: A Multilingual Image-Text Fashion Dataset
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce GLAMI-1M: the largest multilingual image-text classification
dataset and benchmark. The dataset contains images of fashion products with
item descriptions, each in 1 of 13 languages. Categorization into 191 classes
has high-quality annotations: all 100k images in the test set and 75% of the 1M
training set were human-labeled. The paper presents baselines for image-text
classification showing that the dataset presents a challenging fine-grained
classification problem: The best scoring EmbraceNet model using both visual and
textual features achieves 69.7% accuracy. Experiments with a modified Imagen
model show the dataset is also suitable for image generation conditioned on
text. The dataset, source code and model checkpoints are published at
https://github.com/glami/glami-1m
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 13:19:07 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Kosar",
"Vaclav",
""
],
[
"Hoskovec",
"Antonín",
""
],
[
"Šulc",
"Milan",
""
],
[
"Bartyzal",
"Radek",
""
]
] |
new_dataset
| 0.999835 |
2211.14478
|
Ronghe Jin
|
Ronghe Jin, Yan Wang, Zhi Gao, Xiaoji Niu, Li-Ta Hsu, and Jingnan Liu
|
DynaVIG: Monocular Vision/INS/GNSS Integrated Navigation and Object
Tracking for AGV in Dynamic Scenes
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual-Inertial Odometry (VIO) usually suffers from drift over long runs, and
its accuracy is easily affected by dynamic objects. We propose DynaVIG, a
navigation and object tracking system based on the integration of Monocular
Vision, Inertial Navigation System (INS), and Global Navigation Satellite
System (GNSS). Our system aims to provide an accurate global estimation of the
navigation states and object poses for the automated ground vehicle (AGV) in
dynamic scenes. Due to the scale ambiguity of the object, a prior height model
is proposed to initialize the object pose, and the scale is continuously
estimated with the aid of GNSS and INS. To precisely track an object with
complex motion, we establish an accurate dynamics model according to its motion
state. Then the multi-sensor observations are optimized in a unified framework.
Experiments on the KITTI dataset demonstrate that the multisensor fusion can
effectively improve the accuracy of navigation and object tracking, compared to
state-of-the-art methods. In addition, the proposed system achieves good
estimation of the objects that change speed or direction.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 04:29:23 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Jin",
"Ronghe",
""
],
[
"Wang",
"Yan",
""
],
[
"Gao",
"Zhi",
""
],
[
"Niu",
"Xiaoji",
""
],
[
"Hsu",
"Li-Ta",
""
],
[
"Liu",
"Jingnan",
""
]
] |
new_dataset
| 0.998956 |
2211.14485
|
Lixiang Lin
|
Lixiang Lin, Songyou Peng, Qijun Gan, Jianke Zhu
|
PatchShading: High-Quality Human Reconstruction by Patch Warping and
Shading Refinement
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human reconstruction from multi-view images plays an important role in many
applications. Although neural rendering methods have achieved promising results
on synthesising realistic images, it is still difficult to handle the ambiguity
between the geometry and appearance using only rendering loss. Moreover, it is
very computationally intensive to render a whole image as each pixel requires a
forward network inference. To tackle these challenges, we propose a novel
approach called \emph{PatchShading} to reconstruct high-quality mesh of human
body from multi-view posed images. We first present a patch warping strategy to
constrain multi-view photometric consistency explicitly. Second, we adopt
spherical harmonics (SH) illumination and shape-from-shading image formation to
further refine the geometric details. By taking advantage of the oriented point
cloud shape representation and SH shading, our proposed method significantly
reduces the optimization and rendering time compared to implicit methods.
The encouraging results on both synthetic and real-world datasets demonstrate
the efficacy of our proposed approach.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 05:16:39 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Lin",
"Lixiang",
""
],
[
"Peng",
"Songyou",
""
],
[
"Gan",
"Qijun",
""
],
[
"Zhu",
"Jianke",
""
]
] |
new_dataset
| 0.975851 |
2211.14522
|
Yang Zhang
|
Yang Zhang, Yang Zhou, Huilin Pan, Bo Wu, and Guodong Sun
|
Visual Fault Detection of Multi-scale Key Components in Freight Trains
|
9 pages, 4 figures
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Fault detection for key components in the braking system of freight trains is
critical for ensuring railway transportation safety. Although methods based on
deep learning are frequently employed, these fault detectors are highly
reliant on hardware resources and complex to implement. In addition, no
train fault detectors consider the drop in accuracy induced by scale variation
of fault parts. This paper proposes a lightweight anchor-free framework to
solve the above problems. Specifically, to reduce the amount of computation and
model size, we introduce a lightweight backbone and adopt an anchor-free method
for localization and regression. To improve detection accuracy for multi-scale
parts, we design a feature pyramid network to generate rectangular layers of
different sizes to map parts with similar aspect ratios. Experiments on four
fault datasets show that our framework achieves 98.44% accuracy while the model
size is only 22.5 MB, outperforming state-of-the-art detectors.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 09:20:49 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Zhang",
"Yang",
""
],
[
"Zhou",
"Yang",
""
],
[
"Pan",
"Huilin",
""
],
[
"Wu",
"Bo",
""
],
[
"Sun",
"Guodong",
""
]
] |
new_dataset
| 0.997813 |
2211.14531
|
Yifan Xu
|
Anik Pramanik, Pan Xu and Yifan Xu
|
Equity Promotion in Public Transportation
|
A preliminary version will appear in the 37th AAAI Conference on
Artificial Intelligence (AAAI 23)
| null | null | null |
cs.AI cs.DS cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many news articles report the obstacles confronting poverty-stricken
households in accessing public transit. These barriers create a great deal of
inconvenience for impoverished families and, more importantly, contribute to
social inequality. A typical approach to addressing the issue is to build more
transport infrastructure that offers more opportunities to access public
transit, especially for deprived communities. Examples include adding bus
lines connecting needy residents to railway systems and extending existing bus
lines to areas of low socioeconomic status. Recently, a new strategy has been
proposed: harnessing ubiquitous ride-hailing services to connect disadvantaged
households with the nearest public transit. Compared with the former
infrastructure-based solution, the ride-hailing-based strategy enjoys a few
exclusive benefits such as higher effectiveness and more flexibility.
In this paper, we propose an optimization model to study how to integrate the
two approaches together for equity-promotion purposes. Specifically, we aim to
design a strategy of allocating a given limited budget to different candidate
programs such that the overall social equity is maximized, which is defined as
the minimum covering ratio among all pre-specified protected groups of
households (based on race, income, etc.). We have designed a linear-programming
(LP) based rounding algorithm, which provably achieves an optimal approximation
ratio of 1-1/e. Additionally, we test our algorithm against a few baselines on
real data assembled by outsourcing multiple public datasets collected in the
city of Chicago. Experimental results confirm our theoretical predictions and
demonstrate the effectiveness of our LP-based strategy in promoting social
equity, especially when the budget is insufficient.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 10:06:00 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Pramanik",
"Anik",
""
],
[
"Xu",
"Pan",
""
],
[
"Xu",
"Yifan",
""
]
] |
new_dataset
| 0.98788 |
2211.14541
|
Vladimir Poliakov
|
Vladimir Poliakov and Kenan Niu and Emmanuel Vander Poorten and
Dzmitry Tsetserukou
|
RL-Based Guidance in Outpatient Hysteroscopy Training: A Feasibility
Study
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents an RL-based agent for outpatient hysteroscopy training.
Hysteroscopy is a gynecological procedure for examination of the uterine
cavity. Recent advancements have enabled performing this type of intervention in
an outpatient setting without anaesthesia. While beneficial to the patient,
this approach introduces new challenges for clinicians, who should take
additional measures to maintain the level of patient comfort and prevent tissue
damage. Our prior work has presented a platform for hysteroscopic training with
the focus on the passage of the cervical canal. With this work, we aim to
extend the functionality of the platform by designing a subsystem that
autonomously performs the task of the passage of the cervical canal. This
feature can later be used as a virtual instructor to provide educational cues
for trainees and assess their performance. The developed algorithm is based on
the soft actor-critic approach to smooth the learning curve of the agent and
ensure uniform exploration of the workspace. The designed algorithm was tested
against the performance of five clinicians. Overall, the algorithm demonstrated
high efficiency and reliability, succeeding in 98% of trials and outperforming
the expert group in three out of four measured metrics.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 11:16:17 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Poliakov",
"Vladimir",
""
],
[
"Niu",
"Kenan",
""
],
[
"Poorten",
"Emmanuel Vander",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.996424 |
2211.14542
|
Soojong Kim
|
Soojong Kim, Jisu Kim
|
The Information Ecosystem of Conspiracy Theory: Examining the QAnon
Narrative on Facebook
|
Accepted for publication at CSCW 2023. Forthcoming in the Proceedings
of the ACM on Human-Computer Interaction
| null | null | null |
cs.SI cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been concern about the proliferation of the "QAnon" conspiracy
theory on Facebook, but little is known about how its misleading narrative
propagated on the world's largest social media platform. Thus, the present
research analyzed content generated by 2,813 Facebook pages and groups that
contributed to promoting the conspiracy narrative between 2017 and 2020. The
result demonstrated that the activity of QAnon pages and groups began to surge
significantly months before the 2020 U.S. Presidential Election. We found
that these pages and groups increasingly relied on internal sources, i.e.,
Facebook accounts or their content on the platform, while their dependence on
external information sources decreased continuously since 2017. It was also
found that QAnon posts based on the Facebook internal sources attracted
significantly more shares and comments compared with other QAnon posts. These
findings suggest that QAnon pages and groups increasingly isolated themselves
from sources outside Facebook while having more internal interactions within
the platform, and the endogenous creation and circulation of disinformation
might play a significant role in boosting the influence of the misleading
narrative within Facebook. The findings imply that efforts to tackle
disinformation on social media should target not only the cross-platform
infiltration of falsehood but also the intra-platform production and
propagation of disinformation.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 11:27:04 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Kim",
"Soojong",
""
],
[
"Kim",
"Jisu",
""
]
] |
new_dataset
| 0.966704 |
2211.14553
|
Gwo Chin Chung
|
Ruven A/L Sundarajoo, Gwo Chin Chung, Wai Leong Pang, Soo Fun Tan
|
A Remote Baby Surveillance System with RFID and GPS Tracking
|
12 pages, 13 figures Published with International Journal of
Engineering Trends and Technology (IJETT)
|
International Journal of Engineering Trends and Technology, vol.
70, no. 11, pp. 81-92, 2022
|
10.14445/22315381/IJETT-V70I11P208
| null |
cs.CY cs.HC cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the 21st century, sending babies or children to daycare centres has become
more and more common among young guardians. The balance between full-time work
and child care is increasingly challenging nowadays. In Malaysia, thousands of
child abuse cases have been reported from babysitting centres every year, which
indeed triggers the anxiety and stress of the guardians. Hence, this paper
proposes to construct a remote baby surveillance system with radio-frequency
identification (RFID) and global positioning system (GPS) tracking. With the
incorporation of the Internet of Things (IoT), a sensor-based microcontroller
is used to detect the conditions of the baby as well as the surrounding
environment and then display the real-time data as well as notifications to
alert the guardians via a mobile application. These conditions include the
crying and waking of the baby, as well as temperature, the mattress's wetness,
and moving objects around the baby. In addition, RFID and GPS location tracking
are implemented to ensure the safety of the baby, while white noise is used to
increase the comfort of the baby. In the end, a prototype has been successfully
developed for functionality and reliability testing. Several experiments have
been conducted to measure the efficiency of the mattress's wetness detection,
the RFID transmission range, the frequency spectrum of white noise, and also
the output power of the solar panel. The proposed system is expected to assist
guardians in ensuring the safety and comfort of their babies remotely, as well
as prevent any occurrence of child abuse.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 12:42:27 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Sundarajoo",
"Ruven A/L",
""
],
[
"Chung",
"Gwo Chin",
""
],
[
"Pang",
"Wai Leong",
""
],
[
"Tan",
"Soo Fun",
""
]
] |
new_dataset
| 0.993373 |
2211.14564
|
Guangze Zheng
|
Guangze Zheng, Changhong Fu, Junjie Ye, Bowen Li, Geng Lu, Jia Pan
|
Siamese Object Tracking for Vision-Based UAM Approaching with Pairwise
Scale-Channel Attention
|
Accepted by IROS2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although manipulation with unmanned aerial manipulators (UAMs) has been
widely studied, vision-based UAM approaching, which is crucial to the
subsequent manipulation, generally lacks effective design. The key to
visual UAM approaching lies in object tracking, while current UAM tracking
typically relies on costly model-based methods. Besides, UAM approaching often
confronts more severe object scale variation issues, which makes it
inappropriate to directly employ state-of-the-art model-free Siamese-based
methods from the object tracking field. To address the above problems, this
work proposes a novel Siamese network with pairwise scale-channel attention
(SiamSA) for vision-based UAM approaching. Specifically, SiamSA consists of a
pairwise scale-channel attention network (PSAN) and a scale-aware anchor
proposal network (SA-APN). PSAN acquires valuable scale information for feature
processing, while SA-APN mainly attaches scale awareness to anchor proposing.
Moreover, a new tracking benchmark for UAM approaching, namely UAMT100, is
recorded with 35K frames on a flying UAM platform for evaluation. Exhaustive
experiments on the benchmarks and real-world tests validate the efficiency and
practicality of SiamSA with a promising speed. Both the code and UAMT100
benchmark are now available at https://github.com/vision4robotics/SiamSA.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 13:33:49 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Zheng",
"Guangze",
""
],
[
"Fu",
"Changhong",
""
],
[
"Ye",
"Junjie",
""
],
[
"Li",
"Bowen",
""
],
[
"Lu",
"Geng",
""
],
[
"Pan",
"Jia",
""
]
] |
new_dataset
| 0.95532 |
2211.14572
|
Azzam Habib
|
Azzam Habib
|
Identifying a 3-vertex strongly biconnected directed subgraph with
minimum number of edges
| null | null | null | null |
cs.DS math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
A strongly connected graph is strongly biconnected if, after ignoring the
direction of its edges, we obtain an undirected graph with no articulation
points. A 3-vertex strongly biconnected graph is a strongly biconnected digraph
with the property that deleting any two vertices leaves a strongly
biconnected subgraph. Jaberi [11] presented approximation algorithms for the
minimum cardinality 2-vertex strongly biconnected directed subgraph problem. In
this paper, we focus on polynomial-time algorithms, which we have implemented,
for producing spanning subgraphs that are 3-vertex strongly
biconnected.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 13:58:01 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Habib",
"Azzam",
""
]
] |
new_dataset
| 0.987112 |
2211.14582
|
Zhengjie Huang
|
Zhengjie Huang, Yunyang Huang, Peng Qian, Jianhai Chen, Qinming He
|
Demystifying Bitcoin Address Behavior via Graph Neural Networks
|
This paper has been accepted by IEEE International Conference on Data
Engineering 2023 (Second Research Round)
| null | null | null |
cs.CR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bitcoin is one of the decentralized cryptocurrencies powered by a
peer-to-peer blockchain network. Parties who trade in the bitcoin network are
not required to disclose any personal information. This property of anonymity,
however, facilitates potential malicious transactions to a certain extent.
Indeed, various illegal activities such as money laundering, dark network
trading, and gambling in the bitcoin network are nothing new now. While a
proliferation of work has been developed to identify malicious bitcoin
transactions, the behavior analysis and classification of bitcoin addresses are
largely overlooked by existing tools. In this paper, we propose BAClassifier, a
tool that can automatically classify bitcoin addresses based on their
behaviors. Technically, we come up with the following three key designs. First,
we consider casting the transactions of the bitcoin address into an address
graph structure, of which we introduce a graph node compression technique and a
graph structure augmentation method to characterize a unified graph
representation. Furthermore, we leverage a graph feature network to learn the
graph representations of each address and generate the graph embeddings.
Finally, we aggregate all graph embeddings of an address into an address-level
representation and employ a classification model to classify the address
behavior. As a side contribution, we construct and release a
large-scale annotated dataset that consists of over 2 million real-world
bitcoin addresses and concerns 4 types of address behaviors. Experimental
results demonstrate that our proposed framework outperforms state-of-the-art
bitcoin address classifiers and existing classification models, where the
precision and F1-score are 96% and 95%, respectively. Our implementation and
dataset are released, hoping to inspire others.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 14:55:50 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Huang",
"Zhengjie",
""
],
[
"Huang",
"Yunyang",
""
],
[
"Qian",
"Peng",
""
],
[
"Chen",
"Jianhai",
""
],
[
"He",
"Qinming",
""
]
] |
new_dataset
| 0.963852 |
2211.14607
|
Somoy Subandhu Barua
|
Somoy Subandhu Barua, Imam Mohammad Zulkarnain, Abhishek Roy, Md.
Golam Rabiul Alam, Md Zia Uddin
|
Sketch2FullStack: Generating Skeleton Code of Full Stack Website and
Application from Sketch using Deep Learning and Computer Vision
|
12 pages, 10 figures, preprint
| null | null | null |
cs.CV cs.AI cs.NE cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Full-stack web or app development requires a software firm, or more
specifically a team of experienced developers, to contribute a large portion of
their time and resources to design the website and then convert it to code. As
a result, the efficiency of the development team is significantly reduced when
it comes to converting UI wireframes and database schemas into an actual
working system. It would save valuable resources and speed up the overall
workflow if clients or developers could automate this process of converting
a pre-made full-stack website design into partially, if not fully, working
code. In this paper, we present a novel approach to generating skeleton code
from sketched images using Deep Learning and Computer Vision. The training
dataset consists of hand-sketched images of low-fidelity wireframes, database
schemas and class diagrams. The approach consists
of three parts. First, the front-end or UI elements detection and extraction
from custom-made UI wireframes. Second, individual database table creation from
schema designs and lastly, creating a class file from class diagrams.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 16:32:13 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Barua",
"Somoy Subandhu",
""
],
[
"Zulkarnain",
"Imam Mohammad",
""
],
[
"Roy",
"Abhishek",
""
],
[
"Alam",
"Md. Golam Rabiul",
""
],
[
"Uddin",
"Md Zia",
""
]
] |
new_dataset
| 0.998944 |
2211.14613
|
Gabriel Istrate
|
Gabriel Istrate
|
Some Remarks on Almost Periodic Sequences and Languages
|
Reconstructed source file of a paper originally published in 1995 in
a volume currently without an online version (and with limited availability).
Uploaded in order to ensure the online availability (and preservation) of the
paper. This version faithfully reproduces the original, except for the
addition of a note about the solution of Open Problem 3 and the correction of
some minor typos
|
pages 191-195, in "Mathematical linguistics and related topics.
Papers in honor of Solomon Marcus on his 70th birthday". Edited by Gheorghe
P\u{a}un. Editura Academiei Rom\^ane, Bucharest, 1995. xii+364 pp. ISBN:
973-27-0486-1
| null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Almost periodicity has been considered in Formal Language Theory in
connection with some topics in Symbolic Dynamics. In (P\u{a}un and Marcus,
Bulletin of EATCS 53 (1994)) some problems concerning this property are raised.
For instance it is asked whether there exists some almost periodic word
$\alpha$ such that $Sub(\alpha)$, the set of its finite factors, is
context-free non-regular.
We answer negatively (even in a stronger form) this question, as well as
discussing other related topics.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 16:45:36 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Istrate",
"Gabriel",
""
]
] |
new_dataset
| 0.989642 |
2211.14633
|
Congxi Xiao
|
Congxi Xiao, Jingbo Zhou, Jizhou Huang, Hengshu Zhu, Tong Xu, Dejing
Dou, Hui Xiong
|
A Contextual Master-Slave Framework on Urban Region Graph for Urban
Village Detection
| null | null | null | null |
cs.LG cs.CV cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Urban villages (UVs) refer to underdeveloped informal settlements that fall
behind the rapid urbanization of a city. Since there are high levels of social
inequality and social risks in these UVs, it is critical for city managers to
discover all UVs for making appropriate renovation policies. Existing
approaches to detecting UVs are labor-intensive or have not fully addressed the
unique challenges in UV detection such as the scarcity of labeled UVs and the
diverse urban patterns in different regions. To this end, we first build an
urban region graph (URG) to model the urban area in a hierarchically structured
way. Then, we design a novel contextual master-slave framework to effectively
detect urban villages from the URG. The core idea of such a framework is to
first pre-train a basis (or master) model over the URG, and then to
adaptively derive specific (or slave) models from the basis model for different
regions. The proposed framework can learn to balance the generality and
specificity for UV detection in an urban area. Finally, we conduct extensive
experiments in three cities to demonstrate the effectiveness of our approach.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 18:17:39 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Xiao",
"Congxi",
""
],
[
"Zhou",
"Jingbo",
""
],
[
"Huang",
"Jizhou",
""
],
[
"Zhu",
"Hengshu",
""
],
[
"Xu",
"Tong",
""
],
[
"Dou",
"Dejing",
""
],
[
"Xiong",
"Hui",
""
]
] |
new_dataset
| 0.998249 |
2211.14642
|
Moses Ike
|
Moses Ike, Kandy Phan, Keaton Sadoski, Romuald Valme, Wenke Lee
|
SCAPHY: Detecting Modern ICS Attacks by Correlating Behaviors in SCADA
and PHYsical
|
IEEE Security and Privacy 2023
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Modern Industrial Control Systems (ICS) attacks evade existing tools by using
knowledge of ICS processes to blend their activities with benign Supervisory
Control and Data Acquisition (SCADA) operation, causing physical world damages.
We present SCAPHY to detect ICS attacks in SCADA by leveraging the unique
execution phases of SCADA to identify the limited set of legitimate behaviors
to control the physical world in different phases, which differentiates from
attackers' activities. For example, it is typical for SCADA to set up ICS device
objects during initialization, but anomalous during process control. To extract
unique behaviors of SCADA execution phases, SCAPHY first leverages open ICS
conventions to generate a novel physical process dependency and impact graph
(PDIG) to identify disruptive physical states. SCAPHY then uses PDIG to inform
a physical process-aware dynamic analysis, whereby code paths of SCADA
process-control execution are induced to reveal API call behaviors unique to
legitimate process-control phases. Using this established behavior, SCAPHY
selectively monitors attackers' physical world-targeted activities that violate
legitimate process-control behaviors. We evaluated SCAPHY in a U.S. national lab
ICS testbed environment. Using diverse ICS deployment scenarios and attacks
across 4 ICS industries, SCAPHY achieved 95% accuracy and 3.5% false positives
(FP), compared to 47.5% accuracy and 25% FP for existing work. We analyze
SCAPHY's resilience to future attacks where the attacker knows our approach.
|
[
{
"version": "v1",
"created": "Sat, 26 Nov 2022 19:03:35 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Ike",
"Moses",
""
],
[
"Phan",
"Kandy",
""
],
[
"Sadoski",
"Keaton",
""
],
[
"Valme",
"Romuald",
""
],
[
"Lee",
"Wenke",
""
]
] |
new_dataset
| 0.999544 |
2211.14739
|
Meihuizi Jia
|
Meihuizi Jia, Lei Shen, Xin Shen, Lejian Liao, Meng Chen, Xiaodong He,
Zhendong Chen, Jiaqi Li
|
MNER-QG: An End-to-End MRC framework for Multimodal Named Entity
Recognition with Query Grounding
|
13 pages, 6 figures, published to AAAI
| null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal named entity recognition (MNER) is a critical step in information
extraction, which aims to detect entity spans and classify them to
corresponding entity types given a sentence-image pair. Existing methods either
(1) obtain named entities with coarse-grained visual clues from attention
mechanisms, or (2) first detect fine-grained visual regions with toolkits and
then recognize named entities. However, they suffer from improper alignment
between entity types and visual regions or from error propagation in the
two-stage manner, which ultimately introduces irrelevant visual information into texts. In this
paper, we propose a novel end-to-end framework named MNER-QG that can
simultaneously perform MRC-based multimodal named entity recognition and query
grounding. Specifically, with the assistance of queries, MNER-QG can provide
prior knowledge of entity types and visual regions, and further enhance
representations of both texts and images. To conduct the query grounding task,
we provide manual annotations and weak supervision obtained via
training a highly flexible visual grounding model with transfer learning. We
conduct extensive experiments on two public MNER datasets, Twitter2015 and
Twitter2017. Experimental results show that MNER-QG outperforms the current
state-of-the-art models on the MNER task, and also improves the query grounding
performance.
|
[
{
"version": "v1",
"created": "Sun, 27 Nov 2022 06:10:03 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Jia",
"Meihuizi",
""
],
[
"Shen",
"Lei",
""
],
[
"Shen",
"Xin",
""
],
[
"Liao",
"Lejian",
""
],
[
"Chen",
"Meng",
""
],
[
"He",
"Xiaodong",
""
],
[
"Chen",
"Zhendong",
""
],
[
"Li",
"Jiaqi",
""
]
] |
new_dataset
| 0.9993 |
2211.14835
|
Elad Hirsch
|
Elad Hirsch and Ayellet Tal
|
CLID: Controlled-Length Image Descriptions with Limited Data
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Controllable image captioning models generate human-like image descriptions,
enabling some kind of control over the generated captions. This paper focuses
on controlling the caption length, i.e. a short and concise description or a
long and detailed one. Since existing image captioning datasets contain mostly
short captions, generating long captions is challenging. To address the
shortage of long training examples, we propose to enrich the dataset with
varying-length self-generated captions. These, however, might be of varying
quality and are thus unsuitable for conventional training. We introduce a novel
training strategy that selects the data points to be used at different times
during the training. Our method dramatically improves the length-control
abilities, while exhibiting SoTA performance in terms of caption quality. Our
approach is general and is shown to be applicable also to paragraph generation.
|
[
{
"version": "v1",
"created": "Sun, 27 Nov 2022 14:18:40 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Hirsch",
"Elad",
""
],
[
"Tal",
"Ayellet",
""
]
] |
new_dataset
| 0.999324 |
2211.14882
|
Dimitrios Tyrovolas
|
Prodromos-Vasileios Mekikis, Dimitrios Tyrovolas, Sotiris Tegos,
Alexandros Papadopoulos, Alexandros Pitilakis, Sotiris Ioannidis, Ageliki
Tsiolaridou, Panagiotis Diamantoulakis, Nikolaos Kantartzis, George K.
Karagiannidis, Christos Liaskos
|
Dynamic Programmable Wireless Environment with UAV-mounted Static
Metasurfaces
| null | null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
Reconfigurable intelligent surfaces (RISs) are artificial planar structures
able to offer a unique way of manipulating propagated wireless signals.
Commonly composed of a number of reconfigurable passive cell components and
basic electronic circuits, RISs can almost freely perform a set of wave
modification functionalities, in order to realize programmable wireless
environments (PWEs). However, a more energy-efficient way to realize a PWE is
through dynamically relocating static metasurfaces that perform a unique
functionality. In this paper, we employ a UAV swarm to dynamically deploy a set
of low-cost passive metasurfaces that are able to perform only one
electromagnetic functionality, but with the benefit of requiring no power.
Specifically, the UAV-mounted static metasurfaces are carefully positioned
across the sky to create cascaded channels for improved user service and
security hardening. The performance evaluation results, based on
|
[
{
"version": "v1",
"created": "Sun, 27 Nov 2022 16:38:32 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Mekikis",
"Prodromos-Vasileios",
""
],
[
"Tyrovolas",
"Dimitrios",
""
],
[
"Tegos",
"Sotiris",
""
],
[
"Papadopoulos",
"Alexandros",
""
],
[
"Pitilakis",
"Alexandros",
""
],
[
"Ioannidis",
"Sotiris",
""
],
[
"Tsiolaridou",
"Ageliki",
""
],
[
"Diamantoulakis",
"Panagiotis",
""
],
[
"Kantartzis",
"Nikolaos",
""
],
[
"Karagiannidis",
"George K.",
""
],
[
"Liaskos",
"Christos",
""
]
] |
new_dataset
| 0.986567 |
2211.14944
|
Luca Valente
|
Luca Valente, Yvan Tortorella, Mattia Sinigaglia, Giuseppe Tagliavini,
Alessandro Capotondi, Luca Benini, Davide Rossi
|
HULK-V: a Heterogeneous Ultra-low-power Linux capable RISC-V SoC
|
This paper has been accepted as full paper at DATE23
https://www.date-conference.com/date-2023-accepted-papers#Regular-Papers
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
IoT applications span a wide range in performance and memory footprint, under
tight cost and power constraints. High-end applications rely on power-hungry
Systems-on-Chip (SoCs) featuring powerful processors, large LPDDR/DDR3/4/5
memories, and supporting full-fledged Operating Systems (OS). On the contrary,
low-end applications typically rely on Ultra-Low-Power microcontrollers with a
"close to metal" software environment and simple micro-kernel-based runtimes.
Emerging applications and trends of IoT require the "best of both worlds":
cheap and low-power SoC systems with a well-known and agile software
environment based on full-fledged OS (e.g., Linux), coupled with extreme energy
efficiency and parallel digital signal processing capabilities. We present
HULK-V: an open-source Heterogeneous Linux-capable RISC-V-based SoC coupling a
64-bit RISC-V processor with an 8-core Programmable Multi-Core Accelerator
(PMCA), delivering up to 13.8 GOps, up to 157 GOps/W and accelerating the
execution of complex DSP and ML tasks by up to 112x over the host processor.
HULK-V leverages a lightweight, fully digital memory hierarchy based on
HyperRAM IoT DRAM that exposes up to 512 MB of DRAM memory to the host CPU.
Featuring HyperRAMs, HULK-V doubles the energy efficiency without significant
performance loss compared to featuring power-hungry LPDDR memories, requiring
expensive and large mixed-signal PHYs. HULK-V, implemented in Global Foundries
22nm FDX technology, is a fully digital ultra-low-cost SoC running a 64-bit
Linux software stack with OpenMP host-to-PMCA offload within a power envelope
of just 250 mW.
|
[
{
"version": "v1",
"created": "Sun, 27 Nov 2022 21:34:03 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Valente",
"Luca",
""
],
[
"Tortorella",
"Yvan",
""
],
[
"Sinigaglia",
"Mattia",
""
],
[
"Tagliavini",
"Giuseppe",
""
],
[
"Capotondi",
"Alessandro",
""
],
[
"Benini",
"Luca",
""
],
[
"Rossi",
"Davide",
""
]
] |
new_dataset
| 0.957951 |
2211.15093
|
Jiawei Zhang
|
Jiawei Zhang
|
Robot Kinematics: Motion, Kinematics and Dynamics
|
56 pages, 18 figures
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This is a follow-up tutorial article to our previous article entitled "Robot
Basics: Representation, Rotation and Velocity". For a better understanding of the
topics covered in this article, we recommend that readers first read our
previous tutorial article on robot basics. Specifically, in this article, we
will cover some more advanced topics on robot kinematics, including robot
motion, forward kinematics, inverse kinematics, and robot dynamics. The
topics, terminologies and notations introduced in the previous article will be
used directly without re-introduction. As in the previous article, math and
formulas will be heavily used here as well (we hope the readers are well
prepared for the upcoming math bomb). After reading this article, readers
should have a deeper understanding of robot motion, kinematics and dynamics.
Some more advanced topics on robot control will be introduced in subsequent
tutorial articles.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 06:42:14 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Zhang",
"Jiawei",
""
]
] |
new_dataset
| 0.983272 |
2211.15234
|
Ruitian Wu
|
Jingwei Li, Ruitian Wu, Tzu-liang Huang, Zian Pan, Ming-chun Huang
|
Shoupa: An AI System for Early Diagnosis of Parkinson's Disease
|
2 pages, 1 figure, accepted by IEEE/ACM CHASE 2022 (Poster
Presentation)
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Parkinson's Disease (PD) is a progressive nervous system disorder that has
affected more than 5.8 million people, especially the elderly. Due to the
complexity of its symptoms and its similarity to other neurological disorders,
early detection requires the involvement of neurologists or PD specialists, who
are not accessible to most elderly people. Therefore, we integrate smart mobile
devices with AI technologies. In this paper, we introduce the framework of our
PD early-detection system, which combines different tasks evaluating
both motor and non-motor symptoms. With the developed model, we help users
detect PD promptly in non-clinical settings and identify their most severe
symptoms. The results are expected to be further used for PD rehabilitation
guidance and detection of other neurological disorders.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 11:32:17 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Li",
"Jingwei",
""
],
[
"Wu",
"Ruitian",
""
],
[
"Huang",
"Tzu-liang",
""
],
[
"Pan",
"Zian",
""
],
[
"Huang",
"Ming-chun",
""
]
] |
new_dataset
| 0.999793 |
2211.15262
|
Idris Abdulmumin
|
Saminu Mohammad Aliyu, Gregory Maksha Wajiga, Muhammad Murtala,
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Ibrahim Said Ahmad
|
HERDPhobia: A Dataset for Hate Speech against Fulani in Nigeria
|
To appear in the Proceedings of the Sixth Workshop on Widening
Natural Language Processing at EMNLP2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Social media platforms allow users to freely share their opinions about
issues or anything they feel like. However, they also make it easier to spread
hate and abusive content. The Fulani ethnic group has been the victim of this
unfortunate phenomenon. This paper introduces HERDPhobia, the first
annotated dataset of hate speech against Fulani herders in Nigeria, in three
languages: English, Nigerian-Pidgin, and Hausa. We present a benchmark
experiment using pre-trained language models to classify the tweets as either
hateful or non-hateful. Our experiment shows that the XLM-T model provides
better performance, with a 99.83% weighted F1 score. We released the dataset at
https://github.com/hausanlp/HERDPhobia for further research.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 12:30:11 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Aliyu",
"Saminu Mohammad",
""
],
[
"Wajiga",
"Gregory Maksha",
""
],
[
"Murtala",
"Muhammad",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Abdulmumin",
"Idris",
""
],
[
"Ahmad",
"Ibrahim Said",
""
]
] |
new_dataset
| 0.999812 |
2211.15287
|
Yagmur Yigit
|
Yagmur Yigit (1), Khayal Huseynov (1)(2), Hamed Ahmadi (3) and Berk
Canberk (4)(5) ((1) Department of Computer Engineering, Istanbul Technical
University, Turkey, (2) BTS Group, Istanbul, Turkey, (3) Department of
Electronic Engineering, University of York, United Kingdom, (4) School of
Computing, Engineering and The Build Environment, Edinburgh Napier
University, United Kingdom, (5) Department of Artificial Intelligence and
Data Engineering, Istanbul Technical University, Turkey)
|
YA-DA: YAng-Based DAta Model for Fine-Grained IIoT Air Quality
Monitoring
|
This paper has been accepted at the 4th Workshop on Future of
Wireless Access and Sensing for Industrial IoT (FUTUREIIOT) in IEEE Global
Communications Conference (IEEE GLOBECOM) 2022
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the development of industrialization, air pollution is steadily on
the rise, since both industrial and daily activities generate massive amounts
of air pollutants. Because decreasing air pollution is critical for citizens'
health and well-being, air pollution monitoring is becoming an essential topic,
and Industrial Internet of Things (IIoT) research focuses on this crucial area.
Several attempts at air pollution monitoring already exist. However, none of
them improves the performance of IoT data collection to the desired level.
Inspired by the Yet Another Next Generation (YANG) data model, we
propose a YAng-based DAta model (YA-DA) to improve the performance of IIoT data
collection. Moreover, by taking advantage of digital twin (DT) technology, we
propose a DT-enabled fine-grained IIoT air quality monitoring system using
YA-DA. As a result, DT synchronization becomes fine-grained. In turn, we
improve the performance of IIoT data collection resulting in lower round-trip
time (RTT), higher DT synchronization, and lower DT latency.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 13:18:07 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Yigit",
"Yagmur",
""
],
[
"Huseynov",
"Khayal",
""
],
[
"Ahmadi",
"Hamed",
""
],
[
"Canberk",
"Berk",
""
]
] |
new_dataset
| 0.999311 |
2211.15346
|
Enrica Tricomi
|
Enrica Tricomi, Mirko Mossini, Francesco Missiroli, Nicola Lotti,
Michele Xiloyannis, Loris Roveda, and Lorenzo Masia
|
Environment-based Assistance Modulation for a Hip Exosuit via Computer
Vision
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Just as vision plays a fundamental role in guiding adaptive locomotion in
humans, computer vision may bring substantial improvements to the control
strategy of a walking assistive technology by enabling environment-based
assistance modulation. In this work, we developed a hip
exosuit controller able to distinguish among three different walking terrains
through the use of an RGB camera and to adapt the assistance accordingly. The
system was tested with seven healthy participants walking throughout an
overground path comprising staircases and level ground. Subjects performed
the task with the exosuit disabled (Exo Off), with a constant assistance profile
(Vision Off), and with assistance modulation (Vision On). Our results showed
that the controller was able to promptly classify in real-time the path in
front of the user with an overall accuracy per class above the 85%, and to
perform assistance modulation accordingly. Evaluation related to the effects on
the user showed that Vision On was able to outperform the other two conditions:
we obtained significantly higher metabolic savings than Exo Off, with a peak of
about -20% when climbing up the staircase and about -16% in the overall path,
and higher savings than Vision Off when ascending or descending stairs. Such
advancements in the field may represent a step forward for the adoption of
lightweight walking assistive technologies in real-life scenarios.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 14:26:42 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Tricomi",
"Enrica",
""
],
[
"Mossini",
"Mirko",
""
],
[
"Missiroli",
"Francesco",
""
],
[
"Lotti",
"Nicola",
""
],
[
"Xiloyannis",
"Michele",
""
],
[
"Roveda",
"Loris",
""
],
[
"Masia",
"Lorenzo",
""
]
] |
new_dataset
| 0.989927 |
2211.15395
|
Haotian Cui
|
Haotian Cui, Chenglong Wang, Junjie Huang, Jeevana Priya Inala, Todd
Mytkowicz, Bo Wang, Jianfeng Gao, Nan Duan
|
CodeExp: Explanatory Code Document Generation
|
Accepted in Findings of EMNLP 2022
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing models that can automatically generate detailed code explanations
can greatly benefit software maintenance and programming education. However,
existing code-to-text generation models often produce only high-level summaries
of code that do not capture implementation-level choices essential for these
scenarios. To fill in this gap, we propose the code explanation generation
task. We first conducted a human study to identify the criteria for
high-quality explanatory docstring for code. Based on that, we collected and
refined a large-scale code docstring corpus and formulated automatic evaluation
metrics that best match human assessments. Finally, we present a multi-stage
fine-tuning strategy and baseline models for the task. Our experiments show
that (1) our refined training dataset lets models achieve better performance in
the explanation generation tasks compared to larger unrefined data (15x
larger), and (2) fine-tuned models can generate well-structured long docstrings
comparable to human-written ones. We envision our training dataset,
human-evaluation protocol, recommended metrics, and fine-tuning strategy can
boost future code explanation research. The code and annotated data are
available at https://github.com/subercui/CodeExp.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 18:05:44 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Cui",
"Haotian",
""
],
[
"Wang",
"Chenglong",
""
],
[
"Huang",
"Junjie",
""
],
[
"Inala",
"Jeevana Priya",
""
],
[
"Mytkowicz",
"Todd",
""
],
[
"Wang",
"Bo",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Duan",
"Nan",
""
]
] |
new_dataset
| 0.995321 |
2211.15402
|
Jiangyong Huang
|
Jiangyong Huang, William Yicheng Zhu, Baoxiong Jia, Zan Wang, Xiaojian
Ma, Qing Li, Siyuan Huang
|
Perceive, Ground, Reason, and Act: A Benchmark for General-purpose
Visual Representation
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current computer vision models, unlike the human visual system, cannot yet
achieve general-purpose visual understanding. Existing efforts to create a
general vision model are limited in the scope of assessed tasks and offer no
overarching framework to perform them holistically. We present a new
comprehensive benchmark, General-purpose Visual Understanding Evaluation
(G-VUE), covering the full spectrum of visual cognitive abilities with four
functional domains $\unicode{x2014}$ Perceive, Ground, Reason, and Act. The
four domains are embodied in 11 carefully curated tasks, from 3D reconstruction
to visual reasoning and manipulation. Along with the benchmark, we provide a
general encoder-decoder framework to allow for the evaluation of arbitrary
visual representation on all 11 tasks. We evaluate various pre-trained visual
representations with our framework and observe that (1) Transformer-based
visual backbones generally outperform CNN-based backbones on G-VUE, and (2) visual
representations from vision-language pre-training are superior to those with
vision-only pre-training across visual tasks. With G-VUE, we provide a holistic
evaluation standard to motivate research toward building general-purpose visual
systems via obtaining more general-purpose visual representations.
|
[
{
"version": "v1",
"created": "Mon, 28 Nov 2022 15:06:07 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Huang",
"Jiangyong",
""
],
[
"Zhu",
"William Yicheng",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Wang",
"Zan",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Li",
"Qing",
""
],
[
"Huang",
"Siyuan",
""
]
] |
new_dataset
| 0.99866 |
2211.15405
|
Jonah Burgess
|
Jonah Burgess
|
Malware and Exploits on the Dark Web
|
5 pages, 0 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, the darknet has become the key location for the distribution
of malware and exploits. We have seen scenarios where software vulnerabilities
are disclosed by vendors and, shortly after, operational exploits become
available on darknet forums and marketplaces. Many marketplace vendors offer
zero-day exploits that have not yet been discovered or disclosed. This trend
has led to security companies offering darknet analysis services to detect new
exploits and malware, providing proactive threat intelligence. This paper
presents information on the scale of malware distribution, the trends of
malware types offered, the methods for discovering new exploits and the
effectiveness of darknet analysis in detecting malware at the earliest possible
stage.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 12:44:44 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Burgess",
"Jonah",
""
]
] |
new_dataset
| 0.999487 |
2211.15424
|
Ankit Pal
|
Ankit Pal
|
DeepParliament: A Legal domain Benchmark & Dataset for Parliament Bills
Prediction
|
Accepted at EMNLP 2022 (UM-IoS)
| null | null | null |
cs.CL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces DeepParliament, a legal domain Benchmark Dataset that
gathers bill documents and metadata and performs various bill status
classification tasks. The proposed dataset text covers a broad range of bills
from 1986 to the present and contains richer information on parliament bill
content. Data collection, detailed statistics and analyses are provided in the
paper. Moreover, we experimented with different types of models, ranging from
RNNs to pretrained models, and reported the results. We propose two new
benchmarks: Binary and Multi-Class Bill Status classification. Models developed
for bill documents and relevant supportive tasks may assist Members of
Parliament (MPs), presidents, and other legal practitioners. It will help
review or prioritise bills, thus speeding up the billing process, improving the
quality of decisions and reducing the time consumption in both houses.
Considering that the foundation of the country's democracy is Parliament and
state legislatures, we anticipate that our research will be an essential
addition to the Legal NLP community. This work will be the first to present a
Parliament bill prediction task. In order to improve the accessibility of legal
AI resources and promote reproducibility, we have made our code and dataset
publicly accessible at github.com/monk1337/DeepParliament
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 04:55:32 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Pal",
"Ankit",
""
]
] |
new_dataset
| 0.999802 |
2211.15425
|
Lei Ma
|
Zhongyu Fang, Aoyun He, Qihui Yu, Baopeng Gao, Weiping Ding, Tong
Zhang, Lei Ma
|
FAF: A novel multimodal emotion recognition approach integrating face,
body and text
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Multimodal emotion analysis achieves better emotion recognition by drawing on
more comprehensive emotional cues and multimodal emotion datasets. In this
paper, we developed a large multimodal emotion dataset, named the "HED" dataset,
to facilitate the emotion recognition task, and accordingly propose a multimodal
emotion recognition method. To improve recognition accuracy, a "Feature After
Feature" framework was used to extract crucial emotional information from the
aligned face, body and text samples. We employ various benchmarks to evaluate
the "HED" dataset and compare their performance with that of our method. The results show
that the five-class classification accuracy of the proposed multimodal fusion method
is about 83.75%, and the performance is improved by 1.83%, 9.38%, and 21.62%
respectively compared with that of the individual modalities. The complementarity
between the channels is effectively used to improve the performance of emotion
recognition. We have also established a multimodal online emotion prediction
platform, aiming to provide free emotion prediction to more users.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 14:43:36 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Fang",
"Zhongyu",
""
],
[
"He",
"Aoyun",
""
],
[
"Yu",
"Qihui",
""
],
[
"Gao",
"Baopeng",
""
],
[
"Ding",
"Weiping",
""
],
[
"Zhang",
"Tong",
""
],
[
"Ma",
"Lei",
""
]
] |
new_dataset
| 0.999604 |
2211.15481
|
Facundo Quiroga
|
Pedro Dal Bianco and Gast\'on R\'ios and Franco Ronchetti and Facundo
Quiroga and Oscar Stanchi and Waldo Hasperu\'e and Alejandro Rosete
|
LSA-T: The first continuous Argentinian Sign Language dataset for Sign
Language Translation
|
Accepted at IBERAMIA 2022. Dataset download info at
https://github.com/midusi/LSA-T
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sign language translation (SLT) is an active field of study that encompasses
human-computer interaction, computer vision, natural language processing and
machine learning. Progress in this field could lead to higher levels of
integration of deaf people. This paper presents, to the best of our knowledge,
the first continuous Argentinian Sign Language (LSA) dataset. It contains
14,880 sentence level videos of LSA extracted from the CN Sordos YouTube
channel with labels and keypoints annotations for each signer. We also present
a method for inferring the active signer, a detailed analysis of the
characteristics of the dataset, a visualization tool to explore the dataset and
a neural SLT model to serve as baseline for future experiments.
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 14:46:44 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Bianco",
"Pedro Dal",
""
],
[
"Ríos",
"Gastón",
""
],
[
"Ronchetti",
"Franco",
""
],
[
"Quiroga",
"Facundo",
""
],
[
"Stanchi",
"Oscar",
""
],
[
"Hasperué",
"Waldo",
""
],
[
"Rosete",
"Alejandro",
""
]
] |
new_dataset
| 0.999594 |
2211.15502
|
Yuezhi Yang
|
Yuezhi Yang, Zhiming Cui, Changjian Li, Wenping Wang
|
ToothInpaintor: Tooth Inpainting from Partial 3D Dental Model and 2D
Panoramic Image
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In orthodontic treatment, a full tooth model consisting of both the crown and
root is indispensable in making the treatment plan. However, acquiring tooth
root information to obtain the full tooth model from CBCT images is sometimes
restricted due to the massive radiation of CBCT scanning. Thus, reconstructing
the full tooth shape from the ready-to-use input, e.g., the partial intra-oral
scan and the 2D panoramic image, is an applicable and valuable solution. In
this paper, we propose a neural network, called ToothInpaintor, that takes as
input a partial 3D dental model and a 2D panoramic image and reconstructs the
full tooth model with high-quality root(s). Technically, we utilize the
implicit representation for both the 3D and 2D inputs, and learn a latent space
of the full tooth shapes. At test time, given an input, we successfully project
it to the learned latent space via neural optimization to obtain the full tooth
model conditioned on the input. To help find the robust projection, a novel
adversarial learning module is exploited in our pipeline. We extensively
evaluate our method on a dataset collected from real-world clinics. The
evaluation, comparison, and comprehensive ablation studies demonstrate that our
approach produces accurate complete tooth models robustly and outperforms the
state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 25 Nov 2022 18:15:22 GMT"
}
] | 2022-11-29T00:00:00 |
[
[
"Yang",
"Yuezhi",
""
],
[
"Cui",
"Zhiming",
""
],
[
"Li",
"Changjian",
""
],
[
"Wang",
"Wenping",
""
]
] |
new_dataset
| 0.999118 |