id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2304.12592
|
Han Wang
|
Han Wang, Jiayuan Zhang, Lipeng Wan, Xingyu Chen, Xuguang Lan, Nanning
Zheng
|
MMRDN: Consistent Representation for Multi-View Manipulation
Relationship Detection in Object-Stacked Scenes
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Manipulation relationship detection (MRD) aims to guide the robot to grasp
objects in the right order, which is important to ensure the safety and
reliability of grasping in object-stacked scenes. Previous works infer
manipulation relationships with deep neural networks trained on data collected
from a single predefined view, which limits their robustness to visual
dislocation in unstructured environments. Multi-view data provide more
comprehensive spatial information, but multi-view MRD faces the challenge of
domain shift. In this paper, we propose a novel multi-view fusion framework,
namely the multi-view MRD network (MMRDN), which is trained on 2D and 3D
multi-view data. We project the 2D data from different views into a common
hidden space and fit the embeddings with a set of Von-Mises-Fisher
distributions to learn consistent representations. In addition, taking
advantage of position information within the 3D data, we select a set of $K$
Maximum Vertical Neighbors (KMVN) points from the point cloud of each object
pair, which encodes the relative position of the two objects. Finally, the
features of multi-view 2D and 3D data are concatenated to predict the pairwise
relationship of objects. Experimental results on the challenging REGRAD dataset
show that MMRDN outperforms state-of-the-art methods in multi-view MRD tasks.
The results also demonstrate that our model, trained on synthetic data, is
capable of transferring to real-world scenarios.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 05:55:29 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Wang",
"Han",
""
],
[
"Zhang",
"Jiayuan",
""
],
[
"Wan",
"Lipeng",
""
],
[
"Chen",
"Xingyu",
""
],
[
"Lan",
"Xuguang",
""
],
[
"Zheng",
"Nanning",
""
]
] |
new_dataset
| 0.990416 |
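
A minimal sketch of the KMVN selection described in the MMRDN abstract above, assuming the merged point cloud of an object pair is an (N, 3) array with z as the vertical axis (illustrative only, not the authors' implementation):

```python
import numpy as np

def k_max_vertical_neighbors(pair_cloud: np.ndarray, k: int) -> np.ndarray:
    """Select the K points with the largest vertical (z) coordinates
    from the merged point cloud of an object pair, shape (N, 3)."""
    order = np.argsort(pair_cloud[:, 2])[::-1]  # sort by z, descending
    return pair_cloud[order[:k]]

# Example: encode the relative position of two stacked objects.
rng = np.random.default_rng(0)
obj_a = rng.uniform(0.0, 1.0, size=(500, 3))
obj_b = rng.uniform(0.0, 1.0, size=(500, 3)) + np.array([0.0, 0.0, 0.5])
kmvn = k_max_vertical_neighbors(np.vstack([obj_a, obj_b]), k=64)
print(kmvn.shape)  # (64, 3)
```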
2304.12636
|
Nolwenn Bernard
|
Nolwenn Bernard and Krisztian Balog
|
MG-ShopDial: A Multi-Goal Conversational Dataset for e-Commerce
|
Proceedings of the 46th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR '23), July 23--27,
2023, Taipei, Taiwan
| null |
10.1145/3539618.3591883
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conversational systems can be particularly effective in supporting complex
information seeking scenarios with evolving information needs. Finding the
right products on an e-commerce platform is one such scenario, where a
conversational agent would need to be able to provide search capabilities over
the item catalog, understand and make recommendations based on the user's
preferences, and answer a range of questions related to items and their usage.
Yet, existing conversational datasets do not fully support the idea of mixing
different conversational goals (i.e., search, recommendation, and question
answering) and instead focus on a single goal. To address this, we introduce
MG-ShopDial: a dataset of conversations mixing different goals in the domain of
e-commerce. Specifically, we make the following contributions. First, we
develop a coached human-human data collection protocol where each dialogue
participant is given a set of instructions, instead of a specific script or
answers to choose from. Second, we implement a data collection tool to
facilitate the collection of multi-goal conversations via a web chat interface,
using the above protocol. Third, we create the MG-ShopDial collection, which
contains 64 high-quality dialogues with a total of 2,196 utterances for
e-commerce scenarios of varying complexity. The dataset is additionally
annotated with both intents and goals on the utterance level. Finally, we
present an analysis of this dataset and identify multi-goal conversational
patterns.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 08:07:21 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Bernard",
"Nolwenn",
""
],
[
"Balog",
"Krisztian",
""
]
] |
new_dataset
| 0.999161 |
2304.12650
|
Haitao Li
|
Jia Chen, Haitao Li, Weihang Su, Qingyao Ai, Yiqun Liu
|
THUIR at WSDM Cup 2023 Task 1: Unbiased Learning to Rank
|
3 pages
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the approaches we have used to participate in the WSDM
Cup 2023 Task 1: Unbiased Learning to Rank. In brief, we have attempted a
combination of both traditional IR models and transformer-based cross-encoder
architectures. To further enhance the ranking performance, we also considered a
series of features for learning to rank. As a result, we won 2nd place on the
final leaderboard.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 08:32:27 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Chen",
"Jia",
""
],
[
"Li",
"Haitao",
""
],
[
"Su",
"Weihang",
""
],
[
"Ai",
"Qingyao",
""
],
[
"Liu",
"Yiqun",
""
]
] |
new_dataset
| 0.99518 |
2304.12700
|
Mark Kennedy
|
Mark Thomas Kennedy, Nelson Phillips
|
The Participation Game
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by Turing's famous "imitation game" and recent advances in
generative pre-trained transformers, we pose the participation game to point to
a new frontier in AI evolution where machines will join with humans as
participants in social construction processes. The participation game is a
creative, playful competition that calls for applying, bending, and stretching
the categories humans use to make sense of and order their worlds. After
defining the game and giving reasons for moving beyond imitation as a test of
AI, we highlight parallels between the participation game and processes of
social construction, a hallmark of human intelligence. We then discuss
implications for fundamental constructs of societies and options for
governance.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 10:07:13 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Kennedy",
"Mark Thomas",
""
],
[
"Phillips",
"Nelson",
""
]
] |
new_dataset
| 0.981182 |
2304.12704
|
Haolin Zhuang
|
Haolin Zhuang, Shun Lei, Long Xiao, Weiqin Li, Liyang Chen, Sicheng
Yang, Zhiyong Wu, Shiyin Kang, Helen Meng
|
GTN-Bailando: Genre Consistent Long-Term 3D Dance Generation based on
Pre-trained Genre Token Network
|
Accepted by ICASSP2023.Demo page:
https://im1eon.github.io/ICASSP23-GTNB-DG/
| null | null | null |
cs.SD cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Music-driven 3D dance generation has become an intensive research topic in
recent years with great potential for real-world applications. Most existing
methods lack the consideration of genre, which results in genre inconsistency
in the generated dance movements. In addition, the correlation between the
dance genre and the music has not been investigated. To address these issues,
we propose a genre-consistent dance generation framework, GTN-Bailando. First,
we propose the Genre Token Network (GTN), which infers the genre from music to
enhance the genre consistency of long-term dance generation. Second, to improve
the generalization capability of the model, the strategy of pre-training and
fine-tuning is adopted. Experimental results on the AIST++ dataset show that the
proposed dance generation framework outperforms state-of-the-art methods in
terms of motion quality and genre consistency.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 10:17:29 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Zhuang",
"Haolin",
""
],
[
"Lei",
"Shun",
""
],
[
"Xiao",
"Long",
""
],
[
"Li",
"Weiqin",
""
],
[
"Chen",
"Liyang",
""
],
[
"Yang",
"Sicheng",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Kang",
"Shiyin",
""
],
[
"Meng",
"Helen",
""
]
] |
new_dataset
| 0.993933 |
2304.12781
|
Stephanie Jean-Daubias
|
St\'ephanie Jean-Daubias (LIRIS, TWEAK)
|
SAPHIR: A Pluricultural Authoring Tool to Produce Resources in Support
of Education for Sustainable Development
| null |
CSEDU 2023, Apr 2023, Prague, Czech Republic
| null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present SAPHIR, a multilingual authoring tool producing a
Progressive Web App, usable on computers, tablets, and smartphones, online or
offline. We present our design process, the architecture of the system, the
model on which it is based, and its main parts: SAPHIR itself is the main
application, offering activities through which children learn and play; MINE is
the authoring tool used by pedagogical designers and resource translators to
create and translate resources without requiring any programming skills; TAILLE
is dedicated to teachers, to whom it provides educational explanations for
using SAPHIR with their learners. The different parts were used with both
pedagogical designers and students.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 13:04:46 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Jean-Daubias",
"Stéphanie",
"",
"LIRIS, TWEAK"
]
] |
new_dataset
| 0.996749 |
2304.12811
|
Katie Seaborn
|
Shun Hidaka, Sota Kobuki, Mizuki Watanabe, Katie Seaborn
|
Linguistic Dead-Ends and Alphabet Soup: Finding Dark Patterns in
Japanese Apps
|
13 pages
|
In Proceedings of the 2023 CHI Conference on Human Factors in
Computing Systems (CHI '23). Association for Computing Machinery, New York,
NY, USA, Article 3, 1-13
|
10.1145/3544548.3580942
| null |
cs.HC cs.CY cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Dark patterns are deceptive and malicious properties of user interfaces that
lead the end-user to do something different from intended or expected. While
now a key topic in critical computing, most work has been conducted in Western
contexts. Japan, with its booming app market, is a relatively uncharted context
that offers culturally- and linguistically-sensitive differences in design
standards, contexts of use, values, and language, all of which could influence
the presence and expression of dark patterns. In this work, we analyzed 200
popular mobile apps in the Japanese market. We found that most apps had dark
patterns, with an average of 3.9 per app. We also identified a new class of
dark pattern: "Linguistic Dead-Ends" in the forms of "Untranslation" and
"Alphabet Soup." We outline the implications for design and research practice,
especially for future cross-cultural research on dark patterns.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 08:22:32 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Hidaka",
"Shun",
""
],
[
"Kobuki",
"Sota",
""
],
[
"Watanabe",
"Mizuki",
""
],
[
"Seaborn",
"Katie",
""
]
] |
new_dataset
| 0.994385 |
2304.12904
|
Carlos Lassance
|
Carlos Lassance, St\'ephane Clinchant
|
The tale of two MS MARCO -- and their unfair comparisons
|
Short paper accepted at SIGIR 2023
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The MS MARCO-passage dataset has been the main large-scale dataset open to
the IR community and it has fostered successfully the development of novel
neural retrieval models over the years. But, it turns out that two different
corpora of MS MARCO are used in the literature, the official one and a second
one where passages were augmented with titles, mostly due to the introduction
of the Tevatron code base. However, the addition of titles actually leaks
relevance information, while breaking the original guidelines of the MS
MARCO-passage dataset. In this work, we investigate the differences between the
two corpora and demonstrate empirically that they make a significant difference
when evaluating a new method. In other words, we show that if a paper does not
properly report which version is used, fairly reproducing its results is
basically impossible. Furthermore, given the current state of reviewing, where
monitoring state-of-the-art results is of great importance, having two
different versions of a dataset is a serious problem. This paper therefore
reports on the importance of this issue so that researchers are aware of the
problem and report their results appropriately.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 15:15:49 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Lassance",
"Carlos",
""
],
[
"Clinchant",
"Stéphane",
""
]
] |
new_dataset
| 0.961354 |
2304.12931
|
Victor Jung
|
Victor J.B. Jung, Arne Symons, Linyan Mei, Marian Verhelst, Luca
Benini
|
SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN
Accelerators
|
5 pages, 6 figures, open-source at
https://github.com/ZigZag-Project/zigzag
| null | null | null |
cs.AR cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To meet the growing need for computational power for DNNs, multiple
specialized hardware architectures have been proposed. Each DNN layer should be
mapped onto the hardware with the most efficient schedule; however, SotA
schedulers struggle to consistently provide optimum schedules in a reasonable
time across all DNN-HW combinations.
This paper proposes SALSA, a fast dual-engine scheduler to generate optimal
execution schedules for both even and uneven mapping. We introduce a new
strategy, combining exhaustive search with simulated annealing to address the
dynamic nature of the loop ordering design space size across layers. SALSA is
extensively benchmarked against two SotA schedulers, LOMA and Timeloop, on 5
different DNNs; on average, SALSA finds schedules with 11.9% and 7.6% lower
energy while speeding up the search by 1.7x and 24x compared to LOMA and
Timeloop, respectively.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 12:00:08 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Jung",
"Victor J. B.",
""
],
[
"Symons",
"Arne",
""
],
[
"Mei",
"Linyan",
""
],
[
"Verhelst",
"Marian",
""
],
[
"Benini",
"Luca",
""
]
] |
new_dataset
| 0.987231 |
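
A minimal sketch of the simulated-annealing half of a dual-engine loop-ordering search like SALSA's (illustrative only; `cost` is a stand-in for a real hardware cost model such as ZigZag's, and the toy cost and cooling schedule are assumptions):

```python
import math
import random

def simulated_annealing(loops, cost, steps=10_000, t0=1.0, alpha=0.999):
    """Search loop orderings by swapping two loops per step, accepting
    worse orderings with probability exp(-delta / T)."""
    order = list(loops)
    best = order[:]
    t = t0
    for _ in range(steps):
        cand = order[:]
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cost(order)
        if delta < 0 or random.random() < math.exp(-delta / t):
            order = cand
        if cost(order) < cost(best):
            best = order[:]
        t *= alpha  # geometric cooling schedule
    return best

# Toy cost: prefer orderings that place the channel loop 'C' early.
toy_cost = lambda o: o.index("C")
print(simulated_annealing(["N", "K", "C", "OX", "OY"], toy_cost))
```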
2304.12979
|
Ruoyu Xie
|
Md Mahfuz Ibn Alam, Ruoyu Xie, Fahim Faisal, Antonios Anastasopoulos
|
GMNLP at SemEval-2023 Task 12: Sentiment Analysis with Phylogeny-Based
Adapters
|
Accepted at SemEval Workshop at ACL 2023
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report describes GMU's sentiment analysis system for the SemEval-2023
shared task AfriSenti-SemEval. We participated in all three sub-tasks:
Monolingual, Multilingual, and Zero-Shot. Our approach uses models initialized
with AfroXLMR-large, a pre-trained multilingual language model trained on
African languages and fine-tuned correspondingly. We also introduce augmented
training data along with original training data. Alongside finetuning, we
perform phylogeny-based adapter tuning to create several models and ensemble
the best models for the final submission. Our system achieves the best F1-score
on track 5: Amharic, with 6.2 points higher F1-score than the second-best
performing system on this track. Overall, our system ranks 5th among the 10
systems participating in all 15 tracks.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 16:39:51 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Alam",
"Md Mahfuz Ibn",
""
],
[
"Xie",
"Ruoyu",
""
],
[
"Faisal",
"Fahim",
""
],
[
"Anastasopoulos",
"Antonios",
""
]
] |
new_dataset
| 0.964572 |
2304.12998
|
Rui Hao
|
Rui Hao, Linmei Hu, Weijian Qi, Qingliu Wu, Yirui Zhang, Liqiang Nie
|
ChatLLM Network: More brains, More intelligence
| null | null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialogue-based language models mark a major milestone in the field of
artificial intelligence, thanks to their impressive ability to interact with
users and to handle a range of challenging tasks prompted by customized
instructions. However, prevalent large-scale dialogue-based language models
like ChatGPT still have room for improvement, such as unstable responses to
questions and the inability to think cooperatively like humans. Considering the
conversational ability of dialogue-based language models and their inherent
randomness in thinking, we propose the ChatLLM network, which allows multiple
dialogue-based language models to interact, provide feedback, and think
together. We design
the network of ChatLLMs based on ChatGPT. Specifically, individual instances of
ChatGPT may possess distinct perspectives towards the same problem, and by
consolidating these diverse viewpoints via a separate ChatGPT, the ChatLLM
network system can conduct decision-making more objectively and
comprehensively. In addition, a language-based feedback mechanism comparable to
backpropagation is devised to update the ChatGPTs within the network.
Experiments on two datasets demonstrate that our network attains significant
improvements in problem-solving, leading to observable progress amongst each
member.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 08:29:14 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Hao",
"Rui",
""
],
[
"Hu",
"Linmei",
""
],
[
"Qi",
"Weijian",
""
],
[
"Wu",
"Qingliu",
""
],
[
"Zhang",
"Yirui",
""
],
[
"Nie",
"Liqiang",
""
]
] |
new_dataset
| 0.989525 |
2304.13015
|
Juan Tapia Dr.
|
Diego Pasmino, Carlos Aravena, Juan Tapia and Christoph Busch
|
Flickr-PAD: New Face High-Resolution Presentation Attack Detection
Database
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Nowadays, Presentation Attack Detection (PAD) is a very active research area.
Several databases in the state of the art have been built from images extracted
from videos. One of the main problems identified is that many databases contain
low-quality, small images that do not represent an operational scenario in a
real remote biometric system, where images are now captured by smartphones at
higher quality and resolution. In order to increase the diversity of image
quality, this work presents a new PAD database based on open-access Flickr
images, called "Flickr-PAD". Our new hand-made database covers high-quality
printed and screen scenarios. This will help researchers to compare new
approaches to existing algorithms on a wider database. The database will be
available to other researchers. A leave-one-out protocol was used to train and
evaluate three PAD models based on MobileNet-V3 (small and large) and
EfficientNet-B0. The best result was reached with MobileNet-V3 large, with a
BPCER10 of 7.08% and a BPCER20 of 11.15%.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 17:42:49 GMT"
}
] | 2023-04-26T00:00:00 |
[
[
"Pasmino",
"Diego",
""
],
[
"Aravena",
"Carlos",
""
],
[
"Tapia",
"Juan",
""
],
[
"Busch",
"Christoph",
""
]
] |
new_dataset
| 0.999485 |
1804.05039
|
Robail Yasrab Dr.
|
Robail Yasrab
|
Mitigating Docker Security Issues
|
13 pages
| null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Docker offers an ecosystem and platform for packaging, distributing, and
managing applications within containers. However, the Docker platform has not
yet matured. Presently, Docker is less secure than virtual machines (VMs) and
most other cloud technologies. The root of Docker's inadequate security is that
containers share the host's Linux kernel, which creates a risk of privilege
escalation. This research outlines some significant security vulnerabilities in
Docker and countermeasures to neutralize such attacks. Security attacks come in
several varieties, including insider and outsider attacks; this research
outlines both types and their mitigation strategies. Taking precautionary
measures can avert major incidents. This research also presents Docker secure
deployment guidelines, which suggest different configurations for deploying
Docker containers in a more secure way.
|
[
{
"version": "v1",
"created": "Fri, 13 Apr 2018 17:10:17 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Aug 2021 19:15:55 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Apr 2023 08:54:33 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Yasrab",
"Robail",
""
]
] |
new_dataset
| 0.973347 |
2007.08368
|
David Orden
|
Bengt J. Nilsson, David Orden, Leonidas Palios, Carlos Seara, Pawe{\l}
\.Zyli\'nski
|
Shortest Watchman Tours in Simple Polygons under Rotated Monotone
Visibility
|
18 pages, 3 figures, an extended abstract will appear in Proceedings
of COCOON 2020 (Lecture Notes in Computer Science)
| null |
10.1007/978-3-030-58150-3_25
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an $O(nrG)$ time algorithm for computing and maintaining a minimum
length shortest watchman tour that sees a simple polygon under monotone
visibility in direction $\theta$, while $\theta$ varies in $[0,180^{\circ})$,
obtaining the directions for the tour to be the shortest one over all tours,
where $n$ is the number of vertices, $r$ is the number of reflex vertices, and
$G\leq r$ is the maximum number of gates of the polygon used at any time in the
algorithm.
|
[
{
"version": "v1",
"created": "Thu, 16 Jul 2020 14:43:59 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Nilsson",
"Bengt J.",
""
],
[
"Orden",
"David",
""
],
[
"Palios",
"Leonidas",
""
],
[
"Seara",
"Carlos",
""
],
[
"Żyliński",
"Paweł",
""
]
] |
new_dataset
| 0.973439 |
2007.10139
|
David Orden
|
David Flores-Pe\~naloza, Mikio Kano, Leonardo Mart\'inez-Sandoval,
David Orden, Javier Tejel, Csaba D. T\'oth, Jorge Urrutia, Birgit Vogtenhuber
|
Rainbow polygons for colored point sets in the plane
|
23 pages, 11 figures, to appear at Discrete Mathematics
| null |
10.1016/j.disc.2021.112406
| null |
cs.CG cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a colored point set in the plane, a perfect rainbow polygon is a simple
polygon that contains exactly one point of each color, either in its interior
or on its boundary. Let $\operatorname{rb-index}(S)$ denote the smallest size
of a perfect rainbow polygon for a colored point set $S$, and let
$\operatorname{rb-index}(k)$ be the maximum of $\operatorname{rb-index}(S)$
over all $k$-colored point sets in general position; that is, every $k$-colored
point set $S$ has a perfect rainbow polygon with at most
$\operatorname{rb-index}(k)$ vertices. In this paper, we determine the values
of $\operatorname{rb-index}(k)$ up to $k=7$, which is the first case where
$\operatorname{rb-index}(k)\neq k$, and we prove that for $k\ge 5$, \[
\frac{40\lfloor (k-1)/2 \rfloor - 8}{19}
\leq\operatorname{rb-index}(k)\leq 10 \left\lfloor\frac{k}{7}\right\rfloor + 11.
\] Furthermore, for a $k$-colored set of $n$ points in the plane in general
position, a perfect rainbow polygon with at most $10 \lfloor\frac{k}{7}\rfloor
+ 11$ vertices can be computed in $O(n\log n)$ time.
|
[
{
"version": "v1",
"created": "Mon, 20 Jul 2020 14:17:26 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Mar 2021 07:02:30 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Flores-Peñaloza",
"David",
""
],
[
"Kano",
"Mikio",
""
],
[
"Martínez-Sandoval",
"Leonardo",
""
],
[
"Orden",
"David",
""
],
[
"Tejel",
"Javier",
""
],
[
"Tóth",
"Csaba D.",
""
],
[
"Urrutia",
"Jorge",
""
],
[
"Vogtenhuber",
"Birgit",
""
]
] |
new_dataset
| 0.998149 |
2201.07373
|
Robert Kent
|
Robert E. Kent
|
FOLE Equivalence
|
48 pages, 16 figures, 14 tables
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
The first-order logical environment FOLE [5] provides a rigorous and
principled approach to distributed interoperable first-order information
systems. FOLE has been developed in two forms: a classification form and an
interpretation form. Two papers represent FOLE in a classification form
corresponding to ideas of the Information Flow Framework [11],[12],[13]: the
first paper [6] provides a foundation that connects elements of the ERA data
model [2] with components of the first-order logical environment FOLE; the
second paper [7] provides a superstructure that extends FOLE to the formalisms
of first-order logic. The formalisms in the classification form of FOLE provide
an appropriate framework for developing the relational calculus. Two other
papers represent FOLE in an interpretation form: the first paper [8] develops
the notion of the FOLE table following the relational model [3]; the second
paper [9] discusses the notion of a FOLE relational database. All the
operations of the relational algebra have been rigorously developed [10] using
the interpretation form of FOLE. The present study demonstrates that the
classification form of FOLE is informationally equivalent to the interpretation
form of FOLE. In general, the FOLE representation uses a conceptual structures
approach, that is completely compatible with formal concept analysis [4] and
information flow [1].
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 01:20:21 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Jan 2022 20:59:58 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Feb 2023 21:04:59 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Mar 2023 19:37:00 GMT"
},
{
"version": "v5",
"created": "Sat, 22 Apr 2023 18:58:32 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Kent",
"Robert E.",
""
]
] |
new_dataset
| 0.997006 |
2202.10240
|
Tsingsong Zhao
|
Qingsong Zhao, Zhipeng Zhou, Yi Wang, Yu Qiao, Cairong Zhao
|
Localformer: a Locality-Preserving Vision Transformer
|
Updating more experiments, and introducing more innovative content
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Zigzag flattening (ZF) is commonly used in computer vision as the default way
to unfold matrices, e.g., in patch slicing for the Vision Transformer (ViT).
However, when decomposing multi-scale-object web images, ZF cannot preserve the
smoothness of local information well. To address this, we draw inspiration from
Space-Filling Curves (SFCs) and investigate Hilbert flattening (HF) as an
alternative for visual models. We provide a comprehensive theoretical
discussion and practical analysis, demonstrating the superiority of HF over
other SFCs in locality and multi-scale robustness. We leverage HF to alleviate
the lack of locality bias in the shallow layers of ViT, which yields our
Localformer. Extensive experiments demonstrate that Localformer
consistently improves performance for several common visual tasks.
Additionally, upon inspection, we find that Localformer enhances representation
learning and length extrapolation abilities of ViT.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 13:53:04 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2022 00:17:08 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Dec 2022 10:58:04 GMT"
},
{
"version": "v4",
"created": "Mon, 30 Jan 2023 02:42:06 GMT"
},
{
"version": "v5",
"created": "Sun, 23 Apr 2023 11:04:22 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Zhao",
"Qingsong",
""
],
[
"Zhou",
"Zhipeng",
""
],
[
"Wang",
"Yi",
""
],
[
"Qiao",
"Yu",
""
],
[
"Zhao",
"Cairong",
""
]
] |
new_dataset
| 0.977965 |
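
For reference, Hilbert flattening can be computed with the classic iterative index-to-coordinate mapping; a small sketch (the standard textbook algorithm, not the authors' code):

```python
def d2xy(n: int, d: int) -> tuple[int, int]:
    """Map a 1D Hilbert-curve index d to (x, y) on an n x n grid
    (n must be a power of two)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Hilbert flattening of an 8x8 patch grid: neighbors in the 1D sequence
# stay neighbors in 2D, unlike zigzag order.
order = [d2xy(8, d) for d in range(64)]
print(order[:4])  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```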
2204.08516
|
Pedro Miguel Sanchez Sanchez
|
Pedro Miguel S\'anchez S\'anchez, Jos\'e Mar\'ia Jorquera Valero,
Alberto Huertas Celdr\'an, G\'er\^ome Bovet, Manuel Gil P\'erez, Gregorio
Mart\'inez P\'erez
|
LwHBench: A low-level hardware component benchmark and dataset for
Single Board Computers
| null | null |
10.1016/j.iot.2023.100764
| null |
cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
In today's computing environment, where Artificial Intelligence (AI) and data
processing are moving toward the Internet of Things (IoT) and Edge computing
paradigms, benchmarking resource-constrained devices is a critical task to
evaluate their suitability and performance. Among the devices employed,
Single-Board Computers stand out as multi-purpose and affordable systems. The
literature has explored Single-Board Computer performance when running
high-level benchmarks specialized in particular application scenarios, such as
AI or medical applications. However, lower-level benchmarking applications and
datasets are needed to enable new Edge-based AI solutions for network, system
and service management based on device and component performance, such as
individual device identification. Thus, this paper presents LwHBench, a
low-level hardware benchmarking application for Single-Board Computers that
measures the performance of CPU, GPU, Memory and Storage taking into account
the component constraints in these types of devices. LwHBench has been
implemented for Raspberry Pi devices and run for 100 days on a set of 45
devices to generate an extensive dataset that allows the usage of AI techniques
in scenarios where performance data can help in the device management process.
Besides, to demonstrate the inter-scenario capability of the dataset, a series
of AI-enabled use cases about device identification and context impact on
performance are presented as exploration of the published data. Finally, the
benchmark application has been adapted and applied to an agriculture-focused
scenario where three RockPro64 devices are present.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2022 18:58:38 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2022 12:02:35 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Sánchez",
"Pedro Miguel Sánchez",
""
],
[
"Valero",
"José María Jorquera",
""
],
[
"Celdrán",
"Alberto Huertas",
""
],
[
"Bovet",
"Gérôme",
""
],
[
"Pérez",
"Manuel Gil",
""
],
[
"Pérez",
"Gregorio Martínez",
""
]
] |
new_dataset
| 0.999799 |
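
A tiny illustration of the kind of per-component micro-benchmark LwHBench performs (not the actual tool; the probes and the best-of-N timing policy are assumptions):

```python
import time

def time_op(fn, reps: int = 5) -> float:
    """Return the best-of-reps wall-clock time for fn(), in seconds."""
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

# Two toy probes in the spirit of per-component benchmarking:
cpu_probe = lambda: sum(i * i for i in range(1_000_000))  # CPU / ALU work
mem_probe = lambda: bytearray(64 * 1024 * 1024)           # memory allocation
print(f"cpu: {time_op(cpu_probe):.4f}s  mem: {time_op(mem_probe):.4f}s")
```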
2204.13662
|
Zicong Fan
|
Zicong Fan, Omid Taheri, Dimitrios Tzionas, Muhammed Kocabas, Manuel
Kaufmann, Michael J. Black, and Otmar Hilliges
|
ARCTIC: A Dataset for Dexterous Bimanual Hand-Object Manipulation
|
Project page: https://arctic.is.tue.mpg.de
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans intuitively understand that inanimate objects do not move by
themselves, but that state changes are typically caused by human manipulation
(e.g., the opening of a book). This is not yet the case for machines. In part
this is because there exist no datasets with ground-truth 3D annotations for
the study of physically consistent and synchronised motion of hands and
articulated objects. To this end, we introduce ARCTIC -- a dataset of two hands
that dexterously manipulate objects, containing 2.1M video frames paired with
accurate 3D hand and object meshes and detailed, dynamic contact information.
It contains bi-manual articulation of objects such as scissors or laptops,
where hand poses and object states evolve jointly in time. We propose two novel
articulated hand-object interaction tasks: (1) Consistent motion
reconstruction: Given a monocular video, the goal is to reconstruct two hands
and articulated objects in 3D, so that their motions are spatio-temporally
consistent. (2) Interaction field estimation: Dense relative hand-object
distances must be estimated from images. We introduce two baselines ArcticNet
and InterField, respectively and evaluate them qualitatively and quantitatively
on ARCTIC. Our code and data are available at https://arctic.is.tue.mpg.de.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 17:23:59 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2023 08:48:07 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Apr 2023 13:11:57 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Fan",
"Zicong",
""
],
[
"Taheri",
"Omid",
""
],
[
"Tzionas",
"Dimitrios",
""
],
[
"Kocabas",
"Muhammed",
""
],
[
"Kaufmann",
"Manuel",
""
],
[
"Black",
"Michael J.",
""
],
[
"Hilliges",
"Otmar",
""
]
] |
new_dataset
| 0.999822 |
2207.04785
|
Emily Wenger
|
Emily Wenger, Mingjie Chen, Fran\c{c}ois Charton, Kristin Lauter
|
SALSA: Attacking Lattice Cryptography with Transformers
|
Extended version of work published at NeurIPS 2022
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Currently deployed public-key cryptosystems will be vulnerable to attacks by
full-scale quantum computers. Consequently, "quantum resistant" cryptosystems
are in high demand, and lattice-based cryptosystems, based on a hard problem
known as Learning With Errors (LWE), have emerged as strong contenders for
standardization. In this work, we train transformers to perform modular
arithmetic and combine half-trained models with statistical cryptanalysis
techniques to propose SALSA: a machine learning attack on LWE-based
cryptographic schemes. SALSA can fully recover secrets for small-to-mid size
LWE instances with sparse binary secrets, and may scale to attack real-world
LWE-based cryptosystems.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2022 11:35:43 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2023 22:03:01 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Wenger",
"Emily",
""
],
[
"Chen",
"Mingjie",
""
],
[
"Charton",
"François",
""
],
[
"Lauter",
"Kristin",
""
]
] |
new_dataset
| 0.999605 |
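
To make the setting concrete, here is a minimal sketch of generating the kind of LWE samples SALSA attacks, with a sparse binary secret (the parameter values and helper name are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def lwe_samples(n: int, q: int, m: int, hamming: int,
                sigma: float = 3.0, seed: int = 0):
    """Generate m LWE samples (a, b = <a, s> + e mod q) with a sparse
    binary secret s of the given Hamming weight."""
    rng = np.random.default_rng(seed)
    s = np.zeros(n, dtype=np.int64)
    s[rng.choice(n, size=hamming, replace=False)] = 1
    A = rng.integers(0, q, size=(m, n))                     # uniform a-vectors
    e = np.rint(rng.normal(0.0, sigma, size=m)).astype(np.int64)  # small noise
    b = (A @ s + e) % q
    return A, b, s

A, b, s = lwe_samples(n=50, q=251, m=8, hamming=3)
print(A.shape, b[:4])  # (a, b) pairs usable as sequence-model training data
```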
2209.02755
|
Antonio Bucchiarone Dr.
|
Antonio Bucchiarone, Simone Bassanelli, Massimiliano Luca, Simone
Centellegher, Piergiorgio Cipriano, Luca Giovannini, Bruno Lepri, Annapaola
Marconi
|
Play&Go Corporate: An End-to-End Solution for Facilitating Urban
Cyclability
|
14 pages, 9 figures
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Mobility plays a fundamental role in modern cities. How citizens experience
the urban environment, access city core services, and participate in city life,
strongly depends on its mobility organization and efficiency. The challenges
that municipalities face are very ambitious: on the one hand, administrators
must guarantee their citizens the right to mobility and to easily access local
services; on the other hand, they need to minimize the economic, social, and
environmental costs of the mobility system. Municipalities are increasingly
facing problems of traffic congestion, road safety, energy dependency and air
pollution, and therefore encouraging a shift towards sustainable mobility
habits based on active mobility is of central importance. Active modes, such as
cycling, should be particularly encouraged, especially for local recurrent
journeys (e.g., home--to--school, home--to--work). In this context, addressing
and mitigating commuter-generated traffic requires engaging public and private
stakeholders through innovative and collaborative approaches that focus not
only on supply (e.g., roads and vehicles) but also on transportation demand
management. In this paper, we present an end-to-end solution, called Play&Go
Corporate, for enabling urban cyclability and its concrete exploitation in the
realization of a home-to-work sustainable mobility campaign (i.e., Bike2Work)
targeting employees of public and private companies. To evaluate the
effectiveness of the proposed solution we developed two analyses: the first to
carefully analyze the user experience and any behaviour change related to the
Bike2Work mobility campaign, and the second to demonstrate how exploiting the
collected data we can potentially inform and guide the involved municipality
(i.e., Ferrara, a city in Northern Italy) in improving urban cyclability.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 18:21:06 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Apr 2023 12:48:33 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Bucchiarone",
"Antonio",
""
],
[
"Bassanelli",
"Simone",
""
],
[
"Luca",
"Massimiliano",
""
],
[
"Centellegher",
"Simone",
""
],
[
"Cipriano",
"Piergiorgio",
""
],
[
"Giovannini",
"Luca",
""
],
[
"Lepri",
"Bruno",
""
],
[
"Marconi",
"Annapaola",
""
]
] |
new_dataset
| 0.999669 |
2209.14941
|
Yanmin Wu
|
Yanmin Wu, Xinhua Cheng, Renrui Zhang, Zesen Cheng, Jian Zhang
|
EDA: Explicit Text-Decoupling and Dense Alignment for 3D Visual
Grounding
|
CVPR2023, with supplementary material
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D visual grounding aims to find the object within point clouds mentioned by
free-form natural language descriptions with rich semantic cues. However,
existing methods either extract the sentence-level features coupling all words
or focus more on object names, which would lose the word-level information or
neglect other attributes. To alleviate these issues, we present EDA that
Explicitly Decouples the textual attributes in a sentence and conducts Dense
Alignment between such fine-grained language and point cloud objects.
Specifically, we first propose a text decoupling module to produce textual
features for every semantic component. Then, we design two losses to supervise
the dense matching between two modalities: position alignment loss and semantic
alignment loss. On top of that, we further introduce a new visual grounding
task, locating objects without object names, which can thoroughly evaluate the
model's dense alignment capacity. Through experiments, we achieve
state-of-the-art performance on two widely-adopted 3D visual grounding
datasets, ScanRefer and SR3D/NR3D, and obtain absolute leadership on our
newly-proposed task. The source code is available at
https://github.com/yanmin-wu/EDA.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 17:00:22 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2022 14:23:48 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Apr 2023 13:16:57 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Wu",
"Yanmin",
""
],
[
"Cheng",
"Xinhua",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Cheng",
"Zesen",
""
],
[
"Zhang",
"Jian",
""
]
] |
new_dataset
| 0.999637 |
2211.11082
|
Zhengqi Li
|
Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah
Snavely
|
DynIBaR: Neural Dynamic Image-Based Rendering
|
Award Candidate, CVPR 2023 Project page: dynibar.github.io
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of synthesizing novel views from a monocular video
depicting a complex dynamic scene. State-of-the-art methods based on temporally
varying Neural Radiance Fields (aka dynamic NeRFs) have shown impressive
results on this task. However, for long videos with complex object motions and
uncontrolled camera trajectories, these methods can produce blurry or
inaccurate renderings, hampering their use in real-world applications. Instead
of encoding the entire dynamic scene within the weights of MLPs, we present a
new approach that addresses these limitations by adopting a volumetric
image-based rendering framework that synthesizes new viewpoints by aggregating
features from nearby views in a scene-motion-aware manner. Our system retains
the advantages of prior methods in its ability to model complex scenes and
view-dependent effects, but also enables synthesizing photo-realistic novel
views from long videos featuring complex scene dynamics with unconstrained
camera trajectories. We demonstrate significant improvements over
state-of-the-art methods on dynamic scene datasets, and also apply our approach
to in-the-wild videos with challenging camera and object motion, where prior
methods fail to produce high-quality renderings. Our project webpage is at
dynibar.github.io.
|
[
{
"version": "v1",
"created": "Sun, 20 Nov 2022 20:57:02 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2022 17:29:18 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Apr 2023 16:42:08 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Li",
"Zhengqi",
""
],
[
"Wang",
"Qianqian",
""
],
[
"Cole",
"Forrester",
""
],
[
"Tucker",
"Richard",
""
],
[
"Snavely",
"Noah",
""
]
] |
new_dataset
| 0.994602 |
2211.15444
|
Weihua Chen
|
Xianzhe Xu, Yiqi Jiang, Weihua Chen, Yilun Huang, Yuan Zhang, Xiuyu
Sun
|
DAMO-YOLO : A Report on Real-Time Object Detection Design
|
Project Website: https://github.com/tinyvision/damo-yolo
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this report, we present a fast and accurate object detection method dubbed
DAMO-YOLO, which achieves higher performance than the state-of-the-art YOLO
series. DAMO-YOLO is extended from YOLO with some new technologies, including
Neural Architecture Search (NAS), efficient Reparameterized Generalized-FPN
(RepGFPN), a lightweight head with AlignedOTA label assignment, and
distillation enhancement. In particular, we use MAE-NAS, a method guided by the
principle of maximum entropy, to search our detection backbone under the
constraints of low latency and high performance, producing ResNet/CSP-like
structures with spatial pyramid pooling and focus modules. In the design of
necks and heads, we follow the rule of ``large neck, small head''. We import
Generalized-FPN with accelerated queen-fusion to build the detector neck and
upgrade its CSPNet with efficient layer aggregation networks (ELAN) and
reparameterization. We then investigate how detector head size affects
detection performance and find that a heavy neck with only one task projection
layer yields better results. In addition, AlignedOTA is proposed to solve
the misalignment problem in label assignment, and a distillation scheme is
introduced to raise performance further. Based on these new techniques,
we build a suite of models at various scales to meet the needs of different
scenarios. For general industry requirements, we propose DAMO-YOLO-T/S/M/L.
They can achieve 43.6/47.7/50.2/51.9 mAPs on COCO with the latency of
2.78/3.83/5.62/7.95 ms on T4 GPUs respectively. Additionally, for edge devices
with limited computing power, we have also proposed DAMO-YOLO-Ns/Nm/Nl
lightweight models. They can achieve 32.3/38.2/40.5 mAPs on COCO with the
latency of 4.08/5.05/6.69 ms on X86-CPU. Our proposed general and lightweight
models have outperformed other YOLO series models in their respective
application scenarios.
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 17:59:12 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Dec 2022 10:03:25 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Mar 2023 14:35:16 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Apr 2023 03:32:15 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Xu",
"Xianzhe",
""
],
[
"Jiang",
"Yiqi",
""
],
[
"Chen",
"Weihua",
""
],
[
"Huang",
"Yilun",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Sun",
"Xiuyu",
""
]
] |
new_dataset
| 0.997985 |
2212.11172
|
Colin Decourt
|
Colin Decourt, Rufin VanRullen, Didier Salle and Thomas Oberlin
|
A recurrent CNN for online object detection on raw radar frames
|
10 pages, 3 figures
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automotive radar sensors provide valuable information for advanced driving
assistance systems (ADAS). Radars can reliably estimate the distance to an
object and the relative velocity, regardless of weather and light conditions.
However, radar sensors suffer from low resolution and huge intra-class
variations in the shape of objects. Exploiting the time information (e.g.,
multiple frames) has been shown to help to capture better the dynamics of
objects and, therefore, the variation in the shape of objects. Most temporal
radar object detectors use 3D convolutions to learn spatial and temporal
information. However, these methods are often non-causal and unsuitable for
real-time applications. This work presents RECORD, a new recurrent CNN
architecture for online radar object detection. We propose an end-to-end
trainable architecture mixing convolutions and ConvLSTMs to learn
spatio-temporal dependencies between successive frames. Our model is causal and
requires only the past information encoded in the memory of the ConvLSTMs to
detect objects. Our experiments show the relevance of such a method for
detecting objects in different radar representations (range-Doppler,
range-angle); RECORD outperforms state-of-the-art models on the ROD2021 and
CARRADA datasets while being less computationally expensive.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 16:36:36 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Apr 2023 19:24:20 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Decourt",
"Colin",
""
],
[
"VanRullen",
"Rufin",
""
],
[
"Salle",
"Didier",
""
],
[
"Oberlin",
"Thomas",
""
]
] |
new_dataset
| 0.990855 |
2212.12061
|
Nuno Fachada
|
Alina Petukhova, Nuno Fachada
|
MN-DS: A Multilabeled News Dataset for News Articles Hierarchical
Classification
|
The peer-reviewed version of this paper is published in Data at
https://doi.org/10.3390/data8050074. This version is typeset by the authors
and differs only in pagination and typographical detail
|
Data, 8(5), 74, 2023
|
10.3390/data8050074
| null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents a dataset of 10,917 news articles with hierarchical
news categories collected between 1 January 2019 and 31 December 2019. We
manually labeled the articles based on a hierarchical taxonomy with 17
first-level and 109 second-level categories. This dataset can be used to train
machine learning models for automatically classifying news articles by topic.
This dataset can be helpful for researchers working on news structuring,
classification, and predicting future events based on released news.
|
[
{
"version": "v1",
"created": "Thu, 22 Dec 2022 22:27:26 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Mar 2023 12:10:02 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Apr 2023 14:49:44 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Petukhova",
"Alina",
""
],
[
"Fachada",
"Nuno",
""
]
] |
new_dataset
| 0.999804 |
2302.06180
|
Yuntao Du
|
Yuntao Du, Yujia Hu, Zhikun Zhang, Ziquan Fang, Lu Chen, Baihua Zheng,
Yunjun Gao
|
LDPTrace: Locally Differentially Private Trajectory Synthesis
|
Accepted by VLDB 2023. Code is available:
https://github.com/zealscott/LDPTrace
| null |
10.14778/3594512.3594520
| null |
cs.DB cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Trajectory data has the potential to greatly benefit a wide range of
real-world applications, such as tracking the spread of disease through
people's movement patterns and providing personalized location-based services
based on travel preferences. However, privacy concerns and data protection
regulations have limited the extent to which this data is shared and utilized.
To overcome this challenge, local differential privacy provides a solution by
allowing people to share a perturbed version of their data, ensuring privacy as
only the data owners have access to the original information. Despite its
potential, existing point-based perturbation mechanisms are not suitable for
real-world scenarios due to poor utility, dependence on external knowledge,
high computational overhead, and vulnerability to attacks. To address these
limitations, we introduce LDPTrace, a novel locally differentially private
trajectory synthesis framework. Our framework takes into account three crucial
patterns inferred from users' trajectories in the local setting, allowing us to
synthesize trajectories that closely resemble real ones with minimal
computational cost. Additionally, we present a new method for selecting a
proper grid granularity without compromising privacy. Our extensive experiments
using real-world data, various utility metrics and attacks, demonstrate the
efficacy and efficiency of LDPTrace.
|
[
{
"version": "v1",
"created": "Mon, 13 Feb 2023 08:28:49 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Apr 2023 02:40:49 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Du",
"Yuntao",
""
],
[
"Hu",
"Yujia",
""
],
[
"Zhang",
"Zhikun",
""
],
[
"Fang",
"Ziquan",
""
],
[
"Chen",
"Lu",
""
],
[
"Zheng",
"Baihua",
""
],
[
"Gao",
"Yunjun",
""
]
] |
new_dataset
| 0.999416 |
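
A common locally differentially private primitive in this space is generalized randomized response over discretized grid cells; a minimal sketch (illustrative; LDPTrace's actual perturbation mechanisms differ in detail):

```python
import math
import random

def grr_perturb(cell: int, num_cells: int, epsilon: float) -> int:
    """Generalized randomized response: report the true grid cell with
    probability p, otherwise a uniformly random other cell. Satisfies
    epsilon-local differential privacy."""
    p = math.exp(epsilon) / (math.exp(epsilon) + num_cells - 1)
    if random.random() < p:
        return cell
    other = random.randrange(num_cells - 1)  # uniform over the other cells
    return other if other < cell else other + 1

# Each user perturbs the grid cell of a trajectory point locally,
# before sharing anything with the aggregator.
print([grr_perturb(cell=7, num_cells=100, epsilon=1.0) for _ in range(5)])
```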
2303.00173
|
Jingyao Zhang
|
Jingyao Zhang, Mohsen Imani, Elaheh Sadredini
|
BP-NTT: Fast and Compact in-SRAM Number Theoretic Transform with
Bit-Parallel Modular Multiplication
|
This work is accepted to the 60th Design Automation Conference (DAC),
2023
| null | null | null |
cs.AR cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Number Theoretic Transform (NTT) is an essential mathematical tool for
computing polynomial multiplication in promising lattice-based cryptography.
However, costly division operations and complex data dependencies make
efficient and flexible hardware design challenging, especially on
resource-constrained edge devices. Existing approaches either focus on only
limited parameter settings or impose substantial hardware overhead. In this
paper, we introduce a hardware-algorithm methodology to efficiently accelerate
NTT in various settings using in-cache computing. By leveraging an optimized
bit-parallel modular multiplication and introducing costless shift operations,
our proposed solution provides up to 29x higher throughput-per-area and
2.8-100x better throughput-per-area-per-joule compared to the state-of-the-art.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 02:02:47 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2023 06:22:15 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Apr 2023 11:24:08 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Zhang",
"Jingyao",
""
],
[
"Imani",
"Mohsen",
""
],
[
"Sadredini",
"Elaheh",
""
]
] |
new_dataset
| 0.999367 |
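
For intuition, the NTT is a DFT over a finite field; a naive O(n^2) reference sketch (the textbook transform, not the paper's in-SRAM bit-parallel design):

```python
def ntt(a, omega, q):
    """Naive Number Theoretic Transform: a DFT over Z_q using a primitive
    n-th root of unity omega instead of a complex exponential."""
    n = len(a)
    return [sum(a[j] * pow(omega, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A, omega, q):
    """Inverse NTT: transform with omega^{-1}, then scale by n^{-1} mod q."""
    n = len(A)
    inv_omega = pow(omega, -1, q)
    inv_n = pow(n, -1, q)
    return [(inv_n * x) % q for x in ntt(A, inv_omega, q)]

# q = 17, n = 8: 2 is a primitive 8th root of unity mod 17 (2^8 = 256 ≡ 1).
a = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(a, 2, 17), 2, 17) == a
```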
2303.03470
|
Robert Hallyburton
|
R. Spencer Hallyburton, Qingzhao Zhang, Z. Morley Mao, Miroslav Pajic
|
Partial-Information, Longitudinal Cyber Attacks on LiDAR in Autonomous
Vehicles
| null | null | null | null |
cs.CR cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
What happens to an autonomous vehicle (AV) if its data are adversarially
compromised? Prior security studies have addressed this question through mostly
unrealistic threat models, with limited practical relevance, such as white-box
adversarial learning or nanometer-scale laser aiming and spoofing. With growing
evidence that cyber threats pose real, imminent danger to AVs and
cyber-physical systems (CPS) in general, we present and evaluate a novel AV
threat model: a cyber-level attacker capable of disrupting sensor data but
lacking any situational awareness. We demonstrate that even though the attacker
has minimal knowledge and only access to raw data from a single sensor (i.e.,
LiDAR), she can design several attacks that critically compromise perception
and tracking in multi-sensor AVs. To mitigate vulnerabilities and advance
secure architectures in AVs, we introduce two improvements for security-aware
fusion: a probabilistic data-asymmetry monitor and a scalable track-to-track
fusion of 3D LiDAR and monocular detections (T2T-3DLM); we demonstrate that the
approaches significantly reduce attack effectiveness. To support objective
safety and security evaluations in AVs, we release our security evaluation
platform, AVsec, which is built on security-relevant metrics to benchmark AVs
on gold-standard longitudinal AV datasets and AV simulators.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 19:52:41 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Apr 2023 11:41:26 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Hallyburton",
"R. Spencer",
""
],
[
"Zhang",
"Qingzhao",
""
],
[
"Mao",
"Z. Morley",
""
],
[
"Pajic",
"Miroslav",
""
]
] |
new_dataset
| 0.979963 |
2304.10532
|
Aleksander Holynski
|
Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski,
Angjoo Kanazawa
|
Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs
|
https://ethanweber.me/nerfbusters
| null | null | null |
cs.CV cs.AI cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Casually captured Neural Radiance Fields (NeRFs) suffer from artifacts such
as floaters or flawed geometry when rendered outside the camera trajectory.
Existing evaluation protocols often do not capture these effects, since they
usually only assess image quality at every 8th frame of the training capture.
To push forward progress in novel-view synthesis, we propose a new dataset and
evaluation procedure, where two camera trajectories are recorded of the scene:
one used for training, and the other for evaluation. In this more challenging
in-the-wild setting, we find that existing hand-crafted regularizers do not
remove floaters nor improve scene geometry. Thus, we propose a 3D
diffusion-based method that leverages local 3D priors and a novel density-based
score distillation sampling loss to discourage artifacts during NeRF
optimization. We show that this data-driven prior removes floaters and improves
scene geometry for casual captures.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 17:59:05 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2023 22:41:20 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Warburg",
"Frederik",
""
],
[
"Weber",
"Ethan",
""
],
[
"Tancik",
"Matthew",
""
],
[
"Holynski",
"Aleksander",
""
],
[
"Kanazawa",
"Angjoo",
""
]
] |
new_dataset
| 0.999654 |
2304.11161
|
E. Canessa
|
E. Canessa and L. Tenze
|
altiro3D: Scene representation from single image and novel view
synthesis
|
9 pages, 4 figures
| null | null | null |
cs.CV cs.GR cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce altiro3D, a free extended library developed to represent reality
starting from a given original RGB image or flat video. It allows one to
generate a light-field (or Native) image or video for a realistic 3D
experience.
synthesize N-number of virtual images and add them sequentially into a Quilt
collage, we apply MiDaS models for the monocular depth estimation, simple
OpenCV and Telea inpainting techniques to map all pixels, and implement a
'Fast' algorithm to handle 3D projection camera and scene transformations along
N-viewpoints. We use the degree of depth to move proportionally the pixels,
assuming the original image to be at the center of all the viewpoints. altiro3D
can also be used with the DIBR algorithm to compute intermediate snapshots from
an equivalent 'Real (slower)' camera with N geometric viewpoints, which
requires a priori calibration of several intrinsic and extrinsic camera
parameters. We
adopt a pixel- and device-based Lookup Table to optimize computing time. The
multiple viewpoints and video generated from a single image or frame can be
displayed in a free-view LCD display.
|
[
{
"version": "v1",
"created": "Sun, 2 Apr 2023 16:03:44 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Canessa",
"E.",
""
],
[
"Tenze",
"L.",
""
]
] |
new_dataset
| 0.999476 |
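
A toy illustration of the depth-proportional pixel shifting the altiro3D abstract describes (not the library's code; the depth normalization, zero-parallax plane, and horizontal-only shift are assumptions):

```python
import numpy as np

def shift_view(rgb: np.ndarray, depth: np.ndarray, max_shift: float) -> np.ndarray:
    """Warp an RGB image horizontally, moving each pixel proportionally to
    its normalized depth (0..1), with depth 0.5 as the zero-parallax plane."""
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    shifts = np.round((depth - 0.5) * 2.0 * max_shift).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + shifts[y, x]
            if 0 <= nx < w:
                out[y, nx] = rgb[y, x]  # holes are left for inpainting
    return out

rgb = np.random.randint(0, 255, (4, 6, 3), dtype=np.uint8)
depth = np.random.rand(4, 6)
print(shift_view(rgb, depth, max_shift=2.0).shape)  # (4, 6, 3)
```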
2304.11163
|
Atoosa Kasirzadeh
|
Atoosa Kasirzadeh
|
ChatGPT, Large Language Technologies, and the Bumpy Road of Benefiting
Humanity
|
As part of a series on Dailynous : "Philosophers on next-generation
large language models"
| null | null | null |
cs.CY cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The allure of emerging AI technologies is undoubtedly thrilling. However, the
promise that AI technologies will benefit all of humanity is empty so long as
we lack a nuanced understanding of what humanity is supposed to be in the face
of widening global inequality and pressing existential threats. Going forward,
it is crucial to invest in rigorous and collaborative AI safety and ethics
research. We also need to develop standards in a sustainable and equitable way
that differentiate between merely speculative and well-researched questions.
Only the latter enable us to co-construct and deploy the values that are
necessary for creating beneficial AI. Failure to do so could result in a future
in which our AI technological advancements outstrip our ability to navigate
their ethical and social implications. This is a path we do not want to go down.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 22:53:45 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Kasirzadeh",
"Atoosa",
""
]
] |
new_dataset
| 0.976276 |
2304.11196
|
Saeejith Nair
|
Alexander Wong, Yifan Wu, Saad Abbasi, Saeejith Nair, Yuhao Chen,
Mohammad Javad Shafiee
|
Fast GraspNeXt: A Fast Self-Attention Neural Network Architecture for
Multi-task Learning in Computer Vision Tasks for Robotic Grasping on the Edge
|
Accepted at CVPR-NAS 2023 Workshop
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-task learning has shown considerable promise for improving the
performance of deep learning-driven vision systems for the purpose of robotic
grasping. However, high architectural and computational complexity can result
in poor suitability for deployment on embedded devices that are typically
leveraged in robotic arms for real-world manufacturing and warehouse
environments. As such, the design of highly efficient multi-task deep neural
network architectures tailored for computer vision tasks for robotic grasping
on the edge is highly desired for widespread adoption in manufacturing
environments. Motivated by this, we propose Fast GraspNeXt, a fast
self-attention neural network architecture tailored for embedded multi-task
learning in computer vision tasks for robotic grasping. To build Fast
GraspNeXt, we leverage a generative network architecture search strategy with a
set of architectural constraints customized to achieve a strong balance between
multi-task learning performance and embedded inference efficiency. Experimental
results on the MetaGraspNet benchmark dataset show that the Fast GraspNeXt
network design achieves the highest performance (average precision (AP),
accuracy, and mean squared error (MSE)) across multiple computer vision tasks
when compared to other efficient multi-task network architecture designs, while
having only 17.8M parameters (more than 5x smaller), 259 GFLOPs (more than 5x
lower), and running more than 3.15x faster on an NVIDIA Jetson TX2 embedded
processor.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 18:07:14 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Wong",
"Alexander",
""
],
[
"Wu",
"Yifan",
""
],
[
"Abbasi",
"Saad",
""
],
[
"Nair",
"Saeejith",
""
],
[
"Chen",
"Yuhao",
""
],
[
"Shafiee",
"Mohammad Javad",
""
]
] |
new_dataset
| 0.959438 |
2304.11219
|
Rishov Sarkar
|
Rishov Sarkar, Cong Hao
|
LightningSim: Fast and Accurate Trace-Based Simulation for High-Level
Synthesis
|
11 pages, 7 figures. Accepted at FCCM 2023
| null | null | null |
cs.PF cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-Level Synthesis allows hardware designers to create complex RTL designs
using C/C++. The traditional HLS workflow involves iterations of C/C++
simulation for partial functional verification and HLS synthesis for coarse
timing estimates. However, neither C/C++ simulation nor HLS synthesis estimates
can account for complex behaviors like FIFO interactions and pipeline stalls,
thereby obscuring problems like deadlocks and latency overheads. Such problems
are revealed only through C/RTL co-simulation, which is typically orders of
magnitude slower than either C/C++ simulation or HLS synthesis, far too slow to
integrate into the edit-run development cycle. Addressing this, we propose
LightningSim, a fast simulation tool for HLS that combines the speed of native
C/C++ with the accuracy of C/RTL co-simulation. LightningSim directly operates
on the LLVM intermediate representation (IR) code and accurately simulates a
hardware design's dynamic behavior. First, it traces LLVM IR execution to
capture the run-time information; second, it maps the static HLS scheduling
information to the trace to simulate the dynamic behavior; third, it calculates
stalls and deadlocks from inter-function interactions to get precise cycle
counts. Evaluated on 33 benchmarks, LightningSim produces 99.9%-accurate timing
estimates up to 95x faster than RTL simulation. Our code is publicly available
on GitHub.
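For intuition, a toy Python sketch of trace-based cycle counting in the spirit
described above: dynamic events are replayed against static latencies, and
stalls accumulate on a full FIFO. The names, the single-FIFO model, and the
one-cycle stall policy are assumptions, not LightningSim's implementation:

```python
from collections import namedtuple

Event = namedtuple("Event", "op kind")

def simulate(trace, latency, fifo_depth):
    """Replay a dynamic IR trace against static HLS latencies,
    adding stall cycles when the (single, toy) FIFO is full."""
    cycle, occupancy = 0, 0
    for ev in trace:
        cycle += latency[ev.op]                  # statically scheduled cost
        if ev.kind == "fifo_write":
            if occupancy == fifo_depth:          # producer blocked
                cycle += 1                       # one stall cycle (toy model)
            else:
                occupancy += 1
        elif ev.kind == "fifo_read" and occupancy > 0:
            occupancy -= 1
    return cycle

latency = {"add": 1, "mul": 3, "store": 1}
trace = [Event("mul", None), Event("store", "fifo_write"),
         Event("add", None), Event("store", "fifo_write"),
         Event("add", "fifo_read")]
print(simulate(trace, latency, fifo_depth=1))    # total cycles incl. stalls
```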
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 18:58:54 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Sarkar",
"Rishov",
""
],
[
"Hao",
"Cong",
""
]
] |
new_dataset
| 0.959441 |
2304.11249
|
Matija Ter\v{s}ek
|
Matija Ter\v{s}ek and Lojze \v{Z}ust and Matej Kristan
|
eWaSR -- an embedded-compute-ready maritime obstacle detection network
|
18 pages, 7 figures, submitted to MDPI Sensors
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maritime obstacle detection is critical for safe navigation of autonomous
surface vehicles (ASVs). While the accuracy of image-based detection methods
has advanced substantially, their computational and memory requirements
prohibit deployment on embedded devices. In this paper we analyze the currently
best-performing maritime obstacle detection network WaSR. Based on the analysis
we then propose replacements for the most computationally intensive stages and
propose its embedded-compute-ready variant eWaSR. In particular, the new design
follows the most recent advancements of transformer-based lightweight networks.
eWaSR achieves comparable detection results to state-of-the-art WaSR with only
0.52% F1 score performance drop and outperforms other state-of-the-art
embedded-ready architectures by over 9.74% in F1 score. On a standard GPU,
eWaSR runs 10x faster than the original WaSR (115 FPS vs 11 FPS). Tests on a
real embedded device OAK-D show that, while WaSR cannot run due to memory
restrictions, eWaSR runs comfortably at 5.5 FPS. This makes eWaSR the first
practical embedded-compute-ready maritime obstacle detection network. The
source code and trained eWaSR models are publicly available here:
https://github.com/tersekmatija/eWaSR.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 20:53:51 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Teršek",
"Matija",
""
],
[
"Žust",
"Lojze",
""
],
[
"Kristan",
"Matej",
""
]
] |
new_dataset
| 0.999197 |
2304.11291
|
Noreen Anwar
|
Noreen Anwar, Philippe Duplessis-Guindon, Guillaume-Alexandre Bilodeau
and Wassim Bouachir
|
VisiTherS: Visible-thermal infrared stereo disparity estimation of human
silhouette
|
8 pages,3 Figures,CVPR workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel approach for visible-thermal infrared
stereoscopy, focusing on the estimation of disparities of human silhouettes.
Visible-thermal infrared stereo poses several challenges, including occlusions
and differently textured matching regions in both spectra. Finding matches
between two spectra with varying colors, textures, and shapes adds further
complexity to the task. To address the aforementioned challenges, this paper
proposes a novel approach where a high-resolution convolutional neural network
is used to better capture relationships between the two spectra. To do so, a
modified HRNet backbone is used for feature extraction. This HRNet backbone is
capable of capturing fine details and textures as it extracts features at
multiple scales, thereby enabling the utilization of both local and global
information. For matching visible and thermal infrared regions, our method
extracts features on each patch using two modified HRNet streams. Features from
the two streams are then combined for predicting the disparities by
concatenation and correlation. Results on public datasets demonstrate the
effectiveness of the proposed approach by improving the results by
approximately 18 percentage points on the $\leq$ 1 pixel error, highlighting
its potential for improving accuracy in this task. The code of VisiTherS is
available on GitHub at the following link
https://github.com/philippeDG/VisiTherS.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 01:53:28 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Anwar",
"Noreen",
""
],
[
"Duplessis-Guindon",
"Philippe",
""
],
[
"Bilodeau",
"Guillaume-Alexandre",
""
],
[
"Bouachir",
"Wassim",
""
]
] |
new_dataset
| 0.974634 |
2304.11293
|
Katie Seaborn
|
Jacqueline Urakami, Katie Seaborn
|
Nonverbal Cues in Human-Robot Interaction: A Communication Studies
Perspective
|
21 pages
|
J. Hum.-Robot Interact. 12, 2, Article 22 (June 2023), 21 pages
|
10.1145/3570169
|
Article 22
|
cs.RO cs.AI cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Communication between people is characterized by a broad range of nonverbal
cues. Transferring these cues into the design of robots and other artificial
agents that interact with people may foster more natural, inviting, and
accessible experiences. In this position paper, we offer a series of definitive
nonverbal codes for human-robot interaction (HRI) that address the five human
sensory systems (visual, auditory, haptic, olfactory, gustatory) drawn from the
field of communication studies. We discuss how these codes can be translated
into design patterns for HRI using a curated sample of the communication
studies and HRI literatures. As nonverbal codes are an essential mode in human
communication, we argue that integrating robotic nonverbal codes in HRI will
afford robots a feeling of "aliveness" or "social agency" that would otherwise
be missing. We end with suggestions for research directions to stimulate work
on nonverbal communication within the field of HRI and improve communication
between humans and robots.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 02:15:48 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Urakami",
"Jacqueline",
""
],
[
"Seaborn",
"Katie",
""
]
] |
new_dataset
| 0.991964 |
2304.11300
|
Zilong Lin
|
Zilong Lin, Zhengyi Li, Xiaojing Liao, XiaoFeng Wang, Xiaozhong Liu
|
MAWSEO: Adversarial Wiki Search Poisoning for Illicit Online Promotion
| null | null | null | null |
cs.CR cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a prominent instance of vandalism edits, Wiki search poisoning for illicit
promotion is a cybercrime in which the adversary aims at editing Wiki articles
to promote illicit businesses through Wiki search results of relevant queries.
In this paper, we report a study that, for the first time, shows that such
stealthy blackhat SEO on Wiki can be automated. Our technique, called MAWSEO,
employs adversarial revisions to achieve real-world cybercriminal objectives,
including rank boosting, vandalism detection evasion, topic relevancy, semantic
consistency, user awareness (but not alarming) of promotional content, etc. Our
evaluation and user study demonstrate that MAWSEO is able to effectively and
efficiently generate adversarial vandalism edits, which can bypass
state-of-the-art built-in Wiki vandalism detectors, and also get promotional
content through to Wiki users without triggering their alarms. In addition, we
investigated potential defenses against our attack in the Wiki ecosystem,
including coherence-based detection and adversarial training of vandalism
detectors.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 03:13:05 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Lin",
"Zilong",
""
],
[
"Li",
"Zhengyi",
""
],
[
"Liao",
"Xiaojing",
""
],
[
"Wang",
"XiaoFeng",
""
],
[
"Liu",
"Xiaozhong",
""
]
] |
new_dataset
| 0.997843 |
2304.11342
|
Baao Xie
|
Baao Xie, Bohan Li, Zequn Zhang, Junting Dong, Xin Jin, Jingyu Yang,
Wenjun Zeng
|
NaviNeRF: NeRF-based 3D Representation Disentanglement by Latent
Semantic Navigation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D representation disentanglement aims to identify, decompose, and manipulate
the underlying explanatory factors of 3D data, which helps AI fundamentally
understand our 3D world. This task is currently under-explored and poses great
challenges: (i) 3D representations are complex and in general contain much
more information than 2D images; (ii) many 3D representations are not well
suited for gradient-based optimization, let alone disentanglement. To address
these challenges, we use NeRF as a differentiable 3D representation, and
introduce a self-supervised Navigation to identify interpretable semantic
directions in the latent space. To the best of our knowledge, this novel
method, dubbed NaviNeRF, is the first work to achieve fine-grained 3D
disentanglement without any priors or supervision. Specifically, NaviNeRF is
built upon the generative NeRF pipeline, and equipped with an Outer Navigation
Branch and an Inner Refinement Branch. The two are complementary: the outer
navigation branch identifies global-view semantic directions, while the inner
refinement branch is dedicated to fine-grained attributes. A synergistic loss
is further devised to coordinate the two branches. Extensive experiments
demonstrate that NaviNeRF achieves superior fine-grained 3D disentanglement
compared to previous 3D-aware models. Its
performance is also comparable to editing-oriented models relying on semantic
or geometry priors.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 07:48:17 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Xie",
"Baao",
""
],
[
"Li",
"Bohan",
""
],
[
"Zhang",
"Zequn",
""
],
[
"Dong",
"Junting",
""
],
[
"Jin",
"Xin",
""
],
[
"Yang",
"Jingyu",
""
],
[
"Zeng",
"Wenjun",
""
]
] |
new_dataset
| 0.999309 |
2304.11377
|
Sibi Chakkaravarthy S
|
Sibi Chakkaravarthy Sethuraman, Gaurav Reddy Tadkapally, Athresh
Kiran, Saraju P. Mohanty, Anitha Subramanian
|
SimplyMime: A Control at Our Fingertips
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The utilization of consumer electronics, such as televisions, set-top boxes,
home theaters, and air conditioners, has become increasingly prevalent in
modern society as technology continues to evolve. As new devices enter our
homes each year, the accumulation of multiple infrared remote controls to
operate them not only results in a waste of energy and resources, but also
creates a cumbersome and cluttered environment for the user. This paper
presents a novel system, named SimplyMime, which aims to eliminate the need for
multiple remote controls for consumer electronics and provide the user with
intuitive control without the need for additional devices. SimplyMime leverages
a dynamic hand gesture recognition architecture, incorporating Artificial
Intelligence and Human-Computer Interaction, to create a sophisticated system
that enables users to interact with a vast majority of consumer electronics
with ease. Additionally, SimplyMime incorporates a security layer that can
verify and authenticate the user via their palmprint, ensuring that only
authorized users can control the devices. The performance of the proposed
method for detecting and recognizing gestures in a stream of motion was
thoroughly tested and validated using multiple benchmark datasets, resulting in
commendable accuracy levels. One of the distinct advantages of the proposed
method is its minimal computational power requirements, making it highly
adaptable and reliable in a wide range of circumstances. The paper proposes
incorporating this technology into all consumer electronic devices that
currently require a secondary remote for operation, thus promoting a more
efficient and sustainable living environment.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 11:25:19 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Sethuraman",
"Sibi Chakkaravarthy",
""
],
[
"Tadkapally",
"Gaurav Reddy",
""
],
[
"Kiran",
"Athresh",
""
],
[
"Mohanty",
"Saraju P.",
""
],
[
"Subramanian",
"Anitha",
""
]
] |
new_dataset
| 0.999907 |
2304.11385
|
Hyuckjin Choi
|
Hyuckjin Choi, Jaehoon Chung, Jaeky Oh, George C. Alexandropoulos, and
Junil Choi
|
WiThRay: A Versatile Ray-Tracing Simulator for Smart Wireless
Environments
|
23 pages, 25 figures, submitted to IEEE Access
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the development and evaluation of WiThRay, a new wireless
three-dimensional ray-tracing (RT) simulator. RT-based simulators are widely
used for generating realistic channel data by combining RT methodology to get
signal trajectories and electromagnetic (EM) equations, resulting in
generalized channel impulse responses (CIRs). This paper first provides a
comprehensive comparison of the methodologies of existing RT-based simulators. We
then introduce WiThRay, which can evaluate the performance of various wireless
communication techniques such as channel estimation/tracking, beamforming, and
localization in realistic EM wave propagation. WiThRay implements its own RT
methodology, the bypassing on edge (BE) algorithm, which follows Fermat's
principle and has low computational complexity. The scattering ray calibration
in WiThRay also provides a precise solution in the analysis of EM propagation.
Different from most of the previous RT-based simulators, WiThRay incorporates
reconfigurable intelligent surfaces (RIS), which will be a key component of
future wireless communications. We thoroughly show that the channel data from
WiThRay match sufficiently well with the fundamental theory of wireless
channels. A key virtue of WiThRay is that it makes no assumptions about the
channel, such as slow/fast fading or frequency selectivity. A realistic
wireless environment, which can be conveniently
simulated via WiThRay, naturally defines the physical properties of the
wireless channels. WiThRay is open to the public, and anyone can exploit this
versatile simulator to develop and test their communications and signal
processing techniques.
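To make the RT-to-CIR step concrete, a small numpy sketch that bins traced path
gains into a discretized channel impulse response; the sampling rate and
complex gains are illustrative, and no EM model is included:

```python
import numpy as np

def channel_impulse_response(paths, fs=1e9, length=512):
    """Discretized CIR from traced paths: each path contributes a
    complex gain at its propagation delay (toy resolution, no EM model)."""
    h = np.zeros(length, dtype=complex)
    for gain, delay in paths:                 # (complex amplitude, seconds)
        tap = int(round(delay * fs))
        if tap < length:
            h[tap] += gain
    return h

# Two hypothetical paths: a strong early arrival and a weaker echo.
paths = [(1.0 + 0.2j, 50e-9), (0.3 - 0.1j, 120e-9)]
h = channel_impulse_response(paths)
```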
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 12:30:02 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Choi",
"Hyuckjin",
""
],
[
"Chung",
"Jaehoon",
""
],
[
"Oh",
"Jaeky",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Choi",
"Junil",
""
]
] |
new_dataset
| 0.994769 |
2304.11408
|
Siddique Latif
|
Ahlam Husni Abu Nada, Siddique Latif, and Junaid Qadir
|
Lightweight Toxicity Detection in Spoken Language: A Transformer-based
Approach for Edge Devices
|
Under Rewiew
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Toxicity is a prevalent social behavior that involves the use of hate speech,
offensive language, bullying, and abusive speech. While text-based approaches
for toxicity detection are common, there is limited research on processing
speech signals in the physical world. Detecting toxicity in the physical world
is challenging due to the difficulty of integrating AI-capable computers into
the environment. We propose a lightweight transformer model based on wav2vec2.0
and optimize it using techniques such as quantization and knowledge
distillation. Our model uses multitask learning and achieves an average macro
F1-score of 90.3\% and a weighted accuracy of 88\%, outperforming
state-of-the-art methods on DeToxy-B and a public dataset. Our results show
that quantization reduces the model size by almost 4 times and RAM usage by
3.3\%, with only a 1\% F1 score decrease. Knowledge distillation reduces the
model size by 3.7 times, RAM usage by 1.9 times, and inference time by 2 times, but
decreases accuracy by 8\%. Combining both techniques reduces the model size by
14.6 times and RAM usage by around 4.3 times, with a two-fold inference time
improvement. Our compact model is the first end-to-end speech-based toxicity
detection model based on a lightweight transformer model suitable for
deployment in physical spaces. The results show its feasibility for toxicity
detection on edge devices in real-world environments.
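For reference, a short PyTorch sketch of the two compression techniques
mentioned: post-training dynamic quantization and a standard distillation
loss. The toy model, temperature, and mixing weight are assumptions, not the
paper's exact setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier head; the paper compresses a wav2vec2.0 model.
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 2))

# Post-training dynamic quantization: 8-bit weights for Linear layers.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Distillation: match softened teacher logits at temperature T,
    mixed with the ordinary hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```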
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 13:45:38 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Nada",
"Ahlam Husni Abu",
""
],
[
"Latif",
"Siddique",
""
],
[
"Qadir",
"Junaid",
""
]
] |
new_dataset
| 0.988818 |
2304.11411
|
Heng Wang
|
Heng Wang, Wenqian Zhang, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng,
Qinghua Zheng, Minnan Luo
|
Detecting Spoilers in Movie Reviews with External Movie Knowledge and
User Networks
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Online movie review platforms are providing crowdsourced feedback for the
film industry and the general public, while spoiler reviews greatly compromise
user experience. Although preliminary research efforts were made to
automatically identify spoilers, they merely focus on the review content
itself, while robust spoiler detection requires putting the review into the
context of facts and knowledge regarding movies, user behavior on film review
platforms, and more. In light of these challenges, we first curate a
large-scale network-based spoiler detection dataset LCS and a comprehensive and
up-to-date movie knowledge base UKM. We then propose MVSD, a novel Multi-View
Spoiler Detection framework that takes into account the external knowledge
about movies and user activities on movie review platforms. Specifically, MVSD
constructs three interconnecting heterogeneous information networks to model
diverse data sources and their multi-view attributes, while we design and
employ a novel heterogeneous graph neural network architecture for spoiler
detection as node-level classification. Extensive experiments demonstrate that
MVSD advances the state-of-the-art on two spoiler detection datasets, while the
introduction of external knowledge and user interactions helps ground robust
spoiler detection. Our data and code are available at
https://github.com/Arthur-Heng/Spoiler-Detection
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 13:54:31 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Wang",
"Heng",
""
],
[
"Zhang",
"Wenqian",
""
],
[
"Bai",
"Yuyang",
""
],
[
"Tan",
"Zhaoxuan",
""
],
[
"Feng",
"Shangbin",
""
],
[
"Zheng",
"Qinghua",
""
],
[
"Luo",
"Minnan",
""
]
] |
new_dataset
| 0.999586 |
2304.11422
|
Xiaowen Ma
|
Xiaowen Ma, Jiawei Yang, Tingfeng Hong, Mengting Ma, Ziyan Zhao, Tian
Feng and Wei Zhang
|
STNet: Spatial and Temporal feature fusion network for change detection
in remote sensing images
|
Accepted by ICME 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As an important task in remote sensing image analysis, remote sensing change
detection (RSCD) aims to identify changes of interest in a region from
spatially co-registered multi-temporal remote sensing images, so as to monitor
the local development. Existing RSCD methods usually formulate RSCD as a binary
classification task, representing changes of interest by merely feature
concatenation or feature subtraction and recovering the spatial details via
densely connected change representations, whose performance needs further
improvement. In this paper, we propose STNet, an RSCD network based on spatial
and temporal feature fusion. Specifically, we design a temporal feature fusion
(TFF) module to combine bi-temporal features using a cross-temporal gating
mechanism for emphasizing changes of interest; a spatial feature fusion module
is deployed to capture fine-grained information using a cross-scale attention
mechanism for recovering the spatial details of change representations.
Experimental results on three benchmark datasets for RSCD demonstrate that the
proposed method achieves the state-of-the-art performance. Code is available at
https://github.com/xwmaxwma/rschange.
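As a hedged illustration of cross-temporal gating (the paper's TFF module may
differ in detail), a minimal PyTorch sketch where a sigmoid gate mixes the two
temporal feature maps:

```python
import torch
import torch.nn as nn

class TemporalFeatureFusion(nn.Module):
    """Hypothetical cross-temporal gating: a sigmoid gate decides, per
    location, how much each temporal branch contributes to the change map."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid())

    def forward(self, feat_t1, feat_t2):
        # feat_t1, feat_t2: (B, C, H, W) features from the two dates.
        g = self.gate(torch.cat([feat_t1, feat_t2], dim=1))
        return g * feat_t1 + (1 - g) * feat_t2
```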
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 14:40:41 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Ma",
"Xiaowen",
""
],
[
"Yang",
"Jiawei",
""
],
[
"Hong",
"Tingfeng",
""
],
[
"Ma",
"Mengting",
""
],
[
"Zhao",
"Ziyan",
""
],
[
"Feng",
"Tian",
""
],
[
"Zhang",
"Wei",
""
]
] |
new_dataset
| 0.996173 |
2304.11429
|
Ioannis Mantas
|
Carlos Alegr\'ia, Ioannis Mantas, Evanthia Papadopoulou, Marko
Savi\'c, Carlos Seara, Martin Suderland
|
The Voronoi Diagram of Rotating Rays with applications to Floodlight
Illumination
| null |
In Proceedings of the 29th Annual European Symposium on Algorithms
(ESA 2021), pages 5:1-5:16, 2021
|
10.4230/LIPIcs.ESA.2021.5
| null |
cs.CG cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
We study the Voronoi Diagram of Rotating Rays, a Voronoi structure where the
input sites are rays and the distance function between a point and a site/ray,
is the counterclockwise angular distance. This novel Voronoi diagram is
motivated by illumination or coverage problems, where a domain must be covered
by floodlights/wedges of uniform angle, and the goal is to find the minimum
angle necessary to cover the domain. We study the diagram in the plane, and we
present structural properties, combinatorial complexity bounds, and a
construction algorithm. If the rays are induced by a convex polygon, we show
how to construct the Voronoi diagram within this polygon in linear time. Using
this information, we can find in optimal linear time the Brocard angle, the
minimum angle required to illuminate a convex polygon with floodlights of
uniform angle.
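For the special case of a triangle, the Brocard angle admits the classical
identity cot(w) = cot(A) + cot(B) + cot(C); a small Python check of that case
follows (the paper's linear-time construction for general convex polygons is
not shown):

```python
from math import atan, tan, pi

def brocard_angle(A, B, C):
    """Brocard angle of a triangle from its interior angles (radians),
    via the classical identity cot(w) = cot(A) + cot(B) + cot(C)."""
    cot = lambda x: 1.0 / tan(x)
    return atan(1.0 / (cot(A) + cot(B) + cot(C)))

# Equilateral triangle: the Brocard angle is 30 degrees (pi/6).
print(brocard_angle(pi / 3, pi / 3, pi / 3))  # ~0.5236 rad
```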
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 15:25:01 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Alegría",
"Carlos",
""
],
[
"Mantas",
"Ioannis",
""
],
[
"Papadopoulou",
"Evanthia",
""
],
[
"Savić",
"Marko",
""
],
[
"Seara",
"Carlos",
""
],
[
"Suderland",
"Martin",
""
]
] |
new_dataset
| 0.993601 |
2304.11448
|
Tian Li
|
Tian Li, LU Li, Wei Wang, Zhangchi Feng
|
Dehazing-NeRF: Neural Radiance Fields from Hazy Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Radiance Field (NeRF) has received much attention in recent years due
to the impressively high quality in 3D scene reconstruction and novel view
synthesis. However, image degradation caused by the scattering of atmospheric
light and object light by particles in the atmosphere can significantly
decrease the reconstruction quality when shooting scenes in hazy conditions. To
address this issue, we propose Dehazing-NeRF, a method that can recover clear
NeRF from hazy image inputs. Our method simulates the physical imaging process
of hazy images using an atmospheric scattering model, and jointly learns the
atmospheric scattering model and a clean NeRF model for both image dehazing and
novel view synthesis. Different from previous approaches, Dehazing-NeRF is an
unsupervised method with only hazy images as the input, and also does not rely
on hand-designed dehazing priors. By jointly combining the depth estimated from
the NeRF 3D scene with the atmospheric scattering model, our proposed model
breaks through the ill-posed problem of single-image dehazing while maintaining
geometric consistency. In addition, to alleviate the degradation of image
quality caused by information loss, soft margin consistency regularization, as
well as atmospheric consistency and contrast discriminative losses, are
introduced during model training. Extensive experiments demonstrate that our method
outperforms the simple combination of single-image dehazing and NeRF on both
image dehazing and novel view image synthesis.
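For clarity, a minimal numpy sketch of the atmospheric scattering model
referenced above, I = J*t + A*(1 - t) with t = exp(-beta*d); in the paper,
beta and the airlight A are learned jointly with the NeRF rather than fixed as
here:

```python
import numpy as np

def hazy_image(J, depth, beta=1.0, A=0.8):
    """Atmospheric scattering model: I = J*t + A*(1-t), t = exp(-beta*d).
    J is the clean radiance; depth would come from the NeRF geometry."""
    t = np.exp(-beta * depth)[..., None]   # per-pixel transmission
    return J * t + A * (1.0 - t)
```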
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 17:09:05 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Li",
"Tian",
""
],
[
"Li",
"LU",
""
],
[
"Wang",
"Wei",
""
],
[
"Feng",
"Zhangchi",
""
]
] |
new_dataset
| 0.983041 |
2304.11487
|
Ibrahim Fayad
|
Ibrahim Fayad, Philippe Ciais, Martin Schwartz, Jean-Pierre Wigneron,
Nicolas Baghdadi, Aur\'elien de Truchis, Alexandre d'Aspremont, Frederic
Frappart, Sassan Saatchi, Agnes Pellissier-Tanon and Hassan Bazzi
|
Vision Transformers, a new approach for high-resolution and large-scale
mapping of canopy heights
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Accurate and timely monitoring of forest canopy heights is critical for
assessing forest dynamics, biodiversity, carbon sequestration as well as forest
degradation and deforestation. Recent advances in deep learning techniques,
coupled with the vast amount of spaceborne remote sensing data offer an
unprecedented opportunity to map canopy height at high spatial and temporal
resolutions. Current techniques for wall-to-wall canopy height mapping
correlate remotely sensed 2D information from optical and radar sensors to the
vertical structure of trees using LiDAR measurements. While studies using deep
learning algorithms have shown promising performances for the accurate mapping
of canopy heights, they have limitations due to the type of architectures and
loss functions employed. Moreover, mapping canopy heights over tropical forests
remains poorly studied, and the accurate height estimation of tall canopies is
a challenge due to signal saturation from optical and radar sensors, persistent
cloud covers and sometimes the limited penetration capabilities of LiDARs.
Here, we map heights at 10 m resolution across the diverse landscape of Ghana
with a new vision transformer (ViT) model optimized concurrently with a
classification (discrete) and a regression (continuous) loss function. This
model achieves better accuracy than previously used convolutional based
approaches (ConvNets) optimized with only a continuous loss function. The ViT
model results show that our proposed discrete/continuous loss significantly
increases the sensitivity for very tall trees (i.e., > 35m), for which other
approaches show saturation effects. The height maps generated by the ViT also
have better ground sampling distance and better sensitivity to sparse
vegetation in comparison to a convolutional model. Our ViT model achieves an
RMSE of 3.12 m against a reference dataset, while the ConvNet model has an
RMSE of 4.3 m.
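As an illustrative sketch (not the authors' exact formulation), the combined
discrete/continuous objective could look like the following PyTorch snippet,
with the weighting factor assumed:

```python
import torch
import torch.nn.functional as F

def height_loss(bin_logits, height_pred, bin_labels, height_gt, lam=1.0):
    """Joint classification (height bins) + regression (meters) objective;
    the weighting lam is an illustrative assumption."""
    discrete = F.cross_entropy(bin_logits, bin_labels)      # (B, K) vs (B,)
    continuous = F.mse_loss(height_pred.squeeze(-1), height_gt)
    return discrete + lam * continuous
```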
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 22:39:03 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Fayad",
"Ibrahim",
""
],
[
"Ciais",
"Philippe",
""
],
[
"Schwartz",
"Martin",
""
],
[
"Wigneron",
"Jean-Pierre",
""
],
[
"Baghdadi",
"Nicolas",
""
],
[
"de Truchis",
"Aurélien",
""
],
[
"d'Aspremont",
"Alexandre",
""
],
[
"Frappart",
"Frederic",
""
],
[
"Saatchi",
"Sassan",
""
],
[
"Pellissier-Tanon",
"Agnes",
""
],
[
"Bazzi",
"Hassan",
""
]
] |
new_dataset
| 0.964753 |
2304.11527
|
Jake Buzhardt
|
Jake Buzhardt, Prashanth Chivkula, and Phanindra Tallapragada
|
A Pendulum-Driven Legless Rolling Jumping Robot
|
7 pages, 7 figures. Submitted to IROS 2023. View the supplemental
video at https://youtu.be/9hKQilCpeaw
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a novel rolling, jumping robot. The robot consists
of a driven pendulum mounted to a wheel in a compact, lightweight, 3D printed
design. We show that by using the driven pendulum to change it's weight
distribution, the robot is able to obtain significant rolling speed, achieve
jumps of up to 2.5 body lengths vertically, and also clear horizontal distances
of over 6 body lengths while jumping. The robot's dynamic model is derived and
simulation results indicate that it is consistent with the motion and jumping
observed on the robot. The ability to both roll and jump effectively using a
minimalistic design makes this robot unique and could inspire the use of
similar mechanisms on robots intended for applications in which agile
locomotion on unstructured terrain is necessary, such as disaster response or
planetary exploration.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 03:55:52 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Buzhardt",
"Jake",
""
],
[
"Chivkula",
"Prashanth",
""
],
[
"Tallapragada",
"Phanindra",
""
]
] |
new_dataset
| 0.984665 |
2304.11567
|
Wenxiong Liao
|
Wenxiong Liao, Zhengliang Liu, Haixing Dai, Shaochen Xu, Zihao Wu,
Yiyang Zhang, Xiaoke Huang, Dajiang Zhu, Hongmin Cai, Tianming Liu, Xiang Li
|
Differentiate ChatGPT-generated and Human-written Medical Texts
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Large language models such as ChatGPT are capable of generating
grammatically perfect and human-like text content, and a large number of
ChatGPT-generated texts have appeared on the Internet. However, medical texts
such as clinical notes and diagnoses require rigorous validation, and erroneous
medical content generated by ChatGPT could potentially lead to disinformation
that poses significant harm to healthcare and the general public.
Objective: This research is among the first studies on responsible and
ethical AIGC (Artificial Intelligence Generated Content) in medicine. We focus
on analyzing the differences between medical texts written by human experts and
generated by ChatGPT, and designing machine learning workflows to effectively
detect and differentiate medical texts generated by ChatGPT.
Methods: We first construct a suite of datasets containing medical texts
written by human experts and generated by ChatGPT. In the next step, we analyze
the linguistic features of these two types of content and uncover differences
in vocabulary, part-of-speech, dependency, sentiment, perplexity, etc. Finally,
we design and implement machine learning methods to detect medical text
generated by ChatGPT.
Results: Medical texts written by humans are more concrete, more diverse, and
typically contain more useful information, while medical texts generated by
ChatGPT pay more attention to fluency and logic, and usually express general
terminologies rather than effective information specific to the context of the
problem. A BERT-based model can effectively detect medical texts generated by
ChatGPT, with an F1 score exceeding 95%.
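A hedged sketch of such a BERT-based detector using the Hugging Face API; the
checkpoint name and two-label setup are assumptions, and the fine-tuning loop
is omitted:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # 0: human-written, 1: ChatGPT

texts = ["Patient presents with acute chest pain radiating to the left arm."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)  # per-class probabilities
```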
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 07:38:07 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Liao",
"Wenxiong",
""
],
[
"Liu",
"Zhengliang",
""
],
[
"Dai",
"Haixing",
""
],
[
"Xu",
"Shaochen",
""
],
[
"Wu",
"Zihao",
""
],
[
"Zhang",
"Yiyang",
""
],
[
"Huang",
"Xiaoke",
""
],
[
"Zhu",
"Dajiang",
""
],
[
"Cai",
"Hongmin",
""
],
[
"Liu",
"Tianming",
""
],
[
"Li",
"Xiang",
""
]
] |
new_dataset
| 0.993053 |
2304.11600
|
Nader Meskin Dr.
|
Vahid Hamdipoor, Nader Meskin, and Christos G. Cassandras
|
Safe Control Synthesis Using Environmentally Robust Control Barrier
Functions
| null | null | null | null |
cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study a safe control design for dynamical systems in the
presence of uncertainty in a dynamical environment. The worst-case error
approach is considered to formulate robust Control Barrier Functions (CBFs) in
an optimization-based control synthesis framework. It is first shown that
environmentally robust CBF formulations result in second-order cone programs
(SOCPs). Then, a novel scheme is presented to formulate robust CBFs which takes
the nominally safe control as its desired control input in optimization-based
control design and then tries to minimally modify it whenever the robust CBF
constraint is violated. This proposed scheme leads to quadratic programs (QPs)
which can be easily solved. Finally, the effectiveness of the proposed approach
is demonstrated on an adaptive cruise control example.
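To illustrate the QP formulation, a minimal cvxpy sketch of a CBF safety
filter that minimally modifies the nominal control; the robustness margin eps
and class-K gain alpha are illustrative stand-ins for the paper's worst-case
error terms:

```python
import numpy as np
import cvxpy as cp

def safe_control(u_nom, Lf_h, Lg_h, h, alpha=1.0, eps=0.5):
    """QP safety filter: stay as close as possible to the nominal input
    while enforcing a robustified CBF condition
    Lf_h + Lg_h @ u + alpha * h >= eps (eps: assumed disturbance bound)."""
    u = cp.Variable(len(u_nom))
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)),
                      [Lf_h + Lg_h @ u + alpha * h >= eps])
    prob.solve()
    return u.value

u_safe = safe_control(u_nom=np.array([1.0]), Lf_h=-0.2,
                      Lg_h=np.array([0.5]), h=0.3)
```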
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 10:13:27 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Hamdipoor",
"Vahid",
""
],
[
"Meskin",
"Nader",
""
],
[
"Cassandras",
"Christos G.",
""
]
] |
new_dataset
| 0.994675 |
2304.11619
|
Jonathan Roberts
|
Jonathan Roberts, Kai Han, Samuel Albanie
|
SATIN: A Multi-Task Metadataset for Classifying Satellite Imagery using
Vision-Language Models
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interpreting remote sensing imagery enables numerous downstream applications
ranging from land-use planning to deforestation monitoring. Robustly
classifying this data is challenging due to the Earth's geographic diversity.
While many distinct satellite and aerial image classification datasets exist,
there is yet to be a benchmark curated that suitably covers this diversity. In
this work, we introduce SATellite ImageNet (SATIN), a metadataset curated from
27 existing remotely sensed datasets, and comprehensively evaluate the
zero-shot transfer classification capabilities of a broad range of
vision-language (VL) models on SATIN. We find SATIN to be a challenging
benchmark: the strongest method we evaluate achieves a classification accuracy
of 52.0%. We provide a $\href{https://satinbenchmark.github.io}{\text{public
leaderboard}}$ to guide and track the progress of VL models in this important
domain.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 11:23:05 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Roberts",
"Jonathan",
""
],
[
"Han",
"Kai",
""
],
[
"Albanie",
"Samuel",
""
]
] |
new_dataset
| 0.999576 |
2304.11631
|
Dongjingdian Liu
|
Dongjingdin Liu, Pengpeng Chen, Miao Yao, Yijing Lu, Zijie Cai, Yuxin
Tian
|
TSGCNeXt: Dynamic-Static Multi-Graph Convolution for Efficient
Skeleton-Based Action Recognition with Long-term Learning Potential
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Skeleton-based action recognition has achieved remarkable results in human
action recognition with the development of graph convolutional networks (GCNs).
However, recent works tend to construct complex learning mechanisms with
redundant training, and a bottleneck remains for long time series. To solve
these problems, we propose the Temporal-Spatio Graph ConvNeXt (TSGCNeXt) to
explore efficient learning mechanisms for long temporal skeleton sequences.
First, a new graph learning mechanism with a simple structure, Dynamic-Static
Separate Multi-graph Convolution (DS-SMG), is proposed to aggregate features
of multiple independent topological graphs and avoid node information being
ignored during dynamic convolution. Next, we construct a graph convolution
training acceleration mechanism to optimize the back-propagation computation
of dynamic graph learning, achieving a 55.08\% speed-up. Finally, TSGCNeXt
restructures the overall architecture of the GCN with three spatio-temporal
learning modules, efficiently modeling long temporal features. Compared with
existing methods on the large-scale datasets NTU RGB+D 60 and 120, TSGCNeXt
outperforms them with single-stream networks. In addition, with the EMA model
introduced into the multi-stream fusion, TSGCNeXt achieves state-of-the-art
levels. On the cross-subject and cross-set splits of NTU RGB+D 120, accuracies
reach 90.22% and 91.74%, respectively.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 12:10:36 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Liu",
"Dongjingdin",
""
],
[
"Chen",
"Pengpeng",
""
],
[
"Yao",
"Miao",
""
],
[
"Lu",
"Yijing",
""
],
[
"Cai",
"Zijie",
""
],
[
"Tian",
"Yuxin",
""
]
] |
new_dataset
| 0.973392 |
2304.11636
|
Markus Borg
|
Markus Borg and Adam Tornhill and Enys Mones
|
U Owns the Code That Changes and How Marginal Owners Resolve Issues
Slower in Low-Quality Source Code
|
Accepted for publication in the Proc. of the 27th International
Conference on Evaluation and Assessment in Software Engineering
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
[Context] Accurate time estimation is a critical aspect of predictable
software engineering. Previous work shows that low source code quality
increases the uncertainty in issue resolution times. [Objective] Our goal is to
evaluate how developers' project experience and file ownership are related to
issue resolution times. [Method] We mine 40 proprietary software repositories
and conduct an observational study. Using CodeScene, we measure source code
quality and active development time connected to Jira issues. [Results] Most
source code changes are made by either a marginal or dominant code owner. Also,
most changes to low-quality source code are made by developers with low levels
of ownership. In low-quality source code, marginal owners need 45\% more time
for small changes, and 93\% more time for large changes. [Conclusions]
Collective code ownership is a popular target, but industry practice results in
many dominant and marginal owners. Marginal owners are particularly hampered
when working with low-quality source code, which leads to productivity losses.
In codebases plagued by technical debt, newly onboarded developers will require
more time to complete tasks.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 12:38:48 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Borg",
"Markus",
""
],
[
"Tornhill",
"Adam",
""
],
[
"Mones",
"Enys",
""
]
] |
new_dataset
| 0.994 |
2304.11662
|
Hongyu Sun
|
Hongyu Sun, Yongcai Wang, Xudong Cai, Peng Wang, Zhe Huang, Deying Li,
Yu Shao, Shuo Wang
|
AirBirds: A Large-scale Challenging Dataset for Bird Strike Prevention
in Real-world Airports
|
17 pages, 9 figures, 3 tables; accepted by ACCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
One fundamental limitation to the research of bird strike prevention is the
lack of a large-scale dataset taken directly from real-world airports. Existing
relevant datasets are either small in size or not dedicated for this purpose.
To advance the research and practical solutions for bird strike prevention, in
this paper, we present a large-scale challenging dataset AirBirds that consists
of 118,312 time-series images, where a total of 409,967 bounding boxes of
flying birds are manually and carefully annotated. The average size of all
annotated instances is smaller than 10 pixels in 1920x1080 images. Images in
the dataset are captured over 4 seasons of a whole year by a network of cameras
deployed at a real-world airport, covering diverse bird species, lighting
conditions and 13 meteorological scenarios. To the best of our knowledge, it is
the first large-scale image dataset that directly collects flying birds in
real-world airports for bird strike prevention. This dataset is publicly
available at https://airbirdsdata.github.io/.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 14:19:28 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Sun",
"Hongyu",
""
],
[
"Wang",
"Yongcai",
""
],
[
"Cai",
"Xudong",
""
],
[
"Wang",
"Peng",
""
],
[
"Huang",
"Zhe",
""
],
[
"Li",
"Deying",
""
],
[
"Shao",
"Yu",
""
],
[
"Wang",
"Shuo",
""
]
] |
new_dataset
| 0.999879 |
2304.11664
|
Arash Ghafouri
|
Arash Ghafouri, Hasan Naderi, Mohammad Aghajani asl and Mahdi
Firouzmandi
|
IslamicPCQA: A Dataset for Persian Multi-hop Complex Question Answering
in Islamic Text Resources
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Nowadays, one of the main challenges for Question Answering Systems is to
answer complex questions using various sources of information. Multi-hop
questions are a type of complex questions that require multi-step reasoning to
answer. In this article, the IslamicPCQA dataset is introduced. This is the
first Persian dataset for answering complex questions based on non-structured
information sources and consists of 12,282 question-answer pairs extracted from
9 Islamic encyclopedias. This dataset has been created inspired by the HotpotQA
English dataset approach, which was customized to suit the complexities of the
Persian language. Answering the questions in this dataset requires reasoning
over more than one paragraph. The questions are not limited to any prior knowledge
base or ontology, and to provide robust reasoning ability, the dataset also
includes supporting facts and key sentences. The prepared dataset covers a wide
range of Islamic topics and aims to facilitate answering complex Persian
questions within this subject matter.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 14:20:58 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Ghafouri",
"Arash",
""
],
[
"Naderi",
"Hasan",
""
],
[
"asl",
"Mohammad Aghajani",
""
],
[
"Firouzmandi",
"Mahdi",
""
]
] |
new_dataset
| 0.999815 |
2304.11677
|
Guolei Sun
|
Guolei Sun, Zhaochong An, Yun Liu, Ce Liu, Christos Sakaridis,
Deng-Ping Fan, Luc Van Gool
|
Indiscernible Object Counting in Underwater Scenes
|
To appear in CVPR 2023. The resources are available at
https://github.com/GuoleiSun/Indiscernible-Object-Counting
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, indiscernible scene understanding has attracted a lot of attention
in the vision community. We further advance the frontier of this field by
systematically studying a new challenge named indiscernible object counting
(IOC), the goal of which is to count objects that are blended with respect to
their surroundings. Due to a lack of appropriate IOC datasets, we present a
large-scale dataset IOCfish5K which contains a total of 5,637 high-resolution
images and 659,024 annotated center points. Our dataset consists of a large
number of indiscernible objects (mainly fish) in underwater scenes, making the
annotation process all the more challenging. IOCfish5K is superior to existing
datasets with indiscernible scenes because of its larger scale, higher image
resolutions, more annotations, and denser scenes. All these aspects make it the
most challenging dataset for IOC so far, supporting progress in this area. For
benchmarking purposes, we select 14 mainstream methods for object counting and
carefully evaluate them on IOCfish5K. Furthermore, we propose IOCFormer, a new
strong baseline that combines density and regression branches in a unified
framework and can effectively tackle object counting under concealed scenes.
Experiments show that IOCFormer achieves state-of-the-art scores on IOCfish5K.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 15:09:02 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Sun",
"Guolei",
""
],
[
"An",
"Zhaochong",
""
],
[
"Liu",
"Yun",
""
],
[
"Liu",
"Ce",
""
],
[
"Sakaridis",
"Christos",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Van Gool",
"Luc",
""
]
] |
new_dataset
| 0.999747 |
2304.11685
|
Mathias Ibsen
|
Magnus Falkenberg, Anders Bensen Ottsen, Mathias Ibsen, Christian
Rathgeb
|
Child Face Recognition at Scale: Synthetic Data Generation and
Performance Benchmark
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the need for a large-scale database of children's faces by using
generative adversarial networks (GANs) and face age progression (FAP) models to
synthesize a realistic dataset referred to as HDA-SynChildFaces. To this end,
we propose a processing pipeline that initially utilizes StyleGAN3 to sample
adult subjects, which are subsequently progressed to children of varying ages
using InterFaceGAN. Intra-subject variations, such as facial expression and
pose, are created by further manipulating the subjects in their latent space.
Additionally, the presented pipeline allows the races of subjects to be evenly
distributed, enabling the generation of a balanced and fair dataset with
respect to race distribution. The created HDA-SynChildFaces consists of 1,652 subjects and a
total of 188,832 images, each subject being present at various ages and with
many different intra-subject variations. Subsequently, we evaluate the
performance of various facial recognition systems on the generated database and
compare the results of adults and children at different ages. The study reveals
that children consistently perform worse than adults, on all tested systems,
and the degradation in performance is proportional to age. Additionally, our
study uncovers some biases in the recognition systems, with Asian and Black
subjects and females performing worse than White and Latino Hispanic subjects
and males.
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 15:29:26 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Falkenberg",
"Magnus",
""
],
[
"Ottsen",
"Anders Bensen",
""
],
[
"Ibsen",
"Mathias",
""
],
[
"Rathgeb",
"Christian",
""
]
] |
new_dataset
| 0.99385 |
2304.11688
|
Wei Ju
|
Wei Ju, Xiao Luo, Meng Qu, Yifan Wang, Chong Chen, Minghua Deng,
Xian-Sheng Hua, Ming Zhang
|
TGNN: A Joint Semi-supervised Framework for Graph-level Classification
|
Accepted by Proceedings of the Thirty-First International Joint
Conference on Artificial Intelligence (IJCAI 2022)
| null | null | null |
cs.LG cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies semi-supervised graph classification, a crucial task with
a wide range of applications in social network analysis and bioinformatics.
Recent works typically adopt graph neural networks to learn graph-level
representations for classification, failing to explicitly leverage features
derived from graph topology (e.g., paths). Moreover, when labeled data is
scarce, these methods are far from satisfactory due to their insufficient
topology exploration of unlabeled data. We address the challenge by proposing a
novel semi-supervised framework called Twin Graph Neural Network (TGNN). To
explore graph structural information from complementary views, our TGNN has a
message passing module and a graph kernel module. To fully utilize unlabeled
data, for each module, we calculate the similarity of each unlabeled graph to
other labeled graphs in the memory bank and our consistency loss encourages
consistency between two similarity distributions in different embedding spaces.
The two twin modules collaborate with each other by exchanging instance
similarity knowledge to fully explore the structure information of both labeled
and unlabeled data. We evaluate our TGNN on various public datasets and show
that it achieves strong performance.
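As a rough sketch of the consistency idea (names, shapes, and the temperature
are assumptions), the two similarity distributions over the labeled memory
bank can be pushed to agree as follows:

```python
import torch
import torch.nn.functional as F

def consistency_loss(emb_gnn, emb_kernel, memory_gnn, memory_kernel, tau=0.1):
    """Similarity distributions of unlabeled graphs (B, d) to the labeled
    memory bank (M, d), computed in both embedding spaces, pushed to agree."""
    p = F.softmax(emb_gnn @ memory_gnn.t() / tau, dim=-1)      # (B, M)
    q = F.softmax(emb_kernel @ memory_kernel.t() / tau, dim=-1)
    # KL(p || q): make the kernel-space distribution match the GNN one.
    return F.kl_div(q.log(), p, reduction="batchmean")
```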
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 15:42:11 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Ju",
"Wei",
""
],
[
"Luo",
"Xiao",
""
],
[
"Qu",
"Meng",
""
],
[
"Wang",
"Yifan",
""
],
[
"Chen",
"Chong",
""
],
[
"Deng",
"Minghua",
""
],
[
"Hua",
"Xian-Sheng",
""
],
[
"Zhang",
"Ming",
""
]
] |
new_dataset
| 0.983515 |
2304.11708
|
Wonjun Yi
|
Wonjun Yi, Jung-Woo Choi and Jae-Woo Lee
|
Sound-based drone fault classification using multitask learning
|
Accepted at 29th International Congress on Sound and Vibration
(ICSV29). Dataset available: https://zenodo.org/record/7779574#.ZEVncnZBwQ-
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Drones are used for various purposes, including military applications, aerial
photography, and pesticide spraying. However, drones are vulnerable to
external disturbances, and malfunctions in propellers and motors can easily
occur. To improve the safety of drone operations, mechanical faults should be
detected in real time. This paper proposes a sound-based deep neural network
(DNN) fault classifier and a drone sound dataset. The dataset
was constructed by collecting the operating sounds of drones from microphones
mounted on three different drones in an anechoic chamber. The dataset includes
various operating conditions of drones, such as flight directions (front, back,
right, left, clockwise, counterclockwise) and faults on propellers and motors.
The drone sounds were then mixed with noises recorded in five different spots
on the university campus, with a signal-to-noise ratio (SNR) varying from 10 dB
to 15 dB. Using the acquired dataset, we train a DNN classifier, 1DCNN-ResNet,
that classifies the types of mechanical faults and their locations from
short-time input waveforms. We employ multitask learning (MTL) and incorporate
the direction classification task as an auxiliary task to make the classifier
learn more general audio features. The test over unseen data reveals that the
proposed multitask model can successfully classify faults in drones and
outperforms single-task models even with less training data.
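For intuition, a minimal PyTorch sketch of a shared encoder with a main fault
head and an auxiliary direction head; the layer sizes and auxiliary weight are
assumptions, not the paper's 1DCNN-ResNet:

```python
import torch.nn as nn
import torch.nn.functional as F

class MultitaskClassifier(nn.Module):
    """Shared 1D-CNN encoder with two heads: fault type (main task)
    and flight direction (auxiliary task). Sizes are illustrative."""
    def __init__(self, n_faults, n_dirs):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.fault_head = nn.Linear(32, n_faults)
        self.dir_head = nn.Linear(32, n_dirs)

    def forward(self, wave):                  # wave: (B, 1, T)
        z = self.encoder(wave)
        return self.fault_head(z), self.dir_head(z)

def multitask_loss(fault_logits, dir_logits, fault_y, dir_y, aux_w=0.3):
    # Main fault loss plus a weighted auxiliary direction loss.
    return (F.cross_entropy(fault_logits, fault_y)
            + aux_w * F.cross_entropy(dir_logits, dir_y))
```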
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 17:55:40 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Yi",
"Wonjun",
""
],
[
"Choi",
"Jung-Woo",
""
],
[
"Lee",
"Jae-Woo",
""
]
] |
new_dataset
| 0.999644 |
2304.11743
|
Hoang Le
|
Hoang M. Le, Brian Price, Scott Cohen, Michael S. Brown
|
GamutMLP: A Lightweight MLP for Color Loss Recovery
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cameras and image-editing software often process images in the wide-gamut
ProPhoto color space, encompassing 90% of all visible colors. However, when
images are encoded for sharing, this color-rich representation is transformed
and clipped to fit within the small-gamut standard RGB (sRGB) color space,
representing only 30% of visible colors. Recovering the lost color information
is challenging due to the clipping procedure. Inspired by neural implicit
representations for 2D images, we propose a method that optimizes a lightweight
multi-layer-perceptron (MLP) model during the gamut reduction step to predict
the clipped values. GamutMLP takes approximately 2 seconds to optimize and
requires only 23 KB of storage. The small memory footprint allows our GamutMLP
model to be saved as metadata in the sRGB image -- the model can be extracted
when needed to restore wide-gamut color values. We demonstrate the
effectiveness of our approach for color recovery and compare it with
alternative strategies, including pre-trained DNN-based gamut expansion
networks and other implicit neural representation methods. As part of this
effort, we introduce a new color gamut dataset of 2200 wide-gamut/small-gamut
images for training and testing. Our code and dataset can be found on the
project website: https://gamut-mlp.github.io.
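A hedged PyTorch sketch of the core idea of a tiny coordinate MLP predicting
the restoration of clipped values; the exact input encoding and layer sizes
here are assumptions:

```python
import torch
import torch.nn as nn

class GamutMLP(nn.Module):
    """Tiny coordinate MLP: maps (x, y, clipped sRGB) to a residual that
    restores wide-gamut values. Sizes chosen to stay in the tens-of-KB
    range; the exact architecture is an assumption."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, xy, rgb):               # per-pixel inputs in [0, 1]
        return rgb + self.net(torch.cat([xy, rgb], dim=-1))
```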
|
[
{
"version": "v1",
"created": "Sun, 23 Apr 2023 20:26:11 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Le",
"Hoang M.",
""
],
[
"Price",
"Brian",
""
],
[
"Cohen",
"Scott",
""
],
[
"Brown",
"Michael S.",
""
]
] |
new_dataset
| 0.997094 |
2304.11796
|
Min Hua
|
Dongmei Wu, Yuying Guan, Xin Xia, Changqing Du, Fuwu Yan, Yang Li, Min
Hua, Wei Liu
|
Coordinated Control of Path Tracking and Yaw Stability for Distributed
Drive Electric Vehicle Based on AMPC and DYC
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maintaining both path-tracking accuracy and yaw stability of distributed
drive electric vehicles (DDEVs) under various driving conditions presents a
significant challenge in the field of vehicle control. To address this
limitation, a coordinated control strategy that integrates adaptive model
predictive control (AMPC) path-tracking control and direct yaw moment control
(DYC) is proposed for DDEVs. The proposed strategy, inspired by a hierarchical
framework, is coordinated by the upper layer of path-tracking control and the
lower layer of direct yaw moment control. Based on the linear time-varying
model predictive control (LTV MPC) algorithm, the effects of prediction horizon
and weight coefficients on the path-tracking accuracy and yaw stability of the
vehicle are compared and analyzed first. According to the aforementioned
analysis, an AMPC path-tracking controller with variable prediction horizon and
weight coefficients is designed in the upper layer, considering variations in
vehicle speed. The lower layer involves DYC based on the linear quadratic
regulator (LQR) technique. Specifically, the intervention rule of DYC is
determined by the threshold of the yaw rate error and the phase diagram of the
sideslip angle. Extensive simulation experiments are conducted to evaluate the
proposed coordinated control strategy under different driving conditions. The
results show that, under variable speed and low adhesion conditions, the
vehicle's yaw stability and path-tracking accuracy have been improved by
21.58\% and 14.43\%, respectively, compared to AMPC. Similarly, under high
speed and low adhesion conditions, the vehicle's yaw stability and
path-tracking accuracy have been improved by 44.30\% and 14.25\%, respectively,
compared to the coordination of LTV MPC and DYC. The results indicate that the
proposed adaptive path-tracking controller is effective across different
speeds.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 02:50:46 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Wu",
"Dongmei",
""
],
[
"Guan",
"Yuying",
""
],
[
"Xia",
"Xin",
""
],
[
"Du",
"Changqing",
""
],
[
"Yan",
"Fuwu",
""
],
[
"Li",
"Yang",
""
],
[
"Hua",
"Min",
""
],
[
"Liu",
"Wei",
""
]
] |
new_dataset
| 0.998417 |
2304.11812
|
Guangzhe Hou
|
Guangzhe Hou, Guihe Qin, Minghui Sun, Yanhua Liang, Jie Yan, Zhonghan
Zhang
|
NoiseTrans: Point Cloud Denoising with Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point clouds obtained from capture devices or 3D reconstruction techniques
are often noisy and interfere with downstream tasks. The paper aims to recover
the underlying surface of noisy point clouds. We design a novel model,
NoiseTrans, which uses transformer encoder architecture for point cloud
denoising. Specifically, we capture the structural similarity within point
clouds with the assistance of the transformer's core self-attention mechanism.
By expressing the noisy point cloud as a set of unordered vectors, we convert
point clouds into point embeddings and employ Transformer to generate clean
point clouds. To make the Transformer preserve details when sensing the point
cloud, we design the Local Point Attention to prevent the point cloud from
being over-smoothed. In addition, we propose sparse encoding, which enables
the Transformer to better perceive the structural relationships of the point
cloud and improve the denoising performance. Experiments show that our model
outperforms state-of-the-art methods in various datasets and noise
environments.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 04:01:23 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Hou",
"Guangzhe",
""
],
[
"Qin",
"Guihe",
""
],
[
"Sun",
"Minghui",
""
],
[
"Liang",
"Yanhua",
""
],
[
"Yan",
"Jie",
""
],
[
"Zhang",
"Zhonghan",
""
]
] |
new_dataset
| 0.995809 |
2304.11827
|
Shivansh Walia
|
Shivansh Walia, Tejas Iyer, Shubham Tripathi and Akshith Vanaparthy
|
Safe and Secure Smart Home using Cisco Packet Tracer
|
11 pages
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This project presents the design and implementation of a safe, secure, and
smart home with enhanced security features using IoT-based technology. Our
motivation came from the growing movement toward smart homes and smart-home
design. We undertook this work to give homeowners greater control over their
in-house environment while providing more safety and security for its
residents. The smart-home prototype is intended to integrate many kinds of
sensors and boards, along with advanced IoT devices and programming languages,
which together enable control and monitoring of the discrete electronic
devices present in the home.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 05:29:08 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Walia",
"Shivansh",
""
],
[
"Iyer",
"Tejas",
""
],
[
"Tripathi",
"Shubham",
""
],
[
"Vanaparthy",
"Akshith",
""
]
] |
new_dataset
| 0.990804 |
2304.11858
|
Juan Tapia Dr.
|
Pamela C. Zurita, Daniel P. Benalcazar, Juan E. Tapia
|
Fitness-for-Duty Classification using Temporal Sequences of Iris
Periocular images
| null | null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Fitness for Duty (FFD) techniques detect whether a subject is Fit to perform
their work safely, meaning that their alertness is not reduced, or Unfit,
meaning that their alertness is reduced by sleepiness or the consumption of
alcohol and drugs. Human iris behaviour provides valuable
information to predict FFD since pupil and iris movements are controlled by the
central nervous system and are influenced by illumination, fatigue, alcohol,
and drugs. This work aims to classify FFD using sequences of 8 iris images and
to extract spatial and temporal information using Convolutional Neural Networks
(CNN) and Long Short Term Memory Networks (LSTM). Our results achieved a
precision of 81.4\% and 96.9\% for the prediction of Fit and Unfit subjects,
respectively. The results also show that it is possible to determine whether a
subject is under the influence of alcohol, drugs, or sleepiness, with
sleepiness the most difficult condition to identify. This system opens a
different insight into iris biometric applications.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 07:14:46 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Zurita",
"Pamela C.",
""
],
[
"Benalcazar",
"Daniel P.",
""
],
[
"Tapia",
"Juan E.",
""
]
] |
new_dataset
| 0.973368 |
2304.11868
|
Mingjie Li
|
Mingjie Li, Tharindu Rathnayake, Ben Beck, Lingheng Meng, Zijue Chen,
Akansel Cosgun, Xiaojun Chang, Dana Kuli\'c
|
A Benchmark for Cycling Close Pass Near Miss Event Detection from Video
Streams
|
15 pages, 19 figures and 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cycling is a healthy and sustainable mode of transport. However, interactions
with motor vehicles remain a key barrier to increased cycling participation.
The ability to detect potentially dangerous interactions from on-bike sensing
could provide important information to riders and policy makers. Thus,
automated detection of conflict between cyclists and drivers has attracted
researchers from both computer vision and road safety communities. In this
paper, we introduce a novel benchmark, called Cyc-CP, towards cycling close
pass near miss event detection from video streams. We first divide this task
into scene-level and instance-level problems. Scene-level detection asks an
algorithm to predict whether there is a close pass near miss event in the input
video clip. Instance-level detection aims to detect which vehicle in the scene
gives rise to a close pass near miss. We propose two benchmark models based on
deep learning techniques for these two problems. For training and testing those
models, we construct a synthetic dataset and also collect a real-world dataset.
Our models can achieve 88.13% and 84.60% accuracy on the real-world dataset,
respectively. We envision this benchmark as a test-bed to accelerate cycling
close pass near miss detection and facilitate interaction between the fields of
road safety, intelligent transportation systems and artificial intelligence.
Both the benchmark datasets and detection models will be available at
https://github.com/SustainableMobility/cyc-cp to facilitate experimental
reproducibility and encourage more in-depth research in the field.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 07:30:01 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Li",
"Mingjie",
""
],
[
"Rathnayake",
"Tharindu",
""
],
[
"Beck",
"Ben",
""
],
[
"Meng",
"Lingheng",
""
],
[
"Chen",
"Zijue",
""
],
[
"Cosgun",
"Akansel",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Kulić",
"Dana",
""
]
] |
new_dataset
| 0.999785 |
2304.11892
|
Gilles Dowek
|
Gilles Dowek, Ying Jiang
|
On the Expressive Power of Schemes
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a calculus, called the scheme-calculus, that permits expressing
natural deduction proofs in various theories. Unlike $\lambda$-calculus, the
syntax of this calculus sticks closely to the syntax of proofs, in particular,
no names are introduced for the hypotheses. We show that despite its
non-determinism, some typed scheme-calculi have the same expressivity as the
corresponding typed $\lambda$-calculi.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 07:59:31 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Dowek",
"Gilles",
""
],
[
"Jiang",
"Ying",
""
]
] |
new_dataset
| 0.987485 |
2304.11924
|
Ivan Srba
|
Timo Hromadka, Timotej Smolen, Tomas Remis, Branislav Pecher, Ivan
Srba
|
KInITVeraAI at SemEval-2023 Task 3: Simple yet Powerful Multilingual
Fine-Tuning for Persuasion Techniques Detection
|
System paper within SemEval 2023 Task 3 on the subtask 3
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the best-performing solution to the SemEval 2023 Task 3
on subtask 3, dedicated to persuasion techniques detection. Due to the highly
multilingual character of the input data and the large number of 23 predicted
labels (causing a lack of labelled data for some language-label combinations),
we opted for fine-tuning pre-trained transformer-based language models.
Conducting multiple experiments, we find the best configuration, which consists
of a large multilingual model (XLM-RoBERTa large) trained jointly on all input
data, with carefully calibrated confidence thresholds for seen and surprise
languages separately. Our final system performed the best on 6 out of 9
languages (including two surprise languages) and achieved highly competitive
results on the remaining three languages.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 09:06:43 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Hromadka",
"Timo",
""
],
[
"Smolen",
"Timotej",
""
],
[
"Remis",
"Tomas",
""
],
[
"Pecher",
"Branislav",
""
],
[
"Srba",
"Ivan",
""
]
] |
new_dataset
| 0.989782 |
2304.11940
|
Arthur Vervaet
|
Arthur Vervaet
|
MoniLog: An Automated Log-Based Anomaly Detection System for Cloud
Computing Infrastructures
| null |
IEEE 37th International Conference on Data Engineering (ICDE),
2021
| null | null |
cs.AI cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Within today's large-scale systems, one anomaly can impact millions of users.
Detecting such events in real-time is essential to maintain the quality of
services. It allows the monitoring team to prevent or diminish the impact of a
failure. Logs are a core part of software development and maintenance, by
recording detailed information at runtime. Such log data are universally
available in nearly all computer systems. They enable developers as well as
system maintainers to monitor and dissect anomalous events. For Cloud computing
companies and large online platforms in general, growth is linked to the
scaling potential. Automating the anomaly detection process is a promising
way to ensure that monitoring capacities scale with the increasing volume of
logs generated by modern systems. In this paper, we introduce MoniLog, a
distributed approach to detect real-time anomalies within
large-scale environments. It aims to detect sequential and quantitative
anomalies within a multi-source log stream. MoniLog is designed to structure a
log stream and perform the monitoring of anomalous sequences. Its output
classifier learns from the administrator's actions to label and evaluate the
criticality level of anomalies.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 09:21:52 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Vervaet",
"Arthur",
""
]
] |
new_dataset
| 0.989951 |
2304.11952
|
Francois Durand
|
Emma Caizergues (LINCS), Fran\c{c}ois Durand (LINCS), Fabien Mathieu
(LINCS)
|
Sorting wild pigs
|
in French language, AlgoTel 2023 - 25{\`e}mes Rencontres Francophones
sur les Aspects Algorithmiques des T{\'e}l{\'e}communications, May 2023,
Cargese, France
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Chjara, breeder in Carg{\`e}se, has n wild pigs. She would like to sort her
herd by weight to better meet the demands of her buyers. Each beast has a
distinct weight, alas unknown to Chjara. All she has at her disposal is a
Roberval scale, which allows her to compare two pigs only at the cost of an
acrobatic manoeuvre. The balance, quite old, can break at any time. Chjara
therefore wants to sort her herd in a minimum of weighings, but also to have a
good estimate of the result after each weighing. To help Chjara, we pose the
problem of finding a good anytime sorting algorithm, in the sense of Kendall's
tau distance between the provisional result and the perfectly sorted list, and
we bring the following contributions: (i) we introduce Corsort, a family of
anytime sorting algorithms based on estimators; (ii) by simulation, we show
that a well-configured
Corsort has a near-optimal termination time, and provides better intermediate
estimates than the best sorting algorithms we are aware of.
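Since intermediate estimates are judged by Kendall's tau distance, a minimal
sketch of that metric follows; the helper name and example are illustrative
assumptions, and Corsort itself is not reproduced here.

```python
from itertools import combinations

def kendall_tau_distance(estimate, truth):
    """Number of discordant pairs between two rankings of the same items:
    item pairs whose relative order differs between the two lists."""
    pos_est = {item: i for i, item in enumerate(estimate)}
    pos_tru = {item: i for i, item in enumerate(truth)}
    discordant = 0
    for a, b in combinations(truth, 2):
        if (pos_est[a] - pos_est[b]) * (pos_tru[a] - pos_tru[b]) < 0:
            discordant += 1
    return discordant

# One adjacent swap away from the sorted order gives distance 1.
print(kendall_tau_distance([1, 3, 2, 4], [1, 2, 3, 4]))  # 1
```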
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 09:41:21 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Caizergues",
"Emma",
"",
"LINCS"
],
[
"Durand",
"François",
"",
"LINCS"
],
[
"Mathieu",
"Fabien",
"",
"LINCS"
]
] |
new_dataset
| 0.999204 |
2304.11970
|
Zerui Chen
|
Zerui Chen, Shizhe Chen, Cordelia Schmid, Ivan Laptev
|
gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object
Reconstruction
|
Accepted by CVPR 2023. Project Page:
https://zerchen.github.io/projects/gsdf.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Signed distance functions (SDFs) are an attractive framework that has recently
shown promising results for 3D shape reconstruction from images. SDFs
seamlessly generalize to different shape resolutions and topologies but lack
explicit modelling of the underlying 3D geometry. In this work, we exploit the
hand structure and use it as guidance for SDF-based shape reconstruction. In
particular, we address reconstruction of hands and manipulated objects from
monocular RGB images. To this end, we estimate poses of hands and objects and
use them to guide 3D reconstruction. More specifically, we predict kinematic
chains of pose transformations and align SDFs with highly-articulated hand
poses. We improve the visual features of 3D points with geometry alignment and
further leverage temporal information to enhance the robustness to occlusion
and motion blurs. We conduct extensive experiments on the challenging ObMan and
DexYCB benchmarks and demonstrate significant improvements of the proposed
method over the state of the art.
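For readers unfamiliar with the representation, the following is a minimal
sketch of what a signed distance function is, using an analytic sphere; it is
illustrative only and is not the learned SDF described in the paper.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """SDF of a sphere: negative inside, zero on the surface, positive
    outside; the zero level set is the reconstructed surface."""
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 2.0]])
print(sphere_sdf(pts, center=np.zeros(3), radius=1.0))  # [-1.  1.]
```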
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 10:05:48 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Chen",
"Zerui",
""
],
[
"Chen",
"Shizhe",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Laptev",
"Ivan",
""
]
] |
new_dataset
| 0.997594 |
2304.11975
|
Yin-Dong Zheng
|
Yin-Dong Zheng, Guo Chen, Minglei Yuan, Tong Lu
|
MRSN: Multi-Relation Support Network for Video Action Detection
|
6 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Action detection is a challenging video understanding task, requiring
modeling spatio-temporal and interaction relations. Current methods usually
model actor-actor and actor-context relations separately, ignoring their
complementarity and mutual support. To solve this problem, we propose a novel
network called Multi-Relation Support Network (MRSN). In MRSN, Actor-Context
Relation Encoder (ACRE) and Actor-Actor Relation Encoder (AARE) model the
actor-context and actor-actor relation separately. Then Relation Support
Encoder (RSE) computes the supports between the two relations and performs
relation-level interactions. Finally, Relation Consensus Module (RCM) enhances
two relations with the long-term relations from the Long-term Relation Bank
(LRB) and yields a consensus. Our experiments demonstrate that modeling
relations separately and performing relation-level interactions can match and
even outperform state-of-the-art results on two challenging video datasets: AVA
and UCF101-24.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 10:15:31 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Zheng",
"Yin-Dong",
""
],
[
"Chen",
"Guo",
""
],
[
"Yuan",
"Minglei",
""
],
[
"Lu",
"Tong",
""
]
] |
new_dataset
| 0.99906 |
2304.12008
|
Peipeng Yu
|
Peipeng Yu, Jiahan Chen, Xuan Feng, Zhihua Xia
|
CHEAT: A Large-scale Dataset for Detecting ChatGPT-writtEn AbsTracts
|
9 pages, 6 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The powerful ability of ChatGPT has caused widespread concern in the academic
community. Malicious users could synthesize dummy academic content through
ChatGPT, which is extremely harmful to academic rigor and originality. The need
to develop ChatGPT-written content detection algorithms calls for large-scale
datasets. In this paper, we initially investigate the possible negative impact
of ChatGPT on academia, and present a large-scale CHatGPT-writtEn AbsTract
dataset (CHEAT) to support the development of detection algorithms. In
particular, the ChatGPT-written abstract dataset contains 35,304 synthetic
abstracts, with Generation, Polish, and Mix as prominent representatives. Based
on these data, we perform a thorough analysis of the existing text synthesis
detection algorithms. We show that ChatGPT-written abstracts are detectable,
while the detection difficulty increases with human involvement.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 11:19:33 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Yu",
"Peipeng",
""
],
[
"Chen",
"Jiahan",
""
],
[
"Feng",
"Xuan",
""
],
[
"Xia",
"Zhihua",
""
]
] |
new_dataset
| 0.999403 |
2304.12026
|
Haolan Zhan
|
Haolan Zhan and Zhuang Li and Yufei Wang and Linhao Luo and Tao Feng
and Xiaoxi Kang and Yuncheng Hua and Lizhen Qu and Lay-Ki Soon and Suraj
Sharma and Ingrid Zukerman and Zhaleh Semnani-Azad and Gholamreza Haffari
|
SocialDial: A Benchmark for Socially-Aware Dialogue Systems
|
Accepted by SIGIR 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialogue systems have been widely applied in many scenarios and are now more
powerful and ubiquitous than ever before. With large neural models and massive
available data, current dialogue systems have access to more knowledge than any
person could acquire in a lifetime. However, current dialogue systems still do
not perform at
a human level. One major gap between conversational agents and humans lies in
their abilities to be aware of social norms. The development of socially-aware
dialogue systems is impeded due to the lack of resources. In this paper, we
present the first socially-aware dialogue corpus - SocialDial, based on Chinese
social culture. SocialDial consists of two parts: 1,563 multi-turn dialogues
between two human speakers with fine-grained labels, and 4,870 synthetic
conversations generated by ChatGPT. The human corpus covers five categories of
social norms, which have 14 sub-categories in total. Specifically, it contains
social factor annotations including social relation, context, social distance,
and social norms. However, collecting sufficient socially-aware dialogues is
costly. Thus, we harness the power of ChatGPT and devise an ontology-based
synthetic data generation framework. This framework is able to generate
synthetic data at scale. To ensure the quality of synthetic dialogues, we
design several mechanisms for quality control during data collection. Finally,
we evaluate our dataset using several pre-trained models, such as BERT and
RoBERTa. Comprehensive empirical results based on state-of-the-art neural
models demonstrate that modeling of social norms for dialogue systems is a
promising research direction. To the best of our knowledge, SocialDial is the
first socially-aware dialogue dataset that covers multiple social factors and
has fine-grained labels.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 11:55:22 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Zhan",
"Haolan",
""
],
[
"Li",
"Zhuang",
""
],
[
"Wang",
"Yufei",
""
],
[
"Luo",
"Linhao",
""
],
[
"Feng",
"Tao",
""
],
[
"Kang",
"Xiaoxi",
""
],
[
"Hua",
"Yuncheng",
""
],
[
"Qu",
"Lizhen",
""
],
[
"Soon",
"Lay-Ki",
""
],
[
"Sharma",
"Suraj",
""
],
[
"Zukerman",
"Ingrid",
""
],
[
"Semnani-Azad",
"Zhaleh",
""
],
[
"Haffari",
"Gholamreza",
""
]
] |
new_dataset
| 0.992817 |
2304.12031
|
Yi Feng
|
Yi Feng, Bohuan Xue, Ming Liu, Qijun Chen, Rui Fan
|
D2NT: A High-Performing Depth-to-Normal Translator
|
Accepted to ICRA 2023. The source code, demo video, and supplement
are publicly available at mias.group/D2NT
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surface normal holds significant importance in visual environmental
perception, serving as a source of rich geometric information. However, the
state-of-the-art (SoTA) surface normal estimators (SNEs) generally suffer from
an unsatisfactory trade-off between efficiency and accuracy. To resolve this
dilemma, this paper first presents a superfast depth-to-normal translator
(D2NT), which can directly translate depth images into surface normal maps
without calculating 3D coordinates. We then propose a discontinuity-aware
gradient (DAG) filter, which adaptively generates gradient convolution kernels
to improve depth gradient estimation. Finally, we propose a surface normal
refinement module that can easily be integrated into any depth-to-normal SNEs,
substantially improving the surface normal estimation accuracy. Our proposed
algorithm demonstrates the best accuracy among all other existing real-time
SNEs and achieves the SoTA trade-off between efficiency and accuracy.
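For background on the task, a minimal height-field baseline for depth-to-normal
translation is sketched below under the assumption of a plain depth map and
finite differences; this is the classic approximation, not the paper's D2NT or
DAG filter.

```python
import numpy as np

def depth_to_normal(depth):
    """Height-field approximation: n ~ (-dZ/dx, -dZ/dy, 1), normalized.

    `depth` is an (H, W) float array. A simple baseline only, without the
    discontinuity-aware gradient filtering described in the abstract."""
    dz_dy, dz_dx = np.gradient(depth)          # finite-difference gradients
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)

# A plane tilted along x yields normals close to (-0.01, 0, 1) normalized.
normals = depth_to_normal(np.fromfunction(lambda y, x: 0.01 * x, (4, 4)))
print(normals[0, 0])
```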
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 12:08:03 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Feng",
"Yi",
""
],
[
"Xue",
"Bohuan",
""
],
[
"Liu",
"Ming",
""
],
[
"Chen",
"Qijun",
""
],
[
"Fan",
"Rui",
""
]
] |
new_dataset
| 0.996942 |
2304.12079
|
Yoshiki Nakamura
|
Yoshiki Nakamura
|
Existential Calculi of Relations with Transitive Closure: Complexity and
Edge Saturations
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We study the decidability and complexity of equational theories of the
existential calculus of relations with transitive closure (ECoR*) and its
fragments, where ECoR* is the positive calculus of relations with transitive
closure extended with complements of term variables and constants. We give
characterizations of these equational theories by using edge saturations and we
show that the equational theory is 1) coNP-complete for ECoR* without
transitive closure; 2) in coNEXP for ECoR* without intersection and
PSPACE-complete for two smaller fragments; 3) $\Pi_{1}^{0}$-complete for ECoR*.
The second result gives PSPACE-upper bounds for some extensions of Kleene
algebra, including Kleene algebra with top w.r.t. binary relations.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 13:23:49 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Nakamura",
"Yoshiki",
""
]
] |
new_dataset
| 0.996467 |
2304.12095
|
Umberto Martinez-Penas
|
Elisa Gorla, Umberto Mart\'inez-Pe\~nas, Flavio Salizzoni
|
Sum-rank metric codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sum-rank metric codes are a natural extension of both linear block codes and
rank-metric codes. They have several applications in information theory,
including multishot network coding and distributed storage systems. The aim of
this chapter is to present the mathematical theory of sum-rank metric codes,
paying special attention to the $\mathbb{F}_q$-linear case in which different
sizes of matrices are allowed. We provide a comprehensive overview of the main
results in the area. In particular, we discuss invariants, optimal anticodes,
and MSRD codes. In the last section, we concentrate on
$\mathbb{F}_{q^m}$-linear codes.
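To make the central invariant concrete, a small sketch of the sum-rank weight
over $\mathbb{F}_2$: a codeword is viewed as a tuple of matrices and its weight
is the sum of their ranks. The `gf2_rank` helper is an illustrative assumption.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = (np.array(M, dtype=np.uint8) % 2).copy()
    rank, col = 0, 0
    while rank < M.shape[0] and col < M.shape[1]:
        pivots = np.nonzero(M[rank:, col])[0]
        if pivots.size:
            r = rank + pivots[0]
            M[[rank, r]] = M[[r, rank]]     # move pivot row up
            for i in range(M.shape[0]):
                if i != rank and M[i, col]:
                    M[i] ^= M[rank]         # eliminate the column
            rank += 1
        col += 1
    return rank

def sum_rank_weight(blocks):
    """Sum-rank weight: the codeword is a tuple of matrices (possibly of
    different sizes) and its weight is the sum of their GF(2) ranks."""
    return sum(gf2_rank(B) for B in blocks)

# Two 2x2 blocks of ranks 1 and 2 give sum-rank weight 3.
print(sum_rank_weight([[[1, 1], [1, 1]], [[1, 0], [0, 1]]]))  # 3
```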
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 13:44:43 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Gorla",
"Elisa",
""
],
[
"Martínez-Peñas",
"Umberto",
""
],
[
"Salizzoni",
"Flavio",
""
]
] |
new_dataset
| 0.998813 |
2304.12155
|
Chris C. Emezue
|
Chris Emezue, Hellina Nigatu, Cynthia Thinwa, Helper Zhou, Shamsuddeen
Muhammad, Lerato Louis, Idris Abdulmumin, Samuel Oyerinde, Benjamin Ajibade,
Olanrewaju Samuel, Oviawe Joshua, Emeka Onwuegbuzia, Handel Emezue,
Ifeoluwatayo A. Ige, Atnafu Lambebo Tonja, Chiamaka Chukwuneke, Bonaventure
F.P. Dossou, Naome A. Etori, Mbonu Chinedu Emmanuel, Oreen Yousuf, Kaosarat
Aina, Davis David
|
The African Stopwords project: curating stopwords for African languages
|
Accepted at the AfricaNLP workshop at ICLR2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Stopwords are fundamental in Natural Language Processing (NLP) techniques for
information retrieval. One of the common tasks in preprocessing of text data is
the removal of stopwords. Currently, while high-resource languages like English
benefit from the availability of several stopword lists, low-resource
languages, such as those found on the African continent, have none that are
standardized
and available for use in NLP packages. Stopwords in the context of African
languages are understudied and can reveal information about the crossover
between languages. The \textit{African Stopwords} project aims to study and
curate stopwords for African languages. In this paper, we present our current
progress on ten African languages as well as future plans for the project.
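As a reminder of the preprocessing step the project targets, a minimal sketch
of stopword removal follows; the tiny Swahili-like list is purely illustrative
and is not one of the project's curated lists.

```python
# Hypothetical stopword list for illustration only; the actual curated
# lists are produced by the African Stopwords project.
SWAHILI_STOPWORDS = {"na", "ya", "wa", "kwa", "ni"}

def remove_stopwords(tokens, stopwords):
    """Standard preprocessing step: drop stopword tokens before retrieval."""
    return [t for t in tokens if t.lower() not in stopwords]

print(remove_stopwords("habari ya leo ni njema".split(), SWAHILI_STOPWORDS))
# ['habari', 'leo', 'njema']
```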
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 17:32:01 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Emezue",
"Chris",
""
],
[
"Nigatu",
"Hellina",
""
],
[
"Thinwa",
"Cynthia",
""
],
[
"Zhou",
"Helper",
""
],
[
"Muhammad",
"Shamsuddeen",
""
],
[
"Louis",
"Lerato",
""
],
[
"Abdulmumin",
"Idris",
""
],
[
"Oyerinde",
"Samuel",
""
],
[
"Ajibade",
"Benjamin",
""
],
[
"Samuel",
"Olanrewaju",
""
],
[
"Joshua",
"Oviawe",
""
],
[
"Onwuegbuzia",
"Emeka",
""
],
[
"Emezue",
"Handel",
""
],
[
"Ige",
"Ifeoluwatayo A.",
""
],
[
"Tonja",
"Atnafu Lambebo",
""
],
[
"Chukwuneke",
"Chiamaka",
""
],
[
"Dossou",
"Bonaventure F. P.",
""
],
[
"Etori",
"Naome A.",
""
],
[
"Emmanuel",
"Mbonu Chinedu",
""
],
[
"Yousuf",
"Oreen",
""
],
[
"Aina",
"Kaosarat",
""
],
[
"David",
"Davis",
""
]
] |
new_dataset
| 0.995116 |
2304.12158
|
Micha{\l} Skrzypczak
|
Damian Niwi\'nski, Pawe{\l} Parys, Micha{\l} Skrzypczak
|
The Probabilistic Rabin Tree Theorem
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
The Rabin tree theorem yields an algorithm to solve the satisfiability
problem for monadic second-order logic over infinite trees. Here we solve the
probabilistic variant of this problem. Namely, we show how to compute the
probability that a randomly chosen tree satisfies a given formula. We
additionally show that this probability is an algebraic number. This closes a
line of research where similar results were shown for formalisms weaker than
the full monadic second-order logic.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 15:09:59 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Niwiński",
"Damian",
""
],
[
"Parys",
"Paweł",
""
],
[
"Skrzypczak",
"Michał",
""
]
] |
new_dataset
| 0.989624 |
2304.12183
|
Zuhaib Akhtar
|
Zuhaib Akhtar, Mohammad Omar Khursheed, Dongsu Du, Yuzong Liu
|
Small-footprint slimmable networks for keyword spotting
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present Slimmable Neural Networks applied to the problem of
small-footprint keyword spotting. We show that slimmable neural networks allow
us to create super-nets from Convolutional Neural Networks and Transformers,
from which sub-networks of different sizes can be extracted. We demonstrate the
usefulness of these models on in-house Alexa data and Google Speech Commands,
and focus our efforts on models for the on-device use case, limiting ourselves
to less than 250k parameters. We show that slimmable models can match (and in
some cases, outperform) models trained from scratch. Slimmable neural networks
are therefore a class of models particularly useful when the same functionality
is to be replicated at different memory and compute budgets, with different
accuracy requirements.
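A minimal sketch of the weight-sharing idea behind slimmable layers, where each
sub-network reuses the leading channels of the super-net's weights; the
`slice_dense` helper and shapes are illustrative assumptions, not the paper's
keyword-spotting model.

```python
import numpy as np

def slice_dense(weight, bias, in_frac, out_frac):
    """Extract a sub-layer from a super-net dense layer by keeping the
    first fraction of input/output channels (the slimmable convention)."""
    out_dim, in_dim = weight.shape
    o = max(1, int(round(out_dim * out_frac)))
    i = max(1, int(round(in_dim * in_frac)))
    return weight[:o, :i], bias[:o]

W, b = np.random.randn(64, 32), np.zeros(64)
W_half, b_half = slice_dense(W, b, in_frac=1.0, out_frac=0.5)
print(W_half.shape, b_half.shape)  # (32, 32) (32,)
```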
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 12:59:37 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Akhtar",
"Zuhaib",
""
],
[
"Khursheed",
"Mohammad Omar",
""
],
[
"Du",
"Dongsu",
""
],
[
"Liu",
"Yuzong",
""
]
] |
new_dataset
| 0.983891 |
2304.12202
|
Ilias Chalkidis
|
Ilias Chalkidis
|
ChatGPT may Pass the Bar Exam soon, but has a Long Way to Go for the
LexGLUE benchmark
|
Working paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following the hype around OpenAI's ChatGPT conversational agent, the latest
step in the recent development of Large Language Models (LLMs) that demonstrate
unprecedented emergent zero-shot capabilities, we audit OpenAI's latest GPT-3.5
model, `gpt-3.5-turbo', the first available ChatGPT model, on the LexGLUE
benchmark in a zero-shot fashion, providing examples in a templated
instruction-following format. The results indicate that ChatGPT achieves an
average micro-F1 score of 47.6% across LexGLUE tasks, surpassing the baseline
guessing rates. Notably, the model performs exceptionally well in some
datasets, achieving micro-F1 scores of 62.8% and 70.2% in the ECtHR B and
LEDGAR datasets, respectively. The code base and model predictions are
available for review on https://github.com/coastalcph/zeroshot_lexglue.
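Since the reported scores are micro-averaged F1 over multi-label tasks, a
minimal sketch of that metric may help; the `micro_f1` helper is an
illustrative assumption, not the benchmark's official scorer.

```python
def micro_f1(gold, pred):
    """Micro-averaged F1 over label sets: pool TP/FP/FN across examples."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        g, p = set(g), set(p)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

print(micro_f1([{"a"}, {"b", "c"}], [{"a"}, {"b"}]))  # ~0.8
```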
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 16:42:29 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Chalkidis",
"Ilias",
""
]
] |
new_dataset
| 0.968373 |
2304.12229
|
Zohreh Aliabadi
|
Zohreh Aliabadi, Tekg\"ul Kalayc{\i}
|
A note on the hull and linear complementary pair of cyclic codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The Euclidean hull of a linear code $C$ is defined as $C\cap C^{\perp}$,
where $C^\perp$ denotes the dual of $C$ under the Euclidean inner product. A
linear code with zero hull dimension is called a linear complementary dual
(LCD) code. A pair $(C, D)$ of linear codes of length $n$ over $\mathbb{F}_q$
is called a linear complementary pair (LCP) of codes if $C\oplus
D=\mathbb{F}_q^n$. In this paper, we give a characterization of LCD and LCP of
cyclic codes of length $q^m-1$, $m \geq 1$, over the finite field
$\mathbb{F}_q$ in terms of their basic dual zeros and their trace
representations. We also formulate the hull dimension of a cyclic code of
arbitrary length over $\mathbb{F}_q$ with respect to its basic dual zero.
Moreover, we provide a general formula for the dimension of the intersection of
two cyclic codes of arbitrary length over $\mathbb{F}_q$ based on their basic
dual zeros.
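To make the hull concrete, a small sketch that computes $\dim(C\cap
C^{\perp})$ over $\mathbb{F}_2$ from generator matrices, via $\dim(C\cap
C^{\perp})=\dim C+\dim C^{\perp}-\dim(C+C^{\perp})$; the binary Hamming code
example and helpers are illustrative assumptions (the paper treats cyclic codes
over general $\mathbb{F}_q$).

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = (np.array(M, dtype=np.uint8) % 2).copy()
    rank, col = 0, 0
    while rank < M.shape[0] and col < M.shape[1]:
        pivots = np.nonzero(M[rank:, col])[0]
        if pivots.size:
            r = rank + pivots[0]
            M[[rank, r]] = M[[r, rank]]     # move pivot row up
            for i in range(M.shape[0]):
                if i != rank and M[i, col]:
                    M[i] ^= M[rank]         # eliminate the column
            rank += 1
        col += 1
    return rank

def hull_dimension(G, H):
    """dim(hull) = dim C + dim C-dual - dim(C + C-dual), ranks over GF(2);
    G generates C, and H (a parity-check matrix of C) generates C-dual."""
    return gf2_rank(G) + gf2_rank(H) - gf2_rank(np.vstack((G, H)))

# Binary [7,4] Hamming code: it contains its dual, so its hull is the
# whole 3-dimensional dual code.
G = [[1,0,0,0,1,1,0], [0,1,0,0,1,0,1], [0,0,1,0,0,1,1], [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0], [1,0,1,1,0,1,0], [0,1,1,1,0,0,1]]
print(hull_dimension(G, H))  # 3
```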
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 12:11:58 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Aliabadi",
"Zohreh",
""
],
[
"Kalaycı",
"Tekgül",
""
]
] |
new_dataset
| 0.9859 |
2304.12301
|
Kun He
|
Takehiko Ohkawa, Kun He, Fadime Sener, Tomas Hodan, Luan Tran, Cem
Keskin
|
AssemblyHands: Towards Egocentric Activity Understanding via 3D Hand
Pose Estimation
|
CVPR 2023. Project page: https://assemblyhands.github.io/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present AssemblyHands, a large-scale benchmark dataset with accurate 3D
hand pose annotations, to facilitate the study of egocentric activities with
challenging hand-object interactions. The dataset includes synchronized
egocentric and exocentric images sampled from the recent Assembly101 dataset,
in which participants assemble and disassemble take-apart toys. To obtain
high-quality 3D hand pose annotations for the egocentric images, we develop an
efficient pipeline, where we use an initial set of manual annotations to train
a model to automatically annotate a much larger dataset. Our annotation model
uses multi-view feature fusion and an iterative refinement scheme, and achieves
an average keypoint error of 4.20 mm, which is 85% lower than the error of the
original annotations in Assembly101. AssemblyHands provides 3.0M annotated
images, including 490K egocentric images, making it the largest existing
benchmark dataset for egocentric 3D hand pose estimation. Using this data, we
develop a strong single-view baseline of 3D hand pose estimation from
egocentric images. Furthermore, we design a novel action classification task to
evaluate predicted 3D hand poses. Our study shows that having higher-quality
hand poses directly improves the ability to recognize actions.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 17:52:57 GMT"
}
] | 2023-04-25T00:00:00 |
[
[
"Ohkawa",
"Takehiko",
""
],
[
"He",
"Kun",
""
],
[
"Sener",
"Fadime",
""
],
[
"Hodan",
"Tomas",
""
],
[
"Tran",
"Luan",
""
],
[
"Keskin",
"Cem",
""
]
] |
new_dataset
| 0.990972 |
1503.05972
|
Jing Du
|
Jing Du
|
Serious Game for Human Environmental Consciousness Education in
Residents Daily Life
|
Research has been discontinued
| null | null | null |
cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It has been challenging to find ways to educate people to have better
environmental consciousness. In some cases, people do not know what the right
behaviors are to protect the environment. Game engines have been used in the
AEC industry for visualization. However, they have barely been used in
environmental consciousness education, for example to teach which operations
can reduce building energy consumption or which items are recyclable. As social
psychology studies show that video games can influence human behavior, a
well-designed game should provide the player with the right incentives and
guide users to make wiser choices for better environmental protection. This
paper discusses a method to use serious game engines to teach players the right
actions to take in different scenarios; in real life, these actions result in
better environmental protection. The game proposed in this study is for
residential home operation. Other scenarios, such as restaurant and grocery
store operation, are discussed as extensions of this study. The players' points
are calculated based on their performance on different choices, and when they
surpass a certain level, they gain rewards that encourage them to adjust their
current lifestyle. The purpose of the game is to raise environmental
consciousness among players and teach them the right actions they can take to
better protect the environment while spending time on games.
|
[
{
"version": "v1",
"created": "Fri, 20 Mar 2015 00:24:29 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Oct 2016 00:20:03 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Apr 2023 15:07:24 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Du",
"Jing",
""
]
] |
new_dataset
| 0.998894 |
2201.10001
|
Ye Gao
|
Ye Gao, Brian Baucom, Karen Rose, Kristina Gordon, Hongning Wang, John
Stankovic
|
E-ADDA: Unsupervised Adversarial Domain Adaptation Enhanced by a New
Mahalanobis Distance Loss for Smart Computing
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In smart computing, the labels of training samples for a specific task are
not always abundant. However, the labels of samples in a relevant but different
dataset may be available. As a result, researchers have relied on unsupervised
domain adaptation (UDA) to leverage the labels in one dataset (the source
domain) to perform better classification in a different, unlabeled dataset (the
target domain). Existing non-generative adversarial solutions for UDA aim at
achieving
domain confusion through adversarial training. The ideal scenario is that
perfect domain confusion is achieved, but this is not guaranteed to be true. To
further enforce domain confusion on top of the adversarial training, we propose
a novel UDA algorithm, \textit{E-ADDA}, which uses both a novel variation of
the Mahalanobis distance loss and an out-of-distribution detection subroutine.
The Mahalanobis distance loss minimizes the distribution-wise distance between
the encoded target samples and the distribution of the source domain, thus
enforcing additional domain confusion on top of adversarial training. Then, the
OOD subroutine further eliminates samples on which the domain confusion is
unsuccessful. We have performed extensive and comprehensive evaluations of
E-ADDA in the acoustic and computer vision modalities. In the acoustic
modality, E-ADDA outperforms several state-of-the-art UDA algorithms by up to
29.8%, measured in F1 score. In the computer vision modality, the
evaluation results suggest that we achieve new state-of-the-art performance on
popular UDA benchmarks such as Office-31 and Office-Home, outperforming the
second best-performing algorithms by up to 17.9%.
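A minimal sketch of a Mahalanobis distance loss between encoded target samples
and the source feature distribution; it illustrates the general idea only, not
the paper's exact variation, and all names and shapes are assumptions.

```python
import numpy as np

def mahalanobis_loss(target_feats, source_feats, eps=1e-6):
    """Average Mahalanobis distance from encoded target samples to the
    source feature distribution (mean mu, covariance Sigma); minimizing it
    pulls target features towards the source distribution."""
    mu = source_feats.mean(axis=0)
    sigma = np.cov(source_feats, rowvar=False)
    sigma_inv = np.linalg.inv(sigma + eps * np.eye(sigma.shape[0]))
    diff = target_feats - mu
    d2 = np.einsum('ij,jk,ik->i', diff, sigma_inv, diff)  # squared distances
    return np.sqrt(np.maximum(d2, 0.0)).mean()

rng = np.random.default_rng(0)
src = rng.normal(size=(256, 8))              # encoded source features
tgt = rng.normal(loc=0.5, size=(64, 8))      # shifted target features
print(mahalanobis_loss(tgt, src))
```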
|
[
{
"version": "v1",
"created": "Mon, 24 Jan 2022 23:20:55 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 10:51:08 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Aug 2022 04:21:10 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Nov 2022 22:36:31 GMT"
},
{
"version": "v5",
"created": "Fri, 21 Apr 2023 15:53:46 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Gao",
"Ye",
""
],
[
"Baucom",
"Brian",
""
],
[
"Rose",
"Karen",
""
],
[
"Gordon",
"Kristina",
""
],
[
"Wang",
"Hongning",
""
],
[
"Stankovic",
"John",
""
]
] |
new_dataset
| 0.995785 |
2202.10075
|
Constantine Doumanidis
|
Constantine Doumanidis (1), Prashant Hari Narayan Rajput (2), Michail
Maniatakos (1) ((1) New York University Abu Dhabi, (2) NYU Tandon School of
Engineering)
|
ICSML: Industrial Control Systems ML Framework for native inference
using IEC 61131-3 code
|
12 pages, 8 figures, code available at
https://github.com/momalab/ICSML, to appear in CPSS 2023 workshop (ACM
AsiaCCS'23)
| null | null | null |
cs.LG cs.CR cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Industrial Control Systems (ICS) have played a catalytic role in enabling the
4th Industrial Revolution. ICS devices like Programmable Logic Controllers
(PLCs), automate, monitor, and control critical processes in industrial,
energy, and commercial environments. The convergence of traditional Operational
Technology (OT) with Information Technology (IT) has opened a new and unique
threat landscape. This has inspired defense research that focuses heavily on
Machine Learning (ML) based anomaly detection methods that run on external IT
hardware, which means an increase in costs and the further expansion of the
threat landscape. To remove this requirement, we introduce the ICS machine
learning inference framework (ICSML) which enables executing ML model inference
natively on the PLC. ICSML is implemented in IEC 61131-3 code and provides
several optimizations to bypass the limitations imposed by the domain-specific
languages. Therefore, it works on every PLC without the need for vendor
support. ICSML provides a complete set of components for creating full ML
models similarly to established ML frameworks. We run a series of benchmarks
studying memory and performance, and compare our solution to the TFLite
inference framework. At the same time, we develop domain-specific model
optimizations to improve the efficiency of ICSML. To demonstrate the abilities
of ICSML, we evaluate a case study of a real defense for process-aware attacks
targeting a desalination plant.
|
[
{
"version": "v1",
"created": "Mon, 21 Feb 2022 09:37:28 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2022 11:09:37 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Apr 2023 08:25:13 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Doumanidis",
"Constantine",
""
],
[
"Rajput",
"Prashant Hari Narayan",
""
],
[
"Maniatakos",
"Michail",
""
]
] |
new_dataset
| 0.99976 |
2206.06420
|
Wenhao Li
|
Wenhao Li, Hong Liu, Tianyu Guo, Runwei Ding, and Hao Tang
|
GraphMLP: A Graph MLP-Like Architecture for 3D Human Pose Estimation
|
Open Sourced
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern multi-layer perceptron (MLP) models have shown competitive results in
learning visual representations without self-attention. However, existing MLP
models are not good at capturing local details and lack prior knowledge of
human body configurations, which limits their modeling power for skeletal
representation learning. To address these issues, we propose a simple yet
effective graph-reinforced MLP-Like architecture, named GraphMLP, that combines
MLPs and graph convolutional networks (GCNs) in a global-local-graphical
unified architecture for 3D human pose estimation. GraphMLP incorporates the
graph structure of human bodies into an MLP model to meet the domain-specific
demand of the 3D human pose, while allowing for both local and global spatial
interactions. Furthermore, we propose to flexibly and efficiently extend the
GraphMLP to the video domain and show that complex temporal dynamics can be
effectively modeled in a simple way with negligible computational cost growth
in the sequence length. To the best of our knowledge, this is the first MLP-Like
architecture for 3D human pose estimation in a single frame and a video
sequence. Extensive experiments show that the proposed GraphMLP achieves
state-of-the-art performance on two datasets, i.e., Human3.6M and MPI-INF-3DHP.
Code and models are available at https://github.com/Vegetebird/GraphMLP.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 18:59:31 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2022 07:22:39 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Apr 2023 13:45:17 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Li",
"Wenhao",
""
],
[
"Liu",
"Hong",
""
],
[
"Guo",
"Tianyu",
""
],
[
"Ding",
"Runwei",
""
],
[
"Tang",
"Hao",
""
]
] |
new_dataset
| 0.998224 |
2211.10540
|
Lo\"ic H\'elou\"et
|
Lo\"ic H\'elou\"et, Pranay Agrawal
|
Waiting Nets: State Classes and Taxonomy
| null | null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
In time Petri nets (TPNs), time and control are tightly connected: time
measurement for a transition starts only when all resources needed to fire it
are available. Further, upper bounds on duration of enabledness can force
transitions to fire (this is called urgency). For many systems, one wants to
decouple control and time, i.e. start measuring time as soon as a part of the
preset of a transition is filled, and fire it after some delay \underline{and}
when all needed resources are available. This paper considers an extension of
TPN called waiting nets that dissociates time measurement and control. Their
semantics allows time measurement to start with incomplete presets, and can
ignore urgency when upper bounds of intervals are reached but all resources
needed to fire are not yet available. Firing of a transition is then allowed as
soon as missing resources are available. It is known that extending bounded
TPNs with stopwatches leads to undecidability. Our extension is weaker, and we
show how to compute a finite state class graph for bounded waiting nets,
yielding decidability of reachability and coverability. We then compare
expressiveness of waiting nets with that of other models w.r.t. timed language
equivalence, and show that they are strictly more expressive than TPNs.
|
[
{
"version": "v1",
"created": "Sat, 19 Nov 2022 00:01:08 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 10:54:42 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Apr 2023 12:35:28 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Hélouët",
"Loïc",
""
],
[
"Agrawal",
"Pranay",
""
]
] |
new_dataset
| 0.97481 |
2302.13501
|
Laura Dodds
|
Laura Dodds, Isaac Perper, Aline Eid, Fadel Adib
|
A Handheld Fine-Grained RFID Localization System with Complex-Controlled
Polarization
| null | null |
10.1145/3570361.3592504
| null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
There is much interest in fine-grained RFID localization systems. Existing
systems for accurate localization typically require infrastructure, either in
the form of extensive reference tags or many antennas (e.g., antenna arrays) to
localize RFID tags within their radio range. Yet, there remains a need for
fine-grained RFID localization solutions that are in a compact, portable,
mobile form, that can be held by users as they walk around areas to map them,
such as in retail stores, warehouses, or manufacturing plants.
We present the design, implementation, and evaluation of POLAR, a portable
handheld system for fine-grained RFID localization. Our design introduces two
key innovations that enable robust, accurate, and real-time localization of
RFID tags. The first is complex-controlled polarization (CCP), a mechanism for
localizing RFIDs at all orientations through software-controlled polarization
of two linearly polarized antennas. The second is joint tag discovery and
localization (JTDL), a method for simultaneously localizing and reading tags
with zero-overhead regardless of tag orientation. Building on these two
techniques, we develop an end-to-end handheld system that addresses a number of
practical challenges in self-interference, efficient inventorying, and
self-localization. Our evaluation demonstrates that POLAR achieves a median
accuracy of a few centimeters in each of the x/y/z dimensions in practical
indoor environments.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 03:53:48 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 20:09:22 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Dodds",
"Laura",
""
],
[
"Perper",
"Isaac",
""
],
[
"Eid",
"Aline",
""
],
[
"Adib",
"Fadel",
""
]
] |
new_dataset
| 0.99958 |
2303.09565
|
Wojciech Dudek PhD
|
Wojciech Dudek, Narcis Miguel, Tomasz Winiarski
|
SPSysML: A meta-model for quantitative evaluation of Simulation-Physical
Systems
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.SE cs.AR cs.MA cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robotic systems are complex cyber-physical systems (CPS) commonly equipped
with multiple sensors and effectors. Recent simulation methods enable the
Digital Twin (DT) concept realisation. However, DT employment in robotic system
development, e.g. in-development testing, is unclear. During the system
development, its parts evolve from simulated mockups to physical parts which
run software deployed on the actual hardware. Therefore, a design tool and a
flexible development procedure ensuring the integrity of the simulated and
physical parts are required.
We aim to maximise the integration between a CPS's simulated and physical
parts in various setups. The better the integration, the better the
simulation-based testing coverage of the physical part (hardware and software).
We propose a Domain Specification Language (DSL) based on Systems Modeling
Language (SysML) that we refer to as SPSysML (Simulation-Physical System
Modeling Language). SPSysML defines the taxonomy of a Simulation-Physical
System (SPSys), being a CPS consisting of at least a physical or simulated
part. In particular, the simulated ones can be DTs. We propose a SPSys
Development Procedure (SPSysDP) that enables the maximisation of the
simulation-physical integrity of SPSys by evaluating the proposed factors.
SPSysDP is used to develop a complex robotic system for the INCARE project.
In subsequent iterations of SPSysDP, the simulation-physical integrity of the
system is maximised. As a result, the system model consists of fewer
components, and a greater fraction of the system components are shared between
various system setups. We implement and test the system with popular
frameworks, Robot Operating System (ROS) and Gazebo simulator.
SPSysML with SPSysDP enables the design of SPSys (including DT and CPS),
multi-setup system development featuring maximised integrity between simulation
and physical parts in its setups.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 16:56:48 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2023 16:56:34 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Dudek",
"Wojciech",
""
],
[
"Miguel",
"Narcis",
""
],
[
"Winiarski",
"Tomasz",
""
]
] |
new_dataset
| 0.999634 |
2304.01433
|
Cliff Young
|
Norman P. Jouppi, George Kurian, Sheng Li, Peter Ma, Rahul Nagarajan,
Lifeng Nai, Nishant Patil, Suvinay Subramanian, Andy Swing, Brian Towles,
Cliff Young, Xiang Zhou, Zongwei Zhou, and David Patterson
|
TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning
with Hardware Support for Embeddings
|
15 pages; 16 figures; to be published at ISCA 2023 (the International
Symposium on Computer Architecture)
| null | null | null |
cs.AR cs.AI cs.LG cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
In response to innovations in machine learning (ML) models, production
workloads changed radically and rapidly. TPU v4 is the fifth Google domain
specific architecture (DSA) and its third supercomputer for such ML models.
Optical circuit switches (OCSes) dynamically reconfigure its interconnect
topology to improve scale, availability, utilization, modularity, deployment,
security, power, and performance; users can pick a twisted 3D torus topology if
desired. Much cheaper, lower power, and faster than Infiniband, OCSes and
underlying optical components are <5% of system cost and <3% of system power.
Each TPU v4 includes SparseCores, dataflow processors that accelerate models
that rely on embeddings by 5x-7x yet use only 5% of die area and power.
Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves
performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips
and thus ~10x faster overall, which along with OCS flexibility helps large
language models. For similar sized systems, it is ~4.3x-4.5x faster than the
Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than
the Nvidia A100. TPU v4s inside the energy-optimized warehouse scale computers
of Google Cloud use ~3x less energy and produce ~20x less CO2e than
contemporary DSAs in a typical on-premise data center.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 00:52:46 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 14:50:57 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Apr 2023 22:25:51 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Jouppi",
"Norman P.",
""
],
[
"Kurian",
"George",
""
],
[
"Li",
"Sheng",
""
],
[
"Ma",
"Peter",
""
],
[
"Nagarajan",
"Rahul",
""
],
[
"Nai",
"Lifeng",
""
],
[
"Patil",
"Nishant",
""
],
[
"Subramanian",
"Suvinay",
""
],
[
"Swing",
"Andy",
""
],
[
"Towles",
"Brian",
""
],
[
"Young",
"Cliff",
""
],
[
"Zhou",
"Xiang",
""
],
[
"Zhou",
"Zongwei",
""
],
[
"Patterson",
"David",
""
]
] |
new_dataset
| 0.99832 |
2304.06793
|
Ole Richter
|
Ole Richter (1,3,4), Yannan Xing (2), Michele De Marchi (1), Carsten
Nielsen (1), Merkourios Katsimpris (1), Roberto Cattaneo (1), Yudi Ren (2),
Qian Liu (1), Sadique Sheik (1), Tugba Demirci (1,2), Ning Qiao (1,2) ((1)
SynSense AG, Swizerland, (2) SynSense, PR China, (3) Bio-Inspired Circuits
and Systems (BICS) Lab, Zernike Institute for Advanced Materials, University
of Groningen, Netherlands, (4) Groningen Cognitive Systems and Materials
Center (CogniGron), University of Groningen, Netherlands.)
|
Speck: A Smart event-based Vision Sensor with a low latency 327K Neuron
Convolutional Neuronal Network Processing Pipeline
|
This article has been removed by arXiv administrators because the
submitter did not have the authority to grant a license at the time of
submission
| null | null | null |
cs.NE cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Edge computing solutions that enable the extraction of high-level information
from a variety of sensors are in increasingly high demand. This is due to the
increasing number of smart devices that require sensory processing for their
application on the edge. To tackle this problem, we present a smart vision
sensor System on Chip (SoC), featuring an event-based camera and a low power
asynchronous spiking Convolutional Neuronal Network (sCNN) computing
architecture embedded on a single chip. By combining both sensor and processing
on a single die, we can lower unit production costs significantly. Moreover,
the simple end-to-end nature of the SoC facilitates small stand-alone
applications as well as functioning as an edge node in larger systems. The
event-driven nature of the vision sensor delivers high-speed signals in a
sparse data stream. This is reflected in the processing pipeline, which focuses
on optimising highly sparse computation and minimising the latency of the 9
sCNN layers to $3.36\mu s$. Overall, this results in an extremely low-latency
visual
processing pipeline deployed on a small form factor with a low energy budget
and sensor cost. We present the asynchronous architecture, the individual
blocks, the sCNN processing principle and benchmark against other sCNN capable
processors.
|
[
{
"version": "v1",
"created": "Thu, 13 Apr 2023 19:28:57 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Richter",
"Ole",
""
],
[
"Xing",
"Yannan",
""
],
[
"De Marchi",
"Michele",
""
],
[
"Nielsen",
"Carsten",
""
],
[
"Katsimpris",
"Merkourios",
""
],
[
"Cattaneo",
"Roberto",
""
],
[
"Ren",
"Yudi",
""
],
[
"Liu",
"Qian",
""
],
[
"Sheik",
"Sadique",
""
],
[
"Demirci",
"Tugba",
""
],
[
"Qiao",
"Ning",
""
]
] |
new_dataset
| 0.998906 |
2304.07004
|
Hongshi Tan
|
Hongshi Tan, Xinyu Chen, Yao Chen, Bingsheng He, Weng-Fai Wong
|
LightRW: FPGA Accelerated Graph Dynamic Random Walks
|
Accepted to SIGMOD 2023
| null |
10.1145/3588944
| null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Graph dynamic random walks (GDRWs) have recently emerged as a powerful
paradigm for graph analytics and learning applications, including graph
embedding and graph neural networks. Despite the fact that many existing
studies optimize the performance of GDRWs on multi-core CPUs, massive random
memory accesses and costly synchronizations cause severe resource
underutilization, and the processing of GDRWs is usually the key performance
bottleneck in many graph applications. This paper studies an alternative
architecture, FPGA, to address these issues in GDRWs, as FPGA has the ability
of hardware customization so that we are able to explore fine-grained pipeline
execution and specialized memory access optimizations. Specifically, we propose
LightRW, a novel FPGA-based accelerator for GDRWs. LightRW embraces a series of
optimizations to enable fine-grained pipeline execution on the chip and to
exploit the massive parallelism of FPGA while significantly reducing memory
accesses. As current commonly used sampling methods in GDRWs do not efficiently
support fine-grained pipeline execution, we develop a parallelized reservoir
sampling method to sample multiple vertices per cycle for efficient pipeline
execution. To address the random memory access issues, we propose a
degree-aware configurable caching method that buffers hot vertices on-chip to
alleviate random memory accesses and a dynamic burst access engine that
efficiently retrieves neighbors. Experimental results show that our
optimization techniques are able to improve the performance of GDRWs on FPGA
significantly. Moreover, LightRW delivers up to 9.55x and 9.10x speedup over
the state-of-the-art CPU-based MetaPath and Node2vec random walks,
respectively. This work is open-sourced on GitHub at
https://github.com/Xtra-Computing/LightRW.
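For context on the sampling primitive the paper parallelizes, a minimal sketch
of classic sequential reservoir sampling (Algorithm R) follows; the
hardware-parallelized variant in LightRW is not reproduced.

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Algorithm R: uniformly sample k items from a stream in one pass,
    without knowing the stream length (e.g., a vertex's degree) upfront."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)          # inclusive bounds
            if j < k:
                reservoir[j] = item
    return reservoir

# Pick 3 neighbors of a vertex without knowing the degree in advance.
print(reservoir_sample(iter(range(1000)), 3))
```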
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 09:00:44 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2023 05:02:37 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Tan",
"Hongshi",
""
],
[
"Chen",
"Xinyu",
""
],
[
"Chen",
"Yao",
""
],
[
"He",
"Bingsheng",
""
],
[
"Wong",
"Weng-Fai",
""
]
] |
new_dataset
| 0.997842 |
2304.08927
|
Berat Senel
|
Berat Can Senel, Maxime Mouchet, Justin Cappos, Olivier Fourmaux,
Timur Friedman, Rick McGeer
|
Multitenant Containers as a Service (CaaS) for Clouds and Edge Clouds
| null | null | null | null |
cs.NI cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Cloud computing, offering on-demand access to computing resources through the
Internet and the pay-as-you-go model, has marked the last decade with its three
main service models; Infrastructure as a Service (IaaS), Platform as a Service
(PaaS), and Software as a Service (SaaS). The lightweight nature of containers
compared to virtual machines has led to the rapid uptake of another service
model in recent years, called Containers as a Service (CaaS), which falls
between IaaS and PaaS
regarding control abstraction. However, when CaaS is offered to multiple
independent users, or tenants, a multi-instance approach is used, in which each
tenant receives its own separate cluster, which reimposes significant overhead
due to employing virtual machines for isolation. If CaaS is to be offered not
just at the cloud, but also at the edge cloud, where resources are limited,
another solution is required. We introduce a native CaaS multitenancy
framework, meaning that tenants share a cluster, which is more efficient than
the one tenant per cluster model. Whenever there are shared resources,
isolation of multitenant workloads is an issue. Such workloads can be isolated
by Kata Containers today. Moreover, our framework respects application
requirements that demand complete isolation and a fully customized environment.
Node-level slicing empowers tenants to programmatically reserve isolated
subclusters where they can choose the container runtime that suits application
needs. The framework is publicly available as liberally-licensed, free,
open-source software that extends Kubernetes, the de facto standard container
orchestration system. It is in production use within the EdgeNet testbed for
researchers.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 12:07:50 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2023 15:22:39 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Senel",
"Berat Can",
""
],
[
"Mouchet",
"Maxime",
""
],
[
"Cappos",
"Justin",
""
],
[
"Fourmaux",
"Olivier",
""
],
[
"Friedman",
"Timur",
""
],
[
"McGeer",
"Rick",
""
]
] |
new_dataset
| 0.999406 |
2304.09938
|
Claire Pagetti
|
M\'elanie Ducoffe, Maxime Carrere, L\'eo F\'eliers, Adrien Gauffriau,
Vincent Mussot, Claire Pagetti, Thierry Sammour
|
LARD -- Landing Approach Runway Detection -- Dataset for Vision Based
Landing
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the interest in autonomous systems continues to grow, one of the major
challenges is collecting sufficient and representative real-world data. Despite
the strong practical and commercial interest in autonomous landing systems in
the aerospace field, there is a lack of open-source datasets of aerial images.
To address this issue, we present a dataset-lard-of high-quality aerial images
for the task of runway detection during approach and landing phases. Most of
the dataset is composed of synthetic images but we also provide manually
labelled images from real landing footages, to extend the detection task to a
more realistic setting. In addition, we offer the generator which can produce
such synthetic front-view images and enables automatic annotation of the runway
corners through geometric transformations. This dataset paves the way for
further research such as the analysis of dataset quality or the development of
models to cope with the detection tasks. Find data, code and more up-to-date
information at https://github.com/deel-ai/LARD
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 08:25:55 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2023 13:58:29 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Ducoffe",
"Mélanie",
""
],
[
"Carrere",
"Maxime",
""
],
[
"Féliers",
"Léo",
""
],
[
"Gauffriau",
"Adrien",
""
],
[
"Mussot",
"Vincent",
""
],
[
"Pagetti",
"Claire",
""
],
[
"Sammour",
"Thierry",
""
]
] |
new_dataset
| 0.999898 |
2304.10546
|
M.C. Schraefel
|
Alexander Dawid Bincalar, M.C. Schraefel, Christopher Freeman
|
Introducing Vibration for use in Interaction Designs to support Human
Performance: A Pilot Study
|
10 pages, 5 figures; pilot study report
| null | null |
WellthLab - ECS - J23-01ab
|
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While vibration is a well-used output signal in HCI as part of haptic
interaction, vibration outside HCI is used in many other modes to support human
performance, from rehabilitation to cognition. In this late breaking work, we
present preliminary positive results of a novel protocol that informs how
vibration might be used to enrich HCI interventions for aspects of both health
and intellectual performance. We also present a novel apparatus specifically
designed to help HCI researchers explore different vibration amplitudes and
frequencies for such applications.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 13:48:08 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Bincalar",
"Alexander Dawid",
""
],
[
"Schraefel",
"M. C.",
""
],
[
"Freeman",
"Christopher",
""
]
] |
new_dataset
| 0.998806 |
2304.10585
|
Ratun Rahman
|
Ratun Rahman and Md Rafid Islam
|
VREd: A Virtual Reality-Based Classroom for Online Education Using
Unity3D WebGL
|
4 pages, 4 figures, 31 references
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Virtual reality is the way of the future. The use of virtual reality is
expanding over time across all sectors, from the entertainment industry to the
military and space. VREd applies this idea to online education: a virtual
reality-based classroom in which the user has richer interaction and more
control. The system is implemented with Unity3D and WebGL. Students or
learners accustomed to contemporary technologies
may find the traditional educational system unappealing because of its flaws.
Incorporating the latest technologies can increase the curiosity and learning
abilities of students. The system architecture of VREd is similar to that of an
actual classroom, allowing both students and teachers to access all of the
course materials and interact with one another using only an internet
connection. The environment and the background are also customizable.
Therefore, all users can comfortably use the system and feel at home. By
utilizing virtual reality, we can create an effective educational system that
raises educational quality.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 18:18:47 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Rahman",
"Ratun",
""
],
[
"Islam",
"Md Rafid",
""
]
] |
new_dataset
| 0.979395 |
2304.10612
|
Erich Bremer
|
Erich Bremer, Tammy DiPrima, Joseph Balsamo, Jonas Almeida, Rajarsi
Gupta, and Joel Saltz
|
Halcyon -- A Pathology Imaging and Feature analysis and Management
System
|
15 pages, 11 figures. arXiv admin note: text overlap with
arXiv:2005.06469
| null | null | null |
cs.HC cs.CV q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Halcyon is a new pathology imaging analysis and feature management system
based on W3C linked-data open standards, designed to scale to the voluminous
production of features from deep-learning feature pipelines. Halcyon supports
multiple users through a web-based UX, with access to all user data over a
standards-based web API that allows integration with other processes and
software systems. Identity management and data security are also provided.
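As a hedged sketch of the standards-based access style the abstract describes, the snippet below queries image features from a SPARQL endpoint with the SPARQLWrapper library; the endpoint URL, vocabulary, and predicate names are hypothetical, not Halcyon's actual schema.

```python
# Hedged sketch: consuming linked-data features over a SPARQL endpoint.
# The endpoint URL and the ex: vocabulary are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/halcyon/sparql")  # assumed endpoint
sparql.setQuery("""
    PREFIX ex: <https://example.org/pathology#>
    SELECT ?feature ?value WHERE {
        ?roi ex:fromSlide ex:slide42 ;
             ex:hasFeature ?feature .
        ?feature ex:value ?value .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["feature"]["value"], row["value"]["value"])
```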
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 19:18:16 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Bremer",
"Erich",
""
],
[
"DiPrima",
"Tammy",
""
],
[
"Balsamo",
"Joseph",
""
],
[
"Almeida",
"Jonas",
""
],
[
"Gupta",
"Rajarsi",
""
],
[
"Saltz",
"Joel",
""
]
] |
new_dataset
| 0.999685 |
2304.10618
|
Zachary Susskind
|
Zachary Susskind, Aman Arora, Igor D. S. Miranda, Alan T. L. Bacellar,
Luis A. Q. Villon, Rafael F. Katopodis, Leandro S. de Araujo, Diego L. C.
Dutra, Priscila M. V. Lima, Felipe M. G. Franca, Mauricio Breternitz Jr., and
Lizy K. John
|
ULEEN: A Novel Architecture for Ultra Low-Energy Edge Neural Networks
|
14 pages, 14 figures. Portions of this article draw heavily from
arXiv:2203.01479, most notably sections 5E and 5F.2
| null | null | null |
cs.AR eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
The deployment of AI models on low-power, real-time edge devices requires
accelerators for which energy, latency, and area are all first-order concerns.
There are many approaches to enabling deep neural networks (DNNs) in this
domain, including pruning, quantization, compression, and binary neural
networks (BNNs), but with the emergence of the "extreme edge", there is now a
demand for even more efficient models. In order to meet the constraints of
ultra-low-energy devices, we propose ULEEN, a model architecture based on
weightless neural networks. Weightless neural networks (WNNs) are a class of
neural models that use table lookups, not arithmetic, to perform computation.
The elimination of energy-intensive arithmetic operations makes WNNs
theoretically well suited for edge inference; however, they have historically
suffered from poor accuracy and excessive memory usage. ULEEN incorporates
algorithmic improvements and a novel training strategy inspired by BNNs to make
significant strides in improving accuracy and reducing model size. We compare
FPGA and ASIC implementations of an inference accelerator for ULEEN against
edge-optimized DNN and BNN devices. On a Xilinx Zynq Z-7045 FPGA, we
demonstrate classification on the MNIST dataset at 14.3 million inferences per
second (13 million inferences/Joule) with 0.21 $\mu$s latency and 96.2%
accuracy, while Xilinx FINN achieves 12.3 million inferences per second (1.69
million inferences/Joule) with 0.31 $\mu$s latency and 95.83% accuracy. In a
45nm ASIC, we achieve 5.1 million inferences/Joule and 38.5 million
inferences/second at 98.46% accuracy, while a quantized Bit Fusion model
achieves 9230 inferences/Joule and 19,100 inferences/second at 99.35% accuracy.
In our search for ever more efficient edge devices, ULEEN shows that WNNs are
deserving of consideration.
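To make the table-lookup computation concrete, here is a minimal sketch of a WiSARD-style discriminator, the classic weightless model: each RAM node stores which bit-tuple addresses it has seen, so inference is pure lookup with no arithmetic on weights. This illustrates the general WNN principle only; ULEEN's architecture adds algorithmic improvements and a BNN-inspired training strategy not shown here.

```python
# Minimal sketch of the table-lookup principle behind weightless neural
# networks (WiSARD-style), not ULEEN's actual architecture. One discriminator
# is trained per class; prediction is the class with the highest score.
import numpy as np

class Discriminator:
    def __init__(self, n_inputs, tuple_size, seed=0):
        # n_inputs must be divisible by tuple_size in this simple sketch
        rng = np.random.default_rng(seed)
        self.mapping = rng.permutation(n_inputs).reshape(-1, tuple_size)
        self.rams = [set() for _ in range(len(self.mapping))]

    def _addresses(self, bits):
        # Each RAM node reads a fixed random tuple of input bits as an address
        for node, idx in enumerate(self.mapping):
            yield node, tuple(bits[idx])

    def train(self, bits):
        for node, addr in self._addresses(bits):
            self.rams[node].add(addr)  # training just sets table entries

    def score(self, bits):
        # Inference: count RAM nodes that recognise their address (no weights)
        return sum(addr in self.rams[node]
                   for node, addr in self._addresses(bits))

d = Discriminator(n_inputs=16, tuple_size=4)
x = np.array([1, 0] * 8)
d.train(x)
print(d.score(x))  # 4: all four RAM nodes recognise the trained pattern
```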
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 19:40:01 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Susskind",
"Zachary",
""
],
[
"Arora",
"Aman",
""
],
[
"Miranda",
"Igor D. S.",
""
],
[
"Bacellar",
"Alan T. L.",
""
],
[
"Villon",
"Luis A. Q.",
""
],
[
"Katopodis",
"Rafael F.",
""
],
[
"de Araujo",
"Leandro S.",
""
],
[
"Dutra",
"Diego L. C.",
""
],
[
"Lima",
"Priscila M. V.",
""
],
[
"Franca",
"Felipe M. G.",
""
],
[
"Breternitz",
"Mauricio",
"Jr."
],
[
"John",
"Lizy K.",
""
]
] |
new_dataset
| 0.992235 |
2304.10621
|
Giuseppe Attanasio
|
Patrick John Chia, Giuseppe Attanasio, Jacopo Tagliabue, Federico
Bianchi, Ciro Greco, Gabriel de Souza P. Moreira, Davide Eynard, Fahd Husain
|
E Pluribus Unum: Guidelines on Multi-Objective Evaluation of Recommender
Systems
|
15 pages, under submission
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Recommender Systems today are still mostly evaluated in terms of accuracy,
with other aspects beyond the immediate relevance of recommendations, such as
diversity, long-term user retention and fairness, often taking a back seat.
Moreover, reconciling multiple performance perspectives is by definition
indeterminate, presenting a stumbling block to those pursuing a rounded
evaluation of Recommender Systems. EvalRS 2022 -- a data challenge designed
around Multi-Objective Evaluation -- was a first practical endeavour, providing
many insights into the requirements and challenges of balancing multiple
objectives in evaluation. In this work, we reflect on EvalRS 2022 and expound
upon crucial learnings to formulate a first-principles approach toward
Multi-Objective model selection, and outline a set of guidelines for carrying
out a Multi-Objective Evaluation challenge, with potential applicability to the
problem of rounded evaluation of competing models in real-world deployments.
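As a hedged sketch of what a first-principles multi-objective selection rule can look like, the snippet below min-max normalises each metric across candidate models and ranks models by their worst normalised objective, so no single metric dominates. The metric names, scores, and worst-case aggregation rule are illustrative assumptions, not the guidelines' prescribed procedure.

```python
# Hedged sketch: pick the model whose *worst* min-max-normalised metric is
# best, so accuracy cannot silently trump diversity or fairness. Metric names
# and scores are illustrative assumptions.
def select_model(results):
    """results: {model_name: {metric_name: higher-is-better score}}"""
    metrics = next(iter(results.values())).keys()
    lo = {m: min(r[m] for r in results.values()) for m in metrics}
    hi = {m: max(r[m] for r in results.values()) for m in metrics}
    def worst_case(scores):
        return min((scores[m] - lo[m]) / (hi[m] - lo[m] or 1.0)
                   for m in metrics)
    return max(results, key=lambda name: worst_case(results[name]))

candidates = {
    "model_a": {"accuracy": 0.31, "diversity": 0.62, "fairness": 0.55},
    "model_b": {"accuracy": 0.35, "diversity": 0.40, "fairness": 0.70},
    "model_c": {"accuracy": 0.34, "diversity": 0.58, "fairness": 0.66},
}
print(select_model(candidates))  # model_c: best worst-case trade-off
```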
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 19:48:41 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Chia",
"Patrick John",
""
],
[
"Attanasio",
"Giuseppe",
""
],
[
"Tagliabue",
"Jacopo",
""
],
[
"Bianchi",
"Federico",
""
],
[
"Greco",
"Ciro",
""
],
[
"Moreira",
"Gabriel de Souza P.",
""
],
[
"Eynard",
"Davide",
""
],
[
"Husain",
"Fahd",
""
]
] |
new_dataset
| 0.956952 |
2304.10628
|
Hao Xiang
|
Hao Xiang, Runsheng Xu, Jiaqi Ma
|
HM-ViT: Hetero-modal Vehicle-to-Vehicle Cooperative perception with
vision transformer
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vehicle-to-Vehicle technologies have enabled autonomous vehicles to share
information to see through occlusions, greatly enhancing perception
performance. Nevertheless, existing works have all focused on homogeneous
traffic, where vehicles are equipped with the same type of sensors, which
significantly limits the scale of collaboration and the benefit of
cross-modality interactions.
In this paper, we investigate the multi-agent hetero-modal cooperative
perception problem where agents may have distinct sensor modalities. We present
HM-ViT, the first unified multi-agent hetero-modal cooperative perception
framework that can collaboratively predict 3D objects for highly dynamic
vehicle-to-vehicle (V2V) collaborations with varying numbers and types of
agents. To effectively fuse features from multi-view images and LiDAR point
clouds, we design a novel heterogeneous 3D graph transformer to jointly reason
inter-agent and intra-agent interactions. The extensive experiments on the V2V
perception dataset OPV2V demonstrate that HM-ViT outperforms state-of-the-art
methods for V2V hetero-modal cooperative perception. We will release code to
facilitate future research.
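As a hedged sketch of hetero-modal fusion in this spirit, the snippet below projects camera and LiDAR agent features into a shared width and lets every agent's tokens attend to every other agent's via multi-head attention. This is a generic cross-agent attention layer, not HM-ViT's heterogeneous 3D graph transformer; the dimensions and modality names are illustrative assumptions.

```python
# Hedged sketch: generic cross-agent, cross-modality attention fusion, not
# HM-ViT's actual heterogeneous 3D graph transformer. Dims are assumptions.
import torch
import torch.nn as nn

class HeteroFusion(nn.Module):
    def __init__(self, cam_dim=256, lidar_dim=384, d_model=256, heads=8):
        super().__init__()
        self.proj = nn.ModuleDict({         # per-modality input projections
            "camera": nn.Linear(cam_dim, d_model),
            "lidar": nn.Linear(lidar_dim, d_model),
        })
        self.attn = nn.MultiheadAttention(d_model, heads, batch_first=True)

    def forward(self, agent_feats):
        # agent_feats: list of (modality, tensor[batch, tokens, dim]) per agent
        tokens = torch.cat([self.proj[mod](x) for mod, x in agent_feats], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)  # all-to-all attention
        return fused

fusion = HeteroFusion()
ego = ("lidar", torch.randn(1, 64, 384))          # LiDAR-equipped ego vehicle
neighbour = ("camera", torch.randn(1, 100, 256))  # camera-only collaborator
out = fusion([ego, neighbour])                    # shape [1, 164, 256]
```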
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 20:09:59 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Xiang",
"Hao",
""
],
[
"Xu",
"Runsheng",
""
],
[
"Ma",
"Jiaqi",
""
]
] |
new_dataset
| 0.998965 |