id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2305.03045
|
Peng-Shuai Wang
|
Peng-Shuai Wang
|
OctFormer: Octree-based Transformers for 3D Point Clouds
|
SIGGRAPH 2023, Journal Track
|
ACM Transactions on Graphics (SIGGRAPH), 42, 4 (August 2023), 11
pages
|
10.1145/3592131
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  We propose octree-based transformers, named OctFormer, for 3D point cloud
learning. OctFormer not only serves as a general and effective backbone for
3D point cloud segmentation and object detection but also has linear
complexity and is scalable to large-scale point clouds. The key challenge in
applying transformers to point clouds is reducing the quadratic, and thus
overwhelming, computational complexity of attention. To combat this issue,
several works divide point clouds into non-overlapping windows and constrain
attention within each local window. However, the number of points in each window
varies greatly, impeding efficient execution on GPUs. Observing that
attention is robust to the shapes of local windows, we propose a novel octree
attention, which leverages the sorted shuffled keys of octrees to partition point
clouds into local windows containing a fixed number of points while permitting
the shapes of windows to change freely. We also introduce dilated octree
attention to further expand the receptive field. Our octree attention can be
implemented in 10 lines of code with open-source libraries and runs 17 times
faster than other point cloud attention mechanisms when the number of points
exceeds 200k. Built upon the octree attention, OctFormer can be easily scaled up
and achieves state-of-the-art performance on a series of 3D segmentation and
detection benchmarks, surpassing previous sparse-voxel-based CNNs and point cloud
transformers in terms of both efficiency and effectiveness. Notably, on the
challenging ScanNet200 dataset, OctFormer outperforms sparse-voxel-based CNNs
by 7.3 mIoU. Our code and trained models are available at
https://wang-ps.github.io/octformer.
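The core window-attention idea is simple enough to sketch. Below is a minimal, illustrative PyTorch sketch of fixed-size window attention over points that are assumed to be already sorted by their octree shuffled keys; the window size, the zero-padding scheme, and the use of nn.MultiheadAttention are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

def octree_window_attention(features: torch.Tensor, attn: nn.MultiheadAttention,
                            window_size: int = 32) -> torch.Tensor:
    """Self-attention restricted to fixed-size windows of octree-sorted points.

    features: (N, C) point features, ordered by sorted octree shuffled keys.
    """
    n, c = features.shape
    pad = (-n) % window_size                        # pad so every window holds exactly window_size points
    x = torch.cat([features, features.new_zeros(pad, c)])
    x = x.view(-1, window_size, c)                  # (num_windows, window_size, C)
    out, _ = attn(x, x, x)                          # attention never crosses window boundaries
    return out.reshape(-1, c)[:n]                   # drop padding, restore (N, C)

# Usage sketch: 10k points with 64-dim features, 4 attention heads (embed_dim must divide evenly).
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
points = torch.randn(10_000, 64)
out = octree_window_attention(points, attn)
```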
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 17:59:05 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2023 00:31:54 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Wang",
"Peng-Shuai",
""
]
] |
new_dataset
| 0.99871 |
2305.03873
|
Zhong Zhou
|
Zhong Zhou, Jan Niehues, Alex Waibel
|
Train Global, Tailor Local: Minimalist Multilingual Translation into
Endangered Languages
|
In Proceedings of the 6th Workshop on Technologies for Machine
Translation of Low-Resource Languages (LoResMT) of the 17th Conference of the
European Chapter of the Association for Computational Linguistics in 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In many humanitarian scenarios, translation into severely low resource
languages often does not require a universal translation engine, but a
dedicated text-specific translation engine. For example, healthcare records,
hygienic procedures, government communication, emergency procedures and
religious texts are all limited texts. While generic translation engines for
all languages do not exist, translation of multilingually known limited texts
into new, endangered languages may be possible and reduce human translation
effort. We attempt to leverage translation resources from many rich-resource
languages to efficiently produce the best possible translation quality for a
well-known text, available in multiple languages, in a new, severely
low-resource language. We examine two approaches: (1) best selection of seed
sentences to jump-start translation in a new language in view of best
generalization to the remainder of a larger targeted text(s), and (2) adaptation of
large general multilingual translation engines from many other languages to
focus on a specific text in a new, unknown language. We find that adapting
large pretrained multilingual models to the domain/text first and then to the
severely low-resource language works best. If we also select the best set of seed
sentences, we can improve average chrF performance on new test languages from a
baseline of 21.9 to 50.7, while reducing the number of seed sentences to only
around 1,000 in the new, unknown language.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 23:22:16 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Zhou",
"Zhong",
""
],
[
"Niehues",
"Jan",
""
],
[
"Waibel",
"Alex",
""
]
] |
new_dataset
| 0.997175 |
2305.03880
|
David Samuel
|
David Samuel, Andrey Kutuzov, Samia Touileb, Erik Velldal, Lilja
Øvrelid, Egil Rønningstad, Elina Sigdel and Anna Palatkina
|
NorBench -- A Benchmark for Norwegian Language Models
|
Accepted to NoDaLiDa 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present NorBench: a streamlined suite of NLP tasks and probes for
evaluating Norwegian language models (LMs) on standardized data splits and
evaluation metrics. We also introduce a range of new Norwegian language models
(both encoder and encoder-decoder based). Finally, we compare and analyze their
performance, along with other existing LMs, across the different benchmark
tests of NorBench.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 00:20:24 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Samuel",
"David",
""
],
[
"Kutuzov",
"Andrey",
""
],
[
"Touileb",
"Samia",
""
],
[
"Velldal",
"Erik",
""
],
[
"Øvrelid",
"Lilja",
""
],
[
"Rønningstad",
"Egil",
""
],
[
"Sigdel",
"Elina",
""
],
[
"Palatkina",
"Anna",
""
]
] |
new_dataset
| 0.999231 |
2305.03895
|
Changlin Yang
|
Changlin Yang, Alexei Ashikhmin, Xiaodong Wang, Zibin Zheng
|
Rateless Coded Blockchain for Dynamic IoT Networks
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key constraint that limits the implementation of blockchain in Internet of
Things (IoT) is its large storage requirement resulting from the fact that each
blockchain node has to store the entire blockchain. This increases the burden
on blockchain nodes, and increases the communication overhead for new nodes
joining the network since they have to copy the entire blockchain. In order to
reduce storage requirements without compromising on system security and
integrity, coded blockchains, based on error correcting codes with fixed rates
and lengths, have been recently proposed. This approach, however, does not fit
well with dynamic IoT networks in which nodes actively leave and join. In such
dynamic blockchains, the existing coded blockchain approaches lead to high
communication overheads for new joining nodes and may have high decoding
failure probability. This paper proposes a rateless coded blockchain with
coding parameters adjusted to network conditions. Our goals are to minimize
both the storage requirement at each blockchain node and the communication
overhead for each new joining node, subject to a target decoding failure
probability. We evaluate the proposed scheme in the context of real-world
Bitcoin blockchain and show that both storage and communication overhead are
reduced by 99.6\% with a maximum $10^{-12}$ decoding failure probability.
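As background on the rateless (fountain) coding idea the paper builds on, here is a minimal, illustrative LT-style encoder in Python; the uniform degree distribution and the block representation are simplifying assumptions and not the paper's actual coding scheme or parameters.

```python
import random
from functools import reduce

def lt_encode_symbol(blocks: list[bytes], rng: random.Random):
    """Produce one rateless encoded symbol: the XOR of a random subset of source blocks.

    blocks: equally sized chunks of ledger data. Arbitrarily many symbols can be
    generated from the same blocks, which is what makes the code 'rateless'.
    """
    degree = rng.randint(1, len(blocks))              # toy degree choice (not a robust soliton distribution)
    chosen = rng.sample(range(len(blocks)), degree)
    encoded = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                     (blocks[i] for i in chosen))
    return chosen, encoded

# Usage sketch: each node stores a few encoded symbols instead of the full ledger.
rng = random.Random(0)
source = [bytes([i]) * 16 for i in range(8)]          # 8 source blocks of 16 bytes each
symbols = [lt_encode_symbol(source, rng) for _ in range(12)]
```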
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 02:15:00 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Yang",
"Changlin",
""
],
[
"Ashikhmin",
"Alexei",
""
],
[
"Wang",
"Xiaodong",
""
],
[
"Zheng",
"Zibin",
""
]
] |
new_dataset
| 0.95911 |
2305.03915
|
Mithun Das
|
Mithun Das, Rohit Raj, Punyajoy Saha, Binny Mathew, Manish Gupta,
Animesh Mukherjee
|
HateMM: A Multi-Modal Dataset for Hate Video Classification
|
Accepted at ICWSM 2023(dataset track)
| null | null | null |
cs.CV cs.CL cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hate speech has become one of the most significant issues in modern society,
having implications in both the online and the offline world. Due to this, hate
speech research has recently gained a lot of traction. However, most of the
work has primarily focused on text media with relatively little work on images
and even less on videos. Thus, early-stage automated video moderation
techniques are needed to handle uploaded videos and keep platforms safe and
healthy. With a view to detecting and removing hateful content
from video sharing platforms, our work focuses on hate video detection
using multiple modalities. To this end, we curate ~43 hours of videos from
BitChute and manually annotate them as hate or non-hate, along with the frame
spans that could explain the labelling decision. To collect the relevant
videos, we harnessed search keywords from hate lexicons. We observe various cues
in images and audio of hateful videos. Further, we build deep learning
multi-modal models to classify the hate videos and observe that using all the
modalities of the videos improves the overall hate speech detection performance
(accuracy=0.798, macro F1-score=0.790) by ~5.7% compared to the best uni-modal
model in terms of macro F1 score. In summary, our work takes the first step
toward understanding and modeling hateful videos on video hosting platforms
such as BitChute.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 03:39:00 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Das",
"Mithun",
""
],
[
"Raj",
"Rohit",
""
],
[
"Saha",
"Punyajoy",
""
],
[
"Mathew",
"Binny",
""
],
[
"Gupta",
"Manish",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.999896 |
2305.03919
|
Yuwen Heng
|
Yuwen Heng, Srinandan Dasmahapatra, Hansung Kim
|
DBAT: Dynamic Backward Attention Transformer for Material Segmentation
with Cross-Resolution Patches
|
13 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The objective of dense material segmentation is to identify the material
categories for every image pixel. Recent studies adopt image patches to extract
material features. Although the trained networks can improve the segmentation
performance, their methods choose a fixed patch resolution which fails to take
into account the variation in pixel area covered by each material. In this
paper, we propose the Dynamic Backward Attention Transformer (DBAT) to
aggregate cross-resolution features. The DBAT takes cropped image patches as
input and gradually increases the patch resolution by merging adjacent patches
at each transformer stage, instead of fixing the patch resolution during
training. We explicitly gather the intermediate features extracted from
cross-resolution patches and merge them dynamically with predicted attention
masks. Experiments show that our DBAT achieves an accuracy of 86.85%, which is
the best performance among state-of-the-art real-time models. Like other
successful deep learning solutions with complex architectures, the DBAT also
suffers from lack of interpretability. To address this problem, this paper
examines the properties that the DBAT makes use of. By analysing the
cross-resolution features and the attention weights, this paper interprets how
the DBAT learns from image patches. We further align features to semantic
labels, performing network dissection, to infer that the proposed model can
extract material-related features better than other methods. We show that the
DBAT model is more robust to network initialisation, and yields fewer variable
predictions compared to other models. The project code is available at
https://github.com/heng-yuwen/Dynamic-Backward-Attention-Transformer.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 03:47:20 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Heng",
"Yuwen",
""
],
[
"Dasmahapatra",
"Srinandan",
""
],
[
"Kim",
"Hansung",
""
]
] |
new_dataset
| 0.968349 |
2305.03946
|
Jittat Fakcharoenphol
|
Nonthaphat Wongwattanakij, Nattawut Phetmak, Chaiporn Jaikaeo, Jittat
Fakcharoenphol
|
An Improved PTAS for Covering Targets with Mobile Sensors
| null | null | null | null |
cs.CG cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper considers a movement minimization problem for mobile sensors.
Given a set of $n$ point targets, the $k$-Sink Minimum Movement Target Coverage
Problem is to schedule mobile sensors, initially located at $k$ base stations,
to cover all targets minimizing the total moving distance of the sensors. We
present a polynomial-time approximation scheme for finding a $(1+\epsilon)$
approximate solution running in time $n^{O(1/\epsilon)}$ for this problem when
$k$, the number of base stations, is constant. Our algorithm improves the
running time exponentially from the previous work that runs in time
$n^{O(1/\epsilon^2)}$, without any target distribution assumption. To devise a
faster algorithm, we prove a stronger bound on the number of sensors in any
unit area in the optimal solution and employ a more refined dynamic programming
algorithm whose complexity depends only on the width of the problem.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 06:15:12 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Wongwattanakij",
"Nonthaphat",
""
],
[
"Phetmak",
"Nattawut",
""
],
[
"Jaikaeo",
"Chaiporn",
""
],
[
"Fakcharoenphol",
"Jittat",
""
]
] |
new_dataset
| 0.996964 |
2305.03955
|
Yuan-An Xiao
|
Yuan-An Xiao, Chenyang Yang, Bo Wang, Yingfei Xiong
|
Accelerating Patch Validation for Program Repair with Interception-Based
Execution Scheduling
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Long patch validation time is a limiting factor for automated program repair
(APR). Though the duality between patch validation and mutation testing is
recognized, so far there exists no study of systematically adapting mutation
testing techniques to general-purpose patch validation. To address this gap, we
investigate existing mutation testing techniques and recognize five classes of
acceleration techniques that are suitable for general-purpose patch validation.
Among them, mutant schemata and mutant deduplication have not been adapted to
general-purpose patch validation due to the arbitrary changes that third-party
APR approaches may introduce. This presents two problems for adaptation: 1) the
difficulty of implementing the static equivalence analysis required by the
state-of-the-art mutant deduplication approach; 2) the difficulty of capturing
patches' changes to the system state at runtime.
To overcome these problems, we propose two novel approaches: 1) execution
scheduling, which detects the equivalence between patches online, avoiding the
static equivalence analysis and its imprecision; 2) interception-based
instrumentation, which intercepts patches' changes to the system state,
avoiding a full interpreter and its overhead.
Based on the contributions above, we implement ExpressAPR, a general-purpose
patch validator for Java that integrates all recognized classes of techniques
suitable for patch validation. Our large-scale evaluation with four APR
approaches shows that ExpressAPR accelerates patch validation by 137.1x over
plain validation or 8.8x over the state-of-the-art approach, making patch
validation no longer the time bottleneck of APR. Patch validation time for a
single bug can be reduced to within a few minutes on mainstream CPUs.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 06:45:25 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Xiao",
"Yuan-An",
""
],
[
"Yang",
"Chenyang",
""
],
[
"Wang",
"Bo",
""
],
[
"Xiong",
"Yingfei",
""
]
] |
new_dataset
| 0.998059 |
2305.04017
|
Wanli Xing
|
Wanli Xing, Shijie Lin, Lei Yang, Jia Pan
|
Target-free Extrinsic Calibration of Event-LiDAR Dyad using Edge
Correspondences
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Calibrating the extrinsic parameters of sensory devices is crucial for fusing
multi-modal data. Recently, event cameras have emerged as a promising type of
neuromorphic sensors, with many potential applications in fields such as mobile
robotics and autonomous driving. When combined with LiDAR, they can provide
more comprehensive information about the surrounding environment. Nonetheless,
due to the distinctive representation of event cameras compared to traditional
frame-based cameras, calibrating them with LiDAR presents a significant
challenge. In this paper, we propose a novel method to calibrate the extrinsic
parameters between a dyad of an event camera and a LiDAR without the need for a
calibration board or other equipment. Our approach takes advantage of the fact
that when an event camera is in motion, changes in reflectivity and geometric
edges in the environment trigger numerous events, which can also be captured by
LiDAR. Our proposed method leverages the edges extracted from events and point
clouds and correlates them to estimate extrinsic parameters. Experimental
results demonstrate that our proposed method is highly robust and effective in
various scenes.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 11:28:04 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Xing",
"Wanli",
""
],
[
"Lin",
"Shijie",
""
],
[
"Yang",
"Lei",
""
],
[
"Pan",
"Jia",
""
]
] |
new_dataset
| 0.990226 |
2305.04034
|
Zihao Wang
|
Zihao Wang, Weizhi Fei, Hang Yin, Yangqiu Song, Ginny Y. Wong, Simon
See
|
Wasserstein-Fisher-Rao Embedding: Logical Query Embeddings with Local
Comparison and Global Transport
|
Findings in ACL 2023. 16 pages, 6 figures, and 8 tables. Our
implementation can be found at https://github.com/HKUST-KnowComp/WFRE
| null | null | null |
cs.AI cs.DB cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Answering complex queries on knowledge graphs is important but particularly
challenging because of the data incompleteness. Query embedding methods address
this issue by learning-based models and simulating logical reasoning with set
operators. Previous works focus on specific forms of embeddings, but scoring
functions between embeddings are underexplored. In contrast to existing scoring
functions motivated by local comparison or global transport, this work
investigates the local and global trade-off with unbalanced optimal transport
theory. Specifically, we embed sets as bounded measures in $\mathbb{R}$ endowed with
a scoring function motivated by the Wasserstein-Fisher-Rao metric. Such a
design also facilitates closed-form set operators in the embedding space.
Moreover, we introduce a convolution-based algorithm for linear time
computation and a block-diagonal kernel to enforce the trade-off. Results show
that WFRE can outperform existing query embedding methods on standard datasets,
evaluation sets with combinatorially complex queries, and hierarchical
knowledge graphs. An ablation study shows that finding a better local and global
trade-off is essential for performance improvement.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 12:48:17 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Wang",
"Zihao",
""
],
[
"Fei",
"Weizhi",
""
],
[
"Yin",
"Hang",
""
],
[
"Song",
"Yangqiu",
""
],
[
"Wong",
"Ginny Y.",
""
],
[
"See",
"Simon",
""
]
] |
new_dataset
| 0.980609 |
2305.04096
|
Shibashis Guha
|
Guy Avni, Pranav Ghorpade, Shibashis Guha
|
A Game of Pawns
| null | null | null | null |
cs.GT cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce and study pawn games, a class of two-player zero-sum turn-based
graph games. A turn-based graph game proceeds by placing a token on an initial
vertex, and whoever controls the vertex on which the token is located chooses
its next location. This leads to a path in the graph, which determines the
winner. Traditionally, the control of vertices is predetermined and fixed. The
novelty of pawn games is that control of vertices changes dynamically
throughout the game as follows. Each vertex of a pawn game is owned by a pawn.
In each turn, the pawns are partitioned between the two players, and the player
who controls the pawn that owns the vertex on which the token is located,
chooses the next location of the token. Control of pawns changes dynamically
throughout the game according to a fixed mechanism. Specifically, we define
several grabbing-based mechanisms in which control of at most one pawn
transfers at the end of each turn. We study the complexity of solving pawn
games, where we focus on reachability objectives and parameterize the problem
by the mechanism that is being used and by restrictions on pawn ownership of
vertices. On the positive side, even though pawn games are
exponentially-succinct turn-based games, we identify several natural classes
that can be solved in PTIME. On the negative side, we identify several
EXPTIME-complete classes, where our hardness proofs are based on a new class of
games called Lock & Key games, which may be of independent interest.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 16:48:17 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Avni",
"Guy",
""
],
[
"Ghorpade",
"Pranav",
""
],
[
"Guha",
"Shibashis",
""
]
] |
new_dataset
| 0.991861 |
2305.04097
|
Huaishu Peng
|
Jiasheng Li, Zeyu Yan, Arush Shah, Jonathan Lazar, Huaishu Peng
|
Toucha11y: Making Inaccessible Public Touchscreens Accessible
| null | null |
10.1145/3544548.3581254
| null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Despite their growing popularity, many public kiosks with touchscreens are
inaccessible to blind people. Toucha11y is a working prototype that allows
blind users to use existing inaccessible touchscreen kiosks independently and
with little effort. Toucha11y consists of a mechanical bot that can be
instrumented to an arbitrary touchscreen kiosk by a blind user and a companion
app on their smartphone. The bot, once attached to a touchscreen, will
recognize its content, retrieve the corresponding information from a database,
and render it on the user's smartphone. As a result, a blind person can use the
smartphone's built-in accessibility features to access content and make
selections. The mechanical bot will detect and activate the corresponding
touchscreen interface. We present the system design of Toucha11y along with a
series of technical evaluations. Through a user study, we found that
Toucha11y could help blind users operate inaccessible touchscreen devices.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 16:50:59 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Li",
"Jiasheng",
""
],
[
"Yan",
"Zeyu",
""
],
[
"Shah",
"Arush",
""
],
[
"Lazar",
"Jonathan",
""
],
[
"Peng",
"Huaishu",
""
]
] |
new_dataset
| 0.999819 |
2305.04115
|
Ichiro Kawashima Ph.D.
|
Ichiro Kawashima
|
Symmetric Ternary Logic and Its Systematic Logic Composition Methodology
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ternary logic is expected to increase the area efficiency of VLSI due to its
expressiveness compared to the traditional binary logic. This paper proposes a
new symmetric ternary logic and a systematic logic composition methodology that
enables us to design any ternary logic circuits. The methodology is
demonstrated by implementing the ternary inverters, ternary NAND, ternary NOR,
and ternary half-adder operators with the proposed symmetric ternary operators.
|
[
{
"version": "v1",
"created": "Sat, 6 May 2023 18:37:36 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Kawashima",
"Ichiro",
""
]
] |
new_dataset
| 0.994634 |
2305.04203
|
Wenhai Wan
|
Wenhai Wan, Xinrui Wang, Mingkun Xie, Shengjun Huang, Songcan Chen,
Shaoyuan Li
|
Unlocking the Power of Open Set: A New Perspective for Open-set Noisy
Label Learning
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning from noisy data has attracted much attention, where most methods
focus on closed-set label noise. However, a more common scenario in the real
world is the presence of both open-set and closed-set noise. Existing methods
typically identify and handle these two types of label noise separately by
designing a specific strategy for each type. However, in many real-world
scenarios, it would be challenging to identify open-set examples, especially
when the dataset has been severely corrupted. Unlike previous works, we
explore how models behave when faced with open-set examples, and find that some
open-set examples gradually become integrated into certain known classes, which is
beneficial for the separation among known classes. Motivated by this phenomenon,
in this paper we propose a novel two-step contrastive learning method called
CECL, which aims to deal with both types of label noise by exploiting the
useful information of open-set examples. Specifically, we incorporate some
open-set examples into closed-set classes to enhance performance while treating
others as delimiters to improve representative ability. Extensive experiments
on synthetic and real-world datasets with diverse label noise demonstrate that
CECL can outperform state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2023 06:55:28 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Wan",
"Wenhai",
""
],
[
"Wang",
"Xinrui",
""
],
[
"Xie",
"Mingkun",
""
],
[
"Huang",
"Shengjun",
""
],
[
"Chen",
"Songcan",
""
],
[
"Li",
"Shaoyuan",
""
]
] |
new_dataset
| 0.95868 |
2305.04232
|
George Martvel
|
George Martvel and Nareed Farhat and Ilan Shimshoni and Anna Zamansky
|
CatFLW: Cat Facial Landmarks in the Wild Dataset
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Animal affective computing is a quickly growing field of research, where only
recently first efforts to go beyond animal tracking into recognizing their
internal states, such as pain and emotions, have emerged. In most mammals,
facial expressions are an important channel for communicating information about
these states. However, unlike the human domain, there is an acute lack of
datasets that make automation of facial analysis of animals feasible.
This paper aims to fill this gap by presenting a dataset called Cat Facial
Landmarks in the Wild (CatFLW) which contains 2016 images of cat faces in
different environments and conditions, annotated with 48 facial landmarks
specifically chosen for their relationship with underlying musculature, and
relevance to cat-specific facial Action Units (CatFACS). To the best of our
knowledge, this dataset has the largest amount of cat facial landmarks
available.
In addition, we describe a semi-supervised (human-in-the-loop) method of
annotating images with landmarks, used for creating this dataset, which
significantly reduces the annotation time and could be used for creating
similar datasets for other animals.
The dataset is available on request.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2023 09:39:12 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Martvel",
"George",
""
],
[
"Farhat",
"Nareed",
""
],
[
"Shimshoni",
"Ilan",
""
],
[
"Zamansky",
"Anna",
""
]
] |
new_dataset
| 0.999849 |
2305.04311
|
Saul Shanabrook
|
Saul Shanabrook
|
Egg-smol Python: A Pythonic Library for E-graphs
| null | null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
E-graphs have emerged as a versatile data structure with applications in
synthesis, optimization, and verification through techniques such as equality
saturation. This paper introduces Python bindings for the experimental egg-smol
library, which aims to bring the benefits of e-graphs to the Python ecosystem.
The bindings offer a high-level, Pythonic API providing an accessible and
familiar interface for Python users. By integrating e-graph techniques with
Python, we hope to enable collaboration and innovation across various domains
in the scientific computing and machine learning communities. We discuss the
advantages of using Python bindings for both Python and existing egg-smol
users, as well as possible future directions for development.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2023 15:35:17 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Shanabrook",
"Saul",
""
]
] |
new_dataset
| 0.999253 |
2305.04346
|
Maxwell Crouse
|
Maxwell Crouse, Pavan Kapanipathi, Subhajit Chaudhury, Tahira Naseem,
Ramon Astudillo, Achille Fokoue, Tim Klinger
|
Laziness Is a Virtue When It Comes to Compositionality in Neural
Semantic Parsing
|
Accepted to ACL main conference
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Nearly all general-purpose neural semantic parsers generate logical forms in
a strictly top-down autoregressive fashion. Though such systems have achieved
impressive results across a variety of datasets and domains, recent works have
called into question whether they are ultimately limited in their ability to
compositionally generalize. In this work, we approach semantic parsing from,
quite literally, the opposite direction; that is, we introduce a neural
semantic parsing generation method that constructs logical forms from the
bottom up, beginning from the logical form's leaves. The system we introduce is
lazy in that it incrementally builds up a set of potential semantic parses, but
only expands and processes the most promising candidate parses at each
generation step. Such a parsimonious expansion scheme allows the system to
maintain an arbitrarily large set of parse hypotheses that are never realized
and thus incur minimal computational overhead. We evaluate our approach on
compositional generalization; specifically, on the challenging CFQ dataset and
three Text-to-SQL datasets where we show that our novel, bottom-up semantic
parsing technique outperforms general-purpose semantic parsers while also being
competitive with comparable neural parsers that have been designed for each
task.
|
[
{
"version": "v1",
"created": "Sun, 7 May 2023 17:53:08 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Crouse",
"Maxwell",
""
],
[
"Kapanipathi",
"Pavan",
""
],
[
"Chaudhury",
"Subhajit",
""
],
[
"Naseem",
"Tahira",
""
],
[
"Astudillo",
"Ramon",
""
],
[
"Fokoue",
"Achille",
""
],
[
"Klinger",
"Tim",
""
]
] |
new_dataset
| 0.958127 |
2305.04396
|
Yi Liu
|
Yi Liu, Shoukun Xu, Dingwen Zhang, Jungong Han
|
SegGPT Meets Co-Saliency Scene
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
  Co-salient object detection aims at detecting co-existing salient objects
among a group of images. Recently, a generalist model for segmenting everything
in context, called SegGPT, has been gaining public attention. In view of its
breakthrough for segmentation, we probe into its contribution to the task of
co-salient object detection. In this report, we first design a framework to
enable SegGPT for the problem of co-salient object detection. We then evaluate
the performance of SegGPT on this problem on three available datasets. We find
that co-saliency scenes challenge SegGPT due to context discrepancies within a
group of co-saliency images.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 00:19:05 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Liu",
"Yi",
""
],
[
"Xu",
"Shoukun",
""
],
[
"Zhang",
"Dingwen",
""
],
[
"Han",
"Jungong",
""
]
] |
new_dataset
| 0.999685 |
2305.04446
|
Junyu Lu
|
Junyu Lu, Bo Xu, Xiaokun Zhang, Changrong Min, Liang Yang, Hongfei Lin
|
Facilitating Fine-grained Detection of Chinese Toxic Language:
Hierarchical Taxonomy, Resources, and Benchmarks
|
13 pages, 4 figures. The paper has been accepted in ACL 2023
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The widespread dissemination of toxic online posts is increasingly damaging
to society. However, research on detecting toxic language in Chinese has lagged
significantly. Existing datasets lack fine-grained annotation of toxic types
and expressions, and ignore the samples with indirect toxicity. In addition, it
is crucial to introduce lexical knowledge to detect the toxicity of posts,
which has been a challenge for researchers. In this paper, we facilitate the
fine-grained detection of Chinese toxic language. First, we built Monitor Toxic
Frame, a hierarchical taxonomy to analyze toxic types and expressions. Then, a
fine-grained dataset ToxiCN is presented, including both direct and indirect
toxic samples. We also build an insult lexicon containing implicit profanity
and propose Toxic Knowledge Enhancement (TKE) as a benchmark, incorporating the
lexical feature to detect toxic language. In the experimental stage, we
demonstrate the effectiveness of TKE. After that, a systematic quantitative and
qualitative analysis of the findings is given.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 03:50:38 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Lu",
"Junyu",
""
],
[
"Xu",
"Bo",
""
],
[
"Zhang",
"Xiaokun",
""
],
[
"Min",
"Changrong",
""
],
[
"Yang",
"Liang",
""
],
[
"Lin",
"Hongfei",
""
]
] |
new_dataset
| 0.963794 |
2305.04451
|
Anran Lin
|
Anran Lin, Nanxuan Zhao, Shuliang Ning, Yuda Qiu, Baoyuan Wang,
Xiaoguang Han
|
FashionTex: Controllable Virtual Try-on with Text and Texture
|
Accepted to SIGGRAPH 2023 (Conference Proceedings)
| null |
10.1145/3588432.3591568
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Virtual try-on attracts increasing research attention as a promising way for
enhancing the user experience for online cloth shopping. Though existing
methods can generate impressive results, users need to provide a well-designed
reference image containing the target fashion clothes, which often does not exist.
To support user-friendly fashion customization in full-body portraits, we
propose a multi-modal interactive setting that combines the advantages of both
text and texture for multi-level fashion manipulation. With carefully
designed fashion editing modules and loss functions, the FashionTex framework can
semantically control cloth types and local texture patterns without annotated
pairwise training data. We further introduce an ID recovery module to maintain
the identity of the input portrait. Extensive experiments have demonstrated the
effectiveness of our proposed pipeline.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 04:10:36 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Lin",
"Anran",
""
],
[
"Zhao",
"Nanxuan",
""
],
[
"Ning",
"Shuliang",
""
],
[
"Qiu",
"Yuda",
""
],
[
"Wang",
"Baoyuan",
""
],
[
"Han",
"Xiaoguang",
""
]
] |
new_dataset
| 0.994818 |
2305.04497
|
A Venkata Subramanyam
|
A V Subramanyam, Niranjan Sundararajan, Vibhu Dubey, Brejesh Lall
|
IIITD-20K: Dense captioning for Text-Image ReID
| null | null | null | null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-Image (T2I) ReID has attracted a lot of attention in the recent past.
CUHK-PEDES, RSTPReid and ICFG-PEDES are the three available benchmarks to
evaluate T2I ReID methods. RSTPReid and ICFG-PEDES comprise identities from
MSMT17, but due to the limited number of unique persons, their diversity is limited.
On the other hand, CUHK-PEDES comprises 13,003 identities but has relatively
short text descriptions on average. Further, these datasets are captured in a
restricted environment with a limited number of cameras. In order to further
diversify the identities and provide dense captions, we propose a novel dataset
called IIITD-20K. IIITD-20K comprises 20,000 unique identities captured in
the wild and provides a rich dataset for text-to-image ReID. With a minimum of
26 words per description, each image is densely captioned. We further
synthetically generate images and fine-grained captions using Stable Diffusion
and BLIP models trained on our dataset. We perform elaborate experiments using
state-of-the-art text-to-image ReID models and vision-language pre-trained models
and present a comprehensive analysis of the dataset. Our experiments also
reveal that synthetically generated data leads to a substantial performance
improvement in both same dataset as well as cross dataset settings. Our dataset
is available at https://bit.ly/3pkA3Rj.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 06:46:56 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Subramanyam",
"A V",
""
],
[
"Sundararajan",
"Niranjan",
""
],
[
"Dubey",
"Vibhu",
""
],
[
"Lall",
"Brejesh",
""
]
] |
new_dataset
| 0.989951 |
2305.04506
|
Ross Greer
|
Ross Greer, Samveed Desai, Lulua Rakla, Akshay Gopalkrishnan, Afnan
Alofi, Mohan Trivedi
|
Pedestrian Behavior Maps for Safety Advisories: CHAMP Framework and
Real-World Data Analysis
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is critical for vehicles to prevent any collisions with pedestrians.
Current methods for pedestrian collision prevention focus on integrating visual
pedestrian detectors with Automatic Emergency Braking (AEB) systems which can
trigger warnings and apply brakes as a pedestrian enters a vehicle's path.
Unfortunately, pedestrian-detection-based systems can be hindered in certain
situations such as night-time or when pedestrians are occluded. Our system
addresses such issues using an online, map-based pedestrian detection
aggregation system where common pedestrian locations are learned after repeated
passes of locations. Using a carefully collected and annotated dataset in La
Jolla, CA, we demonstrate the system's ability to learn pedestrian zones and
generate advisory notices when a vehicle is approaching a pedestrian despite
challenges like dark lighting or pedestrian occlusion. Using the number of
correct advisories, false advisories, and missed advisories to define precision
and recall performance metrics, we evaluate our system and discuss future
positive effects with further data collection. We have made our code available
at https://github.com/s7desai/ped-mapping, and a video demonstration of the
CHAMP system at https://youtu.be/dxeCrS_Gpkw.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 07:03:26 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Greer",
"Ross",
""
],
[
"Desai",
"Samveed",
""
],
[
"Rakla",
"Lulua",
""
],
[
"Gopalkrishnan",
"Akshay",
""
],
[
"Alofi",
"Afnan",
""
],
[
"Trivedi",
"Mohan",
""
]
] |
new_dataset
| 0.998375 |
2305.04534
|
Jiafeng Zhang
|
Jiafeng Zhang and Xuejing Pu
|
Smart Home Device Detection Algorithm Based on FSA-YOLOv5
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Smart home device detection is a critical aspect of human-computer
interaction. However, detecting targets in indoor environments can be
challenging due to interference from ambient light and background noise. In
this paper, we present a new model called FSA-YOLOv5, which addresses the
limitations of traditional convolutional neural networks by introducing the
Transformer to learn long-range dependencies. Additionally, we propose a new
attention module, the full-separation attention module, which integrates
spatial and channel dimensional information to learn contextual information. To
improve tiny device detection, we include a prediction head for the indoor
smart home device detection task. We also release the Southeast University
Indoor Smart Speaker Dataset (SUSSD) to supplement existing data samples.
Through a series of experiments on SUSSD, we demonstrate that our method
outperforms other methods, highlighting the effectiveness of FSA-YOLOv5.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 08:10:24 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Zhang",
"Jiafeng",
""
],
[
"Pu",
"Xuejing",
""
]
] |
new_dataset
| 0.989317 |
2305.04618
|
Yiming Bian
|
Yiming Bian
|
An LSTM and Cost-Sensitive Learning-Based Real-Time Warning for Civil
Aviation Over-limit
|
7 pages, 6 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The issue of over-limit during passenger aircraft flights has drawn
increasing attention in civil aviation due to its potential safety risks. To
address this issue, real-time automated warning systems are essential. In this
study, a real-time warning model for civil aviation over-limit is proposed
based on QAR data monitoring. Firstly, highly correlated attributes to
over-limit are extracted from a vast QAR dataset using the Spearman rank
correlation coefficient. Because flight over-limit poses a binary
classification problem with unbalanced samples, this paper incorporates
cost-sensitive learning in the LSTM model. Finally, the time step length,
number of LSTM cells, and learning rate in the LSTM model are optimized using a
grid search approach. The model is trained on a real dataset, and its
performance is evaluated on a validation set. The experimental results show
that the proposed model achieves an F1 score of 0.991 and an accuracy of 0.978,
indicating its effectiveness in real-time warning of civil aviation over-limit.
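To make the pipeline concrete, here is an illustrative Python sketch of the three ingredients the abstract names: Spearman-based attribute selection, an LSTM classifier, and cost-sensitive learning via a class-weighted loss. The column names, correlation threshold, and positive-class weight are hypothetical placeholders, not the paper's settings.

```python
import pandas as pd
import torch
import torch.nn as nn

def select_attributes(qar: pd.DataFrame, label: str = "over_limit", threshold: float = 0.3):
    """Keep numeric QAR attributes whose |Spearman correlation| with the label is high."""
    corr = qar.corr(method="spearman", numeric_only=True)[label].abs()
    return [col for col in corr.index if col != label and corr[col] >= threshold]

class OverLimitLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])            # one over-limit logit per sequence

# Cost-sensitive learning: penalize missed over-limit events more than false alarms.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([20.0]))   # weight is illustrative
model = OverLimitLSTM(n_features=12)
logits = model(torch.randn(8, 30, 12))          # 8 flights, 30 time steps, 12 selected attributes
loss = criterion(logits, torch.zeros(8, 1))
```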
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 10:56:06 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Bian",
"Yiming",
""
]
] |
new_dataset
| 0.999655 |
2305.04639
|
Arda Inceoglu
|
Arda Inceoglu, Eren Erdal Aksoy, Sanem Sariel
|
Multimodal Detection and Identification of Robot Manipulation Failures
|
arXiv admin note: text overlap with arXiv:2011.05817
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An autonomous service robot should be able to interact with its environment
safely and robustly without requiring human assistance. Unstructured
environments are challenging for robots since the exact prediction of outcomes
is not always possible. Even when the robot behaviors are well-designed, the
unpredictable nature of physical robot-object interaction may prevent success
in object manipulation. Therefore, execution of a manipulation action may
result in an undesirable outcome involving accidents or damages to the objects
or environment. Situation awareness becomes important in such cases to enable
the robot to (i) maintain the integrity of both itself and the environment,
(ii) recover from failed tasks in the short term, and (iii) learn to avoid
failures in the long term. For this purpose, robot executions should be
continuously monitored, and failures should be detected and classified
appropriately. In this work, we focus on detecting and classifying both
manipulation and post-manipulation phase failures using the same exteroception
setup. We cover a diverse set of failure types for primary tabletop
manipulation actions. In order to detect these failures, we propose FINO-Net
[1], a deep multimodal sensor-fusion-based classifier network. The proposed
network accurately detects and classifies failures from raw sensory data without any
prior knowledge. In this work, we use our extended FAILURE dataset [1] with 99
new multimodal manipulation recordings and annotate them with their
corresponding failure types. FINO-Net achieves 0.87 failure detection and 0.80
failure classification F1 scores. Experimental results show that the proposed
architecture is also appropriate for real-time use.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 11:38:19 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Inceoglu",
"Arda",
""
],
[
"Aksoy",
"Eren Erdal",
""
],
[
"Sariel",
"Sanem",
""
]
] |
new_dataset
| 0.968684 |
2305.04685
|
Chelsea Zou
|
Chelsea Zou, Kishan Chandan, Yan Ding, Shiqi Zhang
|
ARDIE: AR, Dialogue, and Eye Gaze Policies for Human-Robot Collaboration
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Human-robot collaboration (HRC) has become increasingly relevant in
industrial, household, and commercial settings. However, the effectiveness of
such collaborations is highly dependent on the human and robots' situational
awareness of the environment. Improving this awareness includes not only
aligning perceptions in a shared workspace, but also bidirectionally
communicating intent and visualizing different states of the environment to
enhance scene understanding. In this paper, we propose ARDIE (Augmented Reality
with Dialogue and Eye Gaze), a novel intelligent agent that leverages
multi-modal feedback cues to enhance HRC. Our system utilizes a decision
theoretic framework to formulate a joint policy that incorporates interactive
augmented reality (AR), natural language, and eye gaze to portray current and
future states of the environment. Through object-specific AR renders, the human
can visualize future object interactions to make adjustments as needed,
ultimately providing an interactive and efficient collaboration between humans
and robots.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 13:01:27 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Zou",
"Chelsea",
""
],
[
"Chandan",
"Kishan",
""
],
[
"Ding",
"Yan",
""
],
[
"Zhang",
"Shiqi",
""
]
] |
new_dataset
| 0.995912 |
2305.04719
|
Zhiling Yan
|
Shaozu Yuan, Aijun Dai, Zhiling Yan, Ruixue Liu, Meng Chen, Baoyang
Chen, Zhijie Qiu, Xiaodong He
|
Learning to Generate Poetic Chinese Landscape Painting with Calligraphy
|
Accepted by IJCAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a novel system (denoted as Polaca) to generate
poetic Chinese landscape painting with calligraphy. Unlike previous single
image-to-image painting generation, Polaca takes the classic poetry as input
and outputs the artistic landscape painting image with the corresponding
calligraphy. It is equipped with three different modules to complete the whole
piece of landscape painting artwork: the first one is a text-to-image module to
generate landscape painting image, the second one is an image-to-image module
to generate stylistic calligraphy image, and the third one is an image fusion
module to fuse the two images into a whole piece of aesthetic artwork.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 14:10:10 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Yuan",
"Shaozu",
""
],
[
"Dai",
"Aijun",
""
],
[
"Yan",
"Zhiling",
""
],
[
"Liu",
"Ruixue",
""
],
[
"Chen",
"Meng",
""
],
[
"Chen",
"Baoyang",
""
],
[
"Qiu",
"Zhijie",
""
],
[
"He",
"Xiaodong",
""
]
] |
new_dataset
| 0.990058 |
2305.04723
|
Collin Connors
|
Collin Connors, Dilip Sarkar
|
PBL: System for Creating and Maintaining Personal Blockchain Ledgers
| null | null | null | null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchain technology has experienced substantial growth in recent years, yet
the diversity of blockchain applications has been limited. Blockchain provides
many desirable features for applications, including being append-only,
immutable, tamper-evident, tamper-resistant, and fault-tolerant; however, many
applications that would benefit from these features cannot incorporate current
blockchains. This work presents a novel architecture for creating and
maintaining personal blockchain ledgers that address these concerns. Our system
utilizes independent modular services, enabling individuals to securely store
their data in a personal blockchain ledger. Unlike traditional blockchain,
which stores all transactions of multiple users, our novel personal blockchains
are designed to allow individuals to maintain their privacy without requiring
extensive technical expertise. Using rigorous mathematical methods, we prove
that our system produces append-only, immutable, tamper-evident,
tamper-resistant ledgers. Our system addresses use cases not addressed by
traditional blockchain development platforms. Our system creates a new
blockchain paradigm, enabling more individuals and applications to leverage
blockchain technology for their needs.
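To illustrate the properties the paper proves (append-only, tamper-evident hash chaining), here is a generic, minimal hash-chain ledger in Python; it is a sketch of the general idea only, not the authors' modular service architecture.

```python
import hashlib
import json
import time

class PersonalLedger:
    """Append-only hash chain: each block commits to the previous block's hash."""

    def __init__(self):
        self.blocks = [{"index": 0, "data": "genesis", "prev": "0" * 64, "ts": time.time()}]
        self.blocks[0]["hash"] = self._digest(self.blocks[0])

    @staticmethod
    def _digest(block):
        payload = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

    def append(self, data):
        block = {"index": len(self.blocks), "data": data,
                 "prev": self.blocks[-1]["hash"], "ts": time.time()}
        block["hash"] = self._digest(block)
        self.blocks.append(block)

    def verify(self):
        # Tamper-evidence: changing any past block breaks its hash or the next block's "prev" link.
        return all(b["hash"] == self._digest(b) and
                   (i == 0 or b["prev"] == self.blocks[i - 1]["hash"])
                   for i, b in enumerate(self.blocks))

# Usage sketch with a hypothetical record type.
ledger = PersonalLedger()
ledger.append({"type": "health_record", "note": "example entry"})
assert ledger.verify()
```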
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 14:17:27 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Connors",
"Collin",
""
],
[
"Sarkar",
"Dilip",
""
]
] |
new_dataset
| 0.984433 |
2305.04764
|
Chen Zhi
|
Zhuokui Xie, Yinghao Chen, Chen Zhi, Shuiguang Deng, Jianwei Yin
|
ChatUniTest: a ChatGPT-based automated unit test generation tool
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Unit testing is a crucial, yet often tedious and time-consuming task. To
relieve developers from this burden, automated unit test generation techniques
are developed. Existing automated unit test generation tools, such as
program-analysis-based tools like EvoSuite and Randoop, lack program
comprehension, resulting in unit tests with poor readability and limited
assertions. Language-model-based tools, such as AthenaTest and A3Test, have
limitations in the generation of correct unit tests. In this paper, we
introduce ChatUniTest, a ChatGPT-based automated unit test generation tool
developed under the Generation-Validation-Repair framework. ChatUniTest
generates tests by parsing the project, extracting essential information, and
creating an adaptive focal context that includes the focal method and its
dependencies within the pre-defined maximum prompt token limit. The context is
incorporated into a prompt and subsequently submitted to ChatGPT. Once
ChatGPT's response is received, ChatUniTest proceeds to extract the raw test
from the response. It then validates the test and employs rule-based repair to
fix syntactic and simple compile errors, followed by ChatGPT-based repair to
address challenging errors. Our rigorous evaluation demonstrates that
ChatUniTest outperforms EvoSuite in branch and line coverage, surpasses
AthenaTest and A3Test in focal method coverage, and effectively generates
assertions while utilizing mock objects and reflection to achieve test
objectives.
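The Generation-Validation-Repair loop can be sketched abstractly as below; llm and compile_and_run are hypothetical callables supplied by the caller (for example, a chat-model client and a JUnit runner), and the prompt strings are placeholders rather than ChatUniTest's actual prompts.

```python
from typing import Callable, Optional, Tuple

def generate_unit_test(focal_context: str,
                       llm: Callable[[str], str],
                       compile_and_run: Callable[[str], Tuple[bool, str]],
                       max_repairs: int = 3) -> Optional[str]:
    """Generation-Validation-Repair loop for one focal method (illustrative sketch)."""
    test_code = llm(f"Write a JUnit test for the following method and its context:\n{focal_context}")
    for _ in range(max_repairs):
        ok, error_log = compile_and_run(test_code)          # validation step
        if ok:
            return test_code                                 # test compiles and passes
        test_code = llm(                                     # repair step: feed the error back
            f"The following test fails to compile or pass:\n{test_code}\n"
            f"Error log:\n{error_log}\nReturn a corrected test."
        )
    return None                                              # give up after max_repairs attempts
```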
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 15:12:07 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Xie",
"Zhuokui",
""
],
[
"Chen",
"Yinghao",
""
],
[
"Zhi",
"Chen",
""
],
[
"Deng",
"Shuiguang",
""
],
[
"Yin",
"Jianwei",
""
]
] |
new_dataset
| 0.999113 |
2305.04769
|
Elahe Arani
|
Kishaan Jeeveswaran, Prashant Bhat, Bahram Zonooz, Elahe Arani
|
BiRT: Bio-inspired Replay in Vision Transformers for Continual Learning
|
Accepted at 40th International Conference on Machine Learning (ICML
2023)
| null | null | null |
cs.CV cs.LG cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The ability of deep neural networks to continually learn and adapt to a
sequence of tasks has remained challenging due to catastrophic forgetting of
previously learned tasks. Humans, on the other hand, have a remarkable ability
to acquire, assimilate, and transfer knowledge across tasks throughout their
lifetime without catastrophic forgetting. The versatility of the brain can be
attributed to the rehearsal of abstract experiences through a complementary
learning system. However, representation rehearsal in vision transformers lacks
diversity, resulting in overfitting and consequently, performance drops
significantly compared to raw image rehearsal. Therefore, we propose BiRT, a
novel representation rehearsal-based continual learning approach using vision
transformers. Specifically, we introduce constructive noises at various stages
of the vision transformer and enforce consistency in predictions with respect
to an exponential moving average of the working model. Our method provides
consistent performance gain over raw image and vanilla representation rehearsal
on several challenging CL benchmarks, while being memory efficient and robust
to natural and adversarial corruptions.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 15:19:39 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Jeeveswaran",
"Kishaan",
""
],
[
"Bhat",
"Prashant",
""
],
[
"Zonooz",
"Bahram",
""
],
[
"Arani",
"Elahe",
""
]
] |
new_dataset
| 0.996354 |
2305.04773
|
Baxi Chong
|
Baxi Chong, Juntao He, Daniel Soto, Tianyu Wang, Daniel Irvine,
Grigoriy Blekherman, Daniel I. Goldman
|
Multi-legged matter transport: a framework for locomotion on noisy
landscapes
| null | null |
10.1126/science.ade4985
| null |
cs.RO physics.app-ph
|
http://creativecommons.org/licenses/by/4.0/
|
While the transport of matter by wheeled vehicles or legged robots can be
guaranteed in engineered landscapes like roads or rails, locomotion prediction
in complex environments like collapsed buildings or crop fields remains
challenging. Inspired by principles of information transmission which allow
signals to be reliably transmitted over noisy channels, we develop a ``matter
transport" framework demonstrating that non-inertial locomotion can be provably
generated over ``noisy" rugose landscapes (heterogeneities on the scale of
locomotor dimensions). Experiments confirm that sufficient spatial redundancy
in the form of serially-connected legged robots leads to reliable transport on
such terrain without requiring sensing and control. Further analogies from
communication theory coupled to advances in gaits (coding) and sensor-based
feedback control (error detection/correction) can lead to agile locomotion in
complex terradynamic regimes.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 15:25:36 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Chong",
"Baxi",
""
],
[
"He",
"Juntao",
""
],
[
"Soto",
"Daniel",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Irvine",
"Daniel",
""
],
[
"Blekherman",
"Grigoriy",
""
],
[
"Goldman",
"Daniel I.",
""
]
] |
new_dataset
| 0.992108 |
2305.04774
|
Yuanxing Liu
|
Yuanxing Liu, Weinan Zhang, Baohua Dong, Yan Fan, Hang Wang, Fan Feng,
Yifan Chen, Ziyu Zhuang, Hengbin Cui, Yongbin Li, Wanxiang Che
|
U-NEED: A Fine-grained Dataset for User Needs-Centric E-commerce
Conversational Recommendation
|
SIGIR23 Resource Track
| null |
10.1145/3539618.3591878
| null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conversational recommender systems (CRSs) aim to understand the information
needs and preferences expressed in a dialogue to recommend suitable items to
the user. Most of the existing conversational recommendation datasets are
synthesized or simulated with crowdsourcing, which has a large gap with
real-world scenarios. To bridge the gap, previous work contributes a dataset
E-ConvRec, based on pre-sales dialogues between users and customer service
staff in E-commerce scenarios. However, E-ConvRec only supplies coarse-grained
annotations and general tasks for making recommendations in pre-sales
dialogues. Different from that, we use real user needs as a clue to explore the
E-commerce conversational recommendation in complex pre-sales dialogues, namely
user needs-centric E-commerce conversational recommendation (UNECR).
In this paper, we construct a user needs-centric E-commerce conversational
recommendation dataset (U-NEED) from real-world E-commerce scenarios. U-NEED
consists of 3 types of resources: (i) 7,698 fine-grained annotated pre-sales
dialogues in 5 top categories, (ii) 333,879 user behaviors, and (iii) 332,148
product knowledge tuples. To facilitate research on UNECR, we propose 5
critical tasks: (i) pre-sales dialogue understanding, (ii) user needs
elicitation, (iii) user needs-based recommendation, (iv) pre-sales dialogue
generation, and (v) pre-sales dialogue evaluation. We establish baseline methods
and evaluation metrics for each task. We report experimental results of 5 tasks
on U-NEED. We also report results in 3 typical categories. Experimental results
indicate that the challenges of UNECR in various categories are different.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 01:44:35 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Liu",
"Yuanxing",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Dong",
"Baohua",
""
],
[
"Fan",
"Yan",
""
],
[
"Wang",
"Hang",
""
],
[
"Feng",
"Fan",
""
],
[
"Chen",
"Yifan",
""
],
[
"Zhuang",
"Ziyu",
""
],
[
"Cui",
"Hengbin",
""
],
[
"Li",
"Yongbin",
""
],
[
"Che",
"Wanxiang",
""
]
] |
new_dataset
| 0.999801 |
2305.04789
|
Zerong Zheng
|
Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu
|
AvatarReX: Real-time Expressive Full-body Avatars
|
To appear in SIGGRAPH 2023 Journal Track. Project page at
https://liuyebin.com/AvatarRex/
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present AvatarReX, a new method for learning NeRF-based full-body avatars
from video data. The learnt avatar not only provides expressive control of the
body, hands and the face together, but also supports real-time animation and
rendering. To this end, we propose a compositional avatar representation, where
the body, hands and the face are separately modeled in a way that the
structural prior from parametric mesh templates is properly utilized without
compromising representation flexibility. Furthermore, we disentangle the
geometry and appearance for each part. With these technical designs, we propose
a dedicated deferred rendering pipeline, which can be executed in real-time
framerate to synthesize high-quality free-view images. The disentanglement of
geometry and appearance also allows us to design a two-pass training strategy
that combines volume rendering and surface rendering for network training. In
this way, patch-level supervision can be applied to force the network to learn
sharp appearance details on the basis of geometry estimation. Overall, our
method enables automatic construction of expressive full-body avatars with
real-time rendering capability, and can generate photo-realistic images with
dynamic details for novel body motions and facial expressions.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 15:43:00 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Zheng",
"Zerong",
""
],
[
"Zhao",
"Xiaochen",
""
],
[
"Zhang",
"Hongwen",
""
],
[
"Liu",
"Boning",
""
],
[
"Liu",
"Yebin",
""
]
] |
new_dataset
| 0.997821 |
2305.04793
|
Martin Gruber
|
Martin Gruber, Gordon Fraser
|
FlaPy: Mining Flaky Python Tests at Scale
|
5 pages, to be presented on the DEMO track of the 45th International
Conference on Software Engineering (ICSE-DEMO)
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flaky tests obstruct software development, and studying and proposing
mitigations against them has therefore become an important focus of software
engineering research. To conduct sound investigations on test flakiness, it is
crucial to have large, diverse, and unbiased datasets of flaky tests. A common
method to build such datasets is by rerunning the test suites of selected
projects multiple times and checking for tests that produce different outcomes.
While using this technique on a single project is mostly straightforward,
applying it to a large and diverse set of projects raises several
implementation challenges such as (1) isolating the test executions, (2)
supporting multiple build mechanisms, (3) achieving feasible run times on large
datasets, and (4) analyzing and presenting the test outcomes. To address these
challenges we introduce FlaPy, a framework for researchers to mine flaky tests
in a given or automatically sampled set of Python projects by rerunning their
test suites. FlaPy isolates the test executions using containerization and
fresh execution environments to simulate real-world CI conditions and to
achieve accurate results. By supporting multiple dependency installation
strategies, it promotes diversity among the studied projects. FlaPy supports
parallelizing the test executions using SLURM, making it feasible to scan
thousands of projects for test flakiness. Finally, FlaPy analyzes the test
outcomes to determine which tests are flaky and depicts the results in a
concise table. A demo video of FlaPy is available at
https://youtu.be/ejy-be-FvDY
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 15:48:57 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Gruber",
"Martin",
""
],
[
"Fraser",
"Gordon",
""
]
] |
new_dataset
| 0.999013 |
2305.04804
|
Markku M\"akitalo
|
Erfan Momeni Yazdi, Markku M\"akitalo, Julius Ikkala, Pekka
J\"a\"askel\"ainen
|
TauBench 1.1: A Dynamic Benchmark for Graphics Rendering
|
The dataset is downloadable at https://zenodo.org/record/7906987
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Many graphics rendering algorithms used in both real-time games and virtual
reality applications can get performance boosts by temporally reusing previous
computations. However, algorithms based on temporal reuse are typically
measured using trivial benchmarks with very limited dynamic features. To this
end, in [1] we presented TauBench 1.0, a benchmark designed to stress temporal
reuse algorithms. Now, we release TauBench version 1.1, which improves the
usability of the original benchmark. In particular, these improvements reduce
the size of the dataset significantly, resulting in faster loading and
rendering times, and in better compatibility with 3D software that imposes
strict size limits on scenes.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 16:02:43 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Yazdi",
"Erfan Momeni",
""
],
[
"Mäkitalo",
"Markku",
""
],
[
"Ikkala",
"Julius",
""
],
[
"Jääskeläinen",
"Pekka",
""
]
] |
new_dataset
| 0.998021 |
2305.04825
|
Rob Procter
|
Wenjia Zhang and Lin Gui and Rob Procter and Yulan He
|
NewsQuote: A Dataset Built on Quote Extraction and Attribution for
Expert Recommendation in Fact-Checking
|
11 pages, 5 figures. 17TH International AAAI Conference on Web and
Social Media; Mediate 2023: News Media and Computational Journalism Workshop
| null | null | null |
cs.IR cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
To enhance the ability to find credible evidence in news articles, we propose
a novel task of expert recommendation, which aims to identify trustworthy
experts on a specific news topic. To achieve this aim, we describe the
construction of a novel NewsQuote dataset consisting of 24,031 quote-speaker
pairs that appeared in a COVID-19 news corpus. We demonstrate an automatic
pipeline for speaker and quote extraction via a BERT-based Question Answering
model. Then, we formulate expert recommendation as a document retrieval task,
first retrieving relevant quotes as an intermediate step for expert
identification, and as an expert retrieval task, directly retrieving sources
based on the probability of a query conditional on a candidate expert.
Experimental results on NewsQuote show that document retrieval is more
effective in identifying relevant experts for a given news topic than expert
retrieval.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 11:10:48 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Zhang",
"Wenjia",
""
],
[
"Gui",
"Lin",
""
],
[
"Procter",
"Rob",
""
],
[
"He",
"Yulan",
""
]
] |
new_dataset
| 0.999802 |
2305.04851
|
Mikhail Kurenkov
|
Nikolay Zherdev, Mikhail Kurenkov, Kristina Belikova and Dzmitry
Tsetserukou
|
SwipeBot: DNN-based Autonomous Robot Navigation among Movable Obstacles
in Cluttered Environments
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel approach to wheeled robot navigation
through an environment with movable obstacles. A robot exploits knowledge about
different obstacle classes and selects the minimally invasive action to perform
to clear the path. We trained a convolutional neural network (CNN), so the
robot can classify an RGB-D image and decide whether to push a blocking object
and which force to apply. After known objects are segmented, they are
projected onto a cost map, and the robot calculates an optimal path to the goal.
If the blocking objects are allowed to be moved, the robot drives through them
while pushing them away. We implemented our algorithm in ROS, and an extensive
set of simulations showed that the robot successfully overcomes blocked regions.
Our approach allows a robot to build a path through regions where it would have
become stuck with traditional path-planning techniques.
|
[
{
"version": "v1",
"created": "Mon, 8 May 2023 16:49:32 GMT"
}
] | 2023-05-09T00:00:00 |
[
[
"Zherdev",
"Nikolay",
""
],
[
"Kurenkov",
"Mikhail",
""
],
[
"Belikova",
"Kristina",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.998515 |
2107.09199
|
Bashir Mohammad Sabquat Bahar Talukder
|
B. M. S. Bahar Talukder, Farah Ferdaus, and Md Tauhidur Rahman
|
A Non-invasive Technique to Detect Authentic/Counterfeit SRAM Chips
| null |
ACM Journal on Emerging Technologies in Computing Systems, 2023
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many commercially available memory chips are fabricated worldwide in
untrusted facilities. Therefore, a counterfeit memory chip can easily enter
the supply chain in different formats. Deploying these counterfeit memory
chips into an electronic system can severely affect security and reliability
domains because of their sub-standard quality, poor performance, and shorter
lifespan. Therefore, a proper solution is required to identify counterfeit
memory chips before deploying them in mission-, safety-, and security-critical
systems. However, a single solution to prevent counterfeiting is challenging
due to the diversity of counterfeit types, sources, and refinement techniques.
Besides, the chips can pass initial testing and still fail while being used in
the system. Furthermore, existing solutions focus on detecting a single
counterfeit type (e.g., detecting recycled memory chips). This work proposes a
framework that detects major counterfeit static random-access memory (SRAM)
types by attesting/identifying the origin of the manufacturer. The proposed
technique generates a single signature for a manufacturer and does not require
any exhaustive registration/authentication process. We validate our proposed
technique using 345 SRAM chips produced by major manufacturers. The silicon
results show that the test scores ($F_{1}$ score) of our proposed technique for
identifying the memory manufacturer and part number are 93% and 71%, respectively.
|
[
{
"version": "v1",
"created": "Mon, 19 Jul 2021 23:40:03 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Dec 2021 16:36:58 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2023 06:58:32 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Talukder",
"B. M. S. Bahar",
""
],
[
"Ferdaus",
"Farah",
""
],
[
"Rahman",
"Md Tauhidur",
""
]
] |
new_dataset
| 0.98581 |
2108.11590
|
Wasi Uddin Ahmad
|
Wasi Uddin Ahmad, Md Golam Rahman Tushar, Saikat Chakraborty, Kai-Wei
Chang
|
AVATAR: A Parallel Corpus for Java-Python Program Translation
|
Accepted to Findings of ACL 2023
| null | null | null |
cs.SE cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Program translation refers to migrating source code from one programming
language to another. It has tremendous practical value in software development,
as porting software across languages is time-consuming and costly. Automating
program translation is of paramount importance in software migration, and
recently researchers explored unsupervised approaches due to the unavailability
of parallel corpora. However, the availability of pre-trained language models
for programming languages enables supervised fine-tuning with a small number of
labeled examples. Therefore, we present AVATAR, a collection of 9,515
programming problems and their solutions written in two popular languages, Java
and Python. AVATAR is collected from competitive programming sites, online
platforms, and open-source repositories. Furthermore, AVATAR includes unit
tests for 250 examples to facilitate functional correctness evaluation. We
benchmark several pre-trained language models fine-tuned on AVATAR. Experimental
results show that the models fall short of generating functionally accurate code.
|
[
{
"version": "v1",
"created": "Thu, 26 Aug 2021 05:44:20 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 20:22:25 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Tushar",
"Md Golam Rahman",
""
],
[
"Chakraborty",
"Saikat",
""
],
[
"Chang",
"Kai-Wei",
""
]
] |
new_dataset
| 0.998237 |
2202.04101
|
Constantino \'Alvarez Casado
|
Constantino \'Alvarez Casado and Miguel Bordallo L\'opez
|
Face2PPG: An unsupervised pipeline for blood volume pulse extraction
from faces
|
23 pages, 13 figures, 4 tables, 3 equations
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Photoplethysmography (PPG) signals have become a key technology in many
fields, such as medicine, well-being, or sports. Our work proposes a set of
pipelines to extract remote PPG signals (rPPG) from the face in a robust,
reliable, and configurable manner. We identify and evaluate the possible choices in
the critical steps of unsupervised rPPG methodologies. We assess a
state-of-the-art processing pipeline in six different datasets, incorporating
important corrections in the methodology that ensure reproducible and fair
comparisons. In addition, we extend the pipeline by proposing three novel
ideas: 1) a new method to stabilize the detected face based on a rigid mesh
normalization; 2) a new method to dynamically select the different regions in
the face that provide the best raw signals, and 3) a new RGB to rPPG
transformation method, called Orthogonal Matrix Image Transformation (OMIT)
based on QR decomposition, that increases robustness against compression
artifacts. We show that all three changes introduce noticeable improvements in
retrieving rPPG signals from faces, obtaining state-of-the-art results compared
with unsupervised, non-learning-based methodologies and, in some databases,
very close to supervised, learning-based methods. We perform a comparative
study to quantify the contribution of each proposed idea. In addition, we
depict a series of observations that could help in future implementations.
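  For illustration only, a minimal NumPy sketch of how a QR-decomposition-based
  RGB-to-pulse projection in the spirit of OMIT could look; the helper name,
  the choice of component, and the toy signal are assumptions, not the paper's
  exact formulation or released code.

  import numpy as np

  def omit_like_projection(rgb_traces: np.ndarray) -> np.ndarray:
      # rgb_traces: (3, T) temporally averaged R, G, B face traces.
      # Returns a 1-D pulse-like signal of length T.
      X = rgb_traces - rgb_traces.mean(axis=1, keepdims=True)   # zero-mean per channel
      Q, _ = np.linalg.qr(X @ X.T)                              # orthonormal channel basis
      S = Q.T @ X                                               # traces in the orthogonal basis
      return S[1]                                               # secondary component as pulse estimate

  # Toy usage: 10 s at 30 fps with a 1.2 Hz pulse hidden mostly in the green channel.
  t = np.linspace(0.0, 10.0, 300)
  rgb = np.vstack([0.05 * np.random.randn(300),
                   0.2 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(300),
                   0.05 * np.random.randn(300)])
  pulse = omit_like_projection(rgb)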
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 19:06:20 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Feb 2022 07:19:26 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2023 18:25:35 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Casado",
"Constantino Álvarez",
""
],
[
"López",
"Miguel Bordallo",
""
]
] |
new_dataset
| 0.98285 |
2203.05823
|
Hanlei Zhang
|
Hanlei Zhang, Hua Xu, Shaojie Zhao, Qianrui Zhou
|
Learning Discriminative Representations and Decision Boundaries for Open
Intent Detection
|
Accepted by IEEE/ACM TASLP
|
IEEE/ACM Transactions on Audio, Speech, and Language Processing
2023
|
10.1109/TASLP.2023.3265203
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open intent detection is a significant problem in natural language
understanding, which aims to identify the unseen open intent while ensuring
known intent identification performance. However, current methods face two
major challenges. Firstly, they struggle to learn friendly representations to
detect the open intent with prior knowledge of only known intents. Secondly,
there is a lack of an effective approach to obtaining specific and compact
decision boundaries for known intents. To address these issues, this paper
presents an original framework called DA-ADB, which successively learns
distance-aware intent representations and adaptive decision boundaries for open
intent detection. Specifically, we first leverage distance information to
enhance the distinguishing capability of the intent representations. Then, we
design a novel loss function to obtain appropriate decision boundaries by
balancing both empirical and open space risks. Extensive experiments
demonstrate the effectiveness of the proposed distance-aware and boundary
learning strategies. Compared to state-of-the-art methods, our framework
achieves substantial improvements on three benchmark datasets. Furthermore, it
yields robust performance with varying proportions of labeled data and known
categories.
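  As a rough illustration of how a boundary loss can balance empirical and open
  space risks (an assumed form, not the paper's released code): samples falling
  outside their class boundary enlarge the radius, while samples far inside it
  shrink the radius.

  import numpy as np

  def boundary_loss(features, labels, centroids, radii):
      # features:  (N, D) intent representations of known-intent training samples
      # labels:    (N,)   known-intent class ids
      # centroids: (K, D) per-class centers
      # radii:     (K,)   per-class decision-boundary radii (learnable in practice)
      dist = np.linalg.norm(features - centroids[labels], axis=1)  # distance to own center
      delta = radii[labels]
      outside = dist > delta
      # Outside samples contribute (dist - delta): empirical risk pushes the boundary out.
      # Inside samples contribute (delta - dist): open-space risk pulls the boundary in.
      return np.where(outside, dist - delta, delta - dist).mean()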
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 10:02:09 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jul 2022 03:10:44 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2023 15:02:53 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Zhang",
"Hanlei",
""
],
[
"Xu",
"Hua",
""
],
[
"Zhao",
"Shaojie",
""
],
[
"Zhou",
"Qianrui",
""
]
] |
new_dataset
| 0.995574 |
2204.01392
|
Libor Pol\v{c}\'ak
|
Libor Pol\v{c}\'ak (1), Marek Salo\v{n} (1), Giorgio Maone (2), Radek
Hranick\'y (1), Michael McMahon (3) ((1) Faculty of Information Technology,
Brno University of Technology, Brno, Czech Republic, (2) Hackademix, Palermo,
Italy, (3) Free Software Foundation, Boston, MA, USA)
|
JShelter: Give Me My Browser Back
|
Paper update after internal review, update according to the latest
development, transform into extended version of the SECRYPT paper that was
accepted
|
Libor Pol\v{c}\'ak, Marek Salo\v{n}, Giorgio Maone, Radek
Hranick\'y, and Michael McMahon. JShelter: Give Me My Browser Back. In
SECRYPT 2023 (Rome, IT). SciTePress
| null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The web is used daily by billions. Even so, users are not protected from many
threats by default. This position paper builds on previous web privacy and
security research and introduces JShelter, a webextension that fights to return
the browser to users. Moreover, we introduce a library helping with common
webextension development tasks and fixing loopholes misused by previous
research. JShelter focuses on fingerprinting prevention, limitations of rich
web APIs, prevention of attacks connected to timing, and learning information
about the device, the browser, the user, and surrounding physical environment
and location. We discovered a loophole in the sensor timestamps that lets any
page observe the device boot time if sensor APIs are enabled in Chromium-based
browsers. JShelter provides a fingerprinting report and other feedback that can
be used by future security research and data protection authorities. Thousands
of users around the world use the webextension every day.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2022 11:20:45 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Jun 2022 10:08:19 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2023 10:18:38 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Polčák",
"Libor",
""
],
[
"Saloň",
"Marek",
""
],
[
"Maone",
"Giorgio",
""
],
[
"Hranický",
"Radek",
""
],
[
"McMahon",
"Michael",
""
]
] |
new_dataset
| 0.984838 |
2207.10739
|
Paul Kassianik
|
Paul Kassianik, Erik Nijkamp, Bo Pang, Yingbo Zhou, Caiming Xiong
|
BigIssue: A Realistic Bug Localization Benchmark
| null | null | null | null |
cs.LG cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
As machine learning tools progress, the inevitable question arises: How can
machine learning help us write better code? With significant progress being
achieved in natural language processing with models like GPT-3 and Bert, the
applications of natural language processing techniques to code are starting to
be explored. Most of the research has been focused on automatic program repair
(APR), and while the results on synthetic or highly filtered datasets are
promising, such models are hard to apply in real-world scenarios because of
inadequate bug localization. We propose BigIssue: a benchmark for realistic bug
localization. The goal of the benchmark is two-fold. We provide (1) a general
benchmark with a diversity of real and synthetic Java bugs and (2) a motivation
to improve bug localization capabilities of models through attention to the
full repository context. With the introduction of BigIssue, we hope to advance
the state of the art in bug localization, in turn improving APR performance and
increasing its applicability to the modern development cycle.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 20:17:53 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 22:31:12 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Kassianik",
"Paul",
""
],
[
"Nijkamp",
"Erik",
""
],
[
"Pang",
"Bo",
""
],
[
"Zhou",
"Yingbo",
""
],
[
"Xiong",
"Caiming",
""
]
] |
new_dataset
| 0.986537 |
2208.01779
|
James Noeckel
|
James Noeckel, Benjamin T. Jones, Karl Willis, Brian Curless, Adriana
Schulz
|
Mates2Motion: Learning How Mechanical CAD Assemblies Work
|
Contains 5 pages, 2 figures. Presented at the ICML 2022 Workshop on
Machine Learning in Computational Design
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We describe our work on inferring the degrees of freedom between mated parts
in mechanical assemblies using deep learning on CAD representations. We train
our model using a large dataset of real-world mechanical assemblies consisting
of CAD parts and mates joining them together. We present methods for
re-defining these mates to make them better reflect the motion of the assembly,
as well as narrowing down the possible axes of motion. We also conduct a user
study to create a motion-annotated test set with more reliable labels.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 23:12:37 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 22:39:40 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Noeckel",
"James",
""
],
[
"Jones",
"Benjamin T.",
""
],
[
"Willis",
"Karl",
""
],
[
"Curless",
"Brian",
""
],
[
"Schulz",
"Adriana",
""
]
] |
new_dataset
| 0.999601 |
2210.04620
|
Jean Ogier du Terrail
|
Jean Ogier du Terrail, Samy-Safwan Ayed, Edwige Cyffers, Felix
Grimberg, Chaoyang He, Regis Loeb, Paul Mangold, Tanguy Marchand, Othmane
Marfoq, Erum Mushtaq, Boris Muzellec, Constantin Philippenko, Santiago Silva,
Maria Tele\'nczuk, Shadi Albarqouni, Salman Avestimehr, Aur\'elien Bellet,
Aymeric Dieuleveut, Martin Jaggi, Sai Praneeth Karimireddy, Marco Lorenzi,
Giovanni Neglia, Marc Tommasi, Mathieu Andreux
|
FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in
Realistic Healthcare Settings
|
Accepted to NeurIPS, Datasets and Benchmarks Track, this version
fixes typos in the datasets' table and the appendix
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated Learning (FL) is a novel approach enabling several clients holding
sensitive data to collaboratively train machine learning models, without
centralizing data. The cross-silo FL setting corresponds to the case of few
($2$--$50$) reliable clients, each holding medium to large datasets, and is
typically found in applications such as healthcare, finance, or industry. While
previous works have proposed representative datasets for cross-device FL, few
realistic healthcare cross-silo FL datasets exist, thereby slowing algorithmic
research in this critical application. In this work, we propose a novel
cross-silo dataset suite focused on healthcare, FLamby (Federated Learning
AMple Benchmark of Your cross-silo strategies), to bridge the gap between
theory and practice of cross-silo FL. FLamby encompasses 7 healthcare datasets
with natural splits, covering multiple tasks, modalities, and data volumes,
each accompanied by baseline training code. As an illustration, we
additionally benchmark standard FL algorithms on all datasets. Our flexible and
modular suite allows researchers to easily download datasets, reproduce results
and re-use the different components for their research. FLamby is available
at~\url{www.github.com/owkin/flamby}.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 12:17:30 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Oct 2022 08:22:16 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2023 08:48:12 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Terrail",
"Jean Ogier du",
""
],
[
"Ayed",
"Samy-Safwan",
""
],
[
"Cyffers",
"Edwige",
""
],
[
"Grimberg",
"Felix",
""
],
[
"He",
"Chaoyang",
""
],
[
"Loeb",
"Regis",
""
],
[
"Mangold",
"Paul",
""
],
[
"Marchand",
"Tanguy",
""
],
[
"Marfoq",
"Othmane",
""
],
[
"Mushtaq",
"Erum",
""
],
[
"Muzellec",
"Boris",
""
],
[
"Philippenko",
"Constantin",
""
],
[
"Silva",
"Santiago",
""
],
[
"Teleńczuk",
"Maria",
""
],
[
"Albarqouni",
"Shadi",
""
],
[
"Avestimehr",
"Salman",
""
],
[
"Bellet",
"Aurélien",
""
],
[
"Dieuleveut",
"Aymeric",
""
],
[
"Jaggi",
"Martin",
""
],
[
"Karimireddy",
"Sai Praneeth",
""
],
[
"Lorenzi",
"Marco",
""
],
[
"Neglia",
"Giovanni",
""
],
[
"Tommasi",
"Marc",
""
],
[
"Andreux",
"Mathieu",
""
]
] |
new_dataset
| 0.995783 |
2210.04941
|
Nathaniel Hanson
|
Nathaniel Hanson, Wesley Lewis, Kavya Puthuveetil, Donelle Furline,
Akhil Padmanabha, Ta\c{s}k{\i}n Pad{\i}r, Zackory Erickson
|
SLURP! Spectroscopy of Liquids Using Robot Pre-Touch Sensing
| null | null | null | null |
cs.RO eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Liquids and granular media are pervasive throughout human environments. Their
free-flowing nature causes people to constrain them into containers. We do so
with thousands of different types of containers made out of different materials
with varying sizes, shapes, and colors. In this work, we present a
state-of-the-art sensing technique for robots to perceive what liquid is inside
of an unknown container. We do so by integrating Visible to Near Infrared
(VNIR) reflectance spectroscopy into a robot's end effector. We introduce a
hierarchical model for inferring the material classes of both containers and
internal contents given spectral measurements from two integrated
spectrometers. To train these inference models, we capture and open source a
dataset of spectral measurements from over 180 different combinations of
containers and liquids. Our technique demonstrates over 85% accuracy in
identifying 13 different liquids and granular media contained within 13
different containers. The sensitivity of our spectral readings allows our model
to also identify the material composition of the containers themselves with 96%
accuracy. Overall, VNIR spectroscopy presents a promising method to give
household robots a general-purpose ability to infer the liquids inside of
containers, without needing to open or manipulate the containers.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 18:18:17 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 20:48:37 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Hanson",
"Nathaniel",
""
],
[
"Lewis",
"Wesley",
""
],
[
"Puthuveetil",
"Kavya",
""
],
[
"Furline",
"Donelle",
""
],
[
"Padmanabha",
"Akhil",
""
],
[
"Padır",
"Taşkın",
""
],
[
"Erickson",
"Zackory",
""
]
] |
new_dataset
| 0.999645 |
2301.03852
|
Tushar Nagrare
|
Tushar Nagrare, Parul Sindhwad, Faruk Kazi
|
BLE Protocol in IoT Devices and Smart Wearable Devices: Security and
Privacy Threats
| null | null | null | null |
cs.CR cs.AR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Bluetooth Low Energy (BLE) has become the primary transmission medium for the
Internet of Things (IoT) and smart wearable devices due to its extremely low
energy consumption, good network range, and data transfer speed. With the
exponential growth of IoT devices using the BLE connection protocol, there is a
need to discover defensive techniques that protect them through practical
security analysis. Unfortunately, IoT-BLE is at risk of spoofing attacks, in
which an attacker can pose as a gadget and provide its users with harmful
information. Furthermore, due to the simplified design of this protocol, there
are many security and privacy vulnerabilities. We ground this quantitative
security analysis in the STRIDE methodology to create a framework that deals
with protection issues for IoT-BLE sensors, providing probable attack scenarios
for the various exposures identified in this analysis and offering mitigation
strategies. In light of this, the authors performed STRIDE threat modeling to
understand the attack surface of smart wearable devices supporting BLE. The
study evaluates different exploitation scenarios - Denial of Service (DoS),
Elevation of Privilege, Information Disclosure, Spoofing, Tampering, and
Repudiation - on the MI Band, One plus Band, Boat Storm smartwatch, and Fire
Bolt Invincible.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 08:46:55 GMT"
},
{
"version": "v2",
"created": "Fri, 5 May 2023 07:36:08 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Nagrare",
"Tushar",
""
],
[
"Sindhwad",
"Parul",
""
],
[
"Kazi",
"Faruk",
""
]
] |
new_dataset
| 0.988457 |
2303.02287
|
Yuxiang Zhang
|
Yuxiang Zhang, Suresh Devalapalli, Sachin Mehta, Anat Caspi
|
OASIS: Automated Assessment of Urban Pedestrian Paths at Scale
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The inspection of the Public Right of Way (PROW) for accessibility barriers
is necessary for monitoring and maintaining the built environment for
communities' walkability, rollability, safety, active transportation, and
sustainability. However, an inspection of the PROW, by surveyors or crowds, is
laborious, inconsistent, costly, and unscalable. The core of smart city
developments involves the application of information technologies toward
municipal assets assessment and management. Sidewalks, in comparison to
automobile roads, have not been regularly integrated into information systems
to optimize or inform civic services. We develop an Open Automated Sidewalks
Inspection System (OASIS), a free and open-source automated mapping system, to
extract sidewalk network data using mobile physical devices. OASIS leverages
advances in neural networks, image sensing, location-based methods, and compact
hardware to perform sidewalk segmentation and mapping along with the
identification of barriers to generate a GIS pedestrian transportation layer
that is available for routing as well as analytic and operational reports. We
describe a prototype system trained and tested with imagery collected in
real-world settings, alongside human surveyors who are part of the local
transit pathway review team. Pilots show promising precision and recall for
path mapping (0.94, 0.98 respectively). Moreover, surveyor teams' functional
efficiency increased in the field. By design, OASIS takes adoption aspects into
consideration to ensure the system could be easily integrated with governmental
pathway review teams' workflows, and that the outcome data would be
interoperable with public data commons.
|
[
{
"version": "v1",
"created": "Sat, 4 Mar 2023 01:32:59 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 20:03:14 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Zhang",
"Yuxiang",
""
],
[
"Devalapalli",
"Suresh",
""
],
[
"Mehta",
"Sachin",
""
],
[
"Caspi",
"Anat",
""
]
] |
new_dataset
| 0.998256 |
2303.05325
|
Md. Istiak Hossain Shihab
|
Md. Istiak Hossain Shihab, Md. Rakibul Hasan, Mahfuzur Rahman Emon,
Syed Mobassir Hossen, Md. Nazmuddoha Ansary, Intesur Ahmed, Fazle Rabbi
Rakib, Shahriar Elahi Dhruvo, Souhardya Saha Dip, Akib Hasan Pavel, Marsia
Haque Meghla, Md. Rezwanul Haque, Sayma Sultana Chowdhury, Farig Sadeque,
Tahsin Reasat, Ahmed Imtiaz Humayun, Asif Shahriyar Sushmit
|
BaDLAD: A Large Multi-Domain Bengali Document Layout Analysis Dataset
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While strides have been made in deep learning based Bengali Optical Character
Recognition (OCR) in the past decade, the absence of large Document Layout
Analysis (DLA) datasets has hindered the application of OCR in document
transcription, e.g., transcribing historical documents and newspapers.
Moreover, rule-based DLA systems that are currently being employed in practice
are not robust to domain variations and out-of-distribution layouts. To this
end, we present the first multidomain large Bengali Document Layout Analysis
Dataset: BaDLAD. This dataset contains 33,695 human annotated document samples
from six domains - i) books and magazines, ii) public domain govt. documents,
iii) liberation war documents, iv) newspapers, v) historical newspapers, and
vi) property deeds, with 710K polygon annotations for four unit types:
text-box, paragraph, image, and table. Through preliminary experiments
benchmarking the performance of existing state-of-the-art deep learning
architectures for English DLA, we demonstrate the efficacy of our dataset in
training deep learning based Bengali document digitization models.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 15:15:55 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Mar 2023 07:39:42 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2023 07:35:54 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Shihab",
"Md. Istiak Hossain",
""
],
[
"Hasan",
"Md. Rakibul",
""
],
[
"Emon",
"Mahfuzur Rahman",
""
],
[
"Hossen",
"Syed Mobassir",
""
],
[
"Ansary",
"Md. Nazmuddoha",
""
],
[
"Ahmed",
"Intesur",
""
],
[
"Rakib",
"Fazle Rabbi",
""
],
[
"Dhruvo",
"Shahriar Elahi",
""
],
[
"Dip",
"Souhardya Saha",
""
],
[
"Pavel",
"Akib Hasan",
""
],
[
"Meghla",
"Marsia Haque",
""
],
[
"Haque",
"Md. Rezwanul",
""
],
[
"Chowdhury",
"Sayma Sultana",
""
],
[
"Sadeque",
"Farig",
""
],
[
"Reasat",
"Tahsin",
""
],
[
"Humayun",
"Ahmed Imtiaz",
""
],
[
"Sushmit",
"Asif Shahriyar",
""
]
] |
new_dataset
| 0.999846 |
2304.03439
|
Hanmeng Liu
|
Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, Yue Zhang
|
Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Harnessing logical reasoning ability is a comprehensive natural language
understanding endeavor. With the release of Generative Pretrained Transformer 4
(GPT-4), highlighted as "advanced" at reasoning tasks, we are eager to examine
GPT-4's performance on various logical reasoning tasks. This report analyses
multiple logical reasoning datasets, with popular benchmarks like LogiQA and
ReClor, and newly-released datasets like AR-LSAT. We test the multi-choice
reading comprehension and natural language inference tasks with benchmarks
requiring logical reasoning. We further construct a logical reasoning
out-of-distribution dataset to investigate the robustness of ChatGPT and GPT-4.
We also make a performance comparison between ChatGPT and GPT-4. Experiment
results show that ChatGPT performs significantly better than the RoBERTa
fine-tuning method on most logical reasoning benchmarks. With early access to
the GPT-4 API, we are able to conduct intensive experiments on the GPT-4 model.
The results show GPT-4 yields even higher performance on most logical reasoning
datasets. Among benchmarks, ChatGPT and GPT-4 do relatively well on well-known
datasets like LogiQA and ReClor. However, the performance drops significantly
when handling newly released and out-of-distribution datasets. Logical
reasoning remains challenging for ChatGPT and GPT-4, especially on
out-of-distribution and natural language inference datasets. We release the
prompt-style logical reasoning datasets as a benchmark suite and name it
LogiEval.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 01:37:45 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 15:25:44 GMT"
},
{
"version": "v3",
"created": "Fri, 5 May 2023 07:24:48 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Liu",
"Hanmeng",
""
],
[
"Ning",
"Ruoxi",
""
],
[
"Teng",
"Zhiyang",
""
],
[
"Liu",
"Jian",
""
],
[
"Zhou",
"Qiji",
""
],
[
"Zhang",
"Yue",
""
]
] |
new_dataset
| 0.985649 |
2304.06025
|
Johanna Suvi Karras
|
Johanna Karras, Aleksander Holynski, Ting-Chun Wang, Ira
Kemelmacher-Shlizerman
|
DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion
|
Project page: https://grail.cs.washington.edu/projects/dreampose/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present DreamPose, a diffusion-based method for generating animated
fashion videos from still images. Given an image and a sequence of human body
poses, our method synthesizes a video containing both human and fabric motion.
To achieve this, we transform a pretrained text-to-image model (Stable
Diffusion) into a pose-and-image guided video synthesis model, using a novel
finetuning strategy, a set of architectural changes to support the added
conditioning signals, and techniques to encourage temporal consistency. We
fine-tune on a collection of fashion videos from the UBC Fashion dataset. We
evaluate our method on a variety of clothing styles and poses, and demonstrate
that our method produces state-of-the-art results on fashion video animation.
Video results are available on our project page.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 17:59:17 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2023 15:36:09 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2023 22:29:51 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Karras",
"Johanna",
""
],
[
"Holynski",
"Aleksander",
""
],
[
"Wang",
"Ting-Chun",
""
],
[
"Kemelmacher-Shlizerman",
"Ira",
""
]
] |
new_dataset
| 0.999629 |
2305.01232
|
Sebastian M\"uller
|
Bing-Yang Lin, Daria Dziuba{\l}towska, Piotr Macek, Andreas Penzkofer,
Sebastian M\"uller
|
TangleSim: An Agent-based, Modular Simulator for DAG-based Distributed
Ledger Technologies
|
IEEE ICBC 2023, short paper
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
DAG-based DLTs allow for parallel, asynchronous writing access to a ledger.
Consequently, the perception of the most recent blocks may differ considerably
between nodes, and the underlying network properties of the P2P layer have a
direct impact on the performance of the protocol. Moreover, the stronger
inter-dependencies of several core components demand a more complex and
complete approach to studying such DLTs. This paper presents an agent-based,
open-sourced simulator for large-scale networks that implement the leaderless
Tangle 2.0 consensus protocol. Its scope includes modelling the underlying
peer-to-peer communication with network topology, package loss, heterogeneous
latency, the gossip protocol with reliable broadcast qualities, the underlying
DAG-based data structure, and the consensus protocol. The simulator allows us
to explore the performance of the protocol in different network environments,
as well as different attack scenarios.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 07:14:14 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Lin",
"Bing-Yang",
""
],
[
"Dziubałtowska",
"Daria",
""
],
[
"Macek",
"Piotr",
""
],
[
"Penzkofer",
"Andreas",
""
],
[
"Müller",
"Sebastian",
""
]
] |
new_dataset
| 0.987659 |
2305.03089
|
Breandan Considine
|
Breandan Considine, Nicholas Albion, Xujie Si
|
Idiolect: A Reconfigurable Voice Coding Assistant
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents Idiolect, an open source
(https://github.com/OpenASR/idiolect) IDE plugin for voice coding and a novel
approach to building bots that allows for users to define custom commands
on-the-fly. Unlike traditional chatbots, Idiolect does not pretend to be an
omniscient virtual assistant but rather a reconfigurable voice programming
system that empowers users to create their own commands and actions
dynamically, without rebuilding or restarting the application. We offer an
experience report describing the tool itself, illustrate some example use
cases, and reflect on several lessons learned during the tool's development.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 18:08:29 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Considine",
"Breandan",
""
],
[
"Albion",
"Nicholas",
""
],
[
"Si",
"Xujie",
""
]
] |
new_dataset
| 0.999202 |
2305.03129
|
Noah Patton
|
Noah Patton, Kia Rahmani, Meghana Missula, Joydeep Biswas, I\c{s}il
Dillig
|
Program Synthesis for Robot Learning from Demonstrations
|
31 Pages, Submitted for Review
| null | null | null |
cs.PL cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a new synthesis-based approach for solving the Learning
from Demonstration (LfD) problem in robotics. Given a set of user
demonstrations, the goal of programmatic LfD is to learn a policy in a
programming language that can be used to control a robot's behavior. We address
this problem through a novel program synthesis algorithm that leverages two key
ideas: First, to perform fast and effective generalization from user
demonstrations, our synthesis algorithm views these demonstrations as strings
over a finite alphabet and abstracts programs in our DSL as regular expressions
over the same alphabet. This regex abstraction facilitates synthesis by helping
infer useful program sketches and pruning infeasible parts of the search space.
Second, to deal with the large number of object types in the environment, our
method leverages a Large Language Model (LLM) to guide search. We have
implemented our approach in a tool called Prolex and present the results of a
comprehensive experimental evaluation on 120 benchmarks involving 40 unique
tasks in three different environments. We show that, given a 120 second time
limit, Prolex can find a program consistent with the demonstrations in 80% of
the cases. Furthermore, for 81% of the tasks for which a solution is returned,
Prolex is able to find the ground truth program with just one demonstration. To
put these results in perspective, we conduct a comparison against two baselines
and show that both perform much worse.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 20:13:07 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Patton",
"Noah",
""
],
[
"Rahmani",
"Kia",
""
],
[
"Missula",
"Meghana",
""
],
[
"Biswas",
"Joydeep",
""
],
[
"Dillig",
"Işil",
""
]
] |
new_dataset
| 0.959432 |
2305.03176
|
Pedro Martin
|
Pedro Martin, Ant\'onio Rodrigues, Jo\~ao Ascenso, and Maria Paula
Queluz
|
NeRF-QA: Neural Radiance Fields Quality Assessment Database
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This short paper proposes a new database - NeRF-QA - containing 48 videos
synthesized with seven NeRF-based methods, along with their perceived quality
scores, resulting from subjective assessment tests; for the video selection,
both real and synthetic 360-degree scenes were considered. This database will
make it possible to evaluate how suitable existing objective quality metrics are
for NeRF-based synthesized views, and to support the development of new quality
metrics specific to this case.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 21:47:43 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Martin",
"Pedro",
""
],
[
"Rodrigues",
"António",
""
],
[
"Ascenso",
"João",
""
],
[
"Queluz",
"Maria Paula",
""
]
] |
new_dataset
| 0.998757 |
2305.03204
|
Xilun Chen
|
Xilun Chen, Lili Yu, Wenhan Xiong, Barlas O\u{g}uz, Yashar Mehdad,
Wen-tau Yih
|
VideoOFA: Two-Stage Pre-Training for Video-to-Text Generation
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new two-stage pre-training framework for video-to-text
generation tasks such as video captioning and video question answering: A
generative encoder-decoder model is first jointly pre-trained on massive
image-text data to learn fundamental vision-language concepts, and then adapted
to video data in an intermediate video-text pre-training stage to learn
video-specific skills such as spatio-temporal reasoning. As a result, our
VideoOFA model achieves new state-of-the-art performance on four Video
Captioning benchmarks, beating prior art by an average of 9.7 points in CIDEr
score. It also outperforms existing models on two open-ended Video Question
Answering datasets, showcasing its generalization capability as a universal
video-to-text model.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 23:27:21 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Chen",
"Xilun",
""
],
[
"Yu",
"Lili",
""
],
[
"Xiong",
"Wenhan",
""
],
[
"Oğuz",
"Barlas",
""
],
[
"Mehdad",
"Yashar",
""
],
[
"Yih",
"Wen-tau",
""
]
] |
new_dataset
| 0.998893 |
2305.03249
|
Jinseok Bae
|
Jinseok Bae, Jungdam Won, Donggeun Lim, Cheol-Hui Min, Young Min Kim
|
PMP: Learning to Physically Interact with Environments using Part-wise
Motion Priors
|
13 pages, 11 figures
| null | null | null |
cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method to animate a character incorporating multiple part-wise
motion priors (PMP). While previous works allow creating realistic articulated
motions from reference data, the range of motion is largely limited by the
available samples. Especially for the interaction-rich scenarios, it is
impractical to attempt acquiring every possible interacting motion, as the
combination of physical parameters increases exponentially. The proposed PMP
allows us to assemble multiple part skills to animate a character, creating a
diverse set of motions with different combinations of existing data. In our
pipeline, we can train an agent with a wide range of part-wise priors.
Therefore, each body part can obtain a kinematic insight of the style from the
motion captures, or at the same time extract dynamics-related information from
the additional part-specific simulation. For example, we can first train a
general interaction skill, e.g. grasping, only for the dexterous part, and then
combine the expert trajectories from the pre-trained agent with the kinematic
priors of other limbs. Eventually, our whole-body agent learns a novel physical
interaction skill even with the absence of the object trajectories in the
reference motion sequence.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 02:27:27 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Bae",
"Jinseok",
""
],
[
"Won",
"Jungdam",
""
],
[
"Lim",
"Donggeun",
""
],
[
"Min",
"Cheol-Hui",
""
],
[
"Kim",
"Young Min",
""
]
] |
new_dataset
| 0.976718 |
2305.03251
|
Hideaki Hata
|
Takeru Tanaka, Hideaki Hata, Bodin Chinthanet, Raula Gaikovina Kula,
Kenichi Matsumoto
|
Meta-Maintenance for Dockerfiles: Are We There Yet?
|
10 pages
| null | null | null |
cs.SE
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Docker allows for the packaging of applications and dependencies, and its
instructions are described in Dockerfiles. Nowadays, version pinning is
recommended to avoid unexpected changes in the latest version of a package.
However, version pinning in Dockerfiles is not yet fully realized (only 17k of
the 141k Dockerfiles we analyzed), because of the difficulties caused by
version pinning. To maintain Dockerfiles with version-pinned packages, it is
important to update package versions, not only for improved functionality, but
also for software supply chain security, as packages are changed to address
vulnerabilities and bug fixes. However, when updating multiple version-pinned
packages, it is necessary to understand the dependencies between packages and
ensure version compatibility, which is not easy. To address this issue, we
explore the applicability of the meta-maintenance approach, which aims to
distribute the successful updates in a part of a group that independently
maintains a common artifact. We conduct an exploratory analysis of 7,914
repositories on GitHub that hold Dockerfiles which retrieve packages from GitHub
by URL. There were 385 repository groups with the same combination of multiple
packages, and 208 groups had Dockerfiles with newer version combinations
compared to others, which are considered meta-maintenance applicable. Our
findings support the potential of meta-maintenance for updating multiple
version-pinned packages and also reveal future challenges.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 02:33:45 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Tanaka",
"Takeru",
""
],
[
"Hata",
"Hideaki",
""
],
[
"Chinthanet",
"Bodin",
""
],
[
"Kula",
"Raula Gaikovina",
""
],
[
"Matsumoto",
"Kenichi",
""
]
] |
new_dataset
| 0.998001 |
2305.03277
|
Ajian Liu
|
Ajian Liu, Zichang Tan, Zitong Yu, Chenxu Zhao, Jun Wan, Yanyan Liang,
Zhen Lei, Du Zhang, Stan Z. Li, Guodong Guo
|
FM-ViT: Flexible Modal Vision Transformers for Face Anti-Spoofing
|
12 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The availability of handy multi-modal (i.e., RGB-D) sensors has brought about
a surge of face anti-spoofing research. However, the current multi-modal face
presentation attack detection (PAD) has two defects: (1) The framework based on
multi-modal fusion requires providing modalities consistent with the training
input, which seriously limits the deployment scenario. (2) The performance of
ConvNet-based model on high fidelity datasets is increasingly limited. In this
work, we present a pure transformer-based framework, dubbed the Flexible Modal
Vision Transformer (FM-ViT), for face anti-spoofing to flexibly target any
single-modal (i.e., RGB) attack scenarios with the help of available
multi-modal data. Specifically, FM-ViT retains a specific branch for each
modality to capture different modal information and introduces the Cross-Modal
Transformer Block (CMTB), which consists of two cascaded attentions named
Multi-headed Mutual-Attention (MMA) and Fusion-Attention (MFA) to guide each
modal branch to mine potential features from informative patch tokens, and to
learn modality-agnostic liveness features by enriching the modal information of
its own CLS token, respectively. Experiments demonstrate that a single model
trained with FM-ViT can not only flexibly evaluate different modal samples,
but also outperform existing single-modal frameworks by a large margin, and
approach the multi-modal frameworks while requiring fewer FLOPs and model
parameters.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 04:28:48 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Liu",
"Ajian",
""
],
[
"Tan",
"Zichang",
""
],
[
"Yu",
"Zitong",
""
],
[
"Zhao",
"Chenxu",
""
],
[
"Wan",
"Jun",
""
],
[
"Liang",
"Yanyan",
""
],
[
"Lei",
"Zhen",
""
],
[
"Zhang",
"Du",
""
],
[
"Li",
"Stan Z.",
""
],
[
"Guo",
"Guodong",
""
]
] |
new_dataset
| 0.999172 |
2305.03302
|
Hao Zhu
|
Menghua Wu, Hao Zhu, Linjia Huang, Yiyu Zhuang, Yuanxun Lu, Xun Cao
|
High-Fidelity 3D Face Generation from Natural Language Descriptions
|
Accepted to CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthesizing high-quality 3D face models from natural language descriptions
is very valuable for many applications, including avatar creation, virtual
reality, and telepresence. However, little research has ever tapped into this task.
We argue the major obstacle lies in 1) the lack of high-quality 3D face data
with descriptive text annotation, and 2) the complex mapping relationship
between descriptive language space and shape/appearance space. To solve these
problems, we build Describe3D dataset, the first large-scale dataset with
fine-grained text descriptions for text-to-3D face generation task. Then we
propose a two-stage framework to first generate a 3D face that matches the
concrete descriptions, then optimize the parameters in the 3D shape and texture
space with abstract description to refine the 3D face model. Extensive
experimental results show that our method can produce a faithful 3D face that
conforms to the input descriptions with higher accuracy and quality than
previous methods. The code and Describe3D dataset are released at
https://github.com/zhuhao-nju/describe3d .
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 06:10:15 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Wu",
"Menghua",
""
],
[
"Zhu",
"Hao",
""
],
[
"Huang",
"Linjia",
""
],
[
"Zhuang",
"Yiyu",
""
],
[
"Lu",
"Yuanxun",
""
],
[
"Cao",
"Xun",
""
]
] |
new_dataset
| 0.950336 |
2305.03314
|
Haiyun Yang
|
Haiyun Yang
|
Block the Label and Noise: An N-Gram Masked Speller for Chinese Spell
Checking
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Chinese Spell Checking (CSC), a task to detect erroneous characters
in a sentence and correct them, has attracted extensive interest because of its
wide applications in various NLP tasks. Most of the existing methods have
utilized BERT to extract semantic information for CSC task. However, these
methods directly take sentences with only a few errors as inputs, where the
correct characters may leak answers to the model and dampen its ability to
capture distant context; while the erroneous characters may disturb the
semantic encoding process and result in poor representations. Based on such
observations, this paper proposes an n-gram masking layer that masks current
and/or surrounding tokens to avoid label leakage and error disturbance.
Moreover, considering that the mask strategy may ignore multi-modal information
indicated by errors, a novel dot-product gating mechanism is proposed to
integrate the phonological and morphological information with semantic
representation. Extensive experiments on SIGHAN datasets have demonstrated that
the pluggable n-gram masking mechanism can improve the performance of prevalent
CSC models and the proposed methods in this paper outperform multiple powerful
state-of-the-art models.
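  A minimal sketch of the masking idea in Python (the window definition, helper
  name, and example are assumptions made for illustration, not the paper's
  implementation):

  from typing import List

  MASK = "[MASK]"

  def ngram_mask(tokens: List[str], center: int, n: int = 2) -> List[str]:
      # Hide the character being corrected and its n-gram context so the encoder
      # can neither copy a leaked correct answer nor be disturbed by a nearby error.
      masked = list(tokens)
      lo, hi = max(0, center - n + 1), min(len(tokens), center + n)
      for i in range(lo, hi):
          masked[i] = MASK
      return masked

  # ngram_mask(list("举杯邀明月"), center=2, n=2) -> ['举', '[MASK]', '[MASK]', '[MASK]', '月']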
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 06:43:56 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Yang",
"Haiyun",
""
]
] |
new_dataset
| 0.997701 |
2305.03315
|
Jin Li
|
Jin Li, Yang Gao, Ju Dai, Shuai Li, Aimin Hao, Hong Qin
|
MPMNet: A Data-Driven MPM Framework for Dynamic Fluid-Solid Interaction
| null | null |
10.1109/TVCG.2023.3272156
| null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-accuracy, high-efficiency physics-based fluid-solid interaction is
essential for reality modeling and computer animation in online games or
real-time Virtual Reality (VR) systems. However, the large-scale simulation of
incompressible fluid and its interaction with the surrounding solid environment
is either time-consuming or suffers from reduced time/space resolution
due to the complicated iterative nature of the numerical computation of the
involved Partial Differential Equations (PDEs). In recent years, we have
witnessed significant growth in exploring a different, alternative data-driven
approach to addressing some of the existing technical challenges in
conventional model-centric graphics and animation methods. This paper showcases
some of our exploratory efforts in this direction. A central technical concern of our
research is how to best construct the numerical solver and how to best integrate
spatiotemporal/dimensional neural networks with the available MPM pressure
solvers.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 06:48:11 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Li",
"Jin",
""
],
[
"Gao",
"Yang",
""
],
[
"Dai",
"Ju",
""
],
[
"Li",
"Shuai",
""
],
[
"Hao",
"Aimin",
""
],
[
"Qin",
"Hong",
""
]
] |
new_dataset
| 0.962025 |
2305.03317
|
Nibedita Behera
|
Nibedita Behera, Ashwina Kumar, Ebenezer Rajadurai T, Sai Nitish,
Rajesh Pandian M and Rupesh Nasre
|
StarPlat: A Versatile DSL for Graph Analytics
|
30 pages, 21 figures
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Graphs model several real-world phenomena. With the growth of unstructured
and semi-structured data, parallelization of graph algorithms is inevitable.
Unfortunately, due to the inherent irregularity of computation, memory access, and
communication, graph algorithms are traditionally challenging to parallelize.
To tame this challenge, several libraries, frameworks, and domain-specific
languages (DSLs) have been proposed to reduce the parallel programming burden
of the users, who are often domain experts. However, existing frameworks to
model graph algorithms typically target a single architecture. In this paper,
we present a graph DSL, named StarPlat, that allows programmers to specify
graph algorithms in a high-level format, but generates code for three different
backends from the same algorithmic specification. In particular, the DSL
compiler generates OpenMP for multi-core, MPI for distributed, and CUDA for
many-core GPUs. Since these three are completely different parallel programming
paradigms, binding them together under the same language is challenging. We
share our experience with the language design. Central to our compiler is an
intermediate representation which allows a common representation of the
high-level program, from which individual backend code generations begin. We
demonstrate the expressiveness of StarPlat by specifying four graph algorithms:
betweenness centrality computation, page rank computation, single-source
shortest paths, and triangle counting. We illustrate the effectiveness of our
approach by comparing the performance of the generated codes with that obtained
with hand-crafted library codes. We find that the generated code is competitive
with library-based codes in many cases. More importantly, we show the feasibility
of generating efficient code for different target architectures from the same
algorithmic specification of graph algorithms.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 06:55:07 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Behera",
"Nibedita",
""
],
[
"Kumar",
"Ashwina",
""
],
[
"T",
"Ebenezer Rajadurai",
""
],
[
"Nitish",
"Sai",
""
],
[
"M",
"Rajesh Pandian",
""
],
[
"Nasre",
"Rupesh",
""
]
] |
new_dataset
| 0.995218 |
2305.03336
|
Firoj Alam
|
Maram Hasanain, Ahmed Oumar El-Shangiti, Rabindra Nath Nandi, Preslav
Nakov and Firoj Alam
|
QCRI at SemEval-2023 Task 3: News Genre, Framing and Persuasion
Techniques Detection using Multilingual Models
|
Accepted at SemEval-23 (ACL-23): propaganda, disinformation,
misinformation, fake news
| null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Misinformation spreading in mainstream and social media has been misleading
users in different ways. Manual detection and verification efforts by
journalists and fact-checkers can no longer cope with the great scale and quick
spread of misleading information. This motivated research and industry efforts
to develop systems for analyzing and verifying news spreading online. The
SemEval-2023 Task 3 is an attempt to address several subtasks under this
overarching problem, targeting writing techniques used in news articles to
affect readers' opinions. The task addressed three subtasks with six languages,
in addition to three ``surprise'' test languages, resulting in 27 different
test setups. This paper describes our participating system to this task. Our
team is one of the 6 teams that successfully submitted runs for all setups. The
official results show that our system is ranked among the top 3 systems for 10
out of the 27 setups.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 07:40:41 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Hasanain",
"Maram",
""
],
[
"El-Shangiti",
"Ahmed Oumar",
""
],
[
"Nandi",
"Rabindra Nath",
""
],
[
"Nakov",
"Preslav",
""
],
[
"Alam",
"Firoj",
""
]
] |
new_dataset
| 0.999185 |
2305.03347
|
Weijia Wu
|
Weijia Wu and Yuzhong Zhao, Zhuang Li and Jiahong Li, Hong Zhou and
Mike Zheng Shou and Xiang Bai
|
A Large Cross-Modal Video Retrieval Dataset with Reading Comprehension
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Most existing cross-modal language-to-video retrieval (VR) research focuses
on single-modal input from video, i.e., visual representation, while the text
is omnipresent in human environments and frequently critical to understanding
video. To study how to retrieve video with both modal inputs, i.e., visual and
text semantic representations, we first introduce a large-scale and cross-modal
Video Retrieval dataset with text reading comprehension, TextVR, which contains
42.2k sentence queries for 10.5k videos of 8 scenario domains, i.e., Street
View (indoor), Street View (outdoor), Games, Sports, Driving, Activity, TV
Show, and Cooking. The proposed TextVR requires one unified cross-modal model
to recognize and comprehend texts, relate them to the visual context, and
decide what text semantic information is vital for the video retrieval task.
Besides, we present a detailed analysis of TextVR compared to the existing
datasets and design a novel multimodal video retrieval baseline for the
text-based video retrieval task. The dataset analysis and extensive experiments
show that our TextVR benchmark provides many new technical challenges and
insights from previous datasets for the video-and-language community. The
project website and GitHub repo can be found at
https://sites.google.com/view/loveucvpr23/guest-track and
https://github.com/callsys/TextVR, respectively.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 08:00:14 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Wu",
"Weijia",
""
],
[
"Zhao",
"Yuzhong",
""
],
[
"Li",
"Zhuang",
""
],
[
"Li",
"Jiahong",
""
],
[
"Zhou",
"Hong",
""
],
[
"Shou",
"Mike Zheng",
""
],
[
"Bai",
"Xiang",
""
]
] |
new_dataset
| 0.999787 |
2305.03351
|
Yiyi Zhang
|
Yiyi Zhang, Zhiwen Ying, Ying Zheng, Cuiling Wu, Nannan Li, Jun Wang,
Xianzhong Feng, Xiaogang Xu
|
Leaf Cultivar Identification via Prototype-enhanced Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Plant leaf identification is crucial for biodiversity protection and
conservation and has gradually attracted the attention of academia in recent
years. Due to the high similarity among different varieties, leaf cultivar
recognition is also considered to be an ultra-fine-grained visual
classification (UFGVC) task, which poses a huge challenge. In practice, an
instance may be related to multiple varieties to varying degrees, especially in
the UFGVC datasets. However, deep learning methods trained on one-hot labels
fail to reflect patterns shared across categories and thus perform poorly on
this task. To address this issue, we generate soft targets integrated with
inter-class similarity information. Specifically, we continuously update the
prototypical features for each category and then capture the similarity scores
between instances and prototypes accordingly. Original one-hot labels and the
similarity scores are incorporated to yield enhanced labels. Prototype-enhanced
soft labels not only contain original one-hot label information, but also
introduce rich inter-category semantic association information, thus providing
more effective supervision for deep model training. Extensive experimental
results on public datasets show that our method can significantly improve the
performance on the UFGVC task of leaf cultivar identification.
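  A rough sketch of the label-enhancement step described above, under assumed
details (cosine similarity, an exponential-moving-average prototype update, and
a blending weight alpha; the paper's exact formulation may differ):

    import numpy as np

    def update_prototypes(prototypes, feats, labels, momentum=0.9):
        # EMA update of one prototypical feature per class (assumed update rule).
        for c in np.unique(labels):
            prototypes[c] = momentum * prototypes[c] + (1 - momentum) * feats[labels == c].mean(axis=0)
        return prototypes

    def prototype_soft_labels(feats, labels, prototypes, alpha=0.7, tau=0.1):
        # Cosine similarity between each instance and every class prototype,
        # turned into a distribution and blended with the original one-hot label.
        f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
        p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
        sim = np.exp(f @ p.T / tau)
        soft = sim / sim.sum(axis=1, keepdims=True)             # (batch, classes)
        one_hot = np.eye(prototypes.shape[0])[labels]
        return alpha * one_hot + (1 - alpha) * soft             # enhanced targets

    rng = np.random.default_rng(0)
    feats, labels = rng.normal(size=(16, 8)), rng.integers(0, 3, 16)
    protos = update_prototypes(rng.normal(size=(3, 8)), feats, labels)
    targets = prototype_soft_labels(feats, labels, protos)      # shape (16, 3)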
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 08:11:31 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Zhang",
"Yiyi",
""
],
[
"Ying",
"Zhiwen",
""
],
[
"Zheng",
"Ying",
""
],
[
"Wu",
"Cuiling",
""
],
[
"Li",
"Nannan",
""
],
[
"Wang",
"Jun",
""
],
[
"Feng",
"Xianzhong",
""
],
[
"Xu",
"Xiaogang",
""
]
] |
new_dataset
| 0.999082 |
2305.03369
|
Lukas Christ
|
Lukas Christ, Shahin Amiriparian, Alice Baird, Alexander Kathan,
Niklas M\"uller, Steffen Klug, Chris Gagne, Panagiotis Tzirakis, Eva-Maria
Me{\ss}ner, Andreas K\"onig, Alan Cowen, Erik Cambria, Bj\"orn W. Schuller
|
The MuSe 2023 Multimodal Sentiment Analysis Challenge: Mimicked
Emotions, Cross-Cultural Humour, and Personalisation
|
Baseline paper for the 4th Multimodal Sentiment Analysis Challenge
(MuSe) 2023, a workshop at ACM Multimedia 2023
| null | null | null |
cs.LG cs.AI cs.CL cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The MuSe 2023 is a set of shared tasks addressing three different
contemporary multimodal affect and sentiment analysis problems: In the Mimicked
Emotions Sub-Challenge (MuSe-Mimic), participants predict three continuous
emotion targets. This sub-challenge utilises the Hume-Vidmimic dataset
comprising user-generated videos. For the Cross-Cultural Humour Detection
Sub-Challenge (MuSe-Humour), an extension of the Passau Spontaneous Football
Coach Humour (Passau-SFCH) dataset is provided. Participants predict the
presence of spontaneous humour in a cross-cultural setting. The Personalisation
Sub-Challenge (MuSe-Personalisation) is based on the Ulm-Trier Social Stress
Test (Ulm-TSST) dataset, featuring recordings of subjects in a stressed
situation. Here, arousal and valence signals are to be predicted, whereas parts
of the test labels are made available in order to facilitate personalisation.
MuSe 2023 seeks to bring together a broad audience from different research
communities such as audio-visual emotion recognition, natural language
processing, signal processing, and health informatics. In this baseline paper,
we introduce the datasets, sub-challenges, and provided feature sets. As a
competitive baseline system, a Gated Recurrent Unit (GRU)-Recurrent Neural
Network (RNN) is employed. On the respective sub-challenges' test datasets, it
achieves a mean (across three continuous intensity targets) Pearson's
Correlation Coefficient of .4727 for MuSe-Mimic, an Area Under the Curve (AUC)
value of .8310 for MuSe-Humour and Concordance Correlation Coefficient (CCC)
values of .7482 for arousal and .7827 for valence in the MuSe-Personalisation
sub-challenge.
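  For orientation, a GRU-RNN regressor of the kind used as the baseline might
look like the following generic PyTorch sketch (feature dimension, depth, and
hidden size are placeholders, not the official baseline configuration):

    import torch
    import torch.nn as nn

    class GRUBaseline(nn.Module):
        # Maps a sequence of per-frame features to one continuous target per frame.
        def __init__(self, feat_dim=256, hidden=64, targets=1):
            super().__init__()
            self.rnn = nn.GRU(feat_dim, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, targets)

        def forward(self, x):                  # x: (batch, time, feat_dim)
            out, _ = self.rnn(x)
            return self.head(out)              # (batch, time, targets)

    model = GRUBaseline()
    preds = model(torch.randn(4, 100, 256))    # toy batch of 4 sequences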
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 08:53:57 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Christ",
"Lukas",
""
],
[
"Amiriparian",
"Shahin",
""
],
[
"Baird",
"Alice",
""
],
[
"Kathan",
"Alexander",
""
],
[
"Müller",
"Niklas",
""
],
[
"Klug",
"Steffen",
""
],
[
"Gagne",
"Chris",
""
],
[
"Tzirakis",
"Panagiotis",
""
],
[
"Meßner",
"Eva-Maria",
""
],
[
"König",
"Andreas",
""
],
[
"Cowen",
"Alan",
""
],
[
"Cambria",
"Erik",
""
],
[
"Schuller",
"Björn W.",
""
]
] |
new_dataset
| 0.999779 |
2305.03376
|
Goda Klumbyte
|
Goda Klumbyte, Hannah Piehl, Claude Draude
|
Explaining the ghosts: Feminist intersectional XAI and cartography as
methods to account for invisible labour
|
Workshop Behind the Scenes of Automation: Ghostly Care-Work,
Maintenance, and Interference, ACM Conference on Human Factors in Computing
Systems CHI 23, April 23-28, 2023, Hamburg, Germany, 6 pages
| null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Contemporary automation through AI entails a substantial amount of
behind-the-scenes human labour, which is often both invisibilised and
underpaid. Since invisible labour, including labelling and maintenance work, is
an integral part of contemporary AI systems, it remains important to sensitise
users to its role. We suggest that this could be done through explainable AI
(XAI) design, particularly feminist intersectional XAI. We propose the method
of cartography, which stems from feminist intersectional research, to draw out
a systemic perspective of AI and include dimensions of AI that pertain to
invisible labour.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 09:10:39 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Klumbyte",
"Goda",
""
],
[
"Piehl",
"Hannah",
""
],
[
"Draude",
"Claude",
""
]
] |
new_dataset
| 0.971026 |
2305.03425
|
Zeeshan Kaleem
|
Misha Urooj Khan, Maham Misbah, Zeeshan Kaleem, Yansha Deng, Abbas
Jamalipour
|
GAANet: Ghost Auto Anchor Network for Detecting Varying Size Drones in
Dark
|
Accepted @ IEEE VTC2023-Spring, Florence, Italy
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The usage of drones has tremendously increased in different sectors spanning
from military to industrial applications. Despite all the benefits they offer,
their misuse can lead to mishaps, and tackling them becomes more challenging
particularly at night due to their small size and low visibility conditions. To
overcome those limitations and improve the detection accuracy at night, we
propose an object detector called Ghost Auto Anchor Network (GAANet) for
infrared (IR) images. The detector uses a YOLOv5 core to address challenges in
object detection for IR images, such as poor accuracy and a high false alarm
rate caused by extended altitudes, poor lighting, and low image resolution. To
improve performance, we implemented auto anchor calculation, modified the
conventional convolution block to ghost-convolution, adjusted the input channel
size, and used the AdamW optimizer. To enhance the precision of multiscale tiny
object recognition, we also introduced an additional extra-small object feature
extractor and detector. Experimental results in a custom IR dataset with
multiple classes (birds, drones, planes, and helicopters) demonstrate that
GAANet shows improvement compared to state-of-the-art detectors. In comparison
to GhostNet-YOLOv5, GAANet has higher overall mean average precision (mAP@50),
recall, and precision by around 2.5\%, 2.3\%, and 1.4\%, respectively. The dataset
and code for this paper are available as open source at
https://github.com/ZeeshanKaleem/GhostAutoAnchorNet.
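  The ghost-convolution idea referred to above (from GhostNet) produces part of
the output channels with cheap depthwise operations; below is a generic PyTorch
sketch of such a block, not necessarily the authors' exact implementation:

    import torch
    import torch.nn as nn

    class GhostConv(nn.Module):
        # Half the output channels come from a normal conv; the rest come from a
        # cheap depthwise conv applied to those primary features (GhostNet-style).
        def __init__(self, c_in, c_out, k=1, s=1):
            super().__init__()
            c_mid = c_out // 2
            self.primary = nn.Sequential(
                nn.Conv2d(c_in, c_mid, k, s, k // 2, bias=False),
                nn.BatchNorm2d(c_mid), nn.SiLU())
            self.cheap = nn.Sequential(
                nn.Conv2d(c_mid, c_mid, 5, 1, 2, groups=c_mid, bias=False),
                nn.BatchNorm2d(c_mid), nn.SiLU())

        def forward(self, x):
            y = self.primary(x)
            return torch.cat([y, self.cheap(y)], dim=1)

    out = GhostConv(3, 32)(torch.randn(1, 3, 64, 64))   # -> (1, 32, 64, 64)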
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 10:46:05 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Khan",
"Misha Urooj",
""
],
[
"Misbah",
"Maham",
""
],
[
"Kaleem",
"Zeeshan",
""
],
[
"Deng",
"Yansha",
""
],
[
"Jamalipour",
"Abbas",
""
]
] |
new_dataset
| 0.995871 |
2305.03487
|
Canhui Tang
|
Canhui Tang, Yiheng Li, Shaoyi Du, Guofa Wang, and Zhiqiang Tian
|
HD2Reg: Hierarchical Descriptors and Detectors for Point Cloud
Registration
|
Accepted by IEEE Intelligent Vehicles Symposium 2023 (IV 2023)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature Descriptors and Detectors are two main components of feature-based
point cloud registration. However, little attention has been paid to the
explicit representation of local and global semantics in the learning of
descriptors and detectors. In this paper, we present a framework that
explicitly extracts dual-level descriptors and detectors and performs
coarse-to-fine matching with them. First, to explicitly learn local and global
semantics, we propose a hierarchical contrastive learning strategy, training
the robust matching ability of high-level descriptors, and refining the local
feature space using low-level descriptors. Furthermore, we propose to learn
dual-level saliency maps that extract two groups of keypoints in two different
senses. To overcome the weak supervision of binary matchability labels, we
propose a ranking strategy to label the significance ranking of keypoints, and
thus provide more fine-grained supervision signals. Finally, we propose a
global-to-local matching scheme to obtain robust and accurate correspondences
by leveraging the complementary dual-level features. Quantitative experiments on
3DMatch and KITTI odometry datasets show that our method achieves robust and
accurate point cloud registration and outperforms recent keypoint-based
methods.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 12:57:04 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Tang",
"Canhui",
""
],
[
"Li",
"Yiheng",
""
],
[
"Du",
"Shaoyi",
""
],
[
"Wang",
"Guofa",
""
],
[
"Tian",
"Zhiqiang",
""
]
] |
new_dataset
| 0.998188 |
2305.03508
|
Mann Khatri
|
Mann Khatri, Pritish Wadhwa, Gitansh Satija, Reshma Sheik, Yaman
Kumar, Rajiv Ratn Shah, Ponnurangam Kumaraguru
|
CiteCaseLAW: Citation Worthiness Detection in Caselaw for Legal
Assistive Writing
|
A dataset for Legal domain
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In legal document writing, one of the key elements is properly citing the
case laws and other sources to substantiate claims and arguments. Understanding
the legal domain and identifying appropriate citation context or cite-worthy
sentences are challenging tasks that demand expensive manual annotation. The
presence of jargon, language semantics, and high domain specificity makes legal
language complex, making any associated legal task hard for automation. The
current work focuses on the problem of citation-worthiness identification. It
is designed as the initial step in today's citation recommendation systems to
lighten the burden of extracting an adequate set of citation contexts. To
accomplish this, we introduce a labeled dataset of 178M sentences for
citation-worthiness detection in the legal domain from the Caselaw Access
Project (CAP). The performance of various deep learning models was examined on
this novel dataset. The domain-specific pre-trained model tends to outperform
other models, with an 88% F1-score for the citation-worthiness detection task.
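  Citation-worthiness detection as framed here is binary sentence
classification, so a minimal sketch with a generic pretrained encoder looks like
the following (the checkpoint name and label convention are placeholders, not
the released baseline):

    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    import torch

    # Placeholder checkpoint; a legal-domain model would be substituted in practice.
    name = "bert-base-uncased"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    sentence = "The doctrine was reaffirmed by the appellate court in a later case."
    inputs = tok(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    is_cite_worthy = logits.argmax(dim=-1).item() == 1   # label 1 = needs a citation (assumption)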
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 04:20:56 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Khatri",
"Mann",
""
],
[
"Wadhwa",
"Pritish",
""
],
[
"Satija",
"Gitansh",
""
],
[
"Sheik",
"Reshma",
""
],
[
"Kumar",
"Yaman",
""
],
[
"Shah",
"Rajiv Ratn",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] |
new_dataset
| 0.983426 |
2305.03512
|
Min Young Lee
|
Min Young Lee
|
Building Multimodal AI Chatbots
|
Bachelor's thesis
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This work aims to create a multimodal AI system that chats with humans and
shares relevant photos. While earlier works were limited to dialogues about
specific objects or scenes within images, recent works have incorporated images
into open-domain dialogues. However, their response generators are unimodal,
accepting text input but no image input, thus prone to generating responses
contradictory to the images shared in the dialogue. Therefore, this work
proposes a complete chatbot system using two multimodal deep learning models:
an image retriever that understands texts and a response generator that
understands images. The image retriever, implemented by ViT and BERT, selects
the most relevant image given the dialogue history and a database of images.
The response generator, implemented by ViT and GPT-2/DialoGPT, generates an
appropriate response given the dialogue history and the most recently retrieved
image. The two models are trained and evaluated on PhotoChat, an open-domain
dialogue dataset in which a photo is shared in each session. In automatic
evaluation, the proposed image retriever outperforms existing baselines VSE++
and SCAN with Recall@1/5/10 of 0.1/0.3/0.4 and MRR of 0.2 when ranking 1,000
images. The proposed response generator also surpasses the baseline Divter with
PPL of 16.9, BLEU-1/2 of 0.13/0.03, and Distinct-1/2 of 0.97/0.86, showing a
significant improvement in PPL by -42.8 and BLEU-1/2 by +0.07/0.02. In human
evaluation with a Likert scale of 1-5, the complete multimodal chatbot system
receives higher image-groundedness of 4.3 and engagingness of 4.3, along with
competitive fluency of 4.1, coherence of 3.9, and humanness of 3.1, when
compared to other chatbot variants. The source code is available at:
https://github.com/minniie/multimodal_chat.git.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 16:43:54 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Lee",
"Min Young",
""
]
] |
new_dataset
| 0.99511 |
2305.03534
|
Marco Fiore
|
Marco Fiore and Marina Mongiello
|
Blockchain for smart cities improvement: an architecture proposal
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The combination of innovative topics and emerging technologies lets
researchers define new processes and models. New needs concern the definition
of modular and scalable approaches that keep society and the environment in
mind. An important topic to focus on is the smart city. The use of emerging
technologies lets smart cities develop new processes to improve the services
offered by various actors, whether industries or government. Smart cities were
born to improve quality of life for citizens. To reach this goal, various
approaches have been proposed, but they lack a common interface to let each
stakeholder communicate in a simple and fast way. This paper proposes an
architecture to overcome the current limitations of smart cities: it uses
Blockchain technology as a distributed database to let everyone join the
network and feel part of a community. Blockchain can improve processes
development for smart cities. Scalability is granted thanks to a context-aware
approach: applications do not need to know about the back-end implementation,
they just need to adapt to an interface. With Blockchain, it is possible to
collect data anonymously to make some statistical analysis, to access public
records to ensure security in the city and to guarantee the origin of products
and energy.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 13:41:18 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Fiore",
"Marco",
""
],
[
"Mongiello",
"Marina",
""
]
] |
new_dataset
| 0.994371 |
2305.03582
|
Samir Sadok
|
Samir Sadok, Simon Leglaive, Laurent Girin, Xavier Alameda-Pineda,
Renaud S\'eguier
|
A Multimodal Dynamical Variational Autoencoder for Audiovisual Speech
Representation Learning
|
25 pages, 14 figures, https://samsad35.github.io/site-mdvae/
| null | null | null |
cs.SD cs.LG cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a multimodal \textit{and} dynamical VAE (MDVAE)
applied to unsupervised audio-visual speech representation learning. The latent
space is structured to dissociate the latent dynamical factors that are shared
between the modalities from those that are specific to each modality. A static
latent variable is also introduced to encode the information that is constant
over time within an audiovisual speech sequence. The model is trained in an
unsupervised manner on an audiovisual emotional speech dataset, in two stages.
In the first stage, a vector quantized VAE (VQ-VAE) is learned independently
for each modality, without temporal modeling. The second stage consists in
learning the MDVAE model on the intermediate representation of the VQ-VAEs
before quantization. The disentanglement between static versus dynamical and
modality-specific versus modality-common information occurs during this second
training stage. Extensive experiments are conducted to investigate how
audiovisual speech latent factors are encoded in the latent space of MDVAE.
These experiments include manipulating audiovisual speech, audiovisual facial
image denoising, and audiovisual speech emotion recognition. The results show
that MDVAE effectively combines the audio and visual information in its latent
space. They also show that the learned static representation of audiovisual
speech can be used for emotion recognition with few labeled data, and with
better accuracy compared with unimodal baselines and a state-of-the-art
supervised model based on an audiovisual transformer architecture.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 14:37:26 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Sadok",
"Samir",
""
],
[
"Leglaive",
"Simon",
""
],
[
"Girin",
"Laurent",
""
],
[
"Alameda-Pineda",
"Xavier",
""
],
[
"Séguier",
"Renaud",
""
]
] |
new_dataset
| 0.996344 |
2305.03640
|
Sanket Kachole Mr
|
Sanket Kachole, Yusra Alkendi, Fariborz Baghaei Naeini, Dimitrios
Makris, Yahya Zweiri
|
Asynchronous Events-based Panoptic Segmentation using Graph Mixer Neural
Network
|
9 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of robotic grasping, object segmentation encounters several
difficulties when faced with dynamic conditions such as real-time operation,
occlusion, low lighting, motion blur, and object size variability. In response
to these challenges, we propose the Graph Mixer Neural Network that includes a
novel collaborative contextual mixing layer, applied to 3D event graphs formed
on asynchronous events. The proposed layer is designed to spread spatiotemporal
correlation within an event graph across four nearest-neighbor levels in parallel.
We evaluate the effectiveness of our proposed method on the Event-based
Segmentation (ESD) Dataset, which includes five unique image degradation
challenges, including occlusion, blur, brightness, trajectory, scale variance,
and segmentation of known and unknown objects. The results show that our
proposed approach outperforms state-of-the-art methods in terms of mean
intersection over union and pixel accuracy. Code is available at:
https://github.com/sanket0707/GNN-Mixer.git
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 15:56:46 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Kachole",
"Sanket",
""
],
[
"Alkendi",
"Yusra",
""
],
[
"Naeini",
"Fariborz Baghaei",
""
],
[
"Makris",
"Dimitrios",
""
],
[
"Zweiri",
"Yahya",
""
]
] |
new_dataset
| 0.997668 |
2305.03668
|
Andrea Burns
|
Andrea Burns, Krishna Srinivasan, Joshua Ainslie, Geoff Brown, Bryan
A. Plummer, Kate Saenko, Jianmo Ni, Mandy Guo
|
A Suite of Generative Tasks for Multi-Level Multimodal Webpage
Understanding
|
Data can be downloaded at
https://github.com/google-research-datasets/wit/blob/main/wikiweb2m.md
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Webpages have been a rich, scalable resource for vision-language and
language-only tasks. Yet only pieces of webpages are kept: image-caption pairs,
long text articles, or raw HTML, never all in one place. Webpage tasks have
consequently received little attention, and structured image-text data has been
left underused. To study multimodal webpage understanding, we introduce the
Wikipedia Webpage suite (WikiWeb2M) of 2M pages. We verify its utility on three
generative tasks: page description generation, section summarization, and
contextual image captioning. We design a novel attention mechanism Prefix
Global, which selects the most relevant image and text content as global tokens
to attend to the rest of the webpage for context. By using page structure to
separate such tokens, it performs better than full attention with lower
computational complexity. Experiments show that the new annotations from
WikiWeb2M improve task performance compared to data from prior work. We also
include ablations on sequence length, input features, and model size.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 16:38:05 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Burns",
"Andrea",
""
],
[
"Srinivasan",
"Krishna",
""
],
[
"Ainslie",
"Joshua",
""
],
[
"Brown",
"Geoff",
""
],
[
"Plummer",
"Bryan A.",
""
],
[
"Saenko",
"Kate",
""
],
[
"Ni",
"Jianmo",
""
],
[
"Guo",
"Mandy",
""
]
] |
new_dataset
| 0.986729 |
2305.03706
|
Bianca Lamm
|
Daniel Ladwig (1), Bianca Lamm (1 and 2), Janis Keuper (2) ((1) IMLA,
Offenburg University, (2) Markant Services International GmbH)
|
Fine-Grained Product Classification on Leaflet Advertisements
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we describe the first publicly available fine-grained product
recognition dataset based on leaflet images. Using advertisement leaflets,
collected over several years from different European retailers, we provide a
total of 41.6k manually annotated product images in 832 classes. Further, we
investigate three different approaches for this fine-grained product
classification task, Classification by Image, by Text, as well as by Image and
Text. The approach "Classification by Text" uses the text extracted directly
from the leaflet product images. We show that the combination of image and
text as input improves the classification of visually difficult-to-distinguish
products. The final model reaches an accuracy of 96.4% with a Top-3 score of
99.2%. We release our code at
https://github.com/ladwigd/Leaflet-Product-Classification.
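  The "Classification by Image and Text" variant amounts to fusing two feature
vectors before a classifier head; a minimal late-fusion sketch is shown below
(encoders, dimensions, and layer sizes are illustrative assumptions only):

    import torch
    import torch.nn as nn

    class ImageTextFusionClassifier(nn.Module):
        # Concatenates an image embedding with a text embedding of the OCR'd
        # product text, then classifies into one of 832 product classes.
        def __init__(self, img_dim=512, txt_dim=384, num_classes=832):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(img_dim + txt_dim, 512), nn.ReLU(),
                nn.Linear(512, num_classes))

        def forward(self, img_emb, txt_emb):
            return self.head(torch.cat([img_emb, txt_emb], dim=1))

    logits = ImageTextFusionClassifier()(torch.randn(8, 512), torch.randn(8, 384))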
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 17:38:00 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Ladwig",
"Daniel",
"",
"1 and 2"
],
[
"Lamm",
"Bianca",
"",
"1 and 2"
],
[
"Keuper",
"Janis",
""
]
] |
new_dataset
| 0.999873 |
2305.03726
|
Bo Li
|
Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, Ziwei
Liu
|
Otter: A Multi-Modal Model with In-Context Instruction Tuning
|
Technical Report
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have demonstrated significant universal
capabilities as few/zero-shot learners in various tasks due to their
pre-training on vast amounts of text data, as exemplified by GPT-3, which
gave rise to InstructGPT and ChatGPT, models that effectively follow natural language
instructions to accomplish real-world tasks. In this paper, we propose to
introduce instruction tuning into multi-modal models, motivated by the Flamingo
model's upstream interleaved format pretraining dataset. We adopt a similar
approach to construct our MultI-Modal In-Context Instruction Tuning (MIMIC-IT)
dataset. We then introduce Otter, a multi-modal model based on OpenFlamingo
(open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and
showcasing improved instruction-following ability and in-context learning. We
also optimize OpenFlamingo's implementation for researchers, democratizing the
required training resources from 1$\times$ A100 GPU to 4$\times$ RTX-3090 GPUs,
and integrate both OpenFlamingo and Otter into Huggingface Transformers for
more researchers to incorporate the models into their customized training and
inference pipelines.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 17:59:46 GMT"
}
] | 2023-05-08T00:00:00 |
[
[
"Li",
"Bo",
""
],
[
"Zhang",
"Yuanhan",
""
],
[
"Chen",
"Liangyu",
""
],
[
"Wang",
"Jinghao",
""
],
[
"Yang",
"Jingkang",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.996718 |
2005.02151
|
Vince Lyzinski
|
Keith Levin, Carey E. Priebe, Vince Lyzinski
|
Vertex Nomination in Richly Attributed Networks
|
46 pages, 5 figures
| null | null | null |
cs.IR cs.LG math.ST stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vertex nomination is a lightly-supervised network information retrieval task
in which vertices of interest in one graph are used to query a second graph to
discover vertices of interest in the second graph. Similar to other information
retrieval tasks, the output of a vertex nomination scheme is a ranked list of
the vertices in the second graph, with the heretofore unknown vertices of
interest ideally concentrating at the top of the list. Vertex nomination
schemes provide a useful suite of tools for efficiently mining complex networks
for pertinent information. In this paper, we explore, both theoretically and
practically, the dual roles of content (i.e., edge and vertex attributes) and
context (i.e., network topology) in vertex nomination. We provide necessary and
sufficient conditions under which vertex nomination schemes that leverage both
content and context outperform schemes that leverage only content or context
separately. While the joint utility of both content and context has been
demonstrated empirically in the literature, the framework presented in this
paper provides a novel theoretical basis for understanding the potential
complementary roles of network features and topology.
|
[
{
"version": "v1",
"created": "Wed, 29 Apr 2020 15:13:24 GMT"
},
{
"version": "v2",
"created": "Wed, 6 May 2020 13:01:43 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2023 15:21:04 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Levin",
"Keith",
""
],
[
"Priebe",
"Carey E.",
""
],
[
"Lyzinski",
"Vince",
""
]
] |
new_dataset
| 0.997311 |
2102.12846
|
Dimitri Kartsaklis
|
Robin Lorenz, Anna Pearson, Konstantinos Meichanetzidis, Dimitri
Kartsaklis, Bob Coecke
|
QNLP in Practice: Running Compositional Models of Meaning on a Quantum
Computer
|
38 pages
|
Journal of Artificial Intelligence Research Vol. 76 (2023),
1305-1342
|
10.1613/jair.1.14329
| null |
cs.CL cs.AI cs.LG quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantum Natural Language Processing (QNLP) deals with the design and
implementation of NLP models intended to be run on quantum hardware. In this
paper, we present results on the first NLP experiments conducted on Noisy
Intermediate-Scale Quantum (NISQ) computers for datasets of size greater than
100 sentences. Exploiting the formal similarity of the compositional model of
meaning by Coecke, Sadrzadeh and Clark (2010) with quantum theory, we create
representations for sentences that have a natural mapping to quantum circuits.
We use these representations to implement and successfully train NLP models
that solve simple sentence classification tasks on quantum hardware. We conduct
quantum simulations that compare the syntax-sensitive model of Coecke et al.
with two baselines that use less or no syntax; specifically, we implement the
quantum analogues of a "bag-of-words" model, where syntax is not taken into
account at all, and of a word-sequence model, where only word order is
respected. We demonstrate that all models converge smoothly both in simulations
and when run on quantum hardware, and that the results are the expected ones
based on the nature of the tasks and the datasets used. Another important goal
of this paper is to describe in a way accessible to AI and NLP researchers the
main principles, process and challenges of experiments on quantum hardware. Our
aim in doing this is to take the first small steps in this unexplored research
territory and pave the way for practical Quantum Natural Language Processing.
|
[
{
"version": "v1",
"created": "Thu, 25 Feb 2021 13:37:33 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 11:34:16 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Lorenz",
"Robin",
""
],
[
"Pearson",
"Anna",
""
],
[
"Meichanetzidis",
"Konstantinos",
""
],
[
"Kartsaklis",
"Dimitri",
""
],
[
"Coecke",
"Bob",
""
]
] |
new_dataset
| 0.997828 |
2208.04139
|
Sachith Seneviratne PhD
|
Sachith Seneviratne, Damith Senanayake, Sanka Rasnayaka, Rajith
Vidanaarachchi and Jason Thompson
|
DALLE-URBAN: Capturing the urban design expertise of large text to image
transformers
|
Accepted to DICTA 2022, released 11000+ environmental scene images
generated by Stable Diffusion and 1000+ images generated by DALLE-2
| null |
10.1109/DICTA56598.2022.10034603
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatically converting text descriptions into images using transformer
architectures has recently received considerable attention. Such advances have
implications for many applied design disciplines across fashion, art,
architecture, urban planning, landscape design and the future tools available
to such disciplines. However, a detailed analysis capturing the capabilities of
such models, specifically with a focus on the built environment, has not been
performed to date. In this work, we investigate the capabilities and biases of
such text-to-image methods as they apply to the built environment in detail. We
use a systematic grammar to generate queries related to the built environment
and evaluate resulting generated images. We generate 1020 different images and
find that text to image transformers are robust at generating realistic images
across different domains for this use-case. Generated imagery can be found at
the github: https://github.com/sachith500/DALLEURBAN
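  The systematic-grammar approach to query generation can be illustrated by
simple template expansion (the vocabulary below is an assumption for
illustration, not the paper's actual grammar):

    from itertools import product

    styles = ["modern", "brutalist", "art deco"]
    scenes = ["apartment block", "public park", "transit station"]
    contexts = ["in a dense city centre", "in a quiet suburb"]

    prompts = [f"a photo of a {style} {scene} {ctx}"
               for style, scene, ctx in product(styles, scenes, contexts)]
    print(len(prompts), prompts[0])   # 18 systematically generated queries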
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 04:59:16 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2022 08:21:46 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Seneviratne",
"Sachith",
""
],
[
"Senanayake",
"Damith",
""
],
[
"Rasnayaka",
"Sanka",
""
],
[
"Vidanaarachchi",
"Rajith",
""
],
[
"Thompson",
"Jason",
""
]
] |
new_dataset
| 0.992248 |
2209.07202
|
Yazan Boshmaf
|
Yazan Boshmaf, Isuranga Perera, Udesh Kumarasinghe, Sajitha Liyanage,
Husam Al Jawaheri
|
Dizzy: Large-Scale Crawling and Analysis of Onion Services
| null | null | null | null |
cs.CR cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With nearly 2.5m users, onion services have become a prominent part of the
darkweb. Over the last five years alone, the number of onion domains has
increased 20x, reaching more than 700k unique domains in January 2022. As onion
services host various types of illicit content, they have become a valuable
resource for darkweb research and an integral part of e-crime investigation and
threat intelligence. However, this content is largely un-indexed by today's
search engines and researchers have to rely on outdated or manually-collected
datasets that are limited in scale, scope, or both.
To tackle this problem, we built Dizzy: An open-source crawling and analysis
system for onion services. Dizzy implements novel techniques to explore,
update, check, and classify onion services at scale, without overwhelming the
Tor network. We deployed Dizzy in April 2021 and used it to analyze more than
63.3m crawled onion webpages, focusing on domain operations, web content,
cryptocurrency usage, and web graph. Our main findings show that onion services
are unreliable due to their high churn rate, have a relatively small number of
reachable domains that are often similar and illicit, enjoy a growing
underground cryptocurrency economy, and have a graph that is relatively
tightly-knit to, but topologically different from, the regular web's graph.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 10:29:25 GMT"
},
{
"version": "v2",
"created": "Mon, 1 May 2023 07:53:06 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2023 07:49:53 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Boshmaf",
"Yazan",
""
],
[
"Perera",
"Isuranga",
""
],
[
"Kumarasinghe",
"Udesh",
""
],
[
"Liyanage",
"Sajitha",
""
],
[
"Jawaheri",
"Husam Al",
""
]
] |
new_dataset
| 0.999668 |
2210.02438
|
Ivan Kapelyukh
|
Ivan Kapelyukh, Vitalis Vosylius, Edward Johns
|
DALL-E-Bot: Introducing Web-Scale Diffusion Models to Robotics
|
Webpage and videos: ( https://www.robot-learning.uk/dall-e-bot )
Published in IEEE Robotics and Automation Letters (RA-L)
| null |
10.1109/LRA.2023.3272516
| null |
cs.RO cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the first work to explore web-scale diffusion models for
robotics. DALL-E-Bot enables a robot to rearrange objects in a scene, by first
inferring a text description of those objects, then generating an image
representing a natural, human-like arrangement of those objects, and finally
physically arranging the objects according to that goal image. We show that
this is possible zero-shot using DALL-E, without needing any further example
arrangements, data collection, or training. DALL-E-Bot is fully autonomous and
is not restricted to a pre-defined set of objects or scenes, thanks to DALL-E's
web-scale pre-training. Encouraging real-world results, with both human studies
and objective metrics, show that integrating web-scale diffusion models into
robotics pipelines is a promising direction for scalable, unsupervised robot
learning.
|
[
{
"version": "v1",
"created": "Wed, 5 Oct 2022 17:58:31 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2022 01:30:26 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2023 14:11:50 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Kapelyukh",
"Ivan",
""
],
[
"Vosylius",
"Vitalis",
""
],
[
"Johns",
"Edward",
""
]
] |
new_dataset
| 0.998261 |
2211.07302
|
Chang-Bin Jeon
|
Chang-Bin Jeon, Hyeongi Moon, Keunwoo Choi, Ben Sangbae Chon, and
Kyogu Lee
|
MedleyVox: An Evaluation Dataset for Multiple Singing Voices Separation
|
5 pages, 3 figures, 6 tables, To appear in ICASSP 2023 (camera-ready
version)
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Separation of multiple singing voices into each voice is a rarely studied
area in music source separation research. The absence of a benchmark dataset
has hindered its progress. In this paper, we present an evaluation dataset and
provide baseline studies for multiple singing voices separation. First, we
introduce MedleyVox, an evaluation dataset for multiple singing voices
separation. We specify the problem definition in this dataset by categorizing
it into i) unison, ii) duet, iii) main vs. rest, and iv) N-singing separation.
Second, to overcome the absence of existing multi-singing datasets for
training purposes, we present a strategy for constructing multiple singing
mixtures using various single-singing datasets. Third, we propose the improved
super-resolution network (iSRNet), which greatly enhances initial estimates of
separation networks. Jointly trained with the Conv-TasNet and the multi-singing
mixture construction strategy, the proposed iSRNet achieved comparable
performance to ideal time-frequency masks on duet and unison subsets of
MedleyVox. Audio samples, the dataset, and codes are available on our website
(https://github.com/jeonchangbin49/MedleyVox).
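  The mixture-construction strategy, building multi-singer training examples
from single-singer recordings, can be sketched as gain-randomised summation
(loudness handling and other details here are assumptions, not the paper's
exact recipe):

    import numpy as np

    def make_multi_singing_mixture(voices, rng=np.random.default_rng(0)):
        # voices: list of mono waveforms (np.ndarray) from different single-singer sets.
        length = min(len(v) for v in voices)
        gains = rng.uniform(0.5, 1.0, size=len(voices))           # random per-voice levels
        sources = [g * v[:length] for g, v in zip(gains, voices)]
        mixture = np.sum(sources, axis=0)
        peak = np.max(np.abs(mixture)) + 1e-8
        return mixture / peak, [s / peak for s in sources]         # mixture + targets

    mix, targets = make_multi_singing_mixture([np.random.randn(16000) for _ in range(2)])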
|
[
{
"version": "v1",
"created": "Mon, 14 Nov 2022 12:27:35 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 14:13:42 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Jeon",
"Chang-Bin",
""
],
[
"Moon",
"Hyeongi",
""
],
[
"Choi",
"Keunwoo",
""
],
[
"Chon",
"Ben Sangbae",
""
],
[
"Lee",
"Kyogu",
""
]
] |
new_dataset
| 0.999488 |
2301.08523
|
Nikolaj Ignatieff Schwartzbach
|
Daji Landis and Nikolaj I. Schwartzbach
|
Side Contract Commitment Attacks on Blockchains
|
Preprint split in two
| null | null | null |
cs.GT cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We identify a subtle security issue that impacts the design of smart
contracts, because agents may themselves deploy smart contracts (side
contracts). Typically, equilibria of games are analyzed in vitro, under the
assumption that players cannot arbitrarily commit to strategies. However,
equilibria thus obtained do not hold in general in vivo, when games are
deployed on a blockchain. Being able to deploy side contracts changes
fundamental game-theoretic assumptions by inducing a meta-game wherein agents
strategize to deploy the best contracts. Not taking side contracts into account
thus fails to capture an important aspect of deploying smart contracts in
practice. A game that remains secure when the players can deploy side contracts
is said to be side contract resilient. We demonstrate the non-triviality of
side contract resilience by analyzing two smart contracts for decentralized
commerce. These contracts have the same intended functionality, but we show
that only one is side contract resilient. We then demonstrate a side contract
attack on first-price auctions, which are the transaction mechanisms used by
most major blockchains. We show that an agent may deploy a contract ensuring
their transaction is included in the next block at almost zero cost while
forcing most other agents to enter into a lottery for the remaining block
space. This benefits all the users, but is detrimental to the miners. This
might be cause for re-evaluation of the use of auctions in transaction fee
mechanisms. We show that the attack works under certain conditions that hold
with high probability from natural distributions. The attack also works against
the transaction mechanism EIP-1559. Our work highlights an issue that is
necessary to address to ensure the secure deployment of smart contracts and
suggests that other contracts already deployed on major blockchains may be
susceptible to these attacks.
|
[
{
"version": "v1",
"created": "Fri, 20 Jan 2023 11:57:42 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 12:42:59 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Landis",
"Daji",
""
],
[
"Schwartzbach",
"Nikolaj I.",
""
]
] |
new_dataset
| 0.977022 |
2303.09806
|
Qingtao Liu
|
Qingtao Liu, Yu Cui, Zhengnan Sun, Haoming Li, Gaofeng Li, Lin Shao,
Jiming Chen and Qi Ye
|
DexRepNet: Learning Dexterous Robotic Grasping Network with Geometric
and Spatial Hand-Object Representations
|
IROS2023(Under Review)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robotic dexterous grasping is a challenging problem due to the high degree of
freedom (DoF) and complex contacts of multi-fingered robotic hands. Existing
deep reinforcement learning (DRL) based methods leverage human demonstrations
to reduce sample complexity due to the high dimensional action space with
dexterous grasping. However, less attention has been paid to hand-object
interaction representations for high-level generalization. In this paper, we
propose a novel geometric and spatial hand-object interaction representation,
named DexRep, to capture dynamic object shape features and the spatial
relations between hands and objects during grasping. DexRep comprises Occupancy
Feature for rough shapes within sensing range by moving hands, Surface Feature
for changing hand-object surface distances, and Local-Geo Feature for local
geometric surface features most related to potential contacts. Based on the new
representation, we propose a dexterous deep reinforcement learning method to
learn a generalizable grasping policy DexRepNet. Experimental results show that
our method dramatically outperforms baselines that use existing representations
for robotic grasping, in both grasp success rate and convergence speed. It
achieves a 93% grasping success rate on seen objects and higher than 80%
grasping success rates on diverse objects of unseen categories in both
simulation and real-world experiments.
|
[
{
"version": "v1",
"created": "Fri, 17 Mar 2023 07:23:09 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Mar 2023 02:04:44 GMT"
},
{
"version": "v3",
"created": "Thu, 4 May 2023 06:52:37 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Liu",
"Qingtao",
""
],
[
"Cui",
"Yu",
""
],
[
"Sun",
"Zhengnan",
""
],
[
"Li",
"Haoming",
""
],
[
"Li",
"Gaofeng",
""
],
[
"Shao",
"Lin",
""
],
[
"Chen",
"Jiming",
""
],
[
"Ye",
"Qi",
""
]
] |
new_dataset
| 0.989008 |
2304.11359
|
Qian Wang
|
Qian Wang, Yongqin Xian, Hefei Ling, Jinyuan Zhang, Xiaorui Lin, Ping
Li, Jiazhong Chen, Ning Yu
|
Detecting Adversarial Faces Using Only Real Face Self-Perturbations
|
IJCAI2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial attacks aim to disturb the functionality of a target system by
adding specific noise to the input samples, bringing potential threats to
security and robustness when applied to facial recognition systems. Although
existing defense techniques achieve high accuracy in detecting some specific
adversarial faces (adv-faces), new attack methods especially GAN-based attacks
with completely different noise patterns circumvent them and reach a higher
attack success rate. Even worse, existing techniques require attack data before
implementing the defense, making it impractical to defend newly emerging
attacks that are unseen to defenders. In this paper, we investigate the
intrinsic generality of adv-faces and propose to generate pseudo adv-faces by
perturbing real faces with three heuristically designed noise patterns. We are
the first to train an adv-face detector using only real faces and their
self-perturbations, agnostic to victim facial recognition systems, and agnostic
to unseen attacks. By regarding adv-faces as out-of-distribution data, we then
naturally introduce a novel cascaded system for adv-face detection, which
consists of training data self-perturbations, decision boundary regularization,
and a max-pooling-based binary classifier focusing on abnormal local color
aberrations. Experiments conducted on LFW and CelebA-HQ datasets with eight
gradient-based and two GAN-based attacks validate that our method generalizes
to a variety of unseen adversarial attacks.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 09:55:48 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 01:40:39 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Wang",
"Qian",
""
],
[
"Xian",
"Yongqin",
""
],
[
"Ling",
"Hefei",
""
],
[
"Zhang",
"Jinyuan",
""
],
[
"Lin",
"Xiaorui",
""
],
[
"Li",
"Ping",
""
],
[
"Chen",
"Jiazhong",
""
],
[
"Yu",
"Ning",
""
]
] |
new_dataset
| 0.981954 |
2304.11794
|
Jun Wu
|
Jun Wu, Xuesong Ye, Chengjie Mou and Weinan Dai
|
FineEHR: Refine Clinical Note Representations to Improve Mortality
Prediction
|
The 11th International Symposium on Digital Forensics and Security
(Full Paper, Oral Presentation)
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monitoring the health status of patients in the Intensive Care Unit (ICU) is
a critical aspect of providing superior care and treatment. The availability of
large-scale electronic health records (EHR) provides machine learning models
with an abundance of clinical text and vital sign data, enabling them to make
highly accurate predictions. Despite the emergence of advanced Natural Language
Processing (NLP) algorithms for clinical note analysis, the complex textual
structure and noise present in raw clinical data have posed significant
challenges. Coarse embedding approaches without domain-specific refinement have
limited the accuracy of these algorithms. To address this issue, we propose
FINEEHR, a system that utilizes two representation learning techniques, namely
metric learning and fine-tuning, to refine clinical note embeddings, while
leveraging the intrinsic correlations among different health statuses and note
categories. We evaluate the performance of FINEEHR using two metrics, namely
Area Under the Curve (AUC) and AUC-PR, on a real-world MIMIC III dataset. Our
experimental results demonstrate that both refinement approaches improve
prediction accuracy, and their combination yields the best results. Moreover,
our proposed method outperforms prior works, with an AUC improvement of over
10%, achieving an average AUC of 96.04% and an average AUC-PR of 96.48% across
various classifiers.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 02:42:52 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 16:01:17 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Wu",
"Jun",
""
],
[
"Ye",
"Xuesong",
""
],
[
"Mou",
"Chengjie",
""
],
[
"Dai",
"Weinan",
""
]
] |
new_dataset
| 0.982924 |
2305.00355
|
Yifang Xu
|
Yifang Xu, Yunzhuo Sun, Yang Li, Yilei Shi, Xiaoxiang Zhu, Sidan Du
|
MH-DETR: Video Moment and Highlight Detection with Cross-modal
Transformer
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing demand for video understanding, video moment and
highlight detection (MHD) has emerged as a critical research topic. MHD aims to
localize all moments and predict clip-wise saliency scores simultaneously.
Despite progress made by existing DETR-based methods, we observe that these
methods coarsely fuse features from different modalities, which weakens the
temporal intra-modal context and results in insufficient cross-modal
interaction. To address this issue, we propose MH-DETR (Moment and Highlight
Detection Transformer) tailored for MHD. Specifically, we introduce a simple
yet efficient pooling operator within the uni-modal encoder to capture global
intra-modal context. Moreover, to obtain temporally aligned cross-modal
features, we design a plug-and-play cross-modal interaction module between the
encoder and decoder, seamlessly integrating visual and textual features.
Comprehensive experiments on QVHighlights, Charades-STA, Activity-Net, and
TVSum datasets show that MH-DETR outperforms existing state-of-the-art methods,
demonstrating its effectiveness and superiority. Our code is available at
https://github.com/YoucanBaby/MH-DETR.
|
[
{
"version": "v1",
"created": "Sat, 29 Apr 2023 22:50:53 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Xu",
"Yifang",
""
],
[
"Sun",
"Yunzhuo",
""
],
[
"Li",
"Yang",
""
],
[
"Shi",
"Yilei",
""
],
[
"Zhu",
"Xiaoxiang",
""
],
[
"Du",
"Sidan",
""
]
] |
new_dataset
| 0.998362 |
2305.01290
|
Rajen Kumar
|
Rajen Kumar, Prashant Kumar Srivastava, Sudhan Majhi
|
A Direct Construction of Type-II $Z$ Complementary Code Set with
Arbitrarily Large Codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a construction of type-II $Z$-complementary code
set (ZCCS), using a multi-variable function with Hamiltonian paths and disjoint
vertices. For a type-I $(K,M,Z,N)$-ZCCS, $K$ is bounded by $K \leq M
\left\lfloor \frac{N}{Z}\right\rfloor$. However, the proposed type-II ZCCS
provides $K = M(N-Z+1)$. The proposed type-II ZCCS provides a larger number of
codes compared to that of type-I ZCCS. Further, the proposed construction can
generate the Kernel of complete complementary code (CCC) as $(p,p,p)$-CCC, for
any integral value of $p\ge2$.
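  A quick numerical check of the stated advantage, using the two expressions
above with $M=2$, $N=8$, $Z=4$:
\[
K_{\text{type-I}} \le M\left\lfloor \tfrac{N}{Z} \right\rfloor = 2\left\lfloor \tfrac{8}{4} \right\rfloor = 4,
\qquad
K_{\text{type-II}} = M(N-Z+1) = 2\,(8-4+1) = 10,
\]
so the type-II construction supports substantially more codes for the same $M$,
$N$, and $Z$.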
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 09:46:42 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 22:22:44 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Kumar",
"Rajen",
""
],
[
"Srivastava",
"Prashant Kumar",
""
],
[
"Majhi",
"Sudhan",
""
]
] |
new_dataset
| 0.97331 |
2305.01867
|
Raymond Leung
|
Raymond Leung
|
An experience with PyCUDA: Refactoring an existing implementation of a
ray-surface intersection algorithm
|
14 pages. Keywords: PyCUDA, Python scripting, GPU Run-Time Code
Generation (RTCG), ray-mesh intersection, open-source code, learning, shared
experience
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article is a sequel to "GPU implementation of a ray-surface intersection
algorithm in CUDA" (arXiv:2209.02878) [1]. Its main focus is PyCUDA which
represents a Python scripting approach to GPU run-time code generation in the
Compute Unified Device Architecture (CUDA) framework. It accompanies the
open-source code distributed on GitHub, which provides a PyCUDA implementation
of a GPU-based line-segment, surface-triangle intersection test. The objective
is to share a PyCUDA learning experience with people who are new to PyCUDA.
Using the existing CUDA code and foundation from [1] as the starting point, we
document the key changes made to facilitate a transition to PyCUDA. As the CUDA
source for the ray-surface intersection test contains both host and device code
and uses multiple kernel functions, these notes offer a substantive example and
real-world perspective of what it is like to utilize PyCUDA. It delves into
custom data structures such as binary radix tree and highlights some possible
pitfalls. The case studies present a debugging strategy which may be used to
examine complex C structures in device memory using standard Python tools
without the CUDA-GDB debugger.
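  For readers new to PyCUDA, the run-time code generation pattern the article
refers to looks roughly like this minimal, self-contained example (deliberately
unrelated to the ray-surface intersection kernels themselves):

    import numpy as np
    import pycuda.autoinit                      # creates a CUDA context
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # Device code is compiled at run time from a Python string.
    mod = SourceModule("""
    __global__ void scale(float *x, float a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }
    """)
    scale = mod.get_function("scale")

    x = np.arange(8, dtype=np.float32)
    scale(drv.InOut(x), np.float32(2.0), np.int32(x.size),
          block=(32, 1, 1), grid=(1, 1))
    print(x)                                    # [ 0.  2.  4.  6.  8. 10. 12. 14.]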
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 02:42:43 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 12:01:45 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Leung",
"Raymond",
""
]
] |
new_dataset
| 0.999153 |
2305.01938
|
Fengbin Zhu
|
Fengbin Zhu, Chao Wang, Fuli Feng, Zifeng Ren, Moxin Li, Tat-Seng Chua
|
Doc2SoarGraph: Discrete Reasoning over Visually-Rich Table-Text
Documents with Semantic-Oriented Hierarchical Graphs
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Discrete reasoning over table-text documents (e.g., financial reports) has
gained increasing attention in the last two years. Existing works mostly simplify this
challenge by manually selecting and transforming document pages to structured
tables and paragraphs, hindering their practical application. In this work, we
explore a more realistic problem setting in the form of TAT-DQA, i.e. to answer
the question over a visually-rich table-text document. Specifically, we propose
a novel Doc2SoarGraph framework with enhanced discrete reasoning capability by
harnessing the differences and correlations among different elements (e.g.,
quantities, dates) of the given question and document with Semantic-oriented
hierarchical Graph structures. We conduct extensive experiments on TAT-DQA
dataset, and the results show that our proposed framework outperforms the best
baseline model by 17.73% and 16.91% in terms of Exact Match (EM) and F1 score
respectively on the test set, achieving the new state-of-the-art.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 07:30:32 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2023 10:02:39 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Zhu",
"Fengbin",
""
],
[
"Wang",
"Chao",
""
],
[
"Feng",
"Fuli",
""
],
[
"Ren",
"Zifeng",
""
],
[
"Li",
"Moxin",
""
],
[
"Chua",
"Tat-Seng",
""
]
] |
new_dataset
| 0.998331 |
2305.02319
|
Vasil Kolev
|
Vasil Kolev, Yavor Chapanov
|
Wavelet Coherence Of Total Solar Irradiance and Atlantic Climate
|
pages 12, Proceedings of the XIII Bulgarian-Serbian Astronomical
Conference (XIII BSAC), Velingrad, Bulgaria, 2022
|
Proceedings of the XIII Bulgarian-Serbian Astronomical Conference
(XIII BSAC) Velingrad, Bulgaria, October 3-7, no.25, pp.97-107, 2022
| null | null |
cs.CV astro-ph.IM cs.SE eess.SP hep-ex
|
http://creativecommons.org/licenses/by/4.0/
|
The oscillations of the climatic parameters of the North Atlantic Ocean play
an important role in various events in North America and Europe. Several climatic
indices are associated with these oscillations. The long term Atlantic
temperature anomalies are described by the Atlantic Multidecadal Oscillation
(AMO). The Atlantic Multidecadal Oscillation also known as Atlantic
Multidecadal Variability (AMV), is the variability of the sea surface
temperature (SST) of the North Atlantic Ocean at the timescale of several
decades. The AMO is correlated to air temperatures and rainfall over much of
the Northern Hemisphere, in particular in the summer climate in North America
and Europe. The long-term variations of surface temperature are driven mainly
by the cycles of solar activity, represented by the variations of the Total
Solar Irradiance (TSI). The frequency and amplitude dependences between the TSI
and AMO are analyzed by wavelet coherence of millennial time series since 800
AD till now. The results of wavelet coherence are compared with the detected
common solar and climate cycles in narrow frequency bands by the method of
Partial Fourier Approximation. The long-term coherence between TSI and AMO can
help to better understand recent climate change and can improve long-term
forecasts.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 17:59:05 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Kolev",
"Vasil",
""
],
[
"Chapanov",
"Yavor",
""
]
] |
new_dataset
| 0.998301 |
2305.02326
|
Zihao Zhang
|
Zihao Zhang
|
Cybernetic Environment: A Historical Reflection on System, Design, and
Machine Intelligence
|
8 pages, theory/history
|
JoDLA Journal of Digital Landscape Architecture, 2020
|
10.14627/537690004
| null |
cs.AI cs.IT cs.RO cs.SY eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Taking on a historical lens, this paper traces the development of cybernetics
and systems thinking back to the 1950s, when a group of interdisciplinary
scholars converged to create a new theoretical model based on machines and
systems for understanding matters of meaning, information, consciousness, and
life. By presenting a genealogy of research in the landscape architecture
discipline, the paper argues that landscape architects have been an important
part of the development of cybernetics by materializing systems based on
cybernetic principles in the environment through ecologically based landscape
design. The landscape discipline has developed a design framework that provides
transformative insights into understanding machine intelligence. The paper
calls for a new paradigm of environmental engagement to understand matters of
design and machine intelligence.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 13:09:42 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Zhang",
"Zihao",
""
]
] |
new_dataset
| 0.99184 |
2305.02360
|
Mengyun Shi
|
Mengyun Shi, Claire Cardie, Serge Belongie
|
Fashionpedia-Ads: Do Your Favorite Advertisements Reveal Your Fashion
Taste?
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Consumers are exposed to advertisements across many different domains on the
internet, such as fashion, beauty, car, food, and others. On the other hand,
fashion represents the second-highest e-commerce shopping category. Does
consumers' digital record behavior on various fashion ad images reveal their
fashion taste? Do ads from other domains indicate their fashion taste as well? In this
paper, we study the correlation between advertisements and fashion taste.
Towards this goal, we introduce a new dataset, Fashionpedia-Ads, which asks
subjects to provide their preferences on both ad (fashion, beauty, car, and
dessert) and fashion product (social network and e-commerce style) images.
Furthermore, we exhaustively collect and annotate the emotional, visual and
textual information on the ad images from multi-perspectives (abstractive
level, physical level, captions, and brands). We open-source Fashionpedia-Ads
to enable future studies and encourage more approaches to interpretability
research between advertisements and fashion taste.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 18:00:42 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Shi",
"Mengyun",
""
],
[
"Cardie",
"Claire",
""
],
[
"Belongie",
"Serge",
""
]
] |
new_dataset
| 0.999841 |
2305.02382
|
Miquel Espi Marques
|
Vasudha Kowtha, Miquel Espi Marques, Jonathan Huang, Yichi Zhang,
Carlos Avendano
|
Learning to Detect Novel and Fine-Grained Acoustic Sequences Using
Pretrained Audio Representations
|
IEEE ICASSP 2023
| null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
This work investigates pretrained audio representations for few-shot Sound
Event Detection. We specifically address the task of few-shot detection of
novel acoustic sequences, or sound events with semantically meaningful temporal
structure, without assuming access to non-target audio. We develop procedures
for pretraining suitable representations, and methods that transfer them to
our few-shot learning scenario. Our experiments evaluate the general-purpose
utility of our pretrained representations on AudioSet, and the utility of the
proposed few-shot methods via tasks constructed from real-world acoustic
sequences. Our pretrained embeddings are well suited to the proposed task and
enable multiple aspects of our few-shot framework.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 18:41:24 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Kowtha",
"Vasudha",
""
],
[
"Marques",
"Miquel Espi",
""
],
[
"Huang",
"Jonathan",
""
],
[
"Zhang",
"Yichi",
""
],
[
"Avendano",
"Carlos",
""
]
] |
new_dataset
| 0.988832 |
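The entry above detects novel acoustic sequences from a few labeled examples on
top of pretrained audio embeddings. A minimal nearest-prototype sketch under
that general setting is given below; the embedding dimensionality, class names,
and random features are placeholders, and this is not the paper's method.

```python
# Minimal sketch, not the paper's implementation: nearest-prototype few-shot
# classification over fixed audio embeddings.
import numpy as np

def prototypes(support_embeddings, support_labels):
    """Average the support embeddings of each class into one prototype."""
    classes = sorted(set(support_labels))
    protos = np.stack([
        np.mean([e for e, y in zip(support_embeddings, support_labels) if y == c], axis=0)
        for c in classes
    ])
    return classes, protos

def classify(query_embeddings, classes, protos):
    """Assign each query to the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(query_embeddings[:, None, :] - protos[None, :, :], axis=-1)
    return [classes[i] for i in dists.argmin(axis=1)]

# Toy usage with random 128-d vectors standing in for pretrained embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 128))
labels = ["target"] * 5 + ["other"] * 5
classes, protos = prototypes(support, labels)
print(classify(rng.normal(size=(3, 128)), classes, protos))
```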
2305.02426
|
Pegah Ahadian
|
Ali Mehrban, Pegah Ahadian
|
Evaluating BERT and ParsBERT for Analyzing Persian Advertisement Data
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper discusses the impact of the Internet on modern trading and the
importance of data generated from these transactions for organizations to
improve their marketing efforts. The paper uses the example of Divar, an online
marketplace for buying and selling products and services in Iran, and presents
a competition to predict the percentage of a car sales ad that would be
published on the Divar website. Since the dataset provides a rich source of
Persian text data, the authors use the Hazm library, a Python library designed
for processing Persian text, and two state-of-the-art language models, mBERT
and ParsBERT, to analyze it. The paper's primary objective is to compare the
performance of mBERT and ParsBERT on the Divar dataset. The authors provide
some background on data mining, Persian language, and the two language models,
examine the dataset's composition and statistical features, and provide details
on their fine-tuning and training configurations for both approaches. They
present the results of their analysis and highlight the strengths and
weaknesses of the two language models when applied to Persian text data. The
paper offers valuable insights into the challenges and opportunities of working
with low-resource languages such as Persian and the potential of advanced
language models like BERT for analyzing such data. The paper also explains the
data mining process, including steps such as data cleaning and normalization
techniques. Finally, the paper discusses the types of machine learning
problems, such as supervised, unsupervised, and reinforcement learning, and the
pattern evaluation techniques, such as confusion matrix. Overall, the paper
provides an informative overview of the use of language models and data mining
techniques for analyzing text data in low-resource languages, using the example
of the Divar dataset.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 20:50:05 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Mehrban",
"Ali",
""
],
[
"Ahadian",
"Pegah",
""
]
] |
new_dataset
| 0.997745 |
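The entry above fine-tunes mBERT and ParsBERT on Persian ad text preprocessed
with the Hazm library. Below is a minimal sketch of the preprocessing and
encoding step, assuming the hazm and transformers packages and the public
ParsBERT checkpoint id HooshvareLab/bert-fa-base-uncased; the ad text and the
use of the [CLS] vector are illustrative, not the authors' training
configuration.

```python
# Minimal sketch: normalize a Persian ad with hazm, then encode it with
# ParsBERT via HuggingFace transformers. The checkpoint id and downstream
# regression head are assumptions, not the paper's exact setup.
from hazm import Normalizer
from transformers import AutoTokenizer, AutoModel
import torch

normalizer = Normalizer()
text = normalizer.normalize("پراید ۱۳۹۵ کم کارکرد، بدون رنگ")  # hypothetical car ad

tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased")
model = AutoModel.from_pretrained("HooshvareLab/bert-fa-base-uncased")

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=64)
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0, :]  # [CLS] vector for a downstream head
print(cls_embedding.shape)
```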
2305.02433
|
Andrew Adamatzky
|
Panagiotis Mougkogiannis and Andrew Adamatzky
|
Spiking frequency modulation of proteinoids with light and realisation
of Boolean gates
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper examines the modulation of proteinoid spiking frequency in
response to light. Proteinoids are proteins formed through thermal condensation
of amino acids and have been found to exhibit spiking behaviour in response to
various stimuli. It has been demonstrated that their properties can be
modulated by light, with the frequency of spikes changing in response to
varying light intensity and wavelength. This paper explores the underlying
mechanisms of this phenomenon, including how light affects the proteinoid's
structure and its effect on the spiking frequency. We also discuss the
potential implications of this modulation for future research and applications.
Our research findings suggest that light could be used as a tool to regulate
the spiking frequency of proteinoids, opening up a new range of possibilities
for unconventional computing research.
|
[
{
"version": "v1",
"created": "Wed, 3 May 2023 21:25:51 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Mougkogiannis",
"Panagiotis",
""
],
[
"Adamatzky",
"Andrew",
""
]
] |
new_dataset
| 0.994868 |
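The entry above reports that light modulates proteinoid spiking frequency and
that Boolean gates can be realised from this behaviour. The toy sketch below
shows one way such gates could be read out by thresholding frequency; all
constants are invented for illustration and do not come from the experiments.

```python
# Toy illustration only (not the experimental protocol): if light intensity
# raises a proteinoid's spiking frequency, gates can be read out by thresholds.
BASE_HZ = 0.2        # assumed dark-state spiking frequency
GAIN_HZ = 0.6        # assumed frequency increase per illuminated input

def spike_frequency(light_a: bool, light_b: bool) -> float:
    """Hypothetical frequency response to two optical inputs."""
    return BASE_HZ + GAIN_HZ * (int(light_a) + int(light_b))

def gate_or(a: bool, b: bool) -> bool:
    return spike_frequency(a, b) > BASE_HZ + 0.5 * GAIN_HZ   # fires above baseline

def gate_and(a: bool, b: bool) -> bool:
    return spike_frequency(a, b) > BASE_HZ + 1.5 * GAIN_HZ   # needs both inputs

for a in (False, True):
    for b in (False, True):
        print(a, b, gate_or(a, b), gate_and(a, b))
```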
2305.02493
|
Shuhang Tan
|
Shuhang Tan, Zhiling Wang and Yan Zhong
|
RCP-RF: A Comprehensive Road-car-pedestrian Risk Management Framework
based on Driving Risk Potential Field
| null | null | null | null |
cs.LG cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have witnessed a proliferation of traffic accidents, which has led
to extensive research on Automated Vehicle (AV) technologies to reduce vehicle
accidents, especially on risk assessment frameworks for AV technologies.
However, existing time-based frameworks cannot handle complex traffic scenarios
and ignore the influence of each moving object's motion tendency on the risk
distribution, leading to performance degradation. To address this problem, we
propose a novel comprehensive driving risk management framework named RCP-RF
based on potential field theory in a Connected and Automated Vehicles (CAV)
environment, where the pedestrian risk metric is combined into a unified
road-vehicle driving risk management framework. Different from existing
algorithms, the motion tendency between the ego and obstacle cars and the
pedestrian factor are explicitly considered in the proposed framework, which
improves the performance of the driving risk model. Moreover, the proposed
method requires only O(N^2) time complexity. Empirical studies validate the
superiority of our proposed framework against state-of-the-art methods on the
real-world dataset NGSIM and a real AV platform.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 01:54:37 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Tan",
"Shuhang",
""
],
[
"Wang",
"Zhiling",
""
],
[
"Zhong",
"Yan",
""
]
] |
new_dataset
| 0.997494 |
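The entry above builds a road-car-pedestrian risk field with O(N^2) pairwise
terms. The sketch below shows a generic potential-field risk of that pairwise
form, assuming a simple distance-and-closing-speed term rather than the paper's
RCP-RF equations.

```python
# Minimal sketch, assuming a generic potential-field risk formulation (not the
# paper's exact model): each pair of road users contributes a risk term that
# decays with distance and grows when they are closing in on each other.
import numpy as np

def pairwise_risk(positions, velocities, eps=1e-6):
    """positions, velocities: (N, 2) arrays for N road users (ego, cars, pedestrians)."""
    n = positions.shape[0]
    risk = np.zeros((n, n))
    for i in range(n):                      # O(N^2) double loop over road users
        for j in range(n):
            if i == j:
                continue
            rel_p = positions[j] - positions[i]
            rel_v = velocities[j] - velocities[i]
            dist = np.linalg.norm(rel_p) + eps
            closing = max(0.0, -np.dot(rel_p, rel_v) / dist)  # speed of approach
            risk[i, j] = (1.0 + closing) / dist**2
    return risk

pos = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 2.0]])   # ego car, lead car, pedestrian
vel = np.array([[8.0, 0.0], [6.0, 0.0], [0.0, -1.0]])
print(pairwise_risk(pos, vel))
```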
2305.02510
|
Prasanna Date
|
Prasanna Date, Chathika Gunaratne, Shruti Kulkarni, Robert Patton,
Mark Coletti, Thomas Potok
|
SuperNeuro: A Fast and Scalable Simulator for Neuromorphic Computing
| null | null | null | null |
cs.NE cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
In many neuromorphic workflows, simulators play a vital role for important
tasks such as training spiking neural networks (SNNs), running neuroscience
simulations, and designing, implementing and testing neuromorphic algorithms.
Currently available simulators cater to either neuroscience workflows
(such as NEST and Brian2) or deep learning workflows (such as BindsNET). While
the neuroscience-based simulators are slow and not very scalable, the deep
learning-based simulators do not support certain functionalities such as
synaptic delay that are typical of neuromorphic workloads. In this paper, we
address this gap in the literature and present SuperNeuro, which is a fast and
scalable simulator for neuromorphic computing, capable of both homogeneous and
heterogeneous simulations as well as GPU acceleration. We also present
preliminary results comparing SuperNeuro to widely used neuromorphic simulators
such as NEST, Brian2 and BindsNET in terms of computation times. We demonstrate
that SuperNeuro can be approximately 10--300 times faster than some of the
other simulators for small sparse networks. On large sparse and large dense
networks, SuperNeuro can be approximately 2.2 and 3.4 times faster than the
other simulators respectively.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 02:43:01 GMT"
}
] | 2023-05-05T00:00:00 |
[
[
"Date",
"Prasanna",
""
],
[
"Gunaratne",
"Chathika",
""
],
[
"Kulkarni",
"Shruti",
""
],
[
"Patton",
"Robert",
""
],
[
"Coletti",
"Mark",
""
],
[
"Potok",
"Thomas",
""
]
] |
new_dataset
| 0.992903 |
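The entry above benchmarks SuperNeuro against NEST, Brian2 and BindsNET. The
sketch below illustrates the kind of spiking workload such simulators execute,
using a generic NumPy leaky integrate-and-fire loop; it does not use
SuperNeuro's API or reproduce its internals.

```python
# Minimal sketch of a dense leaky integrate-and-fire (LIF) network stepped in
# NumPy; a generic illustration of a neuromorphic simulation workload.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 100, 1000, 1.0
tau, v_thresh, v_reset = 20.0, 1.0, 0.0

weights = rng.normal(0.0, 0.05, size=(n, n))   # random synaptic weights
v = np.zeros(n)                                # membrane potentials
spikes = np.zeros(n)

for _ in range(steps):
    input_current = weights @ spikes + rng.normal(0.05, 0.02, size=n)
    v += dt * (-v / tau + input_current)       # leaky integration
    spikes = (v >= v_thresh).astype(float)     # threshold crossing
    v = np.where(spikes > 0, v_reset, v)       # reset fired neurons

print("spikes in final step:", int(spikes.sum()))
```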