Dataset columns (in record order):
- id: string (lengths 9–10)
- submitter: string (lengths 2–52, nullable)
- authors: string (lengths 4–6.51k)
- title: string (lengths 4–246)
- comments: string (lengths 1–523, nullable)
- journal-ref: string (lengths 4–345, nullable)
- doi: string (lengths 11–120, nullable)
- report-no: string (lengths 2–243, nullable)
- categories: string (lengths 5–98)
- license: string (9 classes)
- abstract: string (lengths 33–3.33k)
- versions: list
- update_date: timestamp[s]
- authors_parsed: list
- prediction: string (1 class)
- probability: float64 (0.95–1)
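The records below follow the column order listed above. As a rough illustration of how a dump with this schema might be loaded and filtered, here is a minimal pandas sketch; the file name "arxiv_new_dataset_predictions.parquet" is hypothetical, and the snippet assumes the table has been exported to Parquet with exactly these column names.

```python
# Minimal sketch, assuming the table above was exported to Parquet.
# The file name is hypothetical; adjust it to the actual export.
import pandas as pd

df = pd.read_parquet("arxiv_new_dataset_predictions.parquet")

# Confirm the columns and dtypes match the schema listed above.
print(df.dtypes)

# Keep only records whose "new_dataset" prediction is highly confident.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]

# Show a few fields of interest for each confidently predicted paper.
print(confident[["id", "title", "categories", "update_date"]].head())
```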
2306.13566
|
Xiaogang Peng
|
Xiaogang Peng, Xiao Zhou, Yikai Luo, Hao Wen, Yu Ding, Zizhao Wu
|
The MI-Motion Dataset and Benchmark for 3D Multi-Person Motion
Prediction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D multi-person motion prediction is a challenging task that involves
modeling individual behaviors and interactions between people. Despite the
emergence of approaches for this task, comparing them is difficult due to the
lack of standardized training settings and benchmark datasets. In this paper,
we introduce the Multi-Person Interaction Motion (MI-Motion) Dataset, which
includes skeleton sequences of multiple individuals collected by motion capture
systems and refined and synthesized using a game engine. The dataset contains
167k frames of interacting people's skeleton poses and is categorized into 5
different activity scenes. To facilitate research in multi-person motion
prediction, we also provide benchmarks to evaluate the performance of
prediction methods in three settings: short-term, long-term, and
ultra-long-term prediction. Additionally, we introduce a novel baseline
approach that leverages graph and temporal convolutional networks, which has
demonstrated competitive results in multi-person motion prediction. We believe
that the proposed MI-Motion benchmark dataset and baseline will facilitate
future research in this area, ultimately leading to better understanding and
modeling of multi-person interactions.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 15:38:22 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 15:13:31 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Peng",
"Xiaogang",
""
],
[
"Zhou",
"Xiao",
""
],
[
"Luo",
"Yikai",
""
],
[
"Wen",
"Hao",
""
],
[
"Ding",
"Yu",
""
],
[
"Wu",
"Zizhao",
""
]
] |
new_dataset
| 0.999876 |
2306.13667
|
Fachrina Dewi Puspitasari
|
Fachrina Dewi Puspitasari, Gareth Tyson, Ehsan-Ul Haq, Pan Hui,
Lik-Hang Lee
|
Ghost Booking as a New Philanthropy Channel: A Case Study on
Ukraine-Russia Conflict
|
Accepted at ACM Hypertext 2023
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The term ghost booking has recently emerged as a new way to conduct
humanitarian acts during the conflict between Russia and Ukraine in 2022. The
phenomenon describes the events where netizens donate to Ukrainian citizens
through no-show bookings on the Airbnb platform. Impressively, the social
fundraising act that used to be organized on donation-based crowdfunding
platforms has shifted to a sharing-economy marketplace and has thus gained
more visibility. Although the donation purpose is clear, the motivation of
donors in selecting a property to book remains concealed. Thus, our study aims
to explore peer-to-peer donation behavior on a platform that was originally
intended for economic exchanges, and further identifies which platform
attributes effectively drive donation behaviors. We collect over 200K guest
reviews from 16K Airbnb property listings in Ukraine by employing two
collection methods (screen scraping and HTML parsing). Then, we distinguish
ghost bookings among guest reviews. Our analysis uncovers the relationship
between ghost booking behavior and the platform attributes, and pinpoints
several attributes that influence ghost booking. Our findings highlight that
donors incline toward credible properties that explicitly signal humanitarian
need, i.e., hosts in penury.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 14:11:51 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Puspitasari",
"Fachrina Dewi",
""
],
[
"Tyson",
"Gareth",
""
],
[
"Haq",
"Ehsan-Ul",
""
],
[
"Hui",
"Pan",
""
],
[
"Lee",
"Lik-Hang",
""
]
] |
new_dataset
| 0.985699 |
2306.13675
|
Kenya Andrews
|
Kenya S. Andrews and Bhuvani Shah and Lu Cheng
|
Intersectionality and Testimonial Injustice in Medical Records
| null | null | null | null |
cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting testimonial injustice is an essential element of addressing
inequities and promoting inclusive healthcare practices, many of which are
life-critical. However, using a single demographic factor to detect testimonial
injustice does not fully encompass the nuanced identities that contribute to a
patient's experience. Further, some injustices may only be evident when
examining the nuances that arise through the lens of intersectionality.
Ignoring such injustices can result in poor quality of care or life-endangering
events. Thus, considering intersectionality could result in more accurate
classifications and just decisions. To illustrate this, we use real-world
medical data to determine whether medical records exhibit words that could lead
to testimonial injustice, employ fairness metrics (e.g. demographic parity,
differential intersectional fairness, and subgroup fairness) to assess the
severity to which subgroups are experiencing testimonial injustice, and analyze
how the intersectionality of demographic features (e.g. gender and race) makes a
difference in uncovering testimonial injustice. From our analysis, we found
that with intersectionality we can better see disparities in how subgroups are
treated, and that there are differences in how someone is treated based on the
intersection of their demographic attributes. This has not been previously
studied in clinical records, nor has it been proven through empirical study.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 17:22:50 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Andrews",
"Kenya S.",
""
],
[
"Shah",
"Bhuvani",
""
],
[
"Cheng",
"Lu",
""
]
] |
new_dataset
| 0.999036 |
2306.13678
|
Fei-Liang Yuan
|
Fei-Liang Yuan, Martin Sommerfeld, Pradeep Muramulla, Srikanth
Gopireddy, Lars Pasternak, Nora Urbanetz, Thomas Profitlich
|
Rigid3D: a hybrid multi-sphere DEM framework for simulation of
non-spherical particles in multi-phase flow
|
Manuscript for submission to Springer Journal - Computational
Particle Mechanics
| null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article presents the development and validation of a hybrid multi-sphere
discrete element framework - Rigid3D, for the simulation of granular systems
with arbitrarily shaped particles in 3D space. In this DEM framework, a
non-spherical particle is approximated by three different geometric models: (1)
multi-sphere model with overlapping spheres (MS model), (2) particle surface
with triangle mesh (surface model), and (3) discretized particle body with
polyhedral cells (cell model). The multi-sphere approach will be the "engine"
for efficient DEM simulations, while the particle's mesh and cell models will
be updated simultaneously according to the position and orientation of their
associated MS model, for use in particle-related inter-phase couplings in a
multi-phase flow. In this sense, Rigid3D tries to combine the best of both
worlds in multi-sphere and polyhedral DEMs: multi-sphere method for the
efficiency and acceptable accuracy in the DEM simulation of granular flows,
while the surface and cell models for the couplings between particles and other
phases (continuous or dispersed phases) without affecting the performance of
DEM simulations.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 14:38:29 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Yuan",
"Fei-Liang",
""
],
[
"Sommerfeld",
"Martin",
""
],
[
"Muramulla",
"Pradeep",
""
],
[
"Gopireddy",
"Srikanth",
""
],
[
"Pasternak",
"Lars",
""
],
[
"Urbanetz",
"Nora",
""
],
[
"Profitlich",
"Thomas",
""
]
] |
new_dataset
| 0.967549 |
2306.13680
|
Rachel Draelos
|
Angela Hemesath, Kenyon Wright, Matthew Michael Draelos, Rachel Lea
Draelos
|
The Cydoc smart patient intake form accelerates medical note writing
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: This study evaluates the effect of Cydoc software tools on medical
note time-to-completion and quality.
Methods: Medical students were recruited by email to participate in a video
encounter with a standardized patient for three scenarios: writing a note from
scratch (control), writing a note with the Cydoc educational tool, and writing
a note with the Cydoc intake form. Notes were subsequently anonymized and rated
by a resident physician across four quality measures. Note time-to-completion
was analyzed using a one-way ANOVA with post-hoc Bonferroni correction, while
note quality scores were compared using a Wilcoxon paired signed rank test.
Results: Eighteen medical students participated in the study. The average
note time-to-completion, which included the patient interview and note writing,
was 17 +/- 7.0 minutes from scratch, 18 +/- 8.0 minutes with the educational
tool, and 5.7 +/- 3.0 minutes with the intake form. Using the Cydoc intake form
was significantly faster than writing from scratch (p = 0.0001) or using the
educational tool (p = 8 x 10^-5). Notes written with Cydoc tools had higher note
comprehensiveness (3.24 > 3.06), pertinent positives (3.47 > 2.94), and
pertinent negatives (3.47 > 2.67), although this trend did not reach
statistical significance.
Conclusions: Using the Cydoc smart patient intake form accelerated note
writing by 2.98x while maintaining note quality. The Cydoc smart patient intake
form has the potential to streamline clinical documentation and save
clinicians' time. Future work is needed to evaluate Cydoc tools in an in-person
outpatient setting with practicing clinician users.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 15:48:51 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Hemesath",
"Angela",
""
],
[
"Wright",
"Kenyon",
""
],
[
"Draelos",
"Matthew Michael",
""
],
[
"Draelos",
"Rachel Lea",
""
]
] |
new_dataset
| 0.986658 |
2306.13693
|
Onur Dizdar
|
Onur Dizdar, Ata Sattarzadeh, Yi Xien Yap, and Stephen Wang
|
RSMA for Overloaded MIMO Networks: Low-Complexity Design for Max-Min
Fairness
|
arXiv admin note: text overlap with arXiv:2306.13414
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rate-Splitting Multiple Access (RSMA) is a robust multiple access scheme for
multi-antenna wireless networks. In this work, we study the performance of RSMA
in downlink overloaded networks, where the number of transmit antennas is
smaller than the number of users. We provide analysis and closed-form solutions
for optimal power and rate allocations that maximize max-min fairness when
low-complexity precoding schemes are employed. The derived closed-form
solutions are used to propose a low-complexity RSMA system design for precoder
selection and resource allocation for an arbitrary number of users and antennas
under perfect and imperfect Channel State Information at the Transmitter
(CSIT). We compare the performance of the proposed design with benchmark
designs based on Space Division Multiple Access (SDMA) with and without user
scheduling. By numerical results, we show that the proposed low-complexity RSMA
design achieves a significantly higher rate compared to the SDMA-based
benchmark designs under perfect and imperfect CSIT.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 10:19:26 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Dizdar",
"Onur",
""
],
[
"Sattarzadeh",
"Ata",
""
],
[
"Yap",
"Yi Xien",
""
],
[
"Wang",
"Stephen",
""
]
] |
new_dataset
| 0.999079 |
2306.13702
|
Dmitriy Smirnov
|
Dmitriy Smirnov, Chloe LeGendre, Xueming Yu, Paul Debevec
|
Magenta Green Screen: Spectrally Multiplexed Alpha Matting with Deep
Colorization
|
In DigiPro 2023
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Magenta Green Screen, a novel machine learning--enabled matting
technique for recording the color image of a foreground actor and a
simultaneous high-quality alpha channel without requiring a special camera or
manual keying techniques. We record the actor on a green background but light
them with only red and blue foreground lighting. In this configuration, the
green channel shows the actor silhouetted against a bright, even background,
which can be used directly as a holdout matte, the inverse of the actor's alpha
channel. We then restore the green channel of the foreground using a machine
learning colorization technique. We train the colorization model with an
example sequence of the actor lit by white lighting, yielding convincing and
temporally stable colorization results. We further show that time-multiplexing
the lighting between Magenta Green Screen and Green Magenta Screen allows the
technique to be practiced under what appears to be mostly normal lighting. We
demonstrate that our technique yields high-quality compositing results when
implemented on a modern LED virtual production stage. The alpha channel data
obtainable with our technique can provide significantly higher quality training
data for natural image matting algorithms to support future ML matting
research.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 16:22:33 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Smirnov",
"Dmitriy",
""
],
[
"LeGendre",
"Chloe",
""
],
[
"Yu",
"Xueming",
""
],
[
"Debevec",
"Paul",
""
]
] |
new_dataset
| 0.988903 |
2306.13743
|
James Chen
|
James Y. Chen, Recep Can Yavas, Victoria Kostina
|
Variable-Length Codes with Bursty Feedback
|
Presented at ISIT 2023
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study variable-length codes for point-to-point discrete memoryless
channels with noiseless unlimited-rate feedback that occurs in $L$ bursts. We
term such codes variable-length bursty-feedback (VLBF) codes. Unlike classical
codes with feedback after each transmitted code symbol, bursty feedback fits
better with protocols that employ sparse feedback after a packet is sent and
also with half-duplex end devices that cannot transmit and listen to the
channel at the same time. We present a novel non-asymptotic achievability bound
for VLBF codes with $L$ bursts of feedback over any discrete memoryless
channel. We numerically evaluate the bound over the binary symmetric channel
(BSC). We perform optimization over the time instances at which feedback occurs
for both our own bound and Yavas et al.'s non-asymptotic achievability bound
for variable-length stop-feedback (VLSF) codes, where only a single bit is sent
at each feedback instance. Our results demonstrate the advantages of richer
feedback: VLBF codes significantly outperform VLSF codes at short blocklengths,
especially as the error probability $\epsilon$ decreases. Remarkably, for
BSC(0.11) and error probability $10^{-10}$, our VLBF code with $L=5$ and
expected decoding time $N\leq 400$ outperforms the achievability bound given by
Polyanskiy et al. for VLSF codes with $L=\infty$, and our VLBF code with $L=3$.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 19:04:50 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Chen",
"James Y.",
""
],
[
"Yavas",
"Recep Can",
""
],
[
"Kostina",
"Victoria",
""
]
] |
new_dataset
| 0.994816 |
2306.13761
|
Amal Feriani
|
Amal Feriani, Di Wu, Steve Liu, Greg Dudek
|
CeBed: A Benchmark for Deep Data-Driven OFDM Channel Estimation
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has been extensively used in wireless communication problems,
including channel estimation. Although several data-driven approaches exist, a
fair and realistic comparison between them is difficult due to inconsistencies
in the experimental conditions and the lack of a standardized experimental
design. In addition, the performance of data-driven approaches is often
compared based on empirical analysis. The lack of reproducibility and
availability of standardized evaluation tools (e.g., datasets, codebases)
hinder the development and progress of data-driven methods for channel
estimation and wireless communication in general. In this work, we introduce an
initiative to build benchmarks that unify several data-driven OFDM channel
estimation approaches. Specifically, we present CeBed (a testbed for channel
estimation), including different datasets covering various system models and
propagation conditions along with the implementation of ten deep and
traditional baselines. This benchmark considers different practical aspects
such as the robustness of the data-driven models, the number and the
arrangement of pilots, and the number of receive antennas. This work offers a
comprehensive and unified framework to help researchers evaluate and design
data-driven channel estimation algorithms.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 19:55:41 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Feriani",
"Amal",
""
],
[
"Wu",
"Di",
""
],
[
"Liu",
"Steve",
""
],
[
"Dudek",
"Greg",
""
]
] |
new_dataset
| 0.997577 |
2306.13775
|
Senem Tanberk PhD
|
Selahattin Serdar Helli, Senem Tanberk, Sena Nur Cavsak
|
Resume Information Extraction via Post-OCR Text Processing
|
in Turkish language
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Information extraction (IE), one of the main tasks of natural language
processing (NLP), has recently become increasingly important for resume analysis.
In previous studies that extract information from CV text, sentence classification
was generally performed using NLP models. In this study, we aim to extract
information by classifying all of the text groups after pre-processing steps such
as Optical Character Recognition (OCR) and object recognition on the resumes with
the YOLOv8 model. The text dataset consists of 286 resumes collected for 5
different (education, experience, talent, personal and language) job
descriptions in the IT industry. The dataset created for object recognition
consists of 1198 resumes, which were collected from the open-source internet
and labeled as sets of text. BERT, BERT-t, DistilBERT, RoBERTa and XLNet were
used as models. F1 score variances were used to compare the model results. In
addition, the YOLOv8 model has also been reported comparatively in itself. As a
result of the comparison, DistilBERT showed better results despite having
fewer parameters than the other models.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 20:14:07 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Helli",
"Selahattin Serdar",
""
],
[
"Tanberk",
"Senem",
""
],
[
"Cavsak",
"Sena Nur",
""
]
] |
new_dataset
| 0.999677 |
2306.13776
|
Jinkyu Koo
|
Jinkyu Koo, John Yang, Le An, Gwenaelle Cunha Sergio, Su Inn Park
|
Swin-Free: Achieving Better Cross-Window Attention and Efficiency with
Size-varying Window
|
8 pages, 3 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer models have shown great potential in computer vision, following
their success in language tasks. Swin Transformer is one of them that
outperforms convolution-based architectures in terms of accuracy, while
improving efficiency when compared to Vision Transformer (ViT) and its
variants, which have quadratic complexity with respect to the input size. Swin
Transformer features shifting windows, which allow cross-window connections while
limiting self-attention computation to non-overlapping local windows. However,
shifting windows introduces memory copy operations, which account for a
significant portion of its runtime. To mitigate this issue, we propose
Swin-Free in which we apply size-varying windows across stages, instead of
shifting windows, to achieve cross-connection among local windows. With this
simple design change, Swin-Free runs faster than the Swin Transformer at
inference with better accuracy. Furthermore, we also propose a few Swin-Free
variants that are faster than their Swin Transformer counterparts.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 20:19:58 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Koo",
"Jinkyu",
""
],
[
"Yang",
"John",
""
],
[
"An",
"Le",
""
],
[
"Sergio",
"Gwenaelle Cunha",
""
],
[
"Park",
"Su Inn",
""
]
] |
new_dataset
| 0.996499 |
2306.13814
|
Loc Hoang
|
Loc Hoang, Rita Brugarolas Brufau, Ke Ding, Bo Wu
|
BatchGNN: Efficient CPU-Based Distributed GNN Training on Very Large
Graphs
|
Edited preprint of a conference submission
| null | null | null |
cs.LG cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present BatchGNN, a distributed CPU system that showcases techniques that
can be used to efficiently train GNNs on terabyte-sized graphs. It reduces
communication overhead with macrobatching in which multiple minibatches'
subgraph sampling and feature fetching are batched into one communication relay
to reduce redundant feature fetches when input features are static. BatchGNN
provides integrated graph partitioning and native GNN layer implementations to
improve runtime, and it can cache aggregated input features to further reduce
sampling overhead. BatchGNN achieves an average $3\times$ speedup over DistDGL
on three GNN models trained on OGBN graphs, outperforms the runtimes reported
by distributed GPU systems $P^3$ and DistDGLv2, and scales to a terabyte-sized
graph.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 23:25:34 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Hoang",
"Loc",
""
],
[
"Brufau",
"Rita Brugarolas",
""
],
[
"Ding",
"Ke",
""
],
[
"Wu",
"Bo",
""
]
] |
new_dataset
| 0.991136 |
2306.13818
|
Jiafei Duan
|
Jiafei Duan, Yi Ru Wang, Mohit Shridhar, Dieter Fox, Ranjay Krishna
|
AR2-D2: Training a Robot Without a Robot
|
Project website: www.ar2d2.site
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diligently gathered human demonstrations serve as the unsung heroes
empowering the progression of robot learning. Today, demonstrations are
collected by training people to use specialized controllers, which
(tele-)operate robots to manipulate a small number of objects. By contrast, we
introduce AR2-D2: a system for collecting demonstrations which (1) does not
require people with specialized training, (2) does not require any real robots
during data collection, and therefore, (3) enables manipulation of diverse
objects with a real robot. AR2-D2 is a framework in the form of an iOS app that
people can use to record a video of themselves manipulating any object while
simultaneously capturing essential data modalities for training a real robot.
We show that data collected via our system enables the training of behavior
cloning agents in manipulating real objects. Our experiments further show that
training with our AR data is as effective as training with real-world robot
demonstrations. Moreover, our user study indicates that users find AR2-D2
intuitive to use and require no training in contrast to four other frequently
employed methods for collecting robot demonstrations.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 23:54:26 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Duan",
"Jiafei",
""
],
[
"Wang",
"Yi Ru",
""
],
[
"Shridhar",
"Mohit",
""
],
[
"Fox",
"Dieter",
""
],
[
"Krishna",
"Ranjay",
""
]
] |
new_dataset
| 0.999605 |
2306.13837
|
Mao Chen
|
Yajing Yang, Zeyu Zeng, Mao Chen, Ruirui Shang
|
DEKGCI: A double-sided recommendation model for integrating knowledge
graph and user-item interaction graph
|
24 pages, 6 figures, 6 tables
| null | null | null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Both knowledge graphs and user-item interaction graphs are frequently used in
recommender systems due to their ability to provide rich information for
modeling users and items. However, existing studies often focused on one of
these sources (either the knowledge graph or the user-item interaction graph),
resulting in underutilization of the benefits that can be obtained by
integrating both sources of information. In this paper, we propose DEKGCI, a
novel double-sided recommendation model. In DEKGCI, we use the high-order
collaborative signals from the user-item interaction graph to enrich the user
representations on the user side. Additionally, we utilize the high-order
structural and semantic information from the knowledge graph to enrich the item
representations on the item side. DEKGCI simultaneously learns the user and
item representations to effectively capture the joint interactions between
users and items. Three real-world datasets are adopted in the experiments to
evaluate DEKGCI's performance, and experimental results demonstrate its high
effectiveness compared to seven state-of-the-art baselines in terms of AUC and
ACC.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 01:54:49 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Yang",
"Yajing",
""
],
[
"Zeng",
"Zeyu",
""
],
[
"Chen",
"Mao",
""
],
[
"Shang",
"Ruirui",
""
]
] |
new_dataset
| 0.993959 |
2306.13875
|
Zhiling Guo
|
Zhiling Guo, Yinqiang Zheng, Haoran Zhang, Xiaodan Shi, Zekun Cai,
Ryosuke Shibasaki, Jinyue Yan
|
Real-World Video for Zoom Enhancement based on Spatio-Temporal Coupling
|
11 pages
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, single-frame image super-resolution (SR) has become more
realistic by considering the zooming effect and using real-world short- and
long-focus image pairs. In this paper, we further investigate the feasibility
of applying realistic multi-frame clips to enhance zoom quality via
spatio-temporal information coupling. Specifically, we first built a real-world
video benchmark, VideoRAW, by a synchronized co-axis optical system. The
dataset contains paired short-focus raw and long-focus sRGB videos of different
dynamic scenes. Based on VideoRAW, we then presented a Spatio-Temporal Coupling
Loss, termed as STCL. The proposed STCL is intended for better utilization of
information from paired and adjacent frames to align and fuse features both
temporally and spatially at the feature level. The outperformed experimental
results obtained in different zoom scenarios demonstrate the superiority of
integrating real-world video dataset and STCL into existing SR models for zoom
quality enhancement, and reveal that the proposed method can serve as an
advanced and viable tool for video zoom.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 06:19:00 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Guo",
"Zhiling",
""
],
[
"Zheng",
"Yinqiang",
""
],
[
"Zhang",
"Haoran",
""
],
[
"Shi",
"Xiaodan",
""
],
[
"Cai",
"Zekun",
""
],
[
"Shibasaki",
"Ryosuke",
""
],
[
"Yan",
"Jinyue",
""
]
] |
new_dataset
| 0.999557 |
2306.13888
|
Raviraj Joshi
|
Aabha Pingle, Aditya Vyawahare, Isha Joshi, Rahul Tangsali, Raviraj
Joshi
|
L3Cube-MahaSent-MD: A Multi-domain Marathi Sentiment Analysis Dataset
and Transformer Models
|
Accepted at DMLR Workshop @ ICML 2023
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The exploration of sentiment analysis in low-resource languages, such as
Marathi, has been limited due to the availability of suitable datasets. In this
work, we present L3Cube-MahaSent-MD, a multi-domain Marathi sentiment analysis
dataset, with four different domains - movie reviews, general tweets, TV show
subtitles, and political tweets. The dataset consists of around 60,000 manually
tagged samples covering 3 distinct sentiments - positive, negative, and
neutral. We create a sub-dataset for each domain comprising 15k samples. The
MahaSent-MD is the first comprehensive multi-domain sentiment analysis dataset
within the Indic sentiment landscape. We fine-tune different monolingual and
multilingual BERT models on these datasets and report the best accuracy with
the MahaBERT model. We also present an extensive in-domain and cross-domain
analysis thus highlighting the need for low-resource multi-domain datasets. The
data and models are available at https://github.com/l3cube-pune/MarathiNLP .
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 07:27:53 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Pingle",
"Aabha",
""
],
[
"Vyawahare",
"Aditya",
""
],
[
"Joshi",
"Isha",
""
],
[
"Tangsali",
"Rahul",
""
],
[
"Joshi",
"Raviraj",
""
]
] |
new_dataset
| 0.999889 |
2306.13894
|
Masato Kobayashi
|
Kenta Okamoto, Akihisa Nagata, Kyoma Arai, Yusei Nagao, Tatsuki
Nishimura, Kento Hirogaki, Shunya Tanaka, Masato Kobayashi, Tatsuya Sanada,
Masaya Kataoka
|
OUXT Polaris: Autonomous Navigation System for the 2022 Maritime RobotX
Challenge
|
Technical Design Paper of 2022 Maritime RobotX Challenge
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
OUXT-Polaris has been developing an autonomous navigation system by
participating in the Maritime RobotX Challenge 2014, 2016, and 2018. In this
paper, we describe the improvements to the previous vessel system and
indicate the advantages of the improved design. Moreover, we describe our
development method under COVID-19, using simulation and miniature-size hardware,
and the feature components for the next RobotX Challenge.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 07:57:42 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Okamoto",
"Kenta",
""
],
[
"Nagata",
"Akihisa",
""
],
[
"Arai",
"Kyoma",
""
],
[
"Nagao",
"Yusei",
""
],
[
"Nishimura",
"Tatsuki",
""
],
[
"Hirogaki",
"Kento",
""
],
[
"Tanaka",
"Shunya",
""
],
[
"Kobayashi",
"Masato",
""
],
[
"Sanada",
"Tatsuya",
""
],
[
"Kataoka",
"Masaya",
""
]
] |
new_dataset
| 0.997752 |
2306.13908
|
Sungho Suh
|
Yunmin Cho, Lala Shakti Swarup Ray, Kundan Sai Prabhu Thota, Sungho
Suh, Paul Lukowicz
|
ClothFit: Cloth-Human-Attribute Guided Virtual Try-On Network Using 3D
Simulated Dataset
|
Accepted at IEEE International Conference on Image Processing 2023
(ICIP 2023)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online clothing shopping has become increasingly popular, but the high rate
of returns due to size and fit issues has remained a major challenge. To
address this problem, virtual try-on systems have been developed to provide
customers with a more realistic and personalized way to try on clothing. In
this paper, we propose a novel virtual try-on method called ClothFit, which can
predict the draping shape of a garment on a target body based on the actual
size of the garment and human attributes. Unlike existing try-on models,
ClothFit considers the actual body proportions of the person and available
cloth sizes for clothing virtualization, making it more appropriate for current
online apparel outlets. The proposed method utilizes a U-Net-based network
architecture that incorporates cloth and human attributes to guide the
realistic virtual try-on synthesis. Specifically, we extract features from a
cloth image using an auto-encoder and combine them with features from the
user's height, weight, and cloth size. The features are concatenated with the
features from the U-Net encoder, and the U-Net decoder synthesizes the final
virtual try-on image. Our experimental results demonstrate that ClothFit can
significantly improve the existing state-of-the-art methods in terms of
photo-realistic virtual try-on results.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 08:57:36 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Cho",
"Yunmin",
""
],
[
"Ray",
"Lala Shakti Swarup",
""
],
[
"Thota",
"Kundan Sai Prabhu",
""
],
[
"Suh",
"Sungho",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
new_dataset
| 0.999542 |
2306.13922
|
Aviv Weinstein
|
Aviv Weinstein and Yoav Goldberg
|
Unsupervised Mapping of Arguments of Deverbal Nouns to Their
Corresponding Verbal Labels
|
Accepted to Findings of ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deverbal nouns are nominal forms of verbs commonly used in written English
texts to describe events or actions, as well as their arguments. However, many
NLP systems, and in particular pattern-based ones, neglect to handle such
nominalized constructions. The solutions that do exist for handling arguments
of nominalized constructions are based on semantic annotation and require
semantic ontologies, making their applications restricted to a small set of
nouns. We propose to adopt instead a more syntactic approach, which maps the
arguments of deverbal nouns to the universal-dependency relations of the
corresponding verbal construction. We present an unsupervised mechanism --
based on contextualized word representations -- which makes it possible to enrich
universal-dependency trees with dependency arcs denoting arguments of deverbal
nouns, using the same labels as the corresponding verbal cases. By sharing the
same label set as in the verbal case, patterns that were developed for verbs
can be applied without modification but with high accuracy also to the nominal
constructions.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 10:07:01 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Weinstein",
"Aviv",
""
],
[
"Goldberg",
"Yoav",
""
]
] |
new_dataset
| 0.976923 |
2306.13948
|
Jingwei Zuo
|
Jingwei Zuo, Wenbin Li, Michele Baldo and Hakim Hacid
|
Unleashing Realistic Air Quality Forecasting: Introducing the
Ready-to-Use PurpleAirSF Dataset
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Air quality forecasting has garnered significant attention recently, with
data-driven models taking center stage due to advancements in machine learning
and deep learning models. However, researchers face challenges with complex
data acquisition and the lack of open-sourced datasets, hindering efficient
model validation. This paper introduces PurpleAirSF, a comprehensive and easily
accessible dataset collected from the PurpleAir network. With its high temporal
resolution, various air quality measures, and diverse geographical coverage,
this dataset serves as a useful tool for researchers aiming to develop novel
forecasting models, study air pollution patterns, and investigate their impacts
on health and the environment. We present a detailed account of the data
collection and processing methods employed to build PurpleAirSF. Furthermore,
we conduct preliminary experiments using both classic and modern
spatio-temporal forecasting models, thereby establishing a benchmark for future
air quality forecasting tasks.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 12:10:16 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zuo",
"Jingwei",
""
],
[
"Li",
"Wenbin",
""
],
[
"Baldo",
"Michele",
""
],
[
"Hacid",
"Hakim",
""
]
] |
new_dataset
| 0.999819 |
2306.13957
|
Lei Huang
|
Lei Huang, Zheng Yuan, Huihui Yan, Rong Sheng, Linjing Liu, Fuzhou
Wang, Weidun Xie, Nanjun Chen, Fei Huang, Songfang Huang, Ka-Chun Wong,
Yaoyun Zhang
|
DiffDTM: A conditional structure-free framework for bioactive molecules
generation targeted for dual proteins
| null | null | null | null |
cs.LG q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in deep generative models shed light on de novo molecule generation
with desired properties. However, molecule generation targeted for dual protein
targets still faces formidable challenges including protein 3D structure data
requisition for model training, auto-regressive sampling, and model
generalization to unseen targets. Here, we propose DiffDTM, a novel
conditional, structure-free deep generative model based on a diffusion model for
dual-target molecule generation, to address the above issues.
Specifically, DiffDTM receives protein sequences and molecular graphs as inputs
instead of protein and molecular conformations and incorporates an information
fusion module to achieve conditional generation in a one-shot manner. We have
conducted comprehensive multi-view experiments to demonstrate that DiffDTM can
generate drug-like, synthesis-accessible, novel, and high-binding affinity
molecules targeting specific dual proteins, outperforming the state-of-the-art
(SOTA) models in terms of multiple evaluation metrics. Furthermore, we utilized
DiffDTM to generate molecules towards dopamine receptor D2 and
5-hydroxytryptamine receptor 1A as new antipsychotics. The experimental results
indicate that DiffDTM can be easily plugged into unseen dual targets to
generate bioactive molecules, addressing the issues of insufficient active
molecule data for training as well as the need to retrain when
encountering new targets.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 13:08:55 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Huang",
"Lei",
""
],
[
"Yuan",
"Zheng",
""
],
[
"Yan",
"Huihui",
""
],
[
"Sheng",
"Rong",
""
],
[
"Liu",
"Linjing",
""
],
[
"Wang",
"Fuzhou",
""
],
[
"Xie",
"Weidun",
""
],
[
"Chen",
"Nanjun",
""
],
[
"Huang",
"Fei",
""
],
[
"Huang",
"Songfang",
""
],
[
"Wong",
"Ka-Chun",
""
],
[
"Zhang",
"Yaoyun",
""
]
] |
new_dataset
| 0.964041 |
2306.14056
|
Benjamin Kiefer
|
Benjamin Kiefer, Timon H\"ofer, Andreas Zell
|
Stable Yaw Estimation of Boats from the Viewpoint of UAVs and USVs
|
Accepted at ECMR 2023
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Yaw estimation of boats from the viewpoint of unmanned aerial vehicles (UAVs)
and unmanned surface vehicles (USVs) or boats is a crucial task in various
applications such as 3D scene rendering, trajectory prediction, and navigation.
However, the lack of literature on yaw estimation of objects from the viewpoint
of UAVs has motivated us to address this domain. In this paper, we propose a
method based on HyperPosePDF for predicting the orientation of boats in the 6D
space. For that, we use existing datasets, such as PASCAL3D+ and our own
datasets, SeaDronesSee-3D and BOArienT, which we annotated manually. We extend
HyperPosePDF to work in video-based scenarios, such that it yields robust
orientation predictions across time. Naively applying HyperPosePDF on video
data yields single-point predictions, resulting in far-off predictions and
often incorrect symmetric orientations due to unseen or visually different
data. To alleviate this issue, we propose aggregating the probability
distributions of pose predictions, resulting in significantly improved
performance, as shown in our experimental evaluation. Our proposed method could
significantly benefit downstream tasks in marine robotics.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 20:47:37 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Kiefer",
"Benjamin",
""
],
[
"Höfer",
"Timon",
""
],
[
"Zell",
"Andreas",
""
]
] |
new_dataset
| 0.994607 |
2306.14060
|
Liunian Harold Li
|
Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang
|
DesCo: Learning Object Recognition with Rich Language Descriptions
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent development in vision-language approaches has instigated a paradigm
shift in learning visual recognition models from language supervision. These
approaches align objects with language queries (e.g. "a photo of a cat") and
improve the models' adaptability to identify novel objects and domains.
Recently, several studies have attempted to query these models with complex
language expressions that include specifications of fine-grained semantic
details, such as attributes, shapes, textures, and relations. However, simply
incorporating language descriptions as queries does not guarantee accurate
interpretation by the models. In fact, our experiments show that GLIP, the
state-of-the-art vision-language model for object detection, often disregards
contextual information in the language descriptions and instead relies heavily
on detecting objects solely by their names. To tackle the challenges, we
propose a new description-conditioned (DesCo) paradigm of learning object
recognition models with rich language descriptions consisting of two major
innovations: 1) we employ a large language model as a commonsense knowledge
engine to generate rich language descriptions of objects based on object names
and the raw image-text caption; 2) we design context-sensitive queries to
improve the model's ability to decipher intricate nuances embedded within
descriptions and enforce the model to focus on context rather than object names
alone. On two novel object detection benchmarks, LVIS and OmniLabel, under the
zero-shot detection setting, our approach achieves 34.8 APr minival (+9.1) and
29.3 AP (+3.6), respectively, surpassing the prior state-of-the-art models,
GLIP and FIBER, by a large margin.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 21:05:02 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Li",
"Liunian Harold",
""
],
[
"Dou",
"Zi-Yi",
""
],
[
"Peng",
"Nanyun",
""
],
[
"Chang",
"Kai-Wei",
""
]
] |
new_dataset
| 0.999475 |
2306.14067
|
Michael Ogezi
|
Michael Ogezi, Bradley Hauer, Talgat Omarov, Ning Shi, Grzegorz
Kondrak
|
UAlberta at SemEval-2023 Task 1: Context Augmentation and Translation
for Multilingual Visual Word Sense Disambiguation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We describe the systems of the University of Alberta team for the
SemEval-2023 Visual Word Sense Disambiguation (V-WSD) Task. We present a novel
algorithm that leverages glosses retrieved from BabelNet, in combination with
text and image encoders. Furthermore, we compare language-specific encoders
against the application of English encoders to translated texts. As the
contexts given in the task datasets are extremely short, we also experiment
with augmenting these contexts with descriptions generated by a language model.
This yields substantial improvements in accuracy. We describe and evaluate
additional V-WSD methods which use image generation and text-conditioned image
segmentation. Overall, the results of our official submission rank us 18th out of
56 teams. Some of our unofficial results are even better than the official
ones. Our code is publicly available at https://github.com/UAlberta-NLP/v-wsd.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 22:00:06 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Ogezi",
"Michael",
""
],
[
"Hauer",
"Bradley",
""
],
[
"Omarov",
"Talgat",
""
],
[
"Shi",
"Ning",
""
],
[
"Kondrak",
"Grzegorz",
""
]
] |
new_dataset
| 0.999527 |
2306.14070
|
N. Benjamin Erichson
|
Pu Ren, N. Benjamin Erichson, Shashank Subramanian, Omer San, Zarija
Lukic and Michael W. Mahoney
|
SuperBench: A Super-Resolution Benchmark Dataset for Scientific Machine
Learning
| null | null | null | null |
cs.CV eess.IV physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Super-Resolution (SR) techniques aim to enhance data resolution, enabling the
retrieval of finer details, and improving the overall quality and fidelity of
the data representation. There is growing interest in applying SR methods to
complex spatiotemporal systems within the Scientific Machine Learning (SciML)
community, with the hope of accelerating numerical simulations and/or improving
forecasts in weather, climate, and related areas. However, the lack of
standardized benchmark datasets for comparing and validating SR methods hinders
progress and adoption in SciML. To address this, we introduce SuperBench, the
first benchmark dataset featuring high-resolution datasets (up to
$2048\times2048$ dimensions), including data from fluid flows, cosmology, and
weather. Here, we focus on validating spatial SR performance from data-centric
and physics-preserved perspectives, as well as assessing robustness to data
degradation tasks. While deep learning-based SR methods (developed in the
computer vision community) excel on certain tasks, despite relatively limited
prior physics information, we identify limitations of these methods in
accurately capturing intricate fine-scale features and preserving fundamental
physical properties and constraints in scientific data. These shortcomings
highlight the importance and subtlety of incorporating domain knowledge into ML
models. We anticipate that SuperBench will significantly advance SR methods for
scientific tasks.
|
[
{
"version": "v1",
"created": "Sat, 24 Jun 2023 22:39:33 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Ren",
"Pu",
""
],
[
"Erichson",
"N. Benjamin",
""
],
[
"Subramanian",
"Shashank",
""
],
[
"San",
"Omer",
""
],
[
"Lukic",
"Zarija",
""
],
[
"Mahoney",
"Michael W.",
""
]
] |
new_dataset
| 0.999842 |
2306.14116
|
Xian Tao
|
Xian Tao, Zhen Qu, Hengliang Luo, Jianwen Han, Yonghao He, Danfeng
Liu, Chengkan Lv, Fei Shen, Zhengtao Zhang
|
The Second-place Solution for CVPR VISION 23 Challenge Track 1 -- Data
Efficient Defect Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The Vision Challenge Track 1 for Data-Efficient Defect Detection requires
competitors to instance segment 14 industrial inspection datasets in a
data-deficient setting. This report introduces the technical details of the
team Aoi-overfitting-Team for this challenge. Our method focuses on the key
problem of segmentation quality of defect masks in scenarios with limited
training samples. Based on the Hybrid Task Cascade (HTC) instance segmentation
algorithm, we connect the transformer backbone (Swin-B) through composite
connections inspired by CBNetv2 to enhance the baseline results. Additionally,
we propose two model ensemble methods to further enhance the segmentation
effect: one incorporates semantic segmentation into instance segmentation,
while the other employs multi-instance segmentation fusion algorithms. Finally,
using multi-scale training and test-time augmentation (TTA), we achieve an
average mAP@0.50:0.95 of more than 48.49% and an average mAR@0.50:0.95 of
66.71% on the test set of the Data Efficient Defect Detection Challenge. The
code is available at https://github.com/love6tao/Aoi-overfitting-team
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 03:37:02 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Tao",
"Xian",
""
],
[
"Qu",
"Zhen",
""
],
[
"Luo",
"Hengliang",
""
],
[
"Han",
"Jianwen",
""
],
[
"He",
"Yonghao",
""
],
[
"Liu",
"Danfeng",
""
],
[
"Lv",
"Chengkan",
""
],
[
"Shen",
"Fei",
""
],
[
"Zhang",
"Zhengtao",
""
]
] |
new_dataset
| 0.954118 |
2306.14134
|
Kailai Yan
|
Kailai Yan
|
Fine-grained Modulation for Zigbee Codeword Translation
| null | null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Zigbee backscatter systems, tags piggyback information by adding phase
shifts to the RF carriers. The instantaneous-phase shift (IPS) modulation adds
phase shifts by toggling between discrete phases, which is easy to realize and
is widely used in previous systems. However, as the spectrum efficiency of IPS
is poor, it is not suitable for large networks. Thus, frequency-phase shift
(FPS) modulation was proposed to make up this drawback. It adds continuous
phase shifts to the non-productive carriers by toggling between square waves of
different frequencies and has higher spectrum efficiency. In this paper, we
realized IPS modulation and FPS modulation on Zigbee single-tone signals,
respectively. In addition, we creatively proposed to apply FPS modulation to
codeword translation.
We prototype the system using a microchip transmitter, an off-the-shelf FPGA,
and a commodity Zigbee receiver. Through extensive experiments, we proved that
FPS modulated Zigbee transmissions have a bandwidth of 1.8 MHz, which is 2x
lower than that of IPS. In addition, we conducted simulation experiments in
MATLAB and demonstrated that FPS modulation can be used in Zigbee codeword
translation.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 05:54:16 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Yan",
"Kailai",
""
]
] |
new_dataset
| 0.998362 |
2306.14137
|
Yuanzhi Liu
|
Yuanzhi Liu, Yujia Fu, Minghui Qin, Yufeng Xu, Baoxin Xu, Fengdong
Chen, Bart Goossens, Hongwei Yu, Chun Liu, Long Chen, Wei Tao, and Hui Zhao
|
BotanicGarden: A high-quality and large-scale robot navigation dataset
in challenging natural environments
|
Submitted to IEEE RA-L for possible publications
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid developments of mobile robotics and autonomous navigation over the
years are largely empowered by public datasets for testing and upgrading, such
as SLAM and localization tasks. Impressive demos and benchmark results have
arisen, indicating the establishment of a mature technical framework. However,
from the viewpoint of real-world deployments, there are still critical defects
of robustness in challenging environments, especially in large-scale,
GNSS-denied, textural-monotonous, and unstructured scenarios. To meet the
pressing validation demands in such scope, we build a novel challenging robot
navigation dataset in a large botanic garden of more than 48,000 m^2.
Comprehensive sensors are employed, including high-res/rate stereo Gray&RGB
cameras, rotational and forward 3D LiDARs, and low-cost and industrial-grade
IMUs, all of which are well calibrated and accurately hardware-synchronized. An
all-terrain wheeled robot is configured to mount the sensor suite and provide
odometry data. A total of 32 long and short sequences of 2.3 million images are
collected, covering scenes of thick woods, riversides, narrow paths, bridges,
and grasslands that rarely appeared in previous resources. Notably, both
highly-accurate ego-motions and 3D map ground truth are provided, along with
fine-annotated vision semantics. Our goal is to contribute a high-quality
dataset to advance robot navigation and sensor fusion research to a higher
level.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 06:11:51 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Liu",
"Yuanzhi",
""
],
[
"Fu",
"Yujia",
""
],
[
"Qin",
"Minghui",
""
],
[
"Xu",
"Yufeng",
""
],
[
"Xu",
"Baoxin",
""
],
[
"Chen",
"Fengdong",
""
],
[
"Goossens",
"Bart",
""
],
[
"Yu",
"Hongwei",
""
],
[
"Liu",
"Chun",
""
],
[
"Chen",
"Long",
""
],
[
"Tao",
"Wei",
""
],
[
"Zhao",
"Hui",
""
]
] |
new_dataset
| 0.999853 |
2306.14149
|
Xiao Zhang
|
Xiao Zhang, Heqi Zheng, Yuxiang Nie, Heyan Huang, Xian-Ling Mao
|
SciMRC: Multi-perspective Scientific Machine Reading Comprehension
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Scientific machine reading comprehension (SMRC) aims to understand scientific
texts through interactions with humans by given questions. As far as we know,
there is only one dataset focused on exploring full-text scientific machine
reading comprehension. However, the dataset has ignored the fact that different
readers may have different levels of understanding of the text, and only
includes single-perspective question-answer pairs, leading to a lack of
consideration of different perspectives. To tackle the above problem, we
propose a novel multi-perspective SMRC dataset, called SciMRC, which includes
perspectives from beginners, students and experts. Our proposed SciMRC is
constructed from 741 scientific papers and 6,057 question-answer pairs. Each
perspective of beginners, students and experts contains 3,306, 1,800 and 951 QA
pairs, respectively. The extensive experiments on SciMRC by utilizing
pre-trained models suggest the importance of considering perspectives of SMRC,
and demonstrate its challenging nature for machine comprehension.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 07:25:14 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zhang",
"Xiao",
""
],
[
"Zheng",
"Heqi",
""
],
[
"Nie",
"Yuxiang",
""
],
[
"Huang",
"Heyan",
""
],
[
"Mao",
"Xian-Ling",
""
]
] |
new_dataset
| 0.995151 |
2306.14168
|
Chensen Huang
|
Chensen Huang, Guibo Zhu, Guojing Ge, Taihao Li, Jinqiao Wang
|
FastBCSD: Fast and Efficient Neural Network for Binary Code Similarity
Detection
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Binary code similarity detection (BCSD) has various applications, including
but not limited to vulnerability detection, plagiarism detection, and malware
detection. Previous research efforts mainly focus on transforming binary code
to assembly code strings using reverse compilation and then using pre-trained
deep learning models with large parameters to obtain feature representation
vector of binary code. While these models have proven to be effective in
representing binary code, their large parameter size leads to considerable
computational expenses during both training and inference. In this paper, we
present a lightweight neural network, called FastBCSD, that employs a dynamic
instruction vector encoding method and takes only assembly code as input
feature to achieve comparable accuracy to the pre-training models while
reducing the computational resources and time cost.
On the BinaryCorp dataset, our method achieves a similar average MRR score to
the state-of-the-art pre-training-based method (jTrans), while on the
BinaryCorp 3M dataset, our method even outperforms the latest technology by
0.01. Notably, FastBCSD has a much smaller parameter size (13.4M) compared to
jTrans (87.88M), and its latency time is 1/5 of jTrans on NVIDIA GTX 1080Ti.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 08:22:10 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Huang",
"Chensen",
""
],
[
"Zhu",
"Guibo",
""
],
[
"Ge",
"Guojing",
""
],
[
"Li",
"Taihao",
""
],
[
"Wang",
"Jinqiao",
""
]
] |
new_dataset
| 0.99495 |
2306.14205
|
Luca Accorsi
|
Luca Accorsi and Daniele Vigo
|
Routing One Million Customers in a Handful of Minutes
| null | null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we propose a new dataset of Capacitated Vehicle Routing
Problem instances which are up to two orders of magnitude larger than those in
the currently used benchmarks. Although these sizes might not have an immediate
application to real-world logistic scenarios, we believe they could foster
fresh new research efforts on the design of effective and efficient algorithmic
components for routing problems. We provide computational results for such
instances by running FILO2, an adaptation of the FILO algorithm proposed in
Accorsi and Vigo (2021), designed to handle extremely large-scale CVRP
instances. Solutions for such instances are obtained by using a standard
personal computer in a considerably short computing time, thus showing the
effectiveness of the acceleration and pruning techniques already proposed in
FILO. Finally, results of FILO2 on well-known literature instances show that
the newly introduced changes improve the overall scalability of the approach
with respect to the previous FILO design.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 11:09:30 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Accorsi",
"Luca",
""
],
[
"Vigo",
"Daniele",
""
]
] |
new_dataset
| 0.99969 |
2306.14260
|
Yoshiki Ito
|
Yoshiki Ito
|
HOKEM: Human and Object Keypoint-based Extension Module for Human-Object
Interaction Detection
|
Accepted to IEEE ICIP 2023
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human-object interaction (HOI) detection for capturing relationships between
humans and objects is an important task in the semantic understanding of
images. When processing human and object keypoints extracted from an image
using a graph convolutional network (GCN) to detect HOI, it is crucial to
extract appropriate object keypoints regardless of the object type and to
design a GCN that accurately captures the spatial relationships between
keypoints. This paper presents the human and object keypoint-based extension
module (HOKEM) as an easy-to-use extension module to improve the accuracy of
the conventional detection models. The proposed object keypoint extraction
method is simple yet accurately represents the shapes of various objects.
Moreover, the proposed human-object adaptive GCN (HO-AGCN), which introduces
adaptive graph optimization and attention mechanism, accurately captures the
spatial relationships between keypoints. Experiments using the HOI dataset,
V-COCO, showed that HOKEM boosted the accuracy of an appearance-based model by
a large margin.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 14:40:26 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Ito",
"Yoshiki",
""
]
] |
new_dataset
| 0.984526 |
2306.14272
|
Anton Wahrst\"atter
|
Anton Wahrst\"atter, Matthew Solomon, Ben DiFrancesco, Vitalik Buterin
and Davor Svetinovic
|
BaseSAP: Modular Stealth Address Protocol for Programmable Blockchains
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stealth addresses represent an approach to enhancing privacy within public
and distributed blockchains, such as Ethereum and Bitcoin. Stealth address
protocols generate a distinct, randomly generated address for the recipient,
thereby concealing interactions between entities. In this study, we introduce
BaseSAP, an autonomous base-layer protocol for embedding stealth addresses
within the application layer of programmable blockchains. BaseSAP expands upon
previous research to develop a modular protocol for executing unlinkable
transactions on public blockchains. BaseSAP allows for developing additional
stealth address layers using different cryptographic algorithms on top of the
primary implementation, capitalizing on its modularity. To demonstrate the
effectiveness of our proposed protocol, we present simulations of an advanced
Secp256k1-based dual-key stealth address protocol. This protocol is designed on
top of BaseSAP and is deployed on the Goerli and Sepolia test networks as the
first prototype implementation. Furthermore, we provide cost analyses and
underscore potential security ramifications and attack vectors that could
affect the privacy of stealth addresses. Our study reveals the flexibility of
the BaseSAP protocol and offers insight into the broader implications of
stealth address technology.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 15:45:05 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Wahrstätter",
"Anton",
""
],
[
"Solomon",
"Matthew",
""
],
[
"DiFrancesco",
"Ben",
""
],
[
"Buterin",
"Vitalik",
""
],
[
"Svetinovic",
"Davor",
""
]
] |
new_dataset
| 0.998994 |
2306.14292
|
Oleg Lashinin
|
Veronika Ivanova, Oleg Lashinin, Marina Ananyeva and Sergey Kolesnikov
|
RecBaselines2023: a new dataset for choosing baselines for recommender
models
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The number of proposed recommender algorithms continues to grow. The authors
propose new approaches and compare them with existing models, called baselines.
Due to the large number of recommender models, it is difficult to decide
which algorithms to choose as baselines in an article. To solve this problem, we have
collected and published a dataset containing information about the recommender
models used in 903 papers, both as baselines and as proposed approaches. This
dataset can be seen as a typical dataset with interactions between papers and
previously proposed models. In addition, we provide a descriptive analysis of
the dataset and highlight possible challenges to be investigated with the data.
Furthermore, we have conducted extensive experiments using a well-established
methodology to build a good recommender algorithm on the dataset. Our
experiments show that selecting the best baselines for newly proposed
recommender approaches can be cast as a recommendation task and solved by existing
state-of-the-art collaborative filtering models. Finally, we discuss
limitations and future work.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 16:52:37 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Ivanova",
"Veronika",
""
],
[
"Lashinin",
"Oleg",
""
],
[
"Ananyeva",
"Marina",
""
],
[
"Kolesnikov",
"Sergey",
""
]
] |
new_dataset
| 0.999421 |
2306.14342
|
Hao Chen
|
Hao Chen
|
New Euclidean and Hermitian Self-Dual Cyclic Codes with Square-Root-Like
Minimum Distances
|
14 pages. arXiv admin note: substantial text overlap with
arXiv:2306.11423
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Binary self-dual codes with large minimum distances, such as the extended
Hamming code and the Golay code, are fascinating objects in the coding theory.
They are closely related to sporadic simple groups, lattices and invariant
theory. A family of binary self-dual repeated-root cyclic codes with lengths
$n_i$ and minimum distances $d_i \geq \frac{1}{2} \sqrt{n_i+2}$, where $n_i$ goes to
infinity for $i=1,2, \ldots$, was constructed in a paper in IEEE Trans.
Inf. Theory, 2009. In this paper, we construct families of Euclidean self-dual
repeated-root cyclic codes over the field ${\bf F}_{2^s}$, $s \geq 2$, with
lengths $n_i$ and minimum distances at least $\sqrt{2^{s-1}n_i}-2^s$, where the
lengths $n_i$ go to infinity. We also construct families of Hermitian
self-dual repeated-root cyclic codes over the field ${\bf F}_{2^{2s}}$, $s \geq
1$, with lengths $n_i$ and minimum distances at least $\sqrt{n_i/2}$, where the
lengths $n_i$ go to infinity. Our results show that Euclidean and Hermitian
self-dual codes with large automorphism groups and large minimum distances can
always be constructed.
|
[
{
"version": "v1",
"created": "Sun, 25 Jun 2023 21:11:41 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Chen",
"Hao",
""
]
] |
new_dataset
| 0.999116 |
2306.14379
|
Shuntaro Yada
|
Shuntaro Yada, Eiji Aramaki
|
HeaRT: Health Record Timeliner to visualise patients' medical history
from health record text
|
Full evaluation results at:
https://github.com/shuntaroy/heart-evaluation
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Electronic health records (EHRs), which contain patients' medical histories,
tend to be written in freely formatted (unstructured) text because of their
inherently complicated nature. Quickly understanding a patient's history is
challenging and critical because writing styles vary among doctors, which may
even cause clinical incidents. This paper proposes a Health Record Timeliner
system (HeaRT), which visualises patients' clinical histories directly from
natural language text in EHRs. Unlike only a few previous attempts, our system
achieved feasible and practical performance for the first time, by integrating
a state-of-the-art language model that recognises clinical entities (e.g.
diseases, medicines, and time expressions) and their temporal relations from
the raw text in EHRs and radiology reports. By chronologically aligning the
clinical entities to the clinical events extracted from a medical report, this
web-based system visualises them in a Gantt chart-like format. Our novel
evaluation method showed that the proposed system successfully generated
coherent timelines from the two sets of radiology reports describing the same
CT scan but written by different radiologists. Real-world assessments are
planned to address the remaining issues.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 01:53:16 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Yada",
"Shuntaro",
""
],
[
"Aramaki",
"Eiji",
""
]
] |
new_dataset
| 0.996365 |
2306.14399
|
Fengheng Li
|
Yun Guo, Wei Feng, Zheng Zhang, Xiancong Ren, Yaoyu Li, Jingjing Lv,
Xin Zhu, Zhangang Lin, Jingping Shao
|
Mutual Query Network for Multi-Modal Product Image Segmentation
|
Accepted by ICME2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Product image segmentation is vital in e-commerce. Most existing methods
extract the product image foreground only based on the visual modality, making
it difficult to distinguish irrelevant products. As product titles contain
abundant appearance information and provide complementary cues for product
image segmentation, we propose a mutual query network to segment products based
on both visual and linguistic modalities. First, we design a language query
vision module to obtain the response of language description in image areas,
thus aligning the visual and linguistic representations across modalities.
Then, a vision query language module utilizes the correlation between visual
and linguistic modalities to filter the product title and effectively suppress
the content irrelevant to the vision in the title. To promote the research in
this field, we also construct a Multi-Modal Product Segmentation dataset
(MMPS), which contains 30,000 images and corresponding titles. The proposed
method significantly outperforms the state-of-the-art methods on MMPS.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 03:18:38 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Guo",
"Yun",
""
],
[
"Feng",
"Wei",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Ren",
"Xiancong",
""
],
[
"Li",
"Yaoyu",
""
],
[
"Lv",
"Jingjing",
""
],
[
"Zhu",
"Xin",
""
],
[
"Lin",
"Zhangang",
""
],
[
"Shao",
"Jingping",
""
]
] |
new_dataset
| 0.966745 |
2306.14412
|
Chao Zhang
|
Chao Zhang, Shiwei Wu, Sirui Zhao, Tong Xu, Enhong Chen
|
A Solution to CVPR'2023 AQTC Challenge: Video Alignment for Multi-Step
Inference
|
5 pages, 1 figure, technical report for track3 of CVPR 2023 LOVEU
challenge
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Affordance-centric Question-driven Task Completion (AQTC) for Egocentric
Assistant introduces a groundbreaking scenario. In this scenario, by learning
from instructional videos, AI assistants provide users with step-by-step
guidance on operating devices. In this paper, we present a solution for
enhancing video alignment to improve multi-step inference. Specifically, we
first utilize VideoCLIP to generate video-script alignment features.
Afterwards, we ground the question-relevant content in instructional videos.
Then, we reweight the multimodal context to emphasize prominent features.
Finally, we adopt GRU to conduct multi-step inference. Through comprehensive
experiments, we demonstrate the effectiveness and superiority of our method,
which secured the 2nd place in CVPR'2023 AQTC challenge. Our code is available
at https://github.com/zcfinal/LOVEU-CVPR23-AQTC.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 04:19:33 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zhang",
"Chao",
""
],
[
"Wu",
"Shiwei",
""
],
[
"Zhao",
"Sirui",
""
],
[
"Xu",
"Tong",
""
],
[
"Chen",
"Enhong",
""
]
] |
new_dataset
| 0.994311 |
2306.14418
|
Thanh Vu Trong
|
Thanh Trong Vu, Thanh-Dat Do, and Hieu Dinh Vo
|
Context-Encoded Code Change Representation for Automated Commit Message
Generation
|
16 pages
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Changes in source code are an inevitable part of software development. They
are the results of indispensable activities such as fixing bugs or improving
functionality. Descriptions for code changes (commit messages) help people
better understand the changes. However, due to a lack of motivation and time
pressure, writing high-quality commit messages is often neglected.
Several methods have been proposed with the aim of automated commit message
generation.
However, the existing methods are still limited because they only utilise
either the changed code or the changed code combined with surrounding
statements.
This paper proposes a method to represent code changes by combining the
changed code and the unchanged code which have program dependence on the
changed code. This method overcomes the limitations of current representations
while improving the performance of 5/6 of state-of-the-art commit message
generation methods by up to 15% in METEOR, 14% in ROUGE-L, and 10% in BLEU-4.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 04:48:14 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Vu",
"Thanh Trong",
""
],
[
"Do",
"Thanh-Dat",
""
],
[
"Vo",
"Hieu Dinh",
""
]
] |
new_dataset
| 0.992842 |
2306.14425
|
Dongjae Lee
|
Dongjae Lee, Sunwoo Hwang, Changhyeon Kim, Seung Jae Lee, H. Jin Kim
|
Minimally actuated tiltrotor for perching and normal force exertion
|
7 pages, 10 figures, 2023 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS) accepted
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This study presents a new hardware design and control of a minimally actuated
5 control degrees of freedom (CDoF) quadrotor-based tiltrotor. The proposed
tiltrotor possesses several characteristics distinct from those found in
existing works, including: 1) minimal number of actuators for 5 CDoF, 2) large
margin to generate interaction force during aerial physical interaction (APhI),
and 3) no mechanical obstruction in thrust direction rotation. Thanks to these
properties, the proposed tiltrotor is suitable for perching-enabled APhI since
it can hover parallel to an arbitrarily oriented surface and can freely adjust
its thrust direction. To fully control the 5-CDoF of the designed tiltrotor, we
construct an asymptotically stabilizing controller with stability analysis. The
proposed tiltrotor design and controller are validated in experiments where the
first two experiments of $x,y$ position tracking and pitch tracking show
controllability of the added CDoF compared to a conventional quadrotor.
Finally, the last experiment of perching and cart pushing demonstrates the
proposed tiltrotor's applicability to perching-enabled APhI.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 05:31:02 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Lee",
"Dongjae",
""
],
[
"Hwang",
"Sunwoo",
""
],
[
"Kim",
"Changhyeon",
""
],
[
"Lee",
"Seung Jae",
""
],
[
"Kim",
"H. Jin",
""
]
] |
new_dataset
| 0.955557 |
2306.14436
|
Dongfang Zhao
|
Dongfang Zhao
|
Silca: Singular Caching of Homomorphic Encryption for Outsourced
Databases in Cloud Computing
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Ensuring the confidentiality and privacy of sensitive information in cloud
computing and outsourced databases is crucial. Homomorphic encryption (HE)
offers a solution by enabling computations on encrypted data without
decryption, allowing secure outsourcing while maintaining data confidentiality.
However, HE faces performance challenges in query-intensive databases. To
address this, we propose two novel optimizations, Silca and SilcaZ, tailored to
outsourced databases in cloud computing. Silca utilizes a singular caching
technique to reduce computational overhead, while SilcaZ leverages modular
arithmetic operations to ensure the applicability of singular caching for
intensive HE operations. We prove the semantic security of Silca and SilcaZ and
implement them with CKKS and BGV in HElib as MySQL loadable functions.
Extensive experiments with seven real-world datasets demonstrate their superior
performance compared to existing HE schemes, bridging the gap between
theoretical advancements and practical applications in applying HE schemes on
outsourced databases in cloud computing.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 06:05:00 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zhao",
"Dongfang",
""
]
] |
new_dataset
| 0.985328 |
2306.14447
|
Haochen Shi
|
Haochen Shi, Huazhe Xu, Samuel Clarke, Yunzhu Li, Jiajun Wu
|
RoboCook: Long-Horizon Elasto-Plastic Object Manipulation with Diverse
Tools
|
Project page: https://hshi74.github.io/robocook/
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans excel in complex long-horizon soft body manipulation tasks via
flexible tool use: bread baking requires a knife to slice the dough and a
rolling pin to flatten it. Often regarded as a hallmark of human cognition,
tool use in autonomous robots remains limited due to challenges in
understanding tool-object interactions. Here we develop an intelligent robotic
system, RoboCook, which perceives, models, and manipulates elasto-plastic
objects with various tools. RoboCook uses point cloud scene representations,
models tool-object interactions with Graph Neural Networks (GNNs), and combines
tool classification with self-supervised policy learning to devise manipulation
plans. We demonstrate that from just 20 minutes of real-world interaction data
per tool, a general-purpose robot arm can learn complex long-horizon soft
object manipulation tasks, such as making dumplings and alphabet letter
cookies. Extensive evaluations show that RoboCook substantially outperforms
state-of-the-art approaches, exhibits robustness against severe external
disturbances, and demonstrates adaptability to different materials.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 06:30:29 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Shi",
"Haochen",
""
],
[
"Xu",
"Huazhe",
""
],
[
"Clarke",
"Samuel",
""
],
[
"Li",
"Yunzhu",
""
],
[
"Wu",
"Jiajun",
""
]
] |
new_dataset
| 0.998628 |
2306.14457
|
Andrea Bacciu
|
Andrea Bacciu, Giovanni Trappolini, Andrea Santilli, Emanuele
Rodol\`a, Fabrizio Silvestri
|
Fauno: The Italian Large Language Model that will leave you senza
parole!
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents Fauno, the first and largest open-source Italian
conversational Large Language Model (LLM). Our goal with Fauno is to
democratize the study of LLMs in Italian, demonstrating that obtaining a
fine-tuned conversational bot with a single GPU is possible. In addition, we
release a collection of datasets for conversational AI in Italian. The datasets
on which we fine-tuned Fauno include various topics such as general question
answering, computer science, and medical questions. We release our code and
datasets on \url{https://github.com/RSTLess-research/Fauno-Italian-LLM}
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 07:00:38 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Bacciu",
"Andrea",
""
],
[
"Trappolini",
"Giovanni",
""
],
[
"Santilli",
"Andrea",
""
],
[
"Rodolà",
"Emanuele",
""
],
[
"Silvestri",
"Fabrizio",
""
]
] |
new_dataset
| 0.994893 |
2306.14476
|
Sheraz Hassan
|
Sheraz Hassan, Muhammad Tahir, Momin Uppal, Zubair Khalid, Ivan
Gorban, Selim Turki
|
STEF-DHNet: Spatiotemporal External Factors Based Deep Hybrid Network
for Enhanced Long-Term Taxi Demand Prediction
|
8 pages, 3 Figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Accurately predicting the demand for ride-hailing services can result in
significant benefits such as more effective surge pricing strategies, improved
driver positioning, and enhanced customer service. By understanding the demand
fluctuations, companies can anticipate and respond to consumer requirements
more efficiently, leading to increased efficiency and revenue. However,
forecasting demand in a particular region can be challenging, as it is
influenced by several external factors, such as time of day, weather
conditions, and location. Thus, understanding and evaluating these factors is
essential for predicting consumer behavior and adapting to their needs
effectively. Grid-based deep learning approaches have proven effective in
predicting regional taxi demand. However, these models have limitations in
integrating external factors in their spatiotemporal complexity and maintaining
high accuracy over extended time horizons without continuous retraining, which
makes them less suitable for practical and commercial applications. To address
these limitations, this paper introduces STEF-DHNet, a demand prediction model
that combines Convolutional Neural Network (CNN) and Long Short-Term Memory
(LSTM) to integrate external features as spatiotemporal information and capture
their influence on ride-hailing demand. The proposed model is evaluated using a
long-term performance metric called the rolling error, which assesses its
ability to maintain high accuracy over long periods without retraining. The
results show that STEF-DHNet outperforms existing state-of-the-art methods on
three diverse datasets, demonstrating its potential for practical use in
real-world scenarios.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 07:37:50 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Hassan",
"Sheraz",
""
],
[
"Tahir",
"Muhammad",
""
],
[
"Uppal",
"Momin",
""
],
[
"Khalid",
"Zubair",
""
],
[
"Gorban",
"Ivan",
""
],
[
"Turki",
"Selim",
""
]
] |
new_dataset
| 0.950388 |
2306.14490
|
Siyu Mo
|
Jianwei Li, Siyu Mo, Yanfei Shen
|
TaiChi Action Capture and Performance Analysis with Multi-view RGB
Cameras
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in computer vision and deep learning have influenced the
field of sports performance analysis, enabling researchers to track and reconstruct
freely moving humans without any marker attachment. However, there are few
works for vision-based motion capture and intelligent analysis for professional
TaiChi movement. In this paper, we propose a framework for TaiChi performance
capture and analysis with multi-view geometry and artificial intelligence
technology. The main innovative work is as follows: 1) A multi-camera system
suitable for TaiChi motion capture is built and the multi-view TaiChi data is
collected and processed; 2) A combination of traditional visual methods and
implicit neural radiance fields is proposed to achieve sparse 3D skeleton fusion
and dense 3D surface reconstruction; 3) The normalization modeling of movement
sequences is carried out based on motion transfer, so as to realize TaiChi
performance analysis for different groups. We have carried out evaluation
experiments, and the experimental results have shown the efficiency of our
method.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 08:04:24 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Li",
"Jianwei",
""
],
[
"Mo",
"Siyu",
""
],
[
"Shen",
"Yanfei",
""
]
] |
new_dataset
| 0.991617 |
2306.14492
|
XInyu Wang
|
Xinyu Wang and Jianwei Li
|
A Badminton Recognition and Tracking System Based on Context
Multi-feature Fusion
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ball recognition and tracking have traditionally been the main focus of
computer vision researchers as a crucial component of sports video analysis.
The difficulties, such as the small ball size, blurry appearance, quick
movements, and so on, prevent many classic methods from performing well on ball
detection and tracking. In this paper, we present a method for detecting and
tracking badminton balls. According to the characteristics of different ball
speeds, two trajectory clip trackers are designed based on different rules to
capture the correct trajectory of the ball. Meanwhile, combining contextual
information, two rounds of detection from coarse-grained to fine-grained are
used to solve the challenges encountered in badminton detection. The
experimental results show that the precision, recall, and F1-measure of our
method reach 100%, 72.6%, and 84.1%, respectively, on data without occlusion.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 08:07:56 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Wang",
"Xinyu",
""
],
[
"Li",
"Jianwei",
""
]
] |
new_dataset
| 0.997457 |
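A quick arithmetic check on the metrics reported in the abstract above (not from the paper itself): the F1-measure is the harmonic mean of precision and recall, so the stated 100% precision and 72.6% recall do imply roughly 84.1% F1. A minimal Python sketch:

    # F1 as the harmonic mean of precision and recall
    p, r = 1.0, 0.726
    f1 = 2 * p * r / (p + r)
    print(round(f1, 3))  # 0.841, matching the reported 84.1%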
2306.14497
|
Jos\'e Miguel Moreno
|
Jos\'e Miguel Moreno, Srdjan Matic, Narseo Vallina-Rodriguez, Juan
Tapiador
|
Your Code is 0000: An Analysis of the Disposable Phone Numbers Ecosystem
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Short Message Service (SMS) is a popular channel for online service providers
to verify accounts and authenticate users registered to a particular service.
Specialized applications, called Public SMS Gateways (PSGs), offer free
Disposable Phone Numbers (DPNs) that can be used to receive SMS messages. DPNs
allow users to protect their privacy when creating online accounts. However,
they can also be abused for fraudulent activities and to bypass security
mechanisms like Two-Factor Authentication (2FA). In this paper, we perform a
large-scale and longitudinal study of the DPN ecosystem by monitoring 17,141
unique DPNs in 29 PSGs over the course of 12 months. Using a dataset of over
70M messages, we provide an overview of the ecosystem and study the different
services that offer DPNs and their relationships. Next, we build a framework
that (i) identifies and classifies the purpose of an SMS; and (ii) accurately
attributes every message to more than 200 popular Internet services that
require SMS for creating registered accounts. Our results indicate that the DPN
ecosystem is globally used to support fraudulent account creation and access,
and that this issue is ubiquitous and affects all major Internet platforms and
specialized online services.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 08:16:38 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Moreno",
"José Miguel",
""
],
[
"Matic",
"Srdjan",
""
],
[
"Vallina-Rodriguez",
"Narseo",
""
],
[
"Tapiador",
"Juan",
""
]
] |
new_dataset
| 0.99914 |
2306.14546
|
Samy Badreddine
|
Samy Badreddine, Luciano Serafini, Michael Spranger
|
logLTN: Differentiable Fuzzy Logic in the Logarithm Space
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The AI community is increasingly focused on merging logic with deep learning
to create Neuro-Symbolic (NeSy) paradigms and assist neural approaches with
symbolic knowledge. A significant trend in the literature involves integrating
axioms and facts in loss functions by grounding logical symbols with neural
networks and operators with fuzzy semantics. Logic Tensor Networks (LTN) is one
of the main representatives in this category, known for its simplicity,
efficiency, and versatility. However, it has been previously shown that not all
fuzzy operators perform equally when applied in a differentiable setting.
Researchers have proposed several configurations of operators, trading off
between effectiveness, numerical stability, and generalization to different
formulas. This paper presents a configuration of fuzzy operators for grounding
formulas end-to-end in the logarithm space. Our goal is to develop a
configuration that is more effective than previous proposals, able to handle
any formula, and numerically stable. To achieve this, we propose semantics that
are best suited for the logarithm space and introduce novel simplifications and
improvements that are crucial for optimization via gradient-descent. We use LTN
as the framework for our experiments, but the conclusions of our work apply to
any similar NeSy framework. Our findings, both formal and empirical, show that
the proposed configuration outperforms the state-of-the-art and that each of
our modifications is essential in achieving these results.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 09:39:05 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Badreddine",
"Samy",
""
],
[
"Serafini",
"Luciano",
""
],
[
"Spranger",
"Michael",
""
]
] |
new_dataset
| 0.98201 |
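To illustrate the kind of log-space grounding discussed in the abstract above (an assumed, generic example, not the authors' exact logLTN operators): with the product t-norm, a fuzzy conjunction multiplies truth values, so working in the logarithm space turns it into a numerically stable sum of logs.

    import math

    def log_and(log_truths):
        # log(x1 * x2 * ... * xn) = sum of the individual log-truths
        return sum(log_truths)

    truths = [0.9, 0.8, 0.95]
    log_conj = log_and([math.log(t) for t in truths])
    print(math.exp(log_conj))  # ~0.684 == 0.9 * 0.8 * 0.95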
2306.14610
|
Cheng-Yu Hsieh
|
Cheng-Yu Hsieh, Jieyu Zhang, Zixian Ma, Aniruddha Kembhavi, Ranjay
Krishna
|
SugarCrepe: Fixing Hackable Benchmarks for Vision-Language
Compositionality
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the last year alone, a surge of new benchmarks to measure compositional
understanding of vision-language models has permeated the machine learning
ecosystem. Given an image, these benchmarks probe a model's ability to identify
its associated caption amongst a set of compositional distractors.
Surprisingly, we find significant biases in all these benchmarks rendering them
hackable. This hackability is so dire that blind models with no access to the
image outperform state-of-the-art vision-language models. To remedy this
rampant vulnerability, we introduce SugarCrepe, a new benchmark for
vision-language compositionality evaluation. We employ large language models,
instead of rule-based templates used in previous benchmarks, to generate fluent
and sensical hard negatives, and utilize an adversarial refinement mechanism to
maximally reduce biases. We re-evaluate state-of-the-art models and recently
proposed compositionality inducing strategies, and find that their improvements
were hugely overestimated, suggesting that more innovation is needed in this
important direction. We release SugarCrepe and the code for evaluation at:
https://github.com/RAIVNLab/sugar-crepe.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 11:35:22 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Hsieh",
"Cheng-Yu",
""
],
[
"Zhang",
"Jieyu",
""
],
[
"Ma",
"Zixian",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Krishna",
"Ranjay",
""
]
] |
new_dataset
| 0.995187 |
2306.14644
|
Chen Li
|
Chen Li, Xutan Peng, Teng Wang, Yixiao Ge, Mengyang Liu, Xuyuan Xu,
Yexin Wang, Ying Shan
|
PTVD: A Large-Scale Plot-Oriented Multimodal Dataset Based on Television
Dramas
|
19 pages, 10 figures
| null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Art forms such as movies and television (TV) dramas are reflections of the
real world, which have attracted much attention from the multimodal learning
community recently. However, existing corpora in this domain share three
limitations: (1) annotated in a scene-oriented fashion, they ignore the
coherence within plots; (2) their text lacks empathy and seldom mentions
situational context; (3) their video clips fail to cover long-form relationships
due to their short duration. To address these fundamental issues, using 1,106 TV
drama episodes and 24,875 informative plot-focused sentences written by
professionals, with the help of 449 human annotators, we constructed PTVD, the
first plot-oriented multimodal dataset in the TV domain. It is also the first
non-English dataset of its kind. Additionally, PTVD contains more than 26
million bullet screen comments (BSCs), powering large-scale pre-training. Next,
aiming to open-source a strong baseline for follow-up works, we developed the
multimodal algorithm that attacks different cinema/TV modelling problems with a
unified architecture. Extensive experiments on three cognitive-inspired tasks
yielded a number of novel observations (some of them quite
counter-intuitive), further validating the value of PTVD in promoting
multimodal research. The dataset and codes are released at
\url{https://ptvd.github.io/}.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 12:30:20 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Li",
"Chen",
""
],
[
"Peng",
"Xutan",
""
],
[
"Wang",
"Teng",
""
],
[
"Ge",
"Yixiao",
""
],
[
"Liu",
"Mengyang",
""
],
[
"Xu",
"Xuyuan",
""
],
[
"Wang",
"Yexin",
""
],
[
"Shan",
"Ying",
""
]
] |
new_dataset
| 0.999833 |
2306.14649
|
Hoang-Hiep Le
|
Hoang-Hiep Le, Md. Aftab Baig, Wei-Chen Hong, Cheng-Hsien Tsai,
Cheng-Jui Yeh, Fu-Xiang Liang, I-Ting Huang, Wei-Tzu Tsai, Ting-Yin Cheng,
Sourav De, Nan-Yow Chen, Wen-Jay Lee, Ing-Chao Lin, Da-Wei Chang, Darsen D.
Lu
|
CIMulator: A Comprehensive Simulation Platform for Computing-In-Memory
Circuit Macros with Low Bit-Width and Real Memory Materials
| null | null | null | null |
cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a simulation platform, namely CIMulator, for quantifying
the efficacy of various synaptic devices in neuromorphic accelerators for
different neural network architectures. Nonvolatile memory devices, such as
resistive random-access memory, ferroelectric field-effect transistor, and
volatile static random-access memory devices, can be selected as synaptic
devices. A multilayer perceptron and convolutional neural networks (CNNs), such
as LeNet-5, VGG-16, and a custom CNN named C4W-1, are simulated to evaluate the
effects of these synaptic devices on the training and inference outcomes. The
datasets used in the simulations are MNIST, CIFAR-10, and a white blood cell
dataset. By applying batch normalization and appropriate optimizers in the
training phase, neuromorphic systems with very low-bit-width or binary weights
could achieve high pattern recognition rates that approach software-based CNN
accuracy. We also introduce spiking neural networks with RRAM-based synaptic
devices for the recognition of MNIST handwritten digits.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 12:36:07 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Le",
"Hoang-Hiep",
""
],
[
"Baig",
"Md. Aftab",
""
],
[
"Hong",
"Wei-Chen",
""
],
[
"Tsai",
"Cheng-Hsien",
""
],
[
"Yeh",
"Cheng-Jui",
""
],
[
"Liang",
"Fu-Xiang",
""
],
[
"Huang",
"I-Ting",
""
],
[
"Tsai",
"Wei-Tzu",
""
],
[
"Cheng",
"Ting-Yin",
""
],
[
"De",
"Sourav",
""
],
[
"Chen",
"Nan-Yow",
""
],
[
"Lee",
"Wen-Jay",
""
],
[
"Lin",
"Ing-Chao",
""
],
[
"Chang",
"Da-Wei",
""
],
[
"Lu",
"Darsen D.",
""
]
] |
new_dataset
| 0.967606 |
2306.14689
|
Andrej Bal\'a\v{z}
|
Andrej Bal\'a\v{z} and Alessia Petescia
|
Prefix-free graphs and suffix array construction in sublinear space
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
A recent paradigm shift in bioinformatics from a single reference genome to a
pangenome brought with it several graph structures. These graph structures must
implement operations, such as efficient construction from multiple genomes and
read mapping. Read mapping is a well-studied problem in sequential data, and,
together with data structures such as suffix array and Burrows-Wheeler
transform, allows for efficient computation. Attempts to achieve comparatively
high performance on graphs bring many complications since the common data
structures on strings are not easily obtainable for graphs. In this work, we
introduce prefix-free graphs, a novel pangenomic data structure; we show how to
construct them and how to use them to obtain well-known data structures from
stringology in sublinear space, allowing for many efficient operations on
pangenomes.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 13:34:32 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Baláž",
"Andrej",
""
],
[
"Petescia",
"Alessia",
""
]
] |
new_dataset
| 0.997133 |
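For readers unfamiliar with the suffix array mentioned in the abstract above, a naive construction is sketched below as a hypothetical illustration; the paper's contribution is obtaining such structures for pangenome graphs in sublinear space, which this sketch does not attempt.

    def suffix_array(s: str):
        # starting positions of all suffixes, sorted lexicographically
        return sorted(range(len(s)), key=lambda i: s[i:])

    print(suffix_array("banana"))  # [5, 3, 1, 0, 4, 2]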
2306.14709
|
Alina Marcu M.Sc
|
Alexandra Budisteanu, Dragos Costea, Alina Marcu and Marius Leordeanu
|
Self-supervised novel 2D view synthesis of large-scale scenes with
efficient multi-scale voxel carving
|
11 pages, 3 figures
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The task of generating novel views of real scenes is increasingly important
nowadays when AI models become able to create realistic new worlds. In many
practical applications, it is important for novel view synthesis methods to
stay grounded in the physical world as much as possible, while also being able
to imagine it from previously unseen views. While most current methods are
developed and tested in virtual environments with small scenes and no errors in
pose and depth information, we push the boundaries to the real-world domain of
large scales in the new context of UAVs. Our algorithmic contributions are
twofold. First, we manage to stay anchored in the real 3D world by introducing
an efficient multi-scale voxel carving method, which is able to accommodate
significant noise in pose, depth, and illumination variations, while being
able to reconstruct the view of the world from drastically different poses at
test time. Second, our final high-resolution output is efficiently self-trained
on data automatically generated by the voxel carving module, which gives it the
flexibility to adapt efficiently to any scene. We demonstrated the
effectiveness of our method on highly complex and large-scale scenes in real
environments while outperforming the current state-of-the-art. Our code is
publicly available: https://github.com/onorabil/MSVC.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 13:57:05 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Budisteanu",
"Alexandra",
""
],
[
"Costea",
"Dragos",
""
],
[
"Marcu",
"Alina",
""
],
[
"Leordeanu",
"Marius",
""
]
] |
new_dataset
| 0.997432 |
2306.14757
|
Chrysoula Stathakopoulou
|
Chrysoula Stathakopoulou, Michael Wei, Maofan Yin, Hongbo Zhang,
Dahlia Malkhi
|
BBCA-LEDGER: High Throughput Consensus meets Low Latency
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents BBCA-LEDGER, a Byzantine log replication technology for
partially synchronous networks enabling blocks to be broadcast in parallel,
such that each broadcast is finalized independently and instantaneously into an
individual slot in the log. Every finalized broadcast is eventually committed
to the total ordering, so that all network bandwidth has utility in
disseminating blocks. Finalizing log slots in parallel achieves both high
throughput and low latency. BBCA-LEDGER is composed of two principal protocols
that interweave together, a low-latency/high-throughput happy path, and a
high-throughput DAG-based fallback path. The happy path employs a novel
primitive called BBCA, a consistent broadcast enforcing unique slot numbering.
In steady state, BBCA ensures that a transaction can be committed with low
latency, in just 3 network steps. Under network partitions or faults, we
harness recent advances in BFT and build a fallback mechanism on a directed
acyclic graph (DAG) created by BBCA broadcasts. In this manner, BBCA-LEDGER
exhibits the throughput benefits of DAG-based BFT in the face of gaps.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 15:11:50 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Stathakopoulou",
"Chrysoula",
""
],
[
"Wei",
"Michael",
""
],
[
"Yin",
"Maofan",
""
],
[
"Zhang",
"Hongbo",
""
],
[
"Malkhi",
"Dahlia",
""
]
] |
new_dataset
| 0.99649 |
2306.14809
|
Austin Tripp
|
Austin Tripp, Sergio Bacallado, Sukriti Singh, Jos\'e Miguel
Hern\'andez-Lobato
|
Tanimoto Random Features for Scalable Molecular Machine Learning
|
Work in progress: expect updates in the future. Article is 29 pages
with 9 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The Tanimoto coefficient is commonly used to measure the similarity between
molecules represented as discrete fingerprints, either as a distance metric or
a positive definite kernel. While many kernel methods can be accelerated using
random feature approximations, at present there is a lack of such
approximations for the Tanimoto kernel. In this paper we propose two kinds of
novel random features to allow this kernel to scale to large datasets, and in
the process discover a novel extension of the kernel to real vectors. We
theoretically characterize these random features, and provide error bounds on
the spectral norm of the Gram matrix. Experimentally, we show that the random
features proposed in this work are effective at approximating the Tanimoto
coefficient in real-world datasets and that the kernels explored in this work
are useful for molecular property prediction and optimization tasks.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 16:11:11 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Tripp",
"Austin",
""
],
[
"Bacallado",
"Sergio",
""
],
[
"Singh",
"Sukriti",
""
],
[
"Hernández-Lobato",
"José Miguel",
""
]
] |
new_dataset
| 0.989757 |
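The Tanimoto coefficient referenced in the abstract above is a standard similarity measure for binary molecular fingerprints; below is a minimal sketch, assuming fingerprints are given as sets of "on" bit indices (the random-feature approximation itself is the paper's contribution and is not reproduced here).

    def tanimoto(a: set, b: set) -> float:
        inter = len(a & b)
        # |A intersect B| / (|A| + |B| - |A intersect B|); define the empty-empty case as 0.0
        return inter / (len(a) + len(b) - inter) if (a or b) else 0.0

    print(tanimoto({1, 4, 7, 9}, {1, 4, 8}))  # 0.4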
2306.14846
|
Dhruv Shah
|
Dhruv Shah, Ajay Sridhar, Nitish Dashora, Kyle Stachowicz, Kevin
Black, Noriaki Hirose, Sergey Levine
|
ViNT: A Foundation Model for Visual Navigation
| null | null | null | null |
cs.RO cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
General-purpose pre-trained models ("foundation models") have enabled
practitioners to produce generalizable solutions for individual machine
learning problems with datasets that are significantly smaller than those
required for learning from scratch. Such models are typically trained on large
and diverse datasets with weak supervision, consuming much more training data
than is available for any individual downstream application. In this paper, we
describe the Visual Navigation Transformer (ViNT), a foundation model that aims
to bring the success of general-purpose pre-trained models to vision-based
robotic navigation. ViNT is trained with a general goal-reaching objective that
can be used with any navigation dataset, and employs a flexible
Transformer-based architecture to learn navigational affordances and enable
efficient adaptation to a variety of downstream navigational tasks. ViNT is
trained on a number of existing navigation datasets, comprising hundreds of
hours of robotic navigation from a variety of different robotic platforms, and
exhibits positive transfer, outperforming specialist models trained on singular
datasets. ViNT can be augmented with diffusion-based subgoal proposals to
explore novel environments, and can solve kilometer-scale navigation problems
when equipped with long-range heuristics. ViNT can also be adapted to novel
task specifications with a technique inspired by prompt-tuning, where the goal
encoder is replaced by an encoding of another task modality (e.g., GPS
waypoints or routing commands) embedded into the same space of goal tokens.
This flexibility and ability to accommodate a variety of downstream problem
domains establishes ViNT as an effective foundation model for mobile robotics.
For videos, code, and model checkpoints, see our project page at
https://visualnav-transformer.github.io.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 16:57:03 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Shah",
"Dhruv",
""
],
[
"Sridhar",
"Ajay",
""
],
[
"Dashora",
"Nitish",
""
],
[
"Stachowicz",
"Kyle",
""
],
[
"Black",
"Kevin",
""
],
[
"Hirose",
"Noriaki",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.982442 |
2306.14874
|
Nikita Rudin
|
David Hoeller, Nikita Rudin, Dhionis Sako and Marco Hutter
|
ANYmal Parkour: Learning Agile Navigation for Quadrupedal Robots
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Performing agile navigation with four-legged robots is a challenging task due
to the highly dynamic motions, contacts with various parts of the robot, and
the limited field of view of the perception sensors. In this paper, we propose
a fully-learned approach to train such robots and conquer scenarios that are
reminiscent of parkour challenges. The method involves training advanced
locomotion skills for several types of obstacles, such as walking, jumping,
climbing, and crouching, and then using a high-level policy to select and
control those skills across the terrain. Thanks to our hierarchical
formulation, the navigation policy is aware of the capabilities of each skill,
and it will adapt its behavior depending on the scenario at hand. Additionally,
a perception module is trained to reconstruct obstacles from highly occluded
and noisy sensory data and endows the pipeline with scene understanding.
Compared to previous attempts, our method can plan a path for challenging
scenarios without expert demonstration, offline computation, a priori knowledge
of the environment, or taking contacts explicitly into account. While these
modules are trained from simulated data only, our real-world experiments
demonstrate successful transfer on hardware, where the robot navigates and
crosses consecutive challenging obstacles with speeds of up to two meters per
second. The supplementary video can be found on the project website:
https://sites.google.com/leggedrobotics.com/agile-navigation
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 17:43:18 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Hoeller",
"David",
""
],
[
"Rudin",
"Nikita",
""
],
[
"Sako",
"Dhionis",
""
],
[
"Hutter",
"Marco",
""
]
] |
new_dataset
| 0.998617 |
2306.14893
|
Canwen Xu
|
Daya Guo and Canwen Xu and Nan Duan and Jian Yin and Julian McAuley
|
LongCoder: A Long-Range Pre-trained Language Model for Code Completion
|
ICML 2023
| null | null | null |
cs.SE cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a new task for code completion that focuses on
handling long code input and propose a sparse Transformer model, called
LongCoder, to address this task. LongCoder employs a sliding window mechanism
for self-attention and introduces two types of globally accessible tokens -
bridge tokens and memory tokens - to improve performance and efficiency. Bridge
tokens are inserted throughout the input sequence to aggregate local
information and facilitate global interaction, while memory tokens are included
to highlight important statements that may be invoked later and need to be
memorized, such as package imports and definitions of classes, functions, or
structures. We conduct experiments on a newly constructed dataset that contains
longer code context and the publicly available CodeXGLUE benchmark.
Experimental results demonstrate that LongCoder achieves superior performance
on code completion tasks compared to previous models while maintaining
comparable efficiency in terms of computational resources during inference. All
the codes and data are available at https://github.com/microsoft/CodeBERT.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 17:59:24 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Guo",
"Daya",
""
],
[
"Xu",
"Canwen",
""
],
[
"Duan",
"Nan",
""
],
[
"Yin",
"Jian",
""
],
[
"McAuley",
"Julian",
""
]
] |
new_dataset
| 0.999068 |
2306.14896
|
Ankit Goyal
|
Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, Dieter Fox
|
RVT: Robotic View Transformer for 3D Object Manipulation
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For 3D object manipulation, methods that build an explicit 3D representation
perform better than those relying only on camera images. But using explicit 3D
representations like voxels comes at a large computing cost, adversely affecting
scalability. In this work, we propose RVT, a multi-view transformer for 3D
manipulation that is both scalable and accurate. Some key features of RVT are
an attention mechanism to aggregate information across views and re-rendering
of the camera input from virtual views around the robot workspace. In
simulations, we find that a single RVT model works well across 18 RLBench tasks
with 249 task variations, achieving 26% higher relative success than the
existing state-of-the-art method (PerAct). It also trains 36X faster than
PerAct for achieving the same performance and achieves 2.3X the inference speed
of PerAct. Further, RVT can perform a variety of manipulation tasks in the real
world with just a few ($\sim$10) demonstrations per task. Visual results, code,
and trained model are provided at https://robotic-view-transformer.github.io/.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 17:59:31 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Goyal",
"Ankit",
""
],
[
"Xu",
"Jie",
""
],
[
"Guo",
"Yijie",
""
],
[
"Blukis",
"Valts",
""
],
[
"Chao",
"Yu-Wei",
""
],
[
"Fox",
"Dieter",
""
]
] |
new_dataset
| 0.996614 |
2306.14899
|
Jingkang Yang
|
Binzhu Xie, Sicheng Zhang, Zitang Zhou, Bo Li, Yuanhan Zhang, Jack
Hessel, Jingkang Yang, Ziwei Liu
|
FunQA: Towards Surprising Video Comprehension
|
Ask VLMs about humor, creation, and magics. Project Page:
https://funqa-benchmark.github.io/ Codebase:
https://github.com/Jingkang50/FunQA
| null | null | null |
cs.CV cs.AI cs.CL cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surprising videos, e.g., funny clips, creative performances, or visual
illusions, attract significant attention. Enjoyment of these videos is not
simply a response to visual stimuli; rather, it hinges on the human capacity to
understand (and appreciate) commonsense violations depicted in these videos. We
introduce FunQA, a challenging video question answering (QA) dataset
specifically designed to evaluate and enhance the depth of video reasoning
based on counter-intuitive and fun videos. Unlike most video QA benchmarks
which focus on less surprising contexts, e.g., cooking or instructional videos,
FunQA covers three previously unexplored types of surprising videos: 1)
HumorQA, 2) CreativeQA, and 3) MagicQA. For each subset, we establish rigorous
QA tasks designed to assess the model's capability in counter-intuitive
timestamp localization, detailed video description, and reasoning around
counter-intuitiveness. We also pose higher-level tasks, such as attributing a
fitting and vivid title to the video, and scoring the video creativity. In
total, the FunQA benchmark consists of 312K free-text QA pairs derived from
4.3K video clips, spanning a total of 24 video hours. Extensive experiments
with existing VideoQA models reveal significant performance gaps for the FunQA
videos across spatial-temporal reasoning, visual-centered reasoning, and
free-text generation.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 17:59:55 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Xie",
"Binzhu",
""
],
[
"Zhang",
"Sicheng",
""
],
[
"Zhou",
"Zitang",
""
],
[
"Li",
"Bo",
""
],
[
"Zhang",
"Yuanhan",
""
],
[
"Hessel",
"Jack",
""
],
[
"Yang",
"Jingkang",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.999539 |
2106.07400
|
Clara Meister
|
Clara Meister, Martina Forster, Ryan Cotterell
|
Determinantal Beam Search
| null |
Proceedings of ACL-IJCNLP 2021
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Beam search is a go-to strategy for decoding neural sequence models. The
algorithm can naturally be viewed as a subset optimization problem, albeit one
where the corresponding set function does not reflect interactions between
candidates. Empirically, this leads to sets often exhibiting high overlap,
e.g., strings may differ by only a single word. Yet in use-cases that call for
multiple solutions, a diverse or representative set is often desired. To
address this issue, we propose a reformulation of beam search, which we call
determinantal beam search. Determinantal beam search has a natural relationship
to determinantal point processes (DPPs), models over sets that inherently
encode intra-set interactions. By posing iterations in beam search as a series
of subdeterminant maximization problems, we can turn the algorithm into a
diverse subset selection process. In a case study, we use the string
subsequence kernel to explicitly encourage n-gram coverage in text generated
from a sequence model. We observe that our algorithm offers competitive
performance against other diverse set generation strategies in the context of
language generation, while providing a more general approach to optimizing for
diversity.
|
[
{
"version": "v1",
"created": "Mon, 14 Jun 2021 13:01:46 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Jun 2021 11:50:04 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Jun 2021 07:09:16 GMT"
},
{
"version": "v4",
"created": "Fri, 23 Jun 2023 05:52:22 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Meister",
"Clara",
""
],
[
"Forster",
"Martina",
""
],
[
"Cotterell",
"Ryan",
""
]
] |
new_dataset
| 0.994748 |
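As background for the DPP connection in the abstract above (an assumed illustration, not the authors' decoding code): a determinantal point process scores a candidate subset by the determinant of the corresponding kernel submatrix, so near-duplicate candidates are penalized.

    import numpy as np

    def dpp_score(L: np.ndarray, subset: list) -> float:
        # unnormalized DPP score: det of the kernel restricted to the subset
        idx = np.ix_(subset, subset)
        return float(np.linalg.det(L[idx]))

    # hypothetical 3-candidate similarity kernel
    L = np.array([[1.0, 0.9, 0.1],
                  [0.9, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
    print(dpp_score(L, [0, 1]))  # 0.19 -- two near-duplicates score low
    print(dpp_score(L, [0, 2]))  # 0.99 -- a diverse pair scores higher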
2206.00789
|
Ali Raza
|
Ali Raza (1), Thomas Unger (1), Matthew Boyd (3), Eric Munson (1),
Parul Sohal (1), Ulrich Drepper (2), Richard Jones (2), Daniel Bristot de
Oliveira (2), Larry Woodman (2), Renato Mancuso (1), Jonathan Appavoo (1) and
Orran Krieger (1) ((1) Boston University, (2) Red Hat, (3) MIT CSAIL)
|
Unikernel Linux (UKL)
|
Added more results in the evaluation section. Improved overall
writing and added diagrams to explain the architecture
|
Proceedings of the Eighteenth European Conference on Computer
Systems (EuroSys 23), May 2023, Pages 590 - 605
|
10.1145/3552326.3587458
| null |
cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents Unikernel Linux (UKL), a path toward integrating
unikernel optimization techniques in Linux, a general purpose operating system.
UKL adds a configuration option to Linux allowing for a single, optimized
process to link with the kernel directly, and run at supervisor privilege. This
UKL process does not require application source code modification, only a
re-link with our, slightly modified, Linux kernel and glibc. Unmodified
applications show modest performance gains out of the box, and developers can
further optimize applications for more significant gains (e.g. 26% throughput
improvement for Redis). UKL retains support for co-running multiple user level
processes capable of communicating with the UKL process using standard IPC. UKL
preserves Linux's battle-tested codebase, community, and ecosystem of tools,
applications, and hardware support. UKL runs both on bare-metal and virtual
servers and supports multi-core execution. The changes to the Linux kernel are
modest (1250 LOC).
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 22:45:12 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 19:13:54 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Raza",
"Ali",
"",
"Boston University"
],
[
"Unger",
"Thomas",
"",
"Boston University"
],
[
"Boyd",
"Matthew",
"",
"MIT CSAIL"
],
[
"Munson",
"Eric",
"",
"Boston University"
],
[
"Sohal",
"Parul",
"",
"Boston University"
],
[
"Drepper",
"Ulrich",
"",
"Red Hat"
],
[
"Jones",
"Richard",
"",
"Red Hat"
],
[
"de Oliveira",
"Daniel Bristot",
"",
"Red Hat"
],
[
"Woodman",
"Larry",
"",
"Red Hat"
],
[
"Mancuso",
"Renato",
"",
"Boston University"
],
[
"Appavoo",
"Jonathan",
"",
"Boston University"
],
[
"Krieger",
"Orran",
"",
"Boston University"
]
] |
new_dataset
| 0.997652 |
2209.15040
|
Marcelo Orenes-Vera
|
Marcelo Orenes-Vera, Ilya Sharapov, Robert Schreiber, Mathias
Jacquelin, Philippe Vandermersch, Sharan Chetlur
|
Wafer-Scale Fast Fourier Transforms
| null |
Proceedings of the 37th International Conference on Supercomputing
2023
|
10.1145/3577193.3593708
| null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
We have implemented fast Fourier transforms for one, two, and
three-dimensional arrays on the Cerebras CS-2, a system whose memory and
processing elements reside on a single silicon wafer. The wafer-scale engine
(WSE) encompasses a two-dimensional mesh of roughly 850,000 processing elements
(PEs) with fast local memory and equally fast nearest-neighbor
interconnections.
Our wafer-scale FFT (wsFFT) parallelizes an $n^3$ problem with up to $n^2$
PEs. At this point a PE processes only a single vector of the 3D domain (known
as a pencil) per superstep, where each of the three supersteps performs FFT
along one of the three axes of the input array. Between supersteps, wsFFT
redistributes (transposes) the data to bring all elements of each
one-dimensional pencil being transformed into the memory of a single PE. Each
redistribution causes an all-to-all communication along one of the mesh
dimensions. Given the level of parallelism, the size of the messages
transmitted between pairs of PEs can be as small as a single word. In theory, a
mesh is not ideal for all-to-all communication due to its limited bisection
bandwidth. However, the mesh interconnecting PEs on the WSE lies entirely
on-wafer and achieves nearly peak bandwidth even with tiny messages.
This high efficiency on fine-grained communication allows wsFFT to achieve
unprecedented levels of parallelism and performance. We analyse in detail the
computation and communication time, as well as the weak and strong scaling,
using both FP16 and FP32 precision. With 32-bit arithmetic on the CS-2, we
achieve 959 microseconds for 3D FFT of a $512^3$ complex input array using a
512x512 subgrid of the on-wafer PEs. This is the largest ever parallelization
for this problem size and the first implementation that breaks the millisecond
barrier.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 18:25:32 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Orenes-Vera",
"Marcelo",
""
],
[
"Sharapov",
"Ilya",
""
],
[
"Schreiber",
"Robert",
""
],
[
"Jacquelin",
"Mathias",
""
],
[
"Vandermersch",
"Philippe",
""
],
[
"Chetlur",
"Sharan",
""
]
] |
new_dataset
| 0.95477 |
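The pencil decomposition described in the abstract above rests on a standard fact: a 3D FFT can be computed as three passes of 1D FFTs, one along each axis. A small NumPy sketch of that fact (not the wafer-scale implementation):

    import numpy as np

    a = np.random.rand(8, 8, 8) + 1j * np.random.rand(8, 8, 8)
    step = np.fft.fft(a, axis=0)       # transform along x "pencils"
    step = np.fft.fft(step, axis=1)    # then along y
    step = np.fft.fft(step, axis=2)    # then along z
    print(np.allclose(step, np.fft.fftn(a)))  # True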
2210.09936
|
Th\'eo Pierron
|
Thomas Bellitto, Nicolas Bousquet, Adam Kabela, Th\'eo Pierron
|
The smallest 5-chromatic tournament
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A coloring of a digraph is a partition of its vertex set such that each class
induces a digraph with no directed cycles. A digraph is $k$-chromatic if $k$ is
the minimum number of classes in such a partition, and a digraph is oriented if
there is at most one arc between each pair of vertices. Clearly, the smallest
$k$-chromatic digraph is the complete digraph on $k$ vertices, but determining
the order of the smallest $k$-chromatic oriented graphs is a challenging
problem. It is known that the smallest $2$-, $3$- and $4$-chromatic oriented
graphs have $3$, $7$ and $11$ vertices, respectively. In 1994, Neumann-Lara
conjectured that a smallest $5$-chromatic oriented graph has $17$ vertices. We
solve this conjecture and show that the correct order is $19$.
|
[
{
"version": "v1",
"created": "Tue, 18 Oct 2022 15:38:10 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jun 2023 11:23:33 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Bellitto",
"Thomas",
""
],
[
"Bousquet",
"Nicolas",
""
],
[
"Kabela",
"Adam",
""
],
[
"Pierron",
"Théo",
""
]
] |
new_dataset
| 0.998094 |
2211.08604
|
Jiho Choi
|
Jiho Choi, Junghoon Park, Woocheol Kim, Jin-Hyeok Park, Yumin Suh,
Minchang Sung
|
PU GNN: Chargeback Fraud Detection in P2E MMORPGs via Graph Attention
Networks with Imbalanced PU Labels
|
ECML PKDD 2023 (Applied Data Science Track)
| null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The recent advent of play-to-earn (P2E) systems in massively multiplayer
online role-playing games (MMORPGs) has made in-game goods interchangeable with
real-world values more than ever before. The goods in the P2E MMORPGs can be
directly exchanged with cryptocurrencies such as Bitcoin, Ethereum, or Klaytn
via blockchain networks. Unlike traditional in-game goods, once P2E goods have been
written to the blockchain, they cannot be restored by the game operation
teams, even in cases of chargeback fraud such as payment fraud, cancellation, or
refund. To tackle the problem, we propose a novel chargeback fraud prediction
method, PU GNN, which leverages graph attention networks with PU loss to
capture both the players' in-game behavior with P2E token transaction patterns.
With the adoption of modified GraphSMOTE, the proposed model handles the
imbalanced distribution of labels in chargeback fraud datasets. The conducted
experiments on three real-world P2E MMORPG datasets demonstrate that PU GNN
achieves superior performance over previously suggested methods.
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 01:26:57 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Dec 2022 15:42:42 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Dec 2022 09:15:04 GMT"
},
{
"version": "v4",
"created": "Wed, 11 Jan 2023 15:57:00 GMT"
},
{
"version": "v5",
"created": "Mon, 3 Apr 2023 09:39:15 GMT"
},
{
"version": "v6",
"created": "Fri, 28 Apr 2023 02:46:12 GMT"
},
{
"version": "v7",
"created": "Fri, 23 Jun 2023 05:01:32 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Choi",
"Jiho",
""
],
[
"Park",
"Junghoon",
""
],
[
"Kim",
"Woocheol",
""
],
[
"Park",
"Jin-Hyeok",
""
],
[
"Suh",
"Yumin",
""
],
[
"Sung",
"Minchang",
""
]
] |
new_dataset
| 0.996997 |
2211.08992
|
Sourya Dey
|
Sourya Dey, Eric Davis
|
DLKoopman: A deep learning software package for Koopman theory
| null |
In Proceedings of The 5th Annual Learning for Dynamics and Control
Conference, volume 211 of PMLR, pages 1467-1479. Jun 2023
| null | null |
cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
We present DLKoopman -- a software package for Koopman theory that uses deep
learning to learn an encoding of a nonlinear dynamical system into a linear
space, while simultaneously learning the linear dynamics. While several
previous efforts have either restricted the ability to learn encodings or been
bespoke tools designed for specific systems, DLKoopman is a generalized tool
that can be applied to data-driven learning and optimization of any dynamical
system. It can either be trained on data from individual states (snapshots) of
a system and used to predict its unknown states, or trained on data from
trajectories of a system and used to predict unknown trajectories for new
initial states. DLKoopman is available on the Python Package Index (PyPI) as
'dlkoopman', and includes extensive documentation and tutorials. Additional
contributions of the package include a novel metric called Average Normalized
Absolute Error for evaluating performance, and a ready-to-use hyperparameter
search module for improving performance.
|
[
{
"version": "v1",
"created": "Tue, 15 Nov 2022 18:45:51 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jun 2023 17:10:50 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Dey",
"Sourya",
""
],
[
"Davis",
"Eric",
""
]
] |
new_dataset
| 0.99821 |
2212.07648
|
Taotao Zhou
|
Taotao Zhou, Kai He, Di Wu, Teng Xu, Qixuan Zhang, Kuixiang Shao,
Wenzheng Chen, Lan Xu, Jingyi Yu
|
Relightable Neural Human Assets from Multi-view Gradient Illuminations
|
Project page: https://miaoing.github.io/RNHA
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human modeling and relighting are two fundamental problems in computer vision
and graphics, where high-quality datasets can largely facilitate related
research. However, most existing human datasets only provide multi-view human
images captured under the same illumination. Although valuable for modeling
tasks, they are not readily used in relighting problems. To promote research in
both fields, in this paper, we present UltraStage, a new 3D human dataset that
contains more than 2,000 high-quality human assets captured under both
multi-view and multi-illumination settings. Specifically, for each example, we
provide 32 surrounding views illuminated with one white light and two gradient
illuminations. In addition to regular multi-view images, gradient illuminations
help recover detailed surface normal and spatially-varying material maps,
enabling various relighting applications. Inspired by recent advances in neural
representation, we further interpret each example into a neural human asset
which allows novel view synthesis under arbitrary lighting conditions. We show
our neural human assets can achieve extremely high capture performance and are
capable of representing fine details such as facial wrinkles and cloth folds.
We also validate UltraStage in single image relighting tasks, training neural
networks with virtual relighted data from neural assets and demonstrating
realistic rendering improvements over prior art. UltraStage will be publicly
available to the community to stimulate significant future developments in
various human modeling and rendering tasks. The dataset is available at
https://miaoing.github.io/RNHA.
|
[
{
"version": "v1",
"created": "Thu, 15 Dec 2022 08:06:03 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2022 08:51:53 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Jun 2023 07:50:16 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Zhou",
"Taotao",
""
],
[
"He",
"Kai",
""
],
[
"Wu",
"Di",
""
],
[
"Xu",
"Teng",
""
],
[
"Zhang",
"Qixuan",
""
],
[
"Shao",
"Kuixiang",
""
],
[
"Chen",
"Wenzheng",
""
],
[
"Xu",
"Lan",
""
],
[
"Yu",
"Jingyi",
""
]
] |
new_dataset
| 0.983095 |
2301.06601
|
Tasos Spiliotopoulos
|
Karolis Zilius, Tasos Spiliotopoulos, Aad van Moorsel
|
A Dataset of Coordinated Cryptocurrency-Related Social Media Campaigns
|
Camera-ready version for the ICWSM 2023 Conference. This paper
describes the dataset available at https://zenodo.org/record/7813450
|
Proceedings of the International AAAI Conference on Web and Social
Media (ICWSM 2023), 17(1), 1112-1121
|
10.1609/icwsm.v17i1.22219
| null |
cs.HC cs.CR cs.CY cs.IR cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The rise in adoption of cryptoassets has brought many new and inexperienced
investors in the cryptocurrency space. These investors can be disproportionately
influenced by information they receive online, and particularly from social
media. This paper presents a dataset of crypto-related bounty events and the
users that participate in them. These events coordinate social media campaigns
to create artificial "hype" around a crypto project in order to influence the
price of its token. The dataset consists of information about 15.8K cross-media
bounty events, 185K participants, 10M forum comments and 82M social media URLs
collected from the Bounties(Altcoins) subforum of the BitcoinTalk online forum
from May 2014 to December 2022. We describe the data collection and the data
processing methods employed and we present a basic characterization of the
dataset. Furthermore, we discuss potential research opportunities afforded by
the dataset across many disciplines and we highlight potential novel insights
into how the cryptocurrency industry operates and how it interacts with its
audience.
|
[
{
"version": "v1",
"created": "Mon, 16 Jan 2023 20:37:29 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 13:54:31 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Jun 2023 13:38:33 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Zilius",
"Karolis",
""
],
[
"Spiliotopoulos",
"Tasos",
""
],
[
"van Moorsel",
"Aad",
""
]
] |
new_dataset
| 0.999792 |
2302.05981
|
Sebastian Risi
|
Shyam Sudhakaran, Miguel Gonz\'alez-Duque, Claire Glanois, Matthias
Freiberger, Elias Najarro, Sebastian Risi
|
MarioGPT: Open-Ended Text2Level Generation through Large Language Models
| null | null | null | null |
cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Procedural Content Generation (PCG) algorithms provide a technique to
generate complex and diverse environments in an automated way. However, while
generating content with PCG methods is often straightforward, generating
meaningful content that reflects specific intentions and constraints remains
challenging. Furthermore, many PCG algorithms lack the ability to generate
content in an open-ended manner. Recently, Large Language Models (LLMs) have
been shown to be incredibly effective in many diverse domains. These trained LLMs
can be fine-tuned, re-using information and accelerating training for new
tasks. In this work, we introduce MarioGPT, a fine-tuned GPT2 model trained to
generate tile-based game levels, in our case Super Mario Bros levels. We show
that MarioGPT can not only generate diverse levels, but can be text-prompted
for controllable level generation, addressing one of the key challenges of
current PCG techniques. As far as we know, MarioGPT is the first text-to-level
model. We also combine MarioGPT with novelty search, enabling it to generate
diverse levels with varying play-style dynamics (i.e. player paths). This
combination allows for the open-ended generation of an increasingly diverse
range of content.
|
[
{
"version": "v1",
"created": "Sun, 12 Feb 2023 19:12:24 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 21:06:28 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Sudhakaran",
"Shyam",
""
],
[
"González-Duque",
"Miguel",
""
],
[
"Glanois",
"Claire",
""
],
[
"Freiberger",
"Matthias",
""
],
[
"Najarro",
"Elias",
""
],
[
"Risi",
"Sebastian",
""
]
] |
new_dataset
| 0.990786 |
2303.01933
|
Meysam Basiri Prof
|
Teodoro Dias, Meysam Basiri
|
BogieCopter: A Multi-Modal Aerial-Ground Vehicle for Long-Endurance
Inspection Applications
|
This paper has been accepted for publication at the IEEE
International Conference on Robotics and Automation (ICRA), London, 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The use of Micro Aerial Vehicles (MAVs) for inspection and surveillance
missions has proved to be extremely useful, however, their usability is
negatively impacted by the large power requirements and the limited operating
time. This work describes the design and development of a novel hybrid
aerial-ground vehicle, enabling multi-modal mobility and long operating time,
suitable for long-endurance inspection and monitoring applications. The design
consists of a MAV with two tiltable axles and four independent passive wheels,
allowing it to fly, approach, land and move on flat and inclined surfaces,
while using the same set of actuators for all modes of locomotion. In
comparison to existing multi-modal designs with passive wheels, the proposed
design enables a higher ground locomotion efficiency, provides a higher payload
capacity, and presents one of the lowest mass increases due to the ground
actuation mechanism. The vehicle's performance is evaluated through a series of
real experiments, demonstrating its flying, ground locomotion and wall-climbing
capabilities, and the energy consumption for all modes of locomotion is
evaluated.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 14:03:42 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jun 2023 09:49:41 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Dias",
"Teodoro",
""
],
[
"Basiri",
"Meysam",
""
]
] |
new_dataset
| 0.963744 |
2304.13460
|
Robin Ferede
|
Robin Ferede, Guido C.H.E. de Croon, Christophe De Wagter, Dario Izzo
|
End-to-end Neural Network Based Quadcopter control
|
11 pages, 16 figures
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Developing optimal controllers for aggressive high-speed quadcopter flight
poses significant challenges in robotics. Recent trends in the field involve
utilizing neural network controllers trained through supervised or
reinforcement learning. However, the sim-to-real transfer introduces a reality
gap, requiring the use of robust inner loop controllers during real flights,
which limits the network's control authority and flight performance. In this
paper, we investigate, for the first time, an end-to-end neural network
controller, addressing the reality gap issue without being restricted by an
inner-loop controller. The networks, referred to as G\&CNets, are trained to
learn an energy-optimal policy mapping the quadcopter's state to rpm commands
using an optimal trajectory dataset. In hover-to-hover flights, we identified
the unmodeled moments as a significant contributor to the reality gap. To
mitigate this, we propose an adaptive control strategy that works by learning
from optimal trajectories of a system affected by constant external pitch, roll
and yaw moments. In real test flights, this model mismatch is estimated onboard
and fed to the network to obtain the optimal rpm command. We demonstrate the
effectiveness of our method by performing energy-optimal hover-to-hover flights
with and without moment feedback. Finally, we compare the adaptive controller
to a state-of-the-art differential-flatness-based controller in a consecutive
waypoint flight and demonstrate the advantages of our method in terms of energy
optimality and robustness.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 11:32:34 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 12:51:28 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Ferede",
"Robin",
""
],
[
"de Croon",
"Guido C. H. E.",
""
],
[
"De Wagter",
"Christophe",
""
],
[
"Izzo",
"Dario",
""
]
] |
new_dataset
| 0.997162 |
2304.14676
|
Yuxiang Lu
|
Yuxiang Lu and Syed Ali Jafar
|
Quantum Cross Subspace Alignment Codes via the $N$-sum Box Abstraction
|
arXiv admin note: substantial text overlap with arXiv:2304.07561
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-subspace alignment (CSA) codes are used in various private information
retrieval (PIR) schemes (e.g., with secure storage) and in secure distributed
batch matrix multiplication (SDBMM). Using a recently developed $N$-sum box
abstraction of a quantum multiple-access channel (QMAC), we translate CSA
schemes over classical multiple-access channels into efficient quantum CSA
schemes over a QMAC, achieving maximal superdense coding gain. Because of the
$N$-sum box abstraction, the underlying problem of coding to exploit quantum
entanglements for CSA schemes becomes conceptually equivalent to that of
designing a channel matrix for a MIMO MAC subject to given structural
constraints imposed by the $N$-sum box abstraction, such that the resulting
MIMO MAC is able to implement the functionality of a CSA scheme
(encoding/decoding) over-the-air. Applications include Quantum PIR with secure
and MDS-coded storage, as well as Quantum SDBMM.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 08:07:10 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Lu",
"Yuxiang",
""
],
[
"Jafar",
"Syed Ali",
""
]
] |
new_dataset
| 0.989227 |
2306.10159
|
Md Zahid Hasan
|
Md Zahid Hasan, Jiajing Chen, Jiyang Wang, Mohammed Shaiqur Rahman,
Ameya Joshi, Senem Velipasalar, Chinmay Hegde, Anuj Sharma, Soumik Sarkar
|
Vision-Language Models can Identify Distracted Driver Behavior from
Naturalistic Videos
|
15 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing distraction-causing activities in real-world driving
scenarios is critical for ensuring the safety and reliability of both drivers
and pedestrians on the roadways. Conventional computer vision techniques are
typically data-intensive and require a large volume of annotated training data
to detect and classify various distracted driving behaviors, thereby limiting
their efficiency and scalability. We aim to develop a generalized framework
that showcases robust performance with access to limited or no annotated
training data. Recently, vision-language models have offered large-scale
visual-textual pretraining that can be adapted to task-specific learning like
distracted driving activity recognition. Vision-language pretraining models,
such as CLIP, have shown significant promise in learning natural
language-guided visual representations. This paper proposes a CLIP-based driver
activity recognition approach that identifies driver distraction from
naturalistic driving images and videos. CLIP's vision embedding offers
zero-shot transfer and task-based finetuning, which can classify distracted
activities from driving video data. Our results show that this framework offers
state-of-the-art performance on zero-shot transfer and video-based CLIP for
predicting the driver's state on two public datasets. We propose both
frame-based and video-based frameworks developed on top of CLIP's visual
representation for the distracted driving detection and classification tasks,
and report the results.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 20:02:51 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 23:11:43 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Hasan",
"Md Zahid",
""
],
[
"Chen",
"Jiajing",
""
],
[
"Wang",
"Jiyang",
""
],
[
"Rahman",
"Mohammed Shaiqur",
""
],
[
"Joshi",
"Ameya",
""
],
[
"Velipasalar",
"Senem",
""
],
[
"Hegde",
"Chinmay",
""
],
[
"Sharma",
"Anuj",
""
],
[
"Sarkar",
"Soumik",
""
]
] |
new_dataset
| 0.983498 |
2306.12802
|
Thanh Lam Hoang
|
Hoang Thanh Lam, Marco Luca Sbodio, Marcos Mart\'inez Galindo,
Mykhaylo Zayats, Ra\'ul Fern\'andez-D\'iaz, V\'ictor Valls, Gabriele Picco,
Cesar Berrospi Ramis, Vanessa L\'opez
|
Otter-Knowledge: benchmarks of multimodal knowledge graph representation
learning from different sources for drug discovery
| null | null | null | null |
cs.LG cs.AI q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Recent research in representation learning utilizes large databases of
proteins or molecules to acquire knowledge of drug and protein structures
through unsupervised learning techniques. These pre-trained representations
have proven to significantly enhance the accuracy of subsequent tasks, such as
predicting the affinity between drugs and target proteins. In this study, we
demonstrate that by incorporating knowledge graphs from diverse sources and
modalities into the sequences or SMILES representation, we can further enrich
the representation and achieve state-of-the-art results on established
benchmark datasets. We provide preprocessed and integrated data obtained from 7
public sources, which encompass over 30M triples. Additionally, we make
available the pre-trained models based on this data, along with the reported
outcomes of their performance on three widely-used benchmark datasets for
drug-target binding affinity prediction found in the Therapeutic Data Commons
(TDC) benchmarks. We also make the source code for training models on
benchmark datasets publicly available. Our objective in releasing these
pre-trained models, accompanied by clean data for model pretraining and
benchmark results, is to encourage research in knowledge-enhanced
representation learning.
|
[
{
"version": "v1",
"created": "Thu, 22 Jun 2023 11:01:41 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jun 2023 10:03:38 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Lam",
"Hoang Thanh",
""
],
[
"Sbodio",
"Marco Luca",
""
],
[
"Galindo",
"Marcos Martínez",
""
],
[
"Zayats",
"Mykhaylo",
""
],
[
"Fernández-Díaz",
"Raúl",
""
],
[
"Valls",
"Víctor",
""
],
[
"Picco",
"Gabriele",
""
],
[
"Ramis",
"Cesar Berrospi",
""
],
[
"López",
"Vanessa",
""
]
] |
new_dataset
| 0.994028 |
2306.13169
|
M Charity
|
M Charity, Dipika Rajesh, Sam Earle, and Julian Togelius
|
Amorphous Fortress: Observing Emergent Behavior in Multi-Agent FSMs
|
9 pages; Accepted to the 1st ALIFE for and from video games Workshop
2023
| null | null | null |
cs.AI cs.MA cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a system called Amorphous Fortress -- an abstract, yet spatial,
open-ended artificial life simulation. In this environment, the agents are
represented as finite-state machines (FSMs) which allow for multi-agent
interaction within a constrained space. These agents are created by randomly
generating and evolving the FSMs, sampling from pre-defined states and
transitions. This environment was designed to explore the emergent AI behaviors
found implicitly in simulation games such as Dwarf Fortress or The Sims. We
apply the hill-climber evolutionary search algorithm to this environment to
explore the various levels of depth and interaction from the generated FSMs.
|
[
{
"version": "v1",
"created": "Thu, 22 Jun 2023 19:32:53 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Charity",
"M",
""
],
[
"Rajesh",
"Dipika",
""
],
[
"Earle",
"Sam",
""
],
[
"Togelius",
"Julian",
""
]
] |
new_dataset
| 0.988948 |
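A generic hill-climber sketch in the spirit of the search described in the abstract above; the genome encoding, mutation operator, and fitness below are toy stand-ins, not the paper's FSM representation.

import random

def hill_climb(initial, mutate, fitness, steps=1000):
    # Greedy local search: keep the candidate whenever it is at least as fit.
    best, best_fit = initial, fitness(initial)
    for _ in range(steps):
        candidate = mutate(best)
        f = fitness(candidate)
        if f >= best_fit:
            best, best_fit = candidate, f
    return best, best_fit

# Toy stand-in: a "machine" is a list of transition labels; fitness rewards label diversity.
genome = [random.randrange(4) for _ in range(8)]
mutate = lambda g: [x if random.random() > 0.2 else random.randrange(4) for x in g]
fitness = lambda g: len(set(g))
print(hill_climb(genome, mutate, fitness, steps=200))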
2306.13268
|
Cheng Chi
|
Cheng Chi, Xin Zhang, Jiahui Liu, Yulong Sun, Zihao Zhang, and Xingqun
Zhan
|
GICI-LIB: A GNSS/INS/Camera Integrated Navigation Library
|
Open-source: https://github.com/chichengcn/gici-open. This work has
been submitted to the IEEE for possible publication. Copyright may be
transferred without notice, after which this version may no longer be
accessible
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate navigation is essential for autonomous robots and vehicles. In
recent years, the integration of the Global Navigation Satellite System (GNSS),
Inertial Navigation System (INS), and camera has garnered considerable
attention due to its robustness and high accuracy in diverse environments. In
such systems, fully utilizing the role of GNSS is cumbersome because of the
diverse choices of formulations, error models, satellite constellations, signal
frequencies, and service types, which lead to different precision, robustness,
and usage dependencies. To clarify the capacity of GNSS algorithms and
accelerate the development efficiency of employing GNSS in multi-sensor fusion
algorithms, we open source the GNSS/INS/Camera Integration Library (GICI-LIB),
together with detailed documentation and a comprehensive land vehicle dataset.
A factor graph optimization-based multi-sensor fusion framework is established,
which combines almost all GNSS measurement error sources by fully considering
temporal and spatial correlations between measurements. The graph structure is
designed for flexibility, making it easy to form any kind of integration
algorithm. For illustration, four Real-Time Kinematic (RTK)-based algorithms
from GICI-LIB are evaluated using our dataset. Results confirm the potential of
the GICI system to provide continuous precise navigation solutions in a wide
spectrum of urban environments.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 02:40:33 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Chi",
"Cheng",
""
],
[
"Zhang",
"Xin",
""
],
[
"Liu",
"Jiahui",
""
],
[
"Sun",
"Yulong",
""
],
[
"Zhang",
"Zihao",
""
],
[
"Zhan",
"Xingqun",
""
]
] |
new_dataset
| 0.99571 |
2306.13323
|
Alexander Tsaregorodtsev
|
Alexander Tsaregorodtsev, Michael Buchholz, Vasileios Belagiannis
|
Automated Automotive Radar Calibration With Intelligent Vehicles
|
5 pages, 4 figures, accepted for presentation at the 31st European
Signal Processing Conference (EUSIPCO), September 4 - September 8, 2023,
Helsinki, Finland
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While automotive radar sensors are widely adopted and have been used for
automatic cruise control and collision avoidance tasks, their application
outside of vehicles is still limited. As they have the ability to resolve
multiple targets in 3D space, radars can also be used for improving environment
perception. This application, however, requires a precise calibration, which is
usually a time-consuming and labor-intensive task. We, therefore, present an
approach for automated and geo-referenced extrinsic calibration of automotive
radar sensors that is based on a novel hypothesis filtering scheme. Our method
does not require external modifications of a vehicle and instead uses the
location data obtained from automated vehicles. This location data is then
combined with filtered sensor data to create calibration hypotheses. Subsequent
filtering and optimization recovers the correct calibration. Our evaluation on
data from a real testing site shows that our method can correctly calibrate
infrastructure sensors in an automated manner, thus enabling cooperative
driving scenarios.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 07:01:10 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Tsaregorodtsev",
"Alexander",
""
],
[
"Buchholz",
"Michael",
""
],
[
"Belagiannis",
"Vasileios",
""
]
] |
new_dataset
| 0.996519 |
2306.13379
|
Camilo Thorne
|
Chieling Yueh, Evangelos Kanoulas, Bruno Martins, Camilo Thorne, Saber
Akhondi
|
Stress Testing BERT Anaphora Resolution Models for Reaction Extraction
in Chemical Patents
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The high volume of published chemical patents and the importance of a timely
acquisition of their information motivate the automation of information
extraction from chemical patents. Anaphora resolution is an important component
of comprehensive information extraction, and is critical for extracting
reactions. In chemical patents, there are five anaphoric relations of interest:
co-reference, transformed, reaction associated, work up, and contained. Our
goal is to investigate how the performance of anaphora resolution models for
reaction texts in chemical patents differs between noise-free and noisy
environments, and to what extent we can improve the model's robustness against
noise.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 09:01:56 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Yueh",
"Chieling",
""
],
[
"Kanoulas",
"Evangelos",
""
],
[
"Martins",
"Bruno",
""
],
[
"Thorne",
"Camilo",
""
],
[
"Akhondi",
"Saber",
""
]
] |
new_dataset
| 0.992652 |
2306.13388
|
J\"ames M\'en\'etrey
|
Pascal Gerig, J\"ames M\'en\'etrey, Baptiste Lanoix, Florian Stoller,
Pascal Felber, Marcelo Pasin, Valerio Schiavoni
|
Preventing EFail Attacks with Client-Side WebAssembly: The Case of Swiss
Post's IncaMail
|
This publication incorporates results from the VEDLIoT project, which
received funding from the European Union's Horizon 2020 research and
innovation programme under grant agreement No 957197
|
DEBS'23: Proceedings of the 17th ACM International Conference on
Distributed and Event-Based Systems, Neuch\^atel, Switzerland, June 2023
|
10.1145/3583678.3596899
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional email encryption schemes are vulnerable to EFail attacks, which
exploit the lack of message authentication by manipulating ciphertexts and
exfiltrating plaintext via HTML backchannels. Swiss Post's IncaMail, a secure
email service for transmitting legally binding, encrypted, and verifiable
emails, counters EFail attacks using an authenticated-encryption with
associated data (AEAD) encryption scheme to ensure message privacy and
authentication between servers. IncaMail relies on a trusted infrastructure
backend and encrypts messages per user policy. This paper presents a revised
IncaMail architecture that offloads the majority of cryptographic operations to
clients, offering benefits such as reduced computational load and energy
footprint, relaxed trust assumptions, and per-message encryption key policies.
Our proof-of-concept prototype and benchmarks demonstrate the robustness of the
proposed scheme, with client-side WebAssembly-based cryptographic operations
yielding significant performance improvements (up to ~14x) over conventional
JavaScript implementations.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 09:15:04 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Gerig",
"Pascal",
""
],
[
"Ménétrey",
"Jämes",
""
],
[
"Lanoix",
"Baptiste",
""
],
[
"Stoller",
"Florian",
""
],
[
"Felber",
"Pascal",
""
],
[
"Pasin",
"Marcelo",
""
],
[
"Schiavoni",
"Valerio",
""
]
] |
new_dataset
| 0.995757 |
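A small sketch of the AEAD primitive the abstract above relies on, using Python's 'cryptography' package as a stand-in for the client-side WebAssembly implementation; the key handling and associated data shown here are illustrative assumptions, not IncaMail's actual protocol.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce, never reused with the same key
associated = b"sender=alice;recipient=bob"  # authenticated but not encrypted
ciphertext = aead.encrypt(nonce, b"legally binding message body", associated)
# Any modification of the ciphertext or associated data raises InvalidTag on decryption,
# which is what rules out EFail-style ciphertext manipulation.
plaintext = aead.decrypt(nonce, ciphertext, associated)
assert plaintext == b"legally binding message body"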
2306.13478
|
Adriano Pastore
|
Adriano Pastore
|
A Proof of the Weak Simplex Conjecture
|
6 pages, submitted to a conference for peer review
| null | null | null |
cs.IT math.IT math.MG
|
http://creativecommons.org/licenses/by/4.0/
|
We solve a long-standing open problem about the optimal codebook structure of
codes in $n$-dimensional Euclidean space that consist of $n+1$ codewords
subject to a codeword energy constraint, in terms of minimizing the average
decoding error probability. The conjecture states that optimal codebooks are
formed by the $n+1$ vertices of a regular simplex (the $n$-dimensional
generalization of a regular tetrahedron) inscribed in the unit sphere. A
self-contained proof of this conjecture is provided that hinges on symmetry
arguments and leverages a relaxation approach that consists in jointly
optimizing the codebook and the decision regions, rather than the codeword
locations alone.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 12:36:11 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Pastore",
"Adriano",
""
]
] |
new_dataset
| 0.992445 |
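For concreteness, the regular simplex codebook referred to in the abstract above can be written explicitly; this is the standard construction, stated here for illustration rather than quoted from the paper: $x_i = \sqrt{\tfrac{n+1}{n}}\,(e_i - \tfrac{1}{n+1}\mathbf{1})$ for $i = 1,\dots,n+1$, where $e_1,\dots,e_{n+1}$ are the standard basis vectors of $\mathbb{R}^{n+1}$ and $\mathbf{1}$ is the all-ones vector. These $n+1$ unit-energy codewords lie in an $n$-dimensional subspace and satisfy $\langle x_i, x_j\rangle = -\tfrac{1}{n}$ for $i \neq j$.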
2306.13486
|
Michael Mior
|
Michael Mior
|
Relational Playground: Teaching the Duality of Relational Algebra and
SQL
| null | null |
10.1145/3596673.3596978
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Students in introductory data management courses are often taught how to
write queries in SQL. This is a useful and practical skill, but it gives
limited insight into how queries are processed by relational database engines.
In contrast, relational algebra is a commonly used internal representation of
queries by database engines, but can be challenging for students to grasp. We
developed a tool we call Relational Playground for database students to explore
the connection between relational algebra and SQL.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 13:02:19 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Mior",
"Michael",
""
]
] |
new_dataset
| 0.95573 |
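A minimal sketch of the duality the abstract above teaches, with a hypothetical relation: the SQL query SELECT name FROM students WHERE grade > 90 corresponds to the relational-algebra expression pi_name(sigma_{grade>90}(students)), which the two functions below mimic over plain Python dictionaries (names and data are illustrative, not taken from the tool).

def select(relation, predicate):
    # sigma: keep only the tuples that satisfy the predicate
    return [row for row in relation if predicate(row)]

def project(relation, attributes):
    # pi: keep only the listed attributes of every tuple
    return [{a: row[a] for a in attributes} for row in relation]

students = [{"name": "Ada", "grade": 95}, {"name": "Bob", "grade": 82}]
print(project(select(students, lambda r: r["grade"] > 90), ["name"]))
# -> [{'name': 'Ada'}]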
2306.13505
|
Katherine Mimnaugh
|
Katherine J. Mimnaugh, Evan G. Center, Markku Suomalainen, Israel
Becerra, Eliezer Lozano, Rafael Murrieta-Cid, Timo Ojala, Steven M. LaValle,
and Kara D. Federmeier
|
Virtual Reality Sickness Reduces Attention During Immersive Experiences
| null | null | null | null |
cs.HC q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we show that Virtual Reality (VR) sickness is associated with
a reduction in attention, which was detected with the P3b Event-Related
Potential (ERP) component from electroencephalography (EEG) measurements
collected in a dual-task paradigm. We hypothesized that sickness symptoms such
as nausea, eyestrain, and fatigue would reduce the users' capacity to pay
attention to tasks completed in a virtual environment, and that this reduction
in attention would be dynamically reflected in a decrease of the P3b amplitude
while VR sickness was experienced. In a user study, participants were taken on
a tour through a museum in VR along paths with varying amounts of rotation,
shown previously to cause different levels of VR sickness. While paying
attention to the virtual museum (the primary task), participants were asked to
silently count tones of a different frequency (the secondary task). Control
measurements for comparison against the VR sickness conditions were taken when
the users were not wearing the Head-Mounted Display (HMD) and while they were
immersed in VR but not moving through the environment. This exploratory study
shows, across multiple analyses, that the mean amplitude of the P3b
collected during the task is associated with both sickness severity measured
after the task with a questionnaire (SSQ) and with the number of counting
errors on the secondary task. Thus, VR sickness may impair attention and task
performance, and these changes in attention can be tracked with ERP measures as
they happen, without asking participants to assess their sickness symptoms in
the moment.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 14:06:13 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Mimnaugh",
"Katherine J.",
""
],
[
"Center",
"Evan G.",
""
],
[
"Suomalainen",
"Markku",
""
],
[
"Becerra",
"Israel",
""
],
[
"Lozano",
"Eliezer",
""
],
[
"Murrieta-Cid",
"Rafael",
""
],
[
"Ojala",
"Timo",
""
],
[
"LaValle",
"Steven M.",
""
],
[
"Federmeier",
"Kara D.",
""
]
] |
new_dataset
| 0.97944 |
2306.13531
|
Satoshi Tsutsui
|
Satoshi Tsutsui, Winnie Pang, Bihan Wen
|
WBCAtt: A White Blood Cell Dataset Annotated with Detailed Morphological
Attributes
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The examination of blood samples at a microscopic level plays a fundamental
role in clinical diagnostics, informing the diagnosis of a wide range of medical conditions.
For instance, an in-depth study of White Blood Cells (WBCs), a crucial
component of our blood, is essential for diagnosing blood-related diseases such
as leukemia and anemia. While multiple datasets containing WBC images have been
proposed, they mostly focus on cell categorization, often lacking the necessary
morphological details to explain such categorizations, despite the importance
of explainable artificial intelligence (XAI) in medical domains. This paper
seeks to address this limitation by introducing comprehensive annotations for
WBC images. Through collaboration with pathologists, a thorough literature
review, and manual inspection of microscopic images, we have identified 11
morphological attributes associated with the cell and its components (nucleus,
cytoplasm, and granules). We then annotated ten thousand WBC images with these
attributes. Moreover, we conduct experiments to predict these attributes from
images, providing insights beyond basic WBC classification. As the first public
dataset to offer such extensive annotations, we also illustrate specific
applications that can benefit from our attribute annotations. Overall, our
dataset paves the way for interpreting WBC recognition models, further
advancing XAI in the fields of pathology and hematology.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 14:52:37 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Tsutsui",
"Satoshi",
""
],
[
"Pang",
"Winnie",
""
],
[
"Wen",
"Bihan",
""
]
] |
new_dataset
| 0.999862 |
2306.13631
|
Ay\c{c}a Takmaz
|
Ay\c{c}a Takmaz, Elisabetta Fedele, Robert W. Sumner, Marc Pollefeys,
Federico Tombari, Francis Engelmann
|
OpenMask3D: Open-Vocabulary 3D Instance Segmentation
|
project page: https://openmask3d.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the task of open-vocabulary 3D instance segmentation.
Traditional approaches for 3D instance segmentation largely rely on existing 3D
annotated datasets, which are restricted to a closed set of object categories.
This is an important limitation for real-life applications where one might need
to perform tasks guided by novel, open-vocabulary queries related to a wide
variety of objects. Recently, open-vocabulary 3D scene understanding methods
have emerged to address this problem by learning queryable features for each
point in the scene. While such a representation can be directly employed to
perform semantic segmentation, existing methods have limitations in their
ability to identify object instances. In this work, we address this limitation,
and propose OpenMask3D, which is a zero-shot approach for open-vocabulary 3D
instance segmentation. Guided by predicted class-agnostic 3D instance masks,
our model aggregates per-mask features via multi-view fusion of CLIP-based
image embeddings. We conduct experiments and ablation studies on the ScanNet200
dataset to evaluate the performance of OpenMask3D, and provide insights about
the open-vocabulary 3D instance segmentation task. We show that our approach
outperforms other open-vocabulary counterparts, particularly on the long-tail
distribution. Furthermore, OpenMask3D goes beyond the limitations of
close-vocabulary approaches, and enables the segmentation of object instances
based on free-form queries describing object properties such as semantics,
geometry, affordances, and material properties.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 17:36:44 GMT"
}
] | 2023-06-26T00:00:00 |
[
[
"Takmaz",
"Ayça",
""
],
[
"Fedele",
"Elisabetta",
""
],
[
"Sumner",
"Robert W.",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Tombari",
"Federico",
""
],
[
"Engelmann",
"Francis",
""
]
] |
new_dataset
| 0.999823 |
2004.12141
|
Ayrat Khalimov
|
L\'eo Exibard, Emmanuel Filiot, Ayrat Khalimov
|
Church Synthesis on Register Automata over Linearly Ordered Data Domains
|
v7: final journal version
| null |
10.4230/LIPIcs.STACS.2021.54
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a Church synthesis game, two players, Adam and Eve, alternately pick some
element in a finite alphabet, for an infinite number of rounds. The game is won
by Eve if the omega-word formed by this infinite interaction belongs to a given
language S, called the specification. It is well-known that for omega-regular
specifications, it is decidable whether Eve has a strategy to enforce the
specification no matter what Adam does. We study the extension of Church
synthesis games to the linearly ordered data domains (Q, <) and (N, <). In this
setting, the infinite interaction between Adam and Eve results in an omega-data
word, i.e., an infinite sequence of elements in the domain.
We study this problem when specifications are given as register automata.
These automata consist of finite automata equipped with a finite set of
registers in which they can store data values, which they can then compare with
incoming data values with respect to the linear order. Church games over (N, <)
are, however, undecidable, even for deterministic register automata. Thus, we
introduce one-sided Church games, where Eve instead operates over a finite
alphabet, while Adam still manipulates data. We show that they are determined,
and that deciding the existence of a winning strategy is in ExpTime, both for Q
and N. This follows from a study of constraint sequences, which abstract the
behaviour of register automata, and allow us to reduce Church games to
omega-regular games. We present an application of one-sided Church games to a
transducer synthesis problem. In this application, a transducer models a
reactive system (Eve) which outputs data stored in its registers, depending on
its interaction with an environment (Adam) which inputs data to the system.
|
[
{
"version": "v1",
"created": "Sat, 25 Apr 2020 13:23:47 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Jan 2021 13:31:02 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Jan 2021 20:26:47 GMT"
},
{
"version": "v4",
"created": "Mon, 12 Apr 2021 18:55:27 GMT"
},
{
"version": "v5",
"created": "Wed, 6 Oct 2021 06:16:00 GMT"
},
{
"version": "v6",
"created": "Mon, 20 Mar 2023 12:20:15 GMT"
},
{
"version": "v7",
"created": "Thu, 22 Jun 2023 16:58:56 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Exibard",
"Léo",
""
],
[
"Filiot",
"Emmanuel",
""
],
[
"Khalimov",
"Ayrat",
""
]
] |
new_dataset
| 0.994512 |
2106.02602
|
Alexey Zaytsev
|
Evgenia Romanenkova and Alexander Stepikin and Matvey Morozov and
Alexey Zaytsev
|
InDiD: Instant Disorder Detection via Representation Learning
| null | null |
10.1145/3503161.3548182
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For sequential data, a change point is a moment of abrupt regime switch in
data streams. Such changes appear in different scenarios, including simpler
data from sensors and more challenging video surveillance data. We need to
detect disorders as fast as possible. Classic approaches for change point
detection (CPD) might underperform for semi-structured sequential data because
they cannot process its structure without a proper representation. We propose a
principled loss function that balances change detection delay and time to a
false alarm. It approximates classic rigorous solutions but is differentiable
and allows representation learning for deep models. We consider synthetic
sequences, real-world sensor data, and videos with change points. For the video
data, we carefully labelled the change point moments and released this
annotation for the first time. Experiments suggest that complex data require
meaningful representations tailored for the specificity of the CPD task -- and
our approach provides them, outperforming the considered baselines. For example, for
explosion detection in video, the F1 score for our method is $0.53$ compared to
baseline scores of $0.31$ and $0.35$.
|
[
{
"version": "v1",
"created": "Fri, 4 Jun 2021 17:04:13 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Dec 2021 13:57:05 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Apr 2022 07:22:27 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Romanenkova",
"Evgenia",
""
],
[
"Stepikin",
"Alexander",
""
],
[
"Morozov",
"Matvey",
""
],
[
"Zaytsev",
"Alexey",
""
]
] |
new_dataset
| 0.998269 |
2204.00132
|
Walter Zimmer
|
Walter Zimmer, Marcus Grabler and Alois Knoll
|
Real-Time and Robust 3D Object Detection Within Road-Side LiDARs Using
Domain Adaptation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This work aims to address the challenges in domain adaptation of 3D object
detection using infrastructure LiDARs. We design a model, DASE-ProPillars, that
can detect vehicles in infrastructure-based LiDARs in real time. Our model uses
PointPillars as the baseline model with additional modules to improve the 3D
detection performance. To prove the effectiveness of our proposed modules in
DASE-ProPillars, we train and evaluate the model on two datasets, the open
source A9-Dataset and a semi-synthetic infrastructure dataset created within
the Regensburg Next project. We do several sets of experiments for each module
in the DASE-ProPillars detector that show that our model outperforms the
SE-ProPillars baseline on the real A9 test set and a semi-synthetic A9 test
set, while maintaining an inference speed of 45 Hz (22 ms). We apply domain
adaptation from the semi-synthetic A9-Dataset to the semi-synthetic dataset
from the Regensburg Next project by applying transfer learning and achieve a 3D
mAP@0.25 of 93.49% on the Car class of the target test set using 40 recall
positions.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2022 22:54:49 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 16:27:57 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Zimmer",
"Walter",
""
],
[
"Grabler",
"Marcus",
""
],
[
"Knoll",
"Alois",
""
]
] |
new_dataset
| 0.995569 |
2204.02482
|
Huifeng Zhu
|
Huifeng Zhu, Haoqi Shan, Dean Sullivan, Xiaolong Guo, Yier Jin, Xuan
Zhang
|
PDNPulse: Sensing PCB Anomaly with the Intrinsic Power Delivery Network
|
This paper has been accepted by IEEE Transactions on Information
Forensics and Security (TIFS'2023)
| null |
10.1109/TIFS.2023.3285490
| null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
The ubiquitous presence of printed circuit boards (PCBs) in modern electronic
systems and embedded devices makes their integrity a top security concern. To
take advantage of the economies of scale, today's PCB design and manufacturing
are often performed by suppliers around the globe, exposing them to many
security vulnerabilities along the segmented PCB supply chain. Moreover, the
increasing complexity of the PCB designs also leaves ample room for numerous
sneaky board-level attacks to be implemented throughout each stage of a PCB's
lifetime, threatening many electronic devices. In this paper, we propose
PDNPulse, a power delivery network (PDN) based PCB anomaly detection framework
that can identify a wide spectrum of board-level malicious modifications.
PDNPulse leverages the fact that the PDN's characteristics are inevitably
affected by modifications to the PCB, no matter how minuscule. By detecting
changes to the PDN impedance profile and using the Frechet distance-based
anomaly detection algorithms, PDNPulse can robustly and successfully discern
malicious modifications across the system. Using PDNPulse, we conduct extensive
experiments on seven commercial-off-the-shelf PCBs, covering different design
scales, different threat models, and seven different anomaly types. The results
confirm that PDNPulse creates an effective security asymmetry between attack
and defense.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2022 20:32:43 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 03:03:30 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Zhu",
"Huifeng",
""
],
[
"Shan",
"Haoqi",
""
],
[
"Sullivan",
"Dean",
""
],
[
"Guo",
"Xiaolong",
""
],
[
"Jin",
"Yier",
""
],
[
"Zhang",
"Xuan",
""
]
] |
new_dataset
| 0.997439 |
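A generic sketch of the discrete Frechet distance used for the comparison step described in the abstract above; the profiles and the decision threshold are hypothetical values for illustration, not measurements or code from the paper.

import numpy as np

def discrete_frechet(p, q):
    # Classic dynamic-programming formulation over two 1D profiles.
    n, m = len(p), len(q)
    ca = np.full((n, m), -1.0)
    def c(i, j):
        if ca[i, j] >= 0:
            return ca[i, j]
        d = abs(p[i] - q[j])
        if i == 0 and j == 0:
            ca[i, j] = d
        elif i == 0:
            ca[i, j] = max(c(0, j - 1), d)
        elif j == 0:
            ca[i, j] = max(c(i - 1, 0), d)
        else:
            ca[i, j] = max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return ca[i, j]
    return c(n - 1, m - 1)

golden = np.array([1.0, 1.2, 1.1, 0.9])   # reference impedance profile
probe = np.array([1.0, 1.6, 1.1, 0.9])    # profile measured on the board under test
print(discrete_frechet(golden, probe) > 0.3)  # True -> flag a possible modification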
2208.12558
|
Giacomo Ortali
|
Walter Didimo, Michael Kaufmann, Giuseppe Liotta, Giacomo Ortali
|
Rectilinear Planarity of Partial 2-Trees
|
arXiv admin note: substantial text overlap with arXiv:2110.00548
Appears in the Proceedings of the 30th International Symposium on Graph
Drawing and Network Visualization (GD 2022)
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph is rectilinear planar if it admits a planar orthogonal drawing
without bends. While testing rectilinear planarity is NP-hard in general (Garg
and Tamassia, 2001), it is a long-standing open problem to establish a tight
upper bound on its complexity for partial 2-trees, i.e., graphs whose
biconnected components are series-parallel. We describe a new O(n^2)-time
algorithm to test rectilinear planarity of partial 2-trees, which improves over
the current best bound of O(n^3 \log n) (Di Giacomo et al., 2022). Moreover,
for partial 2-trees where no two parallel-components in a biconnected component
share a pole, we are able to achieve optimal O(n)-time complexity. Our
algorithms are based on an extensive study and a deeper understanding of the
notion of orthogonal spirality, introduced several years ago (Di Battista et
al, 1998) to describe how much an orthogonal drawing of a subgraph is rolled-up
in an orthogonal drawing of the graph.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 10:09:18 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Aug 2022 09:54:16 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Sep 2022 13:54:58 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Jun 2023 10:01:47 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Didimo",
"Walter",
""
],
[
"Kaufmann",
"Michael",
""
],
[
"Liotta",
"Giuseppe",
""
],
[
"Ortali",
"Giacomo",
""
]
] |
new_dataset
| 0.99848 |
2211.12020
|
Alexandre Duval
|
Alexandre Duval, Victor Schmidt, Santiago Miret, Yoshua Bengio, Alex
Hern\'andez-Garc\'ia, David Rolnick
|
PhAST: Physics-Aware, Scalable, and Task-specific GNNs for Accelerated
Catalyst Design
|
Accepted at the NeurIPS 2022 AI for Accelerated Materials Design
Workshop. Under submission at JMLR
| null | null | null |
cs.LG physics.comp-ph
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Mitigating the climate crisis requires a rapid transition towards
lower-carbon energy. Catalyst materials play a crucial role in the
electrochemical reactions involved in numerous industrial processes key to this
transition, such as renewable energy storage and electrofuel synthesis. To
reduce the energy spent on such activities, we must quickly discover more
efficient catalysts to drive electrochemical reactions. Machine learning (ML)
holds the potential to efficiently model materials properties from large
amounts of data, accelerating electrocatalyst design. The Open Catalyst Project
OC20 dataset was constructed to that end. However, ML models trained on OC20
are still neither scalable nor accurate enough for practical applications. In
this paper, we propose task-specific innovations applicable to most
architectures, enhancing both computational efficiency and accuracy. This
includes improvements in (1) the graph creation step, (2) atom representations,
(3) the energy prediction head, and (4) the force prediction head. We describe
these contributions and evaluate them thoroughly on multiple architectures.
Overall, our proposed PhAST improvements increase energy MAE by 4 to 42$\%$
while dividing compute time by 3 to 8$\times$ depending on the targeted
task/model. PhAST also enables CPU training, leading to 40$\times$ speedups in
highly parallelized settings. Python package:
\url{https://phast.readthedocs.io}.
|
[
{
"version": "v1",
"created": "Tue, 22 Nov 2022 05:24:30 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 15:53:49 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jun 2023 10:34:42 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Duval",
"Alexandre",
""
],
[
"Schmidt",
"Victor",
""
],
[
"Miret",
"Santiago",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Hernández-García",
"Alex",
""
],
[
"Rolnick",
"David",
""
]
] |
new_dataset
| 0.994772 |
2303.02936
|
Yuanzheng Ci
|
Yuanzheng Ci, Yizhou Wang, Meilin Chen, Shixiang Tang, Lei Bai, Feng
Zhu, Rui Zhao, Fengwei Yu, Donglian Qi, Wanli Ouyang
|
UniHCP: A Unified Model for Human-Centric Perceptions
|
Accepted for publication at the IEEE/CVF Conference on Computer
Vision and Pattern Recognition 2023 (CVPR 2023)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Human-centric perceptions (e.g., pose estimation, human parsing, pedestrian
detection, person re-identification, etc.) play a key role in industrial
applications of visual models. While specific human-centric tasks have their
own relevant semantic aspect to focus on, they also share the same underlying
semantic structure of the human body. However, few works have attempted to
exploit such homogeneity and design a general-purpose model for human-centric
tasks. In this work, we revisit a broad range of human-centric tasks and unify
them in a minimalist manner. We propose UniHCP, a Unified Model for
Human-Centric Perceptions, which unifies a wide range of human-centric tasks in
a simplified end-to-end manner with the plain vision transformer architecture.
With large-scale joint training on 33 human-centric datasets, UniHCP can
outperform strong baselines on several in-domain and downstream tasks by direct
evaluation. When adapted to a specific task, UniHCP achieves new SOTAs on a
wide range of human-centric tasks, e.g., 69.8 mIoU on CIHP for human parsing,
86.18 mA on PA-100K for attribute prediction, 90.3 mAP on Market1501 for ReID,
and 85.8 JI on CrowdHuman for pedestrian detection, performing better than
specialized models tailored for each task.
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 07:10:07 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Mar 2023 20:26:47 GMT"
},
{
"version": "v3",
"created": "Fri, 26 May 2023 09:05:14 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Jun 2023 05:17:53 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Ci",
"Yuanzheng",
""
],
[
"Wang",
"Yizhou",
""
],
[
"Chen",
"Meilin",
""
],
[
"Tang",
"Shixiang",
""
],
[
"Bai",
"Lei",
""
],
[
"Zhu",
"Feng",
""
],
[
"Zhao",
"Rui",
""
],
[
"Yu",
"Fengwei",
""
],
[
"Qi",
"Donglian",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
new_dataset
| 0.961955 |
2303.04091
|
Benjamin Estermann
|
Giacomo Camposampiero, Loic Houmard, Benjamin Estermann, Jo\"el
Mathys, Roger Wattenhofer
|
Abstract Visual Reasoning Enabled by Language
|
The first two authors have contributed equally to this work. Accepted
as regular paper at CVPR 2023 Workshop and Challenges for New Frontiers in
Visual Language Reasoning: Compositionality, Prompts and Causality (NFVLR)
| null | null | null |
cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While artificial intelligence (AI) models have achieved human or even
superhuman performance in many well-defined applications, they still struggle
to show signs of broad and flexible intelligence. The Abstraction and Reasoning
Corpus (ARC), a visual intelligence benchmark introduced by Fran\c{c}ois
Chollet, aims to assess how close AI systems are to human-like cognitive
abilities. Most current approaches rely on carefully handcrafted
domain-specific program searches to brute-force solutions for the tasks present
in ARC. In this work, we propose a general learning-based framework for solving
ARC. It is centered on transforming tasks from the vision to the language
domain. This composition of language and vision allows for pre-trained models
to be leveraged at each stage, enabling a shift from handcrafted priors towards
the learned priors of the models. While not yet beating state-of-the-art models
on ARC, we demonstrate the potential of our approach, for instance, by solving
some ARC tasks that have not been solved previously.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 17:52:46 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2023 12:52:24 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jun 2023 10:41:41 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Camposampiero",
"Giacomo",
""
],
[
"Houmard",
"Loic",
""
],
[
"Estermann",
"Benjamin",
""
],
[
"Mathys",
"Joël",
""
],
[
"Wattenhofer",
"Roger",
""
]
] |
new_dataset
| 0.951238 |
2305.03138
|
Nabajeet Barman
|
Nabajeet Barman, Yuriy Reznik and Maria G. Martini
|
A Subjective Dataset for Multi-Screen Video Streaming Applications
| null | null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In modern-era video streaming systems, videos are streamed and displayed on a
wide range of devices. Such devices vary from large-screen UHD and HDTVs to
medium-screen Desktop PCs and Laptops to smaller-screen devices such as mobile
phones and tablets. It is well known that a video is perceived differently when
displayed on different devices. The viewing experience for a particular video
on smaller screen devices such as smartphones and tablets, which have high
pixel density, will differ from the case where the same video
is played on a large screen device such as a TV or PC monitor. Being able to
model such relative differences in perception effectively can help in the
design of better quality metrics and in the design of more efficient and
optimized encoding profiles, leading to lower storage, encoding, and
transmission costs. This paper presents a new, open-source dataset consisting
of subjective ratings for various encoded video sequences of different
resolutions and bitrates (quality) when viewed on three devices of varying
screen sizes: TV, Tablet, and Mobile. Along with the subjective scores, an
evaluation of some of the most well-known and commonly used open-source
objective quality metrics is also presented. It is observed that the performance
of the metrics varies considerably across different device types, with the recently
standardized ITU-T P.1204.3 Model, on average, outperforming their
full-reference counterparts. The dataset consisting of the videos, along with
their subjective and objective scores, is available freely on Github at
https://github.com/NabajeetBarman/Multiscreen-Dataset.
|
[
{
"version": "v1",
"created": "Thu, 4 May 2023 20:42:51 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 14:30:17 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Barman",
"Nabajeet",
""
],
[
"Reznik",
"Yuriy",
""
],
[
"Martini",
"Maria G.",
""
]
] |
new_dataset
| 0.999861 |
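A short sketch of how objective metrics are typically evaluated against such subjective scores, using SciPy correlation measures; the numbers below are toy values for illustration, not entries from the released dataset.

from scipy.stats import pearsonr, spearmanr

mos = [4.5, 3.8, 2.9, 2.1, 1.5]          # subjective mean opinion scores per sequence
metric = [92.0, 85.0, 71.0, 60.0, 44.0]  # scores from one objective quality metric

plcc, _ = pearsonr(mos, metric)          # linear correlation
srcc, _ = spearmanr(mos, metric)         # rank-order correlation
print(f"PLCC={plcc:.3f}, SRCC={srcc:.3f}")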
2306.08411
|
Izzy Friedlander
|
Izzy Friedlander
|
The MacWilliams Identity for the Hermitian Rank Metric
|
39 pages. arXiv admin note: substantial text overlap with
arXiv:2210.16153
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Error-correcting codes have an important role in data storage and
transmission and in cryptography, particularly in the post-quantum era.
Hermitian matrices over finite fields and equipped with the rank metric have
the potential to offer enhanced security with greater efficiency in encryption
and decryption. One crucial tool for evaluating the error-correcting
capabilities of a code is its weight distribution and the MacWilliams Theorem
has long been used to identify this structure of new codes from their known
duals. Earlier papers have developed the MacWilliams Theorem for certain
classes of matrices in the form of a functional transformation, developed using
$q$-algebra, character theory and Generalised Krawtchouk polynomials, which is
easy to apply and also allows for moments of the weight distribution to be
found. In this paper, recent work by Kai-Uwe Schmidt on the properties of codes
based on Hermitian matrices such as bounds on their size and the eigenvalues of
their association scheme is extended by introducing a negative-$q$ algebra to
establish a MacWilliams Theorem in this form together with some of its
associated moments. The similarities in this approach and in the paper for the
Skew-Rank metric by Friedlander et al. have been emphasised to facilitate
future generalisation to any translation scheme.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 10:11:52 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 15:50:22 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Friedlander",
"Izzy",
""
]
] |
new_dataset
| 0.999299 |
2306.11207
|
Wisdom Ikezogwo
|
Wisdom Oluchi Ikezogwo, Mehmet Saygin Seyfioglu, Fatemeh Ghezloo,
Dylan Stefan Chan Geva, Fatwir Sheikh Mohammed, Pavan Kumar Anand, Ranjay
Krishna, Linda Shapiro
|
Quilt-1M: One Million Image-Text Pairs for Histopathology
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent accelerations in multi-modal applications have been made possible with
the plethora of image and text data available online. However, the scarcity of
analogous data in the medical field, specifically in histopathology, has halted
comparable progress. To enable similar representation learning for
histopathology, we turn to YouTube, an untapped resource of videos, offering
$1,087$ hours of valuable educational histopathology videos from expert
clinicians. From YouTube, we curate Quilt: a large-scale vision-language
dataset consisting of $768,826$ image and text pairs. Quilt was automatically
curated using a mixture of models, including large language models, handcrafted
algorithms, human knowledge databases, and automatic speech recognition. In
comparison, the most comprehensive datasets curated for histopathology amass
only around $200$K samples. We combine Quilt with datasets from other sources,
including Twitter, research papers, and the internet in general, to create an
even larger dataset: Quilt-1M, with $1$M paired image-text samples, marking it
as the largest vision-language histopathology dataset to date. We demonstrate
the value of Quilt-1M by fine-tuning a pre-trained CLIP model. Our model
outperforms state-of-the-art models on both zero-shot and linear probing tasks
for classifying new histopathology images across $13$ diverse patch-level
datasets of $8$ different sub-pathologies and cross-modal retrieval tasks.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 00:14:47 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 05:01:16 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Ikezogwo",
"Wisdom Oluchi",
""
],
[
"Seyfioglu",
"Mehmet Saygin",
""
],
[
"Ghezloo",
"Fatemeh",
""
],
[
"Geva",
"Dylan Stefan Chan",
""
],
[
"Mohammed",
"Fatwir Sheikh",
""
],
[
"Anand",
"Pavan Kumar",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Shapiro",
"Linda",
""
]
] |
new_dataset
| 0.992539 |
2306.11710
|
Maxim Maximov
|
Maxim Maximov, Tim Meinhardt, Ismail Elezi, Zoe Papakipos, Caner
Hazirbas, Cristian Canton Ferrer, Laura Leal-Taix\'e
|
Data-Driven but Privacy-Conscious: Pedestrian Dataset De-identification
via Full-Body Person Synthesis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The advent of data-driven technology solutions is accompanied by an
increasing concern with data privacy. This is of particular importance for
human-centered image recognition tasks, such as pedestrian detection,
re-identification, and tracking. To highlight the importance of privacy issues
and motivate future research, we introduce the Pedestrian Dataset
De-Identification (PDI) task. PDI evaluates the degree of de-identification and
downstream task training performance for a given de-identification method. As a
first baseline, we propose IncogniMOT, a two-stage full-body de-identification
pipeline based on image synthesis via generative adversarial networks. The
first stage replaces target pedestrians with synthetic identities. To improve
downstream task performance, we then apply stage two, which blends and adapts
the synthetic image parts into the data. To demonstrate the effectiveness of
IncogniMOT, we generate a fully de-identified version of the MOT17 pedestrian
tracking dataset and analyze its application as training data for pedestrian
re-identification, detection, and tracking models. Furthermore, we show how our
data is able to narrow the synthetic-to-real performance gap in a
privacy-conscious manner.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 17:39:24 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 10:15:48 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Maximov",
"Maxim",
""
],
[
"Meinhardt",
"Tim",
""
],
[
"Elezi",
"Ismail",
""
],
[
"Papakipos",
"Zoe",
""
],
[
"Hazirbas",
"Caner",
""
],
[
"Ferrer",
"Cristian Canton",
""
],
[
"Leal-Taixé",
"Laura",
""
]
] |
new_dataset
| 0.999244 |
2306.11975
|
Hiroyuki Ootomo
|
Hiroyuki Ootomo, Katsuhisa Ozaki, Rio Yokota
|
DGEMM on Integer Matrix Multiplication Unit
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning hardware achieves high throughput and low power consumption by
reducing computing precision and specializing in matrix multiplication. For
machine learning inference, fixed-point value computation is commonplace, where
the input and output values and the model parameters are quantized. Thus, many
processors are now equipped with fast integer matrix multiplication units
(IMMUs). It is of significant interest to find a way to harness these IMMUs to
improve the performance of HPC applications while maintaining accuracy. We
focus on the Ozaki scheme, which computes a high-precision matrix
multiplication by using lower-precision computing units, and show the
advantages and disadvantages of using IMMU. The experiment using integer Tensor
Cores shows that we can compute double-precision matrix multiplication faster
than cuBLAS and an existing Ozaki scheme implementation on FP16 Tensor Cores on
NVIDIA consumer GPUs. Furthermore, we demonstrate accelerating a quantum
circuit simulation by up to 4.33x while maintaining FP64 accuracy.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 02:03:28 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 02:56:13 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Ootomo",
"Hiroyuki",
""
],
[
"Ozaki",
"Katsuhisa",
""
],
[
"Yokota",
"Rio",
""
]
] |
new_dataset
| 0.990662 |
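The Ozaki-scheme record above describes computing double-precision GEMM from exact lower-precision partial products. As a rough illustration of that splitting idea only (not part of the dataset record and not the paper's implementation; the slice count, bit width, per-matrix scaling, and NumPy-based accumulation are all simplifying assumptions), a minimal Python sketch:

import numpy as np

def split_into_int_slices(A, num_slices=4, bits=8):
    """Return (slices, scale) with A ~= scale * sum_k slices[k] * 2**(-(k + 1) * bits)."""
    scale = float(np.max(np.abs(A)))
    if scale == 0.0:
        scale = 1.0
    residual = A / scale                      # entries now in [-1, 1]
    slices = []
    for _ in range(num_slices):
        residual = residual * (1 << bits)     # expose the next `bits` bits
        digit = np.rint(residual)             # integer digit, |digit| <= 2**bits
        slices.append(digit.astype(np.int64))
        residual = residual - digit           # keep only the low-order remainder
    return slices, scale

def ozaki_style_matmul(A, B, num_slices=4, bits=8):
    """Approximate A @ B by accumulating exact integer partial products."""
    A_slices, sa = split_into_int_slices(A, num_slices, bits)
    B_slices, sb = split_into_int_slices(B, num_slices, bits)
    C = np.zeros((A.shape[0], B.shape[1]))
    for i, Ai in enumerate(A_slices):
        for j, Bj in enumerate(B_slices):
            exact = Ai @ Bj                   # exact in int64 for moderate matrix sizes
            C += exact.astype(np.float64) * 2.0 ** (-(i + j + 2) * bits)
    return sa * sb * C

# Quick check against NumPy's native FP64 GEMM.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))
print("max abs error vs. A @ B:", np.max(np.abs(ozaki_style_matmul(A, B) - A @ B)))

The accumulation here is done in FP64 for brevity; the actual scheme keeps the integer partial products exact and controls rounding in the final summation.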
2306.12436
|
Bing Wang
|
Junkai Mao, Yuexing Han and Bing Wang
|
MPSTAN: Metapopulation-based Spatio-Temporal Attention Network for
Epidemic Forecasting
| null | null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate epidemic forecasting plays a vital role for governments in
developing effective prevention measures for suppressing epidemics. Most
existing spatio-temporal models cannot provide a general framework for stable
and accurate forecasting of epidemics with diverse evolution trends.
Incorporating epidemiological domain knowledge ranging from single-patch to
multi-patch into neural networks is expected to improve forecasting accuracy.
However, relying solely on single-patch knowledge neglects inter-patch
interactions, while constructing multi-patch knowledge is challenging without
population mobility data. To address the aforementioned problems, we propose a
novel hybrid model called Metapopulation-based Spatio-Temporal Attention
Network (MPSTAN). This model aims to improve the accuracy of epidemic
forecasting by incorporating multi-patch epidemiological knowledge into a
spatio-temporal model and adaptively defining inter-patch interactions.
Moreover, we incorporate inter-patch epidemiological knowledge into both the
model construction and loss function to help the model learn epidemic
transmission dynamics. Extensive experiments conducted on two representative
datasets with different epidemiological evolution trends demonstrate that our
proposed model outperforms the baselines and provides more accurate and stable
short- and long-term forecasting. We confirm the effectiveness of domain
knowledge in the learning model and investigate the impact of different ways of
integrating domain knowledge on forecasting. We observe that using domain
knowledge in both model construction and loss functions leads to more efficient
forecasting, and selecting appropriate domain knowledge can improve accuracy
further.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 18:12:55 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Mao",
"Junkai",
""
],
[
"Han",
"Yuexing",
""
],
[
"Wang",
"Bing",
""
]
] |
new_dataset
| 0.996324 |
2306.12525
|
Zixiang Zhou
|
Dongqiangzi Ye, Yufei Xie, Weijia Chen, Zixiang Zhou, Hassan Foroosh
|
LPFormer: LiDAR Pose Estimation Transformer with Multi-Task Network
|
Technical report of the top solution for the Waymo Open Dataset
Challenges 2023 - Pose Estimation. CVPR 2023 Workshop on Autonomous Driving
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this technical report, we present the 1st place solution for the 2023
Waymo Open Dataset Pose Estimation challenge. Due to the difficulty of
acquiring large-scale 3D human keypoint annotation, previous methods have
commonly relied on 2D image features and 2D sequential annotations for 3D human
pose estimation. In contrast, our proposed method, named LPFormer, uses only
LiDAR as its input along with its corresponding 3D annotations. LPFormer
consists of two stages: the first stage detects the human bounding box and
extracts multi-level feature representations, while the second stage employs a
transformer-based network to regress the human keypoints using these features.
Experimental results on the Waymo Open Dataset demonstrate top performance,
with improvements even over previous multi-modal solutions.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 19:20:15 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Ye",
"Dongqiangzi",
""
],
[
"Xie",
"Yufei",
""
],
[
"Chen",
"Weijia",
""
],
[
"Zhou",
"Zixiang",
""
],
[
"Foroosh",
"Hassan",
""
]
] |
new_dataset
| 0.999442 |
2306.12547
|
Shuzhe Wang
|
Shuzhe Wang, Juho Kannala, Daniel Barath
|
DGC-GNN: Descriptor-free Geometric-Color Graph Neural Network for 2D-3D
Matching
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Direct matching of 2D keypoints in an input image to a 3D point cloud of the
scene without requiring visual descriptors has garnered increased interest due
to its lower memory requirements, inherent privacy preservation, and reduced
need for expensive 3D model maintenance compared to visual descriptor-based
methods. However, existing algorithms often compromise on performance,
resulting in a significant deterioration compared to their descriptor-based
counterparts. In this paper, we introduce DGC-GNN, a novel algorithm that
employs a global-to-local Graph Neural Network (GNN) that progressively
exploits geometric and color cues to represent keypoints, thereby improving
matching robustness. Our global-to-local procedure encodes both Euclidean and
angular relations at a coarse level, forming the geometric embedding to guide
the local point matching. We evaluate DGC-GNN on both indoor and outdoor
datasets, demonstrating that it not only doubles the accuracy of the
state-of-the-art descriptor-free algorithm but also substantially narrows the
performance gap between descriptor-based and descriptor-free methods. The code
and trained models will be made publicly available.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 20:21:15 GMT"
}
] | 2023-06-23T00:00:00 |
[
[
"Wang",
"Shuzhe",
""
],
[
"Kannala",
"Juho",
""
],
[
"Barath",
"Daniel",
""
]
] |
new_dataset
| 0.988866 |