| id (string, 9–10 chars) | submitter (string, 2–52 chars, nullable ⌀) | authors (string, 4–6.51k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable ⌀) | journal-ref (string, 4–345 chars, nullable ⌀) | doi (string, 11–120 chars, nullable ⌀) | report-no (string, 2–243 chars, nullable ⌀) | categories (string, 5–98 chars) | license (string, 9 classes) | abstract (string, 33–3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95–1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.17157
|
Simeon Adebola
|
Kaiyuan Chen, Ryan Hoque, Karthik Dharmarajan, Edith LLontop, Simeon
Adebola, Jeffrey Ichnowski, John Kubiatowicz, and Ken Goldberg
|
FogROS2-SGC: A ROS2 Cloud Robotics Platform for Secure Global
Connectivity
|
9 pages, 8 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The Robot Operating System (ROS2) is the most widely used software platform
for building robotics applications. FogROS2 extends ROS2 to allow robots to
access cloud computing on demand. However, ROS2 and FogROS2 assume that all
robots are locally connected and that each robot has full access and control of
the other robots. With applications like distributed multi-robot systems,
remote robot control, and mobile robots, robotics increasingly involves the
global Internet and complex trust management. Existing approaches for
connecting disjoint ROS2 networks lack key features such as security,
compatibility, efficiency, and ease of use. We introduce FogROS2-SGC, an
extension of FogROS2 that can effectively connect robot systems across
different physical locations, networks, and Data Distribution Services (DDS).
With globally unique and location-independent identifiers, FogROS2-SGC securely
and efficiently routes data between robotics components around the globe.
FogROS2-SGC is agnostic to the ROS2 distribution and configuration, is
compatible with non-ROS2 software, and seamlessly extends existing ROS2
applications without any code modification. Experiments suggest FogROS2-SGC is
19x faster than rosbridge (a ROS2 package with comparable features, but lacking
security). We also apply FogROS2-SGC to 4 robots and compute nodes that are
3600 km apart. Videos and code are available on the project website
https://sites.google.com/view/fogros2-sgc.
|
[
{
"version": "v1",
"created": "Thu, 29 Jun 2023 17:57:55 GMT"
}
] | 2023-06-30T00:00:00 |
[
[
"Chen",
"Kaiyuan",
""
],
[
"Hoque",
"Ryan",
""
],
[
"Dharmarajan",
"Karthik",
""
],
[
"LLontop",
"Edith",
""
],
[
"Adebola",
"Simeon",
""
],
[
"Ichnowski",
"Jeffrey",
""
],
[
"Kubiatowicz",
"John",
""
],
[
"Goldberg",
"Ken",
""
]
] |
new_dataset
| 0.998552 |
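The key mechanism in the abstract above is routing on globally unique, location-independent names rather than network addresses. As a hedged sketch of that general idea (our illustration; the actual FogROS2-SGC names are 256-bit identifiers derived from security certificates, and the construction below is an assumption, not the paper's):

```python
import hashlib

def global_topic_id(public_key_pem: bytes, topic: str, msg_type: str) -> str:
    """Derive a globally unique, location-independent identifier for a ROS2
    topic by hashing its key and naming metadata. Illustrative sketch of
    name-based routing, not the exact FogROS2-SGC construction.
    """
    h = hashlib.sha256()
    h.update(public_key_pem)      # binds the name to a key pair
    h.update(topic.encode())      # e.g. "/camera/image_raw"
    h.update(msg_type.encode())   # e.g. "sensor_msgs/msg/Image"
    return h.hexdigest()          # stable across networks and DDS vendors

# Two parties that share the key and topic metadata derive the same name,
# so a relay can route by name without knowing either party's location.
print(global_topic_id(b"-----BEGIN PUBLIC KEY-----...",
                      "/chatter", "std_msgs/msg/String"))
```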
2109.01537
|
Dimitris Gkoumas
|
Dimitris Gkoumas, Bo Wang, Adam Tsakalidis, Maria Wolters, Arkaitz
Zubiaga, Matthew Purver and Maria Liakata
|
A Longitudinal Multi-modal Dataset for Dementia Monitoring and Diagnosis
| null | null | null | null |
cs.CL cs.AI cs.DB cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Dementia is a family of neurodegenerative conditions affecting memory and
cognition in an increasing number of individuals in our globally aging
population. Automated analysis of language, speech, and paralinguistic
indicators has been gaining popularity as a potential marker of cognitive
decline. Here we propose a novel longitudinal multi-modal dataset collected
from people with mild dementia and age-matched controls over a period of
several months in a natural setting. The multi-modal data consist of spoken
conversations, a subset of which are transcribed, as well as typed and written
thoughts and associated extra-linguistic information such as pen strokes and
keystrokes. We describe the dataset in detail and proceed to focus on a task
using the speech modality: distinguishing controls from people with dementia
by exploiting the longitudinal nature of the data. Our experiments showed
significant differences in how speech varied from session to session between
the control and dementia groups.
|
[
{
"version": "v1",
"created": "Fri, 3 Sep 2021 14:02:12 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Gkoumas",
"Dimitris",
""
],
[
"Wang",
"Bo",
""
],
[
"Tsakalidis",
"Adam",
""
],
[
"Wolters",
"Maria",
""
],
[
"Zubiaga",
"Arkaitz",
""
],
[
"Purver",
"Matthew",
""
],
[
"Liakata",
"Maria",
""
]
] |
new_dataset
| 0.999699 |
2203.05566
|
Naomi Patterson
|
Alexander Senchenko, Naomi Patterson, Hamman Samuel, Dan Isper
|
SUPERNOVA: Automating Test Selection and Defect Prevention in AAA Video
Games Using Risk Based Testing and Machine Learning
|
ICST 2022 Industry Track Proceedings, 10 pages, 8 figures, 2 tables
|
Verification and Validation (ICST), 2022, pp. 345-354
|
10.1109/ICST53961.2022.00043
| null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Testing video games is an increasingly difficult task as traditional methods
fail to scale with growing software systems. Manual testing is a very
labor-intensive process, and therefore quickly becomes cost prohibitive. Using
scripts for automated testing is affordable; however, scripts are ineffective
in non-deterministic environments, and knowing when to run each test is another
problem altogether. Modern games' complexity, scope, and player expectations
are rapidly increasing, and quality control accounts for a large portion of
production cost and delivery risk. Reducing this risk while keeping production
on track is currently a major challenge for the industry. To keep production
costs realistic up to and after release, we focus on preventive quality
assurance tactics alongside testing and data analysis automation. We present
SUPERNOVA (Selection of tests and Universal defect Prevention in External
Repositories for Novel Objective Verification of software Anomalies), a system
responsible for test selection and defect prevention that also functions as
an automation hub. By integrating data analysis functionality with machine and
deep learning capability, SUPERNOVA assists quality assurance testers in
finding bugs and developers in reducing defects, which improves stability
during the production cycle and keeps testing costs under control. The direct
impact has been a reduction of 55% or more in testing hours for a shipped,
undisclosed sports game title that used these test-selection optimizations.
Furthermore, using risk scores generated by a semi-supervised machine learning
model, we are able to detect, with 71% precision and 77% recall, whether a
change-list is bug-inducing, and provide a detailed breakdown of this inference
to developers. These efforts improve workflow and reduce the testing hours
required on game titles in development.
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 00:47:46 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 16:35:23 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Senchenko",
"Alexander",
""
],
[
"Patterson",
"Naomi",
""
],
[
"Samuel",
"Hamman",
""
],
[
"Isper",
"Dan",
""
]
] |
new_dataset
| 0.960022 |
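The focal loss mentioned in the abstract is the standard formulation of Lin et al. (2017); a minimal PyTorch sketch follows, with the usual default hyper-parameters rather than the paper's (unstated) values:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy examples so training
    concentrates on hard, misclassified ones.

    logits: raw model outputs, shape (N,); targets: {0,1} floats, shape (N,).
    gamma/alpha are the standard defaults, not values from the paper.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)        # prob. of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```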
2203.09501
|
Clemens Grabmayer
|
Clemens Grabmayer
|
A Coinductive Reformulation of Milner's Proof System for Regular
Expressions Modulo Bisimilarity
|
arXiv admin note: substantial text overlap with arXiv:2108.13104
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Milner (1984) defined an operational semantics for regular expressions as
finite-state processes. In order to axiomatize bisimilarity of regular
expressions under this process semantics, he adapted Salomaa's proof system
that is complete for equality of regular expressions under the language
semantics. Apart from most equational axioms, Milner's system Mil inherits from
Salomaa's system a non-algebraic rule for solving single fixed-point equations.
Recognizing distinctive properties of the process semantics that render
Salomaa's proof strategy inapplicable, Milner posed completeness of the system
Mil as an open question.
As a proof-theoretic approach to this problem we characterize the
derivational power that the fixed-point rule adds to the purely equational part
Mil$^-$ of Mil. We do so by means of a coinductive rule that permits cyclic
derivations that consist of a finite process graph with empty steps that
satisfies the layered loop existence and elimination property LLEE, and two of
its Mil$^{-}$-provable solutions. With this rule as replacement for the
fixed-point rule in Mil, we define the coinductive reformulation cMil as an
extension of Mil$^{-}$. In order to show that cMil and Mil are theorem
equivalent we develop effective proof transformations from Mil to cMil, and
vice versa. Since it is located halfway between bisimulations and proofs in
Milner's system Mil, cMil may serve as a beachhead for a completeness proof of
Mil.
This article extends our contribution to the CALCO 2022 proceedings. Here we
refine the proof transformations by framing them as eliminations of derivable
and admissible rules, and we link coinductive proofs to a coalgebraic
formulation of solutions of process graphs.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 17:50:48 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 17:46:46 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Feb 2023 13:23:06 GMT"
},
{
"version": "v4",
"created": "Tue, 2 May 2023 18:43:38 GMT"
},
{
"version": "v5",
"created": "Wed, 28 Jun 2023 14:57:35 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Grabmayer",
"Clemens",
""
]
] |
new_dataset
| 0.997163 |
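As background on the rule in question: the single-fixed-point rule that Mil inherits from Salomaa's system is usually stated as below. This is our recollection of the standard formulation in the literature, and the paper's own notation may differ:

```latex
% Salomaa-style fixed-point rule (standard formulation, not necessarily
% the paper's exact notation):
\[
  \frac{e \;=\; f \cdot e + g
        \qquad f \text{ does not have the empty-word property}}
       {e \;=\; f^{*} \cdot g}
\]
% The side condition guarantees that $f^{*} \cdot g$ is the unique
% solution of $x = f \cdot x + g$, which is what makes the rule
% non-algebraic.
```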
2204.06604
|
Irene Li
|
Irene Li, Keen You, Yujie Qiao, Lucas Huang, Chia-Chun Hsieh, Benjamin
Rosand, Jeremy Goldwasser, Dragomir Radev
|
EHRKit: A Python Natural Language Processing Toolkit for Electronic
Health Record Texts
| null | null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The Electronic Health Record (EHR) is an essential part of the modern medical
system and impacts healthcare delivery, operations, and research. Although
EHRs also contain structured information, unstructured text is attracting much
attention and has become an exciting research field. The success of recent
neural Natural Language Processing (NLP) methods has led to a new direction for
processing unstructured clinical notes. In this work, we create a Python
library for clinical texts, EHRKit. This library contains two main parts:
MIMIC-III-specific functions and task-specific functions. The first part
introduces a list of interfaces for accessing MIMIC-III NOTEEVENTS data,
including basic search, information retrieval, and information extraction. The
second part integrates many third-party libraries for up to 12 off-the-shelf
NLP tasks such as named entity recognition, summarization, and machine
translation.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2022 18:51:01 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jun 2022 04:59:39 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Aug 2022 01:20:32 GMT"
},
{
"version": "v4",
"created": "Wed, 10 Aug 2022 12:46:39 GMT"
},
{
"version": "v5",
"created": "Wed, 28 Jun 2023 03:03:26 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Li",
"Irene",
""
],
[
"You",
"Keen",
""
],
[
"Qiao",
"Yujie",
""
],
[
"Huang",
"Lucas",
""
],
[
"Hsieh",
"Chia-Chun",
""
],
[
"Rosand",
"Benjamin",
""
],
[
"Goldwasser",
"Jeremy",
""
],
[
"Radev",
"Dragomir",
""
]
] |
new_dataset
| 0.998939 |
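We cannot reproduce EHRKit's exact interfaces here, so the sketch below instead shows the kind of MIMIC-III NOTEEVENTS access the library's first part wraps, using plain pandas and the documented MIMIC-III column names (the keyword and sample size are our illustration):

```python
import pandas as pd

# NOTEEVENTS.csv comes from the credentialed MIMIC-III distribution; the
# column names below are the standard MIMIC-III ones.
notes = pd.read_csv(
    "NOTEEVENTS.csv",
    usecols=["SUBJECT_ID", "HADM_ID", "CATEGORY", "TEXT"],
    nrows=10_000,  # sample for interactive work; the full file is large
)

# Basic search: discharge summaries mentioning a keyword.
discharge = notes[notes["CATEGORY"] == "Discharge summary"]
hits = discharge[discharge["TEXT"].str.contains("pneumonia",
                                                case=False, na=False)]
print(len(hits), "matching discharge summaries")
```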
2206.05442
|
Iddo Drori
|
Iddo Drori, Sarah J. Zhang, Reece Shuttleworth, Sarah Zhang, Keith
Tyser, Zad Chin, Pedro Lantigua, Saisamrit Surbehera, Gregory Hunter, Derek
Austin, Leonard Tang, Yann Hicke, Sage Simhon, Sathwik Karnik, Darnell
Granberry, Madeleine Udell
|
From Human Days to Machine Seconds: Automatically Answering and
Generating Machine Learning Final Exams
|
9 pages
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
A final exam in machine learning at a top institution such as MIT, Harvard,
or Cornell typically takes faculty days to write, and students hours to solve.
We demonstrate that large language models pass machine learning finals at a
human level, on finals available online after the models were trained, and
automatically generate new human-quality final exam questions in seconds.
Previous work has developed program synthesis and few-shot learning methods to
solve university-level problem set questions in mathematics and STEM courses.
In this work, we develop and compare methods that solve final exams, which
differ from problem sets in several ways: the questions are longer, have
multiple parts, are more complicated, and span a broader set of topics. We
curate a dataset and benchmark of questions from machine learning final exams
available online and code for answering these questions and generating new
questions. We show how to generate new questions from other questions and
course notes. For reproducibility and future research on this final exam
benchmark, we use automatic checkers for multiple-choice, numeric, and
expression-answer questions. We perform ablation studies comparing
zero-shot learning with few-shot learning and chain-of-thought prompting using
GPT-3, OPT, Codex, and ChatGPT across machine learning topics and find that
few-shot learning methods perform best. We highlight the transformative
potential of language models to streamline the writing and solution of
large-scale assessments, significantly reducing the workload from human days to
mere machine seconds. Our results suggest that rather than banning large
language models such as ChatGPT in class, instructors should teach students to
harness them by asking students meta-questions about correctness, completeness,
and originality of the responses generated, encouraging critical thinking in
academic studies.
|
[
{
"version": "v1",
"created": "Sat, 11 Jun 2022 06:38:06 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Aug 2022 23:56:52 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Dec 2022 19:37:45 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Dec 2022 18:59:36 GMT"
},
{
"version": "v5",
"created": "Fri, 23 Dec 2022 13:41:18 GMT"
},
{
"version": "v6",
"created": "Thu, 15 Jun 2023 03:32:23 GMT"
},
{
"version": "v7",
"created": "Wed, 28 Jun 2023 04:42:05 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Drori",
"Iddo",
""
],
[
"Zhang",
"Sarah J.",
""
],
[
"Shuttleworth",
"Reece",
""
],
[
"Zhang",
"Sarah",
""
],
[
"Tyser",
"Keith",
""
],
[
"Chin",
"Zad",
""
],
[
"Lantigua",
"Pedro",
""
],
[
"Surbehera",
"Saisamrit",
""
],
[
"Hunter",
"Gregory",
""
],
[
"Austin",
"Derek",
""
],
[
"Tang",
"Leonard",
""
],
[
"Hicke",
"Yann",
""
],
[
"Simhon",
"Sage",
""
],
[
"Karnik",
"Sathwik",
""
],
[
"Granberry",
"Darnell",
""
],
[
"Udell",
"Madeleine",
""
]
] |
new_dataset
| 0.980092 |
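To make the zero-shot vs. few-shot vs. chain-of-thought comparison concrete, here is a generic prompt-assembly sketch; the wording and example questions are illustrative, not the paper's actual templates:

```python
def build_prompt(question: str, examples=None, chain_of_thought=False) -> str:
    """Assemble a zero-shot or few-shot prompt of the general kind the
    paper compares. examples: optional list of (question, answer) pairs
    for few-shot mode; chain_of_thought appends a step-by-step cue.
    """
    parts = []
    for q, a in (examples or []):          # few-shot demonstrations
        parts.append(f"Question: {q}\nAnswer: {a}\n")
    cue = "Let's think step by step.\n" if chain_of_thought else ""
    parts.append(f"Question: {question}\n{cue}Answer:")
    return "\n".join(parts)

# Zero-shot: no examples; few-shot: a handful of solved exam questions.
print(build_prompt(
    "What is the VC dimension of a linear classifier in R^2?",
    examples=[("What does SGD stand for?", "Stochastic gradient descent.")],
    chain_of_thought=True))
```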
2211.05206
|
Friederike Groschupp
|
Friederike Groschupp, Mark Kuhne, Moritz Schneider, Ivan Puddu, Shweta
Shinde, Srdjan Capkun
|
It's TEEtime: A New Architecture Bringing Sovereignty to Smartphones
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern smartphones are complex systems in which control over phone resources
is exercised by phone manufacturers, OS vendors, and users. These stakeholders
have diverse and often competing interests. Barring some exceptions, users
entrust their security and privacy to OS vendors (Android and iOS) and need to
accept their constraints. Manufacturers protect their firmware and peripherals
from the OS by executing in the highest privilege and leveraging dedicated CPUs
and TEEs. OS vendors need to trust the highest privileged code deployed by
manufacturers. This division of control over the phone is not ideal for OS
vendors and is even more disadvantageous for the users. Users are generally
limited in what applications they can install on their devices, in the privacy
model and trust assumptions of the existing applications, and in the
functionalities that applications can have.
We propose TEEtime, a new smartphone architecture based on trusted execution
that allows balancing the control different stakeholders exert over phones. More
leveled control over the phone means that no stakeholder is more privileged
than the others. In particular, TEEtime makes users sovereign over their
phones: It enables them to install sensitive applications in isolated domains
with protected access to selected peripherals alongside an OS. TEEtime achieves
this while maintaining compatibility with the existing smartphone ecosystem and
without relying on virtualization; it only assumes trust in a phone's firmware.
TEEtime is the first TEE architecture that allows isolated execution domains to
gain protected and direct access to peripherals. TEEtime is based on Armv8-A
and achieves peripheral isolation using a novel mechanism based on memory and
interrupt controller protection. We demonstrate the feasibility of our design
by implementing a prototype of TEEtime, and by running exemplary sensitive
applications.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 21:26:37 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 16:26:56 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Groschupp",
"Friederike",
""
],
[
"Kuhne",
"Mark",
""
],
[
"Schneider",
"Moritz",
""
],
[
"Puddu",
"Ivan",
""
],
[
"Shinde",
"Shweta",
""
],
[
"Capkun",
"Srdjan",
""
]
] |
new_dataset
| 0.998852 |
2211.07042
|
Nicole Wein
|
Shyan Akmal and Nicole Wein
|
A Local-to-Global Theorem for Congested Shortest Paths
|
Updated to reflect reviewer comments
| null | null | null |
cs.DS cs.CC cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Amiri and Wargalla (2020) proved the following local-to-global theorem in
directed acyclic graphs (DAGs): if $G$ is a weighted DAG such that for each
subset $S$ of 3 nodes there is a shortest path containing every node in $S$,
then there exists a pair $(s,t)$ of nodes such that there is a shortest
$st$-path containing every node in $G$.
We extend this theorem to general graphs. For undirected graphs, we prove
that the same theorem holds (up to a difference in the constant 3). For
directed graphs, we provide a counterexample to the theorem (for any constant),
and prove a roundtrip analogue of the theorem which shows there exists a pair
$(s,t)$ of nodes such that every node in $G$ is contained in the union of a
shortest $st$-path and a shortest $ts$-path.
The original theorem for DAGs has an application to the $k$-Shortest Paths
with Congestion $c$ (($k,c$)-SPC) problem. In this problem, we are given a
weighted graph $G$, together with $k$ node pairs $(s_1,t_1),\dots,(s_k,t_k)$,
and a positive integer $c\leq k$. We are tasked with finding paths $P_1,\dots,
P_k$ such that each $P_i$ is a shortest path from $s_i$ to $t_i$, and every
node in the graph is on at most $c$ paths $P_i$, or reporting that no such
collection of paths exists.
When $c=k$ the problem is easily solved by finding shortest paths for each
pair $(s_i,t_i)$ independently. When $c=1$, the $(k,c)$-SPC problem recovers
the $k$-Disjoint Shortest Paths ($k$-DSP) problem, where the collection of
shortest paths must be node-disjoint. For fixed $k$, $k$-DSP can be solved in
polynomial time on DAGs and undirected graphs. Previous work shows that the
local-to-global theorem for DAGs implies that $(k,c)$-SPC can be solved in
polynomial time on DAGs whenever $k-c$ is constant. In the same way, our work
implies that $(k,c)$-SPC can be solved in polynomial time on undirected graphs
whenever $k-c$ is constant.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 23:08:27 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Jun 2023 23:32:38 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Akmal",
"Shyan",
""
],
[
"Wein",
"Nicole",
""
]
] |
new_dataset
| 0.982153 |
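The (k,c)-SPC problem definition above is precise enough to check candidate solutions mechanically. A small verifier sketch (not a solver) using networkx, assuming edges carry the given weight attribute:

```python
import networkx as nx

def is_valid_kc_spc(G, pairs, paths, c, weight="weight"):
    """Check a candidate (k,c)-SPC solution: every P_i must be a shortest
    s_i-t_i path, and every node may lie on at most c of the paths.

    pairs: list of (s_i, t_i); paths: list of node lists P_i.
    """
    load = {}
    for (s, t), path in zip(pairs, paths):
        if path[0] != s or path[-1] != t:
            return False
        path_len = sum(G[u][v][weight] for u, v in zip(path, path[1:]))
        if path_len != nx.shortest_path_length(G, s, t, weight=weight):
            return False                 # P_i is not a shortest s_i-t_i path
        for v in set(path):
            load[v] = load.get(v, 0) + 1
            if load[v] > c:
                return False             # some node is on more than c paths
    return True
```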
2212.01476
|
Chao Zhao
|
Chao Zhao, Faeze Brahman, Kaiqiang Song, Wenlin Yao, Dian Yu, Snigdha
Chaturvedi
|
NarraSum: A Large-Scale Dataset for Abstractive Narrative Summarization
|
EMNLP Findings 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Narrative summarization aims to produce a distilled version of a narrative to
describe its most salient events and characters. Summarizing a narrative is
challenging as it requires an understanding of event causality and character
behaviors. To encourage research in this direction, we propose NarraSum, a
large-scale narrative summarization dataset. It contains 122K narrative
documents, which are collected from plot descriptions of movies and TV episodes
with diverse genres, and their corresponding abstractive summaries. Experiments
show that there is a large performance gap between humans and the
state-of-the-art summarization models on NarraSum. We hope that this dataset
will promote future research in summarization, as well as broader studies of
natural language understanding and generation. The dataset is available at
https://github.com/zhaochaocs/narrasum.
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 22:51:51 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 04:08:20 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Zhao",
"Chao",
""
],
[
"Brahman",
"Faeze",
""
],
[
"Song",
"Kaiqiang",
""
],
[
"Yao",
"Wenlin",
""
],
[
"Yu",
"Dian",
""
],
[
"Chaturvedi",
"Snigdha",
""
]
] |
new_dataset
| 0.999845 |
2301.07695
|
Gyubok Lee
|
Gyubok Lee, Hyeonji Hwang, Seongsu Bae, Yeonsu Kwon, Woncheol Shin,
Seongjun Yang, Minjoon Seo, Jong-Yeup Kim, Edward Choi
|
EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records
|
Published as a conference paper at NeurIPS 2022 (Track on Datasets
and Benchmarks)
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new text-to-SQL dataset for electronic health records (EHRs).
The utterances were collected from 222 hospital staff members, including
physicians, nurses, and insurance review and health records teams. To construct
the QA dataset on structured EHR data, we conducted a poll at a university
hospital and used the responses to create seed questions. We then manually
linked these questions to two open-source EHR databases, MIMIC-III and eICU,
and included various time expressions and held-out unanswerable questions in
the dataset, which were also collected from the poll. Our dataset poses a
unique set of challenges: the model needs to 1) generate SQL queries that
reflect a wide range of needs in the hospital, including simple retrieval and
complex operations such as calculating survival rate, 2) understand various
time expressions to answer time-sensitive questions in healthcare, and 3)
distinguish whether a given question is answerable or unanswerable. We believe
our dataset, EHRSQL, can serve as a practical benchmark for developing and
assessing QA models on structured EHR data and take a step further towards
bridging the gap between text-to-SQL research and its real-life deployment in
healthcare. EHRSQL is available at https://github.com/glee4810/EHRSQL.
|
[
{
"version": "v1",
"created": "Mon, 16 Jan 2023 05:10:20 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Feb 2023 19:10:08 GMT"
},
{
"version": "v3",
"created": "Tue, 11 Apr 2023 04:39:31 GMT"
},
{
"version": "v4",
"created": "Wed, 28 Jun 2023 15:16:51 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Lee",
"Gyubok",
""
],
[
"Hwang",
"Hyeonji",
""
],
[
"Bae",
"Seongsu",
""
],
[
"Kwon",
"Yeonsu",
""
],
[
"Shin",
"Woncheol",
""
],
[
"Yang",
"Seongjun",
""
],
[
"Seo",
"Minjoon",
""
],
[
"Kim",
"Jong-Yeup",
""
],
[
"Choi",
"Edward",
""
]
] |
new_dataset
| 0.999833 |
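For a flavor of the target SQL, the sketch below runs a survival-rate style query against a SQLite load of the standard MIMIC-III schema, whose PATIENTS table has an EXPIRE_FLAG column (1 for patients who died). The question and query are our illustration, not items taken from EHRSQL itself:

```python
import sqlite3

# Assumes mimic3.db contains the standard MIMIC-III PATIENTS table.
conn = sqlite3.connect("mimic3.db")
question = "What is the overall survival rate of patients in the hospital?"
query = """
    SELECT 1.0 - AVG(EXPIRE_FLAG) AS survival_rate
    FROM PATIENTS;
"""
(survival_rate,) = conn.execute(query).fetchone()
print(f"{question} -> {survival_rate:.3f}")
```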
2302.00952
|
Mingchen Zhuge
|
Weimin Shi, Mingchen Zhuge, Dehong Gao, Zhong Zhou, Ming-Ming Cheng,
Deng-Ping Fan
|
QR-CLIP: Introducing Explicit Open-World Knowledge for Location and Time
Reasoning
|
Technical Report. Github: https://github.com/Shi-Wm/QR-CLIP
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Daily images may convey abstract meanings that require us to memorize and
infer profound information from them. To encourage such human-like reasoning,
in this work, we teach machines to predict where and when an image was taken,
rather than performing basic tasks like traditional segmentation or
classification.
Inspired by Horn's QR theory, we designed a novel QR-CLIP model consisting of
two components: 1) the Quantity module first retrospects more open-world
knowledge as the candidate language inputs; 2) the Relevance module carefully
estimates vision and language cues and infers the location and time.
Experiments show QR-CLIP's effectiveness: it outperforms the previous SOTA
with average relative lifts of about 10% on location reasoning and 130% on
time reasoning. This study lays a technical foundation for
location and time reasoning and suggests that effectively introducing
open-world knowledge is one of the panaceas for the tasks.
|
[
{
"version": "v1",
"created": "Thu, 2 Feb 2023 08:44:12 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 15:14:45 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Jun 2023 09:41:25 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Shi",
"Weimin",
""
],
[
"Zhuge",
"Mingchen",
""
],
[
"Gao",
"Dehong",
""
],
[
"Zhou",
"Zhong",
""
],
[
"Cheng",
"Ming-Ming",
""
],
[
"Fan",
"Deng-Ping",
""
]
] |
new_dataset
| 0.999185 |
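To show the vision-language scoring QR-CLIP builds on (plain CLIP ranking, not the paper's Quantity/Relevance modules), a minimal sketch with Hugging Face transformers; the image path and candidate descriptions are made up:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
candidates = ["a photo taken in Paris in winter",
              "a photo taken in Tokyo at night",
              "a photo taken in New York in summer"]

# Score each candidate location/time description against the image.
inputs = processor(text=candidates, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # shape (1, num_candidates)
probs = logits.softmax(dim=-1)[0]
for text, p in zip(candidates, probs):
    print(f"{p:.2f}  {text}")
```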
2302.09444
|
Mohammad Khalid Jawed
|
Andrew Choi, Dezhong Tong, Brian Park, Demetri Terzopoulos, Jungseock
Joo, Mohammad Khalid Jawed
|
mBEST: Realtime Deformable Linear Object Detection Through Minimal
Bending Energy Skeleton Pixel Traversals
|
YouTube video: https://youtu.be/q84I9i0DOK4
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Robotic manipulation of deformable materials is a challenging task that often
requires realtime visual feedback. This is especially true for deformable
linear objects (DLOs) or "rods", whose slender and flexible structures make
proper tracking and detection nontrivial. To address this challenge, we present
mBEST, a robust algorithm for the realtime detection of DLOs that is capable of
producing an ordered pixel sequence of each DLO's centerline along with
segmentation masks. Our algorithm obtains a binary mask of the DLOs and then
thins it to produce a skeleton pixel representation. After refining the
skeleton to ensure topological correctness, the pixels are traversed to
generate paths along each unique DLO. At the core of our algorithm, we
postulate that intersections can be robustly handled by choosing the
combination of paths that minimizes the cumulative bending energy of the
DLO(s). We show that this simple and intuitive formulation outperforms the
state-of-the-art methods for detecting DLOs with large numbers of sporadic
crossings ranging from curvatures with high variance to nearly-parallel
configurations. Furthermore, our method achieves a significant performance
improvement of approximately 50% faster runtime and better scaling over the
state of the art.
|
[
{
"version": "v1",
"created": "Sat, 18 Feb 2023 23:45:29 GMT"
},
{
"version": "v2",
"created": "Tue, 23 May 2023 02:44:40 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jun 2023 20:41:34 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Jun 2023 20:58:44 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Choi",
"Andrew",
""
],
[
"Tong",
"Dezhong",
""
],
[
"Park",
"Brian",
""
],
[
"Terzopoulos",
"Demetri",
""
],
[
"Joo",
"Jungseock",
""
],
[
"Jawed",
"Mohammad Khalid",
""
]
] |
new_dataset
| 0.999553 |
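The bending-energy criterion at the heart of mBEST is easy to state concretely: among candidate pairings at an intersection, prefer the one whose resulting paths turn the least. A sketch of one common discretization (sum of squared turning angles; not necessarily the paper's exact formula):

```python
import numpy as np

def bending_energy(path_xy: np.ndarray) -> float:
    """Discrete bending energy of an ordered centerline: sum of squared
    turning angles between consecutive segments.
    """
    seg = np.diff(path_xy, axis=0)                    # segment vectors
    seg = seg / np.linalg.norm(seg, axis=1, keepdims=True)
    cos_theta = np.clip((seg[:-1] * seg[1:]).sum(axis=1), -1.0, 1.0)
    theta = np.arccos(cos_theta)                      # turning angle per joint
    return float((theta ** 2).sum())

# At an X-shaped crossing, pairing branches that continue straight through
# yields lower energy than pairing ones that make a sharp turn:
straight = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], float)
bent = np.array([[0, 0], [1, 0], [1, 1], [1, 2]], float)
print(bending_energy(straight), "<", bending_energy(bent))
```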
2303.14307
|
Pingchuan Ma
|
Pingchuan Ma, Alexandros Haliassos, Adriana Fernandez-Lopez, Honglie
Chen, Stavros Petridis, Maja Pantic
|
Auto-AVSR: Audio-Visual Speech Recognition with Automatic Labels
|
Accepted to ICASSP 2023
| null |
10.1109/ICASSP49357.2023.10096889
| null |
cs.CV cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Audio-visual speech recognition has received a lot of attention due to its
robustness against acoustic noise. Recently, the performance of automatic,
visual, and audio-visual speech recognition (ASR, VSR, and AV-ASR,
respectively) has been substantially improved, mainly due to the use of larger
models and training sets. However, accurate labelling of datasets is
time-consuming and expensive. Hence, in this work, we investigate the use of
automatically-generated transcriptions of unlabelled datasets to increase the
training set size. For this purpose, we use publicly-available pre-trained ASR
models to automatically transcribe unlabelled datasets such as AVSpeech and
VoxCeleb2. Then, we train ASR, VSR and AV-ASR models on the augmented training
set, which consists of the LRS2 and LRS3 datasets as well as the additional
automatically-transcribed data. We demonstrate that increasing the size of the
training set, a recent trend in the literature, leads to reduced WER despite
using noisy transcriptions. The proposed model achieves new state-of-the-art
performance on AV-ASR on LRS2 and LRS3. In particular, it achieves a WER of
0.9% on LRS3, a relative improvement of 30% over the current state-of-the-art
approach, and outperforms methods that have been trained on non-publicly
available datasets with 26 times more training data.
|
[
{
"version": "v1",
"created": "Sat, 25 Mar 2023 00:37:34 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 16:22:36 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Jun 2023 14:41:17 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Ma",
"Pingchuan",
""
],
[
"Haliassos",
"Alexandros",
""
],
[
"Fernandez-Lopez",
"Adriana",
""
],
[
"Chen",
"Honglie",
""
],
[
"Petridis",
"Stavros",
""
],
[
"Pantic",
"Maja",
""
]
] |
new_dataset
| 0.998382 |
2303.17503
|
Sotetsu Koyamada
|
Sotetsu Koyamada, Shinri Okano, Soichiro Nishimori, Yu Murata, Keigo
Habara, Haruka Kita, Shin Ishii
|
Pgx: Hardware-accelerated Parallel Game Simulators for Reinforcement
Learning
|
9 pages
| null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose Pgx, a suite of board game reinforcement learning (RL)
environments written in JAX and optimized for GPU/TPU accelerators. By
leveraging auto-vectorization and Just-In-Time (JIT) compilation of JAX, Pgx
can efficiently scale to thousands of parallel executions over accelerators. In
our experiments on a DGX-A100 workstation, we discovered that Pgx can simulate
RL environments 10-100x faster than existing Python RL libraries. Pgx includes
RL environments commonly used as benchmarks in RL research, such as backgammon,
chess, shogi, and Go. Additionally, Pgx offers miniature game sets and baseline
models to facilitate rapid research cycles. We demonstrate the efficient
training of the Gumbel AlphaZero algorithm with Pgx environments. Overall, Pgx
provides high-performance environment simulators for researchers to accelerate
their RL experiments. Pgx is available at https://github.com/sotetsuk/pgx.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 02:41:23 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 02:48:17 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Koyamada",
"Sotetsu",
""
],
[
"Okano",
"Shinri",
""
],
[
"Nishimori",
"Soichiro",
""
],
[
"Murata",
"Yu",
""
],
[
"Habara",
"Keigo",
""
],
[
"Kita",
"Haruka",
""
],
[
"Ishii",
"Shin",
""
]
] |
new_dataset
| 0.966059 |
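The parallelization pattern Pgx relies on is worth seeing in miniature: write init/step for one environment, then `jax.vmap` + `jax.jit` to run thousands in lockstep on an accelerator. The toy dynamics below are ours, not a Pgx game:

```python
import jax
import jax.numpy as jnp

def init(key):
    return jnp.zeros(4)                    # one environment's state

def step(state, action):
    new_state = state.at[action].add(1.0)  # toy transition
    reward = new_state.sum()
    return new_state, reward

batch_init = jax.jit(jax.vmap(init))
batch_step = jax.jit(jax.vmap(step))

n_envs = 4096
keys = jax.random.split(jax.random.PRNGKey(0), n_envs)
states = batch_init(keys)
actions = jax.random.randint(jax.random.PRNGKey(1), (n_envs,), 0, 4)
states, rewards = batch_step(states, actions)
print(states.shape, rewards.shape)         # (4096, 4) (4096,)
```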
2303.17709
|
Michael Correll
|
Michael Correll
|
Teru Teru B\=ozu: Defensive Raincloud Plots
| null | null |
10.1111/cgf.14826
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Univariate visualizations like histograms, rug plots, or box plots provide
concise visual summaries of distributions. However, each individual
visualization may fail to robustly distinguish important features of a
distribution, or provide sufficient information for all of the relevant tasks
involved in summarizing univariate data. One solution is to juxtapose or
superimpose multiple univariate visualizations in the same chart, as in Allen
et al.'s "raincloud plots." In this paper I examine the design space of
raincloud plots, and, through a series of simulation studies, explore designs
where the component visualizations mutually "defend" against situations where
important distribution features are missed or trivial features are given undue
prominence. I suggest a class of "defensive" raincloud plot designs that
provide good mutual coverage for surfacing distributional features of interest.
|
[
{
"version": "v1",
"created": "Thu, 30 Mar 2023 21:03:33 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Correll",
"Michael",
""
]
] |
new_dataset
| 0.988255 |
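A minimal raincloud plot can be assembled from stock matplotlib pieces: a half-violin "cloud", a box-plot summary, and a jittered "rain" of raw points. The layout choices below are ours; the paper's point is that such combinations can defend each other's blind spots (e.g., the box plot alone would hide the bimodality here):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 100)])

fig, ax = plt.subplots(figsize=(6, 2.5))
parts = ax.violinplot(data, positions=[0], vert=False, showextrema=False)
for body in parts["bodies"]:
    verts = body.get_paths()[0].vertices
    verts[:, 1] = np.clip(verts[:, 1], 0, None)   # keep upper half: the cloud
ax.boxplot(data, positions=[-0.12], vert=False, widths=0.08, showfliers=False)
ax.scatter(data, rng.uniform(-0.35, -0.25, data.size), s=4, alpha=0.4)  # rain
ax.set_yticks([])
ax.set_xlabel("value")
plt.show()
```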
2304.12210
|
Mark Ibrahim
|
Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank
Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon,
Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping,
Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun and
Micah Goldblum
|
A Cookbook of Self-Supervised Learning
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning, dubbed the dark matter of intelligence, is a
promising path to advance machine learning. Yet, much like cooking, training
SSL methods is a delicate art with a high barrier to entry. While many
components are familiar, successfully training an SSL method involves a dizzying
set of choices from the pretext tasks to training hyper-parameters. Our goal is
to lower the barrier to entry into SSL research by laying the foundations and
latest SSL recipes in the style of a cookbook. We hope to empower the curious
researcher to navigate the terrain of methods, understand the role of the
various knobs, and gain the know-how required to explore how delicious SSL can
be.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 15:49:53 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 14:15:22 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Balestriero",
"Randall",
""
],
[
"Ibrahim",
"Mark",
""
],
[
"Sobal",
"Vlad",
""
],
[
"Morcos",
"Ari",
""
],
[
"Shekhar",
"Shashank",
""
],
[
"Goldstein",
"Tom",
""
],
[
"Bordes",
"Florian",
""
],
[
"Bardes",
"Adrien",
""
],
[
"Mialon",
"Gregoire",
""
],
[
"Tian",
"Yuandong",
""
],
[
"Schwarzschild",
"Avi",
""
],
[
"Wilson",
"Andrew Gordon",
""
],
[
"Geiping",
"Jonas",
""
],
[
"Garrido",
"Quentin",
""
],
[
"Fernandez",
"Pierre",
""
],
[
"Bar",
"Amir",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"LeCun",
"Yann",
""
],
[
"Goldblum",
"Micah",
""
]
] |
new_dataset
| 0.97905 |
2306.08640
|
Difei Gao
|
Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan
Fan, Mike Zheng Shou
|
AssistGPT: A General Multi-modal Assistant that can Plan, Execute,
Inspect, and Learn
|
Project page: https://showlab.github.io/assistgpt/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent research on Large Language Models (LLMs) has led to remarkable
advancements in general NLP AI assistants. Some studies have further explored
the use of LLMs for planning and invoking models or APIs to address more
general multi-modal user queries. Despite this progress, complex visual-based
tasks still remain challenging due to the diverse nature of visual tasks. This
diversity is reflected in two aspects: 1) Reasoning paths. For many real-life
applications, it is hard to accurately decompose a query simply by examining
the query itself. Planning based on the specific visual content and the results
of each step is usually required. 2) Flexible inputs and intermediate results.
Input forms can be flexible in in-the-wild cases, involving not only a
single image or video but a mixture of videos and images, e.g., a user-view
image with some reference videos. Moreover, a complex reasoning process will
also generate diverse multimodal intermediate results, e.g., video narrations,
segmented video clips, etc. To address such general cases, we propose a
multi-modal AI assistant, AssistGPT, with an interleaved code and language
reasoning approach called Plan, Execute, Inspect, and Learn (PEIL) to integrate
LLMs with various tools. Specifically, the Planner uses natural language to
plan which tool in the Executor should act next based on the current reasoning
progress. The Inspector is an efficient memory manager that assists the Planner
in feeding the proper visual information into a specific tool. Finally, since
the entire reasoning process is complex and flexible, a Learner is designed to
enable the model to autonomously explore and discover the optimal solution. We
conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving
state-of-the-art results. Moreover, showcases demonstrate the ability of our
system to handle questions far more complex than those found in the benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 17:12:56 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 05:00:35 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Gao",
"Difei",
""
],
[
"Ji",
"Lei",
""
],
[
"Zhou",
"Luowei",
""
],
[
"Lin",
"Kevin Qinghong",
""
],
[
"Chen",
"Joya",
""
],
[
"Fan",
"Zihan",
""
],
[
"Shou",
"Mike Zheng",
""
]
] |
new_dataset
| 0.992099 |
2306.09364
|
Vijay Ekambaram
|
Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, Jayant
Kalagnanam
|
TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series
Forecasting
|
Accepted in the Proceedings of the 29th ACM SIGKDD Conference on
Knowledge Discovery and Data Mining (KDD 23), Research Track. Delayed release
in arXiv to comply with the conference policies on the double-blind review
process. This paper has been submitted to the KDD peer-review process on Feb
02, 2023
| null |
10.1145/3580305.3599533
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Transformers have gained popularity in time series forecasting for their
ability to capture long-sequence interactions. However, their high memory and
computing requirements pose a critical bottleneck for long-term forecasting. To
address this, we propose TSMixer, a lightweight neural architecture exclusively
composed of multi-layer perceptron (MLP) modules. TSMixer is designed for
multivariate forecasting and representation learning on patched time series,
providing an efficient alternative to Transformers. Our model draws inspiration
from the success of MLP-Mixer models in computer vision. We demonstrate the
challenges involved in adapting Vision MLP-Mixer for time series and introduce
empirically validated components to enhance accuracy. This includes a novel
design paradigm of attaching online reconciliation heads to the MLP-Mixer
backbone, for explicitly modeling the time-series properties such as hierarchy
and channel-correlations. We also propose a Hybrid channel modeling approach to
effectively handle noisy channel interactions and generalization across diverse
datasets, a common challenge in existing patch channel-mixing methods.
Additionally, a simple gated attention mechanism is introduced in the backbone
to prioritize important features. By incorporating these lightweight
components, we significantly enhance the learning capability of simple MLP
structures, outperforming complex Transformer models with minimal computing
usage. Moreover, TSMixer's modular design enables compatibility with both
supervised and masked self-supervised learning methods, making it a promising
building block for time-series Foundation Models. TSMixer outperforms
state-of-the-art MLP and Transformer models in forecasting by a considerable
margin of 8-60%. It also outperforms the latest strong benchmarks of
Patch-Transformer models (by 1-2%) with a significant reduction in memory and
runtime (2-3X).
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 06:26:23 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 09:17:36 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Jun 2023 01:57:23 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Ekambaram",
"Vijay",
""
],
[
"Jati",
"Arindam",
""
],
[
"Nguyen",
"Nam",
""
],
[
"Sinthong",
"Phanwadee",
""
],
[
"Kalagnanam",
"Jayant",
""
]
] |
new_dataset
| 0.995601 |
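The MLP-Mixer backbone the abstract adapts is compact enough to sketch. Below is a generic mixer block for patched time series in PyTorch: one MLP mixes information across patches (time), another across features. This illustrates the backbone idea only; TSMixer's actual blocks add the gated attention and online reconciliation heads described above:

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """Generic MLP-Mixer-style block for patched time series."""
    def __init__(self, num_patches: int, d_model: int, hidden: int = 64):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.patch_mlp = nn.Sequential(
            nn.Linear(num_patches, hidden), nn.GELU(),
            nn.Linear(hidden, num_patches))
        self.norm2 = nn.LayerNorm(d_model)
        self.feature_mlp = nn.Sequential(
            nn.Linear(d_model, hidden), nn.GELU(),
            nn.Linear(hidden, d_model))

    def forward(self, x):                    # x: (batch, num_patches, d_model)
        y = self.norm1(x).transpose(1, 2)    # (batch, d_model, num_patches)
        x = x + self.patch_mlp(y).transpose(1, 2)   # mix across patches
        x = x + self.feature_mlp(self.norm2(x))     # mix across features
        return x

x = torch.randn(8, 16, 32)                   # 8 series, 16 patches, 32 dims
print(MixerBlock(num_patches=16, d_model=32)(x).shape)  # (8, 16, 32)
```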
2306.14764
|
Surendrabikram Thapa
|
Farhan Ahmad Jafri, Mohammad Aman Siddiqui, Surendrabikram Thapa,
Kritesh Rauniyar, Usman Naseem, Imran Razzak
|
Uncovering Political Hate Speech During Indian Election Campaign: A New
Low-Resource Dataset and Baselines
|
Accepted to ICWSM Workshop (MEDIATE)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The detection of hate speech in political discourse is a critical issue, and
this becomes even more challenging in low-resource languages. To address this
issue, we introduce a new dataset named IEHate, which contains 11,457 manually
annotated Hindi tweets related to the Indian Assembly Election Campaign from
November 1, 2021, to March 9, 2022. We performed a detailed analysis of the
dataset, focusing on the prevalence of hate speech in political communication
and the different forms of hateful language used. Additionally, we benchmark
the dataset using a range of machine learning, deep learning, and
transformer-based algorithms. Our experiments reveal that the performance of
these models can be further improved, highlighting the need for more advanced
techniques for hate speech detection in low-resource languages. In particular,
the higher scores of human evaluation relative to the algorithms emphasize the
importance of utilizing both human and automated approaches for effective hate
speech moderation. Our IEHate dataset can serve as a valuable resource for
researchers and practitioners working on developing and evaluating hate speech
detection techniques in low-resource languages. Overall, our work underscores
the importance of addressing the challenges of identifying and mitigating hate
speech in political discourse, particularly in the context of low-resource
languages. The dataset and resources for this work are made available at
https://github.com/Farhan-jafri/Indian-Election.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 15:17:54 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Jun 2023 16:55:14 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Jafri",
"Farhan Ahmad",
""
],
[
"Siddiqui",
"Mohammad Aman",
""
],
[
"Thapa",
"Surendrabikram",
""
],
[
"Rauniyar",
"Kritesh",
""
],
[
"Naseem",
"Usman",
""
],
[
"Razzak",
"Imran",
""
]
] |
new_dataset
| 0.999847 |
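One classical machine-learning baseline of the kind benchmarked on such datasets is character n-gram TF-IDF with logistic regression; character n-grams are a common choice for Hindi and code-mixed text. A sketch with scikit-learn, using toy strings in place of the actual tweets (which require the authors' release):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["... tweet text ...", "... another tweet ..."] * 50  # placeholders
labels = [0, 1] * 50                       # 1 = hate speech, 0 = non-hate

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels)
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), min_df=2),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```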
2306.15412
|
Haojie Wei
|
Haojie Wei, Xueke Cao, Tangpeng Dan, Yueguo Chen
|
RMVPE: A Robust Model for Vocal Pitch Estimation in Polyphonic Music
|
This paper has been accepted by INTERSPEECH 2023
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vocal pitch is an important high-level feature in music audio processing.
However, extracting vocal pitch in polyphonic music is more challenging due to
the presence of accompaniment. To eliminate the influence of the accompaniment,
most previous methods adopt music source separation models to obtain clean
vocals from polyphonic music before predicting vocal pitches. As a result, the
performance of vocal pitch estimation is affected by the music source
separation models. To address this issue and directly extract vocal pitches
from polyphonic music, we propose a robust model named RMVPE. This model can
extract effective hidden features and accurately predict vocal pitches from
polyphonic music. The experimental results demonstrate the superiority of RMVPE
in terms of raw pitch accuracy (RPA) and raw chroma accuracy (RCA).
Additionally, experiments conducted with different types of noise show that
RMVPE is robust across all signal-to-noise ratio (SNR) levels. The code of
RMVPE is available at https://github.com/Dream-High/RMVPE.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 12:11:55 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 01:53:37 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Wei",
"Haojie",
""
],
[
"Cao",
"Xueke",
""
],
[
"Dan",
"Tangpeng",
""
],
[
"Chen",
"Yueguo",
""
]
] |
new_dataset
| 0.984362 |
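The two metrics the abstract reports, RPA and RCA, are standard and available in mir_eval's melody-evaluation routines. A sketch on toy frame-level pitch tracks (times in seconds, frequencies in Hz, 0 Hz marking unvoiced frames):

```python
import numpy as np
import mir_eval

t = np.arange(0, 1.0, 0.01)
ref_freq = np.where(t < 0.5, 220.0, 0.0)   # voiced first half only
est_freq = np.where(t < 0.5, 221.0, 0.0)   # near-correct estimate

# Convert Hz tracks to cent scale plus voicing arrays, then score.
ref_v, ref_c, est_v, est_c = mir_eval.melody.to_cent_voicing(
    t, ref_freq, t, est_freq)
rpa = mir_eval.melody.raw_pitch_accuracy(ref_v, ref_c, est_v, est_c)
rca = mir_eval.melody.raw_chroma_accuracy(ref_v, ref_c, est_v, est_c)
print(f"RPA={rpa:.3f}  RCA={rca:.3f}")
```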
2306.15634
|
Gaspard Michel
|
No\'e Durandard and Viet-Anh Tran and Gaspard Michel and Elena V.
Epure
|
Automatic Annotation of Direct Speech in Written French Narratives
|
9 pages, ACL 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The automatic annotation of direct speech (AADS) in written text has been
often used in computational narrative understanding. Methods based on either
rules or deep neural networks have been explored, in particular for English or
German languages. Yet, for French, our target language, not many works exist.
Our goal is to create a unified framework to design and evaluate AADS models in
French. For this, we consolidated the largest-to-date French narrative dataset
annotated with DS per word; we adapted various baselines for sequence labelling
or from AADS in other languages; and we designed and conducted an extensive
evaluation focused on generalisation. Results show that the task still requires
substantial effort and highlight characteristics of each baseline. Although
this framework could be improved, it is a step towards encouraging more
research on the topic.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 17:21:00 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jun 2023 07:44:53 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Durandard",
"Noé",
""
],
[
"Tran",
"Viet-Anh",
""
],
[
"Michel",
"Gaspard",
""
],
[
"Epure",
"Elena V.",
""
]
] |
new_dataset
| 0.976472 |
2306.15704
|
Rui He
|
Yuanxi Sun, Rui He, Youzeng Li, Zuwei Huang, Feng Hu, Xu Cheng, Jie
Tang
|
MAE-GEBD:Winning the CVPR'2023 LOVEU-GEBD Challenge
|
Winner of CVPR2023 LOVEU GEBD Challenge
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The Generic Event Boundary Detection (GEBD) task aims to build a model for
segmenting videos into segments by detecting general event boundaries
applicable to various classes. In this paper, based on last year's MAE-GEBD
method, we have improved our model performance on the GEBD task by adjusting
the data processing strategy and loss function. Based on last year's approach,
we extended the application of pseudo-labels to a larger dataset and made many
experimental attempts. In addition, we applied focal loss to concentrate more
on difficult samples and improved our model performance. Finally, we improved
the segmentation alignment strategy used last year, and dynamically adjusted
the segmentation alignment method according to the boundary density and
duration of the video, so that our model can be more flexible and fully
applicable in different situations. With our method, we achieve an F1 score of
86.03% on the Kinetics-GEBD test set, which is a 0.09% improvement in the F1
score compared to our 2022 Kinetics-GEBD method.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 02:35:19 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Sun",
"Yuanxi",
""
],
[
"He",
"Rui",
""
],
[
"Li",
"Youzeng",
""
],
[
"Huang",
"Zuwei",
""
],
[
"Hu",
"Feng",
""
],
[
"Cheng",
"Xu",
""
],
[
"Tang",
"Jie",
""
]
] |
new_dataset
| 0.985601 |
2306.15748
|
Yifan Zhang
|
Yifan Zhang, Arnav Vaibhav Malawade, Xiaofang Zhang, Yuhui Li,
DongHwan Seong, Mohammad Abdullah Al Faruque and Sitao Huang
|
CARMA: Context-Aware Runtime Reconfiguration for Energy-Efficient Sensor
Fusion
|
Accepted to be published in the 2023 ACM/IEEE International Symposium
on Low Power Electronics and Design (ISLPED 2023)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous systems (AS) are systems that can adapt and change their behavior
in response to unanticipated events and include systems such as aerial drones,
autonomous vehicles, and ground/aquatic robots. AS require a wide array of
sensors, deep-learning models, and powerful hardware platforms to perceive and
safely operate in real-time. However, in many contexts, some sensing modalities
negatively impact perception while increasing the system's overall energy
consumption. Since AS are often energy-constrained edge devices,
energy-efficient sensor fusion methods have been proposed. However, existing
methods either fail to adapt to changing scenario conditions or to optimize
energy efficiency system-wide. We propose CARMA: a context-aware sensor fusion
approach that uses context to dynamically reconfigure the computation flow on a
Field-Programmable Gate Array (FPGA) at runtime. By clock-gating unused sensors
and model sub-components, CARMA significantly reduces the energy used by a
multi-sensory object detector without compromising performance. We use a
Deep-learning Processor Unit (DPU) based reconfiguration approach to minimize
the latency of model reconfiguration. We evaluate multiple
context-identification strategies, propose a novel system-wide
energy-performance joint optimization, and evaluate scenario-specific
perception performance. Across challenging real-world sensing contexts, CARMA
outperforms state-of-the-art methods with up to 1.3x speedup and 73% lower
energy consumption.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 19:00:07 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Zhang",
"Yifan",
""
],
[
"Malawade",
"Arnav Vaibhav",
""
],
[
"Zhang",
"Xiaofang",
""
],
[
"Li",
"Yuhui",
""
],
[
"Seong",
"DongHwan",
""
],
[
"Faruque",
"Mohammad Abdullah Al",
""
],
[
"Huang",
"Sitao",
""
]
] |
new_dataset
| 0.994797 |
2306.15769
|
Ali Shirali
|
Ali Shirali, Moritz Hardt
|
What Makes ImageNet Look Unlike LAION
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
ImageNet was famously created from Flickr image search results. What if we
recreated ImageNet instead by searching the massive LAION dataset based on
image captions alone? In this work, we carry out this counterfactual
investigation. We find that the resulting ImageNet recreation, which we call
LAIONet, looks distinctly unlike the original. Specifically, the intra-class
similarity of images in the original ImageNet is dramatically higher than it is
for LAIONet. Consequently, models trained on ImageNet perform significantly
worse on LAIONet. We propose a rigorous explanation for the discrepancy in
terms of a subtle, yet important, difference in two plausible causal
data-generating processes for the respective datasets, that we support with
systematic experimentation. In a nutshell, searching based on an image caption
alone creates an information bottleneck that mitigates the selection bias
otherwise present in image-based filtering. Our explanation formalizes a
long-held intuition in the community that ImageNet images are stereotypical,
unnatural, and overly simple representations of the class category. At the same
time, it provides a simple and actionable takeaway for future dataset creation
efforts.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 19:34:53 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Shirali",
"Ali",
""
],
[
"Hardt",
"Moritz",
""
]
] |
new_dataset
| 0.996244 |
2306.15794
|
Eric Nguyen
|
Eric Nguyen, Michael Poli, Marjan Faizi, Armin Thomas, Callum
Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli,
Yoshua Bengio, Stefano Ermon, Stephen A. Baccus, Chris R\'e
|
HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide
Resolution
| null | null | null | null |
cs.LG q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genomic (DNA) sequences encode an enormous amount of information for gene
regulation and protein synthesis. Similar to natural language models,
researchers have proposed foundation models in genomics to learn generalizable
features from unlabeled genome data that can then be fine-tuned for downstream
tasks such as identifying regulatory elements. Due to the quadratic scaling of
attention, previous Transformer-based genomic models have used 512 to 4k tokens
as context (<0.001% of the human genome), significantly limiting the modeling
of long-range interactions in DNA. In addition, these methods rely on
tokenizers to aggregate meaningful DNA units, losing single nucleotide
resolution where subtle genetic variations can completely alter protein
function via single nucleotide polymorphisms (SNPs). Recently, Hyena, a large
language model based on implicit convolutions, was shown to match attention in
quality while allowing longer context lengths and lower time complexity.
Leveraging Hyena's new long-range capabilities, we present HyenaDNA, a genomic
foundation model pretrained on the human reference genome with context lengths
of up to 1 million tokens at the single nucleotide-level, an up to 500x
increase over previous dense attention-based models. HyenaDNA scales
sub-quadratically in sequence length (training up to 160x faster than
Transformer), uses single nucleotide tokens, and has full global context at
each layer. We explore what longer context enables - including the first use of
in-context learning in genomics for simple adaptation to novel tasks without
updating pretrained model weights. On fine-tuned benchmarks from the Nucleotide
Transformer, HyenaDNA reaches state-of-the-art (SotA) on 12 of 17 datasets
using a model with orders of magnitude fewer parameters and less pretraining data. On
the GenomicBenchmarks, HyenaDNA surpasses SotA on all 8 datasets on average by
+9 accuracy points.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 20:46:34 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Nguyen",
"Eric",
""
],
[
"Poli",
"Michael",
""
],
[
"Faizi",
"Marjan",
""
],
[
"Thomas",
"Armin",
""
],
[
"Birch-Sykes",
"Callum",
""
],
[
"Wornow",
"Michael",
""
],
[
"Patel",
"Aman",
""
],
[
"Rabideau",
"Clayton",
""
],
[
"Massaroli",
"Stefano",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Baccus",
"Stephen A.",
""
],
[
"Ré",
"Chris",
""
]
] |
new_dataset
| 0.999663 |
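Single-nucleotide tokenization of the kind the abstract describes is simple: each base is its own token, so there is no k-mer aggregation and no loss of SNP-level resolution. The vocabulary below is a minimal illustration, not HyenaDNA's exact one:

```python
VOCAB = {ch: i for i, ch in enumerate("ACGTN")}   # N = unknown base

def encode(seq: str) -> list[int]:
    return [VOCAB.get(base, VOCAB["N"]) for base in seq.upper()]

def decode(ids: list[int]) -> str:
    inv = {i: ch for ch, i in VOCAB.items()}
    return "".join(inv[i] for i in ids)

ids = encode("ACGTTGCA")
print(ids)                  # [0, 1, 2, 3, 3, 2, 1, 0]
assert decode(ids) == "ACGTTGCA"
# A single-nucleotide change (SNP) flips exactly one token:
print(encode("ACGTTGCC"))   # last token differs
```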
2306.15813
|
Guohui Lin
|
Qiaojun Shu and Guohui Lin
|
Planar graphs are acyclically edge $(\Delta + 5)$-colorable
|
Full version with 120 pages
| null | null | null |
cs.DM cs.DS math.CO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
An edge coloring of a graph $G$ assigns colors to all the edges in the graph
such that adjacent edges receive different colors. It is acyclic if each cycle in
the graph receives at least three colors. Fiam{\v{c}}ik (1978) and Alon,
Sudakov and Zaks (2001) conjectured that every simple graph with maximum degree
$\Delta$ is acyclically edge $(\Delta + 2)$-colorable -- the well-known acyclic
edge coloring conjecture (AECC). Despite many major breakthroughs and minor
improvements, the conjecture remains open even for planar graphs. In this
paper, we prove that planar graphs are acyclically edge $(\Delta +
5)$-colorable. Our proof has two main steps: Using discharging methods, we
first show that every non-trivial planar graph must have one of the eight
groups of well characterized local structures; and then acyclically edge color
the graph using no more than $\Delta + 5$ colors by an induction on the number
of edges.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 22:14:15 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Shu",
"Qiaojun",
""
],
[
"Lin",
"Guohui",
""
]
] |
new_dataset
| 0.998128 |
2306.15852
|
Meenakshi Sarkar
|
Meenakshi Sarkar, Vinayak Honkote, Dibyendu Das and Debasish Ghose
|
Action-conditioned Deep Visual Prediction with RoAM, a new Indoor Human
Motion Dataset for Autonomous Robots
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
With the increasing adoption of robots across industries, it is crucial to
focus on developing advanced algorithms that enable robots to anticipate,
comprehend, and plan their actions effectively in collaboration with humans. We
introduce the Robot Autonomous Motion (RoAM) video dataset, which was collected
with a custom-made TurtleBot3 Burger robot in a variety of indoor environments,
recording various human motions from the robot's ego-vision. The dataset also
includes synchronized records of the LiDAR scan and all control actions taken
by the robot as it navigates around static and moving human agents. This unique
dataset provides an opportunity to develop and benchmark new visual prediction
frameworks that can predict future image frames based on the action taken by
the recording agent in partially observable scenarios or cases where the
imaging sensor is mounted on a moving platform. We have benchmarked the dataset
on our novel deep visual prediction framework called ACPNet where the
approximated future image frames are also conditioned on action taken by the
robot and demonstrated its potential for incorporating robot dynamics into the
video prediction paradigm for mobile robotics and autonomous navigation
research.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 00:58:44 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Sarkar",
"Meenakshi",
""
],
[
"Honkote",
"Vinayak",
""
],
[
"Das",
"Dibyendu",
""
],
[
"Ghose",
"Debasish",
""
]
] |
new_dataset
| 0.996488 |
2306.15853
|
Marjan Shahi
|
Marjan Shahi, David Clausi, Alexander Wong
|
GoalieNet: A Multi-Stage Network for Joint Goalie, Equipment, and Net
Pose Estimation in Ice Hockey
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In the field of computer vision-driven ice hockey analytics, one of the most
challenging and least studied tasks is goalie pose estimation. Unlike general
human pose estimation, goalie pose estimation is much more complex, as it
involves not only detecting keypoints corresponding to the goalie's joints,
concealed under thick padding and a mask, but also a large number of
non-human keypoints corresponding to the large leg pads and gloves worn, the
stick, and the hockey net. To tackle this challenge, we introduce
GoalieNet, a multi-stage deep neural network for jointly estimating the pose of
the goalie, their equipment, and the net. Experimental results using NHL
benchmark data demonstrate that the proposed GoalieNet can achieve an average
of 84\% accuracy across all keypoints, where 22 out of 29 keypoints are
detected with more than 80\% accuracy. This indicates that such a joint pose
estimation approach can be a promising research direction.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 01:00:36 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Shahi",
"Marjan",
""
],
[
"Clausi",
"David",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.99558 |
2306.15919
|
Junhyung Jo
|
Junhyung Jo, Hamidreza Kasaei
|
Fine-grained 3D object recognition: an approach and experiments
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Three-dimensional (3D) object recognition technology is being used as a core
technology in advanced technologies such as autonomous driving of automobiles.
There are two sets of approaches for 3D object recognition: (i) hand-crafted
approaches like Global Orthographic Object Descriptor (GOOD), and (ii) deep
learning-based approaches such as MobileNet and VGG. However, it remains
unclear which of these approaches works better in an open-ended domain, where
the number of known categories increases over time and the system must learn
about new object categories using few training examples. In this paper, we
first implement an offline 3D object recognition system that takes an object
view as input and generates category labels as output. In the offline stage,
instance-based learning (IBL) is used to form new categories, and we use
K-fold cross-validation to evaluate the obtained object recognition
performance. We then test the proposed approach in an online fashion by
integrating the code into a simulated teacher test. As a result, we conclude
that the approach using deep learning features is more suitable for the
open-ended setting. Moreover, we observe that concatenating the hand-crafted
and deep learning features increases the classification accuracy.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 04:48:21 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Jo",
"Junhyung",
""
],
[
"Kasaei",
"Hamidreza",
""
]
] |
new_dataset
| 0.999594 |
2306.15943
|
Raashid Altaf
|
Raashid Altaf, Pravesh Biyani
|
No Transfers Required: Integrating Last Mile with Public Transit Using
Opti-Mile
| null | null | null | null |
cs.CY cs.AI math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Public transit is a popular mode of transportation due to its affordability,
despite the inconvenience of the transfers required to reach most areas. For
example, in the bus and metro network of New Delhi, only 30\% of
stops can be directly accessed from any starting point, thus requiring
transfers for most commutes. Additionally, last-mile services like rickshaws,
tuk-tuks or shuttles are commonly used as feeders to the nearest public transit
access points, which further adds to the complexity and inefficiency of a
journey. Ultimately, users often face a tradeoff between coverage and transfers
to reach their destination, regardless of the mode of transit or the use of
last-mile services. To address the problem of limited accessibility and
inefficiency due to transfers in public transit systems, we propose
``opti-mile," a novel trip planning approach that combines last-mile services
with public transit such that no transfers are required. Opti-mile allows users
to customise trip parameters such as maximum walking distance, and acceptable
fare range. We analyse the transit network of New Delhi, evaluating the
efficiency, feasibility and advantages of opti-mile for optimal multi-modal
trips between randomly selected source-destination pairs. We demonstrate that
opti-mile trips lead to a 10% reduction in distance travelled for an 18%
increase in price compared to traditional shortest paths. We also show that
opti-mile
trips provide better coverage of the city than public transit, without a
significant fare increase.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 06:05:14 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Altaf",
"Raashid",
""
],
[
"Biyani",
"Pravesh",
""
]
] |
new_dataset
| 0.994236 |
2306.15945
|
Fredrik Berggren
|
Fredrik Berggren, Branislav M. Popovic
|
Permutation Polynomial Interleaved Zadoff-Chu Sequences
|
Submitted to IEEE Transactions on Information Theory
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Constant amplitude zero autocorrelation (CAZAC) sequences have modulus one
and ideal periodic autocorrelation function. Such sequences have been used in
communications systems, e.g., for reference signals, synchronization signals
and random access preambles. We propose a new family of CAZAC sequences, which
is constructed by interleaving a Zadoff-Chu sequence by a quadratic
permutation polynomial (QPP), or by a permutation polynomial whose inverse is
a QPP. It is
demonstrated that a set of orthogonal interleaved Zadoff-Chu sequences can be
constructed by proper choice of QPPs.
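A minimal numerical sketch of the ingredients (our illustration with assumed
parameters, not the paper's exact family):

```python
# Minimal sketch (not the authors' construction verbatim): interleave a
# Zadoff-Chu (ZC) sequence by a quadratic permutation polynomial (QPP)
# and check the CAZAC properties. Parameters below are illustrative.
import numpy as np

def zadoff_chu(N: int, u: int) -> np.ndarray:
    n = np.arange(N)                      # odd-length ZC with root u
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def qpp(N: int, f1: int, f2: int) -> np.ndarray:
    n = np.arange(N)                      # pi(n) = (f1*n + f2*n^2) mod N
    p = (f1 * n + f2 * n * n) % N
    assert len(set(p)) == N, "(f1, f2) does not define a permutation"
    return p

N, u = 9, 1
s = zadoff_chu(N, u)[qpp(N, f1=1, f2=3)]  # QPP-interleaved ZC sequence

# Constant amplitude and ideal periodic autocorrelation:
acf = [abs(np.vdot(s, np.roll(s, k))) for k in range(1, N)]
print(np.allclose(np.abs(s), 1.0), np.allclose(acf, 0.0, atol=1e-9))
```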
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 06:06:48 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Berggren",
"Fredrik",
""
],
[
"Popovic",
"Branislav M.",
""
]
] |
new_dataset
| 0.952166 |
2306.15953
|
Yi Hua
|
Yi Hua, Yongyi Zhao, Aswin C. Sankaranarayanan
|
Angle Sensitive Pixels for Lensless Imaging on Spherical Sensors
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose OrbCam, a lensless architecture for imaging with spherical
sensors. Prior work on lensless imaging has focused largely on planar sensors;
for such designs, a modulation element, e.g., an amplitude or phase mask, is
needed to construct an invertible imaging system. In contrast, we show that
the diversity of pixel orientations on a curved surface is sufficient to
improve the conditioning of the mapping between the scene and the sensor.
Hence, when imaging on a spherical sensor, all pixels can have the same
angular response function, such that the lensless imager is comprised of
pixels that are identical to each other and differ only in their orientations.
We provide the computational tools for designing the angular response of the
pixels in a spherical sensor so as to obtain well-conditioned and noise-robust
measurements. We validate our design in both simulation and a lab prototype.
The implication of our design is that lensless imaging can be easily enabled
for curved and flexible surfaces, thereby opening up a new set of application
domains.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 06:28:53 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Hua",
"Yi",
""
],
[
"Zhao",
"Yongyi",
""
],
[
"Sankaranarayanan",
"Aswin C.",
""
]
] |
new_dataset
| 0.997827 |
2306.15968
|
Xinyang Lu
|
Xinyang Lu, Flint Xiaofeng Fan and Tianying Wang
|
Action and Trajectory Planning for Urban Autonomous Driving with
Hierarchical Reinforcement Learning
|
ICML Workshop on New Frontiers in Learning, Control, and Dynamical
Systems
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement Learning (RL) has made promising progress in planning and
decision-making for Autonomous Vehicles (AVs) in simple driving scenarios.
However, existing RL algorithms for AVs fail to learn critical driving skills
in complex urban scenarios. First, urban driving scenarios require AVs to
handle multiple driving tasks of which conventional RL algorithms are
incapable. Second, the presence of other vehicles in urban scenarios results in
a dynamically changing environment, which challenges RL algorithms to plan the
action and trajectory of the AV. In this work, we propose an action and
trajectory planner using a Hierarchical Reinforcement Learning (atHRL)
method, which models agent behavior hierarchically using lidar and
bird's-eye-view perception. The proposed atHRL method learns to make decisions
about the agent's future trajectory and computes target waypoints in
continuous settings based on a hierarchical DDPG algorithm. The waypoints
planned by the atHRL model are then sent to a low-level controller to generate
the steering and throttle commands required for the vehicle maneuver. We
empirically verify the efficacy of atHRL through extensive experiments in
complex urban driving scenarios that compose multiple tasks with the presence
of other vehicles in the CARLA simulator. The experimental results suggest a
significant performance improvement compared to the state-of-the-art RL
methods.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 07:11:02 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Lu",
"Xinyang",
""
],
[
"Fan",
"Flint Xiaofeng",
""
],
[
"Wang",
"Tianying",
""
]
] |
new_dataset
| 0.960439 |
2306.15990
|
Mattia Giovanni Campana
|
Mattia Giovanni Campana, Franca Delmastro
|
MyDigitalFootprint: an extensive context dataset for pervasive computing
applications at the edge
| null |
Pervasive and Mobile Computing, Volume 70, January 2021, 101309
|
10.1016/j.pmcj.2020.101309
| null |
cs.LG cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The widespread diffusion of connected smart devices has contributed to the
rapid expansion and evolution of the Internet at its edge. Personal mobile
devices interact with other smart objects in their surroundings, adapting
behavior based on rapidly changing user context. The ability of mobile devices
to process this data locally is crucial for quick adaptation. This can be
achieved through a single elaboration process integrated into user applications
or a middleware platform for context processing. However, the lack of public
datasets considering user context complexity in the mobile environment hinders
research progress. We introduce MyDigitalFootprint, a large-scale dataset
comprising smartphone sensor data, physical proximity information, and Online
Social Networks interactions. This dataset supports multimodal context
recognition and social relationship modeling. It spans two months of
measurements from 31 volunteer users in their natural environment, allowing for
unrestricted behavior. Existing public datasets focus on limited context data
for specific applications, while ours offers comprehensive information on the
user context in the mobile environment. To demonstrate the dataset's
effectiveness, we present three context-aware applications utilizing various
machine learning tasks: (i) a social link prediction algorithm based on
physical proximity data, (ii) daily-life activity recognition using
smartphone-embedded sensors data, and (iii) a pervasive context-aware
recommender system. Our dataset, with its heterogeneity of information, serves
as a valuable resource to validate new research in mobile and edge computing.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 07:59:47 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Campana",
"Mattia Giovanni",
""
],
[
"Delmastro",
"Franca",
""
]
] |
new_dataset
| 0.999847 |
2306.16006
|
Tomasz Lizurej
|
Zeta Avarikioti, Tomasz Lizurej, Tomasz Michalak, Michelle Yeo
|
Lightning Creation Games
| null | null | null | null |
cs.GT cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Payment channel networks (PCNs) are a promising solution to the scalability
problem of cryptocurrencies. Any two users connected by a payment channel in
the network can theoretically send an unbounded number of instant, costless
transactions between them. Users who are not directly connected can also
transact with each other in a multi-hop fashion. In this work, we study the
incentive structure behind the creation of payment channel networks,
particularly from the point of view of a single user that wants to join the
network. We define a utility function for a new user in terms of expected
revenue, expected fees, and the cost of creating channels, and then provide
constant factor approximation algorithms that optimise the utility function
given a certain budget. Additionally, we take a step back from a single user to
the whole network and examine the parameter spaces under which simple graph
topologies form a Nash equilibrium.
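A toy sketch of the trade-off captured by such a utility function (the
functional form and numbers below are our illustration, not the paper's
definition):

```python
# Toy illustration of a channel-creation utility trade-off (the
# functional form here is assumed, not the paper's definition).
def utility(expected_revenue: float, expected_fees: float,
            channels: int, cost_per_channel: float) -> float:
    return expected_revenue - expected_fees - channels * cost_per_channel

# A new user weighs routing revenue against fees paid and setup cost:
print(utility(expected_revenue=12.0, expected_fees=3.5,
              channels=4, cost_per_channel=1.5))  # 2.5
```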
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 08:26:59 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Avarikioti",
"Zeta",
""
],
[
"Lizurej",
"Tomasz",
""
],
[
"Michalak",
"Tomasz",
""
],
[
"Yeo",
"Michelle",
""
]
] |
new_dataset
| 0.965407 |
2306.16020
|
Sebastian Krapf
|
Sebastian Krapf, Kevin Mayer, Martin Fischer
|
Points for Energy Renovation (PointER): A LiDAR-Derived Point Cloud
Dataset of One Million English Buildings Linked to Energy Characteristics
|
The PointER dataset can be downloaded from
https://doi.org/10.14459/2023mp1713501. The code used for generating building
point clouds is available at https://github.com/kdmayer/PointER
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Rapid renovation of Europe's inefficient buildings is required to mitigate
climate change. However, analyzing and evaluating buildings at scale is
challenging because every building is unique. In current practice, the energy
performance of buildings is assessed during on-site visits, which are slow,
costly, and local. This paper presents a building point cloud dataset that
promotes a data-driven, large-scale understanding of the 3D representation of
buildings and their energy characteristics. We generate building point clouds
by intersecting building footprints with geo-referenced LiDAR data and link
them with attributes from the UK's energy performance database via the Unique
Property Reference Number (UPRN). To achieve a representative sample, we select
one million buildings from a range of rural and urban regions across England,
of which half a million are linked to energy characteristics. Building point
clouds in new regions can be generated with the open-source code published
alongside the paper. The dataset enables novel research in building energy
modeling and can be easily expanded to other research fields by adding building
features via the UPRN or geo-location.
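A minimal sketch of the footprint/LiDAR intersection step (our illustration
with placeholder file names; the released code linked above is the
authoritative implementation):

```python
# Illustrative sketch of clipping a LiDAR tile to one building footprint
# (file names are placeholders; see the released PointER code for the
# actual, much faster, pipeline based on spatial joins).
import numpy as np
import laspy
import geopandas as gpd
from shapely.geometry import Point

footprints = gpd.read_file("footprints.gpkg")   # building polygons
las = laspy.read("tile.las")                    # geo-referenced LiDAR tile
xyz = np.vstack((las.x, las.y, las.z)).T

building = footprints.geometry.iloc[0]
inside = np.array([building.contains(Point(x, y)) for x, y in xyz[:, :2]])
building_cloud = xyz[inside]                    # per-building point cloud
print(building_cloud.shape)
```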
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 08:48:22 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Krapf",
"Sebastian",
""
],
[
"Mayer",
"Kevin",
""
],
[
"Fischer",
"Martin",
""
]
] |
new_dataset
| 0.999747 |
2306.16034
|
Weihua Liu
|
Weihua Liu and Yong Zuo
|
Stone Needle: A General Multimodal Large-scale Model Framework towards
Healthcare
| null | null | null | null |
cs.AI cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In healthcare, multimodal data such as medical images and clinical reports is
prevalent and must be comprehensively analyzed before diagnostic decisions are
made. However, current large-scale artificial intelligence
models predominantly focus on single-modal cognitive abilities and neglect the
integration of multiple modalities. Therefore, we propose Stone Needle, a
general multimodal large-scale model framework tailored explicitly for
healthcare applications. Stone Needle serves as a comprehensive medical
multimodal model foundation, integrating various modalities such as text,
images, videos, and audio to surpass the limitations of single-modal systems.
Through the framework components of intent analysis, medical foundation models,
prompt manager, and medical language module, our architecture can perform
multi-modal interaction over multiple rounds of dialogue. The framework
integrates diverse modalities and can be tailored to specific tasks. The
experimental results demonstrate
the superior performance of our method compared to single-modal systems. The
fusion of different modalities and the ability to process complex medical
information in Stone Needle benefits accurate diagnosis, treatment
recommendations, and patient care.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 09:04:56 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Liu",
"Weihua",
""
],
[
"Zuo",
"Yong",
""
]
] |
new_dataset
| 0.996363 |
2306.16045
|
Jiaming Yu
|
Jiaming Yu, Zihao Guan, Xinyue Chang, Xiumei Liu, Zhenshan Shi,
Changcai Yang, Riqing Chen, Lanyan Xue, Lifang Wei
|
OpenNDD: Open Set Recognition for Neurodevelopmental Disorders Detection
|
10 pages, 2 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Neurodevelopmental disorders (NDDs) are a highly prevalent group of disorders
that exhibit strong clinical and behavioral similarities, which makes it very
challenging to accurately identify different NDDs such as autism spectrum
disorder (ASD) and attention-deficit hyperactivity disorder (ADHD). Moreover,
there are no reliable physiological markers for NDDs diagnosis, which relies
solely on psychological evaluation criteria. It is therefore crucial to
prevent misdiagnosis and underdiagnosis through intelligent assisted
diagnosis, which directly informs the corresponding follow-up treatment. To
address these issues, we propose a novel open set recognition framework for
NDDs screening and detection, which is the first application of open set
recognition in this field. It combines an autoencoder with adversarial
reciprocal points open set recognition to accurately identify known classes
as well as recognize classes never encountered before. Considering the strong
similarities between different subjects, we present a joint scaling method,
called MMS, to distinguish unknown disorders. To validate the feasibility of
the presented method, we design a reciprocal opposition experiment protocol on
hybrid datasets from the Autism Brain Imaging Data Exchange I (ABIDE I) and
the ADHD-200 Sample (ADHD-200), with 791 samples from four sites, and the
results demonstrate superiority on various metrics. Our OpenNDD achieves
promising performance: the accuracy is 77.38%, AUROC is 75.53%, and the open
set classification rate is as high as 59.43%.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 09:28:33 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Yu",
"Jiaming",
""
],
[
"Guan",
"Zihao",
""
],
[
"Chang",
"Xinyue",
""
],
[
"Liu",
"Xiumei",
""
],
[
"Shi",
"Zhenshan",
""
],
[
"Yang",
"Changcai",
""
],
[
"Chen",
"Riqing",
""
],
[
"Xue",
"Lanyan",
""
],
[
"Wei",
"Lifang",
""
]
] |
new_dataset
| 0.997014 |
2306.16049
|
Mohammad Belal
|
James She, Kamilla Swart-Arries, Mohammad Belal and Simon Wong
|
What Sentiment and Fun Facts We Learnt Before FIFA World Cup Qatar 2022
Using Twitter and AI
| null | null | null | null |
cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Twitter is a social media platform that bridges most countries and allows
real-time news discovery. Since tweets are usually short and express public
feelings, they provide a source for opinion mining and sentiment analysis of
global events. This paper proposes an effective solution for providing
sentiment analysis of tweets related to the FIFA World Cup. At least 130k
tweets, the first such collection in the community, are gathered into a
dataset to evaluate the performance of the proposed machine learning solution.
These tweets are collected using hashtags and keywords related to the Qatar
World Cup 2022. The VADER algorithm is used in this paper for sentiment
analysis. Through the machine learning method and the collected tweets, we
discovered the sentiments and fun facts of several aspects important to the
period before the World Cup. The results show people are positive about the
opening of the World Cup.
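For readers unfamiliar with VADER, a minimal sketch of the scoring it provides
(using the public vaderSentiment package; the tweet and thresholds are
illustrative, not from the paper):

```python
# Minimal sketch of VADER scoring (illustrative tweet and thresholds,
# not the authors' pipeline). Requires: pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
tweet = "Can't wait for the opening match of the World Cup in Qatar!"
scores = analyzer.polarity_scores(tweet)
# scores is a dict: {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

# A common convention: compound >= 0.05 is positive, <= -0.05 negative.
label = ("positive" if scores["compound"] >= 0.05
         else "negative" if scores["compound"] <= -0.05 else "neutral")
print(scores, label)
```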
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 09:29:23 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"She",
"James",
""
],
[
"Swart-Arries",
"Kamilla",
""
],
[
"Belal",
"Mohammad",
""
],
[
"Wong",
"Simon",
""
]
] |
new_dataset
| 0.999366 |
2306.16176
|
Zhangyin Feng
|
Zhangyin Feng, Yong Dai, Fan Zhang, Duyu Tang, Xiaocheng Feng,
Shuangzhi Wu, Bing Qin, Yunbo Cao and Shuming Shi
|
SkillNet-X: A Multilingual Multitask Model with Sparsely Activated
Skills
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional multitask learning methods can exploit common knowledge only
task-wise or language-wise, losing either cross-language or cross-task
knowledge. This paper proposes a general multilingual multitask
model, named SkillNet-X, which enables a single model to tackle many different
tasks from different languages. To this end, we define several
language-specific skills and task-specific skills, each of which corresponds to
a skill module. SkillNet-X sparsely activates parts of the skill modules which
are relevant either to the target task or the target language. Acting as
knowledge transit hubs, skill modules are capable of absorbing task-related
knowledge and language-related knowledge consecutively. Based on Transformer,
we modify the multi-head attention layer and the feed forward network layer to
accommodate skill modules. We evaluate SkillNet-X on eleven natural language
understanding datasets in four languages. Results show that SkillNet-X performs
better than task-specific baselines and two multitask learning baselines (i.e.,
dense joint model and Mixture-of-Experts model). Furthermore, skill
pre-training further improves the performance of SkillNet-X on almost all
datasets. To investigate the generalization of our model, we conduct
experiments on two new tasks and find that SkillNet-X significantly outperforms
baselines.
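A minimal sketch of the sparse skill-routing idea (our own simplification;
the module sizes, names, and the averaging rule are assumptions, not the exact
SkillNet-X architecture):

```python
# Minimal sketch of sparsely activated skill modules (an illustrative
# simplification, not the exact SkillNet-X architecture).
import torch
import torch.nn as nn

class SkillRouter(nn.Module):
    def __init__(self, d_model: int, skills: list):
        super().__init__()
        # One feed-forward "skill" module per language or task.
        self.skills = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d_model, 4 * d_model),
                                nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
            for name in skills
        })

    def forward(self, x: torch.Tensor, active: list) -> torch.Tensor:
        # Only the skills relevant to the target task/language are
        # activated and averaged; inactive skills are never computed.
        return torch.stack([self.skills[s](x) for s in active]).mean(0)

router = SkillRouter(256, ["lang_zh", "lang_en", "task_nli"])
h = torch.randn(2, 16, 256)                      # (batch, seq, d_model)
out = router(h, active=["lang_zh", "task_nli"])  # sparse activation
print(out.shape)                                 # torch.Size([2, 16, 256])
```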
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 12:53:30 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Feng",
"Zhangyin",
""
],
[
"Dai",
"Yong",
""
],
[
"Zhang",
"Fan",
""
],
[
"Tang",
"Duyu",
""
],
[
"Feng",
"Xiaocheng",
""
],
[
"Wu",
"Shuangzhi",
""
],
[
"Qin",
"Bing",
""
],
[
"Cao",
"Yunbo",
""
],
[
"Shi",
"Shuming",
""
]
] |
new_dataset
| 0.999301 |
2306.16244
|
Yufei Huang
|
Yufei Huang and Deyi Xiong
|
CBBQ: A Chinese Bias Benchmark Dataset Curated with Human-AI
Collaboration for Large Language Models
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Holistically measuring societal biases of large language models is crucial
for detecting and reducing ethical risks in highly capable AI models. In this
work, we present a Chinese Bias Benchmark dataset that consists of over 100K
questions jointly constructed by human experts and generative language models,
covering stereotypes and societal biases in 14 social dimensions related to
Chinese culture and values. The curation process contains four essential
steps: bias identification via extensive literature review, ambiguous context
generation, AI-assisted disambiguated context generation, and manual review \&
recomposition. The testing instances in the dataset are automatically derived
from 3K+ high-quality templates manually authored with stringent quality
control. The dataset exhibits wide coverage and high diversity. Extensive
experiments demonstrate the effectiveness of the dataset in detecting model
bias, with all 10 publicly available Chinese large language models exhibiting
strong bias in certain categories. Additionally, we observe from our
experiments that fine-tuned models could, to a certain extent, heed
instructions and avoid generating outputs that are morally harmful in some
types, in the way of "moral self-correction". Our dataset and results are
publicly available at
\href{https://github.com/YFHuangxxxx/CBBQ}{https://github.com/YFHuangxxxx/CBBQ},
offering debiasing research opportunities to a widened community.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 14:14:44 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Huang",
"Yufei",
""
],
[
"Xiong",
"Deyi",
""
]
] |
new_dataset
| 0.999712 |
2306.16265
|
Sha Yi
|
Sha Yi, Katia Sycara, Zeynep Temel
|
Reconfigurable Robot Control Using Flexible Coupling Mechanisms
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable robot swarms are capable of connecting with each other to form
complex structures. Current mechanical or magnetic connection mechanisms can be
complicated to manufacture, consume high power, have a limited load-bearing
capacity, or can only form rigid structures. In this paper, we present our
low-cost soft anchor design that enables flexible coupling and decoupling
between robots. Our asymmetric anchor requires minimal force to be pushed into
the opening of another robot while having a strong pulling force so that the
connection between robots can be secured. To maintain this flexible coupling
mechanism as an assembled structure, we present our Model Predictive Control
(MPC) frameworks with polygon constraints to model the geometric relationship
between robots. We conducted experiments on the soft anchor to obtain its force
profile, which informed the three-bar linkage model of the anchor in the
simulations. We show that the proposed mechanism and MPC frameworks enable the
robots to couple, decouple, and perform various behaviors in both the
simulation environment and hardware platform. Our code is available at
https://github.com/ZoomLabCMU/puzzlebot_anchor . Video is available at
https://www.youtube.com/watch?v=R3gFplorCJg .
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 14:47:35 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Yi",
"Sha",
""
],
[
"Sycara",
"Katia",
""
],
[
"Temel",
"Zeynep",
""
]
] |
new_dataset
| 0.998845 |
2306.16268
|
Mohammad Ali Hussiny
|
Mohammad Ali Hussiny, Lilja {\O}vrelid
|
Emotion Analysis of Tweets Banning Education in Afghanistan
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces the first emotion-annotated dataset for the Dari
variant of Persian spoken in Afghanistan. The LetHerLearn dataset contains
7,600 tweets posted in reaction to the Taliban's 2022 ban on women's right to
education and has been manually annotated according to Ekman emotion
categories.
We here detail the data collection and annotation process, present relevant
dataset statistics as well as initial experiments on the resulting dataset,
benchmarking a number of different neural architectures for the task of Dari
emotion classification.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 14:50:49 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Hussiny",
"Mohammad Ali",
""
],
[
"Øvrelid",
"Lilja",
""
]
] |
new_dataset
| 0.997759 |
2306.16282
|
Shulamit Reches
|
Eli Bagno, Thierry Dana-Picard and Shulamit Reches
|
ChatGPT may excel in States Medical Licensing Examination but falters in
basic Linear Algebra
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The emergence of ChatGPT has been rapid, and although it has demonstrated
positive impacts in certain domains, its influence is not universally
advantageous. Our analysis focuses on ChatGPT's capabilities in Mathematics
Education, particularly in teaching basic Linear Algebra. While there are
instances where ChatGPT delivers accurate and well-motivated answers, it is
crucial to recognize numerous cases where it makes significant mathematical
errors and fails in logical inference. These occurrences raise concerns
regarding the system's genuine understanding of mathematics, as it appears to
rely more on visual patterns rather than true comprehension. Additionally, the
suitability of ChatGPT as a teacher for students also warrants consideration.
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 15:19:29 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Bagno",
"Eli",
""
],
[
"Dana-Picard",
"Thierry",
""
],
[
"Reches",
"Shulamit",
""
]
] |
new_dataset
| 0.952261 |
2306.16309
|
Naomi A. Arnold
|
Ben Steer, Naomi Arnold, Cheick Tidiane Ba, Renaud Lambiotte, Haaroon
Yousaf, Lucas Jeub, Fabian Murariu, Shivam Kapoor, Pedro Rico, Rachel Chan,
Louis Chan, James Alford, Richard G. Clegg, Felix Cuadrado, Matthew Russell
Barnes, Peijie Zhong, John N. Pougu\'e Biyong, and Alhamza Alnaimi
|
Raphtory: The temporal graph engine for Rust and Python
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Raphtory is a platform for building and analysing temporal networks. The
library includes methods for creating networks from a variety of data sources;
algorithms to explore their structure and evolution; and an extensible GraphQL
server for deployment of applications built on top. Raphtory's core engine is
built in Rust, for efficiency, with Python interfaces, for ease of use.
Raphtory is developed by network scientists, with a background in Physics,
Applied Mathematics, Engineering and Computer Science, for use across academia
and industry.
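A sketch of the kind of usage the Python interface enables (method names
follow the project's documentation, but should be verified against the current
release):

```python
# Sketch of typical Raphtory Python usage (API names based on the
# project's docs; verify against your installed release).
from raphtory import Graph

g = Graph()
g.add_edge(1, "alice", "bob")    # edges carry timestamps, so the
g.add_edge(2, "bob", "carol")    # graph's full evolution is retained
g.add_edge(3, "carol", "alice")

view = g.window(1, 3)            # the network as it existed in [1, 3)
print(view.count_edges(), g.count_edges())
```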
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 15:39:22 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Steer",
"Ben",
""
],
[
"Arnold",
"Naomi",
""
],
[
"Ba",
"Cheick Tidiane",
""
],
[
"Lambiotte",
"Renaud",
""
],
[
"Yousaf",
"Haaroon",
""
],
[
"Jeub",
"Lucas",
""
],
[
"Murariu",
"Fabian",
""
],
[
"Kapoor",
"Shivam",
""
],
[
"Rico",
"Pedro",
""
],
[
"Chan",
"Rachel",
""
],
[
"Chan",
"Louis",
""
],
[
"Alford",
"James",
""
],
  [
    "Clegg",
    "Richard G.",
    ""
  ],
  [
    "Cuadrado",
    "Felix",
    ""
  ],
[
"Barnes",
"Matthew Russell",
""
],
[
"Zhong",
"Peijie",
""
],
[
"Biyong",
"John N. Pougué",
""
],
[
"Alnaimi",
"Alhamza",
""
]
] |
new_dataset
| 0.997273 |
2306.16322
|
Zaid Alyafeai Mr
|
Zaid Alyafeai and Maged S. Alshaibani and Badr AlKhamissi and Hamzah
Luqman and Ebrahim Alareqi and Ali Fadel
|
Taqyim: Evaluating Arabic NLP Tasks Using ChatGPT Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have demonstrated impressive performance on
various downstream tasks without requiring fine-tuning, including ChatGPT, a
chat-based model built on top of LLMs such as GPT-3.5 and GPT-4. Although
languages other than English account for a smaller proportion of the training
data, these models also exhibit remarkable capabilities in them. In this
study, we assess the
performance of GPT-3.5 and GPT-4 models on seven distinct Arabic NLP tasks:
sentiment analysis, translation, transliteration, paraphrasing, part of speech
tagging, summarization, and diacritization. Our findings reveal that GPT-4
outperforms GPT-3.5 on five out of the seven tasks. Furthermore, we conduct an
extensive analysis of the sentiment analysis task, providing insights into how
LLMs achieve exceptional results on a challenging dialectal dataset.
Additionally, we introduce a new Python interface
https://github.com/ARBML/Taqyim that facilitates the evaluation of these tasks
effortlessly.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 15:54:29 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Alyafeai",
"Zaid",
""
],
[
"Alshaibani",
"Maged S.",
""
],
[
"AlKhamissi",
"Badr",
""
],
[
"Luqman",
"Hamzah",
""
],
[
"Alareqi",
"Ebrahim",
""
],
[
"Fadel",
"Ali",
""
]
] |
new_dataset
| 0.985184 |
2306.16339
|
Yanpeng Cui
|
Yanpeng Cui, Qixun Zhang, Zhiyong Feng, Xiong Li, Zhiqing Wei, Ping
Zhang
|
Seeing is Believing: Detecting Sybil Attack in FANET by Matching Visual
and Auditory Domains
|
7 pages, 9 figures, 1 table
| null | null | null |
cs.CR eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The flying ad hoc network (FANET) will play a crucial role in the B5G/6G era
since it provides wide coverage and on-demand deployment services in a
distributed manner. The detection of Sybil attacks is essential to ensure
trusted communication in FANET. Nevertheless, conventional methods utilize
only the untrusted information that UAV nodes passively ``hear'' from the
``auditory'' domain (AD), resulting in severe communication disruptions and
even
collision accidents. In this paper, we present a novel VA-matching solution
that matches the neighbors observed from both the AD and the ``visual'' domain
(VD), which is the first solution that enables UAVs to accurately correlate
what they ``see'' from VD and ``hear'' from AD to detect the Sybil attacks.
Relative entropy is utilized to describe the similarity of observed
characteristics from dual domains. The dynamic weight algorithm is proposed to
distinguish neighbors according to the characteristics' popularity. The
matching model of neighbors observed from AD and VD is established and solved
by the vampire bat optimizer. Experiment results show that the proposed
VA-matching solution removes the unreliability of individual characteristics
and single domains. It significantly outperforms the conventional RSSI-based
method in detecting Sybil attacks. Furthermore, it has strong robustness and
achieves high precision and recall rates.
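A minimal sketch of scoring visual/auditory neighbor matches with relative
entropy (our own illustration; the characteristics, smoothing, and assignment
step are assumptions, not the paper's exact algorithm):

```python
# Illustrative sketch: match neighbors observed in the "visual" domain
# (VD) to those in the "auditory" domain (AD) by relative entropy over
# characteristic histograms. Not the paper's exact algorithm.
import numpy as np
from scipy.special import rel_entr
from scipy.optimize import linear_sum_assignment

def kl(p, q, eps=1e-9):
    p = (p + eps) / (p + eps).sum()   # smooth and normalize
    q = (q + eps) / (q + eps).sum()
    return rel_entr(p, q).sum()

# Rows: neighbors seen in VD / heard in AD; columns: histogram bins of
# an observed characteristic (e.g., binned position or velocity).
vd = np.array([[8, 1, 1], [1, 8, 1], [3, 3, 4]], dtype=float)
ad = np.array([[1, 9, 0], [7, 2, 1], [3, 4, 3]], dtype=float)

cost = np.array([[kl(v, a) for a in ad] for v in vd])
rows, cols = linear_sum_assignment(cost)  # min-cost VD<->AD matching
print([(int(r), int(c)) for r, c in zip(rows, cols)])  # [(0,1),(1,0),(2,2)]
# An AD identity left with only a high-cost match (no plausible visual
# counterpart) is a candidate Sybil node.
```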
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 16:16:05 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Cui",
"Yanpeng",
""
],
[
"Zhang",
"Qixun",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Li",
"Xiong",
""
],
[
"Wei",
"Zhiqing",
""
],
[
"Zhang",
"Ping",
""
]
] |
new_dataset
| 0.988592 |
2306.16341
|
Matthew Earnshaw
|
Matthew Earnshaw, Pawe{\l} Soboci\'nski
|
String Diagrammatic Trace Theory
|
Paper accepted for MFCS 2023
| null | null | null |
cs.FL math.CT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We extend the theory of formal languages in monoidal categories to the
multi-sorted, symmetric case, and show how this theory permits a graphical
treatment of topics in concurrency. In particular, we show that Mazurkiewicz
trace languages are precisely symmetric monoidal languages over monoidal
distributed alphabets. We introduce symmetric monoidal automata, which define
the class of regular symmetric monoidal languages. Furthermore, we prove that
Zielonka's asynchronous automata coincide with symmetric monoidal automata over
monoidal distributed alphabets. Finally, we apply the string diagrams for
symmetric premonoidal categories to derive serializations of traces.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 16:16:51 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Earnshaw",
"Matthew",
""
],
[
"Sobociński",
"Paweł",
""
]
] |
new_dataset
| 0.993631 |
2306.16344
|
Raj Desai
|
Riender Happee, Raj Desai, Georgios Papaioannou
|
Simulating vibration transmission and comfort in automated driving
integrating models of seat, body, postural stabilization and motion
perception
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
To enhance motion comfort in (automated) driving we present biomechanical
models and demonstrate their ability to capture vibration transmission from
seat to trunk and head. A computationally efficient full body model is
presented, able to operate in real time while capturing translational and
rotational motion of trunk and head with fore-aft, lateral and vertical seat
motion. Sensory integration models are presented predicting motion perception
and motion sickness accumulation using the head motion as predicted by
biomechanical models.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 16:20:06 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Happee",
"Riender",
""
],
[
"Desai",
"Raj",
""
],
[
"Papaioannou",
"Georgios",
""
]
] |
new_dataset
| 0.994607 |
2306.16391
|
Chen Liu
|
Nikhil Chawla, Chen Liu, Abhishek Chakraborty, Igor Chervatyuk, Ke
Sun, Thais Moreira Hamasaki, Henrique Kawakami
|
The Power of Telemetry: Uncovering Software-Based Side-Channel Attacks
on Apple M1/M2 Systems
|
6 pages, 4 figures, 5 tables
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Power analysis is a class of side-channel attacks, where power consumption
data is used to infer sensitive information and extract secrets from a system.
Traditionally, such attacks required physical access to the target, as well as
specialized devices to measure the power consumption with enough precision. The
PLATYPUS attack has shown that on-chip power meter capabilities exposed to a
software interface might form a new class of power side-channel attacks. This
paper presents a software-based power side-channel attack on Apple Silicon
M1/M2 platforms, exploiting the System Management Controller (SMC) and its
power-related keys, which provides access to the on-chip power meters through a
software interface to user space software. We observed data-dependent power
consumption reporting from such keys and analyzed the correlations between the
power consumption and the processed data. Our work also demonstrated how an
unprivileged user mode application successfully recovers bytes from an AES
encryption key from a cryptographic service supported by a kernel mode driver
in macOS. Furthermore, we discuss the impact of software-based power
side-channels in the industry, possible countermeasures, and the overall
implications of software interfaces for modern on-chip power management
systems.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 17:36:16 GMT"
}
] | 2023-06-29T00:00:00 |
[
[
"Chawla",
"Nikhil",
""
],
[
"Liu",
"Chen",
""
],
[
"Chakraborty",
"Abhishek",
""
],
[
"Chervatyuk",
"Igor",
""
],
[
"Sun",
"Ke",
""
],
[
"Hamasaki",
"Thais Moreira",
""
],
[
"Kawakami",
"Henrique",
""
]
] |
new_dataset
| 0.954249 |
1906.00861
|
Niclas Kannengie{\ss}er
|
Niclas Kannengie{\ss}er, Sebastian Lins, Tobias Dehling, Ali Sunyaev
|
Mind the Gap: Trade-Offs between Distributed Ledger Technology
Characteristics
| null | null |
10.1145/3379463
| null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
When developing peer-to-peer applications on Distributed Ledger Technology
(DLT), a crucial decision is the selection of a suitable DLT design (e.g.,
Ethereum) because it is hard to change the underlying DLT design post hoc. To
facilitate the selection of suitable DLT designs, we review DLT characteristics
and identify trade-offs between them. Furthermore, we assess how DLT designs
account for these trade-offs and we develop archetypes for DLT designs that
cater to specific quality requirements. The main purpose of our article is to
introduce scientific and practical audiences to the intricacies of DLT designs
and to support development of viable applications on DLT.
|
[
{
"version": "v1",
"created": "Mon, 3 Jun 2019 15:16:34 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Nov 2019 19:49:50 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Jan 2020 13:48:31 GMT"
},
{
"version": "v4",
"created": "Tue, 4 Feb 2020 17:22:39 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Kannengießer",
"Niclas",
""
],
[
"Lins",
"Sebastian",
""
],
[
"Dehling",
"Tobias",
""
],
[
"Sunyaev",
"Ali",
""
]
] |
new_dataset
| 0.950312 |
2006.10632
|
Yatin Chaudhary
|
Yatin Chaudhary, Hinrich Sch\"utze, Pankaj Gupta
|
Explainable and Discourse Topic-aware Neural Language Understanding
|
Accepted at ICML2020 (13 pages, 2 figures)
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Marrying topic models and language models exposes language understanding to a
broader source of document-level context beyond sentences via topics. While
introducing topical semantics in language models, existing approaches
incorporate latent document topic proportions and ignore topical discourse in
sentences of the document. This work extends the line of research by
additionally introducing an explainable topic representation in language
understanding, obtained from a set of key terms correspondingly for each latent
topic of the proportion. Moreover, we retain sentence-topic associations along
with document-topic association by modeling topical discourse for every
sentence in the document. We present a novel neural composite language model
that exploits both the latent and explainable topics along with topical
discourse at sentence-level in a joint learning framework of topic and language
models. Experiments over a range of tasks such as language modeling, word sense
disambiguation, document classification, retrieval and text generation
demonstrate ability of the proposed model in improving language understanding.
|
[
{
"version": "v1",
"created": "Thu, 18 Jun 2020 15:53:58 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jun 2020 08:50:24 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Jun 2023 05:07:42 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Chaudhary",
"Yatin",
""
],
[
"Schütze",
"Hinrich",
""
],
[
"Gupta",
"Pankaj",
""
]
] |
new_dataset
| 0.95314 |
2103.11853
|
Markus Reiter-Haas
|
Markus Reiter-Haas, Simone Kopeinik, Elisabeth Lex
|
Studying Moral-based Differences in the Framing of Political Tweets
|
Accepted for publication in ICWSM-2021 - link to published version
will be added
|
Proceedings of the International AAAI Conference on Web and Social
Media Vol. 15 (2021) 1085-1089
|
10.1609/icwsm.v15i1.18135
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the moral framing of political content on Twitter.
Specifically, we examine differences in moral framing in two datasets: (i)
tweets from US-based politicians annotated with political affiliation and (ii)
COVID-19 related tweets in German from followers of the leaders of the five
major Austrian political parties. Our research is based on recent work that
introduces an unsupervised approach to extract framing bias and intensity in
news using a dictionary of moral virtues and vices. In this paper, we use a
more extensive dictionary and adapt it to German-language tweets. Overall, in
both datasets, we observe a moral framing that is congruent with the public
perception of the political parties. In the US dataset, democrats have a
tendency to frame tweets in terms of care, while loyalty is a characteristic
frame for republicans. In the Austrian dataset, we find that the followers of
the governing conservative party emphasize care, which is a key message and
moral frame in the party's COVID-19 campaign slogan. Our work complements
existing studies on moral framing in social media. Also, our empirical findings
provide novel insights into moral-based framing on COVID-19 in Austria.
|
[
{
"version": "v1",
"created": "Mon, 22 Mar 2021 13:48:21 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Reiter-Haas",
"Markus",
""
],
[
"Kopeinik",
"Simone",
""
],
[
"Lex",
"Elisabeth",
""
]
] |
new_dataset
| 0.973701 |
2201.06268
|
Lukas Hedegaard
|
Lukas Hedegaard and Arian Bakhtiarnia and Alexandros Iosifidis
|
Continual Transformers: Redundancy-Free Attention for Online Inference
|
16 pages, 6 figures, 7 tables
|
International Conference on Learning Representations, 2023
| null | null |
cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers in their common form are inherently limited to operating on
whole token sequences rather than on one token at a time. Consequently, their
use
during online inference on time-series data entails considerable redundancy due
to the overlap in successive token sequences. In this work, we propose novel
formulations of the Scaled Dot-Product Attention, which enable Transformers to
perform efficient online token-by-token inference on a continual input stream.
Importantly, our modifications are purely to the order of computations, while
the outputs and learned weights are identical to those of the original
Transformer Encoder. We validate our Continual Transformer Encoder with
experiments on the THUMOS14, TVSeries and GTZAN datasets with remarkable
results: Our Continual one- and two-block architectures reduce the floating
point operations per prediction by up to 63x and 2.6x, respectively, while
retaining predictive performance.
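For intuition, a generic sketch of redundancy-free streaming attention over a
sliding window (our simplification; the paper's Continual Attention
reformulation is more refined and exactly reproduces the original outputs):

```python
# Generic illustration (not the paper's exact formulation): cache keys
# and values over a sliding window and compute scaled dot-product
# attention for one new token at a time.
from collections import deque
import numpy as np

class StreamingAttention:
    def __init__(self, d, window):
        self.d = d
        self.K = deque(maxlen=window)
        self.V = deque(maxlen=window)

    def step(self, q, k, v):
        # O(window * d) per token instead of re-attending to the whole
        # sequence for every new input.
        self.K.append(k)
        self.V.append(v)
        K, V = np.stack(self.K), np.stack(self.V)
        logits = K @ q / np.sqrt(self.d)
        w = np.exp(logits - logits.max())
        w /= w.sum()
        return w @ V

att = StreamingAttention(d=8, window=4)
rng = np.random.default_rng(0)
for _ in range(10):                      # one output per incoming token
    q, k, v = rng.normal(size=(3, 8))
    out = att.step(q, k, v)
print(out.shape)                         # (8,)
```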
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 08:20:09 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2022 07:56:35 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Jan 2023 07:42:08 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Hedegaard",
"Lukas",
""
],
[
"Bakhtiarnia",
"Arian",
""
],
[
"Iosifidis",
"Alexandros",
""
]
] |
new_dataset
| 0.957784 |
2206.14718
|
Zihan Li
|
Zihan Li, Yunxiang Li, Qingde Li, Puyang Wang, Dazhou Guo, Le Lu,
Dakai Jin, You Zhang, Qingqi Hong
|
LViT: Language meets Vision Transformer in Medical Image Segmentation
|
Accepted by IEEE Transactions on Medical Imaging (TMI)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has been widely used in medical image segmentation and other
aspects. However, the performance of existing medical image segmentation models
has been limited by the challenge of obtaining sufficient high-quality labeled
data due to the prohibitive data annotation cost. To alleviate this limitation,
we propose a new text-augmented medical image segmentation model LViT (Language
meets Vision Transformer). In our LViT model, medical text annotation is
incorporated to compensate for the quality deficiency in image data. In
addition, the text information can guide the generation of pseudo labels of
improved quality in semi-supervised learning. We also propose an Exponential
Pseudo label Iteration mechanism (EPI) to help the Pixel-Level Attention
Module (PLAM) preserve local image features in the semi-supervised LViT
setting. In our model, LV
(Language-Vision) loss is designed to supervise the training of unlabeled
images using text information directly. For evaluation, we construct three
multimodal medical segmentation datasets (image + text) containing X-rays and
CT images. Experimental results show that our proposed LViT has superior
segmentation performance in both fully-supervised and semi-supervised
settings.
The code and datasets are available at https://github.com/HUANGLIZI/LViT.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2022 15:36:02 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Aug 2022 17:52:01 GMT"
},
{
"version": "v3",
"created": "Sun, 25 Jun 2023 16:15:03 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Jun 2023 01:43:10 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Li",
"Zihan",
""
],
[
"Li",
"Yunxiang",
""
],
[
"Li",
"Qingde",
""
],
[
"Wang",
"Puyang",
""
],
[
"Guo",
"Dazhou",
""
],
[
"Lu",
"Le",
""
],
[
"Jin",
"Dakai",
""
],
[
"Zhang",
"You",
""
],
[
"Hong",
"Qingqi",
""
]
] |
new_dataset
| 0.982294 |
2211.05100
|
Teven Le Scao
|
BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki,
Ellie Pavlick, Suzana Ili\'c, Daniel Hesslow, Roman Castagn\'e, Alexandra
Sasha Luccioni, Fran\c{c}ois Yvon, Matthias Gall\'e, Jonathan Tow, Alexander
M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas
Wang, Beno\^it Sagot, Niklas Muennighoff, Albert Villanova del Moral,
Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz
Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor
Sanh, Hugo Lauren\c{c}on, Yacine Jernite, Julien Launay, Margaret Mitchell,
Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit
Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris
Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa
Adelani, Dragomir Radev, Eduardo Gonz\'alez Ponferrada, Efrat Levkovizh,
Ethan Kim, Eyal Bar Natan, Francesco De Toni, G\'erard Dupont, Germ\'an
Kruszewski, Giada Pistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu,
Idris Abdulmumin, Isaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa,
Jenny Chim, Jesse Dodge, Jian Zhu, Jonathan Chang, J\"org Frohberg, Joseph
Tobing, Joydeep Bhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro
Von Werra, Leon Weber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan
Dey, Manuel Romero Mu\~noz, Maraim Masoud, Mar\'ia Grandury, Mario
\v{S}a\v{s}ko, Max Huang, Maximin Coavoux, Mayank Singh, Mike Tian-Jian
Jiang, Minh Chien Vu, Mohammad A. Jauhar, Mustafa Ghaleb, Nishant Subramani,
Nora Kassner, Nurulaqilla Khamis, Olivier Nguyen, Omar Espejel, Ona de
Gibert, Paulo Villegas, Peter Henderson, Pierre Colombo, Priscilla Amuok,
Quentin Lhoest, Rheza Harliman, Rishi Bommasani, Roberto Luis L\'opez, Rui
Ribeiro, Salomey Osei, Sampo Pyysalo, Sebastian Nagel, Shamik Bose,
Shamsuddeen Hassan Muhammad, Shanya Sharma, Shayne Longpre, Somaieh Nikpoor,
Stanislav Silberberg, Suhas Pai, Sydney Zink, Tiago Timponi Torrent, Timo
Schick, Tristan Thrush, Valentin Danchev, Vassilina Nikoulina, Veronika
Laippala, Violette Lepercq, Vrinda Prabhu, Zaid Alyafeai, Zeerak Talat, Arun
Raja, Benjamin Heinzerling, Chenglei Si, Davut Emre Ta\c{s}ar, Elizabeth
Salesky, Sabrina J. Mielke, Wilson Y. Lee, Abheesht Sharma, Andrea Santilli,
Antoine Chaffin, Arnaud Stiegler, Debajyoti Datta, Eliza Szczechla, Gunjan
Chhablani, Han Wang, Harshit Pandey, Hendrik Strobelt, Jason Alan Fries, Jos
Rozen, Leo Gao, Lintang Sutawika, M Saiful Bari, Maged S. Al-shaibani, Matteo
Manica, Nihal Nayak, Ryan Teehan, Samuel Albanie, Sheng Shen, Srulik
Ben-David, Stephen H. Bach, Taewoon Kim, Tali Bers, Thibault Fevry, Trishala
Neeraj, Urmish Thakker, Vikas Raunak, Xiangru Tang, Zheng-Xin Yong, Zhiqing
Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh, Adam Roberts, Hyung Won Chung,
Jaesung Tae, Jason Phang, Ofir Press, Conglong Li, Deepak Narayanan, Hatim
Bourfoune, Jared Casper, Jeff Rasley, Max Ryabinin, Mayank Mishra, Minjia
Zhang, Mohammad Shoeybi, Myriam Peyrounette, Nicolas Patry, Nouamane Tazi,
Omar Sanseviero, Patrick von Platen, Pierre Cornette, Pierre Fran\c{c}ois
Lavall\'ee, R\'emi Lacroix, Samyam Rajbhandari, Sanchit Gandhi, Shaden Smith,
St\'ephane Requena, Suraj Patil, Tim Dettmers, Ahmed Baruwa, Amanpreet Singh,
Anastasia Cheveleva, Anne-Laure Ligozat, Arjun Subramonian, Aur\'elie
N\'ev\'eol, Charles Lovering, Dan Garrette, Deepak Tunuguntla, Ehud Reiter,
Ekaterina Taktasheva, Ekaterina Voloshina, Eli Bogdanov, Genta Indra Winata,
Hailey Schoelkopf, Jan-Christoph Kalo, Jekaterina Novikova, Jessica Zosa
Forde, Jordan Clive, Jungo Kasai, Ken Kawamura, Liam Hazan, Marine Carpuat,
Miruna Clinciu, Najoung Kim, Newton Cheng, Oleg Serikov, Omer Antverg, Oskar
van der Wal, Rui Zhang, Ruochen Zhang, Sebastian Gehrmann, Shachar Mirkin,
Shani Pais, Tatiana Shavrina, Thomas Scialom, Tian Yun, Tomasz Limisiewicz,
Verena Rieser, Vitaly Protasov, Vladislav Mikhailov, Yada Pruksachatkun,
Yonatan Belinkov, Zachary Bamberger, Zden\v{e}k Kasner, Alice Rueda, Amanda
Pestana, Amir Feizpour, Ammar Khan, Amy Faranak, Ana Santos, Anthony Hevia,
Antigona Unldreaj, Arash Aghagol, Arezoo Abdollahi, Aycha Tammour, Azadeh
HajiHosseini, Bahareh Behroozi, Benjamin Ajibade, Bharat Saxena, Carlos
Mu\~noz Ferrandis, Daniel McDuff, Danish Contractor, David Lansky, Davis
David, Douwe Kiela, Duong A. Nguyen, Edward Tan, Emi Baylor, Ezinwanne
Ozoani, Fatima Mirza, Frankline Ononiwu, Habib Rezanejad, Hessie Jones,
Indrani Bhattacharya, Irene Solaiman, Irina Sedenko, Isar Nejadgholi, Jesse
Passmore, Josh Seltzer, Julio Bonis Sanz, Livia Dutra, Mairon Samagaio,
Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha Akinlolu, Michael
McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis Abrar, Nazneen
Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An, Rasmus Kromann,
Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav Roy, Sylvain
Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach Nguyen, Abhinav
Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima Shukla, Antonio
Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo Wang, Caio Brito, Chenxi
Zhou, Chirag Jain, Chuxin Xu, Cl\'ementine Fourrier, Daniel Le\'on
Peri\~n\'an, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio Barth, Florian
Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns, Helena U. Vrabec,
Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas Golde, Jose David
Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani, Lu Liu, Luisa
Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc P\`amies, Maria A
Castillo, Marianna Nezhurina, Mario S\"anger, Matthias Samwald, Michael
Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic, Minna Liu, Moritz
Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg, Nicholas Michio
Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya Chandrasekhar,
Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su, Ruisi Su,
Samuel Cahyawijaya, Samuele Garda, Shlok S Deshmukh, Shubhanshu Mishra, Sid
Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan Schweter,
Sushil Bharati, Tanmay Laud, Th\'eo Gigant, Tomoya Kainuma, Wojciech Kusa,
Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin Xu, Yu
Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, Thomas
Wolf
|
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have been shown to be able to perform new tasks
based on a few demonstrations or natural language instructions. While these
capabilities have led to widespread adoption, most LLMs are developed by
resource-rich organizations and are frequently kept from the public. As a step
towards democratizing this powerful technology, we present BLOOM, a
176B-parameter open-access language model designed and built thanks to a
collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer
language model that was trained on the ROOTS corpus, a dataset comprising
hundreds of sources in 46 natural and 13 programming languages (59 in total).
We find that BLOOM achieves competitive performance on a wide variety of
benchmarks, with stronger results after undergoing multitask prompted
finetuning. To facilitate future research and applications using LLMs, we
publicly release our models and code under the Responsible AI License.
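Since the checkpoints are openly released, they can be loaded with standard
tooling; a minimal sketch using the Hugging Face transformers library (the
560m variant keeps the example small; verify model names on the hub):

```python
# Minimal sketch: load an openly released BLOOM checkpoint with the
# Hugging Face transformers library ("bigscience/bloom" is the full
# 176B model; the 560m variant keeps this example lightweight).
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloom-560m"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("BLOOM is a multilingual language model that", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```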
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 18:48:09 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Dec 2022 01:09:36 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Mar 2023 15:55:30 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Jun 2023 09:57:58 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Workshop",
"BigScience",
""
],
[
":",
"",
""
],
[
"Scao",
"Teven Le",
""
],
[
"Fan",
"Angela",
""
],
[
"Akiki",
"Christopher",
""
],
[
"Pavlick",
"Ellie",
""
],
[
"Ilić",
"Suzana",
""
],
[
"Hesslow",
"Daniel",
""
],
[
"Castagné",
"Roman",
""
],
[
"Luccioni",
"Alexandra Sasha",
""
],
[
"Yvon",
"François",
""
],
[
"Gallé",
"Matthias",
""
],
[
"Tow",
"Jonathan",
""
],
[
"Rush",
"Alexander M.",
""
],
[
"Biderman",
"Stella",
""
],
[
"Webson",
"Albert",
""
],
[
"Ammanamanchi",
"Pawan Sasanka",
""
],
[
"Wang",
"Thomas",
""
],
[
"Sagot",
"Benoît",
""
],
[
"Muennighoff",
"Niklas",
""
],
[
"del Moral",
"Albert Villanova",
""
],
[
"Ruwase",
"Olatunji",
""
],
[
"Bawden",
"Rachel",
""
],
[
"Bekman",
"Stas",
""
],
[
"McMillan-Major",
"Angelina",
""
],
[
"Beltagy",
"Iz",
""
],
[
"Nguyen",
"Huu",
""
],
[
"Saulnier",
"Lucile",
""
],
[
"Tan",
"Samson",
""
],
[
"Suarez",
"Pedro Ortiz",
""
],
[
"Sanh",
"Victor",
""
],
[
"Laurençon",
"Hugo",
""
],
[
"Jernite",
"Yacine",
""
],
[
"Launay",
"Julien",
""
],
[
"Mitchell",
"Margaret",
""
],
[
"Raffel",
"Colin",
""
],
[
"Gokaslan",
"Aaron",
""
],
[
"Simhi",
"Adi",
""
],
[
"Soroa",
"Aitor",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Alfassy",
"Amit",
""
],
[
"Rogers",
"Anna",
""
],
[
"Nitzav",
"Ariel Kreisberg",
""
],
[
"Xu",
"Canwen",
""
],
[
"Mou",
"Chenghao",
""
],
[
"Emezue",
"Chris",
""
],
[
"Klamm",
"Christopher",
""
],
[
"Leong",
"Colin",
""
],
[
"van Strien",
"Daniel",
""
],
[
"Adelani",
"David Ifeoluwa",
""
],
[
"Radev",
"Dragomir",
""
],
[
"Ponferrada",
"Eduardo González",
""
],
[
"Levkovizh",
"Efrat",
""
],
[
"Kim",
"Ethan",
""
],
[
"Natan",
"Eyal Bar",
""
],
[
"De Toni",
"Francesco",
""
],
[
"Dupont",
"Gérard",
""
],
[
"Kruszewski",
"Germán",
""
],
[
"Pistilli",
"Giada",
""
],
[
"Elsahar",
"Hady",
""
],
[
"Benyamina",
"Hamza",
""
],
[
"Tran",
"Hieu",
""
],
[
"Yu",
"Ian",
""
],
[
"Abdulmumin",
"Idris",
""
],
[
"Johnson",
"Isaac",
""
],
[
"Gonzalez-Dios",
"Itziar",
""
],
[
"de la Rosa",
"Javier",
""
],
[
"Chim",
"Jenny",
""
],
[
"Dodge",
"Jesse",
""
],
[
"Zhu",
"Jian",
""
],
[
"Chang",
"Jonathan",
""
],
[
"Frohberg",
"Jörg",
""
],
[
"Tobing",
"Joseph",
""
],
[
"Bhattacharjee",
"Joydeep",
""
],
[
"Almubarak",
"Khalid",
""
],
[
"Chen",
"Kimbo",
""
],
[
"Lo",
"Kyle",
""
],
[
"Von Werra",
"Leandro",
""
],
[
"Weber",
"Leon",
""
],
[
"Phan",
"Long",
""
],
[
"allal",
"Loubna Ben",
""
],
[
"Tanguy",
"Ludovic",
""
],
[
"Dey",
"Manan",
""
],
[
"Muñoz",
"Manuel Romero",
""
],
[
"Masoud",
"Maraim",
""
],
[
"Grandury",
"María",
""
],
[
"Šaško",
"Mario",
""
],
[
"Huang",
"Max",
""
],
[
"Coavoux",
"Maximin",
""
],
[
"Singh",
"Mayank",
""
],
[
"Jiang",
"Mike Tian-Jian",
""
],
[
"Vu",
"Minh Chien",
""
],
[
"Jauhar",
"Mohammad A.",
""
],
[
"Ghaleb",
"Mustafa",
""
],
[
"Subramani",
"Nishant",
""
],
[
"Kassner",
"Nora",
""
],
[
"Khamis",
"Nurulaqilla",
""
],
[
"Nguyen",
"Olivier",
""
],
[
"Espejel",
"Omar",
""
],
[
"de Gibert",
"Ona",
""
],
[
"Villegas",
"Paulo",
""
],
[
"Henderson",
"Peter",
""
],
[
"Colombo",
"Pierre",
""
],
[
"Amuok",
"Priscilla",
""
],
[
"Lhoest",
"Quentin",
""
],
[
"Harliman",
"Rheza",
""
],
[
"Bommasani",
"Rishi",
""
],
[
"López",
"Roberto Luis",
""
],
[
"Ribeiro",
"Rui",
""
],
[
"Osei",
"Salomey",
""
],
[
"Pyysalo",
"Sampo",
""
],
[
"Nagel",
"Sebastian",
""
],
[
"Bose",
"Shamik",
""
],
[
"Muhammad",
"Shamsuddeen Hassan",
""
],
[
"Sharma",
"Shanya",
""
],
[
"Longpre",
"Shayne",
""
],
[
"Nikpoor",
"Somaieh",
""
],
[
"Silberberg",
"Stanislav",
""
],
[
"Pai",
"Suhas",
""
],
[
"Zink",
"Sydney",
""
],
[
"Torrent",
"Tiago Timponi",
""
],
[
"Schick",
"Timo",
""
],
[
"Thrush",
"Tristan",
""
],
[
"Danchev",
"Valentin",
""
],
[
"Nikoulina",
"Vassilina",
""
],
[
"Laippala",
"Veronika",
""
],
[
"Lepercq",
"Violette",
""
],
[
"Prabhu",
"Vrinda",
""
],
[
"Alyafeai",
"Zaid",
""
],
[
"Talat",
"Zeerak",
""
],
[
"Raja",
"Arun",
""
],
[
"Heinzerling",
"Benjamin",
""
],
[
"Si",
"Chenglei",
""
],
[
"Taşar",
"Davut Emre",
""
],
[
"Salesky",
"Elizabeth",
""
],
[
"Mielke",
"Sabrina J.",
""
],
[
"Lee",
"Wilson Y.",
""
],
[
"Sharma",
"Abheesht",
""
],
[
"Santilli",
"Andrea",
""
],
[
"Chaffin",
"Antoine",
""
],
[
"Stiegler",
"Arnaud",
""
],
[
"Datta",
"Debajyoti",
""
],
[
"Szczechla",
"Eliza",
""
],
[
"Chhablani",
"Gunjan",
""
],
[
"Wang",
"Han",
""
],
[
"Pandey",
"Harshit",
""
],
[
"Strobelt",
"Hendrik",
""
],
[
"Fries",
"Jason Alan",
""
],
[
"Rozen",
"Jos",
""
],
[
"Gao",
"Leo",
""
],
[
"Sutawika",
"Lintang",
""
],
[
"Bari",
"M Saiful",
""
],
[
"Al-shaibani",
"Maged S.",
""
],
[
"Manica",
"Matteo",
""
],
[
"Nayak",
"Nihal",
""
],
[
"Teehan",
"Ryan",
""
],
[
"Albanie",
"Samuel",
""
],
[
"Shen",
"Sheng",
""
],
[
"Ben-David",
"Srulik",
""
],
[
"Bach",
"Stephen H.",
""
],
[
"Kim",
"Taewoon",
""
],
[
"Bers",
"Tali",
""
],
[
"Fevry",
"Thibault",
""
],
[
"Neeraj",
"Trishala",
""
],
[
"Thakker",
"Urmish",
""
],
[
"Raunak",
"Vikas",
""
],
[
"Tang",
"Xiangru",
""
],
[
"Yong",
"Zheng-Xin",
""
],
[
"Sun",
"Zhiqing",
""
],
[
"Brody",
"Shaked",
""
],
[
"Uri",
"Yallow",
""
],
[
"Tojarieh",
"Hadar",
""
],
[
"Roberts",
"Adam",
""
],
[
"Chung",
"Hyung Won",
""
],
[
"Tae",
"Jaesung",
""
],
[
"Phang",
"Jason",
""
],
[
"Press",
"Ofir",
""
],
[
"Li",
"Conglong",
""
],
[
"Narayanan",
"Deepak",
""
],
[
"Bourfoune",
"Hatim",
""
],
[
"Casper",
"Jared",
""
],
[
"Rasley",
"Jeff",
""
],
[
"Ryabinin",
"Max",
""
],
[
"Mishra",
"Mayank",
""
],
[
"Zhang",
"Minjia",
""
],
[
"Shoeybi",
"Mohammad",
""
],
[
"Peyrounette",
"Myriam",
""
],
[
"Patry",
"Nicolas",
""
],
[
"Tazi",
"Nouamane",
""
],
[
"Sanseviero",
"Omar",
""
],
[
"von Platen",
"Patrick",
""
],
[
"Cornette",
"Pierre",
""
],
[
"Lavallée",
"Pierre François",
""
],
[
"Lacroix",
"Rémi",
""
],
[
"Rajbhandari",
"Samyam",
""
],
[
"Gandhi",
"Sanchit",
""
],
[
"Smith",
"Shaden",
""
],
[
"Requena",
"Stéphane",
""
],
[
"Patil",
"Suraj",
""
],
[
"Dettmers",
"Tim",
""
],
[
"Baruwa",
"Ahmed",
""
],
[
"Singh",
"Amanpreet",
""
],
[
"Cheveleva",
"Anastasia",
""
],
[
"Ligozat",
"Anne-Laure",
""
],
[
"Subramonian",
"Arjun",
""
],
[
"Névéol",
"Aurélie",
""
],
[
"Lovering",
"Charles",
""
],
[
"Garrette",
"Dan",
""
],
[
"Tunuguntla",
"Deepak",
""
],
[
"Reiter",
"Ehud",
""
],
[
"Taktasheva",
"Ekaterina",
""
],
[
"Voloshina",
"Ekaterina",
""
],
[
"Bogdanov",
"Eli",
""
],
[
"Winata",
"Genta Indra",
""
],
[
"Schoelkopf",
"Hailey",
""
],
[
"Kalo",
"Jan-Christoph",
""
],
[
"Novikova",
"Jekaterina",
""
],
[
"Forde",
"Jessica Zosa",
""
],
[
"Clive",
"Jordan",
""
],
[
"Kasai",
"Jungo",
""
],
[
"Kawamura",
"Ken",
""
],
[
"Hazan",
"Liam",
""
],
[
"Carpuat",
"Marine",
""
],
[
"Clinciu",
"Miruna",
""
],
[
"Kim",
"Najoung",
""
],
[
"Cheng",
"Newton",
""
],
[
"Serikov",
"Oleg",
""
],
[
"Antverg",
"Omer",
""
],
[
"van der Wal",
"Oskar",
""
],
[
"Zhang",
"Rui",
""
],
[
"Zhang",
"Ruochen",
""
],
[
"Gehrmann",
"Sebastian",
""
],
[
"Mirkin",
"Shachar",
""
],
[
"Pais",
"Shani",
""
],
[
"Shavrina",
"Tatiana",
""
],
[
"Scialom",
"Thomas",
""
],
[
"Yun",
"Tian",
""
],
[
"Limisiewicz",
"Tomasz",
""
],
[
"Rieser",
"Verena",
""
],
[
"Protasov",
"Vitaly",
""
],
[
"Mikhailov",
"Vladislav",
""
],
[
"Pruksachatkun",
"Yada",
""
],
[
"Belinkov",
"Yonatan",
""
],
[
"Bamberger",
"Zachary",
""
],
[
"Kasner",
"Zdeněk",
""
],
[
"Rueda",
"Alice",
""
],
[
"Pestana",
"Amanda",
""
],
[
"Feizpour",
"Amir",
""
],
[
"Khan",
"Ammar",
""
],
[
"Faranak",
"Amy",
""
],
[
"Santos",
"Ana",
""
],
[
"Hevia",
"Anthony",
""
],
[
"Unldreaj",
"Antigona",
""
],
[
"Aghagol",
"Arash",
""
],
[
"Abdollahi",
"Arezoo",
""
],
[
"Tammour",
"Aycha",
""
],
[
"HajiHosseini",
"Azadeh",
""
],
[
"Behroozi",
"Bahareh",
""
],
[
"Ajibade",
"Benjamin",
""
],
[
"Saxena",
"Bharat",
""
],
[
"Ferrandis",
"Carlos Muñoz",
""
],
[
"McDuff",
"Daniel",
""
],
[
"Contractor",
"Danish",
""
],
[
"Lansky",
"David",
""
],
[
"David",
"Davis",
""
],
[
"Kiela",
"Douwe",
""
],
[
"Nguyen",
"Duong A.",
""
],
[
"Tan",
"Edward",
""
],
[
"Baylor",
"Emi",
""
],
[
"Ozoani",
"Ezinwanne",
""
],
[
"Mirza",
"Fatima",
""
],
[
"Ononiwu",
"Frankline",
""
],
[
"Rezanejad",
"Habib",
""
],
[
"Jones",
"Hessie",
""
],
[
"Bhattacharya",
"Indrani",
""
],
[
"Solaiman",
"Irene",
""
],
[
"Sedenko",
"Irina",
""
],
[
"Nejadgholi",
"Isar",
""
],
[
"Passmore",
"Jesse",
""
],
[
"Seltzer",
"Josh",
""
],
[
"Sanz",
"Julio Bonis",
""
],
[
"Dutra",
"Livia",
""
],
[
"Samagaio",
"Mairon",
""
],
[
"Elbadri",
"Maraim",
""
],
[
"Mieskes",
"Margot",
""
],
[
"Gerchick",
"Marissa",
""
],
[
"Akinlolu",
"Martha",
""
],
[
"McKenna",
"Michael",
""
],
[
"Qiu",
"Mike",
""
],
[
"Ghauri",
"Muhammed",
""
],
[
"Burynok",
"Mykola",
""
],
[
"Abrar",
"Nafis",
""
],
[
"Rajani",
"Nazneen",
""
],
[
"Elkott",
"Nour",
""
],
[
"Fahmy",
"Nour",
""
],
[
"Samuel",
"Olanrewaju",
""
],
[
"An",
"Ran",
""
],
[
"Kromann",
"Rasmus",
""
],
[
"Hao",
"Ryan",
""
],
[
"Alizadeh",
"Samira",
""
],
[
"Shubber",
"Sarmad",
""
],
[
"Wang",
"Silas",
""
],
[
"Roy",
"Sourav",
""
],
[
"Viguier",
"Sylvain",
""
],
[
"Le",
"Thanh",
""
],
[
"Oyebade",
"Tobi",
""
],
[
"Le",
"Trieu",
""
],
[
"Yang",
"Yoyo",
""
],
[
"Nguyen",
"Zach",
""
],
[
"Kashyap",
"Abhinav Ramesh",
""
],
[
"Palasciano",
"Alfredo",
""
],
[
"Callahan",
"Alison",
""
],
[
"Shukla",
"Anima",
""
],
[
"Miranda-Escalada",
"Antonio",
""
],
[
"Singh",
"Ayush",
""
],
[
"Beilharz",
"Benjamin",
""
],
[
"Wang",
"Bo",
""
],
[
"Brito",
"Caio",
""
],
[
"Zhou",
"Chenxi",
""
],
[
"Jain",
"Chirag",
""
],
[
"Xu",
"Chuxin",
""
],
[
"Fourrier",
"Clémentine",
""
],
[
"Periñán",
"Daniel León",
""
],
[
"Molano",
"Daniel",
""
],
[
"Yu",
"Dian",
""
],
[
"Manjavacas",
"Enrique",
""
],
[
"Barth",
"Fabio",
""
],
[
"Fuhrimann",
"Florian",
""
],
[
"Altay",
"Gabriel",
""
],
[
"Bayrak",
"Giyaseddin",
""
],
[
"Burns",
"Gully",
""
],
[
"Vrabec",
"Helena U.",
""
],
[
"Bello",
"Imane",
""
],
[
"Dash",
"Ishani",
""
],
[
"Kang",
"Jihyun",
""
],
[
"Giorgi",
"John",
""
],
[
"Golde",
"Jonas",
""
],
[
"Posada",
"Jose David",
""
],
[
"Sivaraman",
"Karthik Rangasai",
""
],
[
"Bulchandani",
"Lokesh",
""
],
[
"Liu",
"Lu",
""
],
[
"Shinzato",
"Luisa",
""
],
[
"de Bykhovetz",
"Madeleine Hahn",
""
],
[
"Takeuchi",
"Maiko",
""
],
[
"Pàmies",
"Marc",
""
],
[
"Castillo",
"Maria A",
""
],
[
"Nezhurina",
"Marianna",
""
],
[
"Sänger",
"Mario",
""
],
[
"Samwald",
"Matthias",
""
],
[
"Cullan",
"Michael",
""
],
[
"Weinberg",
"Michael",
""
],
[
"De Wolf",
"Michiel",
""
],
[
"Mihaljcic",
"Mina",
""
],
[
"Liu",
"Minna",
""
],
[
"Freidank",
"Moritz",
""
],
[
"Kang",
"Myungsun",
""
],
[
"Seelam",
"Natasha",
""
],
[
"Dahlberg",
"Nathan",
""
],
[
"Broad",
"Nicholas Michio",
""
],
[
"Muellner",
"Nikolaus",
""
],
[
"Fung",
"Pascale",
""
],
[
"Haller",
"Patrick",
""
],
[
"Chandrasekhar",
"Ramya",
""
],
[
"Eisenberg",
"Renata",
""
],
[
"Martin",
"Robert",
""
],
[
"Canalli",
"Rodrigo",
""
],
[
"Su",
"Rosaline",
""
],
[
"Su",
"Ruisi",
""
],
[
"Cahyawijaya",
"Samuel",
""
],
[
"Garda",
"Samuele",
""
],
[
"Deshmukh",
"Shlok S",
""
],
[
"Mishra",
"Shubhanshu",
""
],
[
"Kiblawi",
"Sid",
""
],
[
"Ott",
"Simon",
""
],
[
"Sang-aroonsiri",
"Sinee",
""
],
[
"Kumar",
"Srishti",
""
],
[
"Schweter",
"Stefan",
""
],
[
"Bharati",
"Sushil",
""
],
[
"Laud",
"Tanmay",
""
],
[
"Gigant",
"Théo",
""
],
[
"Kainuma",
"Tomoya",
""
],
[
"Kusa",
"Wojciech",
""
],
[
"Labrak",
"Yanis",
""
],
[
"Bajaj",
"Yash Shailesh",
""
],
[
"Venkatraman",
"Yash",
""
],
[
"Xu",
"Yifan",
""
],
[
"Xu",
"Yingxin",
""
],
[
"Xu",
"Yu",
""
],
[
"Tan",
"Zhe",
""
],
[
"Xie",
"Zhongli",
""
],
[
"Ye",
"Zifan",
""
],
[
"Bras",
"Mathilde",
""
],
[
"Belkada",
"Younes",
""
],
[
"Wolf",
"Thomas",
""
]
] |
new_dataset
| 0.99812 |
2212.00964
|
Tianju Xue
|
Tianju Xue, Shuheng Liao, Zhengtao Gan, Chanwook Park, Xiaoyu Xie,
Wing Kam Liu, Jian Cao
|
JAX-FEM: A differentiable GPU-accelerated 3D finite element solver for
automatic inverse design and mechanistic data science
| null | null |
10.1016/j.cpc.2023.108802
| null |
cs.MS cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces JAX-FEM, an open-source differentiable finite element
method (FEM) library. Constructed on top of Google JAX, a rising machine
learning library focusing on high-performance numerical computing, JAX-FEM is
implemented in pure Python while remaining scalable enough to efficiently solve
problems of moderate to large size. For example, in a 3D tensile loading problem with 7.7
million degrees of freedom, JAX-FEM with GPU achieves around 10$\times$
acceleration compared to a commercial FEM code depending on platform. Beyond
efficiently solving forward problems, JAX-FEM employs the automatic
differentiation technique so that inverse problems are solved in a fully
automatic manner without the need to manually derive sensitivities. Examples of
3D topology optimization of nonlinear materials are shown to achieve optimal
compliance. Finally, JAX-FEM is an integrated platform for machine
learning-aided computational mechanics. We show an example of data-driven
multi-scale computations of a composite material where JAX-FEM provides an
all-in-one solution from microscopic data generation and model training to
macroscopic FE computations. The source code of the library and these examples
are shared with the community to facilitate computational mechanics research.
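As a minimal illustration of the automatic-differentiation workflow described above, the following sketch differentiates a compliance objective through a toy linear solve with JAX. It is not the JAX-FEM API; the stiffness parameterization and objective are assumptions for illustration only.

```python
# Hypothetical sketch (not the JAX-FEM API): differentiating a compliance
# objective through a linear FE-style solve with JAX autodiff.
import jax
import jax.numpy as jnp

def stiffness(theta):
    # Toy "assembled" stiffness matrix parameterized by a design variable.
    base = jnp.array([[2.0, -1.0], [-1.0, 2.0]])
    return theta * base

def compliance(theta, f):
    u = jnp.linalg.solve(stiffness(theta), f)  # forward solve
    return f @ u                               # compliance objective

f = jnp.array([1.0, 0.5])
dJ_dtheta = jax.grad(compliance)(1.5, f)       # sensitivity, no manual adjoint
print(dJ_dtheta)
```

The key point mirrored here is that no sensitivities are derived by hand: `jax.grad` propagates derivatives through the solve automatically.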
|
[
{
"version": "v1",
"created": "Fri, 2 Dec 2022 04:39:14 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Xue",
"Tianju",
""
],
[
"Liao",
"Shuheng",
""
],
[
"Gan",
"Zhengtao",
""
],
[
"Park",
"Chanwook",
""
],
[
"Xie",
"Xiaoyu",
""
],
[
"Liu",
"Wing Kam",
""
],
[
"Cao",
"Jian",
""
]
] |
new_dataset
| 0.999378 |
2303.00668
|
Zhi Zheng
|
Zhi Zheng, Jin Wang, Yuze Wu, Qifeng Cai, Huan Yu, Ruibin Zhang, Jie
Tu, Jun Meng, Guodong Lu, and Fei Gao
|
Roller-Quadrotor: A Novel Hybrid Terrestrial/Aerial Quadrotor with
Unicycle-Driven and Rotor-Assisted Turning
|
8 pages, 10 figures, accepted by 2023 IEEE/RSJ International
Conference on Intelligent Robots(IROS). This work has been submitted to the
IEEE for possible publication. Copyright may be transferred without notice,
after which this version may no longer be accessible
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Roller-Quadrotor is a novel quadrotor that combines the maneuverability
of aerial drones with the endurance of ground vehicles. This work focuses on
the design, modeling, and experimental validation of the Roller-Quadrotor.
Flight capabilities are achieved through a quadrotor configuration, with four
thrust-providing actuators. Additionally, rolling motion is facilitated by a
unicycle-driven and rotor-assisted turning structure. By utilizing terrestrial
locomotion, the vehicle can overcome rolling and turning resistance, thereby
conserving energy compared to its flight mode. This innovative approach not
only tackles the inherent challenges of traditional rotorcraft but also enables
the vehicle to navigate through narrow gaps and overcome obstacles by taking
advantage of its aerial mobility. We develop comprehensive models and
controllers for the Roller-Quadrotor and validate their performance through
experiments. The results demonstrate its seamless transition between aerial and
terrestrial locomotion, as well as its ability to safely navigate through gaps
half the size of its diameter. Moreover, the terrestrial range of the vehicle
is approximately 2.8 times greater, while the operating time is about 41.2
times longer compared to its aerial capabilities. These findings underscore the
feasibility and effectiveness of the proposed structure and control mechanisms
for efficient navigation through challenging terrains while conserving energy.
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 17:05:16 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2023 15:29:51 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Jun 2023 02:07:44 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Zheng",
"Zhi",
""
],
[
"Wang",
"Jin",
""
],
[
"Wu",
"Yuze",
""
],
[
"Cai",
"Qifeng",
""
],
[
"Yu",
"Huan",
""
],
[
"Zhang",
"Ruibin",
""
],
[
"Tu",
"Jie",
""
],
[
"Meng",
"Jun",
""
],
[
"Lu",
"Guodong",
""
],
[
"Gao",
"Fei",
""
]
] |
new_dataset
| 0.999627 |
2305.12140
|
Zihao Yue
|
Zihao Yue, Qi Zhang, Anwen Hu, Liang Zhang, Ziheng Wang and Qin Jin
|
Movie101: A New Movie Understanding Benchmark
|
Accepted to ACL 2023
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To help the visually impaired enjoy movies, automatic movie narrating systems
are expected to narrate accurate, coherent, and role-aware plots when there are
no speaking lines of actors. Existing works benchmark this challenge as a
normal video captioning task via some simplifications, such as removing role
names and evaluating narrations with n-gram-based metrics, which makes it
difficult for automatic systems to meet the needs of real application
scenarios. To narrow this gap, we construct a large-scale Chinese movie
benchmark, named Movie101. Closer to real scenarios, the Movie Clip Narrating
(MCN) task in our benchmark asks models to generate role-aware narration
paragraphs for complete movie clips where no actors are speaking. External
knowledge, such as role information and movie genres, is also provided for
better movie understanding. Besides, we propose a new metric called Movie
Narration Score (MNScore) for movie narrating evaluation, which achieves the
best correlation with human evaluation. Our benchmark also supports the
Temporal Narration Grounding (TNG) task to investigate clip localization given
text descriptions. For both tasks, our proposed methods effectively leverage
external knowledge and outperform carefully designed baselines. The dataset and
codes are released at https://github.com/yuezih/Movie101.
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 08:43:51 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Jun 2023 11:42:44 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Yue",
"Zihao",
""
],
[
"Zhang",
"Qi",
""
],
[
"Hu",
"Anwen",
""
],
[
"Zhang",
"Liang",
""
],
[
"Wang",
"Ziheng",
""
],
[
"Jin",
"Qin",
""
]
] |
new_dataset
| 0.998398 |
2306.15024
|
Ferenc B\'eres
|
Ferenc B\'eres, Istv\'an Andr\'as Seres, Domokos M. Kelen, Andr\'as A.
Bencz\'ur
|
ethp2psim: Evaluating and deploying privacy-enhanced peer-to-peer
routing protocols for the Ethereum network
| null | null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Network-level privacy is the Achilles heel of financial privacy in
cryptocurrencies. Financial privacy amounts to achieving and maintaining
blockchain- and network-level privacy. Blockchain-level privacy recently
received substantial attention. Specifically, several privacy-enhancing
technologies were proposed and deployed to enhance blockchain-level privacy. On
the other hand, network-level privacy, i.e., privacy on the peer-to-peer layer,
has seen far less attention and development. In this work, we aim to provide a
peer-to-peer network simulator, ethp2psim, that allows researchers to evaluate
the privacy guarantees of privacy-enhanced broadcast and message routing
algorithms. Our goal is two-fold. First, we want to enable researchers to
implement their proposed protocols in our modular simulator framework. Second,
our simulator allows researchers to evaluate the privacy guarantees of
privacy-enhanced routing algorithms. Finally, ethp2psim can help choose the
right protocol parameters for efficient, robust, and private deployment.
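The kind of privacy evaluation described above can be illustrated with a toy simulation. The sketch below estimates how often naive flooding exposes a message source to an adversary controlling a fraction of "spy" nodes on a random graph; it is a generic illustration of the methodology, not the ethp2psim API.

```python
# Illustrative sketch only (not the ethp2psim API): estimating a "first-spy"
# exposure rate for naive flooding on a random peer-to-peer graph. Under
# flooding, a source with at least one spy neighbor is directly identifiable.
import random
import networkx as nx

def first_spy_hit_rate(n=200, p=0.05, spy_frac=0.1, trials=500, seed=0):
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(n, p, seed=seed)
    spies = set(rng.sample(list(g.nodes), int(spy_frac * n)))
    hits = 0
    for _ in range(trials):
        source = rng.choice([v for v in g.nodes if v not in spies])
        spy_neighbors = [v for v in g.neighbors(source) if v in spies]
        if spy_neighbors:
            hits += 1  # source directly exposed to at least one spy
    return hits / trials

print(first_spy_hit_rate())
```

Privacy-enhanced routing protocols (e.g., Dandelion-style stem phases) aim to drive this kind of exposure rate down, which is exactly what a simulator of this sort lets researchers measure.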
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 19:31:33 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Béres",
"Ferenc",
""
],
[
"Seres",
"István András",
""
],
[
"Kelen",
"Domokos M.",
""
],
[
"Benczúr",
"András A.",
""
]
] |
new_dataset
| 0.993323 |
2306.15073
|
Li Ding
|
Li Ding, Jack Terwilliger, Aishni Parab, Meng Wang, Lex Fridman, Bruce
Mehler, Bryan Reimer
|
CLERA: A Unified Model for Joint Cognitive Load and Eye Region Analysis
in the Wild
|
ACM Transactions on Computer-Human Interaction
| null |
10.1145/3603622
| null |
cs.HC cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-intrusive, real-time analysis of the dynamics of the eye region allows us
to monitor humans' visual attention allocation and estimate their mental state
during the performance of real-world tasks, which can potentially benefit a
wide range of human-computer interaction (HCI) applications. While commercial
eye-tracking devices have been frequently employed, the difficulty of
customizing these devices places unnecessary constraints on the exploration of
more efficient, end-to-end models of eye dynamics. In this work, we propose
CLERA, a unified model for Cognitive Load and Eye Region Analysis, which
achieves precise keypoint detection and spatiotemporal tracking in a
joint-learning framework. Our method demonstrates significant efficiency and
outperforms prior work on tasks including cognitive load estimation, eye
landmark detection, and blink estimation. We also introduce a large-scale
dataset of 30k human faces with joint pupil, eye-openness, and landmark
annotation, which aims to support future HCI research on human factors and
eye-related analysis.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 21:20:23 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Ding",
"Li",
""
],
[
"Terwilliger",
"Jack",
""
],
[
"Parab",
"Aishni",
""
],
[
"Wang",
"Meng",
""
],
[
"Fridman",
"Lex",
""
],
[
"Mehler",
"Bruce",
""
],
[
"Reimer",
"Bryan",
""
]
] |
new_dataset
| 0.997589 |
2306.15087
|
Virginia K. Felkner
|
Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May
|
WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in
Large Language Models
|
Accepted to ACL 2023 (main conference). Camera-ready version
| null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
We present WinoQueer: a benchmark specifically designed to measure whether
large language models (LLMs) encode biases that are harmful to the LGBTQ+
community. The benchmark is community-sourced, via application of a novel
method that generates a bias benchmark from a community survey. We apply our
benchmark to several popular LLMs and find that off-the-shelf models generally
do exhibit considerable anti-queer bias. Finally, we show that LLM bias against
a marginalized community can be somewhat mitigated by finetuning on data
written about or by members of that community, and that social media text
written by community members is more effective than news text written about the
community by non-members. Our method for community-in-the-loop benchmark
development provides a blueprint for future researchers to develop
community-driven, harms-grounded LLM benchmarks for other marginalized
communities.
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 22:07:33 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Felkner",
"Virginia K.",
""
],
[
"Chang",
"Ho-Chun Herbert",
""
],
[
"Jang",
"Eugene",
""
],
[
"May",
"Jonathan",
""
]
] |
new_dataset
| 0.981691 |
2306.15111
|
Chuanyang Jin
|
Chuanyang Jin
|
Semi-Supervised Image Captioning with CLIP
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Image captioning, a fundamental task in vision-language understanding, seeks
to generate accurate natural language descriptions for provided images. The
CLIP model, with its rich semantic features learned from a large corpus of
image-text pairs, is well-suited for this task. In this paper, we present a
two-stage semi-supervised image captioning approach that exploits the potential
of CLIP encoding. Our model comprises a CLIP visual encoder, a mapping network,
and a language model for text generation. In the initial stage, we train the
model using a small labeled dataset by contrasting the generated captions with
the ground truth captions. In the subsequent stage, we continue the training
using unlabeled images, aiming to maximize the image-caption similarity based
on CLIP embeddings. Remarkably, despite utilizing less than 2% of the
COCO-captions, our approach delivers a performance comparable to
state-of-the-art models trained on the complete dataset. Furthermore, the
captions generated by our approach are more distinctive, informative, and in
line with human preference.
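The stage-two objective described above, maximizing image-caption similarity under CLIP embeddings, can be sketched as follows. The checkpoint name and the surrounding training loop are assumptions, not the paper's exact setup.

```python
# Hedged sketch of the stage-two loss: score generated captions against
# images with CLIP and maximize cosine similarity of their embeddings.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity_loss(images, captions):
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True)
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    # Maximizing cosine similarity == minimizing its negative.
    return -(img * txt).sum(dim=-1).mean()
```

Because this loss needs no reference captions, it is what allows the second training stage to run on unlabeled images.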
|
[
{
"version": "v1",
"created": "Mon, 26 Jun 2023 23:29:16 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Jin",
"Chuanyang",
""
]
] |
new_dataset
| 0.987456 |
2306.15162
|
David Uthus
|
David Uthus, Garrett Tanzer, Manfred Georg
|
YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English
Parallel Corpus
| null | null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning for sign languages is bottlenecked by data. In this paper,
we present YouTube-ASL, a large-scale, open-domain corpus of American Sign
Language (ASL) videos and accompanying English captions drawn from YouTube.
With ~1000 hours of videos and >2500 unique signers, YouTube-ASL is ~3x as
large and has ~10x as many unique signers as the largest prior ASL dataset. We
train baseline models for ASL to English translation on YouTube-ASL and
evaluate them on How2Sign, where we achieve a new finetuned state of the art of
12.39 BLEU and, for the first time, report zero-shot results.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 02:44:07 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Uthus",
"David",
""
],
[
"Tanzer",
"Garrett",
""
],
[
"Georg",
"Manfred",
""
]
] |
new_dataset
| 0.999902 |
2306.15390
|
Yanjing Li
|
Yanjing Li, Sheng Xu, Xianbin Cao, Li'an Zhuo, Baochang Zhang, Tian
Wang, Guodong Guo
|
DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit
CNNs
|
Accepted by International Journal of Computer Vision
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Neural architecture search (NAS) has proven to be an effective approach for
many tasks by generating application-adaptive neural architectures, but it
remains challenged by high computational cost and memory consumption. At the
same time, 1-bit convolutional neural networks (CNNs) with binary weights and
activations show their potential for resource-limited embedded devices. One
natural approach is to use 1-bit CNNs to reduce the computation and memory cost
of NAS by taking advantage of the strengths of each in a unified framework,
while searching the 1-bit CNNs is more challenging due to the more complicated
processes involved. In this paper, we introduce Discrepant Child-Parent Neural
Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs, based on a new
framework of searching the 1-bit model (Child) under the supervision of a
real-valued model (Parent). Particularly, we first utilize a Parent model to
calculate a tangent direction, based on which the tangent propagation method is
introduced to search the optimized 1-bit Child. We further observe a coupling
relationship between the weights and architecture parameters existing in such
differentiable frameworks. To address the issue, we propose a decoupled
optimization method to search an optimized architecture. Extensive experiments
demonstrate that our DCP-NAS achieves much better results than prior arts on
both CIFAR-10 and ImageNet datasets. In particular, the backbones achieved by
our DCP-NAS achieve strong generalization performance on person
re-identification and object detection.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 11:28:29 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Li",
"Yanjing",
""
],
[
"Xu",
"Sheng",
""
],
[
"Cao",
"Xianbin",
""
],
[
"Zhuo",
"Li'an",
""
],
[
"Zhang",
"Baochang",
""
],
[
"Wang",
"Tian",
""
],
[
"Guo",
"Guodong",
""
]
] |
new_dataset
| 0.951802 |
2306.15395
|
Michael Bekos
|
Michael A. Bekos, Michael Kaufmann, Maria Eleni Pavlidi, Xenia Rieger
|
On the Deque and Rique Numbers of Complete and Complete Bipartite Graphs
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Several types of linear layouts of graphs are obtained by leveraging known
data structures; the most notable representatives are the stack and the queue
layouts. In this context, given a data structure, one seeks to specify an order
of the vertices of the graph and a partition of its edges into pages, such that
the endpoints of the edges assigned to each page can be processed by the given
data structure in the underlying order. In this paper, we study deque and rique
layouts of graphs obtained by leveraging the double-ended queue and the
restricted-input double-ended queue (or deque and rique, for short),
respectively. Hence, they generalize both the stack and the queue layouts. We
focus on complete and complete bipartite graphs and present bounds on their
deque- and rique-numbers, that is, on the minimum number of pages needed by
either of these two types of linear layouts.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 11:37:33 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Bekos",
"Michael A.",
""
],
[
"Kaufmann",
"Michael",
""
],
[
"Pavlidi",
"Maria Eleni",
""
],
[
"Rieger",
"Xenia",
""
]
] |
new_dataset
| 0.998784 |
2306.15442
|
Gongyang Li
|
Gongyang Li and Chengjun Han and Zhi Liu
|
No-Service Rail Surface Defect Segmentation via Normalized Attention and
Dual-scale Interaction
|
10 pages, 6 figures, Accepted by IEEE Transactions on Instrumentation
and Measurement 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
No-service rail surface defect (NRSD) segmentation is an essential means of
perceiving the quality of no-service rails. However, due to the complex and
diverse outlines and low-contrast textures of no-service rails, existing
natural image segmentation methods cannot achieve promising performance in NRSD
images, especially in some unique and challenging NRSD scenes. To this end, in
this paper, we propose a novel segmentation network for NRSDs based on
Normalized Attention and Dual-scale Interaction, named NaDiNet. Specifically,
NaDiNet follows the enhancement-interaction paradigm. The Normalized
Channel-wise Self-Attention Module (NAM) and the Dual-scale Interaction Block
(DIB) are two key components of NaDiNet. NAM is a specific extension of the
channel-wise self-attention mechanism (CAM) to enhance features extracted from
low-contrast NRSD images. The softmax layer in CAM will produce very small
correlation coefficients which are not conducive to low-contrast feature
enhancement. Instead, in NAM, we directly calculate the normalized correlation
coefficient between channels to enlarge the feature differentiation. DIB is
specifically designed for the feature interaction of the enhanced features. It
has two interaction branches with dual scales, one for fine-grained clues and
the other for coarse-grained clues. With both branches working together, DIB
can perceive defect regions of different granularities. With these modules
working together, our NaDiNet can generate accurate segmentation maps. Extensive
experiments on the public NRSD-MN dataset with man-made and natural NRSDs
demonstrate that our proposed NaDiNet with various backbones (i.e., VGG,
ResNet, and DenseNet) consistently outperforms 10 state-of-the-art methods. The
code and results of our method are available at
https://github.com/monxxcn/NaDiNet.
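The core idea of NAM, replacing the softmax in channel-wise self-attention with a normalized correlation between channels, can be sketched in PyTorch as follows; the layer details are assumptions, not the released implementation.

```python
# Minimal sketch of normalized channel-wise attention: cosine-style
# correlation between channels instead of a softmax over similarities,
# so correlation coefficients are not squashed toward zero.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedChannelAttention(nn.Module):
    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)
        flat_n = F.normalize(flat, dim=-1)      # unit-norm per channel
        attn = torch.bmm(flat_n, flat_n.transpose(1, 2))  # (B, C, C) in [-1, 1]
        out = torch.bmm(attn, flat).view(b, c, h, w)
        return out + x                          # residual connection

y = NormalizedChannelAttention()(torch.randn(2, 16, 8, 8))
print(y.shape)
```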
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 12:58:16 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Li",
"Gongyang",
""
],
[
"Han",
"Chengjun",
""
],
[
"Liu",
"Zhi",
""
]
] |
new_dataset
| 0.98677 |
2306.15541
|
Vincenzo Miracula
|
Vincenzo Miracula, Antonio Picone
|
Unleashing the Power of User Reviews: Exploring Airline Choices at
Catania Airport, Italy
|
arXiv admin note: text overlap with arXiv:1311.3475 by other authors
| null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
This study investigates the possible relationship between the mechanisms of
social influence and the choice of airline, using new tools to understand
whether they can contribute to a better understanding of the factors
influencing consumer decisions in
the aviation sector. We have chosen to extract user reviews from well-known
platforms: Trustpilot, Google, and Twitter. By combining web scraping
techniques, we have been able to collect a comprehensive dataset comprising a
wide range of user opinions, feedback, and ratings. We then refined the BERT
model to focus on insightful sentiment in the context of airline reviews.
Through our analysis, we observed an intriguing trend of average negative
sentiment scores across various airlines, giving us deeper insight into the
dynamics between airlines and helping us identify key partnerships, popular
routes, and airlines that play a central role in the aeronautical ecosystem of
Catania airport during the specified period. Our investigation led us to find
that, despite an airline having received prestigious awards as a low-cost
leader in Europe for two consecutive years, 2021 and 2022, the "Catanese" user
tends to remain subject to the dominant position of other airlines. Understanding the
impact of positive reviews and leveraging sentiment analysis can help airlines
improve their reputation, attract more customers, and ultimately gain a
competitive edge in the marketplace.
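The review-scoring step described above can be sketched with an off-the-shelf fine-tuned BERT sentiment model; the checkpoint name and label mapping below are assumptions, not the study's exact pipeline.

```python
# Illustrative sketch: scoring airline reviews with a BERT-style sentiment
# classifier, then averaging per airline. Model choice is an assumption.
from collections import defaultdict
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="nlptown/bert-base-multilingual-uncased-sentiment")

reviews = [("AirlineA", "Flight delayed three hours, no information given."),
           ("AirlineB", "Smooth boarding and a friendly crew.")]

scores = defaultdict(list)
for airline, text in reviews:
    result = classifier(text)[0]       # e.g. {'label': '1 star', 'score': ...}
    stars = int(result["label"][0])    # map "N star(s)" label to 1..5
    scores[airline].append(stars)

for airline, s in scores.items():
    print(airline, sum(s) / len(s))
```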
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 15:10:57 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Miracula",
"Vincenzo",
""
],
[
"Picone",
"Antonio",
""
]
] |
new_dataset
| 0.995182 |
2306.15559
|
Jan Von Der Assen
|
Jan von der Assen, Alberto Huertas Celdr\'an, Janik Luechinger, Pedro
Miguel S\'anchez S\'anchez, G\'er\^ome Bovet, Gregorio Mart\'inez P\'erez,
Burkhard Stiller
|
RansomAI: AI-powered Ransomware for Stealthy Encryption
| null | null | null | null |
cs.CR cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cybersecurity solutions have shown promising performance when detecting
ransomware samples that use fixed algorithms and encryption rates. However, due
to the current explosion of Artificial Intelligence (AI), sooner than later,
ransomware (and malware in general) will incorporate AI techniques to
intelligently and dynamically adapt its encryption behavior to be undetected.
It might result in ineffective and obsolete cybersecurity solutions, but the
literature lacks AI-powered ransomware to verify it. Thus, this work proposes
RansomAI, a Reinforcement Learning-based framework that can be integrated into
existing ransomware samples to adapt their encryption behavior and stay
stealthy while encrypting files. RansomAI presents an agent that learns the
best encryption algorithm, rate, and duration that minimizes its detection
(using a reward mechanism and a fingerprinting intelligent detection system)
while maximizing its damage function. The proposed framework was validated in a
ransomware, Ransomware-PoC, that infected a Raspberry Pi 4, acting as a
crowdsensor. A pool of experiments with Deep Q-Learning and Isolation Forest
(deployed on the agent and detection system, respectively) has demonstrated
that RansomAI evades the detection of Ransomware-PoC affecting the Raspberry Pi
4 in a few minutes with >90% accuracy.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 15:36:12 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"von der Assen",
"Jan",
""
],
[
"Celdrán",
"Alberto Huertas",
""
],
[
"Luechinger",
"Janik",
""
],
[
"Sánchez",
"Pedro Miguel Sánchez",
""
],
[
"Bovet",
"Gérôme",
""
],
[
"Pérez",
"Gregorio Martínez",
""
],
[
"Stiller",
"Burkhard",
""
]
] |
new_dataset
| 0.999192 |
2306.15566
|
Jan Von Der Assen
|
Jan von der Assen, Alberto Huertas Celdr\'an, Rinor Sefa, G\'er\^ome
Bovet, Burkhard Stiller
|
MTFS: a Moving Target Defense-Enabled File System for Malware Mitigation
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Ransomware has remained one of the most notorious threats in the
cybersecurity field. Moving Target Defense (MTD) has been proposed as a novel
paradigm for proactive defense. Although various approaches leverage MTD, few
of them rely on the operating system and, specifically, the file system,
thereby making them dependent on other computing devices. Furthermore, existing
ransomware defense techniques merely replicate or detect attacks, without
preventing them. Thus, this paper introduces the MTFS overlay file system and
the design and implementation of three novel MTD techniques implemented on top
of it. One delaying attackers, one trapping recursive directory traversal, and
another one hiding file types. The effectiveness of the techniques are shown in
two experiments. First, it is shown that the techniques can delay and mitigate
ransomware on real IoT devices. Secondly, in a broader scope, the solution was
confronted with 14 ransomware samples, highlighting that it can save 97% of the
files.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 15:44:21 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"von der Assen",
"Jan",
""
],
[
"Celdrán",
"Alberto Huertas",
""
],
[
"Sefa",
"Rinor",
""
],
[
"Bovet",
"Gérôme",
""
],
[
"Stiller",
"Burkhard",
""
]
] |
new_dataset
| 0.998867 |
2306.15604
|
Ryo Sekizawa
|
Ryo Sekizawa, Nan Duan, Shuai Lu, Hitomi Yanaka
|
Constructing Multilingual Code Search Dataset Using Neural Machine
Translation
|
To appear in the Proceedings of the ACL2023 Student Research Workshop
(SRW)
| null | null | null |
cs.CL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Code search is a task to find programming codes that semantically match the
given natural language queries. Even though some of the existing datasets for
this task are multilingual on the programming language side, their query data
are only in English. In this research, we create a multilingual code search
dataset in four natural and four programming languages using a neural machine
translation model. Using our dataset, we pre-train and fine-tune the
Transformer-based models and then evaluate them on multiple code search test
sets. Our results show that the model pre-trained with all natural and
programming language data has performed best in most cases. By applying
back-translation data filtering to our dataset, we demonstrate that the
translation quality affects the model's performance to a certain extent, but
the data size matters more.
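The query-translation step used to build such a multilingual dataset can be sketched with an off-the-shelf NMT model; the model choice below is an assumption, not necessarily the one used in the paper.

```python
# Illustrative sketch: translating English code-search queries into another
# natural language with a pre-trained Marian NMT model to build
# multilingual query/code pairs.
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"   # assumed checkpoint for illustration
tok = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

queries = ["sort a list of dictionaries by a key",
           "read a csv file into a dataframe"]
batch = tok(queries, return_tensors="pt", padding=True)
out = model.generate(**batch)
print([tok.decode(t, skip_special_tokens=True) for t in out])
```

Back-translating the outputs and filtering pairs whose round trip drifts too far is one simple way to implement the translation-quality filtering the abstract mentions.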
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 16:42:36 GMT"
}
] | 2023-06-28T00:00:00 |
[
[
"Sekizawa",
"Ryo",
""
],
[
"Duan",
"Nan",
""
],
[
"Lu",
"Shuai",
""
],
[
"Yanaka",
"Hitomi",
""
]
] |
new_dataset
| 0.999832 |
2103.14074
|
Cesar Augusto Ipanaque Zapata Prof.
|
Cesar A. Ipanaque Zapata and Jes\'us Gonz\'alez
|
Parametrised collision-free optimal motion planning algorithms in
Euclidean spaces
|
16 pages. Final version. To appear in Morfismos
| null | null | null |
cs.RO math.AT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe parametrised motion planning algorithms for systems controlling
objects represented by points that move without collisions in an
even-dimensional Euclidean space and in the presence of up to three obstacles with
\emph{a priori} unknown positions. Our algorithms are optimal in the sense that
the parametrised local planners have minimal possible size.
|
[
{
"version": "v1",
"created": "Thu, 25 Mar 2021 18:51:04 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2023 05:56:47 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zapata",
"Cesar A. Ipanaque",
""
],
[
"González",
"Jesús",
""
]
] |
new_dataset
| 0.976394 |
2111.02926
|
Chengyuan Deng
|
Chengyuan Deng, Shihang Feng, Hanchen Wang, Xitong Zhang, Peng Jin,
Yinan Feng, Qili Zeng, Yinpeng Chen, Youzuo Lin
|
OpenFWI: Large-Scale Multi-Structural Benchmark Datasets for Seismic
Full Waveform Inversion
|
This manuscript has been accepted by NeurIPS 2022 dataset and
benchmark track
| null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Full waveform inversion (FWI) is widely used in geophysics to reconstruct
high-resolution velocity maps from seismic data. The recent success of
data-driven FWI methods results in a rapidly increasing demand for open
datasets to serve the geophysics community. We present OpenFWI, a collection of
large-scale multi-structural benchmark datasets, to facilitate diversified,
rigorous, and reproducible research on FWI. In particular, OpenFWI consists of
12 datasets (2.1TB in total) synthesized from multiple sources. It encompasses
diverse domains in geophysics (interface, fault, CO2 reservoir, etc.), covers
different geological subsurface structures (flat, curve, etc.), and contains
various amounts of data samples (2K - 67K). It also includes a dataset for 3D
FWI. Moreover, we use OpenFWI to perform benchmarking over four deep learning
methods, covering both supervised and unsupervised learning regimes. Along with
the benchmarks, we implement additional experiments, including physics-driven
methods, complexity analysis, generalization study, uncertainty quantification,
and so on, to sharpen our understanding of datasets and methods. The studies
either provide valuable insights into the datasets and the performance, or
uncover their current limitations. We hope OpenFWI supports prospective
research on FWI and inspires future open-source efforts on AI for science. All
datasets and related information can be accessed through our website at
https://openfwi-lanl.github.io/
|
[
{
"version": "v1",
"created": "Thu, 4 Nov 2021 15:03:40 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Feb 2022 17:26:31 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Jun 2022 15:54:19 GMT"
},
{
"version": "v4",
"created": "Mon, 29 Aug 2022 15:05:35 GMT"
},
{
"version": "v5",
"created": "Sat, 19 Nov 2022 16:46:26 GMT"
},
{
"version": "v6",
"created": "Sat, 24 Jun 2023 00:02:32 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Deng",
"Chengyuan",
""
],
[
"Feng",
"Shihang",
""
],
[
"Wang",
"Hanchen",
""
],
[
"Zhang",
"Xitong",
""
],
[
"Jin",
"Peng",
""
],
[
"Feng",
"Yinan",
""
],
[
"Zeng",
"Qili",
""
],
[
"Chen",
"Yinpeng",
""
],
[
"Lin",
"Youzuo",
""
]
] |
new_dataset
| 0.999858 |
2205.08207
|
Baosheng Zhang
|
Baosheng Zhang, Xiaoguang Ma, Hongjun Ma and Chunbo Luo
|
DynPL-SVO: A Robust Stereo Visual Odometry for Dynamic Scenes
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most feature-based stereo visual odometry (SVO) approaches estimate the
motion of mobile robots by matching and tracking point features along a
sequence of stereo images. However, in dynamic scenes mainly comprising moving
pedestrians, vehicles, etc., there are insufficient robust static point
features to enable accurate motion estimation, causing failures when
reconstructing robotic motion. In this paper, we propose DynPL-SVO, a complete
dynamic SVO method that integrates unified cost functions containing information
between matched point features and re-projection errors perpendicular and
parallel to the direction of the line features. Additionally, we introduce a
\textit{dynamic grid} algorithm to enhance its performance in dynamic
scenes. The stereo camera motion is estimated through Levenberg-Marquardt
minimization of the re-projection errors of both point and line features.
Comprehensive experimental results on KITTI and EuRoC MAV datasets showed that
accuracy of the DynPL-SVO was improved by over 20\% on average compared to
other state-of-the-art SVO systems, especially in dynamic scenes.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 10:08:03 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 14:51:21 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Jun 2023 08:47:01 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zhang",
"Baosheng",
""
],
[
"Ma",
"Xiaoguang",
""
],
[
"Ma",
"Hongjun",
""
],
[
"Luo",
"Chunbo",
""
]
] |
new_dataset
| 0.99808 |
2207.01079
|
N M Anoop Krishnan
|
Tanishq Gupta, Mohd Zaki, N. M. Anoop Krishnan, Mausam
|
DiSCoMaT: Distantly Supervised Composition Extraction from Tables in
Materials Science Articles
|
Accepted long paper at ACL 2023
(https://2023.aclweb.org/program/accepted_main_conference/)
| null | null | null |
cs.CL cond-mat.mtrl-sci cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A crucial component in the curation of KB for a scientific domain is
information extraction from tables in the domain's published articles -- tables
carry important information (often numeric), which must be adequately extracted
for a comprehensive machine understanding of an article. Existing table
extractors assume prior knowledge of table structure and format, which may not
be known in scientific tables. We study a specific and challenging table
extraction problem: extracting compositions of materials (e.g., glasses,
alloys). We first observe that materials science researchers organize similar
compositions in a wide variety of table styles, necessitating an intelligent
model for table understanding and composition extraction. Consequently, we
define this novel task as a challenge for the ML community and create a
training dataset comprising 4,408 distantly supervised tables, along with 1,475
manually annotated dev and test tables. We also present DiSCoMaT, a strong
baseline geared towards this specific task, which combines multiple graph
neural networks with several task-specific regular expressions, features, and
constraints. We show that DiSCoMaT outperforms recent table processing
architectures by significant margins.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 17:11:17 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Jul 2022 08:19:26 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Jun 2023 11:55:56 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Gupta",
"Tanishq",
""
],
[
"Zaki",
"Mohd",
""
],
[
"Krishnan",
"N. M. Anoop",
""
],
[
"Mausam",
"",
""
]
] |
new_dataset
| 0.961448 |
2207.11243
|
Alexander Richard
|
Cheng-hsin Wuu, Ningyuan Zheng, Scott Ardisson, Rohan Bali, Danielle
Belko, Eric Brockmeyer, Lucas Evans, Timothy Godisart, Hyowon Ha, Xuhua
Huang, Alexander Hypes, Taylor Koska, Steven Krenn, Stephen Lombardi, Xiaomin
Luo, Kevyn McPhail, Laura Millerschoen, Michal Perdoch, Mark Pitts, Alexander
Richard, Jason Saragih, Junko Saragih, Takaaki Shiratori, Tomas Simon, Matt
Stewart, Autumn Trimble, Xinshuo Weng, David Whitewolf, Chenglei Wu, Shoou-I
Yu, Yaser Sheikh
|
Multiface: A Dataset for Neural Face Rendering
| null | null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Photorealistic avatars of human faces have come a long way in recent years,
yet research along this area is limited by a lack of publicly available,
high-quality datasets covering both, dense multi-view camera captures, and rich
facial expressions of the captured subjects. In this work, we present
Multiface, a new multi-view, high-resolution human face dataset collected from
13 identities at Reality Labs Research for neural face rendering. We introduce
Mugsy, a large scale multi-camera apparatus to capture high-resolution
synchronized videos of a facial performance. The goal of Multiface is to close
the gap in accessibility to high-quality data in the academic community and to
enable research in VR telepresence. Along with the release of the dataset, we
conduct ablation studies on the influence of different model architectures
toward the model's interpolation capacity of novel viewpoint and expressions.
With a conditional VAE model serving as our baseline, we found that adding
spatial bias, texture warp field, and residual connections improves performance
on novel view synthesis. Our code and data is available at:
https://github.com/facebookresearch/multiface
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 17:55:39 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 17:43:18 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Wuu",
"Cheng-hsin",
""
],
[
"Zheng",
"Ningyuan",
""
],
[
"Ardisson",
"Scott",
""
],
[
"Bali",
"Rohan",
""
],
[
"Belko",
"Danielle",
""
],
[
"Brockmeyer",
"Eric",
""
],
[
"Evans",
"Lucas",
""
],
[
"Godisart",
"Timothy",
""
],
[
"Ha",
"Hyowon",
""
],
[
"Huang",
"Xuhua",
""
],
[
"Hypes",
"Alexander",
""
],
[
"Koska",
"Taylor",
""
],
[
"Krenn",
"Steven",
""
],
[
"Lombardi",
"Stephen",
""
],
[
"Luo",
"Xiaomin",
""
],
[
"McPhail",
"Kevyn",
""
],
[
"Millerschoen",
"Laura",
""
],
[
"Perdoch",
"Michal",
""
],
[
"Pitts",
"Mark",
""
],
[
"Richard",
"Alexander",
""
],
[
"Saragih",
"Jason",
""
],
[
"Saragih",
"Junko",
""
],
[
"Shiratori",
"Takaaki",
""
],
[
"Simon",
"Tomas",
""
],
[
"Stewart",
"Matt",
""
],
[
"Trimble",
"Autumn",
""
],
[
"Weng",
"Xinshuo",
""
],
[
"Whitewolf",
"David",
""
],
[
"Wu",
"Chenglei",
""
],
[
"Yu",
"Shoou-I",
""
],
[
"Sheikh",
"Yaser",
""
]
] |
new_dataset
| 0.999834 |
2208.11602
|
Bingde Liu
|
Bingde Liu, Chang Xu, Wen Yang, Huai Yu, Lei Yu
|
Motion Robust High-Speed Light-Weighted Object Detection With Event
Camera
|
Published in: IEEE Transactions on Instrumentation and Measurement
(Volume: 72) 2023
| null |
10.1109/TIM.2023.3269780
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a motion robust and high-speed detection pipeline
which better leverages the event data. First, we design an event stream
representation called temporal active focus (TAF), which efficiently utilizes
the spatial-temporal asynchronous event stream, constructing event tensors
robust to object motions. Then, we propose a module called the bifurcated
folding module (BFM), which encodes the rich temporal information in the TAF
tensor at the input layer of the detector. Following this, we design a
high-speed lightweight detector called agile event detector (AED) plus a simple
but effective data augmentation method, to enhance the detection accuracy and
reduce the model's parameter. Experiments on two typical real-scene event
camera object detection datasets show that our method is competitive in terms
of accuracy, efficiency, and the number of parameters. By classifying objects
into multiple motion levels based on the optical flow density metric, we
further illustrated the robustness of our method for objects with different
velocities relative to the camera. The codes and trained models are available
at https://github.com/HarmoniaLeo/FRLW-EvD .
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 15:15:24 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 01:18:16 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Liu",
"Bingde",
""
],
[
"Xu",
"Chang",
""
],
[
"Yang",
"Wen",
""
],
[
"Yu",
"Huai",
""
],
[
"Yu",
"Lei",
""
]
] |
new_dataset
| 0.991486 |
2209.08470
|
Lei Wang
|
Lei Wang, Bo Liu, Bincheng Wang, Fuqiang Yu
|
GaitMM: Multi-Granularity Motion Sequence Learning for Gait Recognition
|
Accepted to ICIP2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gait recognition aims to identify individual-specific walking patterns by
observing the different periodic movements of each body part. However, most
existing methods treat each part equally and fail to account for the data
redundancy caused by the different step frequencies and sampling rates of gait
sequences. In this study, we propose a multi-granularity motion representation
network (GaitMM) for gait sequence learning. In GaitMM, we design a combined
full-body and fine-grained sequence learning module (FFSL) to explore
part-independent spatio-temporal representations. Moreover, we utilize a
frame-wise compression strategy, referred to as multi-scale motion aggregation
(MSMA), to capture discriminative information in the gait sequence. Experiments
on two public datasets, CASIA-B and OUMVLP, show that our approach reaches
state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Sun, 18 Sep 2022 04:07:33 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2023 04:48:05 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Wang",
"Lei",
""
],
[
"Liu",
"Bo",
""
],
[
"Wang",
"Bincheng",
""
],
[
"Yu",
"Fuqiang",
""
]
] |
new_dataset
| 0.9542 |
2301.01228
|
Chanjun Park
|
Eujeong Choi, Chanjun Park
|
DMOps: Data Management Operation and Recipes
|
Accepted for Data-centric Machine Learning Research (DMLR) Workshop
at ICML 2023
| null | null | null |
cs.DB cs.LG stat.ME
|
http://creativecommons.org/licenses/by/4.0/
|
Data-centric AI has shed light on the significance of data within the machine
learning (ML) pipeline. Recognizing this importance, academia, industry, and
government departments have suggested various NLP data research initiatives.
While the ability to utilize existing data is essential, the ability to build a
dataset has become more critical than ever, especially in the industry. In
consideration of this trend, we propose a "Data Management Operations and
Recipes" to guide the industry in optimizing the building of datasets for NLP
products. This paper presents the concept of DMOps which is derived from
real-world experiences with NLP data management and aims to streamline data
operations by offering a baseline.
|
[
{
"version": "v1",
"created": "Mon, 2 Jan 2023 09:46:53 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2023 02:47:21 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Jun 2023 01:23:05 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Choi",
"Eujeong",
""
],
[
"Park",
"Chanjun",
""
]
] |
new_dataset
| 0.99247 |
2301.01906
|
Jinze Liu
|
Jinze Liu, Minzhe Li, Jiunn-Kai Huang, and Jessy W. Grizzle
|
Realtime Safety Control for Bipedal Robots to Avoid Multiple Obstacles
via CLF-CBF Constraints
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a reactive planning system that allows a Cassie-series
bipedal robot to avoid multiple non-overlapping obstacles via a single,
continuously differentiable control barrier function (CBF). The overall system
detects an individual obstacle via a height map derived from a LiDAR point
cloud and computes an elliptical outer approximation, which is then turned into
a CBF. The QP-CLF-CBF formalism developed by Ames et al. is applied to ensure
that safe trajectories are generated. Liveness is ensured by an analysis of
induced equilibrium points that are distinct from the goal state. Safe planning
in environments with multiple obstacles is demonstrated both in simulation and
experimentally on the Cassie biped.
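The QP-CLF-CBF formalism referenced above can be illustrated on a single-integrator toy system: minimize deviation from a nominal controller subject to a control barrier constraint. This is a minimal sketch under simplifying assumptions (a disk obstacle and single-integrator dynamics), not the Cassie controller.

```python
# Minimal CBF-QP sketch: stay as close as possible to a nominal go-to-goal
# input u_nom while enforcing the barrier condition hdot + alpha*h >= 0.
import cvxpy as cp
import numpy as np

x = np.array([0.0, 0.0])             # robot position
goal = np.array([4.0, 0.0])
obs, r = np.array([2.0, 0.2]), 1.0   # obstacle center and safety radius

u = cp.Variable(2)
u_nom = goal - x                     # nominal controller
h = np.sum((x - obs) ** 2) - r**2    # CBF: h(x) >= 0 means safe
grad_h = 2 * (x - obs)
alpha = 1.0

prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)),
                  [grad_h @ u + alpha * h >= 0])
prob.solve()
print(u.value)                       # safe input, deflected around the obstacle
```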
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 04:35:30 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2023 02:01:04 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Liu",
"Jinze",
""
],
[
"Li",
"Minzhe",
""
],
[
"Huang",
"Jiunn-Kai",
""
],
[
"Grizzle",
"Jessy W.",
""
]
] |
new_dataset
| 0.997666 |
2301.04311
|
Zhenyu Kang
|
Zhenyu Kang, Changsheng You, and Rui Zhang
|
Active-IRS-Aided Wireless Communication: Fundamentals, Designs and Open
Issues
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Intelligent reflecting surface (IRS) has emerged as a promising technology to
realize smart radio environment for future wireless communication systems.
Existing works in this line of research have mainly considered the conventional
passive IRS that reflects wireless signals without power amplification, while
in this article, we give an overview of a new type of IRS, called active IRS,
which enables simultaneous signal reflection and amplification, thus
significantly extending the signal coverage of passive IRS. We first present
the fundamentals of active IRS, including its hardware architecture, signal and
channel models, as well as practical constraints, in comparison with those of
passive IRS. Then, we discuss new considerations and open issues in designing
active-IRS-aided wireless communications, such as the reflection optimization,
channel estimation, and deployment for active IRS, as well as its integrated
design with passive IRS. Finally, numerical results are provided to show the
potential performance gains of active IRS as compared to passive IRS and
traditional active relay.
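For background, a commonly used active-IRS signal model from the literature is sketched below; the notation is assumed here rather than taken from the article.

```latex
% Hedged background sketch of an active-IRS received-signal model:
\begin{align}
y &= \mathbf{h}_2^{H}\,\boldsymbol{\Psi}\,(\mathbf{h}_1 s + \mathbf{z}) + n,
\qquad
\boldsymbol{\Psi} = \mathrm{diag}\!\left(a_1 e^{j\theta_1},\dots,a_N e^{j\theta_N}\right),
\end{align}
% where a_n > 1 is the per-element amplification gain, z is the amplification
% (dynamic) noise introduced at the IRS, and n is receiver noise. Setting
% a_n = 1 and z = 0 recovers the conventional passive-IRS model.
```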
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 05:02:35 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2023 09:24:46 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Kang",
"Zhenyu",
""
],
[
"You",
"Changsheng",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.964191 |
2303.07327
|
Cong Cao
|
Cong Cao, Huanjing Yue, Xin Liu, Jingyu Yang
|
Unsupervised HDR Image and Video Tone Mapping via Contrastive Learning
|
Accepted by IEEE Transactions on Circuits and Systems for Video
Technology (TCSVT)
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Capturing high dynamic range (HDR) images (videos) is attractive because it
can reveal the details in both dark and bright regions. Since the mainstream
screens only support low dynamic range (LDR) content, tone mapping algorithm is
required to compress the dynamic range of HDR images (videos). Although image
tone mapping has been widely explored, video tone mapping is lagging behind,
especially for the deep-learning-based methods, due to the lack of HDR-LDR
video pairs. In this work, we propose a unified framework (IVTMNet) for
unsupervised image and video tone mapping. To improve unsupervised training, we
propose domain and instance based contrastive learning loss. Instead of using a
universal feature extractor, such as VGG to extract the features for similarity
measurement, we propose a novel latent code, which is an aggregation of the
brightness and contrast of extracted features, to measure the similarity of
different pairs. We totally construct two negative pairs and three positive
pairs to constrain the latent codes of tone mapped results. For the network
structure, we propose a spatial-feature-enhanced (SFE) module to enable
information exchange and transformation of nonlocal regions. For video tone
mapping, we propose a temporal-feature-replaced (TFR) module to efficiently
utilize the temporal correlation and improve the temporal consistency of video
tone-mapped results. We construct a large-scale unpaired HDR-LDR video dataset
to facilitate the unsupervised training process for video tone mapping.
Experimental results demonstrate that our method outperforms state-of-the-art
image and video tone mapping methods. Our code and dataset are available at
https://github.com/cao-cong/UnCLTMO.
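The latent code described above, an aggregation of per-channel brightness and contrast of extracted features, and a simple pull-push contrastive objective can be sketched as follows; the exact pairings and weighting used in the paper may differ.

```python
# Hedged sketch: latent code = normalized (mean, std) of feature maps;
# contrastive loss pulls positive pairs together and pushes negatives apart.
import torch
import torch.nn.functional as F

def latent_code(feat):                      # feat: (B, C, H, W) feature maps
    mu = feat.mean(dim=(2, 3))              # per-channel brightness
    sigma = feat.std(dim=(2, 3))            # per-channel contrast
    return F.normalize(torch.cat([mu, sigma], dim=1), dim=1)

def contrastive_loss(anchor, positives, negatives):
    za = latent_code(anchor)
    pull = sum((za * latent_code(p)).sum(1).mean() for p in positives)
    push = sum((za * latent_code(n)).sum(1).mean() for n in negatives)
    return push - pull                      # three positives, two negatives

loss = contrastive_loss(torch.randn(4, 8, 16, 16),
                        positives=[torch.randn(4, 8, 16, 16)] * 3,
                        negatives=[torch.randn(4, 8, 16, 16)] * 2)
print(loss)
```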
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 17:45:39 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 13:56:52 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Cao",
"Cong",
""
],
[
"Yue",
"Huanjing",
""
],
[
"Liu",
"Xin",
""
],
[
"Yang",
"Jingyu",
""
]
] |
new_dataset
| 0.98917 |
2303.14070
|
Yunxiang Li
|
Yunxiang Li, Zihan Li, Kai Zhang, Ruilong Dan, Steve Jiang, You Zhang
|
ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model
Meta-AI (LLaMA) Using Medical Domain Knowledge
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The primary aim of this research was to address the limitations observed in
the medical knowledge of prevalent large language models (LLMs) such as
ChatGPT, by creating a specialized language model with enhanced accuracy in
medical advice. We achieved this by adapting and refining the large language
model meta-AI (LLaMA) using a large dataset of 100,000 patient-doctor dialogues
sourced from a widely used online medical consultation platform. These
conversations were cleaned and anonymized to respect privacy concerns. In
addition to the model refinement, we incorporated a self-directed information
retrieval mechanism, allowing the model to access and utilize real-time
information from online sources like Wikipedia and data from curated offline
medical databases. The fine-tuning of the model with real-world patient-doctor
interactions significantly improved the model's ability to understand patient
needs and provide informed advice. By equipping the model with self-directed
information retrieval from reliable online and offline sources, we observed
substantial improvements in the accuracy of its responses. Our proposed
ChatDoctor, represents a significant advancement in medical LLMs, demonstrating
a significant improvement in understanding patient inquiries and providing
accurate advice. Given the high stakes and low error tolerance in the medical
field, such enhancements in providing accurate and reliable information are not
only beneficial but essential.
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 15:29:16 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2023 20:41:46 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Apr 2023 18:00:33 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Apr 2023 18:54:29 GMT"
},
{
"version": "v5",
"created": "Sat, 24 Jun 2023 15:26:44 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Li",
"Yunxiang",
""
],
[
"Li",
"Zihan",
""
],
[
"Zhang",
"Kai",
""
],
[
"Dan",
"Ruilong",
""
],
[
"Jiang",
"Steve",
""
],
[
"Zhang",
"You",
""
]
] |
new_dataset
| 0.999566 |
2304.00050
|
Artem Lensky
|
Muhammad S. Battikh, Dillon Hammill, Matthew Cook, Artem Lensky
|
kNN-Res: Residual Neural Network with kNN-Graph coherence for point
cloud registration
|
27 pages, 13 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a residual neural network-based method for point
set registration that preserves the topological structure of the target point
set. Similar to coherent point drift (CPD), the registration (alignment)
problem is viewed as the movement of data points sampled from a target
distribution along a regularized displacement vector field. While the coherence
constraint in CPD is stated in terms of local motion coherence, the proposed
regularization term relies on a global smoothness constraint as a proxy for
preserving local topology. This makes CPD less flexible when the deformation is
locally rigid but globally non-rigid as in the case of multiple objects and
articulate pose registration. A Jacobian-based cost function and
geometric-aware statistical distances are proposed to mitigate these issues.
The latter allows for measuring misalignment between the target and the
reference. The justification for the k-Nearest Neighbour (kNN) graph
preservation of target data, when the Jacobian cost is used, is also provided.
Further, to tackle the registration of high-dimensional point sets, a constant
time stochastic approximation of the Jacobian cost is introduced. The proposed
method is illustrated on several 2-dimensional toy examples and tested on
high-dimensional flow cytometry datasets where the task is to align two
distributions of cells whilst preserving the kNN-graph in order to preserve the
biological signal of the transformed data. The implementation of the proposed
approach is available at https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/
under the MIT license.
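The constant-time stochastic Jacobian cost can be illustrated with a Hutchinson-style estimator built from Jacobian-vector products, as in the sketch below. The toy displacement field and the exact penalty (deviation of the Jacobian from the identity) are assumptions for illustration, not the paper's code.

```python
import torch

def displacement(x):
    # Toy stand-in for the residual network's displacement field f(x) = x + g(x).
    return x + 0.05 * torch.sin(x)

def stochastic_jacobian_cost(f, x, n_probes=1):
    """Hutchinson-style estimate of E_v ||J(x) v - v||^2, penalizing deviation
    of the map's Jacobian from the identity. Each probe costs one JVP, i.e.
    constant time in the ambient dimension."""
    cost = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(x)
        _, jvp = torch.autograd.functional.jvp(f, (x,), (v,), create_graph=True)
        cost = cost + ((jvp - v) ** 2).sum(dim=1).mean()
    return cost / n_probes

x = torch.randn(128, 3)  # a batch of 3-D points
print(stochastic_jacobian_cost(displacement, x).item())
```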
|
[
{
"version": "v1",
"created": "Fri, 31 Mar 2023 18:06:26 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 10:50:37 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Battikh",
"Muhammad S.",
""
],
[
"Hammill",
"Dillon",
""
],
[
"Cook",
"Matthew",
""
],
[
"Lensky",
"Artem",
""
]
] |
new_dataset
| 0.987963 |
2304.03872
|
Baosheng Zhang
|
Baosheng Zhang
|
LSGDDN-LCD: An Appearance-based Loop Closure Detection using Local
Superpixel Grid Descriptors and Incremental Dynamic Nodes
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Loop Closure Detection (LCD) is an essential component of visual simultaneous
localization and mapping (SLAM) systems. It enables the recognition of
previously visited scenes to eliminate pose and map estimate drifts arising
from long-term exploration. However, current appearance-based LCD methods face
significant challenges, including high computational costs, viewpoint variance,
and dynamic objects in scenes. This paper introduces an online appearance-based
LCD method using local superpixel grid descriptors and dynamic nodes, i.e.,
LSGDDN-LCD, which finds similarities between scenes via hand-crafted features
extracted from the LSGD. Unlike traditional Bag-of-Words (BoW) based LCD, which
requires pre-training, we propose an adaptive mechanism to group similar
images, called the $\textbf{\textit{dynamic}}$ $\textbf{\textit{node}}$, which
incrementally adjusts the database in an online manner, allowing for efficient
online retrieval of previously viewed images without the need for pre-training.
Experimental results confirm that LSGDDN-LCD significantly improves LCD
precision-recall and efficiency, and outperforms several state-of-the-art
(SOTA) approaches on multiple typical datasets, indicating its great potential
as a generic LCD framework.
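To make the grid-descriptor idea concrete, the sketch below summarizes an image by mean intensity over a coarse regular grid and scores two frames by normalized correlation; this is a simplified stand-in for the paper's local superpixel grid descriptor, with grid size and scoring chosen as assumptions.

```python
import numpy as np

def grid_descriptor(img, grid=(8, 8)):
    """Summarize an image by mean intensity over a coarse grid of cells,
    a simplified stand-in for the local superpixel grid descriptor."""
    h, w = img.shape
    gh, gw = grid
    desc = np.empty(gh * gw)
    for i in range(gh):
        for j in range(gw):
            cell = img[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            desc[i*gw + j] = cell.mean()
    desc -= desc.mean()
    return desc / (np.linalg.norm(desc) + 1e-8)

def similarity(img_a, img_b):
    return float(grid_descriptor(img_a) @ grid_descriptor(img_b))

rng = np.random.default_rng(0)
frame = rng.random((240, 320))
revisit = frame + 0.05 * rng.standard_normal((240, 320))  # same place, new noise
print(similarity(frame, revisit))  # near 1.0 suggests a loop closure candidate
```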
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 00:00:05 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2023 09:47:25 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zhang",
"Baosheng",
""
]
] |
new_dataset
| 0.999655 |
2304.08486
|
Kathryn Wantlin
|
Kathryn Wantlin, Chenwei Wu, Shih-Cheng Huang, Oishi Banerjee, Farah
Dadabhoy, Veeral Vipin Mehta, Ryan Wonhee Han, Fang Cao, Raja R. Narayan,
Errol Colak, Adewole Adamson, Laura Heacock, Geoffrey H. Tison, Alex Tamkin,
Pranav Rajpurkar
|
BenchMD: A Benchmark for Unified Learning on Medical Images and Sensors
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Medical data poses a daunting challenge for AI algorithms: it exists in many
different modalities, experiences frequent distribution shifts, and suffers
from a scarcity of examples and labels. Recent advances, including transformers
and self-supervised learning, promise a more universal approach that can be
applied flexibly across these diverse conditions. To measure and drive progress
in this direction, we present BenchMD: a benchmark that tests how well unified,
modality-agnostic methods, including architectures and training techniques
(e.g. self-supervised learning, ImageNet pretraining), perform on a diverse
array of clinically-relevant medical tasks. BenchMD combines 19 publicly
available datasets for 7 medical modalities, including 1D sensor data, 2D
images, and 3D volumetric scans. Our benchmark reflects real-world data
constraints by evaluating methods across a range of dataset sizes, including
challenging few-shot settings that incentivize the use of pretraining. Finally,
we evaluate performance on out-of-distribution data collected at different
hospitals than the training data, representing naturally-occurring distribution
shifts that frequently degrade the performance of medical AI models. Our
baseline results demonstrate that no unified learning technique achieves strong
performance across all modalities, leaving ample room for improvement on the
benchmark. Code is released at https://github.com/rajpurkarlab/BenchMD.
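The evaluation protocol (few-shot training budgets plus an out-of-distribution hospital split) can be sketched as a simple loop; the toy majority-class classifier, split names, and data below are placeholder assumptions, not the BenchMD API.

```python
from collections import Counter

def train_majority(train_set):
    """Toy baseline: predict the most common training label."""
    majority = Counter(y for _, y in train_set).most_common(1)[0][0]
    return lambda x: majority

def accuracy(model, dataset):
    return sum(model(x) == y for x, y in dataset) / len(dataset)

splits = {
    "train": [(i, i % 2) for i in range(512)],
    "val": [(i, i % 2) for i in range(100)],
    "ood_hospital": [(i, (i + 1) % 2) for i in range(100)],  # shifted distribution
}
for n in (8, 64, 512):  # few-shot training budgets
    model = train_majority(splits["train"][:n])
    print(n, accuracy(model, splits["val"]), accuracy(model, splits["ood_hospital"]))
```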
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 17:59:26 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 15:47:27 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Wantlin",
"Kathryn",
""
],
[
"Wu",
"Chenwei",
""
],
[
"Huang",
"Shih-Cheng",
""
],
[
"Banerjee",
"Oishi",
""
],
[
"Dadabhoy",
"Farah",
""
],
[
"Mehta",
"Veeral Vipin",
""
],
[
"Han",
"Ryan Wonhee",
""
],
[
"Cao",
"Fang",
""
],
[
"Narayan",
"Raja R.",
""
],
[
"Colak",
"Errol",
""
],
[
"Adamson",
"Adewole",
""
],
[
"Heacock",
"Laura",
""
],
[
"Tison",
"Geoffrey H.",
""
],
[
"Tamkin",
"Alex",
""
],
[
"Rajpurkar",
"Pranav",
""
]
] |
new_dataset
| 0.997327 |
2304.11029
|
Shangda Wu
|
Shangda Wu, Dingyao Yu, Xu Tan, Maosong Sun
|
CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic
Music Information Retrieval
|
11 pages, 5 figures, 5 tables, accepted by ISMIR 2023
| null | null | null |
cs.SD cs.IR eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce CLaMP: Contrastive Language-Music Pre-training, which learns
cross-modal representations between natural language and symbolic music using a
music encoder and a text encoder trained jointly with a contrastive loss. To
pre-train CLaMP, we collected a large dataset of 1.4 million music-text pairs.
We employed text dropout as a data augmentation technique and bar patching to
efficiently represent music data, which reduces sequence length to less than
10% of the original. In addition, we developed a masked music model pre-training objective to
enhance the music encoder's comprehension of musical context and structure.
CLaMP integrates textual information to enable semantic search and zero-shot
classification for symbolic music, surpassing the capabilities of previous
models. To support the evaluation of semantic search and music classification,
we publicly release WikiMusicText (WikiMT), a dataset of 1010 lead sheets in
ABC notation, each accompanied by a title, artist, genre, and description. In
comparison to state-of-the-art models that require fine-tuning, zero-shot CLaMP
demonstrated comparable or superior performance on score-oriented datasets. Our
models and code are available at
https://github.com/microsoft/muzic/tree/main/clamp.
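The joint contrastive objective can be illustrated with a CLIP-style symmetric InfoNCE loss over a batch of music-text pairs, as sketched below; the temperature, embedding sizes, and random encoder outputs are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(music_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of music-text pairs: matched pairs sit
    on the diagonal of the similarity matrix and act as positives."""
    music_emb = F.normalize(music_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = music_emb @ text_emb.t() / temperature
    targets = torch.arange(len(logits))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

music = torch.randn(16, 256)  # stand-ins for music encoder outputs
text = torch.randn(16, 256)   # stand-ins for text encoder outputs
print(clip_style_loss(music, text).item())
```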
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 15:23:00 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Apr 2023 16:31:00 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Jun 2023 15:04:28 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Wu",
"Shangda",
""
],
[
"Yu",
"Dingyao",
""
],
[
"Tan",
"Xu",
""
],
[
"Sun",
"Maosong",
""
]
] |
new_dataset
| 0.999136 |
2304.14226
|
Yueming Hao
|
Yueming Hao, Xu Zhao, Bin Bao, David Berard, Will Constable, Adnan
Aziz, Xu Liu
|
TorchBench: Benchmarking PyTorch with High API Surface Coverage
| null | null | null | null |
cs.LG cs.AI cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
Deep learning (DL) has been a revolutionary technique in various domains. To
facilitate the model development and deployment, many deep learning frameworks
are proposed, among which PyTorch is one of the most popular solutions. The
performance of the ecosystem around PyTorch is critically important, as it
saves the cost of training models and reduces the response time of model inference.
In this paper, we propose TorchBench, a novel benchmark suite to study the
performance of PyTorch software stack. Unlike existing benchmark suites,
TorchBench encloses many representative models, covering a large PyTorch API
surface. TorchBench is able to comprehensively characterize the performance of
the PyTorch software stack, guiding the performance optimization across models,
PyTorch framework, and GPU libraries. We show two practical use cases of
TorchBench. (1) We profile TorchBench to identify GPU performance
inefficiencies in PyTorch. We are able to optimize many performance bugs and
upstream patches to the official PyTorch repository. (2) We integrate
TorchBench into PyTorch continuous integration system. We are able to identify
performance regression in multiple daily code checkins to prevent PyTorch
repository from introducing performance bugs. TorchBench is open source and
keeps evolving.
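A minimal sketch of the kind of timing harness such a benchmark suite relies on is shown below: warm up, synchronize the GPU so queued kernels are counted, and average over iterations. The harness and toy model are illustrative, not TorchBench's actual implementation.

```python
import time
import torch

def benchmark(model, example_input, warmup=3, iters=10):
    """Time a model's forward pass, synchronizing the GPU so that
    asynchronously queued kernels are included in the measurement."""
    device = example_input.device
    for _ in range(warmup):
        model(example_input)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(example_input)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU())
x = torch.randn(64, 512)
print(f"{benchmark(model, x) * 1e3:.3f} ms / iteration")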
|
[
{
"version": "v1",
"created": "Thu, 27 Apr 2023 14:37:05 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Apr 2023 19:56:19 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Jun 2023 16:57:43 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Hao",
"Yueming",
""
],
[
"Zhao",
"Xu",
""
],
[
"Bao",
"Bin",
""
],
[
"Berard",
"David",
""
],
[
"Constable",
"Will",
""
],
[
"Aziz",
"Adnan",
""
],
[
"Liu",
"Xu",
""
]
] |
new_dataset
| 0.997949 |
2304.14621
|
Chenqing Hua
|
Chenqing Hua, Sitao Luan, Minkai Xu, Rex Ying, Jie Fu, Stefano Ermon,
Doina Precup
|
MUDiff: Unified Diffusion for Complete Molecule Generation
| null | null | null | null |
cs.LG q-bio.BM
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Molecule generation is a very important practical problem, with uses in drug
discovery and material design, and AI methods promise to provide useful
solutions. However, existing methods for molecule generation focus either on 2D
graph structure or on 3D geometric structure, which is not sufficient to
represent a complete molecule, as the 2D graph captures mainly topology while
the 3D geometry captures mainly spatial atom arrangements. Combining these
representations is essential to better represent a molecule. In this paper, we
present a new model for generating a comprehensive representation of molecules,
including atom features, 2D discrete molecule structures, and 3D continuous
molecule coordinates, by combining discrete and continuous diffusion processes.
The use of diffusion processes allows for capturing the probabilistic nature of
molecular processes and exploring the effect of different factors on molecular
structures. Additionally, we propose a novel graph transformer architecture to
denoise the diffusion process. The transformer adheres to 3D roto-translation
equivariance constraints, allowing it to learn invariant atom and edge
representations while preserving the equivariance of atom coordinates. This
transformer can be used to learn molecular representations robust to geometric
transformations. We evaluate the performance of our model through experiments
and comparisons with existing methods, showing its ability to generate more
stable and valid molecules. Our model is a promising approach for designing
stable and diverse molecules and can be applied to a wide range of tasks in
molecular modeling.
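One way to picture the combined discrete-continuous diffusion is a forward noising step that applies Gaussian noise to the 3D coordinates and a uniform-transition kernel to the discrete edge types, as in the sketch below. The noise schedule, transition kernel, and shapes are illustrative assumptions, not MUDiff's exact processes.

```python
import torch

def diffuse_step(coords, edges, t, T=1000, num_edge_types=4):
    """One forward-diffusion step on a molecule: Gaussian noise on the
    continuous 3D coordinates and a uniform resampling kernel on the
    discrete edge types."""
    beta = 1e-4 + (0.02 - 1e-4) * t / T            # linear noise schedule
    noisy_coords = torch.sqrt(torch.tensor(1 - beta)) * coords \
                   + torch.sqrt(torch.tensor(beta)) * torch.randn_like(coords)
    flip = torch.rand(edges.shape) < beta           # resample a few edge types
    random_edges = torch.randint(num_edge_types, edges.shape)
    noisy_edges = torch.where(flip, random_edges, edges)
    return noisy_coords, noisy_edges

coords = torch.randn(9, 3)                          # 9 atoms in 3D
edges = torch.randint(4, (9, 9))                    # discrete bond types
print(diffuse_step(coords, edges, t=500)[0].shape)
```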
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 04:25:57 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2023 23:42:44 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Hua",
"Chenqing",
""
],
[
"Luan",
"Sitao",
""
],
[
"Xu",
"Minkai",
""
],
[
"Ying",
"Rex",
""
],
[
"Fu",
"Jie",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Precup",
"Doina",
""
]
] |
new_dataset
| 0.959612 |
2305.11959
|
Jianjian Wu
|
Jianjian Wu (1 and 2), Chi-Tsun Cheng (3), Qingfeng Zhou (1) ((1)
Dongguan University of Technology, (2) Hefei University of Technology, (3)
RMIT University)
|
CIAMA: A Multiple Access Scheme with High Diversity and Multiplexing
Gains for Next-gen Wireless Networks
|
Second version, currently submitted to a journal. A new co-author has been
added, with thanks to C.T. Cheng for his significant suggestions on editing
the paper
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies advanced multi-access techniques to support high volumes
of concurrent access in wireless networks. Sparse code multiple access (SCMA),
as a code-domain Non-Orthogonal Multiple Access (NOMA), serves multiple users
simultaneously by adopting frequency-domain coding. Blind Interference
Alignment, in contrast, applies time-domain coding to accommodate multiple
users. Unlike beamforming, both of them need no Channel State Information at
the Transmitter (CSIT), which saves control overheads on channel information
feedback. To further increase multiplexing gain and diversity order, we propose
a new multiple access framework that utilizes both time and frequency coding
by combining SCMA and BIA, termed CIAMA (sparseCode-and-bIA-based multiple
access). Two decoding schemes, namely the two-stage decoding scheme consisting
of zero-forcing and Message Passing Algorithm (MPA), and the Joint Message
Passing Algorithm (JMPA) enhanced by constructing a virtual factor graph, have
been analyzed. Simulation results indicate that although the performance of the
two-stage decoding scheme is inferior to both BIA and SCMA, it has a relatively
low decoding complexity. Nonetheless, the JMPA decoding scheme achieves the
same diversity gain as an STBC-based SCMA and with an even higher multiplexing
gain, which makes the CIAMA with JMPA decoding scheme a promising MA scheme for
next-gen wireless networks.
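A toy sketch of the first stage of the two-stage decoder is shown below: zero-forcing over an effective channel, after which the SCMA codewords would be recovered by message passing (MPA, not shown). The dimensions and channel model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))  # effective channel
s = (rng.integers(0, 2, 4) * 2 - 1).astype(complex)                 # transmitted symbols
y = H @ s + 0.05 * (rng.standard_normal(6) + 1j * rng.standard_normal(6))

s_hat = np.linalg.pinv(H) @ y   # zero-forcing estimate, input to the MPA stage
print(np.sign(s_hat.real))      # hard decisions; at low noise they match s
```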
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 18:49:19 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2023 14:57:26 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Wu",
"Jianjian",
"",
"1 and 2"
],
[
"Cheng",
"Chi-Tsun",
""
],
[
"Zhou",
"Qingfeng",
""
]
] |
new_dataset
| 0.961491 |
2305.15225
|
Hongyin Luo
|
Hongyin Luo, Yung-Sung Chuang, Yuan Gong, Tianhua Zhang, Yoon Kim,
Xixin Wu, Danny Fox, Helen Meng, James Glass
|
SAIL: Search-Augmented Instruction Learning
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have been significantly improved by instruction
fine-tuning, but still lack transparency and the ability to utilize up-to-date
knowledge and information. In this work, we propose search-augmented
instruction learning (SAIL), which grounds the language generation and
instruction following abilities on complex search results generated by in-house
and external search engines. With an instruction tuning corpus, we collect
search results for each training case from different search APIs and domains,
and construct a new search-grounded training set containing
\textit{(instruction, grounding information, response)} triplets. We then
fine-tune the LLaMA-7B model on the constructed training set. Since the
collected results contain unrelated and conflicting content, the model needs to
learn to ground on trustworthy search results, filter out distracting passages,
and generate the target response. The search result-denoising process entails
explicit trustworthy information selection and multi-hop reasoning, since the
retrieved passages might be informative but not contain the
instruction-following answer. Experiments show that the fine-tuned SAIL-7B
model has a strong instruction-following ability, and it performs significantly
better on transparency-sensitive tasks, including open-ended question answering
and fact checking.
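A minimal sketch of assembling the (instruction, grounding information, response) triplets from per-example search results follows; the field names and prompt template are assumptions, not the released SAIL pipeline.

```python
def make_training_example(instruction, search_results, response, top_k=3):
    """Fold the top-k retrieved snippets into a grounded training prompt."""
    grounding = "\n".join(
        f"[{i+1}] {r['snippet']}" for i, r in enumerate(search_results[:top_k])
    )
    prompt = (
        f"Instruction: {instruction}\n"
        f"Search results:\n{grounding}\n"
        "Response:"
    )
    return {"prompt": prompt, "completion": " " + response}

results = [
    {"snippet": "The Eiffel Tower is 330 m tall."},
    {"snippet": "Paris is the capital of France."},  # distracting passage
]
example = make_training_example("How tall is the Eiffel Tower?",
                                results, "About 330 meters.")
print(example["prompt"])
```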
|
[
{
"version": "v1",
"created": "Wed, 24 May 2023 15:07:30 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2023 17:56:37 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Luo",
"Hongyin",
""
],
[
"Chuang",
"Yung-Sung",
""
],
[
"Gong",
"Yuan",
""
],
[
"Zhang",
"Tianhua",
""
],
[
"Kim",
"Yoon",
""
],
[
"Wu",
"Xixin",
""
],
[
"Fox",
"Danny",
""
],
[
"Meng",
"Helen",
""
],
[
"Glass",
"James",
""
]
] |
new_dataset
| 0.99406 |
2306.04858
|
Kamalakar Karlapalem
|
Vijayraj Shanmugaraj, Lini Thomas, Kamalakar Karlapalem
|
Scenic Routes with Weighted Points in 2D
|
To appear as poster in International Geometry Summit 2023 (IGC'23)
3-7 July 2023, Genova, Italy
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a given 2D space, we can have points with different levels of importance.
One would prefer viewing those points from a closer/farther position per their
level of importance. A point in 2D from where the user can view two given
points per his/her preference of distance is termed a scenic point. We develop
the concept of scenic paths in a 2D space for two points that have weights
associated with them. Subsequently, we propose algorithms to generate scenic
routes a traveler can take, which cater to certain principles which define the
scenic routes. Following are the contributions of this paper: (1) mathematical
formulation of a scenic point, (2) introduction of scenic routes formed by such
scenic points in two-class point configurations in 2D spaces, and (3) design of
scenic route generation algorithms that fulfill certain defined requirements.
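One plausible reading of the weighted scenic-point condition is that viewing distances are inversely proportional to importance, so the locus of scenic points traces an Apollonius circle; the sketch below checks this condition for a candidate point. The exact formulation is an illustrative assumption, not the paper's definition.

```python
import math

def is_scenic(p, a, b, w_a, w_b, tol=1e-6):
    """Check d(p,a)/d(p,b) = w_b/w_a, i.e. distance inversely proportional
    to importance; with equal weights this is the perpendicular bisector."""
    da = math.dist(p, a)
    db = math.dist(p, b)
    return abs(da * w_a - db * w_b) < tol

a, b = (0.0, 0.0), (3.0, 0.0)
# With equal weights, any point on the bisector x = 1.5 is scenic.
print(is_scenic((1.5, 2.0), a, b, w_a=1.0, w_b=1.0))
```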
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 01:11:51 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2023 06:58:54 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Shanmugaraj",
"Vijayraj",
""
],
[
"Thomas",
"Lini",
""
],
[
"Karlapalem",
"Kamalakar",
""
]
] |
new_dataset
| 0.985925 |
2306.06202
|
Tyler Derr
|
Anwar Said, Roza G. Bayrak, Tyler Derr, Mudassir Shabbir, Daniel
Moyer, Catie Chang, Xenofon Koutsoukos
|
NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics
| null | null | null | null |
cs.LG cs.AI q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning provides a valuable tool for analyzing high-dimensional
functional neuroimaging data, and is proving effective in predicting various
neurological conditions, psychiatric disorders, and cognitive patterns. In
functional Magnetic Resonance Imaging (MRI) research, interactions between
brain regions are commonly modeled using graph-based representations. The
potency of graph machine learning methods has been established across myriad
domains, marking a transformative step in data interpretation and predictive
modeling. Yet, despite their promise, the transposition of these techniques to
the neuroimaging domain remains surprisingly under-explored due to the
expansive preprocessing pipeline and large parameter search space for
graph-based dataset construction. In this paper, we introduce NeuroGraph, a
collection of graph-based neuroimaging datasets that span multiple categories
of behavioral and cognitive traits. We delve deeply into the dataset generation
search space by crafting 35 datasets within both static and dynamic contexts,
running in excess of 15 baseline methods for benchmarking. Additionally, we
provide generic frameworks for learning on dynamic as well as static graphs.
Our extensive experiments lead to several key observations. Notably, using
correlation vectors as node features, incorporating a larger number of regions of
interest, and employing sparser graphs lead to improved performance. To foster
further advancements in graph-based, data-driven neuroimaging, we offer a
comprehensive open source Python package that includes the datasets, baseline
implementations, model training, and standard evaluation. The package is
publicly accessible at https://anwar-said.github.io/anwarsaid/neurograph.html .
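A minimal sketch of turning fMRI time series into such a graph is shown below: nodes are regions of interest, node features are correlation vectors, and only the strongest correlations are kept as edges, echoing the sparser-graphs observation. The density threshold and settings are assumptions, not NeuroGraph's exact pipeline.

```python
import numpy as np

def fmri_to_graph(timeseries, density=0.1):
    """Build a sparse functional-connectivity graph from ROI time series:
    node features are rows of the correlation matrix, and edges keep only
    the strongest absolute correlations."""
    corr = np.corrcoef(timeseries)                 # (ROIs x ROIs)
    n = corr.shape[0]
    tri = np.abs(corr[np.triu_indices(n, k=1)])
    thresh = np.quantile(tri, 1 - density)         # keep top fraction of edges
    adj = (np.abs(corr) >= thresh) & ~np.eye(n, dtype=bool)
    node_features = corr                           # correlation vectors as features
    return adj, node_features

ts = np.random.default_rng(0).standard_normal((100, 300))  # 100 ROIs, 300 timepoints
adj, feats = fmri_to_graph(ts)
print(adj.sum() // 2, "edges among", feats.shape[0], "regions")
```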
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 19:10:16 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2023 06:29:31 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Said",
"Anwar",
""
],
[
"Bayrak",
"Roza G.",
""
],
[
"Derr",
"Tyler",
""
],
[
"Shabbir",
"Mudassir",
""
],
[
"Moyer",
"Daniel",
""
],
[
"Chang",
"Catie",
""
],
[
"Koutsoukos",
"Xenofon",
""
]
] |
new_dataset
| 0.994031 |
2306.08560
|
John Lloyd Dr
|
John Lloyd and Nathan Lepora
|
A pose and shear-based tactile robotic system for object tracking,
surface following and object pushing
|
A video demonstrating the methods described in this paper is
available at https://www.youtube.com/watch?v=xVs4hd34ek0
| null |
10.5281/zenodo.7937248
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Tactile perception is a crucial sensing modality in robotics, particularly in
scenarios that require precise manipulation and safe interaction with other
objects. Previous research in this area has focused extensively on tactile
perception of contact poses as this is an important capability needed for tasks
such as traversing an object's surface or edge, manipulating an object, or
pushing an object along a predetermined path. Another important capability
needed for tasks such as object tracking and manipulation is estimation of
post-contact shear, but this has received much less attention. Indeed,
post-contact shear has often been considered a "nuisance variable" and is
removed if possible because it can have an adverse effect on other types of
tactile perception such as contact pose estimation. This paper proposes a
tactile robotic system that can simultaneously estimate both the contact pose
and post-contact shear, and use this information to control its interaction
with other objects. Moreover, our new system is capable of interacting with
other objects in a smooth and continuous manner, unlike the stepwise,
position-controlled systems we have used in the past. We demonstrate the
capabilities of our new system using several different controller
configurations, on tasks including object tracking, surface following,
single-arm object pushing, and dual-arm object pushing.
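The control idea can be sketched as a smooth servo step that drives the end-effector velocity to cancel both the estimated contact-pose error and the post-contact shear; the linear control law and gains below are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def tactile_servo_step(pose_err, shear_err, k_pose=0.5, k_shear=0.3):
    """One step of a smooth tactile servo combining pose and shear feedback."""
    return -k_pose * np.asarray(pose_err) - k_shear * np.asarray(shear_err)

pose_err = [0.002, -0.001, 0.0]    # estimated contact-pose error (m)
shear_err = [0.01, 0.0, 0.0]       # estimated post-contact shear
velocity_cmd = tactile_servo_step(pose_err, shear_err)
print(velocity_cmd)                # commanded Cartesian velocity for this cycle
```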
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 15:06:26 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 17:25:30 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Lloyd",
"John",
""
],
[
"Lepora",
"Nathan",
""
]
] |
new_dataset
| 0.999492 |
2306.08571
|
Mingjian Zhu
|
Mingjian Zhu, Hanting Chen, Qiangyu Yan, Xudong Huang, Guanyu Lin, Wei
Li, Zhijun Tu, Hailin Hu, Jie Hu, Yunhe Wang
|
GenImage: A Million-Scale Benchmark for Detecting AI-Generated Image
|
GitHub: https://github.com/GenImage-Dataset/GenImage
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The extraordinary ability of generative models to generate photographic
images has intensified concerns about the spread of disinformation, thereby
leading to the demand for detectors capable of distinguishing between
AI-generated fake images and real images. However, the lack of large datasets
containing images from the most advanced image generators poses an obstacle to
the development of such detectors. In this paper, we introduce the GenImage
dataset, which has the following advantages: 1) Plenty of Images, including
over one million pairs of AI-generated fake images and collected real images.
2) Rich Image Content, encompassing a broad range of image classes. 3)
State-of-the-art Generators, synthesizing images with advanced diffusion models
and GANs. The aforementioned advantages allow the detectors trained on GenImage
to undergo a thorough evaluation and demonstrate strong applicability to
diverse images. We conduct a comprehensive analysis of the dataset and propose
two tasks for evaluating the detection method in resembling real-world
scenarios. The cross-generator image classification task measures the
performance of a detector trained on one generator when tested on the others.
The degraded image classification task assesses the capability of the detectors
in handling degraded images such as low-resolution, blurred, and compressed
images. With the GenImage dataset, researchers can effectively expedite the
development and evaluation of superior AI-generated image detectors in
comparison to prevailing methodologies.
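The cross-generator image classification task can be sketched as a train-on-one, test-on-all loop, as below. Images are reduced to a single scalar feature and the detector is a toy threshold; these, plus the generator names, are stand-in assumptions meant only to show the protocol and the accuracy drop under generator shift.

```python
import random

def train_detector(train_split):
    """Toy detector: threshold a scalar image feature at the training mean."""
    mean = sum(x for x, _ in train_split) / len(train_split)
    return lambda x: int(x > mean)

def accuracy(detector, test_split):
    return sum(detector(x) == y for x, y in test_split) / len(test_split)

def make_split(offset, n=200):
    # Fake images as scalars: real ~ offset, fake ~ offset + 1.
    data = [(random.gauss(offset + y, 0.3), y) for _ in range(n) for y in (0, 1)]
    random.shuffle(data)
    return {"train": data[:n], "test": data[n:]}

random.seed(0)
datasets = {"sd14": make_split(0.0), "biggan": make_split(0.5)}
for source, split in datasets.items():
    det = train_detector(split["train"])
    print(source, {t: round(accuracy(det, s["test"]), 2)
                   for t, s in datasets.items()})
```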
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 15:21:09 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2023 08:41:47 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zhu",
"Mingjian",
""
],
[
"Chen",
"Hanting",
""
],
[
"Yan",
"Qiangyu",
""
],
[
"Huang",
"Xudong",
""
],
[
"Lin",
"Guanyu",
""
],
[
"Li",
"Wei",
""
],
[
"Tu",
"Zhijun",
""
],
[
"Hu",
"Hailin",
""
],
[
"Hu",
"Jie",
""
],
[
"Wang",
"Yunhe",
""
]
] |
new_dataset
| 0.999534 |
2306.08997
|
Iddo Drori
|
Sarah J. Zhang, Samuel Florin, Ariel N. Lee, Eamon Niknafs, Andrei
Marginean, Annie Wang, Keith Tyser, Zad Chin, Yann Hicke, Nikhil Singh,
Madeleine Udell, Yoon Kim, Tonio Buonassisi, Armando Solar-Lezama, Iddo Drori
|
Exploring the MIT Mathematics and EECS Curriculum Using Large Language
Models
|
Did not receive permission to release the data or model fine-tuned on
the data
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We curate a comprehensive dataset of 4,550 questions and solutions from
problem sets, midterm exams, and final exams across all MIT Mathematics and
Electrical Engineering and Computer Science (EECS) courses required for
obtaining a degree. We evaluate the ability of large language models to fulfill
the graduation requirements for any MIT major in Mathematics and EECS. Our
results demonstrate that GPT-3.5 successfully solves a third of the entire MIT
curriculum, while GPT-4, with prompt engineering, achieves a perfect solve rate
on a test set excluding questions based on images. We fine-tune an open-source
large language model on this dataset. We employ GPT-4 to automatically grade
model responses, providing a detailed performance breakdown by course,
question, and answer type. By embedding questions in a low-dimensional space,
we explore the relationships between questions, topics, and classes and
discover which questions and classes are required for solving other questions
and classes through few-shot learning. Our analysis offers valuable insights
into course prerequisites and curriculum design, highlighting language models'
potential for learning and improving Mathematics and EECS education.
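Embedding questions in a low-dimensional space, as described, can be sketched with a PCA projection via SVD; the random input vectors below stand in for actual question embeddings, and the choice of PCA is an assumption.

```python
import numpy as np

def embed_2d(question_vectors):
    """Project high-dimensional question embeddings to 2D with PCA (via SVD)
    to explore relationships between questions, topics, and classes."""
    X = question_vectors - question_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

rng = np.random.default_rng(0)
emb = rng.standard_normal((4550, 768))   # one embedding per curated question
coords = embed_2d(emb)
print(coords.shape)                       # (4550, 2) points for plotting
```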
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 09:48:14 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2023 12:39:06 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zhang",
"Sarah J.",
""
],
[
"Florin",
"Samuel",
""
],
[
"Lee",
"Ariel N.",
""
],
[
"Niknafs",
"Eamon",
""
],
[
"Marginean",
"Andrei",
""
],
[
"Wang",
"Annie",
""
],
[
"Tyser",
"Keith",
""
],
[
"Chin",
"Zad",
""
],
[
"Hicke",
"Yann",
""
],
[
"Singh",
"Nikhil",
""
],
[
"Udell",
"Madeleine",
""
],
[
"Kim",
"Yoon",
""
],
[
"Buonassisi",
"Tonio",
""
],
[
"Solar-Lezama",
"Armando",
""
],
[
"Drori",
"Iddo",
""
]
] |
new_dataset
| 0.968107 |
2306.09344
|
Stephanie Fu
|
Stephanie Fu, Netanel Tamir, Shobhita Sundaram, Lucy Chai, Richard
Zhang, Tali Dekel, Phillip Isola
|
DreamSim: Learning New Dimensions of Human Visual Similarity using
Synthetic Data
|
Website: https://dreamsim-nights.github.io/ Code:
https://github.com/ssundaram21/dreamsim; Fixed in-text citation, figure
alignment, and typos
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Current perceptual similarity metrics operate at the level of pixels and
patches. These metrics compare images in terms of their low-level colors and
textures, but fail to capture mid-level similarities and differences in image
layout, object pose, and semantic content. In this paper, we develop a
perceptual metric that assesses images holistically. Our first step is to
collect a new dataset of human similarity judgments over image pairs that are
alike in diverse ways. Critical to this dataset is that judgments are nearly
automatic and shared by all observers. To achieve this we use recent
text-to-image models to create synthetic pairs that are perturbed along various
dimensions. We observe that popular perceptual metrics fall short of explaining
our new data, and we introduce a new metric, DreamSim, tuned to better align
with human perception. We analyze how our metric is affected by different
visual attributes, and find that it focuses heavily on foreground objects and
semantic content while also being sensitive to color and layout. Notably,
despite being trained on synthetic data, our metric generalizes to real images,
giving strong results on retrieval and reconstruction tasks. Furthermore, our
metric outperforms both prior learned metrics and recent large vision models on
these tasks.
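The evaluation setup can be sketched as a two-alternative forced choice (2AFC) check of whether a metric agrees with human judgments; the use of cosine distance over backbone embeddings and the random tensors below are illustrative assumptions, not DreamSim's trained metric.

```python
import torch
import torch.nn.functional as F

def perceptual_distance(emb_a, emb_b):
    """Distance between image embeddings; smaller means more similar."""
    return 1 - F.cosine_similarity(emb_a, emb_b, dim=-1)

def two_afc_accuracy(ref, img0, img1, human_prefers_0):
    """Does the metric pick the same image as the human observers?"""
    picks_0 = perceptual_distance(ref, img0) < perceptual_distance(ref, img1)
    return (picks_0 == human_prefers_0).float().mean()

ref, a, b = torch.randn(3, 32, 512).unbind(0)  # stand-in embeddings for a batch
labels = torch.ones(32, dtype=torch.bool)      # pretend humans always prefer img0
print(two_afc_accuracy(ref, a, b, labels).item())
```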
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 17:59:50 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jun 2023 17:57:37 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Fu",
"Stephanie",
""
],
[
"Tamir",
"Netanel",
""
],
[
"Sundaram",
"Shobhita",
""
],
[
"Chai",
"Lucy",
""
],
[
"Zhang",
"Richard",
""
],
[
"Dekel",
"Tali",
""
],
[
"Isola",
"Phillip",
""
]
] |
new_dataset
| 0.999604 |
2306.10228
|
Shuhao Zhang
|
Xianzhi Zeng and Shuhao Zhang
|
CStream: Parallel Data Stream Compression on Multicore Edge Devices
| null | null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
In the burgeoning realm of Internet of Things (IoT) applications on edge
devices, data stream compression has become increasingly pertinent. The
integration of added compression overhead and limited hardware resources on
these devices calls for a nuanced software-hardware co-design. This paper
introduces CStream, a pioneering framework crafted for parallelizing stream
compression on multicore edge devices. CStream grapples with the distinct
challenges of delivering a high compression ratio, high throughput, low
latency, and low energy consumption. Notably, CStream distinguishes itself by
accommodating an array of stream compression algorithms, a variety of hardware
architectures and configurations, and an innovative set of parallelization
strategies, some of which are proposed herein for the first time. Our
evaluation showcases the efficacy of a thoughtful co-design involving a lossy
compression algorithm, asymmetric multicore processors, and our novel,
hardware-conscious parallelization strategies. This approach achieves a 2.8x
compression ratio with only marginal information loss, 4.3x throughput, 65%
latency reduction and 89% energy consumption reduction, compared to designs
lacking such strategic integration.
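The basic parallelization idea (compress chunks of a stream concurrently while preserving arrival order) can be sketched as below; zlib and the fixed-size chunking are stand-ins for CStream's pluggable algorithms and parallelization strategies.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_stream(chunks, workers=4):
    """Compress chunks of a data stream in parallel, preserving order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() yields results in submission order, so downstream consumers
        # see compressed blocks in the same order the tuples arrived.
        yield from pool.map(lambda c: zlib.compress(c, level=1), chunks)

stream = (bytes([i % 7]) * 4096 for i in range(64))   # synthetic input stream
compressed = list(compress_stream(stream))
ratio = 64 * 4096 / sum(len(c) for c in compressed)
print(f"compression ratio ~{ratio:.1f}x across {len(compressed)} blocks")
```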
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 01:34:36 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2023 03:20:10 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zeng",
"Xianzhi",
""
],
[
"Zhang",
"Shuhao",
""
]
] |
new_dataset
| 0.992665 |
2306.10350
|
Weichen Zhang
|
Weichen Zhang, Xiang Zhou, Yukang Cao, Wensen Feng, Chun Yuan
|
MA-NeRF: Motion-Assisted Neural Radiance Fields for Face Synthesis from
Sparse Images
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We address the problem of photorealistic 3D face avatar synthesis from sparse
images. Existing parametric models for face avatar reconstruction struggle to
generate details that originate from the inputs. Meanwhile, although current
NeRF-based avatar methods provide promising results for novel view synthesis,
they fail to generalize well to unseen expressions. We build on NeRF and
propose a novel framework that, by leveraging parametric 3DMM models, can
reconstruct a high-fidelity drivable face avatar and successfully handle the
unseen expressions. At the core of our implementation are a structured
displacement feature and a semantic-aware learning module. The structured
displacement feature introduces the motion prior as an additional constraint
and, by constructing a displacement volume, helps the model perform better on
unseen expressions. In addition, the semantic-aware learning module
incorporates multi-level priors, e.g., semantic embeddings and a learnable
latent code, to lift performance to a higher level. Thorough quantitative and
qualitative experiments validate the design of our framework, and our method
achieves much better results than the current state of the art.
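Querying a structured displacement volume at continuous 3D points can be sketched with trilinear interpolation, as below; the sampled feature would then condition the radiance field. The shapes and the use of grid_sample are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def sample_displacement(volume, points):
    """Trilinearly interpolate a displacement feature volume at 3D points.
    volume: (1, C, D, H, W); points in [-1, 1]^3 with shape (N, 3)."""
    grid = points.view(1, -1, 1, 1, 3)
    feats = F.grid_sample(volume, grid, align_corners=True)  # (1, C, N, 1, 1)
    return feats.view(volume.shape[1], -1).t()               # (N, C)

vol = torch.randn(1, 16, 8, 8, 8)      # displacement feature volume
pts = torch.rand(1024, 3) * 2 - 1      # query points along camera rays
print(sample_displacement(vol, pts).shape)   # torch.Size([1024, 16])
```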
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 13:49:56 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2023 13:14:35 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Zhang",
"Weichen",
""
],
[
"Zhou",
"Xiang",
""
],
[
"Cao",
"Yukang",
""
],
[
"Feng",
"Wensen",
""
],
[
"Yuan",
"Chun",
""
]
] |
new_dataset
| 0.995963 |
2306.11686
|
Shilei Tian
|
Shilei Tian and Tom Scogland and Barbara Chapman and Johannes Doerfert
|
GPU First -- Execution of Legacy CPU Codes on GPUs
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Utilizing GPUs is critical for high performance on heterogeneous systems.
However, leveraging the full potential of GPUs for accelerating legacy CPU
applications can be a challenging task for developers. The porting process
requires identifying code regions amenable to acceleration, managing distinct
memories, synchronizing host and device execution, and handling library
functions that may not be directly executable on the device. This complexity
makes it challenging for non-experts to leverage GPUs effectively, or even to
start offloading parts of a large legacy application. In this paper, we propose
a novel compilation scheme called "GPU First" that automatically compiles
legacy CPU applications directly for GPUs without any modification of the
application source. Library calls inside the application are either resolved
through our partial libc GPU implementation or via automatically generated
remote procedure calls to the host. Our approach simplifies the task of
identifying code regions amenable to acceleration and enables rapid testing of
code modifications on actual GPU hardware in order to guide porting efforts.
Our evaluation on two HPC proxy applications with OpenMP CPU and GPU
parallelism, four micro benchmarks with originally GPU only parallelism, as
well as three benchmarks from the SPEC OMP 2012 suite featuring hand-optimized
OpenMP CPU parallelism showcases the simplicity of porting host applications to
the GPU. For existing parallel loops, we often match the performance of
corresponding manually offloaded kernels, with up to 14.36x speedup on the GPU,
validating that our GPU First methodology can effectively guide porting efforts
of large legacy applications.
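The host-RPC fallback for unsupported library calls can be modeled conceptually as below: a device-side stub ships the call to the host, which executes it and returns the result. Queues stand in for the pinned host-device buffers a real runtime would use; this mirrors the mechanism described, not the actual GPU First implementation.

```python
import queue
import threading

calls, results = queue.Queue(), queue.Queue()

def host_rpc_server():
    """Host side: service calls the GPU cannot execute itself."""
    while True:
        name, args = calls.get()
        if name == "shutdown":
            break
        results.put({"strlen": lambda s: len(s)}[name](*args))  # host executes

def device_stub_strlen(s):
    """Device side: stub for a libc function with no device implementation."""
    calls.put(("strlen", (s,)))   # ship the call to the host
    return results.get()          # block until the host replies

server = threading.Thread(target=host_rpc_server)
server.start()
print(device_stub_strlen("offload me"))   # 10, computed on the "host"
calls.put(("shutdown", ()))
server.join()
```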
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 17:03:16 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 15:37:12 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Jun 2023 14:35:03 GMT"
}
] | 2023-06-27T00:00:00 |
[
[
"Tian",
"Shilei",
""
],
[
"Scogland",
"Tom",
""
],
[
"Chapman",
"Barbara",
""
],
[
"Doerfert",
"Johannes",
""
]
] |
new_dataset
| 0.956843 |