Dataset schema (⌀ marks columns that contain null values):

| column | dtype | range / classes |
|---|---|---|
| id | string | length 9-10 |
| submitter ⌀ | string | length 2-52 |
| authors | string | length 4-6.51k |
| title | string | length 4-246 |
| comments ⌀ | string | length 1-523 |
| journal-ref ⌀ | string | length 4-345 |
| doi ⌀ | string | length 11-120 |
| report-no ⌀ | string | length 2-243 |
| categories | string | length 5-98 |
| license | string | 9 classes |
| abstract | string | length 33-3.33k |
| versions | list | |
| update_date | timestamp[s] | |
| authors_parsed | list | |
| prediction | string | 1 class |
| probability | float64 | 0.95-1 |

| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.06746
|
Hao Cui
|
Hao Cui, Rahmadi Trimananda, Athina Markopoulou, Scott Jordan
|
PoliGraph: Automated Privacy Policy Analysis using Knowledge Graphs
|
24 pages, 15 figures (including subfigures), 9 tables. This is the
extended version of the paper with the same title published at USENIX
Security '23
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Privacy policies disclose how an organization collects and handles personal
information. Recent work has made progress in leveraging natural language
processing (NLP) to automate privacy policy analysis and extract data
collection statements from different sentences, considered in isolation from
each other. In this paper, we view and analyze, for the first time, the entire
text of a privacy policy in an integrated way. In terms of methodology: (1) we
define PoliGraph, a type of knowledge graph that captures statements in a
privacy policy as relations between different parts of the text; and (2) we
develop an NLP-based tool, PoliGraph-er, to automatically extract PoliGraph
from the text. In addition, (3) we revisit the notion of ontologies, previously
defined in heuristic ways, to capture subsumption relations between terms. We
make a clear distinction between local and global ontologies to capture the
context of individual privacy policies, application domains, and privacy laws.
Using a public dataset for evaluation, we show that PoliGraph-er identifies 40%
more collection statements than the prior state-of-the-art, with 97% precision. In
terms of applications, PoliGraph enables automated analysis of a corpus of
privacy policies and allows us to: (1) reveal common patterns in the texts
across different privacy policies, and (2) assess the correctness of the terms
as defined within a privacy policy. We also apply PoliGraph to: (3) detect
contradictions in a privacy policy, where we show that prior work raised false alarms,
and (4) analyze the consistency of privacy policies and network traffic, where
we identify significantly more clear disclosures than prior work.
|
[
{
"version": "v1",
"created": "Thu, 13 Oct 2022 05:16:22 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 19:45:23 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Cui",
"Hao",
""
],
[
"Trimananda",
"Rahmadi",
""
],
[
"Markopoulou",
"Athina",
""
],
[
"Jordan",
"Scott",
""
]
] |
new_dataset
| 0.995925 |
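The PoliGraph record above describes encoding a policy's collection statements as typed relations in a knowledge graph, with ontology edges capturing subsumption between terms. As a hedged illustration only (the entity names, relation labels, and tiny ontology below are invented for the example; this is not the authors' implementation), a minimal sketch in Python with networkx:

```python
import networkx as nx

# Toy knowledge graph in the spirit of PoliGraph: nodes are policy terms,
# edges are typed relations (COLLECT for collection statements,
# SUBSUME for ontology/subsumption relations).
g = nx.DiGraph()
g.add_edge("we", "email address", rel="COLLECT")  # "We collect your email address."
g.add_edge("advertiser", "device identifier", rel="COLLECT")
g.add_edge("personal information", "email address", rel="SUBSUME")        # local ontology
g.add_edge("personal information", "device identifier", rel="SUBSUME")

def collected_under(graph, broad_term):
    """All (collector, data type) pairs whose data type the ontology places under broad_term."""
    subsumed = {v for _, v, d in graph.out_edges(broad_term, data=True)
                if d["rel"] == "SUBSUME"}
    return {(u, v) for u, v, d in graph.edges(data=True)
            if d["rel"] == "COLLECT" and v in subsumed}

print(collected_under(g, "personal information"))
```

Queries like this one are what make graph-level analyses (contradiction detection, flow-to-disclosure consistency) possible once the graph has been extracted.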
2210.13977
|
Clayton Miller
|
Federico Tartarini, Mario Frei, Stefano Schiavon, Yun Xuan Chua,
Clayton Miller
|
Cozie Apple: An iOS mobile and smartwatch application for environmental
quality satisfaction and physiological data collection
|
Accepted at the CISBAT 2023 The Built Environment in Transition,
Hybrid International Conference, EPFL, Lausanne, Switzerland, 13-15 September
2023
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Collecting feedback from people in indoor and outdoor environments in a
reliable, longitudinal, and non-intrusive way is traditionally challenging and
complex. This paper introduces Cozie Apple, an open-source mobile and
smartwatch application for iOS devices. This platform allows people to complete
a watch-based micro-survey and provide real-time feedback about environmental
conditions via their Apple Watch. It leverages the inbuilt sensors of a
smartwatch to collect physiological (e.g., heart rate, activity) and
environmental (sound level) data. This paper outlines data collected from 48
research participants who used the platform to report perceptions of
urban-scale environmental comfort (noise and thermal) and contextual factors
such as who they were with and what activity they were doing. The results of
2,400 micro-surveys across various urban settings are illustrated in this paper
showing the variability of noise-related distractions, thermal comfort, and
associated context. The results show people experience at least a little noise
distraction 58% of the time, with people talking being the most common reason
(46%). This effort is novel due to its focus on spatial and temporal
scalability and collection of noise, distraction, and associated contextual
information. These data set the stage for larger deployments, deeper analysis,
and more helpful prediction models toward better understanding the occupants'
needs and perceptions. These innovations could result in real-time control
signals to building systems or nudges for people to change their behavior.
|
[
{
"version": "v1",
"created": "Sat, 22 Oct 2022 03:31:25 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 01:39:28 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Tartarini",
"Federico",
""
],
[
"Frei",
"Mario",
""
],
[
"Schiavon",
"Stefano",
""
],
[
"Chua",
"Yun Xuan",
""
],
[
"Miller",
"Clayton",
""
]
] |
new_dataset
| 0.999817 |
2301.04728
|
Michael Mislove
|
Ayberk Tosun, Martín Hötzel Escardó
|
Patch Locale of a Spectral Locale in Univalent Type Theory
| null |
Electronic Notes in Theoretical Informatics and Computer Science,
Volume 1 - Proceedings of MFPS XXXVIII (February 22, 2023) entics:10808
|
10.46298/entics.10808
| null |
cs.LO math.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Stone locales together with continuous maps form a coreflective subcategory
of spectral locales and perfect maps. A proof in the internal language of an
elementary topos was previously given by the second-named author. This proof
can be easily translated to univalent type theory using resizing axioms. In
this work, we show how to achieve such a translation without resizing axioms,
by working with large, locally small, and small complete frames with small
bases. This turns out to be nontrivial and involves predicative reformulations
of several fundamental concepts of locale theory.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 21:43:26 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Feb 2023 14:01:50 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Feb 2023 22:11:22 GMT"
},
{
"version": "v4",
"created": "Mon, 20 Feb 2023 15:51:05 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Tosun",
"Ayberk",
""
],
[
"Escardó",
"Martín Hötzel",
""
]
] |
new_dataset
| 0.999695 |
2301.13346
|
Michael Chesser
|
Michael Chesser, Surya Nepal, Damith C. Ranasinghe
|
Icicle: A Re-Designed Emulator for Grey-Box Firmware Fuzzing
|
Accepted ISSTA 2023. Code: https://github.com/icicle-emu/icicle
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emulation-based fuzzers enable testing binaries without source code, and
facilitate testing embedded applications where automated execution on the
target hardware architecture is difficult and slow. The instrumentation
techniques added to extract feedback and guide input mutations towards
generating effective test cases are at the core of modern fuzzers. However,
modern emulation-based fuzzers have evolved by re-purposing general-purpose
emulators; consequently, developing and integrating fuzzing techniques, such as
instrumentation methods, is difficult, and such techniques are often added in
an ad-hoc manner, specific to an instruction set architecture (ISA). This
limits state-of-the-art fuzzing techniques to a few ISAs such as x86/x86-64 or
ARM/AArch64; a significant problem for firmware fuzzing of diverse ISAs.
This study presents our efforts to re-think emulation for fuzzing. We design
and implement a fuzzing-specific, multi-architecture emulation framework --
Icicle. We demonstrate the capability to add instrumentation once, in an
architecture agnostic manner, with low execution overhead. We employ Icicle as
the emulator for a state-of-the-art ARM firmware fuzzer -- Fuzzware -- and
replicate results. Significantly, we demonstrate that the new instrumentation
available in Icicle enabled the discovery of new bugs. We demonstrate the
fidelity of Icicle and efficacy of architecture agnostic instrumentation by
discovering LAVA-M benchmark bugs, requiring a known and specific operational
capability of instrumentation techniques, across a diverse set of instruction
set architectures (x86-64, ARM/AArch64, RISC-V, MIPS). Further, to demonstrate
the effectiveness of Icicle to discover bugs in a currently unsupported
architecture in emulation-based fuzzers, we perform a fuzzing campaign with
real-world MSP430 firmware binaries and discovered 7 new bugs.
|
[
{
"version": "v1",
"created": "Tue, 31 Jan 2023 00:32:29 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 06:03:14 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Chesser",
"Michael",
""
],
[
"Nepal",
"Surya",
""
],
[
"Ranasinghe",
"Damith C.",
""
]
] |
new_dataset
| 0.988007 |
2302.04547
|
Deepika Tiwari
|
Deepika Tiwari, Martin Monperrus, Benoit Baudry
|
RICK: Generating Mocks from Production Data
|
Appears in the tool demonstrations track of the IEEE International
Conference on Software Testing, Verification and Validation (ICST), 2023
|
Proceedings of ICST, 2023
|
10.1109/icst57152.2023.00051
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Test doubles, such as mocks and stubs, are nifty fixtures in unit tests. They
allow developers to test individual components in isolation from others that
lie within or outside of the system. However, implementing test doubles within
tests is not straightforward. With this demonstration, we introduce RICK, a
tool that observes executing applications in order to automatically generate
tests with realistic mocks and stubs. RICK monitors the invocation of target
methods and their interactions with external components. Based on the data
collected from these observations, RICK produces unit tests with mocks, stubs,
and mock-based oracles. We highlight the capabilities of RICK, and how it can
be used with real-world Java applications, to generate tests with mocks.
|
[
{
"version": "v1",
"created": "Thu, 9 Feb 2023 10:25:51 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Tiwari",
"Deepika",
""
],
[
"Monperrus",
"Martin",
""
],
[
"Baudry",
"Benoit",
""
]
] |
new_dataset
| 0.994681 |
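RICK's core idea, per the abstract above, is to observe a method's interactions with external components in production and replay them as mocks and stubs in generated tests. A minimal, hypothetical sketch of that record-then-stub workflow using only the Python standard library (the Recorder design is ours for illustration, not RICK's Java implementation):

```python
from unittest.mock import Mock

class Recorder:
    """Wraps a collaborator, recording each call's name, arguments, and return value."""
    def __init__(self, target):
        self._target = target
        self.log = []  # (method, args, result) tuples observed in "production"

    def __getattr__(self, name):
        real = getattr(self._target, name)
        def proxy(*args, **kwargs):
            result = real(*args, **kwargs)
            self.log.append((name, args, result))
            return result
        return proxy

def stub_from(log):
    """Build a Mock whose methods replay the recorded return values in order."""
    stub = Mock()
    by_method = {}
    for name, _args, result in log:
        by_method.setdefault(name, []).append(result)
    for name, results in by_method.items():
        getattr(stub, name).side_effect = list(results)
    return stub
```

A test would then exercise the method under test against `stub_from(recorder.log)` instead of the live collaborator, which is the essence of a mock-based oracle.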
2302.11033
|
Jose-Luis Blanco-Claraco
|
José-Luis Blanco-Claraco, Borys Tymchenko, Francisco José
Mañas-Alvarez, Fernando Cañadas-Aránega, Ángel López-Gázquez,
José Carlos Moreno
|
MultiVehicle Simulator (MVSim): lightweight dynamics simulator for
multiagents and mobile robotics research
|
6 pages, 6 figures, submitted
| null |
10.1016/j.softx.2023.101443
| null |
cs.RO cs.MA
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Development of applications related to closed-loop control requires either
testing on the field or on a realistic simulator, with the latter being more
convenient, inexpensive, safe, and leading to shorter development cycles. To
address that need, the present work introduces MVSim, a simulator for multiple
vehicles or robots capable of running dozens of agents in simple scenarios, or
a handful of them in complex scenarios. MVSim employs realistic
physics-grounded friction models for tire-ground interaction, and aims at
accurate and GPU-accelerated simulation of most common modern sensors employed
in mobile robotics and autonomous vehicle research, such as depth and RGB
cameras, or 2D and 3D LiDAR scanners. All depth-related sensors are able to
accurately measure distances to 3D models provided by the user to define custom
world elements. Efficient simulation is achieved by means of focusing on ground
vehicles, which allows the use of a simplified 2D physics engine for body
collisions while solving wheel-ground interaction forces separately. The core
parts of the system are written in C++ for maximum efficiency, while Python,
ROS 1, and ROS 2 wrappers are also offered for easy integration into user
systems. A custom publish/subscribe protocol based on ZeroMQ (ZMQ) is defined
to allow for multiprocess applications to access or modify a running
simulation. This simulator enables and eases research and
development on vehicular dynamics, autonomous navigation algorithms, and
simultaneous localization and mapping (SLAM) methods.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 22:22:21 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Blanco-Claraco",
"José-Luis",
""
],
[
"Tymchenko",
"Borys",
""
],
[
"Mañas-Alvarez",
"Francisco José",
""
],
[
"Cañadas-Aránega",
"Fernando",
""
],
[
"López-Gázquez",
"Ángel",
""
],
[
"Moreno",
"José Carlos",
""
]
] |
new_dataset
| 0.999076 |
2303.05368
|
Quoc Huy Vu
|
Alex B. Grilo, Or Sattath, Quoc-Huy Vu
|
Encryption with Quantum Public Keys
|
This paper is subsumed and superseded by arXiv:2306.07698
| null | null | null |
cs.CR quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
It is an important question to find constructions of quantum cryptographic
protocols which rely on weaker computational assumptions than classical
protocols. Recently, it has been shown that oblivious transfer and multi-party
computation can be constructed from one-way functions, whereas this is
impossible in the classical setting in a black-box way. In this work, we study
the question of building quantum public-key encryption schemes from one-way
functions and even weaker assumptions. Firstly, we revisit the definition of
IND-CPA security in this setting. Then, we propose three schemes for quantum
public-key encryption from one-way functions, pseudorandom function-like states
with proof of deletion and pseudorandom function-like states, respectively.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 16:17:19 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 10:11:12 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jun 2023 11:28:01 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Grilo",
"Alex B.",
""
],
[
"Sattath",
"Or",
""
],
[
"Vu",
"Quoc-Huy",
""
]
] |
new_dataset
| 0.981481 |
2303.07156
|
Chaofeng Guan
|
Chaofeng Guan, Ruihu Li, Yiting Liu, Zhi Ma
|
Some quaternary additive codes outperform linear counterparts
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Additive codes may have better parameters than linear codes. However, it
is still a challenging problem to efficiently construct additive codes that
outperform linear codes, especially those with greater distances than linear
codes of the same lengths and dimensions. This paper focuses on constructing
additive codes that outperform linear codes based on quasi-cyclic codes and
combinatorial methods. Firstly, we propose a lower bound on the symplectic
distance of 1-generator quasi-cyclic codes with even index. Secondly, we obtain many
binary quasi-cyclic codes with large symplectic distances utilizing
computer-supported combination and search methods, all of which correspond to
good quaternary additive codes. Notably, some additive codes have greater
distances than the best-known quaternary linear codes in Grassl's code table
(bounds on the minimum distance of quaternary linear codes
http://www.codetables.de) for the same lengths and dimensions. Moreover,
employing a combinatorial approach, we partially determine the parameters of
optimal quaternary additive 3.5-dimensional codes with lengths from $28$ to
$254$. Finally, as an extension, we also construct some good additive
complementary dual codes with larger distances than the best-known quaternary
linear complementary dual codes in the literature.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 14:30:22 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Mar 2023 14:07:12 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2023 03:14:53 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Jun 2023 13:20:23 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Guan",
"Chaofeng",
""
],
[
"Li",
"Ruihu",
""
],
[
"Liu",
"Yiting",
""
],
[
"Ma",
"Zhi",
""
]
] |
new_dataset
| 0.996755 |
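The symplectic distance mentioned in the abstract above is defined on binary vectors of length 2n split as (a|b): the symplectic weight counts the positions i where (a_i, b_i) != (0, 0), which is what makes a binary quasi-cyclic code correspond to a quaternary additive code. A small sketch under that standard textbook definition (not the paper's search procedure):

```python
def symplectic_weight(v):
    """Symplectic weight of a binary vector v = (a | b) of even length 2n:
    the number of positions i with (a[i], b[i]) != (0, 0)."""
    n = len(v) // 2
    a, b = v[:n], v[n:]
    return sum(1 for x, y in zip(a, b) if x or y)

def min_symplectic_distance(codewords):
    """Minimum symplectic weight over the nonzero codewords of a code."""
    return min(symplectic_weight(c) for c in codewords if any(c))

# (0,1,1,0 | 1,0,1,0): pairs at positions 0, 1, 2 are nonzero -> weight 3
print(symplectic_weight([0, 1, 1, 0, 1, 0, 1, 0]))  # 3
```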
2304.04624
|
Xuan Yu
|
Xuan Yu, Yili Liu, Sitong Mao, Shunbo Zhou, Rong Xiong, Yiyi Liao, Yue
Wang
|
NF-Atlas: Multi-Volume Neural Feature Fields for Large Scale LiDAR
Mapping
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR Mapping has been a long-standing problem in robotics. Recent progress
in neural implicit representation has brought new opportunities to robotic
mapping. In this paper, we propose the multi-volume neural feature fields,
called NF-Atlas, which bridge the neural feature volumes with pose graph
optimization. By regarding the neural feature volume as pose graph nodes and
the relative pose between volumes as pose graph edges, the entire neural
feature field becomes both locally rigid and globally elastic. Locally, the
neural feature volume employs a sparse feature Octree and a small MLP to encode
the submap SDF with an option of semantics. Learning the map using this
structure allows for end-to-end solving of maximum a posteriori (MAP) based
probabilistic mapping. Globally, the map is built volume by volume
independently, avoiding catastrophic forgetting when mapping incrementally.
Furthermore, when a loop closure occurs, with the elastic pose graph based
representation, only updating the origin of neural volumes is required without
remapping. Finally, these functionalities of NF-Atlas are validated. Thanks to
the sparsity and the optimization based formulation, NF-Atlas shows competitive
performance in terms of accuracy, efficiency and memory usage on both
simulation and real-world datasets.
|
[
{
"version": "v1",
"created": "Mon, 10 Apr 2023 14:41:08 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 18:17:42 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Yu",
"Xuan",
""
],
[
"Liu",
"Yili",
""
],
[
"Mao",
"Sitong",
""
],
[
"Zhou",
"Shunbo",
""
],
[
"Xiong",
"Rong",
""
],
[
"Liao",
"Yiyi",
""
],
[
"Wang",
"Yue",
""
]
] |
new_dataset
| 0.969011 |
2305.16758
|
Michal Kepkowski
|
Wei-Zhu Yeoh, Michal Kepkowski, Gunnar Heide, Dali Kaafar, Lucjan
Hanzlik
|
Fast IDentity Online with Anonymous Credentials (FIDO-AC)
|
to be published in the 32nd USENIX Security Symposium (USENIX 2023)
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Web authentication is a critical component of today's Internet and the
digital world we interact with. The FIDO2 protocol enables users to leverage
common devices to easily authenticate to online services in both mobile and
desktop environments following the passwordless authentication approach based
on cryptography and biometric verification. However, there is little to no
connection between the authentication process and users' attributes. More
specifically, the FIDO protocol does not specify methods that could be used to
combine trusted attributes with the FIDO authentication process generically and
allows users to disclose them to the relying party arbitrarily. In essence,
applications requiring attributes verification (e.g. age or expiry date of a
driver's license, etc.) still rely on ad-hoc approaches, not satisfying the
data minimization principle and not allowing the user to vet the disclosed
data. A primary recent example is the data breach on Singtel Optus, one of the
major telecommunications providers in Australia, where very personal and
sensitive data (e.g. passport numbers) were leaked. This paper introduces
FIDO-AC, a novel framework that combines the FIDO2 authentication process with
the user's digital and non-shareable identity. We show how to instantiate this
framework using off-the-shelf FIDO tokens and any electronic identity document,
e.g., the ICAO biometric passport (ePassport). We demonstrate the practicality
of our approach by evaluating a prototype implementation of the FIDO-AC system.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 09:19:39 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2023 06:51:43 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2023 23:08:40 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Yeoh",
"Wei-Zhu",
""
],
[
"Kepkowski",
"Michal",
""
],
[
"Heide",
"Gunnar",
""
],
[
"Kaafar",
"Dali",
""
],
[
"Hanzlik",
"Lucjan",
""
]
] |
new_dataset
| 0.999071 |
2306.00887
|
Xueqing Wu
|
Xueqing Wu, Sha Li, Heng Ji
|
OpenPI-C: A Better Benchmark and Stronger Baseline for Open-Vocabulary
State Tracking
|
ACL 2023 findings (fix typo)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Open-vocabulary state tracking is a more practical version of state tracking
that aims to track state changes of entities throughout a process without
restricting the state space and entity space. OpenPI is to date the only
dataset annotated for open-vocabulary state tracking. However, we identify
issues with the dataset quality and evaluation metric. For the dataset, we
categorize three types of problems at the procedure, step, and state-change
levels, respectively, and build a clean dataset, OpenPI-C, using multiple
rounds of human judgment. For the evaluation metric, we propose a cluster-based
metric to fix the original metric's preference for repetition.
Model-wise, we enhance the seq2seq generation baseline by reinstating two key
properties for state tracking: temporal dependency and entity awareness. The
state of the world after an action is inherently dependent on the previous
state. We model this dependency through a dynamic memory bank and allow the
model to attend to the memory slots during decoding. On the other hand, the
state of the world is naturally a union of the states of involved entities.
Since the entities are unknown in the open-vocabulary setting, we propose a
two-stage model that refines the state change prediction conditioned on
entities predicted from the first stage. Empirical results show the
effectiveness of our proposed model especially on the cluster-based metric. The
code and data are released at https://github.com/shirley-wu/openpi-c
|
[
{
"version": "v1",
"created": "Thu, 1 Jun 2023 16:48:20 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 19:47:20 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Wu",
"Xueqing",
""
],
[
"Li",
"Sha",
""
],
[
"Ji",
"Heng",
""
]
] |
new_dataset
| 0.999767 |
2306.10410
|
Matthew Drescher
|
Matthew Drescher, Muhammad A. Awad, Serban D. Porumbescu, John D.
Owens
|
BOBA: A Parallel Lightweight Graph Reordering Algorithm with Heavyweight
Implications
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a simple parallel-friendly lightweight graph reordering algorithm
for COO graphs (edge lists). Our "Batched Order By Attachment" (BOBA)
algorithm is linear in the number of edges in terms of reads and linear in the
number of vertices for writes through to main memory. It is highly
parallelizable on GPUs. We show that, compared to a randomized baseline, the
ordering produced gives improved locality of reference in sparse matrix-vector
multiplication (SpMV) as well as other graph algorithms. Moreover, it can
substantially speed up the conversion from a COO representation to the
compressed format CSR, a very common workflow. Thus, it can give end-to-end
speedups even in SpMV. Unlike other lightweight approaches, this reordering
does not rely on explicitly knowing the degrees of the vertices, and indeed its
runtime is comparable to that of computing degrees. Instead, it uses the
structure and edge distribution inherent in the input edge list, making it a
candidate for default use in a pragmatic graph creation pipeline. This
algorithm is suitable for road-type networks as well as scale-free ones. It
improves cache locality on both CPUs and GPUs, achieving hit rates similar to
the heavyweight techniques (e.g., for SpMV, 7-52% and 11-67% in the L1 and L2
caches, respectively). Compared to randomly labeled graphs, BOBA-reordered
graphs achieve end-to-end speedups of up to 3.45x. The
reordering time is approximately one order of magnitude faster than existing
lightweight techniques and up to 2.5 orders of magnitude faster than
heavyweight techniques.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 19:15:56 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 14:31:15 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Drescher",
"Matthew",
""
],
[
"Awad",
"Muhammad A.",
""
],
[
"Porumbescu",
"Serban D.",
""
],
[
"Owens",
"John D.",
""
]
] |
new_dataset
| 0.999463 |
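The abstract above describes BOBA as a lightweight reordering that needs no explicit degree information, only the order in which vertices first appear in the COO edge list. A serial, hedged approximation of that idea (the real algorithm is batched and GPU-parallel; this sketch only captures the relabel-by-first-attachment ordering):

```python
def reorder_by_attachment(edges):
    """Relabel vertices of a COO edge list by order of first appearance.

    One linear pass over the edges (reads O(|E|), writes O(|V|)),
    no degree computation required.
    """
    new_id = {}
    for u, v in edges:
        for x in (u, v):
            if x not in new_id:
                new_id[x] = len(new_id)
    return [(new_id[u], new_id[v]) for u, v in edges], new_id

edges = [(7, 3), (3, 9), (9, 7), (2, 9)]
relabeled, mapping = reorder_by_attachment(edges)
print(relabeled)  # [(0, 1), (1, 2), (2, 0), (3, 2)]
```

Because vertices that attach near each other in the edge stream receive nearby IDs, neighboring rows of the resulting CSR matrix tend to land in the same cache lines, which is the locality effect the paper measures.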
2306.10832
|
Ivan Virgala
|
Martin Varga, Ivan Virgala, Michal Kelemen, Lubica Mikova, Zdenko
Bobovsky, Peter Jan Sincak, Tomas Merva
|
Pneumatic bellows actuated parallel platform control with adjustable
stiffness using a hybrid feed-forward and variable gain I-controller
|
13 pages, 24 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Redundant cascade manipulators actuated by pneumatic bellows actuators are
passively compliant, rugged and dexterous, qualities that make them
exceptionally well suited for applications in agriculture. Unfortunately,
bellows actuators are notoriously difficult to position precisely. This paper
presents a novel control algorithm for the control of a parallel platform
actuated by pneumatic bellows actuators, which serves as one module of a
cascade manipulator. The algorithm combines a feed-forward controller and a
variable gain I-controller. The feed-forward controller was designed using
experimental data and two regression steps to create a mathematical
representation of the data. The gain of the I-controller depends linearly on
the total reference error, which allows the I-controller to work in concert
with the feed-forward part of the controller. The presented algorithm was
experimentally verified and its performance was compared with that of two
controllers, an ANFIS controller and a constant-gain PID controller, with
satisfactory results. The controller was also tested under dynamic loading
conditions, showing promising results.
|
[
{
"version": "v1",
"created": "Mon, 19 Jun 2023 10:34:32 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 12:11:10 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Varga",
"Martin",
""
],
[
"Virgala",
"Ivan",
""
],
[
"Kelemen",
"Michal",
""
],
[
"Mikova",
"Lubica",
""
],
[
"Bobovsky",
"Zdenko",
""
],
[
"Sincak",
"Peter Jan",
""
],
[
"Merva",
"Tomas",
""
]
] |
new_dataset
| 0.985893 |
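The control law described in the abstract above combines a feed-forward term fitted from experimental data with an integral term whose gain depends linearly on the reference error. A schematic sketch under those stated ingredients (the feed-forward polynomial and the gain constants below are placeholders of ours, not the paper's identified values):

```python
def make_controller(feed_forward, k0=0.1, k1=0.5, dt=0.01):
    """Hybrid feed-forward + variable-gain I-controller for one channel.

    feed_forward: callable mapping a reference position to an actuator command,
                  e.g. a polynomial fitted to experimental data by regression.
    The I-term gain grows linearly with the absolute reference error, so
    integral action works in concert with the feed-forward term instead of
    fighting it near the setpoint.
    """
    state = {"integral": 0.0}
    def control(reference, measurement):
        error = reference - measurement
        state["integral"] += error * dt
        ki = k0 + k1 * abs(error)          # variable gain, linear in the error
        return feed_forward(reference) + ki * state["integral"]
    return control

# Example: placeholder quadratic feed-forward map identified by regression.
ctrl = make_controller(lambda r: 0.8 * r + 0.05 * r**2)
u = ctrl(reference=1.0, measurement=0.9)
```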
2306.11290
|
Yongsen Mao
|
Mukul Khanna, Yongsen Mao, Hanxiao Jiang, Sanjay Haresh, Brennan
Shacklett, Dhruv Batra, Alexander Clegg, Eric Undersander, Angel X. Chang,
Manolis Savva
|
Habitat Synthetic Scenes Dataset (HSSD-200): An Analysis of 3D Scene
Scale and Realism Tradeoffs for ObjectGoal Navigation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We contribute the Habitat Synthetic Scene Dataset, a dataset of 211
high-quality 3D scenes, and use it to test navigation agent generalization to
realistic 3D environments. Our dataset represents real interiors and contains a
diverse set of 18,656 models of real-world objects. We investigate the impact
of synthetic 3D scene dataset scale and realism on the task of training
embodied agents to find and navigate to objects (ObjectGoal navigation). By
comparing to synthetic 3D scene datasets from prior work, we find that scale
helps in generalization, but the benefits quickly saturate, making visual
fidelity and correlation to real-world scenes more important. Our experiments
show that agents trained on our smaller-scale dataset can match or outperform
agents trained on much larger datasets. Surprisingly, we observe that agents
trained on just 122 scenes from our dataset outperform agents trained on 10,000
scenes from the ProcTHOR-10K dataset in terms of zero-shot generalization in
real-world scanned environments.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 05:07:23 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 03:19:20 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Khanna",
"Mukul",
""
],
[
"Mao",
"Yongsen",
""
],
[
"Jiang",
"Hanxiao",
""
],
[
"Haresh",
"Sanjay",
""
],
[
"Shacklett",
"Brennan",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Clegg",
"Alexander",
""
],
[
"Undersander",
"Eric",
""
],
[
"Chang",
"Angel X.",
""
],
[
"Savva",
"Manolis",
""
]
] |
new_dataset
| 0.999835 |
2306.11335
|
Yuhang Wen
|
Pengzhen Ren, Kaidong Zhang, Hetao Zheng, Zixuan Li, Yuhang Wen,
Fengda Zhu, Mas Ma, Xiaodan Liang
|
RM-PRT: Realistic Robotic Manipulation Simulator and Benchmark with
Progressive Reasoning Tasks
| null | null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the advent of pre-trained large-scale language models (LLMs) like
ChatGPT and GPT-4 has significantly advanced machines' natural language
understanding capabilities. This breakthrough has allowed us to seamlessly
integrate these open-source LLMs into a unified robot simulator environment to
help robots accurately understand and execute human natural language
instructions. To this end, in this work, we introduce a realistic robotic
manipulation simulator and build a Robotic Manipulation with Progressive
Reasoning Tasks (RM-PRT) benchmark on this basis. Specifically, the RM-PRT
benchmark builds a new high-fidelity digital twin scene based on Unreal Engine
5, which includes 782 categories, 2023 objects, and 15K natural language
instructions generated by ChatGPT for a detailed evaluation of robot
manipulation. We propose a general pipeline for the RM-PRT benchmark that takes
as input multimodal prompts containing natural language instructions and
automatically outputs actions containing the movement and position transitions.
We set four natural language understanding tasks with progressive reasoning
levels and evaluate the robot's ability to understand natural language
instructions in two modes of adsorption and grasping. In addition, we also
conduct a comprehensive analysis and comparison of the differences and
advantages of 10 different LLMs in instruction understanding and generation
quality. We hope the new simulator and benchmark will facilitate future
research on language-guided robotic manipulation. Project website:
https://necolizer.github.io/RM-PRT/ .
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 07:06:04 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2023 06:56:47 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Ren",
"Pengzhen",
""
],
[
"Zhang",
"Kaidong",
""
],
[
"Zheng",
"Hetao",
""
],
[
"Li",
"Zixuan",
""
],
[
"Wen",
"Yuhang",
""
],
[
"Zhu",
"Fengda",
""
],
[
"Ma",
"Mas",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
new_dataset
| 0.999155 |
2306.11739
|
Ziwei Liao
|
Ziwei Liao, Steven L. Waslander
|
Multi-view 3D Object Reconstruction and Uncertainty Modelling with
Neural Shape Prior
|
12 pages, 8 figures
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D object reconstruction is important for semantic scene understanding. It is
challenging to reconstruct detailed 3D shapes from monocular images directly
due to a lack of depth information, occlusion and noise. Most current methods
generate deterministic object models without any awareness of the uncertainty
of the reconstruction. We tackle this problem by leveraging a neural object
representation which learns an object shape distribution from a large dataset
of 3D object models and maps it into a latent space. We propose a method to model
uncertainty as part of the representation and define an uncertainty-aware
encoder which generates latent codes with uncertainty directly from individual
input images. Further, we propose a method to propagate the uncertainty in the
latent code to SDF values and generate a 3D object mesh with local uncertainty
for each mesh component. Finally, we propose an incremental fusion method under
a Bayesian framework to fuse the latent codes from multi-view observations. We
evaluate the system in both synthetic and real datasets to demonstrate the
effectiveness of uncertainty-based fusion to improve 3D object reconstruction
accuracy.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 03:25:13 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Liao",
"Ziwei",
""
],
[
"Waslander",
"Steven L.",
""
]
] |
new_dataset
| 0.993841 |
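The incremental multi-view fusion step in the abstract above, fusing per-view latent codes with uncertainties under a Bayesian framework, reduces, for independent Gaussian estimates, to precision-weighted averaging. A minimal numpy sketch of that standard update (diagonal per-dimension covariances assumed; this is the textbook rule, not necessarily the paper's exact formulation):

```python
import numpy as np

def fuse_gaussian(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian estimates of the same latent code.

    Precision-weighted mean; the fused variance is the inverse summed precision.
    """
    precision = 1.0 / var1 + 1.0 / var2
    var = 1.0 / precision
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Two single-view latent codes (mean, per-dimension variance):
mu_a, var_a = np.array([0.2, 1.0]), np.array([0.5, 2.0])
mu_b, var_b = np.array([0.4, 0.6]), np.array([0.5, 1.0])
mu_f, var_f = fuse_gaussian(mu_a, var_a, mu_b, var_b)
# The fused variance never exceeds either input variance,
# which is why adding views shrinks the reconstruction uncertainty.
```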
2306.11758
|
Haitong Huang
|
Haitong Huang, Cheng Liu, Xinghua Xue, Ying Wang, Huawei Li, Xiaowei
Li
|
MRFI: An Open Source Multi-Resolution Fault Injection Framework for
Neural Network Processing
|
8 pages, 11 figures, source code is on
https://github.com/fffasttime/MRFI
| null | null | null |
cs.LG cs.AI cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
To ensure resilient neural network processing on even unreliable hardware,
comprehensive reliability analysis against various hardware faults is generally
required before the deep neural network models are deployed, and efficient
error injection tools are in high demand. However, most existing fault
injection tools remain rather limited to basic fault injection to neurons and
fail to provide fine-grained vulnerability analysis capability. In addition,
many of the fault injection tools still need to change the neural network
models and make the fault injection closely coupled with normal neural network
processing, which further complicates the use of the fault injection tools and
slows down the fault simulation. In this work, we propose MRFI, a highly
configurable multi-resolution fault injection tool for deep neural networks. It
enables users to modify an independent fault configuration file rather than
neural network models for the fault injection and vulnerability analysis.
Particularly, it integrates extensive fault analysis functionalities from
different perspectives and enables multi-resolution investigation of the
vulnerability of neural networks. In addition, it does not modify the major
neural network computing framework of PyTorch. Hence, it allows parallel
processing on GPUs naturally and exhibits fast fault simulation according to
our experiments.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 06:46:54 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Huang",
"Haitong",
""
],
[
"Liu",
"Cheng",
""
],
[
"Xue",
"Xinghua",
""
],
[
"Wang",
"Ying",
""
],
[
"Li",
"Huawei",
""
],
[
"Li",
"Xiaowei",
""
]
] |
new_dataset
| 0.998613 |
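A common primitive behind fault-injection tools like MRFI is flipping bits in tensor values without touching the model definition. As a hedged illustration (MRFI's actual configuration-file mechanism and PyTorch integration are not shown; this is just a bit-flip core of our own, written for numpy float32 arrays):

```python
import numpy as np

def flip_bits(x, rate, rng=None):
    """Flip one random bit in a fraction `rate` of the float32 entries of x."""
    rng = rng or np.random.default_rng()
    out = x.astype(np.float32).copy()
    bits = out.view(np.uint32)                      # reinterpret bytes, don't convert
    hit = rng.random(out.shape) < rate              # which entries receive a fault
    positions = rng.integers(0, 32, size=out.shape).astype(np.uint32)
    bits[hit] ^= (np.uint32(1) << positions[hit])   # XOR a single random bit
    return out

activations = np.ones((4, 4), dtype=np.float32)
faulty = flip_bits(activations, rate=0.25, rng=np.random.default_rng(0))
```

Applying such a perturbation to weights or activations at configurable layers and rates, then measuring accuracy loss, is the essence of the multi-resolution vulnerability analysis the abstract describes.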
2306.11762
|
Dongoo Lee Ph.D
|
Seunghan Park, Dongoo Lee, Yeonju Choi, SungTae Moon
|
MultiEarth 2023 Deforestation Challenge -- Team FOREVER
|
CVPR 2023, MultiEarth 2023, Deforestation Estimation Challenge
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Accurately estimating deforestation from satellite imagery is an important
problem, since this approach can analyse extensive areas without direct human
access. However, it is not a simple problem, because the extensive cloud cover
during the long rainy season makes it difficult to observe the ground surface
clearly. In this paper, we present a multi-view learning strategy to predict
deforestation status in the Amazon rainforest area with the latest deep neural
network models. A multi-modal dataset consisting of three types of satellite
imagery, Sentinel-1, Sentinel-2 and Landsat 8, is utilized to train models
and predict deforestation status. The MMSegmentation framework is selected to apply
comprehensive data augmentation and diverse networks. The proposed method
effectively and accurately predicts the deforestation status of new queries.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 09:10:06 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Park",
"Seunghan",
""
],
[
"Lee",
"Dongoo",
""
],
[
"Choi",
"Yeonju",
""
],
[
"Moon",
"SungTae",
""
]
] |
new_dataset
| 0.999414 |
2306.11878
|
Mary Doerfler
|
Mary C. Doerfler, Katalin Sch\"affer, Margaret M. Coad
|
Hybrid Soft-Rigid Continuum Robot Inspired by Spider Monkey Tail
|
6 pages, 8 figures. Published in 2023 IEEE International Conference
on Soft Robotics (RoboSoft)
| null |
10.1109/RoboSoft55895.2023.10122106
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spider monkeys (genus Ateles) have a prehensile tail that functions as a
flexible, multipurpose fifth limb, enabling them to navigate complex terrains,
grasp objects of various sizes, and swing between supports. Inspired by the
spider monkey tail, we present a life-size hybrid soft-rigid continuum robot
designed to imitate the function of the tail. Our planar design has a rigid
skeleton with soft elements at its joints that achieve decreasing stiffness
along its length. Five manually-operated wires along this central structure
control the motion of the tail to form a variety of possible shapes in the 2D
plane. Our design also includes a skin-like silicone and fabric tail pad that
moves with the tail's tip and assists with object grasping. We quantify the
force required to pull various objects out of the robot's grasp and demonstrate
that this force increases with the object diameter and the number of edges in a
polygonal object. We demonstrate the robot's ability to grasp, move, and
release objects of various diameters, as well as to navigate around obstacles,
and to retrieve an object after passing under a low passageway.
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 20:34:17 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Doerfler",
"Mary C.",
""
],
[
"Schäffer",
"Katalin",
""
],
[
"Coad",
"Margaret M.",
""
]
] |
new_dataset
| 0.962552 |
2306.11891
|
Pieter-Jan Toye
|
Pieter-Jan Toye
|
Vital Videos: A dataset of videos with PPG and blood pressure ground
truths
|
13 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We collected a large dataset consisting of nearly 900 unique participants.
For every participant we recorded two 30 second uncompressed videos,
synchronized PPG waveforms and a single blood pressure measurement. Gender, age
and skin color were also registered for every participant. The dataset includes
roughly equal numbers of males and females, as well as participants of all
ages. While the skin color distribution could have been more balanced, the
dataset contains individuals from every skin color. The data was collected in a
diverse set of locations to ensure a wide variety of backgrounds and lighting
conditions. In an effort to assist in the research and development of remote
vital sign measurement, we are now opening up access to this dataset.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 17:47:29 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Toye",
"Pieter-Jan",
""
]
] |
new_dataset
| 0.999869 |
2306.11920
|
Marcos V. Conde
|
Marcos V. Conde, Javier Vazquez-Corral, Michael S. Brown, Radu Timofte
|
NILUT: Conditional Neural Implicit 3D Lookup Tables for Image
Enhancement
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
3D lookup tables (3D LUTs) are a key component for image enhancement. Modern
image signal processors (ISPs) have dedicated support for these as part of the
camera rendering pipeline. Cameras typically provide multiple options for
picture styles, where each style is usually obtained by applying a unique
handcrafted 3D LUT. Current approaches for learning and applying 3D LUTs are
notably fast, yet not so memory-efficient, as storing multiple 3D LUTs is
required. For this reason and other implementation limitations, their use on
mobile devices is less popular. In this work, we propose a Neural Implicit LUT
(NILUT), an implicitly defined continuous 3D color transformation parameterized
by a neural network. We show that NILUTs are capable of accurately emulating
real 3D LUTs. Moreover, a NILUT can be extended to incorporate multiple styles
into a single network with the ability to blend styles implicitly. Our novel
approach is memory-efficient, controllable and can complement previous methods,
including learned ISPs. Code, models and dataset available at:
https://github.com/mv-lab/nilut
|
[
{
"version": "v1",
"created": "Tue, 20 Jun 2023 22:06:39 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Conde",
"Marcos V.",
""
],
[
"Vazquez-Corral",
"Javier",
""
],
[
"Brown",
"Michael S.",
""
],
[
"Timofte",
"Radu",
""
]
] |
new_dataset
| 0.998849 |
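A neural implicit LUT, as described in the NILUT abstract above, is just a small coordinate network mapping an input RGB value to an output RGB value, so it can stand in for a discrete 3D LUT of any resolution. A hedged PyTorch sketch of the basic single-style, unconditioned form (layer sizes and the gamma-curve "target LUT" are illustrative choices of ours):

```python
import torch
import torch.nn as nn

class TinyNILUT(nn.Module):
    """Implicit color transform: f(r, g, b) -> (r', g', b'), values in [0, 1]."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, rgb):          # rgb: (N, 3) tensor of input colors
        return self.net(rgb)

# Fit the network to mimic a reference transform by sampling color pairs:
model = TinyNILUT()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(1024, 3)              # random input colors
y = x.clamp(0, 1) ** 0.8             # placeholder "target LUT": a gamma curve
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

The conditional, multi-style variant adds a style code to the network input, which is what lets a single network blend between picture styles.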
2306.11970
|
Xiangjun Tang
|
Xiangjun Tang, Linjun Wu, He Wang, Bo Hu, Xu Gong, Yuchen Liao,
Songnan Li, Qilong Kou, Xiaogang Jin
|
RSMT: Real-time Stylized Motion Transition for Characters
| null |
SIGGRAPH 2023 Conference Proceedings
|
10.1145/3588432.3591514
| null |
cs.CV cs.GR cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Styled online in-between motion generation has important application
scenarios in computer animation and games. Its core challenge lies in the need
to satisfy four critical requirements simultaneously: generation speed, motion
quality, style diversity, and synthesis controllability. While the first two
challenges demand a delicate balance between simple fast models and learning
capacity for generation quality, the latter two are rarely investigated
together in existing methods, which largely focus on either control without
style or uncontrolled stylized motions. To this end, we propose a Real-time
Stylized Motion Transition method (RSMT) to achieve all aforementioned goals.
Our method consists of two critical, independent components: a general motion
manifold model and a style motion sampler. The former acts as a high-quality
motion source and the latter synthesizes styled motions on the fly under
control signals. Since both components can be trained separately on different
datasets, our method provides great flexibility, requires less data, and
generalizes well when no/few samples are available for unseen styles. Through
exhaustive evaluation, our method proves to be fast, high-quality, versatile,
and controllable. The code and data are available at
https://github.com/yuyujunjun/RSMT-Realtime-Stylized-Motion-Transition
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 01:50:04 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Tang",
"Xiangjun",
""
],
[
"Wu",
"Linjun",
""
],
[
"Wang",
"He",
""
],
[
"Hu",
"Bo",
""
],
[
"Gong",
"Xu",
""
],
[
"Liao",
"Yuchen",
""
],
[
"Li",
"Songnan",
""
],
[
"Kou",
"Qilong",
""
],
[
"Jin",
"Xiaogang",
""
]
] |
new_dataset
| 0.991697 |
2306.12014
|
Nigel Fernandez
|
Sneha Singhania, Nigel Fernandez, Shrisha Rao
|
3HAN: A Deep Neural Network for Fake News Detection
|
Published as a conference paper at ICONIP 2017
| null |
10.1007/978-3-319-70096-0_59
| null |
cs.LG cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid spread of fake news is a serious problem calling for AI solutions.
We employ a deep learning based automated detector through a three level
hierarchical attention network (3HAN) for fast, accurate detection of fake
news. 3HAN has three levels, one each for words, sentences, and the headline,
and constructs a news vector: an effective representation of an input news
article, by processing the article in a hierarchical bottom-up manner. The
headline is known to be a distinguishing feature of fake news, and furthermore,
relatively few words and sentences in an article are more important than the
rest. 3HAN gives a differential importance to parts of an article, on account
of its three layers of attention. In experiments on a large real-world
dataset, 3HAN proves effective, with an accuracy of 96.77%. Unlike
some other deep learning models, 3HAN provides an understandable output through
the attention weights given to different parts of an article, which can be
visualized through a heatmap to enable further manual fact checking.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 04:34:27 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Singhania",
"Sneha",
""
],
[
"Fernandez",
"Nigel",
""
],
[
"Rao",
"Shrisha",
""
]
] |
new_dataset
| 0.995352 |
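Each level of 3HAN, per the abstract above, is an attention layer that pools a sequence (words, then sentences, then sentences plus headline) into a single vector, and the attention weights are what the heatmap visualizes. A minimal numpy sketch of that attention-pooling building block (the context vector here is random for illustration; in the model it is a trained parameter):

```python
import numpy as np

def attention_pool(h, context):
    """Pool a sequence of hidden states into one vector.

    h: (T, d) hidden states; context: (d,) learned query vector.
    Returns the weighted sum and the attention weights (the ones 3HAN
    visualizes as a heatmap to support manual fact checking).
    """
    scores = h @ context                       # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the sequence
    return weights @ h, weights

rng = np.random.default_rng(0)
word_states = rng.normal(size=(12, 8))         # 12 words, 8-dim encoder states
sentence_vec, word_attn = attention_pool(word_states, rng.normal(size=8))
# Stacking this pooling three times (words -> sentences -> article + headline)
# yields the "news vector" used for classification.
```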
2306.12050
|
Daichi Haraguchi
|
Naoya Yasukochi, Hideaki Hayashi, Daichi Haraguchi, Seiichi Uchida
|
Analyzing Font Style Usage and Contextual Factors in Real Images
|
Accepted at ICDAR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are various font styles in the world. Different styles give different
impressions and readability. This paper analyzes the relationship between font
styles and contextual factors that might affect font style selection with
large-scale datasets. For example, we will analyze the relationship between
font style and its surrounding object (such as "bus") by using about 800,000
words in the Open Images dataset. We also use a book cover dataset to analyze
the relationship between font styles and book genres. Moreover, the meaning of
the word is treated as another contextual factor. For these numerical analyses,
we utilize our own font-style feature extraction model and word2vec. As a
result of co-occurrence-based relationship analysis, we found several instances
of specific font styles being used for specific contextual factors.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 06:43:22 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Yasukochi",
"Naoya",
""
],
[
"Hayashi",
"Hideaki",
""
],
[
"Haraguchi",
"Daichi",
""
],
[
"Uchida",
"Seiichi",
""
]
] |
new_dataset
| 0.955101 |
2306.12063
|
Tomas Palenik
|
Tomas Palenik (1), Viktor Szitkey (1) ((1) Slovak University of
Technology, Slovakia)
|
High Throughput Open-Source Implementation of Wi-Fi 6 and WiMAX LDPC
Encoder and Decoder
|
18 pages, 2 figures, Sources available on GitHub:
https://github.com/talenik/YALDPC Published in:
https://www.paneurouni.com/veda/vedecke-casopisy/aplikacie-informacnych-technologii
|
Information Technology Applications (ITA), Vol. 11, 15-32 (2022)
| null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the design and C99 implementation of a free and
open-source Low-Density Parity-Check (LDPC) codes encoder and decoder focused
primarily on the Quasi-Cyclic LDPC (QCLDPC) codes utilized in the IEEE
802.11ax-2021 (Wi-Fi 6) and IEEE 802.16-2017 (WiMAX) standards. The encoder is
designed in two variants: the first one universal, the other a minimal memory
usage design. The decoder provides single- and multi-threaded implementations
of the layered single-scan min-sum LDPC decoding algorithm for both
floating-point and fixed-point arithmetic. Both encoder and decoder are directly
callable from MATLAB using the provided MEX wrappers but are designed to be
simply used in any C project. A comparison of throughput and error performance
with the recent commercial closed-source MEX implementation of an LDPC encoder
and decoder introduced in MATLAB R2021b Communications Toolbox is provided.
Source code portability to alternative non-x86 architectures is facilitated by
using only the standard C99 constructs, GNU tools, and POSIX libraries. The
implementation maintains low memory requirements, enabling its deployment on
constrained architectures in the context of the Internet of Things. All source codes
are freely available on GitHub under a permissive BSD license.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 07:17:50 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Palenik",
"Tomas",
""
],
[
"Szitkey",
"Viktor",
""
]
] |
new_dataset
| 0.991449 |
2306.12073
|
Yufei Guo
|
Yufei Guo and Yuanpei Chen
|
NeuroCLIP: Neuromorphic Data Understanding by CLIP and SNN
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the neuromorphic vision sensor has received more and more interest.
However, neuromorphic data consist of asynchronous event spikes, for which it
is neither natural nor easy to construct a benchmark, thus limiting
deep-learning-based understanding of "unseen" objects in neuromorphic data.
Zero-shot and few-shot learning via Contrastive Vision-Language Pre-training
(CLIP) have shown inspirational performance in 2D frame image recognition. To
handle "unseen" recognition for the neuromorphic data, in this paper, we
propose NeuroCLIP, which transfers the CLIP's 2D pre-trained knowledge to event
spikes. To improve the few-shot performance, we also provide an inter-timestep
adapter based on a spiking neural network. Our code is open-sourced at
https://github.com/yfguo91/NeuroCLIP.git.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 07:46:27 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Guo",
"Yufei",
""
],
[
"Chen",
"Yuanpei",
""
]
] |
new_dataset
| 0.998778 |
2306.12085
|
Hanyu Mao
|
Chanyue Wu, Dong Wang, Hanyu Mao, Ying Li
|
HSR-Diff: Hyperspectral Image Super-Resolution via Conditional Diffusion
Models
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the proven significance of hyperspectral images (HSIs) in performing
various computer vision tasks, their potential is adversely affected by the
low-resolution (LR) property in the spatial domain, resulting from multiple
physical factors. Inspired by recent advancements in deep generative models, we
propose an HSI Super-resolution (SR) approach with Conditional Diffusion Models
(HSR-Diff) that merges a high-resolution (HR) multispectral image (MSI) with
the corresponding LR-HSI. HSR-Diff generates an HR-HSI via repeated refinement,
in which the HR-HSI is initialized with pure Gaussian noise and iteratively
refined. At each iteration, the noise is removed with a Conditional Denoising
Transformer (CDFormer) that is trained on denoising at different noise levels,
conditioned on the hierarchical feature maps of HR-MSI and LR-HSI. In addition,
a progressive learning strategy is employed to exploit the global information
of full-resolution images. Systematic experiments have been conducted on four
public datasets, demonstrating that HSR-Diff outperforms state-of-the-art
methods.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 08:04:30 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Wu",
"Chanyue",
""
],
[
"Wang",
"Dong",
""
],
[
"Mao",
"Hanyu",
""
],
[
"Li",
"Ying",
""
]
] |
new_dataset
| 0.986128 |
2306.12144
|
Ying Li
|
Ying Li, Xiaodong Lee, Botao Peng, Themis Palpanas, Jingan Xue
|
PrivSketch: A Private Sketch-based Frequency Estimation Protocol for
Data Streams
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Local differential privacy (LDP) has recently become a popular
privacy-preserving data collection technique protecting users' privacy. The
main problem of data stream collection under LDP is the poor utility due to
multi-item collection from a very large domain. This paper proposes PrivSketch,
a high-utility frequency estimation protocol taking advantage of sketches,
suitable for private data stream collection. Combining the proposed background
information and a decode-first collection-side workflow, PrivSketch improves
the utility by reducing the errors introduced by the sketching algorithm and
the privacy budget utilization when collecting multiple items. We analytically
prove the superior accuracy and privacy characteristics of PrivSketch, and also
evaluate them experimentally. Our evaluation, with several diverse synthetic
and real datasets, demonstrates that PrivSketch is 1-3 orders of magnitude
better than the competitors in terms of utility in both frequency estimation
and frequent item estimation, while being up to ~100x faster.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 09:42:13 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Li",
"Ying",
""
],
[
"Lee",
"Xiaodong",
""
],
[
"Peng",
"Botao",
""
],
[
"Palpanas",
"Themis",
""
],
[
"Xue",
"Jingan",
""
]
] |
new_dataset
| 0.99688 |
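PrivSketch builds on local differential privacy for frequency estimation over large domains. For orientation, a sketch of the classic generalized-randomized-response (GRR) estimator that sketch-based protocols like this one improve upon (this is the standard baseline, explicitly not PrivSketch's decode-first, sketch-based protocol):

```python
import math, random
from collections import Counter

def grr_report(item, domain, epsilon):
    """Each user perturbs their item locally: keep it w.p. p, else lie uniformly."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return item
    return random.choice([d for d in domain if d != item])

def grr_estimate(reports, domain, epsilon):
    """Unbiased frequency estimates recovered from the perturbed reports."""
    k, n = len(domain), len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)
    counts = Counter(reports)
    return {d: (counts[d] - n * q) / (p - q) for d in domain}

domain = ["a", "b", "c", "d"]
true_items = ["a"] * 600 + ["b"] * 300 + ["c"] * 100
reports = [grr_report(x, domain, epsilon=2.0) for x in true_items]
print(grr_estimate(reports, domain, epsilon=2.0))
```

The utility of this baseline degrades as the domain grows, which is precisely the regime where compressing the domain through a sketch, as PrivSketch does, pays off.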
2306.12161
|
Mouna Rabhi
|
Mouna Rabhi and Roberto Di Pietro
|
Adversarial Attacks Neutralization via Data Set Randomization
| null | null | null | null |
cs.LG cs.AI cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Adversarial attacks on deep-learning models pose a serious threat to their
reliability and security. Existing defense mechanisms are narrow, addressing a
specific type of attack or remaining vulnerable to sophisticated attacks. We
propose a new defense mechanism that, while being focused on image-based
classifiers, is general with respect to the cited category. It is rooted in
hyperspace projection. In particular, our solution provides a pseudo-random
projection of the original dataset into a new dataset. The proposed defense
mechanism creates a set of diverse projected datasets, where each projected
dataset is used to train a specific classifier, resulting in different trained
classifiers with different decision boundaries. During testing, it randomly
selects a classifier to test the input. Our approach does not sacrifice
accuracy over legitimate input. Other than detailing and providing a thorough
characterization of our defense mechanism, we also provide a proof of concept
of using four optimization-based adversarial attacks (PGD, FGSM, IGSM, and
C&W) and a generative adversarial attack, testing them on the MNIST dataset.
Our experimental results show that our solution increases the robustness of
deep learning models against adversarial attacks and significantly reduces the
attack success rate by at least 89% for optimization attacks and 78% for
generative attacks. We also analyze the relationship between the number of used
hyperspaces and the efficacy of the defense mechanism. As expected, the two are
positively correlated, offering an easy-to-tune parameter to enforce the
desired level of security. The generality and scalability of our solution and
adaptability to different attack scenarios, combined with the excellent
achieved results, other than providing a robust defense against adversarial
attacks on deep learning networks, also lay the groundwork for future research
in the field.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 10:17:55 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Rabhi",
"Mouna",
""
],
[
"Di Pietro",
"Roberto",
""
]
] |
new_dataset
| 0.997588 |
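The defense in the abstract above trains one classifier per pseudo-random projection of the data and answers each query with a randomly selected classifier. A small sklearn/numpy sketch of that train-many, sample-one structure (the projection type, classifier choice, and counts are illustrative assumptions of ours, not the paper's exact configuration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_projected_ensemble(X, y, n_models=5, seed=0):
    """One (projection, classifier) pair per pseudo-random hyperspace."""
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_models):
        P = rng.normal(size=(X.shape[1], X.shape[1]))   # random projection matrix
        clf = LogisticRegression(max_iter=1000).fit(X @ P, y)
        ensemble.append((P, clf))
    return ensemble

def predict(ensemble, x, rng):
    """At test time, a randomly selected classifier answers the query,
    so an attacker cannot target one fixed decision boundary."""
    P, clf = ensemble[rng.integers(len(ensemble))]
    return clf.predict(x.reshape(1, -1) @ P)[0]

X = np.random.default_rng(1).normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)                           # toy labels
models = train_projected_ensemble(X, y)
label = predict(models, X[0], np.random.default_rng(2))
```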
2306.12240
|
Arnaud Valence
|
Arnaud Valence
|
ICAR, a categorical framework to connect vulnerability, threat and asset
managements
|
26 pages, 6 figures
| null | null | null |
cs.CR math.CT
|
http://creativecommons.org/licenses/by/4.0/
|
We present ICAR, a mathematical framework derived from category theory for
representing NIST's and MITRE's cybersecurity ontologies. Designed for
cybersecurity, ICAR is a category whose objects are cybersecurity knowledge
(weakness, vulnerability, impacted product, attack technique, etc.) and whose
morphisms are relations between this knowledge, that make sense for
cybersecurity. Within this rigorous and unified framework, we obtain a
knowledge graph capable of identifying the attack and weakness structures of an
IS, at the interface between description logics, database theory and
cybersecurity. We then define ten cybersecurity queries to help understand the
risks incurred by an IS and organise its defence.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 12:59:29 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Valence",
"Arnaud",
""
]
] |
new_dataset
| 0.998664 |
2306.12251
|
Jianheng Tang
|
Jianheng Tang, Fengrui Hua, Ziqi Gao, Peilin Zhao, Jia Li
|
GADBench: Revisiting and Benchmarking Supervised Graph Anomaly Detection
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With a long history of traditional Graph Anomaly Detection (GAD) algorithms
and recently popular Graph Neural Networks (GNNs), it is still not clear (1)
how they perform under a standard comprehensive setting, (2) whether GNNs
outperform traditional algorithms such as tree ensembles, and (3) their
efficiency on large-scale graphs. In response, we present GADBench -- a
comprehensive benchmark for supervised anomalous node detection on static
graphs. GADBench provides a thorough comparison across 23 distinct models on
ten real-world GAD datasets ranging from thousands to millions of nodes
(~6M). Our main finding is that tree ensembles with simple neighborhood
aggregation outperform all other baselines, including the latest GNNs tailored
for the GAD task. By making GADBench available as an open-source tool, we offer
pivotal insights into the current advancements of GAD and establish a solid
foundation for future research. Our code is available at
https://github.com/squareRoot3/GADBench.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 13:16:10 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Tang",
"Jianheng",
""
],
[
"Hua",
"Fengrui",
""
],
[
"Gao",
"Ziqi",
""
],
[
"Zhao",
"Peilin",
""
],
[
"Li",
"Jia",
""
]
] |
new_dataset
| 0.985149 |
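GADBench's headline finding above is that tree ensembles on simply neighborhood-aggregated features beat GNN baselines for supervised graph anomaly detection. A hedged sketch of that recipe (mean aggregation and a RandomForest are one concrete instantiation chosen by us; GADBench's exact feature construction may differ):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def neighbor_augment(X, adj):
    """Concatenate each node's features with the mean of its neighbors' features.

    X: (n, d) node feature matrix; adj: list of neighbor-index lists per node.
    """
    agg = np.stack([X[nbrs].mean(axis=0) if nbrs else np.zeros(X.shape[1])
                    for nbrs in adj])
    return np.hstack([X, agg])

# Toy 4-node graph: features, adjacency, and anomaly labels.
X = np.array([[1.0, 0.0], [0.9, 0.1], [1.1, 0.0], [5.0, 9.0]])
adj = [[1, 2], [0, 2], [0, 1], [0]]
y = np.array([0, 0, 0, 1])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(neighbor_augment(X, adj), y)
scores = clf.predict_proba(neighbor_augment(X, adj))[:, 1]  # anomaly scores
```

Because the aggregation is a single pre-processing pass, this pipeline scales to million-node graphs far more cheaply than end-to-end GNN training, which is the efficiency point the benchmark stresses.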
2306.12255
|
Carolyn Anderson
|
Jingmiao Zhao and Carolyn Jane Anderson
|
Solving and Generating NPR Sunday Puzzles with Large Language Models
|
To appear in the Proceedings of the 14th International Conference on
Computational Creativity (ICCC)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We explore the ability of large language models to solve and generate puzzles
from the NPR Sunday Puzzle game show using PUZZLEQA, a dataset comprising 15
years of on-air puzzles. We evaluate four large language models using PUZZLEQA,
in both multiple choice and free response formats, and explore two prompt
engineering techniques to improve free response performance: chain-of-thought
reasoning and prompt summarization. We find that state-of-the-art large
language models can solve many PUZZLEQA puzzles: the best model, GPT-3.5,
achieves 50.2% loose accuracy. However, in our few-shot puzzle generation
experiment, we find no evidence that models can generate puzzles: GPT-3.5
generates puzzles with answers that do not conform to the generated rules.
Puzzle generation remains a challenging task for future work.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 13:23:48 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Zhao",
"Jingmiao",
""
],
[
"Anderson",
"Carolyn Jane",
""
]
] |
new_dataset
| 0.999816 |
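The abstract reports "loose accuracy" without defining it; a common convention, assumed here, is a normalized substring match between model output and gold answer. The sketch below and its (prediction, gold) pairs are hypothetical, not drawn from PUZZLEQA.

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^a-z0-9 ]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def loose_match(prediction: str, gold: str) -> bool:
    """Count a prediction as correct if the gold answer appears in it."""
    return normalize(gold) in normalize(prediction)

# Hypothetical (prediction, gold) pairs standing in for model outputs.
pairs = [
    ("The answer is CARROT.", "carrot"),
    ("I believe it's 'stone age'", "Stone Age"),
    ("Maybe TULIP?", "turnip"),
]
loose_acc = sum(loose_match(p, g) for p, g in pairs) / len(pairs)
print(f"loose accuracy: {loose_acc:.1%}")   # 66.7% on this toy set
```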
2306.12331
|
Aniket Sharma
|
Aniket Sharma and Nandan K Sinha
|
Decentralized Aerial Transportation and Manipulation of a Cable-Slung
Payload With Swarm of Agents
| null | null | null | null |
cs.MA cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
With the advent of Unmanned Aerial Vehicles (UAVs) and Micro Aerial Vehicles
(MAVs) in commercial sectors, their application to transporting and
manipulating payloads has attracted much research interest. A swarm of agents
cooperatively transporting and manipulating a payload can overcome the
physical limitations of a single agent, adding redundancy and tolerance against
failures. In this paper, the dynamics of a swarm connected to a payload via
flexible cables are modeled, and a decentralized control is designed using
Artificial Potential Fields (APF). The swarm is able to transport the payload
through an unknown environment to a goal position while avoiding obstacles,
using only the local information received from the onboard sensors. The key
contributions are: (a) the cables are modelled more accurately using a
lumped-mass model instead of geometric constraints, (b) a decentralized swarm
control is designed using a potential field approach to ensure hover stability
of the system without payload state information, (c) the manipulation of the
payload elevation and azimuth angles is controlled by the APF, and (d) the
trajectory of the payload for transportation is governed by potential fields
generated by the goal point and obstacles. The efficacy of the proposed method
is evaluated through numerical simulations under the influence of external
disturbances and failure of agents.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 15:20:53 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Sharma",
"Aniket",
""
],
[
"Sinha",
"Nandan K",
""
]
] |
new_dataset
| 0.962349 |
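The potential-field machinery behind contributions (b)-(d) can be illustrated with a textbook artificial potential field: a quadratic attractive well at the goal plus an inverse repulsive barrier around each obstacle. The gains, cutoff radius, and single point-agent below are illustrative assumptions; the paper's lumped-mass cable dynamics are omitted entirely.

```python
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 0.8, 1.5    # gains and repulsion cutoff (illustrative)

def apf_step(pos, goal, obstacles, lr=0.05):
    """One gradient-descent step on the combined potential field."""
    grad = K_ATT * (pos - goal)                  # grad of 0.5*k*||pos - goal||^2
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if rho < RHO0:                           # repulsive term active near obstacle
            grad += K_REP * (1 / RHO0 - 1 / rho) / rho**3 * diff
    return pos - lr * grad

goal = np.array([5.0, 5.0])
obstacles = [np.array([2.0, 3.0])]
pos = np.array([0.0, 0.0])
for _ in range(500):
    pos = apf_step(pos, goal, obstacles)
print("final position:", pos.round(3))           # should end up near the goal
```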
2306.12402
|
Ken Pfeuffer
|
Ken Pfeuffer, Jan Obernolte, Felix Dietz, Ville M\"akel\"a, Ludwig
Sidenmark, Pavel Manakhov, Minna Pakanen, Florian Alt
|
PalmGazer: Unimanual Eye-hand Menus in Augmented Reality
|
12 pages, 11 figures
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can we design user interfaces for augmented reality (AR) so that we can
interact as simply, flexibly and expressively as we do with smartphones in one
hand? To explore this question, we propose PalmGazer, an interaction concept
integrating eye-hand interaction to establish a single-handedly operable menu
system. In particular, PalmGazer is designed to support quick and spontaneous
digital commands -- such as playing a music track, checking notifications or
browsing visual media -- through our devised three-way interaction model: hand
opening to summon the menu UI, eye-hand input for selection of items, and a
dragging gesture for navigation. A key aspect is that the menu remains always
accessible and movable with the user, as it supports meaningful hand- and
head-based reference frames. We demonstrate the concept in practice through a
prototypical personal UI with application probes, and describe technique
designs specifically tailored to the application UI. A qualitative evaluation
highlights the system's design benefits and drawbacks, e.g., that common 2D
scroll and selection tasks are simple to operate, but higher degrees of
freedom may be better reserved for two hands. Our work contributes interaction
techniques and design insights to expand AR's uni-manual capabilities.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 17:39:50 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Pfeuffer",
"Ken",
""
],
[
"Obernolte",
"Jan",
""
],
[
"Dietz",
"Felix",
""
],
[
"Mäkelä",
"Ville",
""
],
[
"Sidenmark",
"Ludwig",
""
],
[
"Manakhov",
"Pavel",
""
],
[
"Pakanen",
"Minna",
""
],
[
"Alt",
"Florian",
""
]
] |
new_dataset
| 0.991931 |
2306.12410
|
Iona Thomas
|
Iona Thomas (1), Vincent Aranega (1), St\'ephane Ducasse (1),
Guillermo Polito (1), Pablo Tesone (1) ((1) University of Lille, France /
Inria, France / CNRS, France / Centrale Lille, France / CRIStAL, France)
|
A VM-Agnostic and Backwards Compatible Protected Modifier for
Dynamically-Typed Languages
| null |
The Art, Science, and Engineering of Programming, 2024, Vol. 8,
Issue 1, Article 2
|
10.22152/programming-journal.org/2024/8/2
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In object-oriented languages, method visibility modifiers hold a key role in
separating internal methods from the public API. Protected visibility modifiers
offer a way to hide methods from external objects while authorizing internal
use and overriding in subclasses. While present in main statically-typed
languages, visibility modifiers are not as common or mature in
dynamically-typed languages. In this article, we present ProtDyn, a
self-send-based visibility model calculated at compile time for
dynamically-typed languages, relying on name mangling and syntactic
differentiation of self vs. non-self sends. We present #Pharo, a ProtDyn
implementation of this model that is backwards compatible with existing
programs, and its port to Python. Using these implementations we study the
performance impact of ProtDyn on the method lookup, in the presence of global
lookup caches and polymorphic inline caches. We show that our name mangling and
double method registration technique has a very low impact on performance and
keeps the benefits from the global lookup cache and polymorphic inline cache.
We also show that the memory overhead on a real use case is between 2% and 13%
in the worst-case scenario. Protected modifier semantics enforces
encapsulation, like private, while still allowing developers to extend the
class in subclasses.
ProtDyn offers a VM-agnostic and backwards-compatible design to introduce
protected semantics in dynamically-typed languages.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 17:48:17 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Thomas",
"Iona",
""
],
[
"Aranega",
"Vincent",
""
],
[
"Ducasse",
"Stéphane",
""
],
[
"Polito",
"Guillermo",
""
],
[
"Tesone",
"Pablo",
""
]
] |
new_dataset
| 0.999007 |
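ProtDyn's core trick (name mangling plus a syntactic split between self-sends and non-self sends) can be approximated in plain Python to convey the idea. The metaclass below is a hypothetical illustration, not the paper's #Pharo implementation or its Python port, and it simplifies away the double-registration detail: protected methods are stored only under a mangled name, so only self-sends that use that name can reach them.

```python
def protected(method):
    """Mark a method as protected; the metaclass mangles its name."""
    method.__protected__ = True
    return method

class ProtMeta(type):
    def __new__(mcls, name, bases, ns):
        for attr, val in list(ns.items()):
            if getattr(val, "__protected__", False):
                ns[f"_prot_{attr}"] = ns.pop(attr)  # keep only the mangled name
        return super().__new__(mcls, name, bases, ns)

class Account(metaclass=ProtMeta):
    def __init__(self, balance):
        self._balance = balance

    @protected
    def audit(self):
        return f"balance={self._balance}"

    def report(self):
        return self._prot_audit()       # a self-send reaches the mangled name

acc = Account(10)
print(acc.report())                     # OK: internal self-send
print(hasattr(acc, "audit"))            # False: hidden from external senders
```

Subclasses can still override the protected method under its mangled name, which is the "hide externally, allow overriding" semantics the abstract describes.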
2306.12411
|
Wendlasida Ouedraogo
|
Wendlasida Ouedraogo (1), Gabriel Scherer (2), Lutz Strassburger (2)
((1) Siemens Mobility, France / Inria, France, (2) Inria, France / \'Ecole
Polytechnique, France)
|
Coqlex: Generating Formally Verified Lexers
| null |
The Art, Science, and Engineering of Programming, 2024, Vol. 8,
Issue 1, Article 3
|
10.22152/programming-journal.org/2024/8/3
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A compiler consists of a sequence of phases going from lexical analysis to
code generation. Ideally, the formal verification of a compiler should include
the formal verification of each component of the tool-chain. An example is the
CompCert project, a formally verified C compiler, that comes with associated
tools and proofs that allow to formally verify most of those components.
However, some components, in particular the lexer, remain unverified. In fact,
the lexer of CompCert is generated using OCamllex, a lex-like OCaml lexer
generator that produces lexers from a set of regular expressions with
associated semantic actions. Even though there exist various approaches, like
CakeML or Verbatim++, to write verified lexers, they all have only limited
practical applicability. In order to contribute to the end-to-end verification
of compilers, we implemented a generator of verified lexers whose usage is
similar to OCamllex. Our software, called Coqlex, reads a lexer specification
and generates a lexer equipped with a Coq proof of its correctness. It provides
a formally verified implementation of most features of standard, unverified
lexer generators.
The conclusions of our work are two-fold. Firstly, verified lexers benefit
from following a user experience similar to lex/flex or OCamllex, with a
domain-specific syntax to write lexers comfortably. This introduces a small gap
between the written artifact and the verified lexer, but our design minimizes
this gap and makes it practical to review the generated lexer. The user remains
able to prove further properties of their lexer. Secondly, it is possible to
combine simplicity and decent performance. Our implementation approach that
uses Brzozowski derivatives is noticeably simpler than the previous work in
Verbatim++ that tries to generate a deterministic finite automaton (DFA) ahead
of time, and it is also noticeably faster thanks to careful design choices.
We wrote several example lexers that suggest that the convenience of using
Coqlex is close to that of standard verified generators, in particular,
OCamllex. We used Coqlex in an industrial project to implement a verified lexer
of Ada. This lexer is part of a tool to optimize safety-critical programs, some
of which are very large. This experience confirmed that Coqlex is usable in
practice, and in particular that its performance is good enough. Finally, we
performed detailed performance comparisons between Coqlex, OCamllex, and
Verbatim++. Verbatim++ is the state-of-the-art tool for verified lexers in Coq,
and the performance of its lexer was carefully optimized in previous work by
Egolf et al. (2022). Our results suggest that Coqlex is two orders of
magnitude slower than OCamllex, but two orders of magnitude faster than
Verbatim++. Verified compilers and other language-processing tools are becoming
important tools for safety-critical or security-critical applications. They
provide trust and replace more costly approaches to certification, such as
manually reading the generated code. Verified lexers are a missing piece in
several Coq-based verified compilers today. Coqlex comes with safety
guarantees, and thus shows that it is possible to build formally verified
front-ends.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 17:48:54 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Ouedraogo",
"Wendlasida",
""
],
[
"Scherer",
"Gabriel",
""
],
[
"Strassburger",
"Lutz",
""
]
] |
new_dataset
| 0.999243 |
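The Brzozowski-derivative approach credited for Coqlex's simplicity is compact enough to sketch directly. The Python 3.10+ toy below is of course unverified (Coqlex itself is Coq): a regex is an algebraic datatype, `nullable` asks whether it accepts the empty string, and `deriv` peels one character off the front; matching is just repeated derivation.

```python
from dataclasses import dataclass

class Re: pass

@dataclass(frozen=True)
class Empty(Re): pass                      # matches nothing

@dataclass(frozen=True)
class Eps(Re): pass                        # matches the empty string

@dataclass(frozen=True)
class Chr(Re): c: str

@dataclass(frozen=True)
class Alt(Re): l: Re; r: Re

@dataclass(frozen=True)
class Seq(Re): l: Re; r: Re

@dataclass(frozen=True)
class Star(Re): r: Re

def nullable(e: Re) -> bool:
    match e:
        case Empty() | Chr(_): return False
        case Eps() | Star(_): return True
        case Alt(l, r): return nullable(l) or nullable(r)
        case Seq(l, r): return nullable(l) and nullable(r)

def deriv(e: Re, ch: str) -> Re:
    match e:
        case Empty() | Eps(): return Empty()
        case Chr(c): return Eps() if c == ch else Empty()
        case Alt(l, r): return Alt(deriv(l, ch), deriv(r, ch))
        case Seq(l, r):
            left = Seq(deriv(l, ch), r)
            return Alt(left, deriv(r, ch)) if nullable(l) else left
        case Star(r): return Seq(deriv(r, ch), Star(r))

def matches(e: Re, s: str) -> bool:
    for ch in s:
        e = deriv(e, ch)
    return nullable(e)

ident = Seq(Chr("a"), Star(Alt(Chr("a"), Chr("b"))))   # a(a|b)*
print(matches(ident, "abba"), matches(ident, "ba"))    # True False
```

The appeal, as the abstract notes, is that no DFA has to be built ahead of time; the derivative is computed on demand, one input character at a time.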
2306.12424
|
Aleksandar Shtedritski
|
Siobhan Mackenzie Hall, Fernanda Gon\c{c}alves Abrantes, Hanwen Zhu,
Grace Sodunke, Aleksandar Shtedritski, Hannah Rose Kirk
|
VisoGender: A dataset for benchmarking gender bias in image-text pronoun
resolution
|
Data and code available at https://github.com/oxai/visogender
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce VisoGender, a novel dataset for benchmarking gender bias in
vision-language models. We focus on occupation-related gender biases, inspired
by Winograd and Winogender schemas, where each image is associated with a
caption containing a pronoun relationship of subjects and objects in the scene.
VisoGender is balanced by gender representation in professional roles,
supporting bias evaluation in two ways: i) resolution bias, where we evaluate
the difference between gender resolution accuracies for men and women and ii)
retrieval bias, where we compare ratios of male and female professionals
retrieved for a gender-neutral search query. We benchmark several
state-of-the-art vision-language models and find that they lack the reasoning
abilities to correctly resolve gender in complex scenes. While the direction
and magnitude of gender bias depend on the task and the model being evaluated,
captioning models generally are more accurate and less biased than CLIP-like
models. Dataset and code are available at https://github.com/oxai/visogender
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 17:59:51 GMT"
}
] | 2023-06-22T00:00:00 |
[
[
"Hall",
"Siobhan Mackenzie",
""
],
[
"Abrantes",
"Fernanda Gonçalves",
""
],
[
"Zhu",
"Hanwen",
""
],
[
"Sodunke",
"Grace",
""
],
[
"Shtedritski",
"Aleksandar",
""
],
[
"Kirk",
"Hannah Rose",
""
]
] |
new_dataset
| 0.999526 |
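Both bias measures named in the abstract reduce to simple arithmetic over per-group outcomes; a sketch follows. The toy resolution records and retrieval list are invented for illustration; VisoGender supplies the real annotations.

```python
# Resolution bias: accuracy gap between images of men and women.
resolution = [("man", True), ("man", True), ("man", False),
              ("woman", True), ("woman", False), ("woman", False)]

def accuracy(group: str) -> float:
    hits = [ok for g, ok in resolution if g == group]
    return sum(hits) / len(hits)

resolution_bias = accuracy("man") - accuracy("woman")
print(f"resolution bias (men - women): {resolution_bias:+.2f}")

# Retrieval bias: gender ratio among top-k results for a neutral query.
retrieved = ["man", "man", "woman", "man", "woman"]   # e.g., "a photo of a doctor"
ratio = retrieved.count("man") / retrieved.count("woman")
print(f"retrieval ratio (men : women): {ratio:.2f} : 1")
```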
2010.03902
|
Mahesh Pal Dr.
|
Mahesh Pal, Akshay, B. Charan Teja
|
IRX-1D: A Simple Deep Learning Architecture for Remote Sensing
Classifications
|
Want to improve this manuscript as it was not accepted by the journal in its
present form
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a simple deep learning architecture combining elements of
Inception, ResNet and Xception networks. Four new datasets were used for
classification with both small and large training samples. Results in terms of
classification accuracy suggest improved performance by the proposed
architecture in comparison to a Bayesian-optimised 2D-CNN with small training
samples. A comparison of results using small training samples with the Indiana
Pines hyperspectral dataset suggests comparable or better performance by the
proposed architecture than nine reported works using different deep learning
architectures. In spite of achieving high classification accuracy with limited
training samples, a comparison of the classified images suggests that
different land cover classes are assigned to the same area when compared with
the classified image produced by the model trained using large training
samples, for all datasets.
|
[
{
"version": "v1",
"created": "Thu, 8 Oct 2020 11:07:02 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 05:51:05 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Pal",
"Mahesh",
""
],
[
"Akshay",
"",
""
],
[
"Teja",
"B. Charan",
""
]
] |
new_dataset
| 0.997322 |
2105.11213
|
Avi Mohan
|
Avinash Mohan, Arpan Chattopadhyay, Shivam Vinayak Vatsa, and Anurag
Kumar
|
A Low-Delay MAC for IoT Applications: Decentralized Optimal Scheduling
of Queues without Explicit State Information Sharing
|
28 pages, 19 figures
| null | null | null |
cs.NI cs.LG math.PR
|
http://creativecommons.org/licenses/by/4.0/
|
We consider a system of several collocated nodes sharing a time slotted
wireless channel, and seek a MAC (medium access control) that (i) provides low
mean delay, (ii) has distributed control (i.e., there is no central scheduler),
and (iii) does not require explicit exchange of state information or control
signals. The design of such MAC protocols must keep in mind the need for
contention access at light traffic, and scheduled access in heavy traffic,
leading to the long-standing interest in hybrid, adaptive MACs.
Working in the discrete time setting, for the distributed MAC design, we
consider a practical information structure where each node has local
information and some common information obtained from overhearing. In this
setting, "ZMAC" is an existing protocol that is hybrid and adaptive. We
approach the problem via two steps: (1) We show that it is sufficient for the
policy to be "greedy" and "exhaustive". Limiting the policy to this class
reduces the problem to obtaining a queue switching policy at queue emptiness
instants. (2) Formulating the delay optimal scheduling as a POMDP (partially
observed Markov decision process), we show that the optimal switching rule is
Stochastic Largest Queue (SLQ).
Using this theory as the basis, we then develop a practical distributed
scheduler, QZMAC, which is also tunable. We implement QZMAC on standard
off-the-shelf TelosB motes and also use simulations to compare QZMAC with the
full-knowledge centralized scheduler, and with ZMAC. We use our implementation
to study the impact of false detection while overhearing the common
information, and the efficiency of QZMAC. Our simulation results show that the
mean delay with QZMAC is close to that of the full-knowledge centralized
scheduler.
|
[
{
"version": "v1",
"created": "Mon, 24 May 2021 11:44:08 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 14:03:48 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Mohan",
"Avinash",
""
],
[
"Chattopadhyay",
"Arpan",
""
],
[
"Vatsa",
"Shivam Vinayak",
""
],
[
"Kumar",
"Anurag",
""
]
] |
new_dataset
| 0.986083 |
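The Stochastic Largest Queue (SLQ) rule (at a queue-emptiness instant, switch to the queue most likely to be the longest) can be illustrated with a toy discrete-time simulation. Under Bernoulli arrivals and exhaustive service, a queue's expected backlog since it was last seen empty is just its arrival rate times the elapsed slots, which stands in here for the paper's POMDP belief; all parameters and this simplification are ours, not the paper's analysis.

```python
import random
random.seed(0)

N, T = 3, 20000
lam = [0.15, 0.25, 0.20]           # Bernoulli arrival rates per queue
q = [0] * N                        # true queue lengths
since_empty = [0] * N              # slots since each queue was last seen empty
current, backlog_acc = 0, 0

for t in range(T):
    for i in range(N):             # Bernoulli arrivals
        if random.random() < lam[i]:
            q[i] += 1
    if q[current] > 0:
        q[current] -= 1            # exhaustive service: one departure per slot
    else:
        since_empty[current] = 0   # observed empty at the switching instant
        # SLQ: switch to the queue with the largest expected backlog.
        belief = [lam[i] * since_empty[i] for i in range(N)]
        current = max(range(N), key=lambda i: belief[i])
    for i in range(N):
        since_empty[i] += 1
    backlog_acc += sum(q)

print("mean total backlog:", backlog_acc / T)  # delay follows via Little's law
```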
2201.03521
|
Karolina Seweryn
|
Daniel Ziembicki, Anna Wr\'oblewska, Karolina Seweryn
|
Polish Natural Language Inference and Factivity -- an Expert-based
Dataset and Benchmarks
| null | null |
10.1017/S1351324923000220
| null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite recent breakthroughs in Machine Learning for Natural Language
Processing, the Natural Language Inference (NLI) problems still constitute a
challenge. To this purpose we contribute a new dataset that focuses exclusively
on the factivity phenomenon; however, our task remains the same as other NLI
tasks, i.e. prediction of entailment, contradiction or neutral (ECN). The
dataset contains entirely natural language utterances in Polish and gathers
2,432 verb-complement pairs and 309 unique verbs. The dataset is based on the
National Corpus of Polish (NKJP) and is a representative sample with regard to
the frequency of main verbs and other linguistic features (e.g. occurrence of
internal negation). We found that transformer BERT-based models working on
sentences obtained relatively good results ($\approx89\%$ F1 score). Even
though better results were achieved using linguistic features ($\approx91\%$ F1
score), this model requires more human labour (humans in the loop) because
features were prepared manually by expert linguists. BERT-based models
consuming only the input sentences show that they capture most of the
complexity of NLI/factivity. Complex cases in the phenomenon - e.g. cases with
entitlement (E) and non-factive verbs - remain an open issue for further
research.
|
[
{
"version": "v1",
"created": "Mon, 10 Jan 2022 18:32:55 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Ziembicki",
"Daniel",
""
],
[
"Wróblewska",
"Anna",
""
],
[
"Seweryn",
"Karolina",
""
]
] |
new_dataset
| 0.999806 |
2203.10247
|
Qing Cai
|
Qing Cai, Yiming Qian, Jinxing Li, Jun Lv, Yee-Hong Yang, Feng Wu,
David Zhang
|
HIPA: Hierarchical Patch Transformer for Single Image Super Resolution
| null | null |
10.1109/TIP.2023.3279977
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer-based architectures start to emerge in single image super
resolution (SISR) and have achieved promising performance. Most existing Vision
Transformers divide images into the same number of patches with a fixed size,
which may not be optimal for restoring patches with different levels of texture
richness. This paper presents HIPA, a novel Transformer architecture that
progressively recovers the high resolution image using a hierarchical patch
partition. Specifically, we build a cascaded model that processes an input
image in multiple stages, where we start with tokens with small patch sizes and
gradually merge to the full resolution. Such a hierarchical patch mechanism not
only explicitly enables feature aggregation at multiple resolutions but also
adaptively learns patch-aware features for different image regions, e.g., using
a smaller patch for areas with fine details and a larger patch for textureless
regions. Meanwhile, a new attention-based position encoding scheme for the
Transformer is proposed to let the network focus on the tokens that deserve
more attention by assigning different weights to different tokens; to the best
of our knowledge, this is the first such scheme. Furthermore, we also propose a
new multi-receptive-field attention module to enlarge the convolutional
receptive field from different branches. The experimental results on several public
datasets demonstrate the superior performance of the proposed HIPA over
previous methods quantitatively and qualitatively.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 05:09:34 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2023 01:39:31 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Cai",
"Qing",
""
],
[
"Qian",
"Yiming",
""
],
[
"Li",
"Jinxing",
""
],
[
"Lv",
"Jun",
""
],
[
"Yang",
"Yee-Hong",
""
],
[
"Wu",
"Feng",
""
],
[
"Zhang",
"David",
""
]
] |
new_dataset
| 0.999759 |
2205.02574
|
S\'ebastien Labb\'e
|
S\'ebastien Labb\'e, Jana Lep\v{s}ov\'a
|
A Fibonacci analogue of the two's complement numeration system
|
v3: 21 pages, 3 figures, 3 tables. v4: 24 pages, added a new section
characterizing the Fibonacci's complement numeration system as an increasing
bijection. v5: changes after review
| null | null | null |
cs.FL math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
Using the classic two's complement notation of signed integers, the
fundamental arithmetic operations of addition, subtraction, and multiplication
are identical to those for unsigned binary numbers. We introduce a
Fibonacci-equivalent of the two's complement notation and we show that addition
in this numeration system can be performed by a deterministic finite-state
transducer. The result is based on the Berstel adder, which performs addition
of the usual Fibonacci representations of nonnegative integers and for which we
provide a new constructive proof. Moreover, we characterize the
Fibonacci-equivalent of the two's complement notation as an increasing
bijection between $\mathbb{Z}$ and a particular language.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 11:16:15 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 17:22:07 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Jan 2023 13:49:11 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Mar 2023 13:57:59 GMT"
},
{
"version": "v5",
"created": "Mon, 19 Jun 2023 16:07:23 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Labbé",
"Sébastien",
""
],
[
"Lepšová",
"Jana",
""
]
] |
new_dataset
| 0.998021 |
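The unsigned half of the story, the Fibonacci (Zeckendorf) representations that the Berstel adder operates on, is easy to compute greedily; the signed complement construction itself is the paper's contribution and is not reproduced here. A minimal sketch:

```python
def zeckendorf(n: int) -> str:
    """Greedy Zeckendorf representation of n >= 0 (no two adjacent 1s)."""
    if n == 0:
        return "0"
    fibs = [1, 2]                  # the distinct Fibonacci values 1, 2, 3, 5, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs):
        if f <= n:
            bits.append("1")
            n -= f
        else:
            bits.append("0")
    return "".join(bits).lstrip("0")

for k in range(1, 8):
    print(k, zeckendorf(k))
# 1 1 | 2 10 | 3 100 | 4 101 | 5 1000 | 6 1001 | 7 1010
```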
2207.01054
|
Kristian Miok
|
Kristian Miok, Encarnacion Hidalgo-Tenorio, Petya Osenova,
Miguel-Angel Benitez-Castro and Marko Robnik-Sikonja
|
Multi-aspect Multilingual and Cross-lingual Parliamentary Speech
Analysis
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parliamentary and legislative debate transcripts provide informative insight
into elected politicians' opinions, positions, and policy preferences. They are
interesting for political and social sciences as well as linguistics and
natural language processing (NLP) research. While existing research studied
individual parliaments, we apply advanced NLP methods to a joint and
comparative analysis of six national parliaments (Bulgarian, Czech, French,
Slovene, Spanish, and United Kingdom) between 2017 and 2020. We analyze
emotions and sentiment in the transcripts from the ParlaMint dataset collection
and assess if the age, gender, and political orientation of speakers can be
detected from their speeches. The results show some commonalities and many
surprising differences among the analyzed countries.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 14:31:32 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 13:32:02 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Miok",
"Kristian",
""
],
[
"Hidalgo-Tenorio",
"Encarnacion",
""
],
[
"Osenova",
"Petya",
""
],
[
"Benitez-Castro",
"Miguel-Angel",
""
],
[
"Robnik-Sikonja",
"Marko",
""
]
] |
new_dataset
| 0.999073 |
2208.01582
|
Junru Gu
|
Junru Gu, Chenxu Hu, Tianyuan Zhang, Xuanyao Chen, Yilun Wang, Yue
Wang, Hang Zhao
|
ViP3D: End-to-end Visual Trajectory Prediction via 3D Agent Queries
|
CVPR 2023
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perception and prediction are two separate modules in the existing autonomous
driving systems. They interact with each other via hand-picked features such as
agent bounding boxes and trajectories. Due to this separation, prediction, as a
downstream module, only receives limited information from the perception
module. To make matters worse, errors from the perception modules can propagate
and accumulate, adversely affecting the prediction results. In this work, we
propose ViP3D, a query-based visual trajectory prediction pipeline that
exploits rich information from raw videos to directly predict future
trajectories of agents in a scene. ViP3D employs sparse agent queries to
detect, track, and predict throughout the pipeline, making it the first fully
differentiable vision-based trajectory prediction approach. Instead of using
historical feature maps and trajectories, useful information from previous
timestamps is encoded in agent queries, which makes ViP3D a concise streaming
prediction method. Furthermore, extensive experimental results on the nuScenes
dataset show the strong vision-based prediction performance of ViP3D over
traditional pipelines and previous end-to-end models.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 16:38:28 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2022 17:05:36 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Jun 2023 11:50:41 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Gu",
"Junru",
""
],
[
"Hu",
"Chenxu",
""
],
[
"Zhang",
"Tianyuan",
""
],
[
"Chen",
"Xuanyao",
""
],
[
"Wang",
"Yilun",
""
],
[
"Wang",
"Yue",
""
],
[
"Zhao",
"Hang",
""
]
] |
new_dataset
| 0.974044 |
2209.01992
|
Qian Chen
|
Qian Chen, Xingjian Dong, Guowei Tu, Dong Wang, Baoxuan Zhao and Zhike
Peng
|
TFN: An Interpretable Neural Network with Time-Frequency Transform
Embedded for Intelligent Fault Diagnosis
|
20 pages, 15 figures, 5 tables
| null | null | null |
cs.AI cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNNs) are widely used in fault diagnosis of
mechanical systems due to their powerful feature extraction and classification
capabilities. However, a CNN is a typical black-box model, and the mechanism
of its decision-making is not clear, which limits its application in fault
diagnosis scenarios that require high reliability. To tackle this issue, we
propose a novel interpretable neural network termed the Time-Frequency Network
(TFN), where a physically meaningful time-frequency transform (TFT) method is
embedded into the traditional convolutional layer as an adaptive preprocessing
layer. This preprocessing layer, named the time-frequency convolutional
(TFconv) layer, is constrained by a well-designed kernel function to extract
fault-related time-frequency information. It not only improves the diagnostic
performance but also reveals the logical foundation of the CNN prediction in
the frequency domain. Different TFT methods correspond to different kernel
functions of the TFconv layer. In this study, four typical TFT methods are
considered to formulate the TFNs and their effectiveness and interpretability
are proved through three mechanical fault diagnosis experiments. Experimental
results also show that the proposed TFconv layer can be easily generalized to
other CNNs with different depths. The code of TFN is available on
https://github.com/ChenQian0618/TFN.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 14:48:52 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 08:55:08 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Chen",
"Qian",
""
],
[
"Dong",
"Xingjian",
""
],
[
"Tu",
"Guowei",
""
],
[
"Wang",
"Dong",
""
],
[
"Zhao",
"Baoxuan",
""
],
[
"Peng",
"Zhike",
""
]
] |
new_dataset
| 0.996478 |
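The TFconv idea (convolution kernels constrained by a physically meaningful time-frequency kernel function) can be sketched by building a filterbank from a parametric kernel and convolving a signal with it. The Morlet-style kernel and the fixed, non-learned center frequencies below are illustrative assumptions; in TFN these parameters are trained inside a CNN.

```python
import numpy as np

def morlet_kernel(freq, length=64, sigma=8.0):
    """Real Morlet-style kernel: a cosine under a Gaussian envelope.
    freq is in cycles per sample."""
    t = np.arange(length) - length // 2
    return np.exp(-0.5 * (t / sigma) ** 2) * np.cos(2 * np.pi * freq * t)

# Toy signal with 50 Hz and 120 Hz components, sampled at 1 kHz.
fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# "TFconv layer": one output channel per kernel center frequency.
for f in (20, 50, 120, 300):
    r = np.convolve(x, morlet_kernel(f / fs), mode="same")
    print(f"{f:>4} Hz channel energy: {np.mean(r ** 2):.4f}")
# The 50 Hz and 120 Hz channels should carry the most energy.
```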
2209.07857
|
Hao Cheng
|
Hao Cheng, Mengmeng Liu, Lin Chen, Hellward Broszio, Monika Sester,
Michael Ying Yang
|
GATraj: A Graph- and Attention-based Multi-Agent Trajectory Prediction
Model
| null | null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Trajectory prediction has been a long-standing problem in intelligent systems
like autonomous driving and robot navigation. Models trained on large-scale
benchmarks have made significant progress in improving prediction accuracy.
However, the importance of efficiency for real-time applications has been less
emphasized. This paper proposes an attention-based graph model, named GATraj,
which achieves a good balance of prediction accuracy and inference speed. We
use attention mechanisms to model the spatial-temporal dynamics of agents, such
as pedestrians or vehicles, and a graph convolutional network to model their
interactions. Additionally, a Laplacian mixture decoder is implemented to
mitigate mode collapse and generate diverse multimodal predictions for each
agent. GATraj achieves state-of-the-art prediction performance at a much higher
speed when tested on the ETH/UCY datasets for pedestrian trajectories, and good
performance at about 100 Hz inference speed when tested on the nuScenes dataset
for autonomous driving. We conduct extensive experiments to analyze the
probability estimation of the Laplacian mixture decoder and compare it with a
Gaussian mixture decoder for predicting different multimodalities. Furthermore,
comprehensive ablation studies demonstrate the effectiveness of each proposed
module in GATraj. The code is released at
https://github.com/mengmengliu1998/GATraj.
|
[
{
"version": "v1",
"created": "Fri, 16 Sep 2022 11:29:19 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 13:05:02 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Cheng",
"Hao",
""
],
[
"Liu",
"Mengmeng",
""
],
[
"Chen",
"Lin",
""
],
[
"Broszio",
"Hellward",
""
],
[
"Sester",
"Monika",
""
],
[
"Yang",
"Michael Ying",
""
]
] |
new_dataset
| 0.998173 |
2210.01597
|
Eleonora Giunchiglia
|
Eleonora Giunchiglia and Mihaela C\u{a}t\u{a}lina Stoian and Salman
Khan and Fabio Cuzzolin and Thomas Lukasiewicz
|
ROAD-R: The Autonomous Driving Dataset with Logical Requirements
| null | null |
10.1007/s10994-023-06322-z
| null |
cs.LG cs.AI cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural networks have proven to be very powerful at computer vision tasks.
However, they often exhibit unexpected behaviours, violating known requirements
expressing background knowledge. This calls for models (i) able to learn from
the requirements, and (ii) guaranteed to be compliant with the requirements
themselves. Unfortunately, the development of such models is hampered by the
lack of datasets equipped with formally specified requirements. In this paper,
we introduce the ROad event Awareness Dataset with logical Requirements
(ROAD-R), the first publicly available dataset for autonomous driving with
requirements expressed as logical constraints. Given ROAD-R, we show that
current state-of-the-art models often violate its logical constraints, and that
it is possible to exploit them to create models that (i) have a better
performance, and (ii) are guaranteed to be compliant with the requirements
themselves.
|
[
{
"version": "v1",
"created": "Tue, 4 Oct 2022 13:22:19 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2022 11:42:42 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Giunchiglia",
"Eleonora",
""
],
[
"Stoian",
"Mihaela Cătălina",
""
],
[
"Khan",
"Salman",
""
],
[
"Cuzzolin",
"Fabio",
""
],
[
"Lukasiewicz",
"Thomas",
""
]
] |
new_dataset
| 0.99906 |
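Checking predictions against logical requirements of the kind ROAD-R ships with comes down to evaluating implications over thresholded label scores. The two requirements, label names, and threshold below are invented for illustration; the dataset provides the real constraint set.

```python
# Each requirement: (name, antecedent labels, consequent labels), read as
# "if all antecedents are predicted, at least one consequent must be too".
REQUIREMENTS = [
    ("red-light-implies-stopped", {"traffic-light-red"}, {"stopped", "braking"}),
    ("pedestrian-implies-located", {"pedestrian"}, {"on-road", "on-pavement"}),
]

def violations(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    predicted = {lbl for lbl, s in scores.items() if s >= threshold}
    broken = []
    for name, antecedent, consequent in REQUIREMENTS:
        if antecedent <= predicted and not (consequent & predicted):
            broken.append(name)
    return broken

# A hypothetical per-frame prediction that breaks the first requirement.
frame = {"traffic-light-red": 0.91, "stopped": 0.2, "braking": 0.1,
         "pedestrian": 0.7, "on-pavement": 0.8}
print(violations(frame))   # ['red-light-implies-stopped']
```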
2211.10420
|
Quentin Berthet
|
Marin Ballu, Quentin Berthet
|
Mirror Sinkhorn: Fast Online Optimization on Transport Polytopes
|
ICML 2023
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Optimal transport is an important tool in machine learning, allowing to
capture geometric properties of the data through a linear program on transport
polytopes. We present a single-loop optimization algorithm for minimizing
general convex objectives on these domains, utilizing the principles of
Sinkhorn matrix scaling and mirror descent. The proposed algorithm is robust to
noise, and can be used in an online setting. We provide theoretical guarantees
for convex objectives and experimental results showcasing its effectiveness on
both synthetic and real-world data.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 18:35:14 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jan 2023 16:07:57 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2023 13:01:54 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Ballu",
"Marin",
""
],
[
"Berthet",
"Quentin",
""
]
] |
new_dataset
| 0.950861 |
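One plausible reading of the abstract's single-loop scheme, sketched below for the linear objective <C, X>, alternates an exponentiated-gradient (mirror) step with one pass of Sinkhorn row/column rescaling to stay near the transport polytope. The step size, the single rescaling pass per iteration, and the iteration count are illustrative guesses, not the paper's analyzed algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
r = np.full(n, 1 / n)                    # row marginals
c = np.full(n, 1 / n)                    # column marginals
C = rng.random((n, n))                   # cost matrix; objective is <C, X>

X = np.outer(r, c)                       # feasible starting point
eta = 0.5
for _ in range(500):
    X = X * np.exp(-eta * C)             # mirror (exponentiated-gradient) step
    X = X * (r / X.sum(axis=1))[:, None]      # Sinkhorn row rescaling
    X = X * (c / X.sum(axis=0))[None, :]      # Sinkhorn column rescaling

print("objective <C, X>:", float((C * X).sum()))
print("row-marginal error:", float(np.abs(X.sum(axis=1) - r).max()))
```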
2211.15864
|
Gabriel Poesia
|
Gabriel Poesia and Noah D. Goodman
|
Peano: Learning Formal Mathematical Reasoning
| null | null |
10.1098/rsta.2022.0044
| null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
General mathematical reasoning is computationally undecidable, but humans
routinely solve new problems. Moreover, discoveries developed over centuries
are taught to subsequent generations quickly. What structure enables this, and
how might that inform automated mathematical reasoning? We posit that central
to both puzzles is the structure of procedural abstractions underlying
mathematics. We explore this idea in a case study on 5 sections of beginning
algebra on the Khan Academy platform. To define a computational foundation, we
introduce Peano, a theorem-proving environment where the set of valid actions
at any point is finite. We use Peano to formalize introductory algebra problems
and axioms, obtaining well-defined search problems. We observe existing
reinforcement learning methods for symbolic reasoning to be insufficient to
solve harder problems. Adding the ability to induce reusable abstractions
("tactics") from its own solutions allows an agent to make steady progress,
solving all problems. Furthermore, these abstractions induce an order to the
problems, seen at random during training. The recovered order has significant
agreement with the expert-designed Khan Academy curriculum, and
second-generation agents trained on the recovered curriculum learn
significantly faster. These results illustrate the synergistic role of
abstractions and curricula in the cultural transmission of mathematics.
|
[
{
"version": "v1",
"created": "Tue, 29 Nov 2022 01:42:26 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Poesia",
"Gabriel",
""
],
[
"Goodman",
"Noah D.",
""
]
] |
new_dataset
| 0.996027 |
2212.01558
|
Minghua Liu
|
Minghua Liu, Yinhao Zhu, Hong Cai, Shizhong Han, Zhan Ling, Fatih
Porikli, Hao Su
|
PartSLIP: Low-Shot Part Segmentation for 3D Point Clouds via Pretrained
Image-Language Models
|
CVPR 2023, project page: https://colin97.github.io/PartSLIP_page/
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Generalizable 3D part segmentation is important but challenging in vision and
robotics. Training deep models via conventional supervised methods requires
large-scale 3D datasets with fine-grained part annotations, which are costly to
collect. This paper explores an alternative way for low-shot part segmentation
of 3D point clouds by leveraging a pretrained image-language model, GLIP, which
achieves superior performance on open-vocabulary 2D detection. We transfer the
rich knowledge from 2D to 3D through GLIP-based part detection on point cloud
rendering and a novel 2D-to-3D label lifting algorithm. We also utilize
multi-view 3D priors and few-shot prompt tuning to boost performance
significantly. Extensive evaluation on PartNet and PartNet-Mobility datasets
shows that our method enables excellent zero-shot 3D part segmentation. Our
few-shot version not only outperforms existing few-shot approaches by a large
margin but also achieves highly competitive results compared to the fully
supervised counterpart. Furthermore, we demonstrate that our method can be
directly applied to iPhone-scanned point clouds without significant domain
gaps.
|
[
{
"version": "v1",
"created": "Sat, 3 Dec 2022 06:59:01 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 07:27:14 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Liu",
"Minghua",
""
],
[
"Zhu",
"Yinhao",
""
],
[
"Cai",
"Hong",
""
],
[
"Han",
"Shizhong",
""
],
[
"Ling",
"Zhan",
""
],
[
"Porikli",
"Fatih",
""
],
[
"Su",
"Hao",
""
]
] |
new_dataset
| 0.99942 |
2212.03291
|
Pavamana Katti
|
Pavamana K J, Chandramani Kishore Singh
|
Caching Contents with Varying Popularity using Restless Bandits
|
There were mistakes while submitting the updated version. I have
submitted a fresh new submission: arXiv:2304.12227
| null | null | null |
cs.NI cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile networks are experiencing a prodigious increase in data volume and user
density, which exerts a great burden on mobile core networks and backhaul
links. An efficient technique to lessen this problem is caching, i.e., bringing
the data closer to the users by making use of the caches of edge network
nodes, such as fixed or mobile access points and even user devices. The
performance of caching depends on the contents that are cached. In this paper,
we examine the problem of content caching at the wireless edge (i.e., base
stations) to minimize the discounted cost incurred over an infinite horizon. We
formulate this problem as a restless bandit problem, which is hard to solve. We
begin by showing that an optimal policy is of threshold type. Using these
structural results, we prove the indexability of the problem, and use the
Whittle index policy to minimize the discounted cost.
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 16:24:45 GMT"
},
{
"version": "v2",
"created": "Sat, 31 Dec 2022 06:42:42 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2023 08:51:37 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"J",
"Pavamana K",
""
],
[
"Singh",
"Chandramani Kishore",
""
]
] |
new_dataset
| 0.999075 |
2212.03588
|
Yifan Liu
|
Ziqin Zhou, Bowen Zhang, Yinjie Lei, Lingqiao Liu, Yifan Liu
|
ZegCLIP: Towards Adapting CLIP for Zero-shot Semantic Segmentation
|
12 pages, 8 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recently, CLIP has been applied to pixel-level zero-shot learning tasks via a
two-stage scheme. The general idea is to first generate class-agnostic region
proposals and then feed the cropped proposal regions to CLIP to utilize its
image-level zero-shot classification capability. While effective, such a scheme
requires two image encoders, one for proposal generation and one for CLIP,
leading to a complicated pipeline and high computational cost. In this work, we
pursue a simpler and more efficient one-stage solution that directly extends CLIP's
zero-shot prediction capability from image to pixel level. Our investigation
starts with a straightforward extension as our baseline that generates semantic
masks by comparing the similarity between text and patch embeddings extracted
from CLIP. However, such a paradigm could heavily overfit the seen classes and
fail to generalize to unseen classes. To handle this issue, we propose three
simple but effective designs and show that they can significantly retain
the inherent zero-shot capacity of CLIP and improve pixel-level generalization
ability. Incorporating those modifications leads to an efficient zero-shot
semantic segmentation system called ZegCLIP. Through extensive experiments on
three public benchmarks, ZegCLIP demonstrates superior performance,
outperforming the state-of-the-art methods by a large margin under both
"inductive" and "transductive" zero-shot settings. In addition, compared with
the two-stage method, our one-stage ZegCLIP achieves about a 5x speedup
during inference. We release the code at
https://github.com/ZiqinZhou66/ZegCLIP.git.
|
[
{
"version": "v1",
"created": "Wed, 7 Dec 2022 12:05:00 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 15:38:18 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2023 17:50:05 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Zhou",
"Ziqin",
""
],
[
"Zhang",
"Bowen",
""
],
[
"Lei",
"Yinjie",
""
],
[
"Liu",
"Lingqiao",
""
],
[
"Liu",
"Yifan",
""
]
] |
new_dataset
| 0.996816 |
2212.04420
|
Hongwei Yi
|
Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo
Bolkart, Dacheng Tao, Michael J. Black
|
Generating Holistic 3D Human Motion from Speech
|
Project Webpage: https://talkshow.is.tue.mpg.de; CVPR2023
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
This work addresses the problem of generating 3D holistic body motions from
human speech. Given a speech recording, we synthesize sequences of 3D body
poses, hand gestures, and facial expressions that are realistic and diverse. To
achieve this, we first build a high-quality dataset of 3D holistic body meshes
with synchronous speech. We then define a novel speech-to-motion generation
framework in which the face, body, and hands are modeled separately. The
separated modeling stems from the fact that face articulation strongly
correlates with human speech, while body poses and hand gestures are less
correlated. Specifically, we employ an autoencoder for face motions, and a
compositional vector-quantized variational autoencoder (VQ-VAE) for the body
and hand motions. The compositional VQ-VAE is key to generating diverse
results. Additionally, we propose a cross-conditional autoregressive model that
generates body poses and hand gestures, leading to coherent and realistic
motions. Extensive experiments and user studies demonstrate that our proposed
approach achieves state-of-the-art performance both qualitatively and
quantitatively. Our novel dataset and code will be released for research
purposes at https://talkshow.is.tue.mpg.de.
|
[
{
"version": "v1",
"created": "Thu, 8 Dec 2022 17:25:19 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2023 22:23:13 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Yi",
"Hongwei",
""
],
[
"Liang",
"Hualin",
""
],
[
"Liu",
"Yifei",
""
],
[
"Cao",
"Qiong",
""
],
[
"Wen",
"Yandong",
""
],
[
"Bolkart",
"Timo",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Black",
"Michael J.",
""
]
] |
new_dataset
| 0.99396 |
2212.10455
|
Nikita Moghe
|
Nikita Moghe, Evgeniia Razumovskaia, Liane Guillou, Ivan Vuli\'c, Anna
Korhonen, Alexandra Birch
|
MULTI3NLU++: A Multilingual, Multi-Intent, Multi-Domain Dataset for
Natural Language Understanding in Task-Oriented Dialogue
|
ACL 2023 (Findings) Camera Ready
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Task-oriented dialogue (TOD) systems have been widely deployed in many
industries as they deliver more efficient customer support. These systems are
typically constructed for a single domain or language and do not generalise
well beyond this. To support work on Natural Language Understanding (NLU) in
TOD across multiple languages and domains simultaneously, we constructed
MULTI3NLU++, a multilingual, multi-intent, multi-domain dataset. MULTI3NLU++
extends the English-only NLU++ dataset to include manual translations into a
range of high, medium, and low resource languages (Spanish, Marathi, Turkish
and Amharic), in two domains (BANKING and HOTELS). Because of its multi-intent
property, MULTI3NLU++ represents complex and natural user goals, and therefore
allows us to measure the realistic performance of TOD systems in a varied set
of the world's languages. We use MULTI3NLU++ to benchmark state-of-the-art
multilingual models for the NLU tasks of intent detection and slot labelling
for TOD systems in the multilingual setting. The results demonstrate the
challenging nature of the dataset, particularly in the low-resource language
setting, offering ample room for future experimentation in multi-domain
multilingual TOD setups.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 17:34:25 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 04:09:37 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Moghe",
"Nikita",
""
],
[
"Razumovskaia",
"Evgeniia",
""
],
[
"Guillou",
"Liane",
""
],
[
"Vulić",
"Ivan",
""
],
[
"Korhonen",
"Anna",
""
],
[
"Birch",
"Alexandra",
""
]
] |
new_dataset
| 0.999833 |
2301.03865
|
Daniel Gon\c{c}alves
|
Daniel Gon\c{c}alves, Vincent Limouzy, Pascal Ochem
|
Contact graphs of boxes with unidirectional contacts
|
23 pages, 11 figures
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is devoted to the study of particular geometrically defined
intersection classes of graphs. Those were previously studied by Magnant and
Martin, who proved that these graphs have arbitrary large chromatic number,
while being triangle-free. We give several structural properties of these
graphs, and we raise several questions.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 09:26:12 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 09:24:03 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Gonçalves",
"Daniel",
""
],
[
"Limouzy",
"Vincent",
""
],
[
"Ochem",
"Pascal",
""
]
] |
new_dataset
| 0.978049 |
2301.12477
|
N M Anoop Krishnan
|
Vaibhav Bihani, Sahil Manchanda, Srikanth Sastry, Sayan Ranu, N.M.
Anoop Krishnan
|
StriderNET: A Graph Reinforcement Learning Approach to Optimize Atomic
Structures on Rough Energy Landscapes
| null | null | null | null |
cs.LG cond-mat.dis-nn
|
http://creativecommons.org/licenses/by/4.0/
|
Optimization of atomic structures presents a challenging problem, due to
their highly rough and non-convex energy landscape, with wide applications in
the fields of drug design, materials discovery, and mechanics. Here, we present
a graph reinforcement learning approach, StriderNET, that learns a policy to
displace the atoms towards low energy configurations. We evaluate the
performance of StriderNET on three complex atomic systems, namely, binary
Lennard-Jones particles, calcium silicate hydrates gel, and disordered silicon.
We show that StriderNET outperforms all classical optimization algorithms and
enables the discovery of lower energy minima. In addition, StriderNET
exhibits a higher rate of reaching low-energy minima, as confirmed by the
average over multiple realizations. Finally, we show that StriderNET
generalizes inductively to unseen system sizes that are an order of magnitude
different from the training system.
|
[
{
"version": "v1",
"created": "Sun, 29 Jan 2023 16:06:16 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Bihani",
"Vaibhav",
""
],
[
"Manchanda",
"Sahil",
""
],
[
"Sastry",
"Srikanth",
""
],
[
"Ranu",
"Sayan",
""
],
[
"Krishnan",
"N. M. Anoop",
""
]
] |
new_dataset
| 0.96853 |
2302.02213
|
Shashank Agnihotri
|
Shashank Agnihotri and Steffen Jung and Margret Keuper
|
CosPGD: a unified white-box adversarial attack for pixel-wise prediction
tasks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
While neural networks allow highly accurate predictions in many tasks, their
lack of robustness towards even slight input perturbations hampers their
deployment in many real-world applications. Recent research on evaluating the
robustness of neural networks, such as the seminal projected gradient descent
(PGD) attack and subsequent works, has drawn significant attention, as these
attacks provide an effective insight into the quality of representations learned
by the network. However, these methods predominantly focus on image
classification tasks, while only a few approaches specifically address the
analysis of pixel-wise prediction tasks such as semantic segmentation, optical
flow, disparity estimation, and others. Thus, there is a lack of a unified
adversarial robustness benchmarking tool (algorithm) that is applicable to all
such pixel-wise prediction tasks. In this work, we close this
gap and propose CosPGD, a novel white-box adversarial attack that allows
optimizing dedicated attacks for any pixel-wise prediction task in a unified
setting. It leverages the cosine similarity between the distributions over the
predictions and ground truth (or target) to extend directly from classification
tasks to regression settings. We outperform the SotA on semantic segmentation
attacks in our experiments on PASCAL VOC2012 and CityScapes. Further, we set a
new benchmark for adversarial attacks on optical flow and image restoration,
demonstrating the ability to extend to any pixel-wise prediction task.
|
[
{
"version": "v1",
"created": "Sat, 4 Feb 2023 17:59:30 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 20:24:28 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Agnihotri",
"Shashank",
""
],
[
"Jung",
"Steffen",
""
],
[
"Keuper",
"Margret",
""
]
] |
new_dataset
| 0.996243 |
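A toy rendition of the cosine-weighted PGD idea: on a fixed per-pixel logistic "segmenter", each pixel's signed loss gradient is scaled by the cosine similarity between its predicted distribution and the one-hot label before the ascent step. The model, the exact placement of the cosine term, and all hyperparameters are our assumptions for illustration, not CosPGD's published formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed toy per-pixel "segmenter": logit = 4*x - 2 with binary labels.
H = W = 8
x = rng.random((H, W))
y = (x > 0.5).astype(float)              # ground truth of the clean input
alpha, eps, steps = 0.02, 0.1, 40        # step size, L_inf budget, iterations
x_adv = x.copy()

for _ in range(steps):
    p = sigmoid(4 * x_adv - 2)           # per-pixel prediction
    # Cosine similarity between [p, 1-p] and the one-hot label [y, 1-y].
    cos = (p * y + (1 - p) * (1 - y)) / np.sqrt(p**2 + (1 - p)**2)
    grad = 4 * (p - y)                   # d(per-pixel BCE)/dx, in closed form
    x_adv = x_adv + alpha * cos * np.sign(grad)   # cosine-scaled ascent step
    x_adv = np.clip(x_adv, x - eps, x + eps)      # project onto the L_inf ball
    x_adv = np.clip(x_adv, 0.0, 1.0)

acc = lambda z: float(((sigmoid(4 * z - 2) > 0.5).astype(float) == y).mean())
print(f"pixel accuracy: {acc(x):.2f} -> {acc(x_adv):.2f}")
```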
2302.04024
|
Hymalai Bello
|
Hymalai Bello, Luis Alfredo Sanchez Marin, Sungho Suh, Bo Zhou and
Paul Lukowicz
|
InMyFace: Inertial and Mechanomyography-Based Sensor Fusion for Wearable
Facial Activity Recognition
|
Submitted to Information Fusion, Elsevier
|
Information Fusion Elsevier 2023
|
10.1016/j.inffus.2023.101886
| null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recognizing facial activity is a well-understood (but non-trivial) computer
vision problem. However, reliable solutions require a camera with a good view
of the face, which is often unavailable in wearable settings. Furthermore, in
wearable applications, where systems accompany users throughout their daily
activities, a permanently running camera can be problematic for privacy (and
legal) reasons. This work presents an alternative solution based on the fusion
of wearable inertial sensors, planar pressure sensors, and acoustic
mechanomyography (muscle sounds). The sensors were placed unobtrusively in a
sports cap to monitor facial muscle activities related to facial expressions.
We present our integrated wearable sensor system, describe data fusion and
analysis methods, and evaluate the system in an experiment with thirteen
subjects from different cultural backgrounds (eight countries) and both sexes
(six women and seven men). In a one-model-per-user scheme and using a late
fusion approach, the system yielded an average F1 score of 85.00% for the case
where all sensing modalities are combined. With a cross-user validation and a
one-model-for-all-user scheme, an F1 score of 79.00% was obtained for thirteen
participants (six females and seven males). Moreover, in a hybrid fusion
(cross-user) approach and six classes, an average F1 score of 82.00% was
obtained for eight users. The results are competitive with state-of-the-art
non-camera-based solutions for a cross-user study. In addition, our unique set
of participants demonstrates the inclusiveness and generalizability of the
approach.
|
[
{
"version": "v1",
"created": "Wed, 8 Feb 2023 12:49:02 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Bello",
"Hymalai",
""
],
[
"Marin",
"Luis Alfredo Sanchez",
""
],
[
"Suh",
"Sungho",
""
],
[
"Zhou",
"Bo",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
new_dataset
| 0.980402 |
2302.06836
|
Isha Chaudhary
|
Isha Chaudhary, Alex Renda, Charith Mendis, Gagandeep Singh
|
COMET: X86 Cost Model Explanation Framework
| null | null | null | null |
cs.PF cs.AI cs.AR cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
ML-based program cost models have been shown to yield fairly accurate program
cost predictions. They can replace heavily-engineered analytical program cost
models in mainstream compilers, but their black-box nature discourages their
adoption. In this work, we propose the first framework, COMET, for generating
faithful, generalizable, and intuitive explanations for x86 cost models. COMET
brings interpretability specifically to ML-based cost models, such as Ithemal.
We generate and compare COMET's explanations for Ithemal against COMET's
explanations for a hand-crafted, accurate analytical model, uiCA. Our empirical
findings show an inverse correlation between the error in the cost prediction
of a cost model and the prominence of semantically-richer features in COMET's
explanations for the cost model for a given x86 basic block.
|
[
{
"version": "v1",
"created": "Tue, 14 Feb 2023 05:20:51 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 04:26:38 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Chaudhary",
"Isha",
""
],
[
"Renda",
"Alex",
""
],
[
"Mendis",
"Charith",
""
],
[
"Singh",
"Gagandeep",
""
]
] |
new_dataset
| 0.984285 |
2302.08631
|
Paul Mineiro
|
Mengxiao Zhang, Yuheng Zhang, Olga Vrousgou, Haipeng Luo, Paul Mineiro
|
Practical Contextual Bandits with Feedback Graphs
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While contextual bandits have a mature theory, effectively leveraging
different feedback patterns to enhance the pace of learning remains unclear.
Bandits with feedback graphs, which interpolate between the full-information
and bandit regimes, provide a promising framework to mitigate the statistical
complexity
of learning. In this paper, we propose and analyze an approach to contextual
bandits with feedback graphs based upon reduction to regression. The resulting
algorithms are computationally practical and achieve established minimax rates,
thereby reducing the statistical complexity in real-world applications.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 00:06:42 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2023 18:11:04 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Zhang",
"Mengxiao",
""
],
[
"Zhang",
"Yuheng",
""
],
[
"Vrousgou",
"Olga",
""
],
[
"Luo",
"Haipeng",
""
],
[
"Mineiro",
"Paul",
""
]
] |
new_dataset
| 0.997627 |
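A toy, non-contextual rendition of learning with feedback graphs: pulling an arm also reveals the losses of its graph neighbors, and a running mean per arm plays the role of the regression oracle. The epsilon-greedy exploration and the specific graph are our simplifications for illustration, not the paper's minimax-optimal reduction.

```python
import random
random.seed(1)

K, T, eps = 4, 5000, 0.05
mean_loss = [0.6, 0.4, 0.5, 0.3]                    # true Bernoulli loss means
graph = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}  # feedback neighbors
sums, counts = [0.0] * K, [0] * K

total = 0.0
for t in range(T):
    if random.random() < eps or min(counts) == 0:
        a = random.randrange(K)                     # explore
    else:
        a = min(range(K), key=lambda i: sums[i] / counts[i])  # exploit estimates
    total += mean_loss[a]
    for j in graph[a]:                              # graph feedback: observe
        loss_j = float(random.random() < mean_loss[j])        # neighbors' losses
        sums[j] += loss_j
        counts[j] += 1

print("average loss:", total / T, "(best arm mean:", min(mean_loss), ")")
```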
2302.13825
|
Marco Favorito
|
Marco Favorito
|
Forward LTLf Synthesis: DPLL At Work
| null | null | null | null |
cs.LO cs.AI cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a new AND-OR graph search framework for synthesis of
Linear Temporal Logic on finite traces (LTLf), which overcomes some limitations
of previous approaches. Within such framework, we devise a procedure inspired
by the Davis-Putnam-Logemann-Loveland (DPLL) algorithm to generate the next
available agent-environment moves in a truly depth-first fashion, possibly
avoiding exhaustive enumeration or costly compilations. We also propose a novel
equivalence check for search nodes based on syntactic equivalence of state
formulas. Since the resulting procedure is not guaranteed to terminate, we
identify a stopping condition to abort execution and restart the search with
state-equivalence checking based on Binary Decision Diagrams (BDD), which we
show to be correct. The experimental results show that in many cases the
proposed techniques outperform other state-of-the-art approaches. Our
implementation Nike competed in the LTLf Realizability Track in the 2023
edition of SYNTCOMP, and won the competition.
|
[
{
"version": "v1",
"created": "Mon, 27 Feb 2023 14:33:50 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 17:02:21 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Favorito",
"Marco",
""
]
] |
new_dataset
| 0.958394 |
2303.12153
|
Kevin Lin
|
Kevin Lin and Christopher Agia and Toki Migimatsu and Marco Pavone and
Jeannette Bohg
|
Text2Motion: From Natural Language Instructions to Feasible Plans
|
https://sites.google.com/stanford.edu/text2motion
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We propose Text2Motion, a language-based planning framework enabling robots
to solve sequential manipulation tasks that require long-horizon reasoning.
Given a natural language instruction, our framework constructs both a task- and
motion-level plan that is verified to reach inferred symbolic goals.
Text2Motion uses feasibility heuristics encoded in Q-functions of a library of
skills to guide task planning with Large Language Models. Whereas previous
language-based planners only consider the feasibility of individual skills,
Text2Motion actively resolves geometric dependencies spanning skill sequences
by performing geometric feasibility planning during its search. We evaluate our
method on a suite of problems that require long-horizon reasoning,
interpretation of abstract goals, and handling of partial affordance
perception. Our experiments show that Text2Motion can solve these challenging
problems with a success rate of 82%, while prior state-of-the-art
language-based planning methods only achieve 13%. Text2Motion thus provides
promising generalization characteristics to semantically diverse sequential
manipulation tasks with geometric dependencies between skills.
|
[
{
"version": "v1",
"created": "Tue, 21 Mar 2023 19:23:30 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2023 23:46:05 GMT"
},
{
"version": "v3",
"created": "Sat, 17 Jun 2023 22:33:11 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Lin",
"Kevin",
""
],
[
"Agia",
"Christopher",
""
],
[
"Migimatsu",
"Toki",
""
],
[
"Pavone",
"Marco",
""
],
[
"Bohg",
"Jeannette",
""
]
] |
new_dataset
| 0.999287 |
2304.01498
|
Wencong Wu
|
Wencong Wu, Guannan Lv, Yingying Duan, Peng Liang, Yungang Zhang,
Yuelong Xia
|
DCANet: Dual Convolutional Neural Network with Attention for Image Blind
Denoising
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Noise removal of images is an essential preprocessing procedure for many
computer vision tasks. Currently, many denoising models based on deep neural
networks can perform well in removing the noise with known distributions (i.e.
the additive Gaussian white noise). However, eliminating real noise is still a
very challenging task, since real-world noise often does not simply follow one
single type of distribution, and the noise may spatially vary. In this paper,
we present a new dual convolutional neural network (CNN) with attention for
image blind denoising, named DCANet. To the best of our knowledge, the
proposed DCANet is the first work that integrates both the dual CNN and
attention mechanism for image denoising. The DCANet is composed of a noise
estimation network, a spatial and channel attention module (SCAM), and a CNN
with a dual structure. The noise estimation network is utilized to estimate the
spatial distribution and the noise level in an image. The noisy image and its
estimated noise are combined as the input of the SCAM, and a dual CNN
containing two different branches is designed to learn complementary features to
obtain the denoised image. The experimental results have verified that the
proposed DCANet can suppress both synthetic and real noise effectively. The
code of DCANet is available at https://github.com/WenCongWu/DCANet.
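For concreteness, the following PyTorch sketch mirrors the overall structure described above: a noise estimation stage, an attention stage over the concatenated image and noise map, and two complementary branches whose outputs are fused. Layer choices, sizes, and names are illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class DualBranchDenoiser(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # crude noise estimator: predicts a per-pixel noise map
        self.noise_est = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1))
        # channel attention over the concatenated (image, noise) input
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(6, 6, 1), nn.Sigmoid())
        # two branches intended to learn complementary features
        self.branch_a = nn.Sequential(            # plain convolutions
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.branch_b = nn.Sequential(            # dilated convolutions
            nn.Conv2d(6, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, x):
        noise = self.noise_est(x)
        z = torch.cat([x, noise], dim=1)      # noisy image + estimated noise
        z = z * self.attn(z)                  # attention module (SCAM-like)
        out = torch.cat([self.branch_a(z), self.branch_b(z)], dim=1)
        return x - self.fuse(out)             # residual denoising
```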
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 03:18:27 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2023 01:19:41 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Wu",
"Wencong",
""
],
[
"Lv",
"Guannan",
""
],
[
"Duan",
"Yingying",
""
],
[
"Liang",
"Peng",
""
],
[
"Zhang",
"Yungang",
""
],
[
"Xia",
"Yuelong",
""
]
] |
new_dataset
| 0.977266 |
2304.01844
|
Jingyi Feng
|
Jingyi Feng and Chenming Zhang
|
Grid-SD2E: A General Grid-Feedback in a System for Cognitive Learning
|
19 pages, 7 figures, 8 formulas
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Comprehending how the brain interacts with the external world through
generated neural signals is crucial for determining its working mechanism,
treating brain diseases, and understanding intelligence. Although many
theoretical models have been proposed, they have thus far been difficult to
integrate and develop. In this study, inspired in part by grid cells, we create
a more general and robust grid module and construct an interactive and
self-reinforcing cognitive system together with Bayesian reasoning, an approach
called space-division and exploration-exploitation with grid-feedback
(Grid-SD2E). Here, a grid module can be used as an interaction medium between
the outside world and a system, as well as a self-reinforcement medium within
the system. The space-division and exploration-exploitation (SD2E) component
receives the 0/1 signals of a grid through its space-division (SD) module. The
system described in this paper is also a theoretical model derived from
experiments conducted by other researchers and from our experience in neural
decoding. Herein, we analyse the rationality of the system based on existing
theories in both neuroscience and cognitive science, and attempt to propose
special and general rules to explain the different interactions between people
and between people and the external world. Moreover, based on this model, we
extract the smallest computing unit, which is analogous to a single neuron in
the brain.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 14:54:12 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 09:28:20 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Feng",
"Jingyi",
""
],
[
"Zhang",
"Chenming",
""
]
] |
new_dataset
| 0.963527 |
2304.05661
|
Haojia Yu
|
Haojia Yu, Han Hu, Bo Xu, Qisen Shang, Zhendong Wang and Qing Zhu
|
SuperpixelGraph: Semi-automatic generation of building footprint through
semantic-sensitive superpixel and neural graph networks
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Most urban applications necessitate building footprints in the form of
concise vector graphics with sharp boundaries rather than pixel-wise raster
images. This need contrasts with the majority of existing methods, which
typically generate over-smoothed footprint polygons. Editing these
automatically produced polygons can be inefficient, if not more time-consuming
than manual digitization. This paper introduces a semi-automatic approach for
building footprint extraction through semantically-sensitive superpixels and
neural graph networks. Drawing inspiration from object-based classification
techniques, we first learn to generate superpixels that are not only
boundary-preserving but also semantically-sensitive. The superpixels respond
exclusively to building boundaries rather than other natural objects, while
simultaneously producing semantic segmentation of the buildings. These
intermediate superpixel representations can be naturally considered as nodes
within a graph. Consequently, graph neural networks are employed to model the
global interactions among all superpixels and enhance the representativeness of
node features for building segmentation. Classical approaches are utilized to
extract and regularize boundaries for the vectorized building footprints.
Utilizing minimal clicks and straightforward strokes, we efficiently accomplish
accurate segmentation outcomes, eliminating the necessity for editing polygon
vertices. Our proposed approach demonstrates superior precision and efficacy,
as validated by experimental assessments on various public benchmark datasets.
A significant improvement of 8% in AP50 was observed in vector graphics
evaluation, surpassing established techniques. Additionally, we have devised an
optimized and sophisticated pipeline for interactive editing, poised to further
augment the overall quality of the results.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 07:39:20 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 08:07:09 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Yu",
"Haojia",
""
],
[
"Hu",
"Han",
""
],
[
"Xu",
"Bo",
""
],
[
"Shang",
"Qisen",
""
],
[
"Wang",
"Zhendong",
""
],
[
"Zhu",
"Qing",
""
]
] |
new_dataset
| 0.981193 |
2304.05934
|
Aashaka Desai
|
Aashaka Desai, Lauren Berger, Fyodor O. Minakov, Vanessa Milan,
Chinmay Singh, Kriston Pumphrey, Richard E. Ladner, Hal Daum\'e III, Alex X.
Lu, Naomi Caselli, Danielle Bragg
|
ASL Citizen: A Community-Sourced Dataset for Advancing Isolated Sign
Language Recognition
| null | null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Sign languages are used as a primary language by approximately 70 million
D/deaf people world-wide. However, most communication technologies operate in
spoken and written languages, creating inequities in access. To help tackle
this problem, we release ASL Citizen, the first crowdsourced Isolated Sign
Language Recognition (ISLR) dataset, collected with consent and containing
83,399 videos for 2,731 distinct signs filmed by 52 signers in a variety of
environments. We propose that this dataset be used for sign language dictionary
retrieval for American Sign Language (ASL), where a user demonstrates a sign to
their webcam to retrieve matching signs from a dictionary. We show that
training supervised machine learning classifiers with our dataset advances the
state-of-the-art on metrics relevant for dictionary retrieval, achieving 63%
accuracy and a recall-at-10 of 91%, evaluated entirely on videos of users who
are not present in the training or validation sets. An accessible PDF of this
article is available at the following link:
https://aashakadesai.github.io/research/ASLCitizen_arxiv_updated.pdf
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 15:52:53 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 03:20:18 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Desai",
"Aashaka",
""
],
[
"Berger",
"Lauren",
""
],
[
"Minakov",
"Fyodor O.",
""
],
[
"Milan",
"Vanessa",
""
],
[
"Singh",
"Chinmay",
""
],
[
"Pumphrey",
"Kriston",
""
],
[
"Ladner",
"Richard E.",
""
],
[
"Daumé",
"Hal",
"III"
],
[
"Lu",
"Alex X.",
""
],
[
"Caselli",
"Naomi",
""
],
[
"Bragg",
"Danielle",
""
]
] |
new_dataset
| 0.999883 |
2304.07204
|
Ningyu He
|
Ningyu He, Zhehao Zhao, Jikai Wang, Yubin Hu, Shengjian Guo, Haoyu
Wang, Guangtai Liang, Ding Li, Xiangqun Chen, Yao Guo
|
Eunomia: Enabling User-specified Fine-Grained Search in Symbolically
Executing WebAssembly Binaries
|
!!!NOTE HERE!!! In the arXiv v2 version, I have replaced the original
repo link with a new one, because the original one was hijacked to an extremely
frightening and jump-scare webpage. PLEASE REFER TO
https://github.com/HNYuuu/Eunomia-ISSTA23 NOT THE ORIGINAL shorturl ONE!
| null | null | null |
cs.SE cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Although existing techniques have proposed automated approaches to alleviate
the path explosion problem of symbolic execution, users still need to optimize
symbolic execution by applying various searching strategies carefully. As
existing approaches mainly support only coarse-grained global searching
strategies, they cannot efficiently traverse through complex code structures.
In this paper, we propose Eunomia, a symbolic execution technique that allows
users to specify local domain knowledge to enable fine-grained search. In
Eunomia, we design an expressive DSL, Aes, that lets users precisely pinpoint
local searching strategies to different parts of the target program. To further
optimize local searching strategies, we design an interval-based algorithm that
automatically isolates the context of variables for different local searching
strategies, avoiding conflicts between local searching strategies for the same
variable. We implement Eunomia as a symbolic execution platform targeting
WebAssembly, which enables us to analyze applications written in various
languages (like C and Go) but can be compiled into WebAssembly. To the best of
our knowledge, Eunomia is the first symbolic execution engine that supports the
full features of the WebAssembly runtime. We evaluate Eunomia with a dedicated
microbenchmark suite for symbolic execution and six real-world applications.
Our evaluation shows that Eunomia accelerates bug detection in real-world
applications by up to three orders of magnitude. According to the results of a
comprehensive user study, users can significantly improve the efficiency and
effectiveness of symbolic execution by writing a simple and intuitive Aes
script. Besides verifying six known real-world bugs, Eunomia also detected two
new zero-day bugs in a popular open-source project, Collections-C.
|
[
{
"version": "v1",
"created": "Fri, 14 Apr 2023 15:31:18 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Jun 2023 06:05:59 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"He",
"Ningyu",
""
],
[
"Zhao",
"Zhehao",
""
],
[
"Wang",
"Jikai",
""
],
[
"Hu",
"Yubin",
""
],
[
"Guo",
"Shengjian",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Liang",
"Guangtai",
""
],
[
"Li",
"Ding",
""
],
[
"Chen",
"Xiangqun",
""
],
[
"Guo",
"Yao",
""
]
] |
new_dataset
| 0.980537 |
2304.12991
|
Miguel \'Angel Navarro-P\'erez
|
Clementa Alonso-Gonz\'alez and Miguel \'Angel Navarro-P\'erez
|
A new invariant for cyclic orbit flag codes
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In the network coding framework, given a prime power $q$ and the vector space
$\mathbb{F}_q^n$, a constant type flag code is a set of nested sequences of
$\mathbb{F}_q$-subspaces (flags) with the same increasing sequence of
dimensions (the type of the flag). If a flag code arises as the orbit under the
action of a cyclic subgroup of the general linear group over a flag, we say
that it is a cyclic orbit flag code. Among the parameters of such a family of
codes, we have its best friend, that is, the largest field over which all the
subspaces in the generating flag are vector spaces. This object makes it
possible to compute the cardinality of the code and to estimate its minimum
distance. However,
as it occurs with other absolute parameters of a flag code, the information
given by the best friend is not complete in many cases due to the fact that it
can be obtained in different ways. In this work, we present a new invariant,
the best friend vector, that captures the specific way the best friend can be
unfolded. Furthermore, throughout the paper we analyze the strong underlying
interaction between this invariant and other parameters such as the
cardinality, the flag distance, or the type vector, and how it conditions them.
Finally, we investigate the realizability of a prescribed best friend vector in
a vector space.
|
[
{
"version": "v1",
"created": "Tue, 25 Apr 2023 17:01:19 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 16:46:33 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Alonso-González",
"Clementa",
""
],
[
"Navarro-Pérez",
"Miguel Ángel",
""
]
] |
new_dataset
| 0.99932 |
2304.14701
|
Andrew Lewis-Pye
|
Andrew Lewis-Pye and Tim Roughgarden
|
Permissionless Consensus
|
This is a journal version of the paper that subsumes earlier
(conference) versions "Byzantine Generals in the Permissionless Setting" and
"Resource Pools and the CAP Theorem"
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchain protocols typically aspire to run in the permissionless setting,
in which nodes are owned and operated by a large number of diverse and unknown
entities, with each node free to start or stop running the protocol at any
time. This setting is more challenging than the traditional permissioned
setting, in which the set of nodes that will be running the protocol is fixed
and known at the time of protocol deployment. The goal of this paper is to
provide a framework for reasoning about the rich design space of blockchain
protocols and their capabilities and limitations in the permissionless setting.
This paper offers a hierarchy of settings with different "degrees of
permissionlessness", specified by the amount of knowledge that a protocol has
about the current participants: These are the fully permissionless, dynamically
available and quasi-permissionless settings.
The paper also proves several results illustrating the utility of our
analysis framework for reasoning about blockchain protocols in these settings.
For example:
(1) In the fully permissionless setting, even with synchronous communication
and with severe restrictions on the total size of the Byzantine players, every
deterministic protocol for Byzantine agreement has an infinite execution.
(2) In the dynamically available and partially synchronous setting, no
protocol can solve the Byzantine agreement problem with high probability, even
if there are no Byzantine players at all.
(3) In the quasi-permissionless and partially synchronous setting, by
contrast, assuming a bound on the total size of the Byzantine players, there is
a deterministic protocol guaranteed to solve the Byzantine agreement problem in
a finite amount of time.
(4) In the quasi-permissionless and synchronous setting, every proof-of-stake
protocol that does not use advanced cryptography is vulnerable to long-range
attacks.
|
[
{
"version": "v1",
"created": "Fri, 28 Apr 2023 09:15:55 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 10:46:22 GMT"
},
{
"version": "v3",
"created": "Sat, 17 Jun 2023 12:40:23 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Lewis-Pye",
"Andrew",
""
],
[
"Roughgarden",
"Tim",
""
]
] |
new_dataset
| 0.974824 |
2305.10764
|
Minghua Liu
|
Minghua Liu, Ruoxi Shi, Kaiming Kuang, Yinhao Zhu, Xuanlin Li,
Shizhong Han, Hong Cai, Fatih Porikli, Hao Su
|
OpenShape: Scaling Up 3D Shape Representation Towards Open-World
Understanding
|
Project Website: https://colin97.github.io/OpenShape/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce OpenShape, a method for learning multi-modal joint
representations of text, image, and point clouds. We adopt the commonly used
multi-modal contrastive learning framework for representation alignment, but
with a specific focus on scaling up 3D representations to enable open-world 3D
shape understanding. To achieve this, we scale up training data by ensembling
multiple 3D datasets and propose several strategies to automatically filter and
enrich noisy text descriptions. We also explore and compare strategies for
scaling 3D backbone networks and introduce a novel hard negative mining module
for more efficient training. We evaluate OpenShape on zero-shot 3D
classification benchmarks and demonstrate its superior capabilities for
open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy
of 46.8% on the 1,156-category Objaverse-LVIS benchmark, compared to less than
10% for existing methods. OpenShape also achieves an accuracy of 85.3% on
ModelNet40, outperforming previous zero-shot baseline methods by 20% and
performing on par with some fully-supervised methods. Furthermore, we show that
our learned embeddings encode a wide range of visual and semantic concepts
(e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D
and image-3D interactions. Due to their alignment with CLIP embeddings, our
learned shape representations can also be integrated with off-the-shelf
CLIP-based models for various applications, such as point cloud captioning and
point cloud-conditioned image generation.
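The core training signal described above is a CLIP-style contrastive alignment between 3D shape embeddings and frozen text/image embeddings. A minimal sketch, with the encoders and the hard negative mining module omitted and names chosen for illustration:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_3d, z_clip, temperature=0.07):
    """Symmetric InfoNCE between batch-aligned 3D and CLIP embeddings."""
    z_3d = F.normalize(z_3d, dim=-1)
    z_clip = F.normalize(z_clip, dim=-1)
    logits = z_3d @ z_clip.t() / temperature        # (B, B) similarities
    labels = torch.arange(z_3d.size(0), device=z_3d.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# training step (sketch): align each shape with the text and image
# embeddings of the same object
# loss = contrastive_loss(point_encoder(pc), clip_text_emb) + \
#        contrastive_loss(point_encoder(pc), clip_image_emb)
```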
|
[
{
"version": "v1",
"created": "Thu, 18 May 2023 07:07:19 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 23:31:40 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Liu",
"Minghua",
""
],
[
"Shi",
"Ruoxi",
""
],
[
"Kuang",
"Kaiming",
""
],
[
"Zhu",
"Yinhao",
""
],
[
"Li",
"Xuanlin",
""
],
[
"Han",
"Shizhong",
""
],
[
"Cai",
"Hong",
""
],
[
"Porikli",
"Fatih",
""
],
[
"Su",
"Hao",
""
]
] |
new_dataset
| 0.999235 |
2305.14019
|
Kaiyan Chang
|
Kaiyan Chang and Ying Wang and Haimeng Ren and Mengdi Wang and
Shengwen Liang and Yinhe Han and Huawei Li and Xiaowei Li
|
ChipGPT: How far are we from natural language hardware design
| null | null | null | null |
cs.AI cs.AR cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
As large language models (LLMs) like ChatGPT exhibit unprecedented machine
intelligence, they also show great promise in assisting hardware engineers to
realize higher-efficiency logic designs via natural language interaction. To
estimate the potential of LLM-assisted hardware design, this work demonstrates
an automated design environment that uses LLMs to generate hardware logic
designs from natural language specifications. To realize a more accessible and
efficient chip development flow, we present a scalable four-stage zero-code
logic design framework based on LLMs without retraining or finetuning. First,
the demo, ChipGPT, generates prompts for the LLM, which then produces initial
Verilog programs. Second, an output manager corrects and optimizes these
programs before collecting them into the final design space. Finally, ChipGPT
searches through this space to select the optimal design under the target
metrics. The evaluation sheds some light on whether LLMs can generate correct
and complete hardware logic designs described by natural language for some
specifications. It is shown that ChipGPT improves programmability and
controllability, and exposes a broader design optimization space compared to
prior work and native LLMs alone.
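A schematic sketch of the four-stage flow, with the LLM backend, the output manager, and the evaluation metric injected as hypothetical callables (none of this is the authors' actual tooling):

```python
def chipgpt_flow(spec_text, llm, lint_and_fix, evaluate, n_candidates=8):
    """Prompt -> generate Verilog -> correct -> search the design space."""
    prompt = f"Write synthesizable Verilog for this spec:\n{spec_text}"
    design_space = []
    for _ in range(n_candidates):        # stages 1-2: prompting + generation
        verilog = llm(prompt)            # initial Verilog program
        fixed = lint_and_fix(verilog)    # stage 3: output manager
        if fixed is not None:            # drop uncorrectable outputs
            design_space.append(fixed)
    # stage 4: search the collected space under the target metric
    return min(design_space, key=evaluate) if design_space else None
```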
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 12:54:02 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Jun 2023 13:24:11 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Jun 2023 08:28:15 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Chang",
"Kaiyan",
""
],
[
"Wang",
"Ying",
""
],
[
"Ren",
"Haimeng",
""
],
[
"Wang",
"Mengdi",
""
],
[
"Liang",
"Shengwen",
""
],
[
"Han",
"Yinhe",
""
],
[
"Li",
"Huawei",
""
],
[
"Li",
"Xiaowei",
""
]
] |
new_dataset
| 0.998953 |
2305.14335
|
Henghui Ding
|
Shuting He, Xudong Jiang, Wei Jiang, Henghui Ding
|
Prototype Adaption and Projection for Few- and Zero-shot 3D Point Cloud
Semantic Segmentation
|
IEEE TIP
| null |
10.1109/TIP.2023.3279660
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we address the challenging task of few-shot and zero-shot 3D
point cloud semantic segmentation. The success of few-shot semantic
segmentation in 2D computer vision is mainly driven by the pre-training on
large-scale datasets like ImageNet. The feature extractor pre-trained on
large-scale 2D datasets greatly helps the 2D few-shot learning. However, the
development of 3D deep learning is hindered by the limited volume and instance
modality of datasets due to the significant cost of 3D data collection and
annotation. This results in less representative features and large intra-class
feature variation for few-shot 3D point cloud segmentation. As a consequence,
directly extending popular prototypical methods for 2D few-shot
classification/segmentation to 3D point cloud segmentation does not work as
well as in the 2D domain. To address this issue, we propose a Query-Guided Prototype
Adaption (QGPA) module to adapt the prototype from the support point cloud
feature space to the query point cloud feature space. With such prototype adaption, we
greatly alleviate the issue of large feature intra-class variation in point
cloud and significantly improve the performance of few-shot 3D segmentation.
Besides, to enhance the representation of prototypes, we introduce a
Self-Reconstruction (SR) module that enables the prototype to reconstruct the
support mask as faithfully as possible. Moreover, we further consider zero-shot 3D
point cloud semantic segmentation where there is no support sample. To this
end, we introduce category words as semantic information and propose a
semantic-visual projection model to bridge the semantic and visual spaces. Our
proposed method surpasses state-of-the-art algorithms by a considerable 7.90%
and 14.82% under the 2-way 1-shot setting on S3DIS and ScanNet benchmarks,
respectively. Code is available at https://github.com/heshuting555/PAP-FZS3D.
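The two central operations can be sketched compactly: prototypes via masked average pooling over support features, then a query-guided adaption that re-expresses each prototype in the query feature space with cross-attention. Tensor shapes and the projection-free attention form are simplifying assumptions:

```python
import torch
import torch.nn.functional as F

def masked_avg_prototype(support_feat, support_mask):
    # support_feat: (N, C) point features; support_mask: (N,) in {0, 1}
    w = support_mask / (support_mask.sum() + 1e-6)
    return (support_feat * w.unsqueeze(-1)).sum(dim=0)        # (C,)

def query_guided_adaption(prototypes, query_feat):
    # prototypes: (K, C) from the support set; query_feat: (M, C)
    attn = F.softmax(prototypes @ query_feat.t()
                     / prototypes.size(-1) ** 0.5, dim=-1)    # (K, M)
    return attn @ query_feat            # prototypes adapted to the query

# segmentation (sketch): label each query point by its most similar
# adapted prototype, e.g. via cosine similarity.
```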
|
[
{
"version": "v1",
"created": "Tue, 23 May 2023 17:58:05 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"He",
"Shuting",
""
],
[
"Jiang",
"Xudong",
""
],
[
"Jiang",
"Wei",
""
],
[
"Ding",
"Henghui",
""
]
] |
new_dataset
| 0.990242 |
2305.16307
|
Jay Gala
|
AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and
Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and
Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar
and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan
|
IndicTrans2: Towards High-Quality and Accessible Machine Translation
Models for all 22 Scheduled Indian Languages
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
India has a rich linguistic landscape with languages from 4 major language
families spoken by over a billion people. The 22 languages listed in the
Constitution of India (referred to as scheduled languages) are the focus of
this work. Given the linguistic diversity, high-quality and accessible Machine
Translation (MT) systems are essential in a country like India. Prior to this
work, there were (i) no parallel training data spanning all the 22 languages,
(ii) no robust benchmarks covering all these languages and containing content
relevant to India, and (iii) no existing translation models which support all
the 22 scheduled languages of India. In this work, we aim to address this gap
by focusing on the missing pieces required for enabling wide, easy, and open
access to good machine translation systems for all 22 scheduled Indian
languages. We identify four key areas of improvement: curating and creating
larger training datasets, creating diverse and high-quality benchmarks,
training multilingual models, and releasing models with open access. Our first
contribution is the release of the Bharat Parallel Corpus Collection (BPCC),
the largest publicly available parallel corpora for Indic languages. BPCC
contains a total of 230M bitext pairs, of which a total of 126M were newly
added, including 644K manually translated sentence pairs created as part of
this work. Our second contribution is the release of the first n-way parallel
benchmark covering all 22 Indian languages, featuring diverse domains,
Indian-origin content, and source-original test sets. Next, we present
IndicTrans2, the first model to support all 22 languages, surpassing existing
models on multiple existing and new benchmarks created as a part of this work.
Lastly, to promote accessibility and collaboration, we release our models and
associated data with permissive licenses at
https://github.com/ai4bharat/IndicTrans2.
|
[
{
"version": "v1",
"created": "Thu, 25 May 2023 17:57:43 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2023 04:00:19 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"AI4Bharat",
"",
""
],
[
"Gala",
"Jay",
""
],
[
"Chitale",
"Pranjal A.",
""
],
[
"AK",
"Raghavan",
""
],
[
"Doddapaneni",
"Sumanth",
""
],
[
"Gumma",
"Varun",
""
],
[
"Kumar",
"Aswanth",
""
],
[
"Nawale",
"Janki",
""
],
[
"Sujatha",
"Anupama",
""
],
[
"Puduppully",
"Ratish",
""
],
[
"Raghavan",
"Vivek",
""
],
[
"Kumar",
"Pratyush",
""
],
[
"Khapra",
"Mitesh M.",
""
],
[
"Dabre",
"Raj",
""
],
[
"Kunchukuttan",
"Anoop",
""
]
] |
new_dataset
| 0.999785 |
2305.17262
|
Bhishma Dedhia
|
Bhishma Dedhia, Michael Chang, Jake C. Snell, Thomas L. Griffiths,
Niraj K. Jha
|
Im-Promptu: In-Context Composition from Image Prompts
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models are few-shot learners that can solve diverse tasks from
a handful of demonstrations. This implicit understanding of tasks suggests that
the attention mechanisms over word tokens may play a role in analogical
reasoning. In this work, we investigate whether analogical reasoning can enable
in-context composition over composable elements of visual stimuli. First, we
introduce a suite of three benchmarks to test the generalization properties of
a visual in-context learner. We formalize the notion of an analogy-based
in-context learner and use it to design a meta-learning framework called
Im-Promptu. Whereas the requisite token granularity for language is well
established, the appropriate compositional granularity for enabling in-context
generalization in visual stimuli is usually unspecified. To this end, we use
Im-Promptu to train multiple agents with different levels of compositionality,
including vector representations, patch representations, and object slots. Our
experiments reveal tradeoffs between extrapolation abilities and the degree of
compositionality, with non-compositional representations extending learned
composition rules to unseen domains but performing poorly on combinatorial
tasks. Patch-based representations require patches to contain entire objects
for robust extrapolation. At the same time, object-centric tokenizers coupled
with a cross-attention module generate consistent and high-fidelity solutions,
with these inductive biases being particularly crucial for compositional
generalization. Lastly, we demonstrate a use case of Im-Promptu as an intuitive
programming interface for image generation.
|
[
{
"version": "v1",
"created": "Fri, 26 May 2023 21:10:11 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2023 00:06:34 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Dedhia",
"Bhishma",
""
],
[
"Chang",
"Michael",
""
],
[
"Snell",
"Jake C.",
""
],
[
"Griffiths",
"Thomas L.",
""
],
[
"Jha",
"Niraj K.",
""
]
] |
new_dataset
| 0.999446 |
2305.19981
|
Yan Wang
|
Yan Wang, Heidi Ann Scharf Donovan, Sabit Hassan, Mailhe Alikhani
|
MedNgage: A Dataset for Understanding Engagement in Patient-Nurse
Conversations
|
ACL Findings 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Patients who effectively manage their symptoms often demonstrate higher
levels of engagement in conversations and interventions with healthcare
practitioners. This engagement is multifaceted, encompassing cognitive and
socio-affective dimensions. Consequently, it is crucial for AI systems to
understand the engagement in natural conversations between patients and
practitioners to better contribute toward patient care. In this paper, we
present a novel dataset (MedNgage), which consists of patient-nurse
conversations about cancer symptom management. We manually annotate the dataset
with a novel framework of categories of patient engagement from two different
angles, namely: i) socio-affective (3.1K spans), and ii) cognitive use of
language (1.8K spans). Through statistical analysis of the data that is
annotated using our framework, we show a positive correlation between patient
symptom management outcomes and their engagement in conversations.
Additionally, we demonstrate that pre-trained transformer models fine-tuned on
our dataset can reliably predict engagement classes in patient-nurse
conversations. Lastly, we use LIME (Ribeiro et al., 2016) to analyze the
underlying challenges of the tasks that state-of-the-art transformer models
encounter. The de-identified data is available for research purposes upon
request.
|
[
{
"version": "v1",
"created": "Wed, 31 May 2023 16:06:07 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 16:52:56 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Wang",
"Yan",
""
],
[
"Donovan",
"Heidi Ann Scharf",
""
],
[
"Hassan",
"Sabit",
""
],
[
"Alikhani",
"Mailhe",
""
]
] |
new_dataset
| 0.999828 |
2306.02144
|
ShengZhuo Wei
|
Shengzhuo Wei and Yan Lan
|
A two-way translation system of Chinese sign language based on computer
vision
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the main means of communication for deaf people, sign language has a
special grammatical order, so it is meaningful and valuable to develop a
real-time translation system for sign language. In the research process, we
added a TSM module to a lightweight neural network model for the large Chinese
continuous sign language dataset. This effectively improves the network
performance, with high accuracy and fast recognition speed. At the same time,
we improve the Bert-Base-Chinese model to divide Chinese sentences into words
and to map the natural word order to the prescribed sign language order, and
finally use the corresponding word videos in the isolated sign language dataset
to generate the sentence video, so as to achieve text-to-sign-language
translation. Finally, we built a system with sign language recognition and
translation functions and conducted performance tests on the complete dataset.
The sign language video recognition accuracy reached about 99.3% with a
recognition time of about 0.05 seconds, and the sign language video generation
time was about 1.3 seconds. The sign language system performs well and is
feasible.
|
[
{
"version": "v1",
"created": "Sat, 3 Jun 2023 16:00:57 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2023 18:04:07 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Wei",
"Shengzhuo",
""
],
[
"Lan",
"Yan",
""
]
] |
new_dataset
| 0.986035 |
2306.02323
|
Ganghui Lin
|
Ganghui Lin, Ahmed Elzanaty, Mohamed-Slim Alouini
|
LoRa Backscatter Communications: Temporal, Spectral, and Error
Performance Analysis
|
Early access in IEEE Journal of Internet of Things. Codes are
provided in Github:
https://github.com/SlinGovie/LoRa-Backscatter-Performance-Analysis
|
IEEE Internet of Things Journal
|
10.1109/JIOT.2023.3268113
| null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
LoRa backscatter (LB) communication systems can be considered a potential
candidate for ultra-low-power wide area networks (LPWAN) because of their low
cost and low power consumption. In this paper, we comprehensively analyze LB
modulation from various aspects, i.e., temporal, spectral, and error
performance characteristics. First, we propose a signal model for LB signals
that accounts for the limited number of loads in the tag. Then, we investigate
the spectral properties of LB signals, obtaining a closed-form expression for
the power spectrum. Finally, we derive the symbol error rate (SER) of LB with
two decoders, i.e., the maximum likelihood (ML) and fast Fourier transform
(FFT) decoders, in both additive white Gaussian noise (AWGN) and double
Nakagami-m fading channels. The spectral analysis shows that out-of-band
emissions for LB satisfy the European Telecommunications Standards Institute
(ETSI) regulation only when considering a relatively large number of loads. For
the error performance, unlike conventional LoRa, the FFT decoder is not
optimal. Nevertheless, the ML decoder can achieve a performance similar to
conventional LoRa with a moderate number of loads.
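The FFT decoder discussed above admits a compact illustration: multiply the received chirp by a conjugate down-chirp and take the argmax of the FFT magnitude. This idealized numpy sketch (one sample per chip, perfect synchronization, no noise) is for intuition only:

```python
import numpy as np

def fft_decode(rx, sf=7):
    """Decode one chirp-spread-spectrum symbol of 2**sf samples."""
    n = 2 ** sf
    k = np.arange(n)
    downchirp = np.exp(-1j * np.pi * k * k / n)   # conjugate base chirp
    spectrum = np.fft.fft(rx * downchirp)         # dechirp, then FFT
    return int(np.argmax(np.abs(spectrum)))       # symbol = peak bin

# self-check: synthesize symbol s as a cyclically shifted up-chirp
sf, s = 7, 42
n = 2 ** sf
k = np.arange(n)
tx = np.exp(1j * np.pi * (k + s) ** 2 / n)
assert fft_decode(tx, sf) == s
```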
|
[
{
"version": "v1",
"created": "Sun, 4 Jun 2023 10:30:04 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 14:33:44 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Lin",
"Ganghui",
""
],
[
"Elzanaty",
"Ahmed",
""
],
[
"Alouini",
"Mohamed-Slim",
""
]
] |
new_dataset
| 0.99663 |
2306.06687
|
Zhenfei Yin
|
Zhenfei Yin, Jiong Wang, Jianjian Cao, Zhelun Shi, Dingning Liu, Mukai
Li, Lu Sheng, Lei Bai, Xiaoshui Huang, Zhiyong Wang, Jing Shao, Wanli Ouyang
|
LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset,
Framework, and Benchmark
|
37 pages, 33 figures. Code available at
https://github.com/OpenLAMM/LAMM ; Project page: https://openlamm.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have become a potential pathway toward achieving
artificial general intelligence. Recent works on multi-modal large language
models have demonstrated their effectiveness in handling visual modalities. In
this work, we extend the research of MLLMs to point clouds and present the
LAMM-Dataset and LAMM-Benchmark for 2D image and 3D point cloud understanding.
We also establish an extensible framework to facilitate the extension of MLLMs
to additional modalities. Our main contribution is three-fold: 1) We present
the LAMM-Dataset and LAMM-Benchmark, which cover almost all high-level vision
tasks for 2D and 3D vision. Extensive experiments validate the effectiveness of
our dataset and benchmark. 2) We demonstrate the detailed methods of
constructing instruction-tuning datasets and benchmarks for MLLMs, which will
enable future research on MLLMs to scale up and extend to other domains, tasks,
and modalities faster. 3) We provide a primary but potential MLLM training
framework optimized for modalities' extension. We also provide baseline models,
comprehensive experimental observations, and analysis to accelerate future
research. Codes and datasets are now available at
https://github.com/OpenLAMM/LAMM.
|
[
{
"version": "v1",
"created": "Sun, 11 Jun 2023 14:01:17 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Jun 2023 13:15:47 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Yin",
"Zhenfei",
""
],
[
"Wang",
"Jiong",
""
],
[
"Cao",
"Jianjian",
""
],
[
"Shi",
"Zhelun",
""
],
[
"Liu",
"Dingning",
""
],
[
"Li",
"Mukai",
""
],
[
"Sheng",
"Lu",
""
],
[
"Bai",
"Lei",
""
],
[
"Huang",
"Xiaoshui",
""
],
[
"Wang",
"Zhiyong",
""
],
[
"Shao",
"Jing",
""
],
[
"Ouyang",
"Wanli",
""
]
] |
new_dataset
| 0.999619 |
2306.07245
|
Deison Preve
|
Deison Preve, Pietro Lenarda, Daniele Bianchi and Alessio Gizzi
|
Phase field modelling and simulation of damage occurring in human
vertebra after screws fixation procedure
|
23 pages, 9 figures. arXiv admin note: text overlap with
arXiv:2207.09362
| null | null | null |
cs.CE q-bio.TO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The present work numerically exploits a phase-field model to simulate and
investigate fracture patterns, deformation mechanisms, damage, and mechanical
responses in a human vertebra after the insertion of pedicle screws under
compressive regimes. Moreover, the proposed phase-field framework can
elucidate scenarios where different damage patterns, such as crack nucleation
sites and crack trajectories, play a role after the spine fusion procedure,
considering several simulated physiological movements of the vertebral body. A
convergence analysis has been conducted for the vertebra-screws model,
considering several mesh refinements, which has demonstrated good agreement
with the existing literature on this topic. Consequently, by assuming different
angles for the insertion of the pedicle screws and taking into account a few
vertebral motion loading regimes, a plethora of numerical results
characterizing the damage occurring within the vertebral model has been
derived. Overall, the phase-field results may offer useful insight to the
medical community, helping to enhance clinical interventions and to reduce
post-surgery bone failure and screw loosening.
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 17:11:35 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2023 21:52:39 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Preve",
"Deison",
""
],
[
"Lenarda",
"Pietro",
""
],
[
"Bianchi",
"Daniele",
""
],
[
"Gizzi",
"Alessio",
""
]
] |
new_dataset
| 0.990545 |
2306.07547
|
Chenpeng Du
|
Chenpeng Du, Yiwei Guo, Feiyu Shen, Zhijun Liu, Zheng Liang, Xie Chen,
Shuai Wang, Hui Zhang, Kai Yu
|
UniCATS: A Unified Context-Aware Text-to-Speech Framework with
Contextual VQ-Diffusion and Vocoding
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The utilization of discrete speech tokens, divided into semantic tokens and
acoustic tokens, has been proven superior to traditional mel-spectrogram
acoustic features in terms of naturalness and robustness for text-to-speech
(TTS) synthesis. Recent popular models, such as VALL-E and SPEAR-TTS, allow
zero-shot speaker adaptation through auto-regressive (AR) continuation of
acoustic tokens extracted from a short speech prompt. However, these AR models
are restricted to generating speech only in a left-to-right direction, making
them unsuitable for speech editing where both preceding and following contexts
are provided. Furthermore, these models rely on acoustic tokens, which have
audio quality limitations imposed by the performance of audio codec models. In
this study, we propose a unified context-aware TTS framework called UniCATS,
which is capable of both speech continuation and editing. UniCATS comprises two
components, an acoustic model CTX-txt2vec and a vocoder CTX-vec2wav.
CTX-txt2vec employs contextual VQ-diffusion to predict semantic tokens from the
input text, enabling it to incorporate the semantic context and maintain
seamless concatenation with the surrounding context. Following that,
CTX-vec2wav utilizes contextual vocoding to convert these semantic tokens into
waveforms, taking into consideration the acoustic context. Our experimental
results demonstrate that CTX-vec2wav outperforms HifiGAN and AudioLM in terms
of speech resynthesis from semantic tokens. Moreover, we show that UniCATS
achieves state-of-the-art performance in both speech continuation and editing.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 05:38:34 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Jun 2023 07:30:52 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Du",
"Chenpeng",
""
],
[
"Guo",
"Yiwei",
""
],
[
"Shen",
"Feiyu",
""
],
[
"Liu",
"Zhijun",
""
],
[
"Liang",
"Zheng",
""
],
[
"Chen",
"Xie",
""
],
[
"Wang",
"Shuai",
""
],
[
"Zhang",
"Hui",
""
],
[
"Yu",
"Kai",
""
]
] |
new_dataset
| 0.976568 |
2306.07890
|
Haoping Bai
|
Haoping Bai, Shancong Mou, Tatiana Likhomanenko, Ramazan Gokberk
Cinbis, Oncel Tuzel, Ping Huang, Jiulong Shan, Jianjun Shi, Meng Cao
|
VISION Datasets: A Benchmark for Vision-based InduStrial InspectiON
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Despite progress in vision-based inspection algorithms, real-world industrial
challenges -- specifically in data availability, quality, and complex
production requirements -- often remain under-addressed. We introduce the
VISION Datasets, a diverse collection of 14 industrial inspection datasets,
uniquely poised to meet these challenges. Unlike previous datasets, VISION
brings versatility to defect detection, offering annotation masks across all
splits and catering to various detection methodologies. Our datasets also
feature instance-segmentation annotation, enabling precise defect
identification. With a total of 18k images encompassing 44 defect types, VISION
strives to mirror a wide range of real-world production scenarios. By
supporting two ongoing challenge competitions on the VISION Datasets, we hope
to foster further advancements in vision-based industrial inspection.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 16:31:02 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Jun 2023 01:11:04 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Bai",
"Haoping",
""
],
[
"Mou",
"Shancong",
""
],
[
"Likhomanenko",
"Tatiana",
""
],
[
"Cinbis",
"Ramazan Gokberk",
""
],
[
"Tuzel",
"Oncel",
""
],
[
"Huang",
"Ping",
""
],
[
"Shan",
"Jiulong",
""
],
[
"Shi",
"Jianjun",
""
],
[
"Cao",
"Meng",
""
]
] |
new_dataset
| 0.993437 |
2306.08341
|
Yuxuan Zhou
|
Yuxuan Zhou, Xingxing Li, Shengyu Li, Xuanbin Wang, Zhiheng Shen
|
Ground-VIO: Monocular Visual-Inertial Odometry with Online Calibration
of Camera-Ground Geometric Parameters
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Monocular visual-inertial odometry (VIO) is a low-cost solution to provide
high-accuracy, low-drifting pose estimation. However, it faces challenges in
vehicular scenarios due to limited dynamics and a lack of stable
features. In this paper, we propose Ground-VIO, which utilizes ground features
and the specific camera-ground geometry to enhance monocular VIO performance in
realistic road environments. In the method, the camera-ground geometry is
modeled with vehicle-centered parameters and integrated into an
optimization-based VIO framework. These parameters could be calibrated online
and simultaneously improve the odometry accuracy by providing stable
scale-awareness. Besides, a specially designed visual front-end is developed to
stably extract and track ground features via the inverse perspective mapping
(IPM) technique. Both simulation tests and real-world experiments are conducted
to verify the effectiveness of the proposed method. The results show that our
implementation could dramatically improve monocular VIO accuracy in vehicular
scenarios, achieving comparable or even better performance than state-of-art
stereo VIO solutions. The system could also be used for the auto-calibration of
IPM which is widely used in vehicle perception. A toolkit for ground feature
processing, together with the experimental datasets, would be made open-source
(https://github.com/GREAT-WHU/gv_tools).
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 08:18:35 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Jun 2023 09:29:09 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Zhou",
"Yuxuan",
""
],
[
"Li",
"Xingxing",
""
],
[
"Li",
"Shengyu",
""
],
[
"Wang",
"Xuanbin",
""
],
[
"Shen",
"Zhiheng",
""
]
] |
new_dataset
| 0.995667 |
2306.09030
|
Hengli Li
|
Hengli Li, Song-Chun Zhu, Zilong Zheng
|
DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings
that frequently arise in real-life conversations and is essential for the
development of communicative social agents. In this paper, we introduce a novel
challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic
reasoning and situated conversational understanding. Compared with previous
works that treat different figurative expressions (e.g. metaphor, sarcasm) as
individual tasks, DiPlomat provides a cohesive framework towards general
pragmatic understanding. Our dataset is created through the utilization of
Amazon Mechanical Turk (AMT), resulting in a total of 4,177 multi-turn
dialogues. In conjunction with the dataset, we propose two tasks, Pragmatic
Identification and Reasoning (PIR) and Conversational Question Answering (CQA).
Experimental results with state-of-the-art (SOTA) neural architectures reveal
several significant findings: 1) large language models (LLMs) exhibit poor
performance in tackling this subjective domain; 2) comprehensive comprehension
of context emerges as a critical factor for establishing benign human-machine
interactions; 3) current models are deficient in the application of pragmatic
reasoning. As a result, we call for more attention to improving the abilities
of context understanding, reasoning, and implied-meaning modeling.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 10:41:23 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 07:31:55 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Li",
"Hengli",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Zheng",
"Zilong",
""
]
] |
new_dataset
| 0.999803 |
2306.09501
|
Alessandro Ottaviano
|
Alessandro Ottaviano, Robert Balas, Giovanni Bambini, Antonio del
Vecchio, Maicol Ciani, Davide Rossi, Luca Benini, Andrea Bartolini
|
ControlPULP: A RISC-V On-Chip Parallel Power Controller for Many-Core
HPC Processors with FPGA-Based Hardware-In-The-Loop Power and Thermal
Emulation
|
33 pages, 11 figures
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-Performance Computing (HPC) processors are nowadays integrated
Cyber-Physical Systems demanding complex and high-bandwidth closed-loop power
and thermal control strategies. To efficiently satisfy real-time multi-input
multi-output (MIMO) optimal power requirements, high-end processors integrate
an on-die power controller system (PCS).
While traditional PCSs are based on a simple microcontroller (MCU)-class
core, more scalable and flexible PCS architectures are required to support
advanced MIMO control algorithms for managing the ever-increasing number of
cores, power states, and process, voltage, and temperature variability.
This paper presents ControlPULP, an open-source, HW/SW RISC-V parallel PCS
platform consisting of a single-core MCU with fast interrupt handling coupled
with a scalable multi-core programmable cluster accelerator and a specialized
DMA engine for the parallel acceleration of real-time power management
policies. ControlPULP relies on FreeRTOS to schedule a reactive power control
firmware (PCF) application layer.
We demonstrate ControlPULP in a power management use-case targeting a
next-generation 72-core HPC processor. We first show that the multi-core
cluster accelerates the PCF, achieving 4.9x speedup compared to single-core
execution, enabling more advanced power management algorithms within the
control hyper-period at a shallow area overhead, about 0.1% the area of a
modern HPC CPU die. We then assess the PCS and PCF by designing an FPGA-based,
closed-loop emulation framework that leverages the heterogeneous SoCs paradigm,
achieving DVFS tracking with a mean deviation within 3% of the plant's thermal
design power (TDP) against a software-equivalent model-in-the-loop approach.
Finally, we show that the proposed PCF compares favorably with an
industry-grade control algorithm under computational-intensive workloads.
|
[
{
"version": "v1",
"created": "Thu, 15 Jun 2023 20:51:01 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2023 06:47:31 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Ottaviano",
"Alessandro",
""
],
[
"Balas",
"Robert",
""
],
[
"Bambini",
"Giovanni",
""
],
[
"del Vecchio",
"Antonio",
""
],
[
"Ciani",
"Maicol",
""
],
[
"Rossi",
"Davide",
""
],
[
"Benini",
"Luca",
""
],
[
"Bartolini",
"Andrea",
""
]
] |
new_dataset
| 0.997643 |
2306.09754
|
Daniel Reijsbergen
|
Dani\"el Reijsbergen, Bretislav Hajek, Tien Tuan Anh Dinh, Jussi
Keppo, Hank Korth, Anwitaman Datta
|
CroCoDai: A Stablecoin for Cross-Chain Commerce
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Decentralized Finance (DeFi), in which digital assets are exchanged without
trusted intermediaries, has grown rapidly in value in recent years. The global
DeFi ecosystem is fragmented into multiple blockchains, fueling the demand for
cross-chain commerce. Existing approaches for cross-chain transactions, e.g.,
bridges and cross-chain deals, achieve atomicity by locking assets in escrow.
However, locking up assets increases the financial risks for the participants,
especially due to price fluctuations and the long latency of cross-chain
transactions. Stablecoins, which are pegged to a non-volatile asset such as the
US dollar, help mitigate the risk associated with price fluctuations. However,
existing stablecoin designs are tied to individual blockchain platforms, and
trusted parties or complex protocols are needed to exchange stablecoin tokens
between blockchains.
Our goal is to design a practical stablecoin for cross-chain commerce.
Realizing this goal requires addressing two challenges. The first challenge is
to support a large and growing number of blockchains efficiently. The second
challenge is to be resilient to price fluctuations and blockchain platform
failures. We present CroCoDai to address these challenges. We also present
three prototype implementations of our stablecoin system, and show that it
incurs small execution overhead.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 10:41:28 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2023 08:20:07 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Reijsbergen",
"Daniël",
""
],
[
"Hajek",
"Bretislav",
""
],
[
"Dinh",
"Tien Tuan Anh",
""
],
[
"Keppo",
"Jussi",
""
],
[
"Korth",
"Hank",
""
],
[
"Datta",
"Anwitaman",
""
]
] |
new_dataset
| 0.991963 |
2306.10019
|
Raula Gaikovina Kula Dr
|
Marc Cheong, Raula Gaikovina Kula, Christoph Treude
|
Ethical Considerations Towards Protestware
|
Under submission
| null | null | null |
cs.CY cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
A key drawback to using an Open Source third-party library is the risk of
introducing malicious attacks. In recent times, these threats have taken a new
form, with maintainers turning their Open Source libraries into protestware.
This is defined as software containing political messages delivered through
these libraries, which can be either malicious or benign. Since developers are
willing to freely open up their software to these libraries, much trust and
responsibility are placed on the maintainers to ensure that the library does
what it promises to do. This paper takes a look into the possible scenarios
where developers might consider turning their Open Source Software into
protestware, using an ethico-philosophical lens. Using different frameworks
commonly used in AI ethics, we explore the different dilemmas that may result
in protestware. Additionally, we illustrate how an open-source maintainer's
decision to protest is influenced by different stakeholders (viz., their
membership in the OSS community, their personal views, financial motivations,
social status, and moral viewpoints), making protestware a multifaceted and
intricate matter.
|
[
{
"version": "v1",
"created": "Sat, 27 May 2023 10:59:48 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Cheong",
"Marc",
""
],
[
"Kula",
"Raula Gaikovina",
""
],
[
"Treude",
"Christoph",
""
]
] |
new_dataset
| 0.979834 |
2306.10020
|
Raula Gaikovina Kula Dr
|
Raula Gaikovina Kula and Gregorio Robles
|
The Life and Death of Software Ecosystems
|
Book Chapter
| null |
10.1007/978-981-13-7099-1_6
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Software ecosystems have gained a lot of attention in recent times. Industry
and developers gather around technologies and collaborate on their advancement;
when the boundaries of such an effort go beyond a certain number of projects, we
are witnessing the appearance of Free/Libre and Open Source Software (FLOSS)
ecosystems.
In this chapter, we explore two aspects that contribute to a healthy
ecosystem, related to the attraction (and detraction) and the death of
ecosystems. To function and survive, ecosystems need to attract people, get
them on-boarded and retain them. In Section One we explore possibilities with
provocative research questions for attracting and detracting contributors (and
users): the lifeblood of FLOSS ecosystems. Then, in Section Two, we focus on
the death of systems, exploring some systems presumed to be dead and their
state in the afterlife.
|
[
{
"version": "v1",
"created": "Sun, 28 May 2023 23:43:19 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Kula",
"Raula Gaikovina",
""
],
[
"Robles",
"Gregorio",
""
]
] |
new_dataset
| 0.999408 |
2306.10035
|
Amir Bahrami
|
Amir Bahrami, Zo\'e-Lise Deck-L\'eger, Zhiyu Li, Christophe Caloz
|
Generalized FDTD Scheme for Moving Electromagnetic Structures with
Arbitrary Space-Time Configurations
|
13 pages, 9 figures
| null | null | null |
cs.CE physics.class-ph physics.comp-ph physics.optics
|
http://creativecommons.org/licenses/by/4.0/
|
We present a generalized FDTD scheme to simulate moving electromagnetic
structures with arbitrary space-time configurations. This scheme is a local
adaptation and 2+1-dimensional extension of the uniform and 1+1-dimensional
scheme recently reported in [1]. The local adaptation, which is allowed by the
inherently matched nature of the generalized Yee cell to the conventional Yee
cell, extends the range of applicability of the scheme in [1] to moving
structures that involve multiple and arbitrary velocity profiles while being
fully compatible with conventional absorbing boundary conditions and standard
treatments of medium dispersion. We show that a direct application of the
conventional FDTD scheme predicts qualitatively correct spectral transitions
but quantitatively erroneous scattering amplitudes; we infer from this
observation generalized, hybrid fields, both physical and auxiliary
(non-physical), that automatically satisfy moving boundary conditions in the
laboratory frame, and accordingly establish local update equations based on
the related Maxwell's equations and constitutive relations. We finally
validate and illustrate the proposed method with three canonical examples, a
space-time interface, a space-time wedge and a space-time accelerated
interface, whose combination represents arbitrary space-time configurations.
The proposed scheme
fills an important gap in the open literature on computational electromagnetics
and offers an unprecedented, direct solution for moving structures in
commercial software platforms.
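For reference, the conventional scheme that the paper generalizes reduces, in one dimension and normalized units, to the classic leapfrog Yee update below. Moving media, hybrid physical/auxiliary fields, and absorbing boundaries (the paper's actual contributions) are deliberately omitted:

```python
import numpy as np

nx, nt = 200, 500
c, dz, dt = 1.0, 1.0, 0.5        # Courant number c*dt/dz = 0.5 (stable)
ex = np.zeros(nx)                # E-field on integer grid points
hy = np.zeros(nx - 1)            # H-field staggered by half a cell

for n in range(nt):
    hy += (c * dt / dz) * np.diff(ex)        # update H from the curl of E
    ex[1:-1] += (c * dt / dz) * np.diff(hy)  # update E from the curl of H
    ex[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
```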
|
[
{
"version": "v1",
"created": "Thu, 8 Jun 2023 10:02:10 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Bahrami",
"Amir",
""
],
[
"Deck-Léger",
"Zoé-Lise",
""
],
[
"Li",
"Zhiyu",
""
],
[
"Caloz",
"Christophe",
""
]
] |
new_dataset
| 0.998792 |
2306.10049
|
Tom Kennes
|
Tom Kennes
|
Measuring IT Carbon Footprint: What is the Current Status Actually?
|
16 pages, no figures
| null | null | null |
cs.SE cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Despite the new Corporate Sustainability Reporting Directive from the
European Union, which presses large enterprises to be more transparent about
their GHG emissions, and though large technology- or advisory firms might
peddle otherwise, there are plenty of challenges ahead when it comes to
measuring GHG emissions from IT activities in the first place. This paper
groups those challenges into four categories and explains the current status,
shortcomings, and potential future research directions. These categories are:
measuring software energy consumption, server overhead energy consumption,
Energy Mix, and emissions from embodied carbon. In addition, various non-profit
and open-source initiatives are introduced as well as a mathematical framework,
based on CPU consumption, that can act as a rule-of-thumb for quick and
effortless assessments.
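In the spirit of the CPU-based rule-of-thumb mentioned above, a back-of-the-envelope estimate can be written in a few lines. The formula shape and all default constants here are assumptions for illustration, not the paper's exact framework:

```python
def estimate_emissions_kg(cpu_hours, avg_utilization=0.5, tdp_watts=100,
                          pue=1.5, grid_kgco2_per_kwh=0.4):
    """CO2-equivalent estimate for a CPU-bound workload.

    energy (kWh) = utilization * TDP * hours * PUE / 1000
    emissions    = energy * grid carbon intensity
    PUE folds in server/datacenter overhead; the grid intensity factor
    stands in for the Energy Mix category discussed above.
    """
    energy_kwh = avg_utilization * tdp_watts * cpu_hours * pue / 1000.0
    return energy_kwh * grid_kgco2_per_kwh

# e.g. 1000 CPU-hours at 50% load on a 100 W part:
print(estimate_emissions_kg(1000))   # -> 30.0 kg CO2e
```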
|
[
{
"version": "v1",
"created": "Mon, 12 Jun 2023 13:56:58 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Kennes",
"Tom",
""
]
] |
new_dataset
| 0.998142 |
2306.10053
|
Seonmi Kim
|
Seonmi Kim, Youngbin Lee, Yejin Kim, Joohwan Hong, and Yongjae Lee
|
NFTs to MARS: Multi-Attention Recommender System for NFTs
| null | null | null | null |
cs.IR cs.AI econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recommender systems have become essential tools for enhancing user
experiences across various domains. While extensive research has been conducted
on recommender systems for movies, music, and e-commerce, the rapidly growing
and economically significant Non-Fungible Token (NFT) market remains
underexplored. The unique characteristics and increasing prominence of the NFT
market highlight the importance of developing tailored recommender systems to
cater to its specific needs and unlock its full potential. In this paper, we
examine the distinctive characteristics of NFTs and propose the first
recommender system specifically designed to address NFT market challenges.
Specifically, we develop a Multi-Attention Recommender System for NFTs (NFT-MARS)
with three key characteristics: (1) graph attention to handle sparse user-item
interactions, (2) multi-modal attention to incorporate feature preference of
users, and (3) multi-task learning to consider the dual nature of NFTs as both
artwork and financial assets. We demonstrate the effectiveness of NFT-MARS
compared to various baseline models using the actual transaction data of NFTs
collected directly from the blockchain for four of the most popular NFT
collections. The source code and data are available at
https://anonymous.4open.science/r/RecSys2023-93ED.
|
[
{
"version": "v1",
"created": "Tue, 13 Jun 2023 11:53:24 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Kim",
"Seonmi",
""
],
[
"Lee",
"Youngbin",
""
],
[
"Kim",
"Yejin",
""
],
[
"Hong",
"Joohwan",
""
],
[
"Lee",
"Yongjae",
""
]
] |
new_dataset
| 0.95947 |
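The dual artwork/financial-asset objective in the NFT-MARS abstract can be pictured as a weighted multi-task loss. The PyTorch sketch below is an illustrative assumption (a BPR-style ranking term plus a price-regression term with a hypothetical weight `alpha`), not the authors' implementation.

```python
# Minimal multi-task loss sketch: user preference + financial signal.
# Illustrative assumption only; not the NFT-MARS implementation.
import torch
import torch.nn.functional as F

def multi_task_loss(pos_scores, neg_scores, price_pred, price_target, alpha=0.5):
    # BPR-style ranking loss for sparse user-item (NFT) interactions.
    rank_loss = -F.logsigmoid(pos_scores - neg_scores).mean()
    # Regression loss on a financial target (e.g., future price movement).
    fin_loss = F.mse_loss(price_pred, price_target)
    # alpha trades off the artwork vs. financial-asset views of an NFT.
    return alpha * rank_loss + (1.0 - alpha) * fin_loss

# Example usage with random tensors standing in for model outputs.
pos, neg = torch.randn(32), torch.randn(32)
pred, target = torch.randn(32), torch.randn(32)
print(multi_task_loss(pos, neg, pred, target).item())
```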
2306.10079
|
Yang Jingsong
|
Jingsong Yang, Guanzhou Han, Deqing Yang, Jingping Liu, Yanghua Xiao,
Xiang Xu, Baohua Wu, Shenghua Ni
|
M3PT: A Multi-Modal Model for POI Tagging
|
Accepted by KDD 2023
| null |
10.1145/3580305.3599862
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
POI tagging aims to annotate a point of interest (POI) with some informative
tags, which facilitates many services related to POIs, including search,
recommendation, and so on. Most of the existing solutions neglect the
significance of POI images and seldom fuse the textual and visual features of
POIs, resulting in suboptimal tagging performance. In this paper, we propose a
novel Multi-Modal Model for POI Tagging, namely M3PT, which achieves enhanced
POI tagging through fusing the target POI's textual and visual features, and
the precise matching between the multi-modal representations. Specifically, we
first devise a domain-adaptive image encoder (DIE) to obtain the image
embeddings aligned to their gold tags' semantics. Then, in M3PT's text-image
fusion module (TIF), the textual and visual representations are fully fused
into the POIs' content embeddings for the subsequent matching. In addition, we
adopt a contrastive learning strategy to further bridge the gap between the
representations of different modalities. To evaluate the tagging models'
performance, we have constructed two high-quality POI tagging datasets from the
real-world business scenario of Ali Fliggy. On these datasets, we conducted
extensive experiments to demonstrate our model's advantage over uni-modal and
multi-modal baselines, and to verify the effectiveness of the important
components in M3PT, including DIE, TIF and the contrastive learning strategy.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 05:46:27 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Yang",
"Jingsong",
""
],
[
"Han",
"Guanzhou",
""
],
[
"Yang",
"Deqing",
""
],
[
"Liu",
"Jingping",
""
],
[
"Xiao",
"Yanghua",
""
],
[
"Xu",
"Xiang",
""
],
[
"Wu",
"Baohua",
""
],
[
"Ni",
"Shenghua",
""
]
] |
new_dataset
| 0.993815 |
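The contrastive strategy for bridging modalities mentioned in the M3PT abstract is commonly instantiated as a symmetric CLIP-style InfoNCE loss that pulls matched text and image embeddings together. The sketch below is that standard formulation under assumed embedding sizes, not the paper's exact objective.

```python
# Generic symmetric InfoNCE loss for text-image alignment (illustrative only).
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb, image_emb, temperature=0.07):
    # L2-normalize so dot products become cosine similarities.
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.t() / temperature
    # Matched text-image pairs lie on the diagonal of the similarity matrix.
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```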
2306.10087
|
Lukas Rauch
|
Lukas Rauch, Matthias A{\ss}enmacher, Denis Huseljic, Moritz Wirth,
Bernd Bischl, Bernhard Sick
|
ActiveGLAE: A Benchmark for Deep Active Learning with Transformers
|
Accepted @ ECML PKDD 2023. This is the author's version of the work.
The definitive Version of Record will be published in the Proceedings of ECML
PKDD 2023
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep active learning (DAL) seeks to reduce annotation costs by enabling the
model to actively query instance annotations from which it expects to learn the
most. Despite extensive research, there is currently no standardized evaluation
protocol for transformer-based language models in the field of DAL. Diverse
experimental settings lead to difficulties in comparing research and deriving
recommendations for practitioners. To tackle this challenge, we propose the
ActiveGLAE benchmark, a comprehensive collection of data sets and evaluation
guidelines for assessing DAL. Our benchmark aims to facilitate and streamline
the evaluation process of novel DAL strategies. Additionally, we provide an
extensive overview of current practice in DAL with transformer-based language
models. We identify three key challenges - data set selection, model training,
and DAL settings - that pose difficulties in comparing query strategies. We
establish baseline results through an extensive set of experiments as a
reference point for evaluating future work. Based on our findings, we provide
guidelines for researchers and practitioners.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 13:07:29 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Rauch",
"Lukas",
""
],
[
"Aßenmacher",
"Matthias",
""
],
[
"Huseljic",
"Denis",
""
],
[
"Wirth",
"Moritz",
""
],
[
"Bischl",
"Bernd",
""
],
[
"Sick",
"Bernhard",
""
]
] |
new_dataset
| 0.982788 |
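For readers unfamiliar with the DAL setting that ActiveGLAE benchmarks, a minimal query loop with entropy-based uncertainty sampling looks roughly like the following. The sklearn-style model interface, budget, and round count are placeholder assumptions, not the benchmark's protocol.

```python
# Minimal deep active learning loop with entropy-based uncertainty sampling.
# Placeholder model/data interface; not the ActiveGLAE protocol itself.
import numpy as np

def entropy(probs):
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def active_learning_loop(model, X_pool, y_pool, X_init, y_init,
                         rounds=5, query_size=100):
    X_lab, y_lab = X_init, y_init
    pool_idx = np.arange(len(X_pool))
    for _ in range(rounds):
        model.fit(X_lab, y_lab)                        # retrain on labeled set
        probs = model.predict_proba(X_pool[pool_idx])  # score the unlabeled pool
        # Query the instances the model is least certain about.
        query = pool_idx[np.argsort(-entropy(probs))[:query_size]]
        X_lab = np.concatenate([X_lab, X_pool[query]])
        y_lab = np.concatenate([y_lab, y_pool[query]])  # oracle provides labels
        pool_idx = np.setdiff1d(pool_idx, query)
    return model
```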
2306.10091
|
Kayu\~a Oleques Paim M.Sc.
|
Kayu\~a Oleques Paim and Ricardo Rohweder and Mariana
Recamonde-Mendoza and Rodrigo Brand\~ao Mansilha and Weverton Cordeiro
|
Acoustic Identification of Ae. aegypti Mosquitoes using Smartphone Apps
and Residual Convolutional Neural Networks
| null | null | null | null |
cs.SD cs.AI cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we advocate in favor of smartphone apps as a low-cost,
easy-to-deploy solution for raising awareness among the population about the
proliferation of Aedes aegypti mosquitoes. Nevertheless, devising such a
smartphone app is challenging, for many reasons, including the required
maturity level of techniques for identifying mosquitoes based on features that
can be captured using smartphone resources. In this paper, we identify a set of
(non-exhaustive) requirements that smartphone apps must meet to become an
effective tool in the fight against Ae. aegypti, and advance the
state-of-the-art with (i) a residual convolutional neural network for
classifying Ae. aegypti mosquitoes from their wingbeat sound, (ii) a
methodology for reducing the influence of background noise in the
classification process, and (iii) a dataset for benchmarking solutions for
detecting Ae. aegypti mosquitoes from wingbeat sound recordings. From the
analysis of accuracy and recall, we provide evidence that convolutional neural
networks have potential as a cornerstone of smartphone apps for tracking
mosquitoes.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 13:41:01 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Paim",
"Kayuã Oleques",
""
],
[
"Rohweder",
"Ricardo",
""
],
[
"Recamonde-Mendoza",
"Mariana",
""
],
[
"Mansilha",
"Rodrigo Brandão",
""
],
[
"Cordeiro",
"Weverton",
""
]
] |
new_dataset
| 0.966696 |
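As a sketch of the "residual convolutional neural network" ingredient in the abstract above, here is a standard 1-D residual block over audio feature maps in PyTorch. The channel count, kernel size, and input shape are assumptions for illustration, not the paper's architecture.

```python
# Standard 1-D residual block for audio feature maps (illustrative choices).
import torch
import torch.nn as nn

class ResidualBlock1d(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The skip connection lets gradients bypass the convolutional body.
        return self.act(x + self.body(x))

# E.g., a batch of 4 clips, 64 feature channels, 200 time steps.
print(ResidualBlock1d(64)(torch.randn(4, 64, 200)).shape)
```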
2306.10095
|
Gengchen Mai
|
Haixing Dai, Yiwei Li, Zhengliang Liu, Lin Zhao, Zihao Wu, Suhang
Song, Ye Shen, Dajiang Zhu, Xiang Li, Sheng Li, Xiaobai Yao, Lu Shi,
Quanzheng Li, Zhuo Chen, Donglan Zhang, Gengchen Mai, Tianming Liu
|
AD-AutoGPT: An Autonomous GPT for Alzheimer's Disease Infodemiology
|
20 pages, 4 figures
| null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In this pioneering study, inspired by AutoGPT, the state-of-the-art
open-source application based on the GPT-4 large language model, we develop a
novel tool called AD-AutoGPT which can conduct data collection, processing, and
analysis about complex health narratives of Alzheimer's Disease in an
autonomous manner via users' textual prompts. We collated comprehensive data
from a variety of news sources, including the Alzheimer's Association, BBC,
Mayo Clinic, and the National Institute on Aging since June 2022, leading to
the autonomous execution of robust trend analyses, intertopic distance map
visualizations, and identification of salient terms pertinent to Alzheimer's
Disease. This approach has yielded not only a quantifiable metric of relevant
discourse but also valuable insights into public focus on Alzheimer's Disease.
This application of AD-AutoGPT in public health signifies the transformative
potential of AI in facilitating a data-rich understanding of complex health
narratives like Alzheimer's Disease in an autonomous manner, setting the
groundwork for future AI-driven investigations in global health landscapes.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 16:35:59 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Dai",
"Haixing",
""
],
[
"Li",
"Yiwei",
""
],
[
"Liu",
"Zhengliang",
""
],
[
"Zhao",
"Lin",
""
],
[
"Wu",
"Zihao",
""
],
[
"Song",
"Suhang",
""
],
[
"Shen",
"Ye",
""
],
[
"Zhu",
"Dajiang",
""
],
[
"Li",
"Xiang",
""
],
[
"Li",
"Sheng",
""
],
[
"Yao",
"Xiaobai",
""
],
[
"Shi",
"Lu",
""
],
[
"Li",
"Quanzheng",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Zhang",
"Donglan",
""
],
[
"Mai",
"Gengchen",
""
],
[
"Liu",
"Tianming",
""
]
] |
new_dataset
| 0.997775 |
2306.10149
|
Rikke Bjerg Jensen
|
Jessica McClearn, Rikke Bjerg Jensen, Reem Talhouk
|
Othered, Silenced and Scapegoated: Understanding the Situated Security
of Marginalised Populations in Lebanon
|
To appear at the USENIX Security Symposium 2023
| null | null | null |
cs.CY cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we explore the digital security experiences of marginalised
populations in Lebanon such as LGBTQI+ identifying people, refugees and women.
We situate our work in the post-conflict Lebanese context, which is shaped by
sectarian divides, failing governance and economic collapse. We do so through
an ethnographically informed study conducted in Beirut, Lebanon, in July 2022
and through interviews with 13 people with Lebanese digital and human rights
expertise. Our research highlights how LGBTQI+ identifying people and refugees
are scapegoated for the failings of the Lebanese government, while women who
speak out against such failings are silenced. We show how government-supported
incitements of violence aimed at transferring blame from the political
leadership to these groups lead to amplified digital security risks for already
at-risk populations. Positioning our work in broader sociological
understandings of security, we discuss how the Lebanese context impacts
identity and ontological security. We conclude by proposing to design for and
with positive security in post-conflict settings.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 19:36:39 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"McClearn",
"Jessica",
""
],
[
"Jensen",
"Rikke Bjerg",
""
],
[
"Talhouk",
"Reem",
""
]
] |
new_dataset
| 0.978846 |
2306.10241
|
Jiaan Wang
|
Jiaan Wang, Jianfeng Qu, Yunlong Liang, Zhixu Li, An Liu, Guanfeng
Liu, Xin Zheng
|
Snowman: A Million-scale Chinese Commonsense Knowledge Graph Distilled
from Foundation Model
|
tech report
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constructing commonsense knowledge graphs (CKGs) has attracted wide research
attention due to its significant importance in cognitive intelligence.
Nevertheless, existing CKGs are typically oriented to English, limiting the
research in non-English languages. Meanwhile, the emergence of foundation
models like ChatGPT and GPT-4 has shown promising intelligence with the help of
reinforcement learning from human feedback. Against this background, in this
paper, we utilize foundation models to construct a Chinese CKG, named Snowman.
Specifically, we distill different types of commonsense head items from
ChatGPT, and continue to use it to collect tail items with respect to the head
items and pre-defined relations. Based on a preliminary analysis, we find that
the negative commonsense knowledge distilled by ChatGPT achieves lower human
acceptance than other knowledge. Therefore, we design a simple yet
effective self-instruct filtering strategy to filter out invalid negative
commonsense. Overall, the constructed Snowman covers more than ten million
Chinese commonsense triples, making it the largest Chinese CKG. Moreover, human
studies show the acceptance of Snowman achieves 90.6\%, indicating the
high-quality triples distilled by the cutting-edge foundation model. We also
conduct experiments on commonsense knowledge models to show the usability and
effectiveness of our Snowman.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 02:51:33 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Wang",
"Jiaan",
""
],
[
"Qu",
"Jianfeng",
""
],
[
"Liang",
"Yunlong",
""
],
[
"Li",
"Zhixu",
""
],
[
"Liu",
"An",
""
],
[
"Liu",
"Guanfeng",
""
],
[
"Zheng",
"Xin",
""
]
] |
new_dataset
| 0.985107 |
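The self-instruct filtering step described in the Snowman abstract can be pictured as asking the model to verify its own distilled negative triples. The sketch below uses a hypothetical `call_llm` helper and an invented prompt format; no specific provider API or the paper's actual prompts are implied.

```python
# Illustrative self-instruct filter for distilled negative commonsense triples.
# `call_llm` is a hypothetical helper; the prompt format is invented here.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Plug in an LLM client of your choice.")

def self_instruct_filter(triples):
    """Keep only the negative triples the model itself judges valid."""
    kept = []
    for head, relation, tail in triples:
        prompt = (
            "Judge whether the following negative commonsense statement is "
            f"valid. Answer yes or no.\nStatement: {head} {relation} {tail}"
        )
        answer = call_llm(prompt).strip().lower()
        if answer.startswith("yes"):
            kept.append((head, relation, tail))
    return kept
```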
2306.10276
|
Hari Krishna Hari Prasad
|
Hari Krishna Hari Prasad, Ross L. Hatton and Kaushik Jayaram
|
Geometric Mechanics of Contact-Switching Systems
|
6 pages, 7 figures, and link to associated video:
https://drive.google.com/file/d/12Sgl0R1oDLDWRrqlwwAt3JR2Gc3rEB4T/view?usp=sharing
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Discrete and periodic contact switching is a key characteristic of steady
state legged locomotion. This paper introduces a framework for modeling and
analyzing this contact-switching behavior through the lens of geometric
mechanics, using a toy robot model that can make continuous limb swings and
discrete contact switches. The kinematics of this model form a hybrid shape
space, and by extending the generalized Stokes' theorem to compute discrete
curvature functions called stratified panels, we determine the average locomotion
generated by gaits spanning multiple contact modes. Using this tool, we also
demonstrate the ability to optimize gaits based on the system's locomotion
constraints, and perform gait reduction on a complex gait spanning multiple
contact modes to highlight the scalability to multilegged systems.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 06:51:04 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Prasad",
"Hari Krishna Hari",
""
],
[
"Hatton",
"Ross L.",
""
],
[
"Jayaram",
"Kaushik",
""
]
] |
new_dataset
| 0.960981 |
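For context, the Stokes'-theorem step the abstract above builds on is, in standard geometric mechanics, the approximation of net displacement over a closed gait by a surface integral of the curvature of the local connection. A textbook-style statement is sketched below; the notation is assumed (commutative case) and does not capture the paper's stratified-panel extension across contact modes.

```latex
% Textbook geometric-mechanics relation (notation assumed, commutative case):
% shape r, closed gait \phi, local connection A(r), net displacement \Delta g.
% Body velocity is related to shape velocity by the local connection:
\xi = -A(r)\,\dot{r}, \qquad
% and the net displacement over one gait cycle is approximated, via Stokes'
% theorem, by the curvature of A integrated over the enclosed surface S(\phi):
\Delta g \;\approx\; \oint_{\phi} -A(r)\,\mathrm{d}r
        \;=\; \iint_{S(\phi)} -\mathrm{d}A(r)
```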
2306.10280
|
Zhiyao Zhou
|
Zhiyao Zhou, Sheng Zhou, Bochao Mao, Xuanyi Zhou, Jiawei Chen, Qiaoyu
Tan, Daochen Zha, Can Wang, Yan Feng, Chun Chen
|
OpenGSL: A Comprehensive Benchmark for Graph Structure Learning
|
9 pages, 4 figures
| null | null | null |
cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Graph Neural Networks (GNNs) have emerged as the de facto standard for
representation learning on graphs, owing to their ability to effectively
integrate graph topology and node attributes. However, the inherent suboptimal
nature of node connections, resulting from the complex and contingent formation
process of graphs, presents significant challenges in modeling them
effectively. To tackle this issue, Graph Structure Learning (GSL), a family of
data-centric learning approaches, has garnered substantial attention in recent
years. The core concept behind GSL is to jointly optimize the graph structure
and the corresponding GNN models. Despite the proposal of numerous GSL methods,
the progress in this field remains unclear due to inconsistent experimental
protocols, including variations in datasets, data processing techniques, and
splitting strategies. In this paper, we introduce OpenGSL, the first
comprehensive benchmark for GSL, aimed at addressing this gap. OpenGSL enables
a fair comparison among state-of-the-art GSL methods by evaluating them across
various popular datasets using uniform data processing and splitting
strategies. Through extensive experiments, we observe that existing GSL methods
do not consistently outperform vanilla GNN counterparts. However, we do observe
that the learned graph structure demonstrates a strong generalization ability
across different GNN backbones, despite its high computational and space
requirements. We hope that our open-sourced library will facilitate rapid and
equitable evaluation and inspire further innovative research in the field of
GSL. The code of the benchmark can be found in
https://github.com/OpenGSL/OpenGSL.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 07:22:25 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Zhou",
"Zhiyao",
""
],
[
"Zhou",
"Sheng",
""
],
[
"Mao",
"Bochao",
""
],
[
"Zhou",
"Xuanyi",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Tan",
"Qiaoyu",
""
],
[
"Zha",
"Daochen",
""
],
[
"Wang",
"Can",
""
],
[
"Feng",
"Yan",
""
],
[
"Chen",
"Chun",
""
]
] |
new_dataset
| 0.989461 |
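The "jointly optimize the graph structure and the corresponding GNN" idea at the core of GSL can be sketched generically by making the adjacency itself a learnable parameter. This simplified PyTorch illustration does not use the OpenGSL library's actual API; all dimensions are arbitrary.

```python
# Generic GSL sketch: learn a dense adjacency jointly with a 1-layer GNN.
# Simplified illustration; not the OpenGSL API.
import torch
import torch.nn as nn

class JointGSL(nn.Module):
    def __init__(self, n_nodes, in_dim, n_classes):
        super().__init__()
        # Learnable logits over all node pairs define the graph structure.
        self.adj_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.lin = nn.Linear(in_dim, n_classes)

    def forward(self, x):
        adj = torch.sigmoid(self.adj_logits)          # soft adjacency in [0, 1]
        adj = 0.5 * (adj + adj.t())                   # enforce symmetry
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return self.lin((adj / deg) @ x)              # row-normalized aggregation

model = JointGSL(n_nodes=100, in_dim=16, n_classes=7)
logits = model(torch.randn(100, 16))
# Both adj_logits and the GNN weights receive gradients from the task loss.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 7, (100,)))
loss.backward()
```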
2306.10293
|
Yu-Hsi Chen
|
Yu-Hsi Chen
|
A New Perspective for Shuttlecock Hitting Event Detection
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This article introduces a novel approach to shuttlecock hitting event
detection. Instead of depending on generic methods, we capture the hitting
action of players by reasoning over a sequence of images. To learn the features
of hitting events in a video clip, we specifically utilize a deep learning
model known as SwingNet. This model is designed to capture the relevant
characteristics and patterns associated with the act of hitting in badminton.
By training SwingNet on the provided video clips, we aim to enable the model to
accurately recognize and identify the instances of hitting events based on
their distinctive features. Furthermore, we apply a specific video processing
technique to extract prior features from the video, which significantly
reduces the learning difficulty for the model. The proposed method not only
provides an intuitive and user-friendly approach but also presents a fresh
perspective on the task of detecting badminton hitting events. The source code
will be available at
https://github.com/TW-yuhsi/A-New-Perspective-for-Shuttlecock-Hitting-Event-Detection.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 08:34:53 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Chen",
"Yu-Hsi",
""
]
] |
new_dataset
| 0.985291 |
2306.10324
|
Shaoshan Liu
|
Tim Tianyi Yang, Tom Tianze Yang, Na An, Ao Kong, Shaoshan Liu, and
Steve Xue Liu
|
AI Clinics on Mobile (AICOM): Universal AI Doctors for the Underserved
and Hard-to-Reach
| null | null | null | null |
cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces Artificial Intelligence Clinics on Mobile (AICOM), an
open-source project devoted to answering the United Nations Sustainable
Development Goal 3 (SDG3) on health, which represents a universal recognition
that health is fundamental to human capital and social and economic
development. The core motivation for the AICOM project is the fact that over
80% of the people in the least developed countries (LDCs) own a mobile phone,
even though less than 40% of these people have internet access. Hence, enabling
AI-based disease diagnostics and screening capabilities on affordable
mobile phones without connectivity will be a critical first step toward addressing
healthcare access problems. The technologies developed in the AICOM project
achieve exactly this goal, and we have demonstrated the effectiveness of AICOM
on monkeypox screening tasks. We plan to continue expanding and open-sourcing
the AICOM platform, aiming for it to evolve into a universal AI doctor for the
Underserved and Hard-to-Reach.
|
[
{
"version": "v1",
"created": "Sat, 17 Jun 2023 11:59:03 GMT"
}
] | 2023-06-21T00:00:00 |
[
[
"Yang",
"Tim Tianyi",
""
],
[
"Yang",
"Tom Tianze",
""
],
[
"An",
"Na",
""
],
[
"Kong",
"Ao",
""
],
[
"Liu",
"Shaoshan",
""
],
[
"Liu",
"Steve Xue",
""
]
] |
new_dataset
| 0.999058 |
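Running a screening model fully offline on a phone, the capability the AICOM abstract targets, typically means packaging it in an on-device format such as TensorFlow Lite. The snippet below shows the standard TFLite interpreter flow with a placeholder model path and dummy input; it is an assumed deployment pattern, not an AICOM artifact.

```python
# Standard TensorFlow Lite on-device inference flow (placeholder model path).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="screening_model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A dummy image standing in for a phone-camera capture.
image = np.random.rand(*inp["shape"]).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()                      # runs entirely on-device, no network
print(interpreter.get_tensor(out["index"]))
```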