id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
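The rows below can be handled programmatically; here is a minimal sketch of how one might represent a parsed record, using the column schema above and the field values of the first record (arXiv 2209.13423) as the example. The `ArxivRecord` class and its construction are illustrative assumptions, not part of the dataset itself.

```python
# Illustrative sketch: one row of this arXiv-metadata dump as a typed record.
# Field names follow the column schema above; values are copied from the
# first record below (arXiv 2209.13423). The class itself is hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArxivRecord:
    id: str                    # 9-10 chars, e.g. "2209.13423"
    submitter: Optional[str]   # nullable per the schema
    authors: str
    title: str
    categories: str            # space-separated arXiv categories
    license: Optional[str]
    prediction: str            # single class in this dump: "new_dataset"
    probability: float         # float64 in [0.95, 1]

rec = ArxivRecord(
    id="2209.13423",
    submitter="Gianluca De Marco",
    authors="Gianluca De Marco and Dariusz R. Kowalski and Grzegorz Stachowiak",
    title="Deterministic non-adaptive contention resolution on a shared channel",
    categories="cs.IT cs.DC math.IT",
    license="http://creativecommons.org/licenses/by/4.0/",
    prediction="new_dataset",
    probability=0.98931,
)

# Every record in this dump satisfies the schema's probability range:
assert 0.95 <= rec.probability <= 1.0
print(rec.categories.split())  # ['cs.IT', 'cs.DC', 'math.IT']
```

Splitting `categories` on whitespace, as above, recovers the individual arXiv category codes for filtering.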
2209.13423
|
Gianluca De Marco
|
Gianluca De Marco and Dariusz R. Kowalski and Grzegorz Stachowiak
|
Deterministic non-adaptive contention resolution on a shared channel
| null | null | null | null |
cs.IT cs.DC math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In a multiple access channel, autonomous stations are able to transmit and
listen to a shared device. A fundamental problem, called \textit{contention
resolution}, is to allow any station to successfully deliver its message by
resolving the conflicts that arise when several stations transmit
simultaneously. Despite a long history of work on this problem, most results
deal with the static setting in which all stations start simultaneously, while many
fundamental questions remain open in the realistic scenario in which stations can
join the channel at arbitrary times.
In this paper, we explore the impact that three major channel features
(asynchrony among stations, knowledge of the number of contenders and
possibility of switching off stations after a successful transmission) can have
on the time complexity of non-adaptive deterministic algorithms. We establish
upper and lower bounds that allow us to understand which parameters permit
time-efficient contention resolution and which do not.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 14:26:00 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2022 15:45:51 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"De Marco",
"Gianluca",
""
],
[
"Kowalski",
"Dariusz R.",
""
],
[
"Stachowiak",
"Grzegorz",
""
]
] |
new_dataset
| 0.98931 |
2210.00026
|
Jacob King
|
Jacob King, William Ryan, Richard D. Wesel
|
CRC-Aided Short Convolutional Codes and RCU Bounds for Orthogonal
Signaling
|
GLOBECOM 2022 camera-ready version
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We extend earlier work on the design of convolutional code-specific CRC codes
to $Q$-ary alphabets, with an eye toward $Q$-ary orthogonal signaling. Starting
with distance-spectrum optimal, zero-terminated, $Q$-ary convolutional codes,
we design $Q$-ary CRC codes so that the CRC/convolutional concatenation is
distance-spectrum optimal. The $Q$-ary code symbols are mapped to a $Q$-ary
orthogonal signal set and sent over an AWGN channel with noncoherent reception.
We focus on $Q = 4$, rate-1/2 convolutional codes in our designs. The random
coding union bound and normal approximation are used in earlier works as
benchmarks for performance for distance-spectrum optimal convolutional codes.
We derive a saddlepoint approximation of the random coding union bound for the
coded noncoherent signaling channel, as well as a normal approximation for this
channel, and compare the performance of our codes to these limits. Our best
design is within $0.6$ dB of the RCU bound at a frame error rate of $10^{-4}$.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 18:03:45 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"King",
"Jacob",
""
],
[
"Ryan",
"William",
""
],
[
"Wesel",
"Richard D.",
""
]
] |
new_dataset
| 0.990138 |
2210.00058
|
Gino Chacon
|
Gino A. Chacon, Charles Williams, Johann Knechtel, Ozgur Sinanoglu,
Paul V. Gratz
|
Hardware Trojan Threats to Cache Coherence in Modern 2.5D Chiplet
Systems
| null | null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
As industry moves toward chiplet-based designs, the insertion of hardware
Trojans poses a significant threat to the security of these systems. These
systems rely heavily on cache coherence for coherent data communication, making
coherence an attractive target. Critically, unlike prior work, which focuses
only on malicious packet modifications, a Trojan attack that exploits coherence
can modify data in memory that was never touched and is not owned by the
chiplet which contains the Trojan. Further, the Trojan need not even be
physically between the victim and the memory controller to attack the victim's
memory transactions. Here, we explore the fundamental attack vectors possible
in chiplet-based systems and provide an example Trojan implementation capable
of directly modifying victim data in memory. This work aims to highlight the
need for developing mechanisms that can protect and secure the coherence scheme
from these forms of attacks.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 19:45:04 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Chacon",
"Gino A.",
""
],
[
"Williams",
"Charles",
""
],
[
"Knechtel",
"Johann",
""
],
[
"Sinanoglu",
"Ozgur",
""
],
[
"Gratz",
"Paul V.",
""
]
] |
new_dataset
| 0.995473 |
2210.00087
|
Junhyung Lee
|
Junhyung Lee, Junho Koh, Youngwoo Lee, Jun Won Choi
|
D-Align: Dual Query Co-attention Network for 3D Object Detection Based
on Multi-frame Point Cloud Sequence
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR sensors are widely used for 3D object detection in various mobile
robotics applications. LiDAR sensors continuously generate point cloud data in
real-time. Conventional 3D object detectors detect objects using a set of
points acquired over a fixed duration. However, recent studies have shown that
the performance of object detection can be further enhanced by utilizing
spatio-temporal information obtained from point cloud sequences. In this paper,
we propose a new 3D object detector, named D-Align, which can effectively
produce strong bird's-eye-view (BEV) features by aligning and aggregating the
features obtained from a sequence of point sets. The proposed method includes a
novel dual-query co-attention network that uses two types of queries, including
target query set (T-QS) and support query set (S-QS), to update the features of
target and support frames, respectively. D-Align aligns S-QS to T-QS based on
the temporal context features extracted from the adjacent feature maps and then
aggregates S-QS with T-QS using a gated attention mechanism. The dual queries
are updated through multiple attention layers to progressively enhance the
target frame features used to produce the detection results. Our experiments on
the nuScenes dataset show that the proposed D-Align method greatly improved the
performance of a single frame-based baseline method and significantly
outperformed the latest 3D object detectors.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 20:41:25 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Lee",
"Junhyung",
""
],
[
"Koh",
"Junho",
""
],
[
"Lee",
"Youngwoo",
""
],
[
"Choi",
"Jun Won",
""
]
] |
new_dataset
| 0.999333 |
2210.00121
|
Andrea Sipos
|
Yizhou Chen, Andrea Sipos, Mark Van der Merwe, Nima Fazeli
|
Visuo-Tactile Transformers for Manipulation
|
Accepted to CoRL 2022
| null | null | null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Learning representations in the joint domain of vision and touch can improve
manipulation dexterity, robustness, and sample-complexity by exploiting mutual
information and complementary cues. Here, we present Visuo-Tactile Transformers
(VTTs), a novel multimodal representation learning approach suited for
model-based reinforcement learning and planning. Our approach extends the
Visual Transformer \cite{dosovitskiy2021image} to handle visuo-tactile
feedback. Specifically, VTT uses tactile feedback together with self and
cross-modal attention to build latent heatmap representations that focus
attention on important task features in the visual domain. We demonstrate the
efficacy of VTT for representation learning with a comparative evaluation
against baselines on four simulated robot tasks and one real world block
pushing task. We conduct an ablation study over the components of VTT to
highlight the importance of cross-modality in representation learning.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 22:38:29 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Chen",
"Yizhou",
""
],
[
"Sipos",
"Andrea",
""
],
[
"Van der Merwe",
"Mark",
""
],
[
"Fazeli",
"Nima",
""
]
] |
new_dataset
| 0.959802 |
2210.00128
|
Andrea Araldo
|
Amirhesam Badeanlou, Andrea Araldo, Marco Diana, Vincent Gauthier
|
Equity Scores for Public Transit Lines from Open-Data and Accessibility
Measures
| null |
Transportation Research Board (TRB) 102nd Annual Meeting, 2023
| null | null |
cs.CY econ.GN q-fin.EC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current public transit suffers from an evident inequity: the level of service in
suburbs is much poorer than in city centers. As a
consequence, private cars are still the dominant transportation mode for
suburban people, which results in congestion and pollution. To achieve
sustainability goals and reduce car-dependency, transit should be (re)designed
around equity. To this aim, it is necessary to (i) quantify the "level of
equity" of the transit system and (ii) provide an indicator that scores the
transit lines that contribute the most to keep transit equitable. This
indicator could suggest on which lines the transit operator must invest to
increase the service level (frequency or coverage) in order to reduce inequity
in the system.
To the best of our knowledge, this paper is the first to tackle (ii). To this
aim, we propose efficient scoring methods that rely solely on open data, which
allows us to perform the analysis on multiple cities (7 in this paper). Our
method can be used to guide large-scale iterative optimization algorithms to
improve accessibility equity.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 22:58:11 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Badeanlou",
"Amirhesam",
""
],
[
"Araldo",
"Andrea",
""
],
[
"Diana",
"Marco",
""
],
[
"Gauthier",
"Vincent",
""
]
] |
new_dataset
| 0.974817 |
2210.00130
|
Yunuo Chen
|
Yunuo Chen, Minchen Li, Wenlong Lu, Chuyuan Fu, Chenfanfu Jiang
|
Midas: A Multi-Joint Robotics Simulator with Intersection-Free
Frictional Contact
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Midas, a robotics simulation framework based on the Incremental
Potential Contact (IPC) model. Our simulator guarantees intersection-free,
stable, and accurate resolution of frictional contact. We demonstrate the
efficacy of our framework with experimental validations on high-precision tasks
and through comparisons with Bullet physics. A reinforcement learning pipeline
using Midas is also developed and tested to perform intersection-free
peg-in-hole tasks.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 23:08:28 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Chen",
"Yunuo",
""
],
[
"Li",
"Minchen",
""
],
[
"Lu",
"Wenlong",
""
],
[
"Fu",
"Chuyuan",
""
],
[
"Jiang",
"Chenfanfu",
""
]
] |
new_dataset
| 0.998518 |
2210.00137
|
Jean-Philippe Roberge
|
Rachel Thomasson, Etienne Roberge, Mark R. Cutkosky, Jean-Philippe
Roberge
|
Going In Blind: Object Motion Classification using Distributed Tactile
Sensing for Safe Reaching in Clutter
|
This paper has been accepted to be presented at the IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS) 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic manipulators navigating cluttered shelves or cabinets may find it
challenging to avoid contact with obstacles. Indeed, rearranging obstacles may
be necessary to access a target. Rather than planning explicit motions that
place obstacles into a desired pose, we suggest allowing incidental contacts to
rearrange obstacles while monitoring contacts for safety. Bypassing object
identification, we present a method for categorizing object motions from
tactile data collected from incidental contacts with a capacitive tactile skin
on an Allegro Hand. We formalize tactile cues associated with categories of
object motion, demonstrating that they can determine with $>90\%$ accuracy
whether an object is movable and whether a contact is causing the object to
slide stably (safe contact) or tip (unsafe).
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 23:37:53 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Thomasson",
"Rachel",
""
],
[
"Roberge",
"Etienne",
""
],
[
"Cutkosky",
"Mark R.",
""
],
[
"Roberge",
"Jean-Philippe",
""
]
] |
new_dataset
| 0.975436 |
2210.00171
|
Dongyun Han
|
Dongyun Han, Donghoon Kim, Isaac Cho
|
PORTAL: Portal Widget for Remote Target Acquisition and Control in
Immersive Virtual Environments
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces PORTAL (POrtal widget for Remote Target Acquisition and
controL) that allows the user to interact with out-of-reach objects in a
virtual environment. We describe the PORTAL interaction technique for placing a
portal widget and interacting with target objects through the portal. We
conduct two formal user studies to evaluate PORTAL for selection and
manipulation functionalities. The results show PORTAL supports participants to
interact with remote objects successfully and precisely. Following that, we
discuss its potential, limitations, and future work.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 02:41:14 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Han",
"Dongyun",
""
],
[
"Kim",
"Donghoon",
""
],
[
"Cho",
"Isaac",
""
]
] |
new_dataset
| 0.999553 |
2210.00175
|
Ali AlQahtani
|
Ali Abdullah S. AlQahtani, Hosam Alamleh, Baker Al Smadi
|
Technical Report-IoT Devices Proximity Authentication In Ad Hoc Network
Environment
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Internet of Things (IoT) is a distributed communication technology system
that offers the possibility for physical devices (e.g., vehicles, home appliances,
sensors, actuators, etc.), known as Things, to connect and exchange data, most
importantly without human interaction. Since IoT plays a significant role in
our daily lives, we must secure the IoT environment for it to work effectively. Among
the various security requirements, authentication of IoT devices is
essential, as it is the first step in preventing any negative impact from possible
attackers. Using the current IEEE 802.11 infrastructure, this paper implements
an IoT device authentication scheme based on something that is in the IoT
device's environment (i.e., ambient access points). Data from broadcast
messages (i.e., beacon frame characteristics) are utilized to implement the
authentication factor that confirms proximity between two devices in an ad hoc
IoT network.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 03:07:42 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"AlQahtani",
"Ali Abdullah S.",
""
],
[
"Alamleh",
"Hosam",
""
],
[
"Smadi",
"Baker Al",
""
]
] |
new_dataset
| 0.99617 |
2210.00213
|
Manisha Dubey
|
Manisha Dubey, P.K. Srijith, Maunendra Sankar Desarkar
|
HyperHawkes: Hypernetwork based Neural Temporal Point Process
|
9 pages, 2 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The temporal point process serves as an essential tool for modeling time-to-event
data in continuous time. Despite massive amounts of event sequence
data from various domains like social media and healthcare, real-world
applications of temporal point processes face two major challenges: 1) they do not
generalize to predict events from unseen sequences in a dynamic environment, and 2)
they are not capable of thriving in a continually evolving environment with
minimal supervision while retaining previously learnt knowledge. To tackle
these issues, we propose \textit{HyperHawkes}, a hypernetwork based temporal
point process framework which is capable of modeling time of occurrence of
events for unseen sequences. Thereby, we solve the problem of zero-shot
learning for time-to-event modeling. We also develop a hypernetwork based
continually learning temporal point process for continuous modeling of
time-to-event sequences with minimal forgetting. In this way,
\textit{HyperHawkes} augments the temporal point process with zero-shot
modeling and continual learning capabilities. We demonstrate the application of
the proposed framework through our experiments on two real-world datasets. Our
results show the efficacy of the proposed approach in terms of predicting
future events under zero-shot regime for unseen event sequences. We also show
that the proposed model is able to predict sequences continually while
retaining information from previous event sequences, hence mitigating
catastrophic forgetting for time-to-event data.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 07:14:19 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Dubey",
"Manisha",
""
],
[
"Srijith",
"P. K.",
""
],
[
"Desarkar",
"Maunendra Sankar",
""
]
] |
new_dataset
| 0.965515 |
2210.00235
|
Alexander Okhotin
|
Olga Martynova, Alexander Okhotin
|
The maximum length of shortest accepted strings for
direction-determinate two-way finite automata
|
14 pages, 8 figures
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
It is shown that, for every $n \geqslant 2$, the maximum length of the
shortest string accepted by an $n$-state direction-determinate two-way finite
automaton is exactly $\binom{n}{\lfloor\frac{n}{2}\rfloor}-1$
(direction-determinate automata are those that always remember in the current
state whether the last move was to the left or to the right). For two-way
finite automata of the general form, a family of $n$-state automata with
shortest accepted strings of length $\frac{3}{4} \cdot 2^n - 1$ is constructed.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 10:00:58 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Martynova",
"Olga",
""
],
[
"Okhotin",
"Alexander",
""
]
] |
new_dataset
| 0.987546 |
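The bounds quoted in the abstract of the record above are closed-form and easy to tabulate; a quick sketch follows, with helper names (`dd_bound`, `general_bound`) of my own choosing:

```python
from math import comb

def dd_bound(n: int) -> int:
    """Maximum length of the shortest string accepted by an n-state
    direction-determinate two-way finite automaton: C(n, floor(n/2)) - 1."""
    return comb(n, n // 2) - 1

def general_bound(n: int) -> int:
    """Shortest-accepted-string length in the constructed family of
    n-state general two-way automata: (3/4) * 2^n - 1."""
    return 3 * 2**n // 4 - 1

for n in range(2, 7):
    print(n, dd_bound(n), general_bound(n))
# At n = 4: the direction-determinate bound is C(4,2) - 1 = 5, while the
# general-automata construction already reaches (3/4) * 16 - 1 = 11.
```

The central binomial coefficient grows roughly as $2^n / \sqrt{n}$, so the general construction's $\frac{3}{4} \cdot 2^n - 1$ overtakes the direction-determinate bound for every $n \geqslant 2$.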
2210.00264
|
Zerui Cheng
|
Tiancheng Xie, Jiaheng Zhang, Zerui Cheng, Fan Zhang, Yupeng Zhang,
Yongzheng Jia, Dan Boneh, Dawn Song
|
zkBridge: Trustless Cross-chain Bridges Made Practical
|
An extended version of the paper to appear in ACM CCS 2022
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchains have seen growing traction with cryptocurrencies reaching a
market cap of over 1 trillion dollars, major institutional investors taking
interests, and global impacts on governments, businesses, and individuals. Also
growing significantly is the heterogeneity of the ecosystem where a variety of
blockchains co-exist. Cross-chain bridge is a necessary building block in this
multi-chain ecosystem. Existing solutions, however, either suffer from
performance issues or rely on trust assumptions of committees that
significantly lower the security. Recurring attacks against bridges have cost
users more than 1.5 billion USD. In this paper, we introduce zkBridge, an
efficient cross-chain bridge that guarantees strong security without external
trust assumptions. With succinct proofs, zkBridge not only guarantees
correctness, but also significantly reduces on-chain verification cost. We
propose novel succinct proof protocols that are orders-of-magnitude faster than
existing solutions for workload in zkBridge. With a modular design, zkBridge
enables a broad spectrum of use cases and capabilities, including message
passing, token transferring, and other computational logic operating on state
changes from different chains. To demonstrate the practicality of zkBridge, we
implemented a prototype bridge from Cosmos to Ethereum, a particularly
challenging direction that involves large proof circuits that existing systems
cannot efficiently handle. Our evaluation shows that zkBridge achieves
practical performance: proof generation takes less than 20 seconds, while
verifying proofs on-chain costs less than 230K gas. For completeness, we also
implemented and evaluated the direction from Ethereum to other EVM-compatible
chains (such as BSC) which involves smaller circuits and incurs much less
overhead.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 12:13:03 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Xie",
"Tiancheng",
""
],
[
"Zhang",
"Jiaheng",
""
],
[
"Cheng",
"Zerui",
""
],
[
"Zhang",
"Fan",
""
],
[
"Zhang",
"Yupeng",
""
],
[
"Jia",
"Yongzheng",
""
],
[
"Boneh",
"Dan",
""
],
[
"Song",
"Dawn",
""
]
] |
new_dataset
| 0.996766 |
2210.00377
|
Spring Berman
|
Sangeet Sankaramangalam Ulhas, Aditya Ravichander, Kathryn A. Johnson,
Theodore P. Pavlic, Lance Gharavi, and Spring Berman
|
CHARTOPOLIS: A Small-Scale Labor-art-ory for Research and Reflection on
Autonomous Vehicles, Human-Robot Interaction, and Sociotechnical Imaginaries
|
Submission to 2022 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2022) Workshop on Miniature Robot Platforms for Full
Scale Autonomous Vehicle Research
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
CHARTOPOLIS is a multi-faceted sociotechnical testbed meant to aid in
building connections among engineers, psychologists, anthropologists,
ethicists, and artists. Superficially, it is an urban autonomous-vehicle
testbed that includes both a physical environment for small-scale robotic
vehicles as well as a high-fidelity virtual replica that provides extra
flexibility by way of computer simulation. However, both environments have been
developed to allow for participatory simulation with human drivers as well.
Each physical vehicle can be remotely operated by human drivers that have a
driver-seat point of view that immerses them within the small-scale testbed,
and those same drivers can also pilot high-fidelity models of those vehicles in
a virtual replica of the environment. Juxtaposing human driving performance
across these two contexts will help identify to what extent human driving
behaviors are sensorimotor responses or involve psychological engagement with a
system that has physical, not virtual, side effects and consequences.
Furthermore, through collaboration with artists, we have designed the physical
testbed to make tangible the reality that technological advancement causes the
history of a city to fork into multiple, parallel timelines that take place
within populations whose increasing isolation effectively creates multiple
independent cities in one. Ultimately, CHARTOPOLIS is meant to challenge
engineers to take a more holistic view when designing autonomous systems, while
also enabling them to gather novel data that will assist them in making these
systems more trustworthy.
|
[
{
"version": "v1",
"created": "Sat, 1 Oct 2022 21:21:09 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Ulhas",
"Sangeet Sankaramangalam",
""
],
[
"Ravichander",
"Aditya",
""
],
[
"Johnson",
"Kathryn A.",
""
],
[
"Pavlic",
"Theodore P.",
""
],
[
"Gharavi",
"Lance",
""
],
[
"Berman",
"Spring",
""
]
] |
new_dataset
| 0.999684 |
2210.00443
|
Gautam Choudhary
|
Gautam Choudhary, Natwar Modani, Nitish Maurya
|
ReAct: A Review Comment Dataset for Actionability (and more)
|
Published at WISE 2021
| null |
10.1007/978-3-030-91560-5_24
| null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Review comments play an important role in the evolution of documents. For a
large document, the number of review comments may become large, making it
difficult for the authors to quickly grasp what the comments are about. It is
important to determine the nature of the comments: which comments
require some action on the part of document authors, along with identifying the
types of these comments. In this paper, we introduce an annotated review
comment dataset, ReAct. The review comments are sourced from the OpenReview site. We
crowd-source annotations for these reviews for actionability and type of
comments. We analyze the properties of the dataset and validate the quality of
annotations. We release the dataset (https://github.com/gtmdotme/ReAct) to the
research community as a major contribution. We also benchmark our data with
standard baselines for classification tasks and analyze their performance.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 07:09:38 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Choudhary",
"Gautam",
""
],
[
"Modani",
"Natwar",
""
],
[
"Maurya",
"Nitish",
""
]
] |
new_dataset
| 0.979645 |
2210.00448
|
Xueying Li
|
Xueying Li, Ryan Grammenos
|
A Smart Recycling Bin Using Waste Image Classification At The Edge
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rapid economic growth gives rise to the urgent demand for a more efficient
waste recycling system. This work thereby developed an innovative recycling bin
that automatically separates urban waste to increase the recycling rate. We
collected 1800 recycling waste images and combined them with an existing public
dataset to train classification models for two embedded systems, Jetson Nano
and K210, targeting different markets. The model reached an accuracy of 95.98%
on Jetson Nano and 96.64% on K210. A bin program was designed to collect
feedback from users. On Jetson Nano, the overall power consumption of the
application was reduced by 30% from the previous work to 4.7 W, while the
second system, K210, only needed 0.89 W of power to operate. In summary, our
work demonstrated a fully functional prototype of an energy-saving,
high-accuracy smart recycling bin, which can be commercialized in the future to
improve urban waste recycling.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 07:40:25 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Li",
"Xueying",
""
],
[
"Grammenos",
"Ryan",
""
]
] |
new_dataset
| 0.999022 |
2210.00451
|
Qingfeng Lin
|
Yang Li, Qingfeng Lin, Ya-Feng Liu, Bo Ai, and Yik-Chung Wu
|
Asynchronous Activity Detection for Cell-Free Massive MIMO: From
Centralized to Distributed Algorithms
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Device activity detection in the emerging cell-free massive multiple-input
multiple-output (MIMO) systems has been recognized as a crucial task in
machine-type communications, in which multiple access points (APs) jointly
identify the active devices from a large number of potential devices based on
the received signals. Most of the existing works addressing this problem rely
on the impractical assumption that different active devices transmit signals
synchronously. However, in practice, synchronization cannot be guaranteed due
to low-cost oscillators, which brings additional discontinuous and
nonconvex constraints to the detection problem. To address this challenge, this
paper reveals an equivalent reformulation to the asynchronous activity
detection problem, which facilitates the development of a centralized algorithm
and a distributed algorithm that satisfy the highly nonconvex constraints in a
gentle fashion as the iteration number increases, so that the sequence
generated by the proposed algorithms can get around bad stationary points. To
reduce the capacity requirements of the fronthauls, we further design a
communication-efficient accelerated distributed algorithm. Simulation results
demonstrate that the proposed centralized and distributed algorithms outperform
state-of-the-art approaches, and the proposed accelerated distributed algorithm
achieves a close detection performance to that of the centralized algorithm but
with a much smaller number of bits to be transmitted on the fronthaul links.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 07:50:29 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Li",
"Yang",
""
],
[
"Lin",
"Qingfeng",
""
],
[
"Liu",
"Ya-Feng",
""
],
[
"Ai",
"Bo",
""
],
[
"Wu",
"Yik-Chung",
""
]
] |
new_dataset
| 0.99337 |
2210.00503
|
Christian M. Dahl
|
Christian M. Dahl, Torben S. D. Johansen, Emil N. Sørensen,
Christian E. Westermann, Simon F. Wittrock
|
DARE: A large-scale handwritten date recognition system
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Handwritten text recognition for historical documents is an important task,
but it remains difficult due to a lack of sufficient training data in
combination with a large variability of writing styles and degradation of
historical documents. While recurrent neural network architectures are commonly
used for handwritten text recognition, they are often computationally expensive
to train and the benefit of recurrence drastically differs by task. For these
reasons, it is important to consider non-recurrent architectures. In the
context of handwritten date recognition, we propose an architecture based on
the EfficientNetV2 class of models that is fast to train, robust to parameter
choices, and accurately transcribes handwritten dates from a number of sources.
For training, we introduce a database containing almost 10 million tokens,
originating from more than 2.2 million handwritten dates which are segmented
from different historical documents. As dates are some of the most common
information on historical documents, and with historical archives containing
millions of such documents, the efficient and automatic transcription of dates
has the potential to lead to significant cost-savings over manual
transcription. We show that training on handwritten text with high variability
in writing styles results in robust models for general handwritten text
recognition and that transfer learning from the DARE system increases
transcription accuracy substantially, allowing one to obtain high accuracy even
when using a relatively small training sample.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 12:47:36 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Dahl",
"Christian M.",
""
],
[
"Johansen",
"Torben S. D.",
""
],
[
"Sørensen",
"Emil N.",
""
],
[
"Westermann",
"Christian E.",
""
],
[
"Wittrock",
"Simon F.",
""
]
] |
new_dataset
| 0.976432 |
2210.00627
|
Hongsuk Choi
|
Hongsuk Choi, Gyeongsik Moon, Matthieu Armando, Vincent Leroy, Kyoung
Mu Lee, Gregory Rogez
|
MonoNHR: Monocular Neural Human Renderer
|
Hongsuk Choi and Gyeongsik Moon contributed equally, 15 pages
including the reference and supplementary material
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Existing neural human rendering methods struggle with a single image input
due to the lack of information in invisible areas and the depth ambiguity of
pixels in visible areas. In this regard, we propose Monocular Neural Human
Renderer (MonoNHR), a novel approach that renders robust free-viewpoint images
of an arbitrary human given only a single image. MonoNHR is the first method
that (i) renders human subjects never seen during training in a monocular
setup, and (ii) is trained in a weakly-supervised manner without geometry
supervision. First, we propose to disentangle 3D geometry and texture features
and to condition the texture inference on the 3D geometry features. Second, we
introduce a Mesh Inpainter module that inpaints the occluded parts exploiting
human structural priors such as symmetry. Experiments on ZJU-MoCap, AIST, and
HUMBI datasets show that our approach significantly outperforms the recent
methods adapted to the monocular case.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 21:01:02 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Choi",
"Hongsuk",
""
],
[
"Moon",
"Gyeongsik",
""
],
[
"Armando",
"Matthieu",
""
],
[
"Leroy",
"Vincent",
""
],
[
"Lee",
"Kyoung Mu",
""
],
[
"Rogez",
"Gregory",
""
]
] |
new_dataset
| 0.991829 |
2210.00629
|
Hongkai Dai
|
Hongkai Dai and Frank Permenter
|
Convex synthesis and verification of control-Lyapunov and barrier
functions with input constraints
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Control Lyapunov functions (CLFs) and control barrier functions (CBFs) are
widely used tools for synthesizing controllers subject to stability and safety
constraints. Paired with online optimization, they provide stabilizing control
actions that satisfy input constraints and avoid unsafe regions of state-space.
Designing CLFs and CBFs with rigorous performance guarantees is computationally
challenging. To certify the existence of control actions, current techniques
design not only a CLF/CBF but also a nominal controller. This can make the
synthesis task more expensive, and performance estimation more conservative. In
this work, we characterize polynomial CLFs/CBFs using sum-of-squares
conditions, which can be directly certified using convex optimization. This
yields a CLF and CBF synthesis technique that does not rely on a nominal
controller. We then present algorithms for iteratively enlarging estimates of
the stabilizable and safe regions. We demonstrate our algorithms on a 2D toy
system, a pendulum and a quadrotor.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 21:05:42 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Dai",
"Hongkai",
""
],
[
"Permenter",
"Frank",
""
]
] |
new_dataset
| 0.996495 |
2210.00645
|
Yufei Huang
|
Shan Jiang, Yufei Huang, Mohsen Jafari, and Mohammad Jalayer
|
Economic-Driven Adaptive Traffic Signal Control
|
18 pages, 12 figures, presented at the Transportation Research Board
(TRB) 100th Annual Meeting, 2021
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the emerging connected-vehicle technologies and smart roads, the need
for intelligent adaptive traffic signal controls is more than ever before. This
paper proposes a novel Economic-driven Adaptive Traffic Signal Control (eATSC)
model with a hyper control variable - interest rate defined in economics for
traffic signal control at signalized intersections. The eATSC uses a continuous
compounding function that captures both the total number of vehicles and the
accumulated waiting time of each vehicle to compute penalties for different
directions. The computed penalties grow with waiting time and are used for
signal control decisions. Each intersection is assigned two intelligent agents
adjusting interest rate and signal length for different directions according to
the traffic patterns, respectively. The problem is formulated as a Markov
Decision Process (MDP) problem to reduce congestions, and a two-agent Double
Dueling Deep Q Network (DDDQN) is utilized to solve the problem. Under the
optimal policy, the agents can select the optimal interest rates and signal
time to minimize the likelihood of traffic congestion. To evaluate the
superiority of our method, a VISSIM simulation model with classic four-leg
signalized intersections is developed. The results indicate that the proposed
model is adequately able to maintain healthy traffic flow at the intersection.
|
[
{
"version": "v1",
"created": "Sun, 2 Oct 2022 22:24:58 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Jiang",
"Shan",
""
],
[
"Huang",
"Yufei",
""
],
[
"Jafari",
"Mohsen",
""
],
[
"Jalayer",
"Mohammad",
""
]
] |
new_dataset
| 0.984489 |
2210.00664
|
Peter Schaldenbrand
|
Peter Schaldenbrand, James McCann and Jean Oh
|
FRIDA: A Collaborative Robot Painter with a Differentiable,
Real2Sim2Real Planning Environment
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Painting is an artistic process of rendering visual content that achieves the
high-level communication goals of an artist that may change dynamically
throughout the creative process. In this paper, we present a Framework and
Robotics Initiative for Developing Arts (FRIDA) that enables humans to produce
paintings on canvases by collaborating with a painter robot using simple inputs
such as language descriptions or images. FRIDA introduces several technical
innovations for computationally modeling a creative painting process. First, we
develop a fully differentiable simulation environment for painting, adopting
the idea of real to simulation to real (real2sim2real). We show that our
proposed simulated painting environment has higher fidelity to reality than
existing simulation environments used for robot painting. Second, to model the
evolving dynamics of a creative process, we develop a planning approach that
can continuously optimize the painting plan based on the evolving canvas with
respect to the high-level goals. In contrast to existing approaches where the
content generation process and action planning are performed independently and
sequentially, FRIDA adapts to the stochastic nature of using paint and a brush
by continually re-planning and re-assessing its semantic goals based on its
visual perception of the painting progress. We describe the details on the
technical approach as well as the system integration.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 00:41:59 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Schaldenbrand",
"Peter",
""
],
[
"McCann",
"James",
""
],
[
"Oh",
"Jean",
""
]
] |
new_dataset
| 0.999566 |
2210.00689
|
Hongyi Pan Mr.
|
Hongyi Pan, Salih Atici, Ahmet Enis Cetin
|
Multipod Convolutional Network
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a convolutional network which we call MultiPodNet
consisting of a combination of two or more convolutional networks which process
the input image in parallel to achieve the same goal. Output feature maps of
parallel convolutional networks are fused at the fully connected layer of the
network. We experimentally observed that three parallel pod networks
(TripodNet) produce the best results in commonly used object recognition
datasets. Baseline pod networks can be of any type. In this paper, we use
ResNets as baseline networks and their inputs are augmented image patches. The
number of parameters of the TripodNet is about three times that of a single
ResNet. We train the TripodNet using the standard backpropagation type
algorithms. In each individual ResNet, parameters are initialized with
different random numbers during training. The TripodNet achieved
state-of-the-art performance on CIFAR-10 and ImageNet datasets. For example, it
improved the accuracy of a single ResNet from 91.66% to 92.47% under the same
training process on the CIFAR-10 dataset.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 02:37:57 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Pan",
"Hongyi",
""
],
[
"Atici",
"Salih",
""
],
[
"Cetin",
"Ahmet Enis",
""
]
] |
new_dataset
| 0.950529 |
2210.00733
|
Ashwini B P
|
B. P. Ashwini, R. Sumathi, H. S. Sudhira
|
A Dynamic Model for Bus Arrival Time Estimation based on Spatial
Patterns using Machine Learning
| null | null |
10.14445/22315381/IJETT-V70I9P219
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The notion of smart cities is being adapted globally to provide a better
quality of living. A smart city's smart mobility component focuses on providing
smooth and safe commuting for its residents and promotes eco-friendly and
sustainable alternatives such as public transit (bus). Among several smart
applications, a system that provides up-to-the-minute information like bus
arrival, travel duration, schedule, etc., improves the reliability of public
transit services. Still, this application needs live information on traffic
flow, accidents, events, and the location of the buses. Most cities lack the
infrastructure to provide these data. In this context, a bus arrival prediction
model is proposed for forecasting the arrival time using limited data sets. The
location data of public transit buses and spatial characteristics are used for
the study. One of the routes of Tumakuru city service, Tumakuru, India, is
selected and divided into two spatial patterns: sections with intersections and
sections without intersections. The machine learning model XGBoost is modeled
for both spatial patterns individually. A model to dynamically predict bus
arrival time is developed using the preceding trip information and the machine
learning model to estimate the arrival time at a downstream bus stop. The
performance of the models is compared based on the R-squared values of the
predictions, and the proposed model achieved superior results; it is therefore
recommended for predicting bus arrivals in the study area. The proposed model can also
be extended to other similar cities with limited traffic-related
infrastructure.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 06:35:03 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Ashwini",
"B. P.",
""
],
[
"Sumathi",
"R.",
""
],
[
"Sudhira",
"H. S.",
""
]
] |
new_dataset
| 0.963025 |
2210.00735
|
Chaoran Chen
|
Chaoran Chen, Brad A. Myers, Cem Ergin, Emily Porat, Sijia Li, Chun
Wang
|
ScrollTest: Evaluating Scrolling Speed and Accuracy
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scrolling is an essential interaction technique enabling users to display
previously off-screen content. Existing evaluation models for scrolling are
often entangled with the selection of content, e.g., when scrolling on the
phone for reading. Furthermore, some evaluation models overlook whether the
user knows the target position. We have developed ScrollTest, a general-purpose
evaluation tool for scrolling speed and accuracy that avoids the need for
selection. We tested it across four dimensions: 11 different scrolling
techniques/devices, 5 frame heights, 13 scrolling distances, and 2 scrolling
conditions (i.e., with or without knowing the target position). The results
show that flicking and two-finger scrolling are the fastest; flicking is also
relatively precise for scrolling to targets already onscreen, but pressing
arrow buttons on the scrollbar is the most accurate for scrolling to nearby
targets. Mathematical models of scrolling are highly linear when the target
position is unknown but follow Fitts' law when it is known.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 06:39:54 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Chen",
"Chaoran",
""
],
[
"Myers",
"Brad A.",
""
],
[
"Ergin",
"Cem",
""
],
[
"Porat",
"Emily",
""
],
[
"Li",
"Sijia",
""
],
[
"Wang",
"Chun",
""
]
] |
new_dataset
| 0.997771 |
2210.00756
|
Carmelo Scribano
|
Carmelo Scribano, Giorgia Franchini, Ignacio Sa\~nudo Olmedo, Marko
Bertogna
|
CERBERUS: Simple and Effective All-In-One Automotive Perception Model
with Multi Task Learning
|
Presented at IROS 2022 PNARUDE Workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perceiving the surrounding environment is essential for enabling autonomous
or assisted driving functionalities. Common tasks in this domain include
detecting road users, as well as determining lane boundaries and classifying
driving conditions. Over the last few years, a large variety of powerful Deep
Learning models have been proposed to address individual tasks of camera-based
automotive perception with astonishing performances. However, the limited
capabilities of in-vehicle embedded computing platforms cannot cope with the
computational effort required to run a heavy model for each individual task. In
this work, we present CERBERUS (CEnteR Based End-to-end peRception Using a
Single model), a lightweight model that leverages a multitask-learning approach
to enable the execution of multiple perception tasks at the cost of a single
inference. The code will be made publicly available at
https://github.com/cscribano/CERBERUS
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 08:17:26 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Scribano",
"Carmelo",
""
],
[
"Franchini",
"Giorgia",
""
],
[
"Olmedo",
"Ignacio Sañudo",
""
],
[
"Bertogna",
"Marko",
""
]
] |
new_dataset
| 0.955385 |
2210.00798
|
Romain Egele
|
Matthieu Dorier, Romain Egele, Prasanna Balaprakash, Jaehoon Koo,
Sandeep Madireddy, Srinivasan Ramesh, Allen D. Malony, Rob Ross
|
HPC Storage Service Autotuning Using Variational-Autoencoder-Guided
Asynchronous Bayesian Optimization
|
Accepted at IEEE Cluster 2022
| null |
10.1109/CLUSTER51413.2022.00049
| null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed data storage services tailored to specific applications have
grown popular in the high-performance computing (HPC) community as a way to
address I/O and storage challenges. These services offer a variety of specific
interfaces, semantics, and data representations. They also expose many tuning
parameters, making it difficult for their users to find the best configuration
for a given workload and platform.
To address this issue, we develop a novel variational-autoencoder-guided
asynchronous Bayesian optimization method to tune HPC storage service
parameters. Our approach uses transfer learning to leverage prior tuning
results and use a dynamically updated surrogate model to explore the large
parameter search space in a systematic way.
We implement our approach within the DeepHyper open-source framework, and
apply it to the autotuning of a high-energy physics workflow on Argonne's Theta
supercomputer. We show that our transfer-learning approach enables a more than
$40\times$ search speedup over random search, compared with a $2.5\times$ to
$10\times$ speedup when not using transfer learning. Additionally, we show that
our approach is on par with state-of-the-art autotuning frameworks in speed and
outperforms them in resource utilization and parallelization capabilities.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 10:12:57 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Dorier",
"Matthieu",
""
],
[
"Egele",
"Romain",
""
],
[
"Balaprakash",
"Prasanna",
""
],
[
"Koo",
"Jaehoon",
""
],
[
"Madireddy",
"Sandeep",
""
],
[
"Ramesh",
"Srinivasan",
""
],
[
"Malony",
"Allen D.",
""
],
[
"Ross",
"Rob",
""
]
] |
new_dataset
| 0.992539 |
2210.00812
|
Sier Ha
|
Ha Sier, Li Qingqing, Yu Xianjia, Jorge Pe\~na Queralta, Zhuo Zou,
Tomi Westerlund
|
A Benchmark for Multi-Modal Lidar SLAM with Ground Truth in GNSS-Denied
Environments
|
6 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lidar-based simultaneous localization and mapping (SLAM) approaches have
obtained considerable success in autonomous robotic systems. This is in part
owing to the high-accuracy of robust SLAM algorithms and the emergence of new
and lower-cost lidar products. This study benchmarks current state-of-the-art
lidar SLAM algorithms with a multi-modal lidar sensor setup showcasing diverse
scanning modalities (spinning and solid-state) and sensing technologies, and
lidar cameras, mounted on a mobile sensing and computing platform. We extend
our previous multi-modal multi-lidar dataset with additional sequences and new
sources of ground truth data. Specifically, we propose a new multi-modal
multi-lidar SLAM-assisted and ICP-based sensor fusion method for generating
ground truth maps. With these maps, we then match real-time pointcloud data
using a natural distribution transform (NDT) method to obtain the ground truth
with full 6 DOF pose estimation. This novel ground truth data leverages
high-resolution spinning and solid-state lidars. We also include new open road
sequences with GNSS-RTK data and additional indoor sequences with motion
capture (MOCAP) ground truth, complementing the previous forest sequences with
MOCAP data. We perform an analysis of the positioning accuracy achieved with
ten different SLAM algorithm and lidar combinations. We also report the
resource utilization in four different computational platforms and a total of
five settings (Intel and Jetson ARM CPUs). Our experimental results show that
current state-of-the-art lidar SLAM algorithms perform very differently for
different types of sensors. More results, code, and the dataset can be found
at:
\href{https://github.com/TIERS/tiers-lidars-dataset-enhanced}{github.com/TIERS/tiers-lidars-dataset-enhanced}.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 10:46:53 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Sier",
"Ha",
""
],
[
"Qingqing",
"Li",
""
],
[
"Xianjia",
"Yu",
""
],
[
"Queralta",
"Jorge Peña",
""
],
[
"Zou",
"Zhuo",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.999658 |
2210.00833
|
Jaume Abella
|
Fabio Mazzocchetti, Sergi Alcaide, Francisco Bas, Pedro Benedicte,
Guillem Cabo, Feng Chang, Francisco Fuentes, Jaume Abella
|
SafeSoftDR: A Library to Enable Software-based Diverse Redundancy for
Safety-Critical Tasks
|
FORECAST 2022 Functional Properties and Dependability in
Cyber-Physical Systems Workshop (held jointly with HiPEAC Conference)
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Applications with safety requirements have become ubiquitous nowadays and can
be found in edge devices of all kinds. However, microcontrollers in those
devices, despite offering moderate performance by implementing multicores and
cache hierarchies, may fail to offer adequate support to implement some safety
measures needed for the highest integrity levels, such as lockstepped execution
to avoid so-called common cause failures (i.e., a fault affecting redundant
components causing the same error in all of them). To respond to this
limitation, an approach based on a software monitor enforcing some sort of
software-based lockstepped execution across cores has been proposed recently,
providing a proof of concept. This paper presents SafeSoftDR, a library
providing a standard interface to deploy software-based lockstepped execution
across non-natively lockstepped cores relieving end-users from having to manage
the burden to create redundant processes, copying input/output data, and
performing result comparison. Our library has been tested on x86-based Linux
and is currently being integrated on top of an open-source RISC-V platform
targeting safety-related applications, hence offering a convenient environment
for safety-critical applications.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 11:37:29 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Mazzocchetti",
"Fabio",
""
],
[
"Alcaide",
"Sergi",
""
],
[
"Bas",
"Francisco",
""
],
[
"Benedicte",
"Pedro",
""
],
[
"Cabo",
"Guillem",
""
],
[
"Chang",
"Feng",
""
],
[
"Fuentes",
"Francisco",
""
],
[
"Abella",
"Jaume",
""
]
] |
new_dataset
| 0.999724 |
2210.00902
|
Weiguo Wang
|
Weiguo Wang, Xiaolong Zheng, Yuan He, Xiuzhen Guo
|
AdaComm: Tracing Channel Dynamics for Reliable Cross-Technology
Communication
| null | null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Cross-Technology Communication (CTC) is an emerging technology to support
direct communication between wireless devices that follow different standards.
In spite of the many different proposals from the community to enable CTC, the
performance aspect of CTC is an equally important problem but has seldom been
studied before. We find this problem is extremely challenging, due to the
following reasons: on one hand, a link for CTC is essentially different from a
conventional wireless link. The conventional link indicators like RSSI
(received signal strength indicator) and SNR (signal to noise ratio) cannot be
used to directly characterize a CTC link. On the other hand, the indirect
indicators like PER (packet error rate), which is adopted by many existing CTC
proposals, cannot capture the short-term link behavior. As a result, the
existing CTC proposals fail to keep reliable performance under dynamic channel
conditions. In order to address the above challenge, we in this paper propose
AdaComm, a generic framework to achieve self-adaptive CTC in dynamic channels.
Instead of reactively adjusting the CTC sender, AdaComm adopts online learning
mechanism to adaptively adjust the decoding model at the CTC receiver. The
self-adaptive decoding model automatically learns the effective features
directly from the raw received signals that are embedded with the current
channel state. With the lossless channel information, AdaComm further adopts
the fine tuning and full training modes to cope with the continuous and abrupt
channel dynamics. We implement AdaComm and integrate it with two existing CTC
approaches that respectively employ CSI (channel state information) and RSSI as
the information carrier. The evaluation results demonstrate that AdaComm can
significantly reduce the SER (symbol error rate) by 72.9% and 49.2%,
respectively, compared with the existing approaches.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 09:21:07 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Wang",
"Weiguo",
""
],
[
"Zheng",
"Xiaolong",
""
],
[
"He",
"Yuan",
""
],
[
"Guo",
"Xiuzhen",
""
]
] |
new_dataset
| 0.996625 |
2210.00903
|
Weiguo Wang
|
Weiguo Wang, Jinming Li, Yuan He, Xiuzhen Guo, Yunhao Liu
|
MotorBeat: Acoustic Communication for Home Appliances via Variable Pulse
Width Modulation
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
More and more home appliances are now connected to the Internet, thus
enabling various smart home applications. However, a critical problem that may
impede the further development of smart home is overlooked: Small appliances
account for the majority of home appliances, but they receive little attention
and most of them are cut off from the Internet. To fill this gap, we propose
MotorBeat, an acoustic communication approach that connects small appliances to
a smart speaker. Our key idea is to exploit direct current (DC) motors, which
are common components of small appliances, to transmit acoustic messages. We
design a novel scheme named Variable Pulse Width Modulation (V-PWM) to drive DC
motors. MotorBeat achieves the following 3C goals: (1) Comfortable to hear, (2)
Compatible with multiple motor modes, and (3) Concurrent transmission. We
implement MotorBeat with commercial devices and evaluate its performance on
three small appliances and ten DC motors. The results show that the
communication range can be up to 10 m.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 09:12:42 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Wang",
"Weiguo",
""
],
[
"Li",
"Jinming",
""
],
[
"He",
"Yuan",
""
],
[
"Guo",
"Xiuzhen",
""
],
[
"Liu",
"Yunhao",
""
]
] |
new_dataset
| 0.999667 |
2210.01002
|
Zepeng Zhang
|
Zepeng Zhang, Songtao Lu, Zengfeng Huang, Ziping Zhao
|
ASGNN: Graph Neural Networks with Adaptive Structure
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The graph neural network (GNN) models have presented impressive achievements
in numerous machine learning tasks. However, many existing GNN models are shown
to be vulnerable to adversarial attacks, which creates a stringent need to
build robust GNN architectures. In this work, we propose a novel interpretable
message passing scheme with adaptive structure (ASMP) to defend against
adversarial attacks on graph structure. Layers in ASMP are derived based on
optimization steps that minimize an objective function that learns the node
feature and the graph structure simultaneously. ASMP is adaptive in the sense
that the message passing process in different layers can be carried out
over dynamically adjusted graphs. Such property allows more fine-grained
handling of the noisy (or perturbed) graph structure and hence improves the
robustness. Convergence properties of the ASMP scheme are theoretically
established. Integrating ASMP with neural networks can lead to a new family of
GNN models with adaptive structure (ASGNN). Extensive experiments on
semi-supervised node classification tasks demonstrate that the proposed ASGNN
outperforms the state-of-the-art GNN architectures in terms of classification
performance under various adversarial attacks.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 15:10:40 GMT"
}
] | 2022-10-04T00:00:00 |
[
[
"Zhang",
"Zepeng",
""
],
[
"Lu",
"Songtao",
""
],
[
"Huang",
"Zengfeng",
""
],
[
"Zhao",
"Ziping",
""
]
] |
new_dataset
| 0.965197 |
2103.15145
|
Yihong Xu
|
Yihong Xu, Yutong Ban, Guillaume Delorme, Chuang Gan, Daniela Rus,
Xavier Alameda-Pineda
|
TransCenter: Transformers with Dense Representations for Multiple-Object
Tracking
|
17 pages, 10 figures, updated results and add comparisons
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Transformers have proven superior performance for a wide variety of tasks
since they were introduced. In recent years, they have drawn attention from the
vision community in tasks such as image classification and object detection.
Despite this wave, an accurate and efficient multiple-object tracking (MOT)
method based on transformers is yet to be designed. We argue that the direct
application of a transformer architecture with quadratic complexity and
insufficient noise-initialized sparse queries is not optimal for MOT. We
propose TransCenter, a transformer-based MOT architecture with dense
representations for accurately tracking all the objects while keeping a
reasonable runtime. Methodologically, we propose the use of image-related dense
detection queries and efficient sparse tracking queries produced by our
carefully designed query learning networks (QLN). On one hand, the dense
image-related detection queries allow us to infer targets' locations globally
and robustly through dense heatmap outputs. On the other hand, the set of
sparse tracking queries efficiently interacts with image features in our
TransCenter Decoder to associate object positions through time. As a result,
TransCenter exhibits remarkable performance improvements and outperforms by a
large margin the current state-of-the-art methods in two standard MOT
benchmarks with two tracking settings (public/private). TransCenter is also
proven efficient and accurate by an extensive ablation study and comparisons to
more naive alternatives and concurrent works. For scientific interest, the code
is made publicly available at https://github.com/yihongxu/transcenter.
|
[
{
"version": "v1",
"created": "Sun, 28 Mar 2021 14:49:36 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Aug 2021 21:06:08 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Apr 2022 08:51:19 GMT"
},
{
"version": "v4",
"created": "Fri, 30 Sep 2022 10:00:00 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Xu",
"Yihong",
""
],
[
"Ban",
"Yutong",
""
],
[
"Delorme",
"Guillaume",
""
],
[
"Gan",
"Chuang",
""
],
[
"Rus",
"Daniela",
""
],
[
"Alameda-Pineda",
"Xavier",
""
]
] |
new_dataset
| 0.951832 |
2104.14019
|
Ga\"etan Dou\'eneau-Tabot
|
Ga\"etan Dou\'eneau-Tabot
|
Pebble transducers with unary output
|
39 pages
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
Boja\'nczyk recently initiated an intensive study of deterministic pebble
transducers, which are two-way automata that can drop marks (named "pebbles")
on their input word, and produce an output word. They describe functions from
words to words. Two natural restrictions of this definition have been
investigated: marble transducers by Dou\'eneau-Tabot et al., and
comparison-free pebble transducers (that we rename here "blind transducers") by
Nguy\^en et al.
Here, we study the decidability of membership problems between the classes of
functions computed by pebble, marble and blind transducers that produce a unary
output. First, we show that pebble and marble transducers have the same
expressive power when the outputs are unary (which is false over non-unary
outputs). Then, we characterize 1-pebble transducers with unary output that
describe a function computable by a blind transducer, and show that the
membership problem is decidable. These results can be interpreted in terms of
automated simplification of programs.
|
[
{
"version": "v1",
"created": "Wed, 28 Apr 2021 20:52:04 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Apr 2021 19:27:14 GMT"
},
{
"version": "v3",
"created": "Sun, 11 Jul 2021 10:13:25 GMT"
},
{
"version": "v4",
"created": "Sat, 4 Sep 2021 07:15:53 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Apr 2022 08:40:49 GMT"
},
{
"version": "v6",
"created": "Fri, 30 Sep 2022 15:06:02 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Douéneau-Tabot",
"Gaëtan",
""
]
] |
new_dataset
| 0.97111 |
2106.14321
|
Royi Lachmy
|
Royi Lachmy, Valentina Pyatkin, Avshalom Manevich, Reut Tsarfaty
|
Draw Me a Flower: Processing and Grounding Abstraction in Natural
Language
|
Accepted to the TACL journal. This is a pre-MIT Press publication
version
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Abstraction is a core tenet of human cognition and communication. When
composing natural language instructions, humans naturally evoke abstraction to
convey complex procedures in an efficient and concise way. Yet, interpreting
and grounding abstraction expressed in NL has not yet been systematically
studied in NLP, with no accepted benchmarks specifically eliciting abstraction
in NL. In this work, we set the foundation for a systematic study of processing
and grounding abstraction in NLP. First, we deliver a novel abstraction
elicitation method and present Hexagons, a 2D instruction-following game. Using
Hexagons we collected over 4k naturally-occurring visually-grounded
instructions rich with diverse types of abstractions. From these data, we
derive an instruction-to-execution task and assess different types of neural
models. Our results show that contemporary models and modeling practices are
substantially inferior to human performance, and that models' performance is
inversely correlated with the level of abstraction, showing less satisfactory
performance on higher levels of abstraction. These findings are consistent
across models and setups, confirming that abstraction is a challenging
phenomenon deserving further attention and study in NLP/AI research.
|
[
{
"version": "v1",
"created": "Sun, 27 Jun 2021 21:11:16 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2022 10:54:40 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Lachmy",
"Royi",
""
],
[
"Pyatkin",
"Valentina",
""
],
[
"Manevich",
"Avshalom",
""
],
[
"Tsarfaty",
"Reut",
""
]
] |
new_dataset
| 0.985061 |
2109.10705
|
Manfred Scheucher
|
Oswin Aichholzer and Jan Kyn\v{c}l and Manfred Scheucher and Birgit
Vogtenhuber and Pavel Valtr
|
On Crossing-Families in Planar Point Sets
| null |
Computational Geometry: Theory and Applications 107 (2022), Paper
No. 101899, 8 pp
|
10.1016/j.comgeo.2022.101899
| null |
cs.CG math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A $k$-crossing family in a point set $S$ in general position is a set of $k$
segments spanned by points of $S$ such that all $k$ segments mutually cross. In
this short note we present two statements on crossing families which are based
on sets of small cardinality: (1) Any set of at least 15 points contains a
crossing family of size 4. (2) There are sets of $n$ points which do not
contain a crossing family of size larger than $8\lceil \frac{n}{41} \rceil$.
Both results improve the previously best known bounds.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 12:56:00 GMT"
},
{
"version": "v2",
"created": "Thu, 26 May 2022 10:42:46 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Aichholzer",
"Oswin",
""
],
[
"Kynčl",
"Jan",
""
],
[
"Scheucher",
"Manfred",
""
],
[
"Vogtenhuber",
"Birgit",
""
],
[
"Valtr",
"Pavel",
""
]
] |
new_dataset
| 0.98083 |
2109.15158
|
Brady Moon
|
Jay Patrikar, Brady Moon, Jean Oh, Sebastian Scherer
|
Predicting Like A Pilot: Dataset and Method to Predict Socially-Aware
Aircraft Trajectories in Non-Towered Terminal Airspace
|
7 pages, 4 figures, ICRA 2022
| null |
10.1109/ICRA46639.2022.9811972
| null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pilots operating aircraft in un-towered airspace rely on their situational
awareness and prior knowledge to predict the future trajectories of other
agents. These predictions are conditioned on the past trajectories of other
agents, agent-agent social interactions and environmental context such as
airport location and weather. This paper provides a dataset,
$\textit{TrajAir}$, that captures this behaviour in a non-towered terminal
airspace around a regional airport. We also present a baseline socially-aware
trajectory prediction algorithm, $\textit{TrajAirNet}$, that uses the dataset
to predict the trajectories of all agents. The dataset is collected for 111
days over 8 months and contains ADS-B transponder data along with the
corresponding METAR weather data. The data is processed to be used as a
benchmark with other publicly available social navigation datasets. To the best
of the authors' knowledge, this is the first 3D social aerial navigation
dataset, thus introducing social navigation for autonomous aviation.
$\textit{TrajAirNet}$ combines state-of-the-art modules in social navigation to
provide predictions in a static environment with a dynamic context. Both the
$\textit{TrajAir}$ dataset and $\textit{TrajAirNet}$ prediction algorithm are
open-source. The dataset, codebase, and video are available at
https://theairlab.org/trajair/, https://github.com/castacks/trajairnet, and
https://youtu.be/elAQXrxB2gw respectively.
|
[
{
"version": "v1",
"created": "Thu, 30 Sep 2021 14:20:48 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 01:30:11 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Patrikar",
"Jay",
""
],
[
"Moon",
"Brady",
""
],
[
"Oh",
"Jean",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
new_dataset
| 0.969529 |
2202.03163
|
Nicolas Cherel
|
Nicolas Cherel, Andr\'es Almansa, Yann Gousseau, Alasdair Newson
|
Patch-Based Stochastic Attention for Image Editing
|
17 pages, 11 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Attention mechanisms have become of crucial importance in deep learning in
recent years. These non-local operations, which are similar to traditional
patch-based methods in image processing, complement local convolutions.
However, computing the full attention matrix is an expensive step with heavy
memory and computational loads. These limitations curb network architectures
and performances, in particular for the case of high resolution images. We
propose an efficient attention layer based on the stochastic algorithm
PatchMatch, which is used for determining approximate nearest neighbors. We
refer to our proposed layer as a "Patch-based Stochastic Attention Layer"
(PSAL). Furthermore, we propose different approaches, based on patch
aggregation, to ensure the differentiability of PSAL, thus allowing end-to-end
training of any network containing our layer. PSAL has a small memory footprint
and can therefore scale to high resolution images. It maintains this footprint
without sacrificing spatial precision and globality of the nearest neighbors,
which means that it can be easily inserted in any level of a deep architecture,
even in shallower levels. We demonstrate the usefulness of PSAL on several
image editing tasks, such as image inpainting, guided image colorization, and
single-image super-resolution. Our code is available at:
https://github.com/ncherel/psal
|
[
{
"version": "v1",
"created": "Mon, 7 Feb 2022 13:42:00 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Feb 2022 10:50:18 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Sep 2022 15:47:13 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Cherel",
"Nicolas",
""
],
[
"Almansa",
"Andrés",
""
],
[
"Gousseau",
"Yann",
""
],
[
"Newson",
"Alasdair",
""
]
] |
new_dataset
| 0.98999 |
2202.07029
|
C\'esar Soto-Valero
|
C\'esar Soto-Valero, Martin Monperrus, Benoit Baudry
|
The Multibillion Dollar Software Supply Chain of Ethereum
|
8 pages, 2 figures, 2 tables
|
IEEE Computer, 2022
|
10.1109/MC.2022.3175542
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise of blockchain technologies has triggered tremendous research
interest, coding efforts, and monetary investments in the last decade. Ethereum
is the single largest programmable blockchain platform today. It features
cryptocurrency trading, digital art, and decentralized finance through smart
contracts. So-called Ethereum nodes operate the blockchain, relying on a vast
supply chain of third-party software dependencies maintained by diverse
organizations. These software suppliers have a direct impact on the reliability
and the security of Ethereum. In this article, we perform an analysis of the
software supply chain of Java Ethereum nodes and distill the challenges of
maintaining and securing this blockchain technology.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 20:48:55 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2022 11:44:07 GMT"
},
{
"version": "v3",
"created": "Mon, 16 May 2022 10:33:25 GMT"
},
{
"version": "v4",
"created": "Mon, 8 Aug 2022 07:02:43 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Soto-Valero",
"César",
""
],
[
"Monperrus",
"Martin",
""
],
[
"Baudry",
"Benoit",
""
]
] |
new_dataset
| 0.999406 |
2203.10749
|
Shiqi Zhang
|
Wei Zhao, Shiqi Zhang, Bing Zhou, Bei Wang
|
STCGAT: A Spatio-temporal Causal Graph Attention Network for traffic
flow prediction in Intelligent Transportation Systems
| null |
IoT-25949-2022
| null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Air pollution and carbon emissions caused by modern transportation are
closely related to global climate change. With the help of next-generation
information technology such as Internet of Things (IoT) and Artificial
Intelligence (AI), accurate traffic flow prediction can effectively solve
problems such as traffic congestion and mitigate environmental pollution and
climate change. It further promotes the development of Intelligent
Transportation Systems (ITS) and smart cities. However, the strong spatial and
temporal correlation of traffic data makes the task of accurate traffic
forecasting a significant challenge. Existing methods are usually based on
graph neural networks using predefined spatial adjacency graphs of traffic
networks to model spatial dependencies, ignoring the dynamic correlation of
relationships between road nodes. In addition, they usually use independent
Spatio-temporal components to capture Spatio-temporal dependencies and do not
effectively model global Spatio-temporal dependencies. This paper proposes a
new Spatio-temporal Causal Graph Attention Network (STCGAT) for traffic
prediction to address the above challenges. In STCGAT, we use a node embedding
approach that can adaptively generate spatial adjacency subgraphs at each time
step without a priori geographic knowledge and fine-grained modeling of the
topology of dynamically generated graphs for different time steps. Meanwhile,
we propose an efficient causal temporal correlation component that contains
node adaptive learning, graph convolution, and local and global causal temporal
convolution modules to learn local and global Spatio-temporal dependencies
jointly. Extensive experiments on four real, large traffic datasets show that
our model consistently outperforms all baseline models.
|
[
{
"version": "v1",
"created": "Mon, 21 Mar 2022 06:38:34 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2022 09:34:34 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Sep 2022 14:21:19 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Zhao",
"Wei",
""
],
[
"Zhang",
"Shiqi",
""
],
[
"Zhou",
"Bing",
""
],
[
"Wang",
"Bei",
""
]
] |
new_dataset
| 0.995667 |
2204.02782
|
Johannes Gasteiger
|
Johannes Gasteiger, Muhammed Shuaibi, Anuroop Sriram, Stephan
G\"unnemann, Zachary Ulissi, C. Lawrence Zitnick, Abhishek Das
|
GemNet-OC: Developing Graph Neural Networks for Large and Diverse
Molecular Simulation Datasets
| null | null | null | null |
cs.LG cond-mat.mtrl-sci physics.chem-ph physics.comp-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Recent years have seen the advent of molecular simulation datasets that are
orders of magnitude larger and more diverse. These new datasets differ
substantially in four aspects of complexity: 1. Chemical diversity (number of
different elements), 2. system size (number of atoms per sample), 3. dataset
size (number of data samples), and 4. domain shift (similarity of the training
and test set). Despite these large differences, benchmarks on small and narrow
datasets remain the predominant method of demonstrating progress in graph
neural networks (GNNs) for molecular simulation, likely due to cheaper training
compute requirements. This raises the question -- does GNN progress on small
and narrow datasets translate to these more complex datasets? This work
investigates this question by first developing the GemNet-OC model based on the
large Open Catalyst 2020 (OC20) dataset. GemNet-OC outperforms the previous
state-of-the-art on OC20 by 16% while reducing training time by a factor of 10.
We then compare the impact of 18 model components and hyperparameter choices on
performance in multiple datasets. We find that the resulting model would be
drastically different depending on the dataset used for making model choices.
To isolate the source of this discrepancy we study six subsets of the OC20
dataset that individually test each of the above-mentioned four dataset
aspects. We find that results on the OC-2M subset correlate well with the full
OC20 dataset while being substantially cheaper to train on. Our findings
challenge the common practice of developing GNNs solely on small datasets, but
highlight ways of achieving fast development cycles and generalizable results
via moderately-sized, representative datasets such as OC-2M and efficient
models such as GemNet-OC. Our code and pretrained model weights are
open-sourced.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2022 12:52:34 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 20:09:10 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Sep 2022 15:21:29 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Gasteiger",
"Johannes",
""
],
[
"Shuaibi",
"Muhammed",
""
],
[
"Sriram",
"Anuroop",
""
],
[
"Günnemann",
"Stephan",
""
],
[
"Ulissi",
"Zachary",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Das",
"Abhishek",
""
]
] |
new_dataset
| 0.999433 |
2206.00314
|
Gilles Stoltz
|
Zhen Li, Gilles Stoltz (LMO, CELESTE, HEC Paris)
|
Contextual Bandits with Knapsacks for a Conversion Model
|
Thirty-sixth Conference on Neural Information Processing Systems,
2022, New Orleans, United States
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider contextual bandits with knapsacks, with an underlying structure
between rewards generated and cost vectors suffered. We do so motivated by
sales with commercial discounts. At each round, given the stochastic i.i.d.\
context $\mathbf{x}_t$ and the arm picked $a_t$ (corresponding, e.g., to a
discount level), a customer conversion may be obtained, in which case a reward
$r(a_t,\mathbf{x}_t)$ is gained and vector costs $c(a_t,\mathbf{x}_t)$ are
suffered (corresponding, e.g., to losses of earnings). Otherwise, in the
absence of a conversion, the reward and costs are null. The reward and costs
achieved are thus coupled through the binary variable measuring conversion or
the absence thereof. This underlying structure between rewards and costs is
different from the linear structures considered by Agrawal and Devanur [2016]
(but we show that the techniques introduced in the present article may also be
applied to the case of these linear structures). The adaptive policies
exhibited solve at each round a linear program based on upper-confidence
estimates of the probabilities of conversion given $a$ and $\mathbf{x}$. This
kind of policy is most natural and achieves a regret bound of the typical order
(OPT/$B$) $\sqrt{T}$, where $B$ is the total budget allowed, OPT is the optimal
expected reward achievable by a static policy, and $T$ is the number of rounds.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 08:29:07 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2022 07:34:22 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Li",
"Zhen",
"",
"LMO, CELESTE, HEC Paris"
],
[
"Stoltz",
"Gilles",
"",
"LMO, CELESTE, HEC Paris"
]
] |
new_dataset
| 0.992906 |
2206.10531
|
Federica Proietto Salanitri
|
Federica Proietto Salanitri, Giovanni Bellitto, Simone Palazzo, Ismail
Irmakci, Michael B. Wallace, Candice W. Bolan, Megan Engels, Sanne
Hoogenboom, Marco Aldinucci, Ulas Bagci, Daniela Giordano, Concetto
Spampinato
|
Neural Transformers for Intraductal Papillary Mucosal Neoplasms (IPMN)
Classification in MRI images
| null | null |
10.1109/EMBC48229.2022.9871547
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Early detection of precancerous cysts or neoplasms, i.e., Intraductal
Papillary Mucosal Neoplasms (IPMN), in pancreas is a challenging and complex
task, and it may lead to a more favourable outcome. Once detected, grading
IPMNs accurately is also necessary, since low-risk IPMNs can be under
surveillance program, while high-risk IPMNs have to be surgically resected
before they turn into cancer. Current standards (Fukuoka and others) for IPMN
classification show significant intra- and inter-operator variability, besides
being error-prone, making a proper diagnosis unreliable. The established
progress in artificial intelligence, through the deep learning paradigm, may
provide a key tool for an effective support to medical decision for pancreatic
cancer. In this work, we follow this trend, by proposing a novel AI-based IPMN
classifier that leverages the recent success of transformer networks in
generalizing across a wide variety of tasks, including vision ones. We
specifically show that our transformer-based model exploits pre-training better
than standard convolutional neural networks, thus supporting the sought
architectural universalism of transformers in vision, including the medical
image domain, and allows for a better interpretation of the obtained results.
|
[
{
"version": "v1",
"created": "Tue, 21 Jun 2022 17:00:36 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Salanitri",
"Federica Proietto",
""
],
[
"Bellitto",
"Giovanni",
""
],
[
"Palazzo",
"Simone",
""
],
[
"Irmakci",
"Ismail",
""
],
[
"Wallace",
"Michael B.",
""
],
[
"Bolan",
"Candice W.",
""
],
[
"Engels",
"Megan",
""
],
[
"Hoogenboom",
"Sanne",
""
],
[
"Aldinucci",
"Marco",
""
],
[
"Bagci",
"Ulas",
""
],
[
"Giordano",
"Daniela",
""
],
[
"Spampinato",
"Concetto",
""
]
] |
new_dataset
| 0.999018 |
2208.10684
|
Jean Lee
|
Jean Lee, Taejun Lim, Heejun Lee, Bogeun Jo, Yangsok Kim, Heegeun Yoon
and Soyeon Caren Han
|
K-MHaS: A Multi-label Hate Speech Detection Dataset in Korean Online
News Comment
|
Accepted by COLING 2022
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Online hate speech detection has become an important issue due to the growth
of online content, but resources in languages other than English are extremely
limited. We introduce K-MHaS, a new multi-label dataset for hate speech
detection that effectively handles Korean language patterns. The dataset
consists of 109k utterances from news comments and provides a multi-label
classification using 1 to 4 labels, and handles subjectivity and
intersectionality. We evaluate strong baseline experiments on K-MHaS using
Korean-BERT-based language models with six different metrics. KR-BERT with a
sub-character tokenizer outperforms others, recognizing decomposed characters
in each hate speech class.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 02:10:53 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Aug 2022 11:54:02 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Sep 2022 10:46:40 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Lee",
"Jean",
""
],
[
"Lim",
"Taejun",
""
],
[
"Lee",
"Heejun",
""
],
[
"Jo",
"Bogeun",
""
],
[
"Kim",
"Yangsok",
""
],
[
"Yoon",
"Heegeun",
""
],
[
"Han",
"Soyeon Caren",
""
]
] |
new_dataset
| 0.999893 |
2209.08248
|
Adam Dai
|
Adam Dai, Greg Lund, Grace Gao
|
PlaneSLAM: Plane-based LiDAR SLAM for Motion Planning in Structured 3D
Environments
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR sensors are a powerful tool for robot simultaneous localization and
mapping (SLAM) in unknown environments, but the raw point clouds they produce
are dense, computationally expensive to store, and unsuited for direct use by
downstream autonomy tasks, such as motion planning. For integration with motion
planning, it is desirable for SLAM pipelines to generate lightweight geometric
map representations. Such representations are also particularly well-suited for
man-made environments, which can often be viewed as a so-called "Manhattan
world" built on a Cartesian grid. In this work we present a 3D LiDAR SLAM
algorithm for Manhattan world environments which extracts planar features from
point clouds to achieve lightweight, real-time localization and mapping. Our
approach generates plane-based maps which occupy significantly less memory than
their point cloud equivalents, and are suited towards fast collision checking
for motion planning. By leveraging the Manhattan world assumption, we target
extraction of orthogonal planes to generate maps which are more structured and
organized than those of existing plane-based LiDAR SLAM approaches. We
demonstrate our approach in the high-fidelity AirSim simulator and in
real-world experiments with a ground rover equipped with a Velodyne LiDAR. For
both cases, we are able to generate high quality maps and trajectory estimates
at a rate matching the sensor rate of 10 Hz.
|
[
{
"version": "v1",
"created": "Sat, 17 Sep 2022 05:02:24 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 22:49:34 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Dai",
"Adam",
""
],
[
"Lund",
"Greg",
""
],
[
"Gao",
"Grace",
""
]
] |
new_dataset
| 0.999674 |
2209.09580
|
Jovan Komatovic
|
Martina Camaioni, Rachid Guerraoui, Jovan Komatovic, Matteo Monti,
Manuel Vidigueira
|
Carbon: An Asynchronous Voting-Based Payment System for a Client-Server
Architecture
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
We present Carbon, an asynchronous payment system. To the best of our
knowledge, Carbon is the first asynchronous payment system designed
specifically for a client-server architecture. Namely, besides being able to
make payments, clients of Carbon are capable of changing the set of running
servers using a novel voting mechanism -- asynchronous, balance-based voting.
|
[
{
"version": "v1",
"created": "Tue, 20 Sep 2022 09:50:44 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2022 09:26:59 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Camaioni",
"Martina",
""
],
[
"Guerraoui",
"Rachid",
""
],
[
"Komatovic",
"Jovan",
""
],
[
"Monti",
"Matteo",
""
],
[
"Vidigueira",
"Manuel",
""
]
] |
new_dataset
| 0.998016 |
2209.14879
|
Michael Sammeth
|
Cosmin Ursache, Michael Sammeth, S\^inic\u{a} Alboaie
|
OpenDSU: Digital Sovereignty in PharmaLedger
|
18 pages, 8 figures
| null | null | null |
cs.CR cs.NI cs.SI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Distributed ledger networks, chiefly those based on blockchain technologies,
currently are heralding a next generation of computer systems that aims to suit
modern users' demands. Over the recent years, several technologies for
blockchains, off-chaining strategies, as well as decentralised and respectively
self-sovereign identity systems have shot up so fast that standardisation of
the protocols is lagging behind, severely hampering the interoperability of
different approaches. Moreover, most of the currently available solutions for
distributed ledgers focus on either home users or enterprise use case
scenarios, failing to provide integrative solutions addressing the needs of
both.
Herein we introduce the OpenDSU platform that allows to interoperate generic
blockchain technologies, organised - and possibly cascaded in a hierarchical
fashion - in domains. To achieve this flexibility, we seamlessly integrated a
set of well-conceived OpenDSU components to orchestrate off-chain data with
granularly resolved and cryptographically secure access levels that are nested
with sovereign identities across the different domains.
Employing our platform to PharmaLedger, an inter-European network for the
standardisation of data handling in the pharmaceutical industry and in
healthcare, we demonstrate that OpenDSU can cope with generic demands of
heterogeneous use cases in both, performance and handling substantially
different business policies. Importantly, whereas available solutions commonly
require a pre-defined and fixed set of components, no such vendor lock-in
restrictions on the blockchain technology or identity system exist in OpenDSU,
making systems built on it flexibly adaptable to new standards evolving in the
future.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 15:43:31 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Ursache",
"Cosmin",
""
],
[
"Sammeth",
"Michael",
""
],
[
"Alboaie",
"Sînică",
""
]
] |
new_dataset
| 0.981485 |
2209.15091
|
Han Wang
|
Han Wang, Hanbin Hong, Li Xiong, Zhan Qin, Yuan Hong
|
L-SRR: Local Differential Privacy for Location-Based Services with
Staircase Randomized Response
|
accepted to CCS'22; full version
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Location-based services (LBS) have been significantly developed and widely
deployed in mobile devices. It is also well-known that LBS applications may
result in severe privacy concerns by collecting sensitive locations. A strong
privacy model ''local differential privacy'' (LDP) has been recently deployed
in many different applications (e.g., Google RAPPOR, iOS, and Microsoft
Telemetry) but not effective for LBS applications due to the low utility of
existing LDP mechanisms. To address such deficiency, we propose the first LDP
framework for a variety of location-based services (namely ''L-SRR''), which
privately collects and analyzes user locations with high utility. Specifically,
we design a novel randomization mechanism ''Staircase Randomized Response''
(SRR) and extend the empirical estimation to significantly boost the utility
for SRR in different LBS applications (e.g., traffic density estimation, and
k-nearest neighbors). We have conducted extensive experiments on four real LBS
datasets by benchmarking with other LDP schemes in practical applications. The
experimental results demonstrate that L-SRR significantly outperforms them.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 20:53:18 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Wang",
"Han",
""
],
[
"Hong",
"Hanbin",
""
],
[
"Xiong",
"Li",
""
],
[
"Qin",
"Zhan",
""
],
[
"Hong",
"Yuan",
""
]
] |
new_dataset
| 0.993746 |
2209.15153
|
Zixin Zou
|
Zi-Xin Zou, Shi-Sheng Huang, Yan-Pei Cao, Tai-Jiang Mu, Ying Shan,
Hongbo Fu
|
MonoNeuralFusion: Online Monocular Neural 3D Reconstruction with
Geometric Priors
|
12 pages, 12 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-fidelity 3D scene reconstruction from monocular videos continues to be
challenging, especially for complete and fine-grained geometry reconstruction.
The previous 3D reconstruction approaches with neural implicit representations
have shown a promising ability for complete scene reconstruction, while their
results are often over-smooth and lack enough geometric details. This paper
introduces a novel neural implicit scene representation with volume rendering
for high-fidelity online 3D scene reconstruction from monocular videos. For
fine-grained reconstruction, our key insight is to incorporate geometric priors
into both the neural implicit scene representation and neural volume rendering,
thus leading to an effective geometry learning mechanism based on volume
rendering optimization. Benefiting from this, we present MonoNeuralFusion to
perform the online neural 3D reconstruction from monocular videos, by which the
3D scene geometry is efficiently generated and optimized during the on-the-fly
3D monocular scanning. The extensive comparisons with state-of-the-art
approaches show that our MonoNeuralFusion consistently generates much better
complete and fine-grained reconstruction results, both quantitatively and
qualitatively.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 00:44:26 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Zou",
"Zi-Xin",
""
],
[
"Huang",
"Shi-Sheng",
""
],
[
"Cao",
"Yan-Pei",
""
],
[
"Mu",
"Tai-Jiang",
""
],
[
"Shan",
"Ying",
""
],
[
"Fu",
"Hongbo",
""
]
] |
new_dataset
| 0.978131 |
2209.15164
|
Jianzong Wang
|
Denghao Li, Yuqiao Zeng, Jianzong Wang, Lingwei Kong, Zhangcheng
Huang, Ning Cheng, Xiaoyang Qu, Jing Xiao
|
Blur the Linguistic Boundary: Interpreting Chinese Buddhist Sutra in
English via Neural Machine Translation
|
This paper is accepted by ICTAI 2022. The 34th IEEE International
Conference on Tools with Artificial Intelligence (ICTAI)
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Buddhism is an influential religion with a long-standing history and profound
philosophy. Nowadays, more and more people worldwide aspire to learn the
essence of Buddhism, attaching importance to Buddhism dissemination. However,
Buddhist scriptures written in classical Chinese are obscure to most people and
machine translation applications. For instance, general Chinese-English neural
machine translation (NMT) fails in this domain. In this paper, we proposed a
novel approach to building a practical NMT model for Buddhist scriptures. The
performance of our translation pipeline achieved highly promising results in
ablation experiments under three criteria.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 01:26:05 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Li",
"Denghao",
""
],
[
"Zeng",
"Yuqiao",
""
],
[
"Wang",
"Jianzong",
""
],
[
"Kong",
"Lingwei",
""
],
[
"Huang",
"Zhangcheng",
""
],
[
"Cheng",
"Ning",
""
],
[
"Qu",
"Xiaoyang",
""
],
[
"Xiao",
"Jing",
""
]
] |
new_dataset
| 0.99838 |
2209.15169
|
Roberto Bolli Jr.
|
Roberto Bolli, Jr., Paolo Bonato, and Harry Asada
|
Handle Anywhere: A Mobile Robot Arm for Providing Bodily Support to
Elderly Persons
|
8 pages, 10 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Age-related loss of mobility and increased risk of falling remain important
obstacles toward facilitating aging-in-place. Many elderly people lack the
coordination and strength necessary to perform common movements around their
home, such as getting out of bed or stepping into a bathtub. The traditional
solution has been to install grab bars on various surfaces; however, these are
often not placed in optimal locations due to feasibility constraints in room
layout. In this paper, we present a mobile robot that provides an older adult
with a handle anywhere in space - "handle anywhere". The robot consists of an
omnidirectional mobile base attached to a repositionable handle. We analyze the
postural changes in four activities of daily living and determine, in each, the
body pose that requires the maximal muscle effort. Using a simple model of the
human body, we develop a methodology to optimally place the handle to provide
the maximum support for the elderly person at the point of most effort. Our
model is validated with experimental trials. We discuss how the robotic device
could be used to enhance patient mobility and reduce the incidence of falls.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 01:45:43 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Bolli,",
"Roberto",
"Jr."
],
[
"Bonato",
"Paolo",
""
],
[
"Asada",
"Harry",
""
]
] |
new_dataset
| 0.99822 |
2209.15195
|
Xiuzhen Guo
|
Xiuzhen Guo, Yuan He, Zihao Yu, Jiacheng Zhang, Yunhao Liu, Longfei
Shangguan
|
RF-Transformer: A Unified Backscatter Radio Hardware Abstraction
| null | null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents RF-Transformer, a unified backscatter radio hardware
abstraction that allows a low-power IoT device to directly communicate with
heterogeneous wireless receivers at the minimum power consumption. Unlike
existing backscatter systems that are tailored to a specific wireless
communication protocol, RF-Transformer provides a programmable interface to the
micro-controller, allowing IoT devices to synthesize different types of
protocol-compliant backscatter signals sharing radically different PHY-layer
designs. To show the efficacy of our design, we implement a PCB prototype of
RF-Transformer on 2.4 GHz ISM band and showcase its capability on generating
standard ZigBee, Bluetooth, LoRa, and Wi-Fi 802.11b/g/n/ac packets. Our
extensive field studies show that RF-Transformer achieves 23.8 Mbps, 247.1
Kbps, 986.5 Kbps, and 27.3 Kbps throughput when generating standard Wi-Fi,
ZigBee, Bluetooth, and LoRa signals while consuming 7.6-74.2× less power than
their active counterparts. Our ASIC simulation based on the 65-nm CMOS process
shows that the power gain of RF-Transformer can further grow to 92-678×. We
further integrate RF-Transformer with pressure sensors and present a case study
on detecting foot traffic density in hallways. Our 7-day case studies
demonstrate that RF-Transformer can reliably transmit sensor data to a commodity
gateway by synthesizing LoRa packets on top of Wi-Fi signals. Our experimental
results also verify the compatibility of RF-Transformer with commodity
receivers. Code and hardware schematics can be found at:
https://github.com/LeFsCC/RF-Transformer.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 02:50:55 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Guo",
"Xiuzhen",
""
],
[
"He",
"Yuan",
""
],
[
"Yu",
"Zihao",
""
],
[
"Zhang",
"Jiacheng",
""
],
[
"Liu",
"Yunhao",
""
],
[
"Shangguan",
"Longfei",
""
]
] |
new_dataset
| 0.991842 |
2209.15198
|
Songzhou Yang
|
Songzhou Yang, Yuan He, Xiaolong Zheng
|
FoVR: Attention-based VR Streaming through Bandwidth-limited Wireless
Networks
| null | null |
10.1109/SAHCN.2019.8824804
| null |
cs.NI cs.MM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Consumer Virtual Reality (VR) has been widely used in various application
areas, such as entertainment and medicine. In spite of the superb immersion
experience, to enable high-quality VR on untethered mobile devices remains an
extremely challenging task. The high bandwidth demands of VR streaming
generally overburden a conventional wireless connection, which affects the user
experience and in turn limits the usability of VR in practice. In this paper,
we propose FoVR, attention-based hierarchical VR streaming through
bandwidth-limited wireless networks. The design of FoVR stems from the insight
that human vision is hierarchical, so that different areas in the field of
view (FoV) can be served with VR content of different qualities. By exploiting
the gaze tracking capacity of the VR devices, FoVR is able to accurately
predict the user's attention so that the streaming of hierarchical VR can be
appropriately scheduled. In this way, FoVR significantly reduces the bandwidth
cost and computing cost while keeping high quality of user experience. We
implement FoVR on a commercial VR device and evaluate its performance in
various scenarios. The experiment results show that FoVR reduces the bandwidth
cost by 88.9% and 76.2%, respectively, compared to the original VR streaming and
the state-of-the-art approach.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 02:58:08 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Yang",
"Songzhou",
""
],
[
"He",
"Yuan",
""
],
[
"Zheng",
"Xiaolong",
""
]
] |
new_dataset
| 0.988343 |
2209.15258
|
Felicia Ruppel
|
Felicia Ruppel, Florian Faion, Claudius Gl\"aser, Klaus Dietmayer
|
Transformers for Object Detection in Large Point Clouds
|
Accepted for publication at the 2022 25th IEEE International
Conference on Intelligent Transportation Systems (ITSC 2022), Sep 18- Oct 12,
2022, in Macau, China
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present TransLPC, a novel detection model for large point clouds that is
based on a transformer architecture. While object detection with transformers
has been an active field of research, it has proved difficult to apply such
models to point clouds that span a large area, e.g. the lidar or radar point
clouds common in autonomous driving. TransLPC is able to remedy these
issues: The structure of the transformer model is modified to allow for larger
input sequence lengths, which are sufficient for large point clouds. Besides
this, we propose a novel query refinement technique to improve detection
accuracy, while retaining a memory-friendly number of transformer decoder
queries. The queries are repositioned between layers, moving them closer to the
bounding box they are estimating, in an efficient manner. This simple technique
has a significant effect on detection accuracy, which is evaluated on the
challenging nuScenes dataset on real-world lidar data. Besides this, the
proposed method is compatible with existing transformer-based solutions that
require object detection, e.g. for joint multi-object tracking and detection,
and enables them to be used in conjunction with large point clouds.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 06:35:43 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Ruppel",
"Felicia",
""
],
[
"Faion",
"Florian",
""
],
[
"Gläser",
"Claudius",
""
],
[
"Dietmayer",
"Klaus",
""
]
] |
new_dataset
| 0.998531 |
2209.15270
|
Weichong Yin
|
Bin Shan, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
|
ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text
Pre-training
|
14 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent Vision-Language Pre-trained (VLP) models based on dual encoder have
attracted extensive attention from academia and industry due to their superior
performance on various cross-modal tasks and high computational efficiency.
They attempt to learn cross-modal representation using contrastive learning on
image-text pairs, however, the built inter-modal correlations only rely on a
single view for each modality. Actually, an image or a text contains various
potential views, just as humans could capture a real-world scene via diverse
descriptions or photos. In this paper, we propose ERNIE-ViL 2.0, a Multi-View
Contrastive learning framework to build intra-modal and inter-modal
correlations between diverse views simultaneously, aiming at learning a more
robust cross-modal representation. Specifically, we construct multiple views
within each modality to learn the intra-modal correlation for enhancing the
single-modal representation. Besides the inherent visual/textual views, we
construct sequences of object tags as a special textual view to narrow the
cross-modal semantic gap on noisy image-text pairs. Pre-trained with 29M
publicly available datasets, ERNIE-ViL 2.0 achieves competitive results on
English cross-modal retrieval. Additionally, to generalize our method to
Chinese cross-modal tasks, we train ERNIE-ViL 2.0 through scaling up the
pre-training datasets to 1.5B Chinese image-text pairs, resulting in
significant improvements compared to previous SOTA results on Chinese
cross-modal retrieval. We release our pre-trained models in
https://github.com/PaddlePaddle/ERNIE.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 07:20:07 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Shan",
"Bin",
""
],
[
"Yin",
"Weichong",
""
],
[
"Sun",
"Yu",
""
],
[
"Tian",
"Hao",
""
],
[
"Wu",
"Hua",
""
],
[
"Wang",
"Haifeng",
""
]
] |
new_dataset
| 0.999037 |
2209.15296
|
Qiuchen Yu
|
Qiuchen Yu, Ruohua Zhou
|
Wake Word Detection Based on Res2Net
| null | null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter proposes a new wake word detection system based on Res2Net. As a
variant of ResNet, Res2Net was first applied to object detection. Res2Net
realizes multiple feature scales by increasing possible receptive fields. This
multiple scaling mechanism significantly improves the detection ability of wake
words with different durations. Compared with the ResNet-based model, Res2Net
also significantly reduces the model size and is more suitable for detecting
wake words. The proposed system can determine the positions of wake words from
the audio stream without any additional assistance. The proposed method is
verified on the Mobvoi dataset containing two wake words. At a false alarm rate
of 0.5 per hour, the system reduced the false rejection of the two wake words
by more than 12% over prior works.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 08:10:16 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Yu",
"Qiuchen",
""
],
[
"Zhou",
"Ruohua",
""
]
] |
new_dataset
| 0.984773 |
2209.15348
|
Xiuzhen Guo
|
Xiuzhen Guo, Longfei Shangguan, Yuan He, Nan Jing, Jiacheng Zhang,
Haotian Jiang, Yunhao Liu
|
Saiyan: Design and Implementation of a Low-power Demodulator for LoRa
Backscatter Systems
| null | null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
The radio range of backscatter systems continues growing as new wireless
communication primitives are continuously invented. Nevertheless, both the bit
error rate and the packet loss rate of backscatter signals increase rapidly
with the radio range, thereby necessitating the cooperation between the access
point and the backscatter tags through a feedback loop. Unfortunately, the
low-power nature of backscatter tags limits their ability to demodulate
feedback signals from a remote access point and scales down to such
circumstances. This paper presents Saiyan, an ultra-low-power demodulator for
long-range LoRa backscatter systems. With Saiyan, a backscatter tag can
demodulate feedback signals from a remote access point with moderate power
consumption and then perform an immediate packet retransmission in the presence
of packet loss. Moreover, Saiyan enables rate adaptation and channel hopping, two
PHY-layer operations that are important to channel efficiency yet unavailable
on long-range backscatter systems. We prototype Saiyan on a two-layer PCB board
and evaluate its performance in different environments. Results show that
Saiyan achieves a 5x gain in demodulation range, compared with
state-of-the-art systems. Our ASIC simulation shows that the power consumption
of Saiyan is around 93.2 uW. Code and hardware schematics can be found at:
https://github.com/ZangJac/Saiyan.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 10:11:21 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Guo",
"Xiuzhen",
""
],
[
"Shangguan",
"Longfei",
""
],
[
"He",
"Yuan",
""
],
[
"Jing",
"Nan",
""
],
[
"Zhang",
"Jiacheng",
""
],
[
"Jiang",
"Haotian",
""
],
[
"Liu",
"Yunhao",
""
]
] |
new_dataset
| 0.990065 |
2209.15390
|
Aaron Saxton
|
Aaron Saxton and Stephen Squaire
|
Deploying a sharded MongoDB cluster as a queued job on a shared HPC
architecture
| null | null | null | null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
Data stores are the foundation on which data science, in all its variations,
is built. They provide a queryable interface to structured and
unstructured data. Data science often starts by leveraging these query features
to perform initial data preparation. However, most data stores are designed to
run continuously to service disparate user requests with little or no downtime.
Many HPC architectures process user requests via a job queue scheduler
and maintain a shared filesystem to store a job's persistent data. We deploy a
MongoDB sharded cluster with a run script that is designed to run a data
science workload concurrently. As our test piece, we run data ingest and data
queries to measure the performance with different configurations on the Blue
Waters supercomputer.
|
[
{
"version": "v1",
"created": "Tue, 6 Sep 2022 17:07:12 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Saxton",
"Aaron",
""
],
[
"Squaire",
"Stephen",
""
]
] |
new_dataset
| 0.965755 |
2209.15407
|
Zihao Yu
|
Zihao Yu, Chengkun Jiang, Yuan He, Xiaolong Zheng, Xiuzhen Guo
|
Crocs: Cross-Technology Clock Synchronization for WiFi and ZigBee
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Clock synchronization is a key function in embedded wireless systems and
networks. This issue is equally important and more challenging in IoT systems
nowadays, which often include heterogeneous wireless devices that follow
different wireless standards. Conventional solutions to this problem employ
gateway-based indirect synchronization, which suffers low accuracy. This paper
for the first time studies the problem of cross-technology clock
synchronization. Our proposal called Crocs synchronizes WiFi and ZigBee devices
by direct cross-technology communication. Crocs decouples the synchronization
signal from the transmission of a timestamp. By incorporating a barker-code
based beacon for time alignment and cross-technology transmission of
timestamps, Crocs achieves robust and accurate synchronization among WiFi and
ZigBee devices, with the synchronization error lower than 1 millisecond. We
further implement different cross-technology communication
methods in Crocs and provide insights regarding the achievable
accuracy and expected overhead.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 12:08:08 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Yu",
"Zihao",
""
],
[
"Jiang",
"Chengkun",
""
],
[
"He",
"Yuan",
""
],
[
"Zheng",
"Xiaolong",
""
],
[
"Guo",
"Xiuzhen",
""
]
] |
new_dataset
| 0.979376 |
2209.15418
|
Kshama Dwarakanath
|
Kshama Dwarakanath, Svitlana S Vyetrenko, Tucker Balch
|
Equitable Marketplace Mechanism Design
| null | null | null | null |
cs.GT cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a trading marketplace that is populated by traders with diverse
trading strategies and objectives. The marketplace allows the suppliers to list
their goods and facilitates matching between buyers and sellers. In return,
such a marketplace typically charges fees for facilitating trade. The goal of
this work is to design a dynamic fee schedule for the marketplace that is
equitable and profitable to all traders while being profitable to the
marketplace at the same time (from charging fees). Since the traders adapt
their strategies to the fee schedule, we present a reinforcement learning
framework for simultaneously learning a marketplace fee schedule and trading
strategies that adapt to this fee schedule using a weighted optimization
objective of profits and equitability. We illustrate the use of the proposed
approach in detail on a simulated stock exchange with different types of
investors, specifically market makers and consumer investors. As we vary the
equitability weights across different investor classes, we see that the learnt
exchange fee schedule starts favoring the class of investors with the highest
weight. We further discuss the observed insights from the simulated stock
exchange in light of the general framework of equitable marketplace mechanism
design.
|
[
{
"version": "v1",
"created": "Thu, 22 Sep 2022 20:03:34 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Dwarakanath",
"Kshama",
""
],
[
"Vyetrenko",
"Svitlana S",
""
],
[
"Balch",
"Tucker",
""
]
] |
new_dataset
| 0.959472 |
2209.15457
|
EPTCS
|
Surya Murthy (University of Illinois, Urbana-Champaign), Natasha A.
Neogi (NASA Langley Research Center), Suda Bharadwaj (Skygrid, Inc.)
|
Scheduling for Urban Air Mobility using Safe Learning
|
In Proceedings FMAS2022 ASYDE2022, arXiv:2209.13181
|
EPTCS 371, 2022, pp. 86-102
|
10.4204/EPTCS.371.7
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This work considers the scheduling problem for Urban Air Mobility (UAM)
vehicles travelling between origin-destination pairs with both hard and soft
trip deadlines. Each route is described by a discrete probability distribution
over trip completion times (or delay) and over inter-arrival times of requests
(or demand) for the route along with a fixed hard or soft deadline. Soft
deadlines carry a cost that is incurred when the deadline is missed. An online,
safe scheduler is developed that ensures that hard deadlines are never missed,
and that average cost of missing soft deadlines is minimized. The system is
modelled as a Markov Decision Process (MDP) and safe model-based learning is
used to find the probabilistic distributions over route delays and demand.
Monte Carlo Tree Search (MCTS) Earliest Deadline First (EDF) is used to safely
explore the learned models in an online fashion and develop a near-optimal
non-preemptive scheduling policy. These results are compared with Value
Iteration (VI) and MCTS (Random) scheduling solutions.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 12:23:38 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Murthy",
"Surya",
"",
"University of Illinois, Urbana-Champaign"
],
[
"Neogi",
"Natasha A.",
"",
"NASA Langley Research Center"
],
[
"Bharadwaj",
"Suda",
"",
"Skygrid, Inc."
]
] |
new_dataset
| 0.997062 |
2209.15474
|
Jag Mohan Singh
|
Jag Mohan Singh, Raghavendra Ramachandra
|
Reliable Face Morphing Attack Detection in On-The-Fly Border Control
Scenario with Variation in Image Resolution and Capture Distance
|
The paper is accepted at the International Joint Conference on
Biometrics (IJCB) 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Face Recognition Systems (FRS) are vulnerable to various attacks performed
directly and indirectly. Among these attacks, face morphing attacks are highly
potential in deceiving automatic FRS and human observers and indicate a severe
security threat, especially in the border control scenario. This work presents
a face morphing attack detection, especially in the On-The-Fly (OTF) Automatic
Border Control (ABC) scenario. We present a novel Differential-MAD (D-MAD)
algorithm based on the spherical interpolation and hierarchical fusion of deep
features computed from six different pre-trained deep Convolutional Neural
Networks (CNNs). Extensive experiments are carried out on the newly generated
face morphing dataset (SCFace-Morph) based on the publicly available SCFace
dataset by considering the real-life scenario of Automatic Border Control (ABC)
gates. Experimental protocols are designed to benchmark the proposed and
state-of-the-art (SOTA) D-MAD techniques for different camera resolutions and
capture distances. Obtained results have indicated the superior performance of
the proposed D-MAD method compared to the existing methods.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 13:58:43 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Singh",
"Jag Mohan",
""
],
[
"Ramachandra",
"Raghavendra",
""
]
] |
new_dataset
| 0.996488 |
2209.15539
|
No\'emie Jaquier
|
No\'emie Jaquier and Tamim Asfour
|
Riemannian geometry as a unifying theory for robot motion learning and
control
|
Published as a blue sky paper at ISRR'22. 8 pages, 2 figures. Video
at https://youtu.be/XblzcKRRITE
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Riemannian geometry is a mathematical field which has been the cornerstone of
revolutionary scientific discoveries such as the theory of general relativity.
Despite early uses in robot design and recent applications for exploiting data
with specific geometries, it mostly remains overlooked in robotics. With this
blue sky paper, we argue that Riemannian geometry provides the most suitable
tools to analyze and generate well-coordinated, energy-efficient motions of
robots with many degrees of freedom. Via preliminary solutions and novel
research directions, we discuss how Riemannian geometry may be leveraged to
design and combine physically-meaningful synergies for robotics, and how this
theory also opens the door to coupling motion synergies with perceptual inputs.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 15:40:00 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Jaquier",
"Noémie",
""
],
[
"Asfour",
"Tamim",
""
]
] |
new_dataset
| 0.95848 |
2209.15614
|
S Ashwin Hebbar
|
S Ashwin Hebbar, Rajesh K Mishra, Sravan Kumar Ankireddy, Ashok V
Makkuva, Hyeji Kim, Pramod Viswanath
|
TinyTurbo: Efficient Turbo Decoders on Edge
|
10 pages, 6 figures. Published at the 2022 IEEE International
Symposium on Information Theory (ISIT)
|
"TinyTurbo: Efficient Turbo Decoders on Edge," 2022 IEEE
International Symposium on Information Theory (ISIT), 2022, pp. 2797-2802
|
10.1109/ISIT50566.2022.9834589
| null |
cs.IT cs.LG eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a neural-augmented decoder for Turbo codes called
TINYTURBO . TINYTURBO has complexity comparable to the classical max-log-MAP
algorithm but has much better reliability than the max-log-MAP baseline and
performs close to the MAP algorithm. We show that TINYTURBO exhibits strong
robustness on a variety of practical channels of interest, such as EPA and EVA
channels, which are included in the LTE standards. We also show that TINYTURBO
strongly generalizes across different rates, blocklengths, and trellises. We
verify the reliability and efficiency of TINYTURBO via over-the-air
experiments.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 17:38:06 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Hebbar",
"S Ashwin",
""
],
[
"Mishra",
"Rajesh K",
""
],
[
"Ankireddy",
"Sravan Kumar",
""
],
[
"Makkuva",
"Ashok V",
""
],
[
"Kim",
"Hyeji",
""
],
[
"Viswanath",
"Pramod",
""
]
] |
new_dataset
| 0.999375 |
2209.15626
|
Hsin-Yu Liu
|
Hsin-Yu Liu (1), Xiaohan Fu (1), Bharathan Balaji (2), Rajesh Gupta
(1), and Dezhi Hong (2) ((1) University of California, San Diego, (2) Amazon)
|
B2RL: An open-source Dataset for Building Batch Reinforcement Learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Batch reinforcement learning (BRL) is an emerging research area in the RL
community. It learns exclusively from static datasets (i.e. replay buffers)
without interaction with the environment. In the offline settings, existing
replay experiences are used as prior knowledge for BRL models to find the
optimal policy. Thus, generating replay buffers is crucial for BRL model
benchmark. In our B2RL (Building Batch RL) dataset, we collected real-world
data from our building management systems, as well as buffers generated by
several behavioral policies in simulation environments. We believe it could
help building experts with BRL research. To the best of our knowledge, we are the
first to open-source building datasets for the purpose of BRL learning.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 17:54:42 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Liu",
"Hsin-Yu",
"",
"University of California, San Diego"
],
[
"Fu",
"Xiaohan",
"",
"University of California, San Diego"
],
[
"Balaji",
"Bharathan",
"",
"Amazon"
],
[
"Gupta",
"Rajesh",
"",
"University of California, San Diego"
],
[
"Hong",
"Dezhi",
"",
"Amazon"
]
] |
new_dataset
| 0.999763 |
2209.15632
|
Daxuan Ren
|
Daxuan Ren, Jianmin Zheng, Jianfei Cai, Jiatong Li, and Junzhe Zhang
|
ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing
|
Accepted to ECCV 2022
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Sketch-and-extrude is a common and intuitive modeling process in computer
aided design. This paper studies the problem of learning the shape given in the
form of point clouds by inverse sketch-and-extrude. We present ExtrudeNet, an
unsupervised end-to-end network for discovering sketch and extrude from point
clouds. Behind ExtrudeNet are two new technical components: 1) an effective
representation for sketch and extrude, which can model extrusion with freeform
sketches and conventional cylinder and box primitives as well; and 2) a
numerical method for computing the signed distance field which is used in the
network learning. This is the first attempt that uses machine learning to
reverse engineer the sketch-and-extrude modeling process of a shape in an
unsupervised fashion. ExtrudeNet not only outputs a compact, editable and
interpretable representation of the shape that can be seamlessly integrated
into modern CAD software, but also aligns with the standard CAD modeling
process facilitating various editing applications, which distinguishes our work
from existing shape parsing research. Code is released at
https://github.com/kimren227/ExtrudeNet.
|
[
{
"version": "v1",
"created": "Fri, 30 Sep 2022 17:58:11 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Ren",
"Daxuan",
""
],
[
"Zheng",
"Jianmin",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Li",
"Jiatong",
""
],
[
"Zhang",
"Junzhe",
""
]
] |
new_dataset
| 0.998658 |
2209.15640
|
Xiao Fu
|
Xiao Fu, Xin Yuan, Jinglu Hu
|
HSD: A hierarchical singing annotation dataset
| null | null | null | null |
cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Commonly music has an obvious hierarchical structure, especially for the
singing parts which usually act as the main melody in pop songs. However, most
of the current singing annotation datasets only record symbolic information of
music notes, ignoring the structure of music. In this paper, we propose a
hierarchical singing annotation dataset that consists of 68 pop songs from
Youtube. This dataset records the onset/offset time, pitch, duration, and lyric
of each musical note in an enhanced LyRiCs format to present the hierarchical
structure of music. We annotate each song in a two-stage process: first, create
initial labels with the corresponding musical notation and lyrics file; second,
manually calibrate these labels referring to the raw audio. We mainly validate
the labeling accuracy of the proposed dataset by comparing it with an automatic
singing transcription (AST) dataset. The result indicates that the proposed
dataset reaches the labeling accuracy of AST datasets.
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 17:00:07 GMT"
}
] | 2022-10-03T00:00:00 |
[
[
"Fu",
"Xiao",
""
],
[
"Yuan",
"Xin",
""
],
[
"Hu",
"Jinglu",
""
]
] |
new_dataset
| 0.999618 |
1807.05385
|
Alejandro D\'iaz-Caro
|
Alejandro D\'iaz-Caro and Marcos Villagra
|
Classically Time-Controlled Quantum Automata: Definition and Properties
|
Long revisited version of LNCS 11324:266-278, 2018 (TPNC 2018)
| null | null | null |
cs.FL cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce classically time-controlled quantum automata or
CTQA, which is a reasonable modification of Moore-Crutchfield quantum finite
automata that uses time-dependent evolution and a "scheduler" defining how long
each Hamiltonian will run. Surprisingly enough, time-dependent evolution
provides a significant change in the computational power of quantum automata
with respect to a discrete quantum model. Indeed, we show that if a scheduler
is not computationally restricted, then a CTQA can decide the Halting problem.
In order to unearth the computational capabilities of CTQAs we study the case
of a computationally restricted scheduler. In particular, we show that
depending on the type of restriction imposed on the scheduler, a CTQA can (i)
recognize non-regular languages with cut-point, even in the presence of
Karp-Lipton advice, and (ii) recognize non-regular promise languages with
bounded-error. Furthermore, we study the cutpoint-union of cutpoint languages
by introducing a new model of Moore-Crutchfield quantum finite automata with a
rotating tape head. CTQA presents itself as a new model of computation that
provides a different approach to a formal study of "classical control, quantum
data" schemes in quantum computing.
|
[
{
"version": "v1",
"created": "Sat, 14 Jul 2018 11:57:37 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Sep 2018 18:49:31 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Jan 2020 00:05:00 GMT"
},
{
"version": "v4",
"created": "Sun, 3 May 2020 14:22:21 GMT"
},
{
"version": "v5",
"created": "Thu, 1 Oct 2020 22:52:05 GMT"
},
{
"version": "v6",
"created": "Fri, 18 Dec 2020 17:25:26 GMT"
},
{
"version": "v7",
"created": "Tue, 8 Feb 2022 16:59:36 GMT"
},
{
"version": "v8",
"created": "Sat, 12 Feb 2022 13:32:35 GMT"
},
{
"version": "v9",
"created": "Thu, 29 Sep 2022 13:32:42 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Díaz-Caro",
"Alejandro",
""
],
[
"Villagra",
"Marcos",
""
]
] |
new_dataset
| 0.999502 |
2107.04010
|
Alise Danielle Midtfjord
|
Alise Danielle Midtfjord, Riccardo De Bin and Arne Bang Huseby
|
A Decision Support System for Safer Airplane Landings: Predicting Runway
Conditions Using XGBoost and Explainable AI
| null |
Cold Regions Science and Technology, Volume 199, 2022
|
10.1016/j.coldregions.2022.103556
| null |
cs.CY cs.LG stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The presence of snow and ice on runway surfaces reduces the available
tire-pavement friction needed for retardation and directional control and
causes potential economic and safety threats for the aviation industry during
the winter seasons. To activate appropriate safety procedures, pilots need
accurate and timely information on the actual runway surface conditions. In
this study, XGBoost is used to create a combined runway assessment system,
which includes a classification model to identify slippery conditions and a
regression model to predict the level of slipperiness. The models are trained
on weather data and runway reports. The runway surface conditions are
represented by the tire-pavement friction coefficient, which is estimated from
flight sensor data from landing aircraft. The XGBoost models are combined with
SHAP approximations to provide a reliable decision support system for airport
operators, which can contribute to safer and more economic operations of
airport runways. To evaluate the performance of the prediction models, they are
compared to several state-of-the-art runway assessment methods. The XGBoost
models identify slippery runway conditions with a ROC AUC of 0.95, predict the
friction coefficient with a MAE of 0.0254, and outperforms all the previous
methods. The results show the strong abilities of machine learning methods to
model complex, physical phenomena with a good accuracy. Published version:
https://doi.org/10.1016/j.coldregions.2022.103556.
|
[
{
"version": "v1",
"created": "Thu, 1 Jul 2021 11:01:13 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 14:54:08 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Midtfjord",
"Alise Danielle",
""
],
[
"De Bin",
"Riccardo",
""
],
[
"Huseby",
"Arne Bang",
""
]
] |
new_dataset
| 0.981085 |
2110.08343
|
Evgeny Osipov
|
Evgeny Osipov, Sachin Kahawala, Dilantha Haputhanthri, Thimal
Kempitiya, Daswin De Silva, Damminda Alahakoon, Denis Kleyko
|
Hyperseed: Unsupervised Learning with Vector Symbolic Architectures
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivated by recent innovations in biologically-inspired neuromorphic
hardware, this article presents a novel unsupervised machine learning algorithm
named Hyperseed that draws on the principles of Vector Symbolic Architectures
(VSA) for fast learning of a topology preserving feature map of unlabelled
data. It relies on two major operations of VSA, binding and bundling. The
algorithmic part of Hyperseed is expressed within Fourier Holographic Reduced
Representations model, which is specifically suited for implementation on
spiking neuromorphic hardware. The two primary contributions of the Hyperseed
algorithm are, few-shot learning and a learning rule based on single vector
operation. These properties are empirically evaluated on synthetic datasets as
well as on illustrative benchmark use-cases, IRIS classification, and a
language identification task using n-gram statistics. The results of these
experiments confirm the capabilities of Hyperseed and its applications in
neuromorphic hardware.
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 20:05:43 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 09:55:31 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Osipov",
"Evgeny",
""
],
[
"Kahawala",
"Sachin",
""
],
[
"Haputhanthri",
"Dilantha",
""
],
[
"Kempitiya",
"Thimal",
""
],
[
"De Silva",
"Daswin",
""
],
[
"Alahakoon",
"Damminda",
""
],
[
"Kleyko",
"Denis",
""
]
] |
new_dataset
| 0.987435 |
2110.08387
|
Jiacheng Liu
|
Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le
Bras, Yejin Choi, Hannaneh Hajishirzi
|
Generated Knowledge Prompting for Commonsense Reasoning
|
ACL 2022 main conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
It remains an open question whether incorporating external knowledge benefits
commonsense reasoning while maintaining the flexibility of pretrained sequence
models. To investigate this question, we develop generated knowledge prompting,
which consists of generating knowledge from a language model, then providing
the knowledge as additional input when answering a question. Our method does
not require task-specific supervision for knowledge integration, or access to a
structured knowledge base, yet it improves performance of large-scale,
state-of-the-art models on four commonsense reasoning tasks, achieving
state-of-the-art results on numerical commonsense (NumerSense), general
commonsense (CommonsenseQA 2.0), and scientific commonsense (QASC) benchmarks.
Generated knowledge prompting highlights large-scale language models as
flexible sources of external knowledge for improving commonsense reasoning. Our
code is available at https://github.com/liujch1998/GKP
|
[
{
"version": "v1",
"created": "Fri, 15 Oct 2021 21:58:03 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2022 00:04:59 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Sep 2022 19:24:24 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Liu",
"Jiacheng",
""
],
[
"Liu",
"Alisa",
""
],
[
"Lu",
"Ximing",
""
],
[
"Welleck",
"Sean",
""
],
[
"West",
"Peter",
""
],
[
"Bras",
"Ronan Le",
""
],
[
"Choi",
"Yejin",
""
],
[
"Hajishirzi",
"Hannaneh",
""
]
] |
new_dataset
| 0.997943 |
2112.00804
|
Brian Chen
|
Brian Chen, Ramprasaath R. Selvaraju, Shih-Fu Chang, Juan Carlos
Niebles, and Nikhil Naik
|
PreViTS: Contrastive Pretraining with Video Tracking Supervision
|
To be presented at WACV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Videos are a rich source for self-supervised learning (SSL) of visual
representations due to the presence of natural temporal transformations of
objects. However, current methods typically randomly sample video clips for
learning, which results in an imperfect supervisory signal. In this work, we
propose PreViTS, an SSL framework that utilizes an unsupervised tracking signal
for selecting clips containing the same object, which helps better utilize
temporal transformations of objects. PreViTS further uses the tracking signal
to spatially constrain the frame regions to learn from and trains the model to
locate meaningful objects by providing supervision on Grad-CAM attention maps.
To evaluate our approach, we train a momentum contrastive (MoCo) encoder on
VGG-Sound and Kinetics-400 datasets with PreViTS. Training with PreViTS
outperforms representations learnt by contrastive strategy alone on video
downstream tasks, obtaining state-of-the-art performance on action
classification. PreViTS helps learn feature representations that are more
robust to changes in background and context, as seen by experiments on datasets
with background changes. Learning from large-scale videos with PreViTS could
lead to more accurate and robust visual feature representations.
|
[
{
"version": "v1",
"created": "Wed, 1 Dec 2021 19:49:57 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2022 18:35:14 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Chen",
"Brian",
""
],
[
"Selvaraju",
"Ramprasaath R.",
""
],
[
"Chang",
"Shih-Fu",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Naik",
"Nikhil",
""
]
] |
new_dataset
| 0.995398 |
2202.06767
|
Jiaxi Gu
|
Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Minzhe Niu, Xiaodan
Liang, Lewei Yao, Runhui Huang, Wei Zhang, Xin Jiang, Chunjing Xu, Hang Xu
|
Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training
Benchmark
|
Accepted by NeurIPS 2022 Track Datasets and Benchmarks
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-Language Pre-training (VLP) models have shown remarkable performance
on various downstream tasks. Their success heavily relies on the scale of
pre-trained cross-modal datasets. However, the lack of large-scale datasets and
benchmarks in Chinese hinders the development of Chinese VLP models and broader
multilingual applications. In this work, we release a large-scale Chinese
cross-modal dataset named Wukong, which contains 100 million Chinese image-text
pairs collected from the web. Wukong aims to benchmark different multi-modal
pre-training methods to facilitate the VLP research and community development.
Furthermore, we release a group of models pre-trained with various image
encoders (ViT-B/ViT-L/SwinT) and also apply advanced pre-training techniques
into VLP such as locked-image text tuning, token-wise similarity in contrastive
learning, and reduced-token interaction. Extensive experiments and a
benchmarking of different downstream tasks including a new largest
human-verified image-text test dataset are also provided. Experiments show that
Wukong can serve as a promising Chinese pre-training dataset and benchmark for
different cross-modal learning methods. For the zero-shot image classification
task on 10 datasets, $Wukong_{ViT-L}$ achieves an average accuracy of 73.03%.
For the image-text retrieval task, it achieves a mean recall of 71.6% on
AIC-ICC which is 12.9% higher than WenLan 2.0. Also, our Wukong models are
benchmarked on downstream tasks with other variants on multiple datasets, e.g.,
Flickr8K-CN, Flickr-30K-CN, COCO-CN, etc. More information is available at
https://wukong-dataset.github.io/wukong-dataset/.
|
[
{
"version": "v1",
"created": "Mon, 14 Feb 2022 14:37:15 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Mar 2022 07:11:02 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Jun 2022 03:29:04 GMT"
},
{
"version": "v4",
"created": "Thu, 29 Sep 2022 03:37:02 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Gu",
"Jiaxi",
""
],
[
"Meng",
"Xiaojun",
""
],
[
"Lu",
"Guansong",
""
],
[
"Hou",
"Lu",
""
],
[
"Niu",
"Minzhe",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Yao",
"Lewei",
""
],
[
"Huang",
"Runhui",
""
],
[
"Zhang",
"Wei",
""
],
[
"Jiang",
"Xin",
""
],
[
"Xu",
"Chunjing",
""
],
[
"Xu",
"Hang",
""
]
] |
new_dataset
| 0.999846 |
2203.01769
|
Miao Li
|
Miao Li, Jianzhong Qi, Jey Han Lau
|
PeerSum: A Peer Review Dataset for Abstractive Multi-document
Summarization
|
This is because the paper has changed so much and the arxiv paper no
longer represents the PeerSum
| null | null | null |
cs.IR cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present PeerSum, a new MDS dataset using peer reviews of scientific
publications. Our dataset differs from the existing MDS datasets in that our
summaries (i.e., the meta-reviews) are highly abstractive and they are real
summaries of the source documents (i.e., the reviews) and it also features
disagreements among source documents. We found that current state-of-the-art
MDS models struggle to generate high-quality summaries for PeerSum, offering
new research opportunities.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 15:27:02 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 01:14:20 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Li",
"Miao",
""
],
[
"Qi",
"Jianzhong",
""
],
[
"Lau",
"Jey Han",
""
]
] |
new_dataset
| 0.999631 |
2208.00817
|
Yonghao He
|
Hu Su, Yonghao He, Rui Jiang, Jiabin Zhang, Wei Zou, Bin Fan
|
DSLA: Dynamic smooth label assignment for efficient anchor-free object
detection
|
single column, 33 pages, 7 figures, accepted by Pattern Recognition
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anchor-free detectors basically formulate object detection as dense
classification and regression. For popular anchor-free detectors, it is common
to introduce an individual prediction branch to estimate the quality of
localization. The following inconsistencies are observed when we delve into the
practices of classification and quality estimation. Firstly, for some adjacent
samples which are assigned completely different labels, the trained model would
produce similar classification scores. This violates the training objective and
leads to performance degradation. Secondly, it is found that detected bounding
boxes with higher confidences contrarily have smaller overlaps with the
corresponding ground-truth. Accurately localized bounding boxes would be
suppressed by less accurate ones in the Non-Maximum Suppression (NMS)
procedure. To address the inconsistency problems, the Dynamic Smooth Label
Assignment (DSLA) method is proposed. Based on the concept of centerness
originally developed in FCOS, a smooth assignment strategy is proposed. The
label is smoothed to a continuous value in [0, 1] to make a steady transition
between positive and negative samples. Intersection-of-Union (IoU) is predicted
dynamically during training and is coupled with the smoothed label. The dynamic
smooth label is assigned to supervise the classification branch. Under such
supervision, quality estimation branch is naturally merged into the
classification branch, which simplifies the architecture of anchor-free
detector. Comprehensive experiments are conducted on the MS COCO benchmark. It
is demonstrated that, DSLA can significantly boost the detection accuracy by
alleviating the above inconsistencies for anchor-free detectors. Our codes are
released at https://github.com/YonghaoHe/DSLA.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 12:56:44 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Sep 2022 05:39:56 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Su",
"Hu",
""
],
[
"He",
"Yonghao",
""
],
[
"Jiang",
"Rui",
""
],
[
"Zhang",
"Jiabin",
""
],
[
"Zou",
"Wei",
""
],
[
"Fan",
"Bin",
""
]
] |
new_dataset
| 0.987849 |
2209.14422
|
Md Abdullah Al Alamin
|
Md Abdullah Al Alamin
|
StacerBot: A Stacktrace Search Engine for Stack Overflow
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
We as software developers or researchers very often get stacktrace error
messages while we are trying to write some code or install some packages. Many
times these error messages are very obscure and verbose and do not make much
sense to us. There is a good chance that someone else has also faced similar
issues and shared a similar stacktrace in various online developers' forums.
However, traditional Google searches or other search engines are not very
helpful for finding web pages with similar stacktraces. In order to address this
problem, we have developed a web interface; a better search engine: as an
outcome of this research project where users can find appropriate stack
overflow posts by submitting the whole stacktrace error message. The current
developed solution can serve real-time parallel user queries with top-matched
stack overflow posts within 50 seconds using a server with 300GB RAM. This
study provides a comprehensive overview of the NLP techniques used in this
study and an extensive overview of the research pipeline. This comprehensive
result, limitations, and computational overhead mentioned in this study can be
used by future researchers and software developers to build a better solution
for this same problem or similar large-scale text matching-related tasks.
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 21:20:45 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Alamin",
"Md Abdullah Al",
""
]
] |
new_dataset
| 0.998616 |
2209.14591
|
Mohammad Imrul Jubair
|
Mohammad Imrul Jubair, Ali Ahnaf, Tashfiq Nahiyan Khan, Ullash
Bhattacharjee, Tanjila Joti
|
PerSign: Personalized Bangladeshi Sign Letters Synthesis
|
Accepted at ACM UIST 2022 (poster)
| null |
10.1145/3526114.3558712
| null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Bangladeshi Sign Language (BdSL) - like other sign languages - is tough to
learn for general people, especially when it comes to expressing letters. In
this poster, we propose PerSign, a system that can reproduce a person's image
by introducing sign gestures in it. We make this operation personalized, which
means the generated image keeps the person's initial image profile - face, skin
tone, attire, background - unchanged while altering the hand, palm, and finger
positions appropriately. We use an image-to-image translation technique and
build a corresponding unique dataset to accomplish the task. We believe the
translated image can reduce the communication gap between signers (person who
uses sign language) and non-signers without having prior knowledge of BdSL.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 07:07:34 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Jubair",
"Mohammad Imrul",
""
],
[
"Ahnaf",
"Ali",
""
],
[
"Khan",
"Tashfiq Nahiyan",
""
],
[
"Bhattacharjee",
"Ullash",
""
],
[
"Joti",
"Tanjila",
""
]
] |
new_dataset
| 0.99991 |
2209.14614
|
Cunliang Kong
|
Jiaxin Yuan, Cunliang Kong, Chenhui Xie, Liner Yang, Erhong Yang
|
COMPILING: A Benchmark Dataset for Chinese Complexity Controllable
Definition Generation
|
Accepted by CCL 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The definition generation task aims to generate a word's definition within a
specific context automatically. However, owing to the lack of datasets for
different complexities, the definitions produced by models tend to keep the
same complexity level. This paper proposes a novel task of generating
definitions for a word with controllable complexity levels. Correspondingly, we
introduce COMPILING, a dataset given detailed information about Chinese
definitions, and each definition is labeled with its complexity levels. The
COMPILING dataset includes 74,303 words and 106,882 definitions. To the best of
our knowledge, it is the largest dataset of the Chinese definition generation
task. We select various representative generation methods as baselines for this
task and conduct evaluations, which illustrates that our dataset plays an
outstanding role in assisting models in generating different complexity-level
definitions. We believe that the COMPILING dataset will benefit further
research in complexity controllable definition generation.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 08:17:53 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Yuan",
"Jiaxin",
""
],
[
"Kong",
"Cunliang",
""
],
[
"Xie",
"Chenhui",
""
],
[
"Yang",
"Liner",
""
],
[
"Yang",
"Erhong",
""
]
] |
new_dataset
| 0.999754 |
2209.14642
|
Zhiwei Yang
|
Zhiwei Yang, Jing Ma, Hechang Chen, Hongzhan Lin, Ziyang Luo, Yi Chang
|
A Coarse-to-fine Cascaded Evidence-Distillation Neural Network for
Explainable Fake News Detection
|
Accepted by COLING 2022. The 29th International Conference on
Computational Linguistics, Gyeongju, Republic of Korea
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing fake news detection methods aim to classify a piece of news as true
or false and provide veracity explanations, achieving remarkable performances.
However, they often tailor automated solutions on manual fact-checked reports,
suffering from limited news coverage and debunking delays. When a piece of news
has not yet been fact-checked or debunked, certain amounts of relevant raw
reports are usually disseminated on various media outlets, containing the
wisdom of crowds to verify the news claim and explain its verdict. In this
paper, we propose a novel Coarse-to-fine Cascaded Evidence-Distillation
(CofCED) neural network for explainable fake news detection based on such raw
reports, alleviating the dependency on fact-checked ones. Specifically, we
first utilize a hierarchical encoder for web text representation, and then
develop two cascaded selectors to select the most explainable sentences for
verdicts on top of the selected top-K reports in a coarse-to-fine manner.
Besides, we construct two explainable fake news datasets, which are publicly
available. Experimental results demonstrate that our model significantly
outperforms state-of-the-art baselines and generates high-quality explanations
from diverse evaluation perspectives.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 09:05:47 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Yang",
"Zhiwei",
""
],
[
"Ma",
"Jing",
""
],
[
"Chen",
"Hechang",
""
],
[
"Lin",
"Hongzhan",
""
],
[
"Luo",
"Ziyang",
""
],
[
"Chang",
"Yi",
""
]
] |
new_dataset
| 0.983168 |
2209.14764
|
Konstantin Sch\"urholt
|
Konstantin Sch\"urholt, Diyar Taskiran, Boris Knyazev, Xavier
Gir\'o-i-Nieto, Damian Borth
|
Model Zoos: A Dataset of Diverse Populations of Neural Network Models
|
36th Conference on Neural Information Processing Systems (NeurIPS
2022) Track on Datasets and Benchmarks
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the last years, neural networks (NN) have evolved from laboratory
environments to the state-of-the-art for many real-world problems. It was shown
that NN models (i.e., their weights and biases) evolve on unique trajectories
in weight space during training. It follows that a population of such neural
network models (referred to as a model zoo) would form structures in weight space. We
think that the geometry, curvature and smoothness of these structures contain
information about the state of training and can reveal latent properties of
individual models. With such model zoos, one could investigate novel approaches
for (i) model analysis, (ii) discover unknown learning dynamics, (iii) learn
rich representations of such populations, or (iv) exploit the model zoos for
generative modelling of NN weights and biases. Unfortunately, the lack of
standardized model zoos and available benchmarks significantly increases the
friction for further research about populations of NNs. With this work, we
publish a novel dataset of model zoos containing systematically generated and
diverse populations of NN models for further research. In total the proposed
model zoo dataset is based on eight image datasets, consists of 27 model zoos
trained with varying hyperparameter combinations and includes 50'360 unique NN
models as well as their sparsified twins, resulting in over 3'844'360 collected
model states. In addition to the model zoo data, we provide an in-depth
analysis of the zoos and provide benchmarks for multiple downstream tasks. The
dataset can be found at www.modelzoos.cc.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 13:20:42 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Schürholt",
"Konstantin",
""
],
[
"Taskiran",
"Diyar",
""
],
[
"Knyazev",
"Boris",
""
],
[
"Giró-i-Nieto",
"Xavier",
""
],
[
"Borth",
"Damian",
""
]
] |
new_dataset
| 0.990193 |
2209.14890
|
Zhenfeng Xue
|
Yunliang Jiang, Chenyang Gu, Zhenfeng Xue, Xiongtao Zhang, Yong Liu
|
Mask-Guided Image Person Removal with Data Synthesis
|
10 pages, 8 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
As a special case of common object removal, image person removal is playing
an increasingly important role in social media and criminal investigation
domains. Due to the integrity of person area and the complexity of human
posture, person removal has its own dilemmas. In this paper, we propose a novel
idea to tackle these problems from the perspective of data synthesis.
Concerning the lack of dedicated dataset for image person removal, two dataset
production methods are proposed to automatically generate images, masks and
ground truths respectively. Then, a learning framework similar to local image
degradation is proposed so that the masks can be used to guide the feature
extraction process and more texture information can be gathered for final
prediction. A coarse-to-fine training strategy is further applied to refine the
details. The data synthesis and learning framework combine well with each
other. Experimental results verify the effectiveness of our method
quantitatively and qualitatively, and the trained network proves to have good
generalization ability either on real or synthetic images.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 15:58:17 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Jiang",
"Yunliang",
""
],
[
"Gu",
"Chenyang",
""
],
[
"Xue",
"Zhenfeng",
""
],
[
"Zhang",
"Xiongtao",
""
],
[
"Liu",
"Yong",
""
]
] |
new_dataset
| 0.990656 |
2209.14922
|
Sanket Kalwar Mr
|
Sanket Kalwar, Dhruv Patel, Aakash Aanegola, Krishna Reddy Konda,
Sourav Garg, K Madhava Krishna
|
GDIP: Gated Differentiable Image Processing for Object-Detection in
Adverse Conditions
|
Submitted to ICRA2023. More information at https://gatedip.github.io
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting objects under adverse weather and lighting conditions is crucial
for the safe and continuous operation of an autonomous vehicle, and remains an
unsolved problem. We present a Gated Differentiable Image Processing (GDIP)
block, a domain-agnostic network architecture, which can be plugged into
existing object detection networks (e.g., Yolo) and trained end-to-end with
adverse condition images such as those captured under fog and low lighting. Our
proposed GDIP block learns to enhance images directly through the downstream
object detection loss. This is achieved by learning parameters of multiple
image pre-processing (IP) techniques that operate concurrently, with their
outputs combined using weights learned through a novel gating mechanism. We
further improve GDIP through a multi-stage guidance procedure for progressive
image enhancement. Finally, trading off accuracy for speed, we propose a
variant of GDIP that can be used as a regularizer for training Yolo, which
eliminates the need for GDIP-based image enhancement during inference,
resulting in higher throughput and plausible real-world deployment. We
demonstrate significant improvement in detection performance over several
state-of-the-art methods through quantitative and qualitative studies on
synthetic datasets such as PascalVOC, and real-world foggy (RTTS) and
low-lighting (ExDark) datasets.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 16:43:13 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Kalwar",
"Sanket",
""
],
[
"Patel",
"Dhruv",
""
],
[
"Aanegola",
"Aakash",
""
],
[
"Konda",
"Krishna Reddy",
""
],
[
"Garg",
"Sourav",
""
],
[
"Krishna",
"K Madhava",
""
]
] |
new_dataset
| 0.989201 |
2209.14924
|
Hrishikesh Terdalkar
|
Hrishikesh Terdalkar, Arnab Bhattacharya
|
Chandojnanam: A Sanskrit Meter Identification and Utilization System
|
to be published in "18th World Sanskrit Conference (WSC 2023)"
| null | null | null |
cs.SE cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Chandoj\~n\=anam, a web-based Sanskrit meter (Chanda)
identification and utilization system. In addition to the core functionality of
identifying meters, it sports a friendly user interface to display the
scansion, which is a graphical representation of the metrical pattern. The
system supports identification of meters from uploaded images by using optical
character recognition (OCR) engines in the backend. It is also able to process
entire text files at a time. The text can be processed in two modes, either by
treating it as a list of individual lines, or as a collection of verses. When a
line or a verse does not correspond exactly to a known meter, Chandoj\~n\=anam
is capable of finding fuzzy (i.e., approximate and close) matches based on
sequence matching. This opens up the scope of a meter-based correction of
erroneous digital corpora. The system is available for use at
https://sanskrit.iitk.ac.in/jnanasangraha/chanda/, and the source code in the
form of a Python library is made available at
https://github.com/hrishikeshrt/chanda/.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 16:43:27 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Terdalkar",
"Hrishikesh",
""
],
[
"Bhattacharya",
"Arnab",
""
]
] |
new_dataset
| 0.998362 |
2209.14965
|
Mariia Gladkova
|
Mariia Gladkova, Nikita Korobov, Nikolaus Demmel, Aljo\v{s}a O\v{s}ep,
Laura Leal-Taix\'e and Daniel Cremers
|
DirectTracker: 3D Multi-Object Tracking Using Direct Image Alignment and
Photometric Bundle Adjustment
|
In Proceedings of the IEEE International Conference on Intelligent
Robots and Systems (IROS), 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Direct methods have shown excellent performance in the applications of visual
odometry and SLAM. In this work we propose to leverage their effectiveness for
the task of 3D multi-object tracking. To this end, we propose DirectTracker, a
framework that effectively combines direct image alignment for the short-term
tracking and sliding-window photometric bundle adjustment for 3D object
detection. Object proposals are estimated based on the sparse sliding-window
pointcloud and further refined using an optimization-based cost function that
carefully combines 3D and 2D cues to ensure consistency in image and world
space. We propose to evaluate 3D tracking using the recently introduced
higher-order tracking accuracy (HOTA) metric and the generalized intersection
over union similarity measure to mitigate the limitations of the conventional
use of intersection over union for the evaluation of vision-based trackers. We
perform evaluation on the KITTI Tracking benchmark for the Car class and show
competitive performance in tracking objects both in 2D and 3D.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 17:40:22 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Gladkova",
"Mariia",
""
],
[
"Korobov",
"Nikita",
""
],
[
"Demmel",
"Nikolaus",
""
],
[
"Ošep",
"Aljoša",
""
],
[
"Leal-Taixé",
"Laura",
""
],
[
"Cremers",
"Daniel",
""
]
] |
new_dataset
| 0.985951 |
2209.14969
|
Anthony Fuller
|
Anthony Fuller, Koreen Millard, and James R. Green
|
Transfer Learning with Pretrained Remote Sensing Transformers
|
Draft of manuscript that is being prepared for IEEE TGRS
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Although the remote sensing (RS) community has begun to pretrain transformers
(intended to be fine-tuned on RS tasks), it is unclear how these models perform
under distribution shifts. Here, we pretrain a new RS transformer--called
SatViT-V2--on 1.3 million satellite-derived RS images, then fine-tune it (along
with five other models) to investigate how it performs on distributions not
seen during training. We split an expertly labeled land cover dataset into 14
datasets based on source biome. We train each model on each biome separately
and test them on all other biomes. In all, this amounts to 1638 biome transfer
experiments. After fine-tuning, we find that SatViT-V2 outperforms SatViT-V1 by
3.1% on in-distribution (matching biomes) and 2.8% on out-of-distribution
(mismatching biomes) data. Additionally, we find that initializing fine-tuning
from the linear probed solution (i.e., leveraging LPFT [1]) improves
SatViT-V2's performance by another 1.2% on in-distribution and 2.4% on
out-of-distribution data. Next, we find that pretrained RS transformers are
better calibrated under distribution shifts than non-pretrained models and
leveraging LPFT results in further improvements in model calibration. Lastly,
we find that five measures of distribution shift are moderately correlated with
biome transfer performance. We share code and pretrained model weights.
(https://github.com/antofuller/SatViT)
|
[
{
"version": "v1",
"created": "Wed, 28 Sep 2022 17:49:37 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Fuller",
"Anthony",
""
],
[
"Millard",
"Koreen",
""
],
[
"Green",
"James R.",
""
]
] |
new_dataset
| 0.958379 |
2209.14988
|
Ben Poole
|
Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall
|
DreamFusion: Text-to-3D using 2D Diffusion
|
see project page at https://dreamfusion3d.github.io/
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent breakthroughs in text-to-image synthesis have been driven by diffusion
models trained on billions of image-text pairs. Adapting this approach to 3D
synthesis would require large-scale datasets of labeled 3D data and efficient
architectures for denoising 3D data, neither of which currently exist. In this
work, we circumvent these limitations by using a pretrained 2D text-to-image
diffusion model to perform text-to-3D synthesis. We introduce a loss based on
probability density distillation that enables the use of a 2D diffusion model
as a prior for optimization of a parametric image generator. Using this loss in
a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a
Neural Radiance Field, or NeRF) via gradient descent such that its 2D
renderings from random angles achieve a low loss. The resulting 3D model of the
given text can be viewed from any angle, relit by arbitrary illumination, or
composited into any 3D environment. Our approach requires no 3D training data
and no modifications to the image diffusion model, demonstrating the
effectiveness of pretrained image diffusion models as priors.
|
[
{
"version": "v1",
"created": "Thu, 29 Sep 2022 17:50:40 GMT"
}
] | 2022-09-30T00:00:00 |
[
[
"Poole",
"Ben",
""
],
[
"Jain",
"Ajay",
""
],
[
"Barron",
"Jonathan T.",
""
],
[
"Mildenhall",
"Ben",
""
]
] |
new_dataset
| 0.999474 |
2010.10344
|
Miriam Enzi
|
Miriam Enzi, Sophie N. Parragh, Jakob Puchinger
|
The bi-objective multimodal car-sharing problem
| null | null | null | null |
cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of the bi-objective multimodal car-sharing problem (BiO-MMCP) is to
determine the optimal mode of transport assignment for trips and to schedule
the routes of available cars and users whilst minimizing cost and maximizing
user satisfaction. We investigate the BiO-MMCP from a user-centred point of
view. As user satisfaction is a crucial aspect in shared mobility systems, we
consider user preferences in a second objective. Users may choose and rank
their preferred modes of transport for different times of the day. In this way
we account for, e.g., different traffic conditions throughout the planning
horizon.
We study different variants of the problem. In the base problem, the sequence
of tasks a user has to fulfill is fixed in advance and travel times as well as
preferences are constant over the planning horizon. In variant 2,
time-dependent travel times and preferences are introduced. In variant 3, we
examine the challenges when allowing additional routing decisions. Variant 4
integrates variants 2 and 3. For this last variant, we develop a branch-and-cut
algorithm which is embedded in two bi-objective frameworks, namely the
$\epsilon$-constraint method and a weighting binary search method.
Computational experiments show that the branch-and-cut algorithm outperforms
the MIP formulation and we discuss changing solutions along the Pareto
frontier.
|
[
{
"version": "v1",
"created": "Sun, 18 Oct 2020 13:48:17 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 12:43:22 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Enzi",
"Miriam",
""
],
[
"Parragh",
"Sophie N.",
""
],
[
"Puchinger",
"Jakob",
""
]
] |
new_dataset
| 0.99319 |
2101.03072
|
Sebastian Euler
|
Sebastian Euler, Xingqin Lin, Erika Tejedor, Evanny Obregon
|
A Primer on HIBS -- High Altitude Platform Stations as IMT Base Stations
|
7 pages, 5 figures
| null |
10.1109/MVT.2022.3202004
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile communication via high-altitude platforms operating in the
stratosphere is an idea that has been on the table for decades. In the past few
years, however, with recent advances in technology and parallel progress in
standardization and regulatory bodies like 3GPP and ITU, these ideas have
gained considerable momentum. In this article, we present a comprehensive
overview of HIBS - High Altitude Platform Stations as IMT Base Stations. We lay
out possible use cases and summarize the current status of the development,
from a technological point of view as well as from standardization in 3GPP, and
regarding spectrum aspects. We then present preliminary system level simulation
results to shed light on the performance of HIBS. We conclude by pointing out
several directions for future research.
|
[
{
"version": "v1",
"created": "Fri, 8 Jan 2021 16:04:02 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 07:26:07 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Euler",
"Sebastian",
""
],
[
"Lin",
"Xingqin",
""
],
[
"Tejedor",
"Erika",
""
],
[
"Obregon",
"Evanny",
""
]
] |
new_dataset
| 0.99963 |
2102.06491
|
Thanasis Zoumpekas
|
Thanasis Zoumpekas, Anna Puig, Maria Salam\'o, David
Garc\'ia-Sell\'es, Laura Blanco Nu\~nez, Marta Guinau
|
End-to-End Intelligent Framework for Rockfall Detection
| null | null |
10.1002/int.22557
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Rockfall detection is a crucial procedure in the field of geology, which
helps to reduce the associated risks. Currently, geologists identify rockfall
events almost manually utilizing point cloud and imagery data obtained from
different caption devices such as Terrestrial Laser Scanner or digital cameras.
Multi-temporal comparison of the point clouds obtained with these techniques
requires a tedious visual inspection to identify rockfall events which implies
inaccuracies that depend on several factors such as human expertise and the
sensibility of the sensors. This paper addresses this issue and provides an
intelligent framework for rockfall event detection for any individual working
in the intersection of the geology domain and decision support systems. The
development of such an analysis framework poses significant research challenges
and justifies intensive experimental analysis. In particular, we propose an
intelligent system that utilizes multiple machine learning algorithms to detect
rockfall clusters of point cloud data. Due to the extremely imbalanced nature
of the problem, a plethora of state-of-the-art resampling techniques
accompanied by multiple models and feature selection procedures are being
investigated. Various machine learning pipeline combinations have been
benchmarked and compared applying well-known metrics to be incorporated into
our system. Specifically, we developed statistical and machine learning
techniques and applied them to analyze point cloud data extracted from
Terrestrial Laser Scanner in two distinct case studies, involving different
geological contexts: the basaltic cliff of Castellfollit de la Roca and the
conglomerate Montserrat Massif, both located in Spain. Our experimental data
suggest that some of the above-mentioned machine learning pipelines can be
utilized to detect rockfall incidents on mountain walls, with experimentally
proven accuracy.
|
[
{
"version": "v1",
"created": "Fri, 12 Feb 2021 12:48:17 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Zoumpekas",
"Thanasis",
""
],
[
"Puig",
"Anna",
""
],
[
"Salamó",
"Maria",
""
],
[
"García-Sellés",
"David",
""
],
[
"Nuñez",
"Laura Blanco",
""
],
[
"Guinau",
"Marta",
""
]
] |
new_dataset
| 0.996312 |
2109.10737
|
Jack Liu
|
Bingchuan Li, Shaofei Cai, Wei Liu, Peng Zhang, Qian He, Miao Hua,
Zili Yi
|
DyStyle: Dynamic Neural Network for Multi-Attribute-Conditioned Style
Editing
|
Accepted to WACV 2023, 19 pages, 20 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The semantic controllability of StyleGAN is enhanced by unremitting research.
Although the existing weak supervision methods work well in manipulating the
style codes along one attribute, the accuracy of manipulating multiple
attributes is neglected. Multi-attribute representations are prone to
entanglement in the StyleGAN latent space, while sequential editing leads to
error accumulation. To address these limitations, we design a Dynamic Style
Manipulation Network (DyStyle) whose structure and parameters vary by input
samples, to perform nonlinear and adaptive manipulation of latent codes for
flexible and precise attribute control. In order to efficient and stable
optimization of the DyStyle network, we propose a Dynamic Multi-Attribute
Contrastive Learning (DmaCL) method: including dynamic multi-attribute
contrastor and dynamic multi-attribute contrastive loss, which simultaneously
disentangle a variety of attributes from the generative image and latent space
of the model. As a result, our approach demonstrates fine-grained disentangled
edits along multiple numeric and binary attributes. Qualitative and
quantitative comparisons with existing style manipulation methods verify the
superiority of our method in terms of the multi-attribute control accuracy and
identity preservation without compromising photorealism.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 13:50:51 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jan 2022 05:16:00 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Sep 2022 08:25:03 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Li",
"Bingchuan",
""
],
[
"Cai",
"Shaofei",
""
],
[
"Liu",
"Wei",
""
],
[
"Zhang",
"Peng",
""
],
[
"He",
"Qian",
""
],
[
"Hua",
"Miao",
""
],
[
"Yi",
"Zili",
""
]
] |
new_dataset
| 0.98329 |
2206.04185
|
Ben Weintraub
|
Ben Weintraub, Christof Ferreira Torres, Cristina Nita-Rotaru, Radu
State
|
A Flash(bot) in the Pan: Measuring Maximal Extractable Value in Private
Pools
|
14 pages, ACM IMC 2022
| null |
10.1145/3517745.3561448
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rise of Ethereum has led to a flourishing decentralized marketplace that
has, unfortunately, fallen victim to frontrunning and Maximal Extractable Value
(MEV) activities, where savvy participants game transaction orderings within a
block for profit. One popular solution to address such behavior is Flashbots, a
private pool with infrastructure and design goals aimed at eliminating the
negative externalities associated with MEV. While Flashbots has established
laudable goals to address MEV behavior, no evidence has been provided to show
that these goals are achieved in practice.
In this paper, we measure the popularity of Flashbots and evaluate if it is
meeting its chartered goals. We find that (1) Flashbots miners account for over
99.9% of the hashing power in the Ethereum network, (2) powerful miners are
making more than $2\times$ what they were making prior to using Flashbots,
while non-miners' slice of the pie has shrunk commensurately, (3) mining is
just as centralized as it was prior to Flashbots with more than 90% of
Flashbots blocks coming from just two miners, and (4) while more than 80% of
MEV extraction in Ethereum is happening through Flashbots, 13.2% is coming from
other private pools.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 22:52:24 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 17:53:08 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Weintraub",
"Ben",
""
],
[
"Torres",
"Christof Ferreira",
""
],
[
"Nita-Rotaru",
"Cristina",
""
],
[
"State",
"Radu",
""
]
] |
new_dataset
| 0.997572 |
2206.07185
|
Son Ho
|
Son Ho, Jonathan Protzenko
|
Aeneas: Rust Verification by Functional Translation
| null |
Proceedings of the ACM on Programming Languages, Volume 6, Issue
ICFP. August 2022. Article No.: 116, pp 711-741
|
10.1145/3547647
| null |
cs.PL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Aeneas, a new verification toolchain for Rust programs based on a
lightweight functional translation. We leverage Rust's rich region-based type
system to eliminate memory reasoning for many Rust programs, as long as they do
not rely on interior mutability or unsafe code. Doing so, we relieve the proof
engineer of the burden of memory-based reasoning, allowing them to instead
focus on functional properties of their code.
Our first contribution is a new approach to borrows and controlled aliasing.
We propose a pure, functional semantics for LLBC, a Low-Level Borrow Calculus
that captures a large subset of Rust programs. Our semantics is value-based,
meaning there is no notion of memory, addresses or pointer arithmetic. Our
semantics is also ownership-centric, meaning that we enforce soundness of
borrows via a semantic criterion based on loans rather than through a syntactic
type-based lifetime discipline. We claim that our semantics captures the
essence of the borrow mechanism rather than its current implementation in the
Rust compiler.
Our second contribution is a translation from LLBC to a pure lambda-calculus.
This allows the user to reason about the original Rust program through the
theorem prover of their choice. To deal with the well-known technical
difficulty of terminating a borrow, we rely on a novel approach, in which we
approximate the borrow graph in the presence of function calls. This in turn
allows us to perform the translation using a new technical device called
backward functions.
We implement our toolchain in a mixture of Rust and OCaml. Our evaluation
shows significant gains of verification productivity for the programmer.
Rust goes to great lengths to enforce static control of aliasing; the proof
engineer should not waste any time on memory reasoning when so much already
comes "for free"!
|
[
{
"version": "v1",
"created": "Tue, 14 Jun 2022 21:55:31 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 10:19:39 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Ho",
"Son",
""
],
[
"Protzenko",
"Jonathan",
""
]
] |
new_dataset
| 0.999297 |
2209.00693
|
Boris Veytsman
|
Ana-Maria Istrate and Donghui Li and Dario Taraborelli and Michaela
Torkar and Boris Veytsman and Ivana Williams
|
A large dataset of software mentions in the biomedical literature
| null | null | null | null |
cs.DL q-bio.OT
|
http://creativecommons.org/licenses/by/4.0/
|
We describe the CZ Software Mentions dataset, a new dataset of software
mentions in biomedical papers. Plain-text software mentions are extracted with
a trained SciBERT model from several sources: the NIH PubMed Central collection
and from papers provided by various publishers to the Chan Zuckerberg
Initiative. The dataset provides sources, context and metadata, and, for a
number of mentions, the disambiguated software entities and links. We extract
1.12 million unique string software mentions from 2.4 million papers in the NIH
PMC-OA Commercial subset, 481k unique mentions from the NIH PMC-OA
Non-Commercial subset (both gathered in October 2021) and 934k unique mentions
from 3 million papers in the Publishers' collection. There is variation in how
software is mentioned in papers and extracted by the NER algorithm. We propose
a clustering-based disambiguation algorithm to map plain-text software mentions
into distinct software entities and apply it on the NIH PubMed Central
Commercial collection. Through this methodology, we disambiguate 1.12 million
unique strings extracted by the NER model into 97600 unique software entities,
covering 78% of all software-paper links. We link 185000 of the mentions to a
repository, covering about 55% of all software-paper links. We describe in
detail the process of building the datasets, disambiguating and linking the
software mentions, as well as opportunities and challenges that come with a
dataset of this size. We make all data and code publicly available as a new
resource to help assess the impact of software (in particular scientific open
source projects) on science.
|
[
{
"version": "v1",
"created": "Thu, 1 Sep 2022 19:04:47 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 23:04:30 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Sep 2022 18:58:19 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Sep 2022 19:37:16 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Istrate",
"Ana-Maria",
""
],
[
"Li",
"Donghui",
""
],
[
"Taraborelli",
"Dario",
""
],
[
"Torkar",
"Michaela",
""
],
[
"Veytsman",
"Boris",
""
],
[
"Williams",
"Ivana",
""
]
] |
new_dataset
| 0.999825 |
2209.02903
|
Ge Gao
|
Ge Gao, Jian Zheng, Eun Kyoung Choe, and Naomi Yamashita
|
Taking a Language Detour: How International Migrants Speaking a Minority
Language Seek COVID-Related Information in Their Host Countries
| null |
PACM on Human-Computer Interaction, Vol.6, No.CSCW2, Article 542,
Publication date: November 2022
|
10.1145/3555600
| null |
cs.CY cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Information seeking is crucial for people's self-care and wellbeing in times
of public crises. Extensive research has investigated empirical understandings
as well as technical solutions to facilitate information seeking by domestic
citizens of affected regions. However, limited knowledge is established to
support international migrants who need to survive a crisis in their host
countries. The current paper presents an interview study with two cohorts of
Chinese migrants living in Japan (N=14) and the United States (N=14).
Participants reflected on their information seeking experiences during the
COVID pandemic. The reflection was supplemented by two weeks of self-tracking
where participants maintained records of their COVID-related information seeking
practice. Our data indicated that participants often took language detours, or
visits to Mandarin resources for information about the COVID outbreak in their
host countries. They also made strategic use of the Mandarin information to
perform selective reading, cross-checking, and contextualized interpretation of
COVID-related information in Japanese or English. While such practices enhanced
participants' perceived effectiveness of COVID-related information gathering
and sensemaking, they disadvantaged people through sometimes incognizant ways.
Further, participants lacked the awareness or preference to review
migrant-oriented information that was issued by the host country's public
authorities despite its availability. Building upon these findings, we
discussed solutions to improve international migrants' COVID-related
information seeking in their non-native language and cultural environment. We
advocated inclusive crisis infrastructures that would engage people with
diverse levels of local language fluency, information literacy, and experience
in leveraging public services.
|
[
{
"version": "v1",
"created": "Wed, 7 Sep 2022 03:28:48 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Sep 2022 19:48:29 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Gao",
"Ge",
""
],
[
"Zheng",
"Jian",
""
],
[
"Choe",
"Eun Kyoung",
""
],
[
"Yamashita",
"Naomi",
""
]
] |
new_dataset
| 0.999211 |
2209.06626
|
Tal Hakim
|
Tal Hakim
|
NAAP-440 Dataset and Baseline for Neural Architecture Accuracy
Prediction
| null | null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural architecture search (NAS) has become a common approach to developing
and discovering new neural architectures for different target platforms and
purposes. However, scanning the search space comprises long training
processes of many candidate architectures, which is costly in terms of
computational resources and time. Regression algorithms are a common tool for
predicting a candidate architecture's accuracy, which can dramatically
accelerate the search procedure. We aim at proposing a new baseline that will
support the development of regression algorithms that can predict an
architecture's accuracy just from its scheme, or by only training it for a
minimal number of epochs. Therefore, we introduce the NAAP-440 dataset of 440
neural architectures, which were trained on CIFAR10 using a fixed recipe. Our
experiments indicate that by using off-the-shelf regression algorithms and
running up to 10% of the training process, not only is it possible to predict
an architecture's accuracy rather precisely, but that the values predicted for
the architectures also maintain their accuracy order with a minimal number of
monotonicity violations. This approach may serve as a powerful tool for
accelerating NAS-based studies and thus dramatically increase their efficiency.
The dataset and code used in the study have been made public.
|
[
{
"version": "v1",
"created": "Wed, 14 Sep 2022 13:21:39 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Sep 2022 12:24:57 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Sep 2022 12:32:35 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Hakim",
"Tal",
""
]
] |
new_dataset
| 0.995428 |
2209.07600
|
Mohammad Mahdavian
|
Mohammad Mahdavian, Payam Nikdel, Mahdi TaherAhmadi and Mo Chen
|
STPOTR: Simultaneous Human Trajectory and Pose Prediction Using a
Non-Autoregressive Transformer for Robot Following Ahead
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop a neural network model to predict future human
motion from an observed human motion history. We propose a non-autoregressive
transformer architecture to leverage its parallel nature for easier training
and fast, accurate predictions at test time. The proposed architecture divides
human motion prediction into two parts: 1) the human trajectory, which is the
3D position of the hip joint over time, and 2) the human pose, which is the 3D
positions of all other joints over time with respect to a fixed hip joint. We propose to
make the two predictions simultaneously, as the shared representation can
improve the model performance. Therefore, the model consists of two sets of
encoders and decoders. First, a multi-head attention module applied to encoder
outputs improves the human trajectory prediction. Second, another multi-head self-attention
module applied to encoder outputs concatenated with decoder outputs facilitates
learning of temporal dependencies. Our model is well-suited for robotic
applications in terms of test accuracy and speed, and compares favorably with
respect to state-of-the-art methods. We demonstrate the real-world
applicability of our work via the Robot Follow-Ahead task, a challenging yet
practical case study for our proposed model.
|
[
{
"version": "v1",
"created": "Thu, 15 Sep 2022 20:27:54 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 01:54:05 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Sep 2022 21:53:55 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Mahdavian",
"Mohammad",
""
],
[
"Nikdel",
"Payam",
""
],
[
"TaherAhmadi",
"Mahdi",
""
],
[
"Chen",
"Mo",
""
]
] |
new_dataset
| 0.990595 |
2209.12879
|
Petra J\"a\"askel\"ainen
|
Andr\'e Holzapfel, Petra J\"a\"askel\"ainen, Anna-Kaisa Kaila
|
Environmental and Social Sustainability of Creative-Ai
|
Presented in CHI 2022 - Generative AI and CHI Workshop
| null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The recent developments of artificial intelligence increase its capability
for the creation of arts in both largely autonomous and collaborative contexts.
In both contexts, Ai aims to imitate, combine, and extend existing artistic
styles, and can transform creative practices. In our ongoing research, we
investigate such Creative-Ai from sustainability and ethical perspectives. The
two main focus areas are understanding the environmental sustainability aspects
(material, practices) in the context of artistic processes that involve
Creative-Ai, and ethical issues related to who gets to be involved in the
creation process (power, authorship, ownership). This paper provides an outline
of our ongoing research in these two directions. We will present our
interdisciplinary approach, which combines interviews, workshops, online
ethnography, and energy measurements, to address our research questions: How is
Creative-Ai currently used by artist communities, and which future applications
do artists imagine? When Ai is applied to creating art, how might it impact the
economy and environment? And, how can answers to these questions guide
requirements for intellectual property regimes for Creative-Ai?
|
[
{
"version": "v1",
"created": "Mon, 26 Sep 2022 17:47:19 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 10:18:49 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Holzapfel",
"André",
""
],
[
"Jääskeläinen",
"Petra",
""
],
[
"Kaila",
"Anna-Kaisa",
""
]
] |
new_dataset
| 0.985064 |
2209.13360
|
Nisha Huang
|
Nisha Huang, Fan Tang, Weiming Dong and Changsheng Xu
|
Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal
Guided Diffusion
|
Accepted by ACM MM 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital art synthesis is receiving increasing attention in the multimedia
community because it engages the public with art effectively. Current digital
art synthesis methods usually use single-modality inputs as guidance, thereby
limiting the expressiveness of the model and the diversity of generated
results. To solve this problem, we propose the multimodal guided artwork
diffusion (MGAD) model, which is a diffusion-based digital artwork generation
approach that utilizes multimodal prompts as guidance to control the
classifier-free diffusion model. Additionally, the contrastive language-image
pretraining (CLIP) model is used to unify text and image modalities. Extensive
experimental results on the quality and quantity of the generated digital art
paintings confirm the effectiveness of the combination of the diffusion model
and multimodal guidance. Code is available at
https://github.com/haha-lisa/MGAD-multimodal-guided-artwork-diffusion.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 13:10:25 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Sep 2022 05:31:18 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Huang",
"Nisha",
""
],
[
"Tang",
"Fan",
""
],
[
"Dong",
"Weiming",
""
],
[
"Xu",
"Changsheng",
""
]
] |
new_dataset
| 0.95569 |
2209.13696
|
Keno Bressem
|
Lisa C. Adams, Felix Busch, Daniel Truhn, Marcus R. Makowski, Hugo
JWL. Aerts, Keno K. Bressem
|
What Does DALL-E 2 Know About Radiology?
|
4 Figures
| null | null | null |
cs.CV cs.AI eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Generative models such as DALL-E 2 could represent a promising future tool
for image generation, augmentation, and manipulation for artificial
intelligence research in radiology provided that these models have sufficient
medical domain knowledge. Here we show that DALL-E 2 has learned relevant
representations of X-ray images with promising capabilities in terms of
zero-shot text-to-image generation of new images, continuation of an image
beyond its original boundaries, or removal of elements, while the generation
of pathologies or of CT, MRI, and ultrasound images remains limited. The use of
generative models for augmenting and generating radiological data thus seems
feasible, even if further fine-tuning and adaptation of these models to the
respective domain is required beforehand.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 21:15:47 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Adams",
"Lisa C.",
""
],
[
"Busch",
"Felix",
""
],
[
"Truhn",
"Daniel",
""
],
[
"Makowski",
"Marcus R.",
""
],
[
"Aerts",
"Hugo JWL.",
""
],
[
"Bressem",
"Keno K.",
""
]
] |
new_dataset
| 0.981072 |
2209.13715
|
Andrew Sabelhaus
|
Ran Jing, Meredith L. Anderson, Miguel Ianus-Valdivia, Amsal Akber
Ali, Carmel Majidi, Andrew P. Sabelhaus
|
Safe Balancing Control of a Soft Legged Robot
|
8 pages, 4 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Legged robots constructed from soft materials are commonly claimed to
demonstrate safer, more robust environmental interactions than their rigid
counterparts. However, this motivating feature of soft robots requires more
rigorous development for comparison to rigid locomotion. This article presents
a soft legged robot platform, Horton, and a feedback control system with safety
guarantees on some aspects of its operation. The robot is constructed using a
series of soft limbs, actuated by thermal shape memory alloy (SMA) wire
muscles, with sensors for its position and its actuator temperatures. A
supervisory control scheme maintains safe actuator states during the operation
of a separate controller for the robot's pose. Experiments demonstrate that
Horton can lift its leg and maintain a balancing stance, a precursor to
locomotion. The supervisor is verified in hardware via a human interaction test
during balancing, keeping all SMA muscles below a temperature threshold. This
work represents the first demonstration of a safety-verified feedback system on
any soft legged robot.
|
[
{
"version": "v1",
"created": "Tue, 27 Sep 2022 21:58:46 GMT"
}
] | 2022-09-29T00:00:00 |
[
[
"Jing",
"Ran",
""
],
[
"Anderson",
"Meredith L.",
""
],
[
"Ianus-Valdivia",
"Miguel",
""
],
[
"Ali",
"Amsal Akber",
""
],
[
"Majidi",
"Carmel",
""
],
[
"Sabelhaus",
"Andrew P.",
""
]
] |
new_dataset
| 0.998997 |