Column types: id: string (9–10 chars); submitter: string (2–52 chars, nullable); authors: string (4–6.51k chars); title: string (4–246 chars); comments: string (1–523 chars, nullable); journal-ref: string (4–345 chars, nullable); doi: string (11–120 chars, nullable); report-no: string (2–243 chars, nullable); categories: string (5–98 chars); license: string (9 classes); abstract: string (33–3.33k chars); versions: list; update_date: timestamp[s]; authors_parsed: list; prediction: string (1 class); probability: float64 (0.95–1).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2304.10632
|
Piyush Batra
|
Piyush Batra, Gagan Raj Singh, Ritik Gandhi
|
NFT Marketplace
|
Report for MULTIMEDIA COMMUNICATIONS course project
| null | null | null |
cs.MM cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In an increasingly digitized world, the secure management and trade of
digital assets have become a pressing issue. This project aims to address this
challenge by developing a decentralized application (dApp) that leverages
blockchain technology and deep learning models to provide secure and efficient
digital asset management, with a focus on NFTs. The dApp includes features such
as secure wallet connections, NFT image generation, minting, a marketplace, and
profile management. The back-end of the dApp is implemented using the Goerli
testnet with Solidity-based smart contracts, while IPFS and ReactJS/EtherJS are
used for decentralized storage and front-end development, respectively.
Additionally, the OpenAI API is integrated to generate unique NFT images based
on user input. The project demonstrates the practical application of blockchain
technology and deep learning models in developing dApps for secure and
decentralized digital asset management. Overall, the project contributes to the
ongoing research on blockchain-based solutions for secure digital asset
management, while highlighting the potential of blockchain and deep learning
technologies to transform the way we manage and trade digital assets.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 20:24:20 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Batra",
"Piyush",
""
],
[
"Singh",
"Gagan Raj",
""
],
[
"Gandhi",
"Ritik",
""
]
] |
new_dataset
| 0.998527 |
2304.10639
|
Yasir Alanazi
|
Yasir Alanazi, Malachi Schram, Kishansingh Rajput, Steven Goldenberg,
Lasitha Vidyaratne, Chris Pappas, Majdi I. Radaideh, Dan Lu, Pradeep
Ramuhalli, Sarah Cousineau
|
Multi-module based CVAE to predict HVCM faults in the SNS accelerator
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a multi-module framework based on Conditional Variational
Autoencoder (CVAE) to detect anomalies in the power signals coming from
multiple High Voltage Converter Modulators (HVCMs). We condition the model with
the specific modulator type to capture different representations of the normal
waveforms and to improve the sensitivity of the model to identify a specific
type of fault when we have limited samples for a given module type. We studied
several neural network (NN) architectures for our CVAE model and evaluated the
model performance by looking at their loss landscape for stability and
generalization. Our results for the Spallation Neutron Source (SNS)
experimental data show that the trained model generalizes well to detecting
multiple fault types for several HVCM module types. The results of this study
can be used to improve HVCM reliability and overall SNS uptime.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 20:41:38 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Alanazi",
"Yasir",
""
],
[
"Schram",
"Malachi",
""
],
[
"Rajput",
"Kishansingh",
""
],
[
"Goldenberg",
"Steven",
""
],
[
"Vidyaratne",
"Lasitha",
""
],
[
"Pappas",
"Chris",
""
],
[
"Radaideh",
"Majdi I.",
""
],
[
"Lu",
"Dan",
""
],
[
"Ramuhalli",
"Pradeep",
""
],
[
"Cousineau",
"Sarah",
""
]
] |
new_dataset
| 0.997697 |
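The CVAE approach in the record above lends itself to a compact sketch: condition both encoder and decoder on the modulator type, train on normal waveforms only, and flag anomalies by reconstruction error. Below is a minimal PyTorch sketch; the dimensions, architecture, and thresholding step are illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch of a Conditional VAE for waveform anomaly detection,
# assuming fixed-length waveforms and a one-hot "modulator type" condition.
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, wave_len=1024, n_types=4, latent=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(wave_len + n_types, 256), nn.ReLU(),
            nn.Linear(256, 2 * latent),          # outputs mean and log-variance
        )
        self.dec = nn.Sequential(
            nn.Linear(latent + n_types, 256), nn.ReLU(),
            nn.Linear(256, wave_len),
        )

    def forward(self, x, cond):
        h = self.enc(torch.cat([x, cond], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.dec(torch.cat([z, cond], dim=-1))
        return recon, mu, logvar

def loss_fn(recon, x, mu, logvar):
    rec = ((recon - x) ** 2).sum(dim=-1).mean()          # reconstruction term
    kld = (-0.5 * (1 + logvar - mu**2 - logvar.exp())).sum(dim=-1).mean()
    return rec + kld

# At test time, a waveform whose reconstruction error exceeds a threshold
# calibrated on normal data for its modulator type is flagged as a fault.
```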
2304.10646
|
Rachit Nigam
|
Rachit Nigam, Pedro Henrique Azevedo De Amorim, Adrian Sampson
|
Modular Hardware Design with Timeline Types
|
Extended version of PLDI '23 paper
| null |
10.1145/3591234
| null |
cs.AR cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Modular design is a key challenge for enabling large-scale reuse of hardware
modules. Unlike software, however, hardware designs correspond to physical
circuits and inherit constraints from them. Timing constraints -- which cycle a
signal arrives, when an input is read -- and structural constraints -- how
often a multiplier accepts new inputs -- are fundamental to hardware
interfaces. Existing hardware design languages do not provide a way to encode
these constraints; a user must read documentation, build scripts, or in the
worst case, a module's implementation to understand how to use it. We present
Filament, a language for modular hardware design that supports the
specification and enforcement of timing and structural constraints for
statically scheduled pipelines. Filament uses timeline types, which describe
the intervals of clock-cycle time when a given signal is available or required.
Filament enables safe composition of hardware modules, ensures that the
resulting designs are correctly pipelined, and predictably lowers them to
efficient hardware.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 21:12:09 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Nigam",
"Rachit",
""
],
[
"De Amorim",
"Pedro Henrique Azevedo",
""
],
[
"Sampson",
"Adrian",
""
]
] |
new_dataset
| 0.990752 |
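Timeline types, as described in the record above, can be pictured as interval annotations checked at composition time. The toy Python below illustrates only the flavor of that check; Filament itself is a hardware design language with a much richer type system, and every name here is hypothetical.

```python
# A toy illustration (not Filament itself) of the idea behind timeline types:
# each signal carries an interval of clock cycles when it is valid, and a
# connection is rejected unless the consumer's required interval lies within
# the producer's availability interval.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: int  # first cycle the signal is valid (inclusive)
    end: int    # first cycle the signal is no longer valid (exclusive)

    def contains(self, other: "Interval") -> bool:
        return self.start <= other.start and other.end <= self.end

def check_connection(available: Interval, required: Interval) -> None:
    if not available.contains(required):
        raise TypeError(
            f"signal available during {available} but required during {required}"
        )

# A multiplier that produces its result during cycles [3, 4) cannot feed a
# consumer that reads during cycles [2, 3):
check_connection(Interval(3, 4), Interval(3, 4))    # ok
# check_connection(Interval(3, 4), Interval(2, 3))  # would raise TypeError
```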
2304.10840
|
Dhinakaran D
|
L. Srinivasan, D. Selvaraj, D. Dhinakaran, T. P. Anish
|
IoT-Based Solution for Paraplegic Sufferer to Send Signals to Physician
via Internet
| null | null |
10.14445/23488379/IJEEE-V10I1P104
| null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
We come across hospitals and non-profit organizations that care for people
with paralysis who have had all or a portion of their body incapacitated by a
paralyzing attack. Due to a lack of motor coordination, these persons are
typically unable to communicate their needs because they can neither speak
clearly nor use sign language. For such cases, we suggest a system that enables
a disabled person to move any part of the body capable of movement to broadcast
a text on an LCD. The method also addresses the circumstance in which the
patient cannot be attended to in person, sending an SMS message using GSM
instead. Our suggested system operates by detecting the tilt direction of the
user's movable body part. As a result, patients can communicate with
physicians, therapists, or their loved ones at home or work over the web.
Case-specific data, such as heart rate, must be continuously reported in health
centers, so the suggested method also tracks the patient's pulse rate and other
comparable data; for instance, photoplethysmography is used to assess heart
rate. The decoded periodic data is transmitted continually via a
microcontroller coupled to a transmitting module. The doctor's cabin contains a
receiver device that obtains, deciphers, and constantly exhibits the data on a
graphical interface viewable on a laptop. As a result, the doctor can monitor
and handle multiple situations at once.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 09:32:50 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Srinivasan",
"L.",
""
],
[
"Selvaraj",
"D.",
""
],
[
"Dhinakaran",
"D.",
""
],
[
"Anish",
"T. P.",
""
]
] |
new_dataset
| 0.994248 |
2304.10854
|
Zhengcheng Shen
|
Zhengcheng Shen, Yi Gao, Linh Kästner, Jens Lambrecht
|
HabitatDyn Dataset: Dynamic Object Detection to Kinematics Estimation
|
The paper is under review
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The advancement of computer vision and machine learning has made datasets a
crucial element for further research and applications. However, the creation
and development of robots with advanced recognition capabilities are hindered
by the lack of appropriate datasets. Existing image or video processing
datasets are unable to accurately depict observations from a moving robot, and
they do not contain the kinematics information necessary for robotic tasks.
Synthetic data, on the other hand, are cost-effective to create and offer
greater flexibility for adapting to various applications. Hence, they are
widely utilized in both research and industry. In this paper, we propose the
dataset HabitatDyn, which contains synthetic RGB videos, semantic labels, and
depth information, as well as kinematics information. HabitatDyn was created
from the perspective of a mobile robot with a moving camera, and contains 30
scenes featuring six different types of moving objects with varying velocities.
To demonstrate the usability of our dataset, two existing segmentation
algorithms are used for evaluation, and an approach to estimate the distance
between object and camera is implemented based on these segmentation methods
and evaluated on the dataset. With the availability of this dataset, we aspire to foster further
advancements in the field of mobile robotics, leading to more capable and
intelligent robots that can navigate and interact with their environments more
effectively. The code is publicly available at
https://github.com/ignc-research/HabitatDyn.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 09:57:35 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Shen",
"Zhengcheng",
""
],
[
"Gao",
"Yi",
""
],
[
"Kästner",
"Linh",
""
],
[
"Lambrecht",
"Jens",
""
]
] |
new_dataset
| 0.999852 |
2304.10877
|
Pengfei Qiu
|
Yu Jin, Pengfei Qiu, Chunlu Wang, Yihao Yang, Dongsheng Wang, Gang Qu
|
Timing the Transient Execution: A New Side-Channel Attack on Intel CPUs
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transient execution attack is a type of attack leveraging the
vulnerability of modern CPU optimization technologies. New attacks surface
rapidly. The side-channel is a key part of transient execution attacks to leak
data. In this work, we discover a vulnerability that the change of the EFLAGS
register in transient execution may have a side effect on the Jcc (jump on
condition code) instruction after it in Intel CPUs. Based on our discovery, we
propose a new side-channel attack that leverages the timing of both transient
execution and Jcc instructions to deliver data. This attack encodes secret data
into a change of the EFLAGS register, which makes the execution time of the
subsequent context slightly slower; the attacker can measure this to decode the
data. This attack
doesn't rely on the cache system and doesn't need to reset the EFLAGS register
manually to its initial state before the attack, which may make it more
difficult to detect or mitigate. We implemented this side-channel on machines
with Intel Core i7-6700, i7-7700, and i9-10980XE CPUs. On the first two
processors, we used it as the side channel of a Meltdown attack, which
achieved a 100% leakage success rate. We evaluate and discuss potential
defenses against the attack. Our contributions include discovering security
vulnerabilities in the implementation of Jcc instructions and EFLAGS register
and proposing a new side-channel attack that does not rely on the cache system.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 10:40:20 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Jin",
"Yu",
""
],
[
"Qiu",
"Pengfei",
""
],
[
"Wang",
"Chunlu",
""
],
[
"Yang",
"Yihao",
""
],
[
"Wang",
"Dongsheng",
""
],
[
"Qu",
"Gang",
""
]
] |
new_dataset
| 0.998425 |
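To make the timing-channel mechanics above concrete, here is a generic sketch of the receiver's measurement loop: time a fixed operation many times and threshold the median. This illustrates only the decoding principle; the actual attack times Jcc instructions following transient execution on real Intel hardware, and both `op` and `threshold_ns` are placeholders that would be attack- and machine-specific.

```python
# A generic sketch of the decoding side of a timing channel: repeatedly time
# a fixed operation and threshold the measurements to recover one bit.
import time
import statistics

def time_once(op) -> int:
    t0 = time.perf_counter_ns()
    op()
    return time.perf_counter_ns() - t0

def decode_bit(op, threshold_ns: float, samples: int = 101) -> int:
    # Use the median over many runs to suppress scheduling noise.
    med = statistics.median(time_once(op) for _ in range(samples))
    return 1 if med > threshold_ns else 0
```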
2304.10878
|
Rebekah Rousi Dr
|
Rebekah Rousi
|
AI Design, Design AI, Human-Centred AI and the Theatre of the Absurd: the
language, life and times of a UX designer
|
14 pages, 6 figures, Nordic network for research on communicative
product design (Nordcode) seminar 2019
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This article connects the concepts and phenomena of Design AI, AI in creative
industries, and AI's capacity for creativity. It links Design AI to UX design
and UX designer discourse, noting the vagueness of the term and the prominence
of UX designers as speakers and writers in the spectacle of cultural AI
discourse. The article then draws comparisons between the Theatre of the Absurd
(ToA) and the UX designer performances of design AI. It additionally sheds
light on the ToA and the human condition in terms of existentialism, present
within the practice of engaging in design that intends to link human experience
to technological system logic. This is a theoretical article that utilises
examples from UX events published on YouTube, as well as UX designer blogs, to
illustrate the mechanics of the ToA present within contemporary AI and UX
designer discourse.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 10:40:58 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Rousi",
"Rebekah",
""
]
] |
new_dataset
| 0.998379 |
2304.10893
|
Runwei Guan
|
Runwei Guan, Ka Lok Man, Feifan Chen, Shanliang Yao, Rongsheng Hu,
Xiaohui Zhu, Jeremy Smith, Eng Gee Lim and Yutao Yue
|
FindVehicle and VehicleFinder: A NER dataset for natural language-based
vehicle retrieval and a keyword-based cross-modal vehicle retrieval system
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Natural language (NL) based vehicle retrieval is a task aiming to retrieve the
vehicle that is most consistent with a given NL query from among all candidate
vehicles. Because NL queries can be easily obtained, such a task has promising
prospects in building an interactive intelligent traffic system (ITS). Current
solutions mainly focus on extracting both text and image features and mapping
them to the same latent space to compare the similarity. However, existing
methods usually use dependency analysis or semantic role-labelling techniques
to find keywords related to vehicle attributes. These techniques may require a
lot of pre-processing and post-processing work, and also suffer from extracting
the wrong keyword when the NL query is complex. To tackle these problems and
simplify the pipeline, we borrow the idea of named entity recognition (NER) and construct
FindVehicle, a NER dataset in the traffic domain. It has 42.3k labelled NL
descriptions of vehicle tracks, containing information such as the location,
orientation, type and colour of the vehicle. FindVehicle also adopts both
overlapping entities and fine-grained entities to meet further requirements. To
verify its effectiveness, we propose a baseline NL-based vehicle retrieval
model called VehicleFinder. Our experiment shows that by using text encoders
pre-trained on FindVehicle, VehicleFinder achieves 87.7% precision and 89.4%
recall when retrieving a target vehicle by text command on our homemade dataset
based on UA-DETRAC. The time cost of VehicleFinder is 279.35 ms on one ARM v8.2
CPU and 93.72 ms on one RTX A4000 GPU, which is much faster than the
Transformer-based system. The dataset is open-source via the link
https://github.com/GuanRunwei/FindVehicle, and the implementation can be found
via the link https://github.com/GuanRunwei/VehicleFinder-CTIM.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 11:20:23 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Guan",
"Runwei",
""
],
[
"Man",
"Ka Lok",
""
],
[
"Chen",
"Feifan",
""
],
[
"Yao",
"Shanliang",
""
],
[
"Hu",
"Rongsheng",
""
],
[
"Zhu",
"Xiaohui",
""
],
[
"Smith",
"Jeremy",
""
],
[
"Lim",
"Eng Gee",
""
],
[
"Yue",
"Yutao",
""
]
] |
new_dataset
| 0.999621 |
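The keyword-based retrieval idea above reduces, at its core, to matching NER-extracted attribute entities against per-track attributes. A minimal sketch follows, with a stubbed lexicon standing in for the trained NER model and invented field names throughout.

```python
# A sketch of keyword-based cross-modal retrieval in the spirit of
# VehicleFinder: entities extracted from the query are matched against the
# attributes of candidate vehicle tracks.
from typing import Dict, List

def extract_entities(query: str) -> Dict[str, str]:
    # Stand-in for a trained NER model; returns attribute -> value.
    lexicon = {"red": ("colour", "red"), "suv": ("type", "suv"),
               "left": ("orientation", "left")}
    found = {}
    for token in query.lower().split():
        if token in lexicon:
            key, value = lexicon[token]
            found[key] = value
    return found

def retrieve(query: str, tracks: List[Dict[str, str]]) -> List[Dict[str, str]]:
    wanted = extract_entities(query)
    return [t for t in tracks
            if all(t.get(k) == v for k, v in wanted.items())]

tracks = [{"id": "7", "colour": "red", "type": "suv", "orientation": "left"},
          {"id": "9", "colour": "blue", "type": "sedan", "orientation": "left"}]
print(retrieve("find the red suv turning left", tracks))  # -> track 7 only
```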
2304.10899
|
Yuriy Pershin
|
Zixi Zhang, Yuriy V. Pershin, and Ivar Martin
|
Electromechanical memcapacitive neurons for energy-efficient spiking
neural networks
| null | null | null | null |
cs.ET cond-mat.mes-hall
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we introduce a new nanoscale electromechanical device -- a
leaky memcapacitor -- and show that it may be useful for the hardware
implementation of spiking neurons. The leaky memcapacitor is a movable-plate
capacitor that becomes quite conductive when the plates come close to each
other. The equivalent circuit of the leaky memcapacitor involves a
memcapacitive and memristive system connected in parallel. In the leaky
memcapacitor, the resistance and capacitance depend on the same internal state
variable, which is the displacement of the movable plate. We have performed a
comprehensive analysis showing that several spiking types observed in
biological neurons can be implemented with the leaky memcapacitor. Significant
attention is paid to the dynamic properties of the model. Since, in leaky
memcapacitors, the capacitive and leaky resistive functionalities are
implemented naturally within the same device structure, their use will simplify
the creation of spiking neural networks.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 11:34:58 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Zhang",
"Zixi",
""
],
[
"Pershin",
"Yuriy V.",
""
],
[
"Martin",
"Ivar",
""
]
] |
new_dataset
| 0.999007 |
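The spiking behavior described above can be caricatured with a forward-Euler integration of a movable-plate device whose capacitance and leak conductance both depend on the plate displacement. The dynamics below are purely illustrative assumptions (a parallel-plate C(x) and an exponential leak G(x)); the paper's electromechanical device model is more detailed.

```python
# A toy Euler integration of a leaky-memcapacitor "neuron": charge builds up,
# electrostatic force pulls the plate in, the leak grows as the gap closes,
# and a near-closed gap is treated as a spike followed by a reset.
import math

eps_A, d = 1.0, 1.0          # permittivity*area and rest gap (arbitrary units)
x, v_plate, q = 0.0, 0.0, 0.0
dt, k, m, damp = 1e-3, 5.0, 1.0, 2.0
I_in = 0.8                   # constant input current

for step in range(20000):
    C = eps_A / (d - x)                     # parallel-plate capacitance C(x)
    G = 1e-3 * math.exp(8.0 * x / d)        # leak grows as plates approach
    V = q / C
    F = 0.5 * (V ** 2) * eps_A / (d - x) ** 2 - k * x - damp * v_plate
    v_plate += dt * F / m                   # plate equation of motion
    x = min(x + dt * v_plate, 0.95 * d)
    q += dt * (I_in - G * V)                # charge: input minus leak
    if x > 0.9 * d:                         # "spike": gap nearly closed
        x, v_plate, q = 0.0, 0.0, 0.0       # mechanical/electrical reset
```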
2304.10973
|
David Garcia
|
Segun Taofeek Aroyehun, Lukas Malik, Hannah Metzler, Nikolas Haimerl,
Anna Di Natale, David Garcia
|
LEIA: Linguistic Embeddings for the Identification of Affect
| null | null | null | null |
cs.CL cs.AI cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The wealth of text data generated by social media has enabled new kinds of
analysis of emotions with language models. These models are often trained on
small and costly datasets of text annotations produced by readers who guess the
emotions expressed by others in social media posts. This affects the quality of
emotion identification methods due to training data size limitations and noise
in the production of labels used in model development. We present LEIA, a model
for emotion identification in text that has been trained on a dataset of more
than 6 million posts with self-annotated emotion labels for happiness,
affection, sadness, anger, and fear. LEIA is based on a word masking method
that enhances the learning of emotion words during model pre-training. LEIA
achieves macro-F1 values of approximately 73 on three in-domain test datasets,
outperforming other supervised and unsupervised methods in a strong benchmark
that shows that LEIA generalizes across posts, users, and time periods. We
further perform an out-of-domain evaluation on five different datasets of
social media and other sources, showing LEIA's robust performance across media,
data collection methods, and annotation schemes. Our results show that LEIA
generalizes its classification of anger, happiness, and sadness beyond the
domain it was trained on. LEIA can be applied in future research to provide
better identification of emotions in text from the perspective of the writer.
The models produced for this article are publicly available at
https://huggingface.co/LEIA
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 14:17:10 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Aroyehun",
"Segun Taofeek",
""
],
[
"Malik",
"Lukas",
""
],
[
"Metzler",
"Hannah",
""
],
[
"Haimerl",
"Nikolas",
""
],
[
"Di Natale",
"Anna",
""
],
[
"Garcia",
"David",
""
]
] |
new_dataset
| 0.998789 |
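LEIA's word-masking idea above can be sketched as a masking policy that prefers emotion-lexicon words when building masked-language-modeling examples. The probabilities and the tiny lexicon below are illustrative assumptions; the paper's pipeline operates on subword tokens with a full emotion lexicon.

```python
# A sketch of emotion-word masking for MLM pre-training: tokens from an
# emotion lexicon are masked with much higher probability than other tokens,
# so the model must learn to predict emotion words from context.
import random

EMOTION_LEXICON = {"happy", "sad", "angry", "afraid", "love"}

def mask_tokens(tokens, p_emotion=0.5, p_other=0.15, mask="[MASK]"):
    out, labels = [], []
    for tok in tokens:
        p = p_emotion if tok.lower() in EMOTION_LEXICON else p_other
        if random.random() < p:
            out.append(mask)
            labels.append(tok)      # the model is trained to recover this
        else:
            out.append(tok)
            labels.append(None)     # ignored by the loss
    return out, labels

print(mask_tokens("i am so happy about this".split()))
```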
2304.10983
|
Sergey Titov
|
Alexander Agroskin, Elena Lyulina, Sergey Titov, Vladimir Kovalenko
|
Constructing Temporal Networks of OSS Programming Language Ecosystems
|
Accepted to SANER 2023
| null | null | null |
cs.SE cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the primary factors that encourage developers to contribute to open
source software (OSS) projects is the collaborative nature of OSS development.
However, the collaborative structure of these communities largely remains
unclear, partly due to the enormous scale of data to be gathered, processed,
and analyzed. In this work, we utilize the World Of Code dataset, which
contains commit activity data for millions of OSS projects, to build
collaboration networks for ten popular programming language ecosystems,
containing in total over 290M commits across over 18M projects. We build a
collaboration graph representation for each language ecosystem, having authors
and projects as nodes, which enables various forms of social network analysis
on the scale of language ecosystems. Moreover, we capture the information on
the ecosystems' evolution by slicing each network into 30 historical snapshots.
Additionally, we calculate multiple collaboration metrics that characterize the
ecosystems' states. We make the resulting dataset publicly available, including
the constructed graphs and the pipeline enabling the analysis of more
ecosystems.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 14:30:30 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Agroskin",
"Alexander",
""
],
[
"Lyulina",
"Elena",
""
],
[
"Titov",
"Sergey",
""
],
[
"Kovalenko",
"Vladimir",
""
]
] |
new_dataset
| 0.962362 |
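The construction above, author-project graphs sliced into historical snapshots, can be sketched with networkx. The input format, snapshot count, and cumulative slicing policy below are assumptions for illustration; the paper processes World of Code commit data at a vastly larger scale and uses 30 snapshots per ecosystem.

```python
# A minimal sketch of building author-project collaboration snapshots from
# (author, project, timestamp) commit records.
import networkx as nx

commits = [
    ("alice", "numpy", 1_500_000_000),
    ("bob",   "numpy", 1_550_000_000),
    ("alice", "scipy", 1_600_000_000),
]

def snapshots(commits, n_slices=3):
    times = [t for _, _, t in commits]
    t0, t1 = min(times), max(times)
    step = (t1 - t0) / n_slices or 1
    graphs = []
    for i in range(1, n_slices + 1):
        g = nx.Graph()
        cutoff = t0 + i * step
        for author, project, t in commits:
            if t <= cutoff:                  # cumulative snapshot up to cutoff
                g.add_node(author, kind="author")
                g.add_node(project, kind="project")
                g.add_edge(author, project)
        graphs.append(g)
    return graphs

for g in snapshots(commits):
    print(g.number_of_nodes(), g.number_of_edges())
```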
2304.10987
|
Jianheng Liu
|
Jianheng Liu, Xuanfu Li, Yueqian Liu, Haoyao Chen
|
RGB-D Inertial Odometry for a Resource-Restricted Robot in Dynamic
Environments
| null |
IEEE Robotics and Automation Letters (Volume 7, Issue 4, October 2022)
|
10.1109/LRA.2022.3191193
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current simultaneous localization and mapping (SLAM) algorithms perform well
in static environments but easily fail in dynamic environments. Recent works
introduce deep learning-based semantic information to SLAM systems to reduce
the influence of dynamic objects. However, it is still challenging to apply a
robust localization in dynamic environments for resource-restricted robots.
This paper proposes a real-time RGB-D inertial odometry system for
resource-restricted robots in dynamic environments named Dynamic-VINS. Three
main threads run in parallel: object detection, feature tracking, and state
optimization. The proposed Dynamic-VINS combines object detection and depth
information for dynamic feature recognition and achieves performance comparable
to semantic segmentation. Dynamic-VINS adopts grid-based feature detection and
proposes a fast and efficient method to extract high-quality FAST feature
points. An IMU is applied to predict motion for feature tracking and for
moving-consistency checks. The proposed method is evaluated on both public
datasets and real-world applications and shows competitive localization
accuracy and robustness in dynamic environments. To the best of our knowledge,
it is currently the best-performing real-time RGB-D inertial odometry for
resource-restricted platforms in dynamic environments. The proposed system is open source
at: https://github.com/HITSZ-NRSL/Dynamic-VINS.git
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 14:37:11 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Liu",
"Jianheng",
""
],
[
"Li",
"Xuanfu",
""
],
[
"Liu",
"Yueqian",
""
],
[
"Chen",
"Haoyao",
""
]
] |
new_dataset
| 0.978142 |
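The detection-plus-depth recognition step described above can be sketched as a simple filter over tracked features: discard a feature only if it lies inside a detected object's box and its depth agrees with the object's depth, so background points seen through the box survive. The thresholds and data layout are illustrative assumptions, not Dynamic-VINS internals.

```python
# A sketch of dynamic-feature rejection combining 2D detections with depth.
def is_dynamic(feature, detections, depth_tol=0.3):
    u, v, z = feature               # pixel coordinates and depth (metres)
    for (x0, y0, x1, y1, obj_depth) in detections:
        inside = x0 <= u <= x1 and y0 <= v <= y1
        if inside and abs(z - obj_depth) < depth_tol:
            return True             # on the moving object: drop this feature
    return False

detections = [(100, 50, 220, 200, 2.1)]          # person detected at ~2.1 m
print(is_dynamic((150, 120, 2.0), detections))   # True: lies on the person
print(is_dynamic((150, 120, 6.5), detections))   # False: background behind
```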
2304.10990
|
Iris Andrussow
|
Iris Andrussow, Huanbo Sun, Katherine J. Kuchenbecker, and Georg
Martius
|
Minsight: A Fingertip-Sized Vision-Based Tactile Sensor for Robotic
Manipulation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intelligent interaction with the physical world requires perceptual abilities
beyond vision and hearing; vibrant tactile sensing is essential for autonomous
robots to dexterously manipulate unfamiliar objects or safely contact humans.
Therefore, robotic manipulators need high-resolution touch sensors that are
compact, robust, inexpensive, and efficient. The soft vision-based haptic
sensor presented herein is a miniaturized and optimized version of the
previously published sensor Insight. Minsight has the size and shape of a human
fingertip and uses machine learning methods to output high-resolution maps of
3D contact force vectors at 60 Hz. Experiments confirm its excellent sensing
performance, with a mean absolute force error of 0.07 N and contact location
error of 0.6 mm across its surface area. Minsight's utility is shown in two
robotic tasks on a 3-DoF manipulator. First, closed-loop force control enables
the robot to track the movements of a human finger based only on tactile data.
Second, the informative value of the sensor output is shown by detecting
whether a hard lump is embedded within a soft elastomer with an accuracy of
98%. These findings indicate that Minsight can give robots the detailed
fingertip touch sensing needed for dexterous manipulation and physical
human-robot interaction.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 14:39:47 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Andrussow",
"Iris",
""
],
[
"Sun",
"Huanbo",
""
],
[
"Kuchenbecker",
"Katherine J.",
""
],
[
"Martius",
"Georg",
""
]
] |
new_dataset
| 0.999156 |
2304.11014
|
Philip Saville
|
Hugo Paquet, Philip Saville
|
Strong pseudomonads and premonoidal bicategories
|
Comments and feedback welcome!
| null | null | null |
cs.LO math.CT
|
http://creativecommons.org/licenses/by/4.0/
|
Strong monads and premonoidal categories play a central role in clarifying
the denotational semantics of effectful programming languages. Unfortunately,
this theory excludes many modern semantic models in which the associativity and
unit laws only hold up to coherent isomorphism: for instance, because
composition is defined using a universal property. This paper remedies the
situation. We define premonoidal bicategories and a notion of strength for
pseudomonads, and show that the Kleisli bicategory of a strong pseudomonad is
premonoidal. As often in 2-dimensional category theory, the main difficulty is
to find the correct coherence axioms on 2-cells. We therefore justify our
definitions with numerous examples and by proving a correspondence theorem
between actions and strengths, generalizing a well-known category-theoretic
result.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 15:01:25 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Paquet",
"Hugo",
""
],
[
"Saville",
"Philip",
""
]
] |
new_dataset
| 0.955001 |
2304.11030
|
Jiaao Yu
|
Jiaao Yu, Paul-Philipp Manea, Sara Ameli, Mohammad Hizzani, Amro
Eldebiky, John Paul Strachan
|
Analog Feedback-Controlled Memristor programming Circuit for analog
Content Addressable Memory
| null | null | null | null |
cs.ET cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent breakthroughs in associative memories suggest that silicon memories
are coming closer to human memories, especially memristive Content
Addressable Memories (CAMs), which are capable of reading and writing analog
values. However, the Program-Verify algorithm, the state-of-the-art memristor
programming algorithm, requires frequent switching between verifying and
programming memristor conductance, which brings many drawbacks such as high
dynamic power and long programming time. Here, we propose an analog
feedback-controlled memristor programming circuit that makes use of a novel
look-up table-based (LUT-based) programming algorithm. With the proposed
algorithm, the programming and the verification of a memristor can be performed
in a single-direction sequential process. Besides, we also integrated a single
proposed programming circuit with eight analog CAM (aCAM) cells to build an
aCAM array. We present SPICE simulations on a TSMC 28nm process. The
theoretical analysis shows that (1) a memristor conductance within an aCAM
cell can be converted to an output boundary voltage in aCAM searching
operations, and (2) an output boundary voltage in aCAM searching operations
can be converted to a programming data-line voltage in aCAM programming
operations. The simulation results of the proposed programming circuit confirm
the theoretical analysis and thus verify the feasibility of programming
memristors without frequently switching between verifying and programming the
conductance. Besides, the simulation
results of the proposed aCAM array show that the proposed programming circuit
can be integrated into a large array architecture.
|
[
{
"version": "v1",
"created": "Fri, 21 Apr 2023 15:23:50 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Yu",
"Jiaao",
""
],
[
"Manea",
"Paul-Philipp",
""
],
[
"Ameli",
"Sara",
""
],
[
"Hizzani",
"Mohammad",
""
],
[
"Eldebiky",
"Amro",
""
],
[
"Strachan",
"John Paul",
""
]
] |
new_dataset
| 0.997723 |
2304.11052
|
Thomas Kunz
|
Thomas Kunz, Christian Fisher, James La Novara-Gsell, Christopher
Nguyen, Li Li
|
A Multiagent CyberBattleSim for RL Cyber Operation Agents
|
To appear in Proceedings of the 2022 International Conference on
Computational Science and Computational Intelligence
| null | null | null |
cs.CR cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Hardening cyber physical assets is both crucial and labor-intensive.
Recently, Machine Learning (ML) in general, and Reinforcement Learning (RL)
more specifically, has shown great promise to automate tasks that otherwise would
require significant human insight/intelligence. The development of autonomous
RL agents requires a suitable training environment that allows us to quickly
evaluate various alternatives, in particular how to arrange training scenarios
that pit attackers and defenders against each other. CyberBattleSim is a
training environment that supports the training of red agents, i.e., attackers.
We added the capability to train blue agents, i.e., defenders. The paper
describes our changes and reports on the results we obtained when training blue
agents, either in isolation or jointly with red agents. Our results show that
training a blue agent does lead to stronger defenses against attacks. In
particular, training a blue agent jointly with a red agent increases the blue
agent's capability to thwart sophisticated red agents.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 20:43:19 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Kunz",
"Thomas",
""
],
[
"Fisher",
"Christian",
""
],
[
"La Novara-Gsell",
"James",
""
],
[
"Nguyen",
"Christopher",
""
],
[
"Li",
"Li",
""
]
] |
new_dataset
| 0.995703 |
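The joint red/blue training described above follows a standard alternating pattern, sketched below against a hypothetical two-sided gym-like interface; the method names (`reset`, `step`, `act`, `learn`) are placeholders for illustration, not CyberBattleSim's actual API.

```python
# A schematic of jointly training an attacker (red) and a defender (blue)
# in a two-sided environment: both act each step, and each learns from its
# own reward (blue gains when attacks fail).
def train_jointly(env, red, blue, episodes=1000):
    for _ in range(episodes):
        obs_red, obs_blue = env.reset()
        done = False
        while not done:
            a_red = red.act(obs_red)          # attacker move
            a_blue = blue.act(obs_blue)       # defender move
            (obs_red, r_red), (obs_blue, r_blue), done = env.step(a_red, a_blue)
            red.learn(r_red)
            blue.learn(r_blue)
    return red, blue
```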
2304.11060
|
Nan Li
|
Nan Li, Bo Kang, Tijl De Bie
|
SkillGPT: a RESTful API service for skill extraction and standardization
using a Large Language Model
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present SkillGPT, a tool for skill extraction and standardization (SES)
from free-style job descriptions and user profiles with an open-source Large
Language Model (LLM) as backbone. Most previous methods for similar tasks
either need supervision or rely on heavy data-preprocessing and feature
engineering. Directly prompting the latest conversational LLM for standard
skills, however, is slow, costly and inaccurate. In contrast, SkillGPT utilizes
an LLM to perform its tasks in steps via summarization and vector similarity
search, to balance speed with precision. The backbone LLM of SkillGPT is based
on Llama, free for academic use and thus useful for exploratory research and
prototype development. Hence, our cost-free SkillGPT gives users the
convenience of conversational SES, efficiently and reliably.
|
[
{
"version": "v1",
"created": "Mon, 17 Apr 2023 08:43:20 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Li",
"Nan",
""
],
[
"Kang",
"Bo",
""
],
[
"De Bie",
"Tijl",
""
]
] |
new_dataset
| 0.960301 |
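SkillGPT's "summarize, then vector-search" pattern described above can be sketched in a few lines of NumPy; `llm_summarize` and `embed` are placeholder callables, and the taxonomy is assumed to be pre-embedded (e.g., standard skill labels such as ESCO's).

```python
# A sketch of LLM summarization followed by nearest-neighbor lookup against
# a pre-embedded standard skill taxonomy.
import numpy as np

def nearest_standard_skill(phrase_vec, taxonomy_vecs, taxonomy_labels):
    # Cosine similarity against every standard skill, take the argmax.
    sims = taxonomy_vecs @ phrase_vec / (
        np.linalg.norm(taxonomy_vecs, axis=1) * np.linalg.norm(phrase_vec)
    )
    return taxonomy_labels[int(np.argmax(sims))]

def standardize(job_description, llm_summarize, embed, taxonomy):
    labels, vecs = taxonomy
    phrases = llm_summarize(job_description)  # LLM step: extract skill phrases
    return [nearest_standard_skill(embed(p), vecs, labels) for p in phrases]
```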
2304.11077
|
Vitaly Shalumov
|
Vitaly Shalumov and Harel Haskey
|
HeRo: RoBERTa and Longformer Hebrew Language Models
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we fill an existing gap in the resources available to the
Hebrew NLP community by providing it with the largest pre-training dataset so
far, HeDC4; a state-of-the-art pre-trained language model, HeRo, for
standard-length inputs; and an efficient transformer, LongHeRo, for long input
sequences. The HeRo
model was evaluated on the sentiment analysis, the named entity recognition,
and the question answering tasks while the LongHeRo model was evaluated on the
document classification task with a dataset composed of long documents. Both
HeRo and LongHeRo presented state-of-the-art performance. The dataset and model
checkpoints used in this work are publicly available.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 05:56:32 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Shalumov",
"Vitaly",
""
],
[
"Haskey",
"Harel",
""
]
] |
new_dataset
| 0.999727 |
2304.11081
|
Shashank Gupta
|
Avval Amil and Shashank Gupta
|
Cryptanalysis of quantum permutation pad
|
7 pages, 1 figures, comments are welcome
| null | null | null |
cs.CR math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cryptanalysis increases the level of confidence in cryptographic algorithms.
We analyze the security of a symmetric cryptographic algorithm - quantum
permutation pad (QPP) [8]. We found the instances of ciphertext the same as
plaintext even after the action of QPP with the probability 1/N when the entire
set of permutation matrices of dimension N is used and with the probability
1/N^m when an incomplete set of m permutation matrices of dimension N are used.
We visually show such instances in a cipher image created by QPP of 256
permutation matrices of different dimensions. For any practical usage of QPP,
we recommend a set of 256 permutation matrices of a dimension more or equal to
2048.
|
[
{
"version": "v1",
"created": "Wed, 5 Apr 2023 17:22:31 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Amil",
"Avval",
""
],
[
"Gupta",
"Shashank",
""
]
] |
new_dataset
| 0.995503 |
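The 1/N figure above is easy to sanity-check numerically: for a uniformly random permutation of N symbols, any given symbol is a fixed point with probability exactly 1/N. A quick simulation, with N = 256 chosen to match the setting discussed in the abstract:

```python
# Empirical check that a random N x N permutation leaves a given symbol
# unchanged (ciphertext == plaintext) with probability 1/N.
import random

def fixed_point_rate(N=256, trials=200_000):
    hits = 0
    for _ in range(trials):
        perm = list(range(N))
        random.shuffle(perm)        # a uniformly random permutation
        x = random.randrange(N)     # a random plaintext symbol
        hits += perm[x] == x        # unchanged by the permutation?
    return hits / trials

print(fixed_point_rate(), 1 / 256)  # both close to 0.0039
```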
2304.11087
|
Ebenezer Isaac
|
Ebenezer R. H. P. Isaac and Jim Reno
|
AI Product Security: A Primer for Developers
|
10 pages, 1 figure
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Not too long ago, AI security used to mean the research and practice of how
AI can empower cybersecurity, that is, AI for security. Ever since Ian
Goodfellow and his team popularized adversarial attacks on machine learning,
security for AI has become an important concern and also part of AI security. It is
imperative to understand the threats to machine learning products and avoid
common pitfalls in AI product development. This article is addressed to
developers, designers, managers and researchers of AI software products.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 05:22:34 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Isaac",
"Ebenezer R. H. P.",
""
],
[
"Reno",
"Jim",
""
]
] |
new_dataset
| 0.981333 |
2304.11093
|
Meidai Xuanyuan
|
Meidai Xuanyuan, Yuwang Wang, Honglei Guo, Xiao Ma, Yuchen Guo, Tao
Yu, Qionghai Dai
|
Hi Sheldon! Creating Deep Personalized Characters from TV Shows
| null | null | null | null |
cs.CL cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Imagine an interesting multimodal interactive scenario in which you can see,
hear, and chat with an AI-generated digital character who is capable of
behaving like Sheldon from The Big Bang Theory, as a DEEP copy from appearance
to personality. Towards this fantastic multimodal chatting scenario, we propose
a novel task, named Deep Personalized Character Creation (DPCC): creating
multimodal chat personalized characters from multimodal data such as TV shows.
Specifically, given a single- or multi-modality input (text, audio, video), the
goal of DPCC is to generate a multi-modality (text, audio, video) response
that is well matched to the personality of a specific character, such as
Sheldon, and of high quality as well. To support this novel task, we further
collect a character-centric multimodal dialogue dataset, named Deep
Personalized Character Dataset (DPCD), from TV shows. DPCD contains
character-specific multimodal dialogue data of ~10k utterances and ~6 hours of
audio/video per character, which is around 10 times larger than existing
related datasets. On DPCD, we present a baseline method for the DPCC task and
create 5 Deep personalized digital Characters (DeepCharacters) from Big Bang TV
Shows. We conduct both subjective and objective experiments to evaluate the
multimodal response from DeepCharacters in terms of characterization and
quality. The results demonstrate that, on our collected DPCD dataset, the
proposed baseline can create personalized digital characters for generating
multimodal responses. Our collected DPCD dataset, the data collection code,
and our baseline will be published soon.
|
[
{
"version": "v1",
"created": "Sun, 9 Apr 2023 00:39:43 GMT"
}
] | 2023-04-24T00:00:00 |
[
[
"Xuanyuan",
"Meidai",
""
],
[
"Wang",
"Yuwang",
""
],
[
"Guo",
"Honglei",
""
],
[
"Ma",
"Xiao",
""
],
[
"Guo",
"Yuchen",
""
],
[
"Yu",
"Tao",
""
],
[
"Dai",
"Qionghai",
""
]
] |
new_dataset
| 0.995532 |
1312.3604
|
Michael May
|
Michael P. May
|
A closed-form solution for the flat-state geometry of cylindrical
surface intersections bounded on all sides by orthogonal planes
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a closed-form solution for the boundary of the flat state of an
orthogonal cross section of contiguous surface geometry formed by the
intersection of two cylinders of equal radii oriented in dual directions of
rotation about their intersecting axes.
|
[
{
"version": "v1",
"created": "Thu, 12 Dec 2013 19:51:48 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 00:16:15 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"May",
"Michael P.",
""
]
] |
new_dataset
| 0.999725 |
2103.03032
|
Hans van Ditmarsch
|
Hans van Ditmarsch, Roman Kuznets
|
Wanted Dead or Alive : Epistemic logic for impure simplicial complexes
| null | null | null | null |
cs.DC cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a logic of knowledge for impure simplicial complexes. Impure
simplicial complexes represent synchronous distributed systems under
uncertainty over which processes are still active (are alive) and which
processes have failed or crashed (are dead). Our work generalizes the logic of
knowledge for pure simplicial complexes, where all processes are alive, by
Goubault et al. In our semantics, given a designated face in a complex, a
formula can only be true or false there if it is defined. The following are
undefined: dead processes cannot know or be ignorant of any proposition, and
live processes cannot know or be ignorant of factual propositions involving
processes they know to be dead. The semantics are therefore three-valued, with
undefined as the third value. We propose an axiomatization that is a version of
the modal logic S5. We also show that impure simplicial complexes correspond to
certain Kripke models where agents' accessibility relations are equivalence
relations on a subset of the domain only. This work extends a WoLLIC 21
conference publication with the same title.
|
[
{
"version": "v1",
"created": "Thu, 4 Mar 2021 13:47:35 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Feb 2022 12:42:11 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Apr 2023 07:23:24 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"van Ditmarsch",
"Hans",
""
],
[
"Kuznets",
"Roman",
""
]
] |
new_dataset
| 0.979116 |
2204.03245
|
Jiashun Suo
|
Jiashun Suo, Tianyi Wang, Xingzhou Zhang, Haiyang Chen, Wei Zhou,
Weisong Shi
|
HIT-UAV: A high-altitude infrared thermal dataset for Unmanned Aerial
Vehicle-based object detection
| null |
Sci Data 10, 227 (2023)
|
10.1038/s41597-023-02066-6
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the HIT-UAV dataset, a high-altitude infrared thermal dataset for
object detection applications on Unmanned Aerial Vehicles (UAVs). The dataset
comprises 2,898 infrared thermal images extracted from 43,470 frames in
hundreds of videos captured by UAVs in various scenarios including schools,
parking lots, roads, and playgrounds. Moreover, the HIT-UAV provides essential
flight data for each image, such as flight altitude, camera perspective, date,
and daylight intensity. For each image, we have manually annotated object
instances with bounding boxes of two types (oriented and standard) to tackle
the challenge of significant overlap of object instances in aerial images. To
the best of our knowledge, the HIT-UAV is the first publicly available
high-altitude UAV-based infrared thermal dataset for detecting persons and
vehicles. We have trained and evaluated well-established object detection
algorithms on the HIT-UAV. Our results demonstrate that the detection
algorithms perform exceptionally well on the HIT-UAV compared to visual light
datasets since infrared thermal images do not contain significant irrelevant
information about objects. We believe that the HIT-UAV will contribute to
various UAV-based applications and research. The dataset is freely available
at https://github.com/suojiashun/HIT-UAV-Infrared-Thermal-Dataset.
|
[
{
"version": "v1",
"created": "Thu, 7 Apr 2022 06:23:02 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2023 11:04:55 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Suo",
"Jiashun",
""
],
[
"Wang",
"Tianyi",
""
],
[
"Zhang",
"Xingzhou",
""
],
[
"Chen",
"Haiyang",
""
],
[
"Zhou",
"Wei",
""
],
[
"Shi",
"Weisong",
""
]
] |
new_dataset
| 0.999855 |
2208.14788
|
Ondrej Bajgar
|
Ondrej Bajgar and Jan Horenovsky
|
Negative Human Rights as a Basis for Long-term AI Safety and Regulation
| null |
Journal of Artificial Intelligence Research 76 (2023) 1043-1075
|
10.1613/jair.1.14020
| null |
cs.CY cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
If autonomous AI systems are to be reliably safe in novel situations, they
will need to incorporate general principles guiding them to recognize and avoid
harmful behaviours. Such principles may need to be supported by a binding
system of regulation, which would need the underlying principles to be widely
accepted. They should also be specific enough for technical implementation.
Drawing inspiration from law, this article explains how negative human rights
could fulfil the role of such principles and serve as a foundation both for an
international regulatory system and for building technical safety constraints
for future AI systems.
|
[
{
"version": "v1",
"created": "Wed, 31 Aug 2022 11:57:13 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 09:27:07 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Bajgar",
"Ondrej",
""
],
[
"Horenovsky",
"Jan",
""
]
] |
new_dataset
| 0.961295 |
2210.11234
|
Guowen Li
|
Guowen Li, Zhiyao Yang, Yangyang Fu, Lingyu Ren, Zheng O'Neill, Chirag
Parikh
|
Development of a hardware-In-the-Loop (HIL) testbed for cyber-physical
security in smart buildings
|
Presented at the 2023 ASHRAE Winter Conference
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As smart buildings move towards open communication technologies, providing
access to the Building Automation System (BAS) through the intranet, or even
remotely through the Internet, has become a common practice. However, BAS was
historically developed as a closed environment and designed with limited
cyber-security considerations. Thus, with this increased accessibility, smart
buildings have become vulnerable to cyber-attacks. This study introduces the
development and capability of a Hardware-in-the-Loop (HIL) testbed for testing
and evaluating the cyber-physical security of typical BASs in smart buildings.
The testbed consists of three subsystems: (1) a real-time HIL emulator
simulating the behavior of a virtual building as well as the Heating,
Ventilation, and Air Conditioning (HVAC) equipment via a dynamic simulation in
Modelica; (2) a set of real HVAC controllers monitoring the virtual building
operation and providing local control signals to control HVAC equipment in the
HIL emulator; and (3) a BAS server along with a web-based service for users to
fully access the schedule, setpoints, trends, alarms, and other control
functions of the HVAC controllers remotely through the BACnet network. The
server generates rule-based setpoints to local HVAC controllers. Based on these
three subsystems, the HIL testbed supports attack/fault-free and
attack/fault-injection experiments at various levels of the building system.
The resulting test data can be used to inform the building community and
support the cyber-physical security technology transfer to the building
industry.
|
[
{
"version": "v1",
"created": "Mon, 17 Oct 2022 02:39:07 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Oct 2022 00:48:15 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Apr 2023 17:15:31 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Li",
"Guowen",
""
],
[
"Yang",
"Zhiyao",
""
],
[
"Fu",
"Yangyang",
""
],
[
"Ren",
"Lingyu",
""
],
[
"O'Neill",
"Zheng",
""
],
[
"Parikh",
"Chirag",
""
]
] |
new_dataset
| 0.97773 |
2301.02711
|
Thomas Thuesen Enevoldsen
|
Thomas T. Enevoldsen, Mogens Blanke, Roberto Galeazzi
|
Autonomy for Ferries and Harbour Buses: a Collision Avoidance
Perspective
|
Accepted for presentation at the IFAC World Congress 2023
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper provides a collision avoidance perspective on maritime autonomy
in the shift towards Maritime Autonomous Surface Ships (MASS). In particular,
the paper presents the developments related to the Greenhopper, Denmark's first
autonomous harbour bus. The collision and grounding avoidance scheme, called
the Short Horizon Planner (SHP), is described and discussed in detail.
Furthermore, the required autonomy stack for facilitating safe and
rule-compliant collision avoidance is presented. The inherent difficulties
related to adhering to the COLREGs are outlined, highlighting some of the
operational constraints and challenges within the space of autonomous ferries
and harbour buses. Finally, collision and grounding avoidance is demonstrated
using a simulation of the whole Greenhopper autonomy stack.
|
[
{
"version": "v1",
"created": "Fri, 6 Jan 2023 20:57:47 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 09:05:48 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Enevoldsen",
"Thomas T.",
""
],
[
"Blanke",
"Mogens",
""
],
[
"Galeazzi",
"Roberto",
""
]
] |
new_dataset
| 0.997864 |
2302.00782
|
Jacob Schrum
|
Alejandro Medina and Melanie Richey and Mark Mueller and Jacob Schrum
|
Evolving Flying Machines in Minecraft Using Quality Diversity
|
In Genetic and Evolutionary Computation Conference (GECCO '23), July
15-19, 2023, Lisbon, Portugal
| null |
10.1145/3583131.3590352
| null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Minecraft is a great testbed for human creativity that has inspired the
design of various structures and even functioning machines, including flying
machines. EvoCraft is an API for programmatically generating structures in
Minecraft, but the initial work in this domain was not capable of evolving
flying machines. This paper applies fitness-based evolution and quality
diversity search in order to evolve flying machines. Although fitness alone can
occasionally produce flying machines, thanks in part to a more sophisticated
fitness function than was used previously, the quality diversity algorithm
MAP-Elites is capable of discovering flying machines much more reliably, at
least when an appropriate behavior characterization is used to guide the search
for diverse solutions.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 22:25:28 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 21:35:07 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Medina",
"Alejandro",
""
],
[
"Richey",
"Melanie",
""
],
[
"Mueller",
"Mark",
""
],
[
"Schrum",
"Jacob",
""
]
] |
new_dataset
| 0.995285 |
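MAP-Elites, the quality-diversity algorithm named above, maintains one elite per behavior cell and fills the archive by mutating existing elites. A minimal sketch follows, with toy fitness, behavior, and mutation operators standing in for the Minecraft flying-machine evaluation.

```python
# A minimal MAP-Elites loop: keep the best genome found so far in each
# behavior-characterization cell, and generate new candidates mostly by
# mutating randomly chosen elites.
import random

def evaluate(genome):                 # toy stand-in for "how far it flies"
    return -sum((g - 0.7) ** 2 for g in genome)

def behavior(genome):                 # toy 1-D behavior bin, 10 cells
    return min(int(sum(genome) / len(genome) * 10), 9)

def mutate(genome, sigma=0.1):
    return [min(max(g + random.gauss(0, sigma), 0), 1) for g in genome]

archive = {}                          # cell -> (fitness, genome)
for it in range(10_000):
    if archive and random.random() < 0.9:
        parent = random.choice(list(archive.values()))[1]
        child = mutate(parent)
    else:
        child = [random.random() for _ in range(8)]   # random restart
    cell, fit = behavior(child), evaluate(child)
    if cell not in archive or fit > archive[cell][0]:
        archive[cell] = (fit, child)  # an elite is replaced only by a better one

print({c: round(f, 3) for c, (f, _) in sorted(archive.items())})
```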
2302.07777
|
Florentin Putz
|
Leon Würsching, Florentin Putz, Steffen Haesler, Matthias Hollick
|
FIDO2 the Rescue? Platform vs. Roaming Authentication on Smartphones
|
16 pages, 6 figures, the dataset is available at
https://doi.org/10.5281/zenodo.7572697 and the source code is available at
https://github.com/seemoo-lab/fido2-the-smartphone
|
ACM CHI 2023
|
10.1145/3544548.3580993
| null |
cs.CR cs.HC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern smartphones support FIDO2 passwordless authentication using either
external security keys or internal biometric authentication, but it is unclear
whether users appreciate and accept these new forms of web authentication for
their own accounts. We present the first lab study (N=87) comparing platform
and roaming authentication on smartphones, determining the practical strengths
and weaknesses of FIDO2 as perceived by users in a mobile scenario. Most
participants were willing to adopt passwordless authentication during our
in-person user study, but closer analysis shows that participants prioritize
usability, security, and availability differently depending on the account
type. We identify remaining adoption barriers that prevent FIDO2 from
superseding password authentication, such as missing support for contemporary
usage patterns, including account delegation and usage on multiple clients.
|
[
{
"version": "v1",
"created": "Wed, 15 Feb 2023 16:54:34 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Würsching",
"Leon",
""
],
[
"Putz",
"Florentin",
""
],
[
"Haesler",
"Steffen",
""
],
[
"Hollick",
"Matthias",
""
]
] |
new_dataset
| 0.97079 |
2303.00304
|
Obin Kwon
|
Obin Kwon, Jeongho Park, Songhwai Oh
|
Renderable Neural Radiance Map for Visual Navigation
|
Preprint version. CVPR 2023 accepted, highlight paper. Project page:
https://rllab-snu.github.io/projects/RNR-Map/
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a novel type of map for visual navigation, a renderable neural
radiance map (RNR-Map), which is designed to contain the overall visual
information of a 3D environment. The RNR-Map has a grid form and consists of
latent codes at each pixel. These latent codes are embedded from image
observations, and can be converted to the neural radiance field which enables
image rendering given a camera pose. The recorded latent codes implicitly
contain visual information about the environment, which makes the RNR-Map
visually descriptive. This visual information in RNR-Map can be a useful
guideline for visual localization and navigation. We develop localization and
navigation frameworks that can effectively utilize the RNR-Map. We evaluate the
proposed frameworks on camera tracking, visual localization, and image-goal
navigation. Experimental results show that the RNR-Map-based localization
framework can find the target location based on a single query image with fast
speed and competitive accuracy compared to other baselines. Also, this
localization framework is robust to environmental changes, and even finds the
most visually similar places when a query image from a different environment is
given. The proposed navigation framework outperforms the existing image-goal
navigation methods in difficult scenarios, under odometry and actuation noises.
The navigation framework shows 65.7% success rate in curved scenarios of the
NRNS dataset, which is an improvement of 18.6% over the current
state-of-the-art. Project page: https://rllab-snu.github.io/projects/RNR-Map/
|
[
{
"version": "v1",
"created": "Wed, 1 Mar 2023 08:00:46 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Mar 2023 11:12:20 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2023 05:59:24 GMT"
},
{
"version": "v4",
"created": "Thu, 20 Apr 2023 01:50:55 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Kwon",
"Obin",
""
],
[
"Park",
"Jeongho",
""
],
[
"Oh",
"Songhwai",
""
]
] |
new_dataset
| 0.999347 |
2304.02122
|
Joe Yue-Hei Ng
|
Joe Yue-Hei Ng, Kevin McCloskey, Jian Cui, Vincent R. Meijer, Erica
Brand, Aaron Sarna, Nita Goyal, Christopher Van Arsdale, Scott Geraedts
|
OpenContrails: Benchmarking Contrail Detection on GOES-16 ABI
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contrails (condensation trails) are line-shaped ice clouds caused by aircraft
and are likely the largest contributor to aviation-induced climate change.
Contrail avoidance is potentially an inexpensive way to significantly reduce
the climate impact of aviation. An automated contrail detection system is an
essential tool to develop and evaluate contrail avoidance systems. In this
paper, we present a human-labeled dataset named OpenContrails to train and
evaluate contrail detection models based on GOES-16 Advanced Baseline Imager
(ABI) data. We propose and evaluate a contrail detection model that
incorporates temporal context for improved detection accuracy. The human
labeled dataset and the contrail detection outputs are publicly available on
Google Cloud Storage at gs://goes_contrails_dataset.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 21:03:46 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 06:00:41 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Ng",
"Joe Yue-Hei",
""
],
[
"McCloskey",
"Kevin",
""
],
[
"Cui",
"Jian",
""
],
[
"Meijer",
"Vincent R.",
""
],
[
"Brand",
"Erica",
""
],
[
"Sarna",
"Aaron",
""
],
[
"Goyal",
"Nita",
""
],
[
"Van Arsdale",
"Christopher",
""
],
[
"Geraedts",
"Scott",
""
]
] |
new_dataset
| 0.999789 |
2304.04087
|
Tanveer Ahmed Belal
|
Tanveer Ahmed Belal, G. M. Shahariar, Md. Hasanul Kabir
|
Interpretable Multi Labeled Bengali Toxic Comments Classification using
Deep Learning
| null |
2023 International Conference on Electrical, Computer and
Communication Engineering (ECCE)
|
10.1109/ECCE57851.2023.10101588
| null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents a deep learning-based pipeline for categorizing Bengali
toxic comments, in which at first a binary classification model is used to
determine whether a comment is toxic or not, and then a multi-label classifier
is employed to determine which toxicity type the comment belongs to. For this
purpose, we have prepared a manually labeled dataset consisting of 16,073
instances among which 8,488 are Toxic and any toxic comment may correspond to
one or more of the six toxic categories - vulgar, hate, religious, threat,
troll, and insult simultaneously. Long Short Term Memory (LSTM) with BERT
Embedding achieved 89.42% accuracy for the binary classification task, while as
a multi-label classifier, a combination of Convolutional Neural Network and
Bi-directional Long Short Term Memory (CNN-BiLSTM) with an attention mechanism
achieved 78.92% accuracy and a weighted F1-score of 0.86. To explain the
predictions and interpret the word feature importance during classification by
the proposed models, we utilized Local Interpretable Model-Agnostic
Explanations (LIME) framework. We have made our dataset public; it can be
accessed at
https://github.com/deepu099cse/Multi-Labeled-Bengali-Toxic-Comments-Classification
|
[
{
"version": "v1",
"created": "Sat, 8 Apr 2023 19:28:26 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Belal",
"Tanveer Ahmed",
""
],
[
"Shahariar",
"G. M.",
""
],
[
"Kabir",
"Md. Hasanul",
""
]
] |
new_dataset
| 0.99894 |
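The two-stage pipeline above, a binary toxicity gate followed by a multi-label type classifier, can be sketched with scikit-learn stand-ins; the paper's actual models are an LSTM with BERT embeddings and a CNN-BiLSTM with attention, so the classifiers below are illustrative substitutes only.

```python
# A compact sketch of a two-stage toxic-comment pipeline: stage 1 decides
# toxic vs. non-toxic, stage 2 assigns toxicity types only to toxic comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["vulgar", "hate", "religious", "threat", "troll", "insult"]

binary = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
multi = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
# Before use: binary.fit(texts, y_binary) and
# multi.fit(toxic_texts, Y_multilabel) with a 0/1 label matrix for LABELS.

def predict(comments):
    results = []
    for text, is_toxic in zip(comments, binary.predict(comments)):
        if not is_toxic:
            results.append([])
        else:
            flags = multi.predict([text])[0]   # one 0/1 flag per label
            results.append([l for l, f in zip(LABELS, flags) if f])
    return results
```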
2304.08756
|
Yang Liu
|
Yang Liu, Shen Yan, Yuge Zhang, Kan Ren, Quanlu Zhang, Zebin Ren, Deng
Cai, Mi Zhang
|
AutoTaskFormer: Searching Vision Transformers for Multi-task Learning
|
15 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformers have shown great performance in single tasks such as
classification and segmentation. However, real-world problems are not isolated,
which calls for vision transformers that can perform multiple tasks
concurrently. Existing multi-task vision transformers are handcrafted and
heavily rely on human expertise. In this work, we propose a novel one-shot
neural architecture search framework, dubbed AutoTaskFormer (Automated
Multi-Task Vision TransFormer), to automate this process. AutoTaskFormer not
only identifies the weights to share across multiple tasks automatically, but
also provides thousands of well-trained vision transformers with a wide range
of parameters (e.g., number of heads and network depth) for deployment under
various resource constraints. Experiments on both small-scale (2-task
Cityscapes and 3-task NYUv2) and large-scale (16-task Taskonomy) datasets show
that AutoTaskFormer outperforms state-of-the-art handcrafted vision
transformers in multi-task learning. The entire code and models will be
open-sourced.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 06:30:20 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2023 02:27:04 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Liu",
"Yang",
""
],
[
"Yan",
"Shen",
""
],
[
"Zhang",
"Yuge",
""
],
[
"Ren",
"Kan",
""
],
[
"Zhang",
"Quanlu",
""
],
[
"Ren",
"Zebin",
""
],
[
"Cai",
"Deng",
""
],
[
"Zhang",
"Mi",
""
]
] |
new_dataset
| 0.998874 |
2304.09859
|
Daniel G. Krakowczyk
|
Daniel G. Krakowczyk, David R. Reich, Jakob Chwastek, Deborah N.
Jakobi, Paul Prasse, Assunta Süss, Oleksii Turuta, Paweł Kasprowski,
Lena A. Jäger
|
pymovements: A Python Package for Eye Movement Data Processing
|
Preprint for ETRA '23: 2023 Symposium on Eye Tracking Research and
Applications
| null |
10.1145/3588015.3590134
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce pymovements: a Python package for analyzing eye-tracking data
that follows best practices in software development, including rigorous testing
and adherence to coding standards. The package provides functionality for key
processes along the entire preprocessing pipeline. This includes parsing of eye
tracker data files, transforming positional data into velocity data, detecting
gaze events like saccades and fixations, computing event properties like
saccade amplitude and fixational dispersion, and visualizing data and results
with several types of plotting methods. Moreover, pymovements also provides an
easily accessible interface for downloading and processing publicly available
datasets. Additionally, we emphasize how rigorous testing in scientific
software packages is critical to the reproducibility and transparency of
research, enabling other researchers to verify and build upon previous
findings.
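
The position-to-velocity transform and dispersion-based event detection
mentioned above are standard preprocessing steps; the generic NumPy sketch
below (not pymovements' actual API) shows the underlying computations.

```python
import numpy as np

def to_velocity(pos_deg, sampling_rate):
    """Velocity in deg/s from an (N, 2) array of gaze positions in degrees."""
    return np.gradient(pos_deg, axis=0) * sampling_rate

def idt_fixations(pos, max_disp=1.0, min_len=50):
    """Dispersion-threshold (I-DT) fixation detection; returns (start, end) pairs."""
    fixations, i, n = [], 0, len(pos)
    while i + min_len <= n:
        j = i + min_len
        if np.ptp(pos[i:j, 0]) + np.ptp(pos[i:j, 1]) <= max_disp:
            # Grow the window while dispersion stays under the threshold.
            while j < n and np.ptp(pos[i:j+1, 0]) + np.ptp(pos[i:j+1, 1]) <= max_disp:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations

gaze = np.vstack([np.random.normal(0, 0.05, (100, 2)),   # fixation near (0, 0)
                  np.random.normal(5, 0.05, (100, 2))])  # fixation near (5, 5)
print(idt_fixations(gaze))  # roughly [(0, 100), (100, 200)]
```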
|
[
{
"version": "v1",
"created": "Tue, 11 Apr 2023 18:39:37 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Krakowczyk",
"Daniel G.",
""
],
[
"Reich",
"David R.",
""
],
[
"Chwastek",
"Jakob",
""
],
[
"Jakobi",
"Deborah N.",
""
],
[
"Prasse",
"Paul",
""
],
[
"Süss",
"Assunta",
""
],
[
"Turuta",
"Oleksii",
""
],
[
"Kasprowski",
"Paweł",
""
],
[
"Jäger",
"Lena A.",
""
]
] |
new_dataset
| 0.960809 |
2304.09860
|
Manuel Striani
|
Manuel Striani
|
NRTS: A Client-Server architecture for supporting data recording,
transmission and evaluation of multidisciplinary teams during the neonatal
resuscitation simulation scenario
|
8 pages, 13 figures, 6 references
| null | null | null |
cs.HC cs.AI cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this technical report, we describe the Neonatal Resuscitation Training
Simulator (NRTS), an Android mobile app designed to support medical experts in
inputting, transmitting and recording data during a High-Fidelity Simulation
course for neonatal resuscitation. The app automatically sends all recorded
data from the Neonatal Intensive Care Unit (NICU) of Casale Monferrato
Children's Hospital (Italy) to a server located at the Department of Science
and Technological Innovation (DiSIT), University of Piemonte Orientale
(Italy). Finally, the medical instructor can view statistics on a simulation
exercise, which may be used during the debriefing phase to evaluate the
multidisciplinary teams involved in the simulation scenarios.
|
[
{
"version": "v1",
"created": "Wed, 12 Apr 2023 07:36:40 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Striani",
"Manuel",
""
]
] |
new_dataset
| 0.998444 |
2304.09873
|
Mojtaba Eshghie
|
Mahshid Eshghie, Mojtaba Eshghie
|
ChatGPT as a Therapist Assistant: A Suitability Study
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes using ChatGPT, an innovative technology with various
applications, as an assistant for psychotherapy. ChatGPT can serve as a patient
information collector, a companion for patients in between therapy sessions,
and an organizer of gathered information for therapists to facilitate treatment
processes. The research identifies five research questions and discovers useful
prompts for fine-tuning the assistant, which shows that ChatGPT can participate
in positive conversations, listen attentively, offer validation and potential
coping strategies without providing explicit medical advice, and help
therapists discover new insights from multiple conversations with the same
patient. Using ChatGPT as an assistant for psychotherapy poses several
challenges that need to be addressed, including technical as well as
human-centric challenges which are discussed.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 13:35:23 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Eshghie",
"Mahshid",
""
],
[
"Eshghie",
"Mojtaba",
""
]
] |
new_dataset
| 0.955025 |
2304.09919
|
Marcus Schwarting
|
Vesa Akerman and David Baines and Damien Daspit and Ulf Hermjakob and
Taeho Jang and Colin Leong and Michael Martin and Joel Mathew and Jonathan
Robie and Marcus Schwarting
|
The eBible Corpus: Data and Model Benchmarks for Bible Translation for
Low-Resource Languages
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Efficiently and accurately translating a corpus into a low-resource language
remains a challenge, regardless of the strategies employed, whether manual,
automated, or a combination of the two. Many Christian organizations are
dedicated to the task of translating the Holy Bible into languages that lack a
modern translation. Bible translation (BT) work is currently underway for over
3,000 extremely low-resource languages. We introduce the eBible corpus: a
dataset containing 1009 translations of portions of the Bible with data in 833
different languages across 75 language families. In addition to a BT
benchmarking dataset, we introduce model performance benchmarks built on the No
Language Left Behind (NLLB) neural machine translation (NMT) models. Finally,
we describe several problems specific to the domain of BT and consider how the
established data and model benchmarks might be used for future translation
efforts. For a BT task trained with NLLB, Austronesian and Trans-New Guinea
language families achieve 35.1 and 31.6 BLEU scores respectively, which spurs
future innovations for NMT for low-resource languages in Papua New Guinea.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 18:52:49 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Akerman",
"Vesa",
""
],
[
"Baines",
"David",
""
],
[
"Daspit",
"Damien",
""
],
[
"Hermjakob",
"Ulf",
""
],
[
"Jang",
"Taeho",
""
],
[
"Leong",
"Colin",
""
],
[
"Martin",
"Michael",
""
],
[
"Mathew",
"Joel",
""
],
[
"Robie",
"Jonathan",
""
],
[
"Schwarting",
"Marcus",
""
]
] |
new_dataset
| 0.999841 |
2304.09952
|
Franc Grootjen
|
Franc Grootjen and Nikolai Schauer
|
Baugh-Wooley Multiplication for the RISCV Processor
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This article describes an efficient way to implement the multiplication
instructions for a RISCV processor. Instead of using three predefined IP blocks
for signed, unsigned and mixed multiplication, this article presents a novel
extension to the Baugh-Wooley multiplication algorithm that reduces area and
power consumption by roughly a factor of three.
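
For background, the classical Baugh-Wooley identity rewrites the subtracted
sign-bit partial products of a two's-complement multiplication as
complemented, added terms plus constant correction bits, so one unsigned adder
array yields signed products. The Python sketch below verifies that identity
exhaustively for 4-bit operands; it illustrates the baseline algorithm, not
the paper's extension.

```python
def baugh_wooley(a, b, n):
    """n-bit two's-complement multiply via the Baugh-Wooley partial-product
    array; returns the exact signed product (taken modulo 2^(2n))."""
    A = [(a >> i) & 1 for i in range(n)]
    B = [(b >> i) & 1 for i in range(n)]
    acc = 0
    for i in range(n - 1):                       # positive partial products
        for j in range(n - 1):
            acc += (A[i] & B[j]) << (i + j)
    acc += (A[n-1] & B[n-1]) << (2*n - 2)        # sign-by-sign product
    for j in range(n - 1):                       # complemented cross terms
        acc += (1 - (A[n-1] & B[j])) << (n - 1 + j)
        acc += (1 - (A[j] & B[n-1])) << (n - 1 + j)
    acc += (1 << n) + (1 << (2*n - 1))           # correction constants
    acc &= (1 << (2*n)) - 1                      # keep 2n result bits
    return acc - (1 << (2*n)) if acc >> (2*n - 1) else acc

for a in range(-8, 8):                           # exhaustive check, n = 4
    for b in range(-8, 8):
        assert baugh_wooley(a, b, 4) == a * b
```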
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 20:06:08 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Grootjen",
"Franc",
""
],
[
"Schauer",
"Nikolai",
""
]
] |
new_dataset
| 0.972578 |
2304.09982
|
Maite Taboada
|
Valentin-Gabriel Soumah, Prashanth Rao, Philipp Eibl, Maite Taboada
|
Radar de Parit\'e: An NLP system to measure gender representation in
French news stories
|
Full conference paper plus appendix
|
The 36th Canadian Conference on Artificial Intelligence. 5-9 June
2023, Montr\'eal
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present the Radar de Parit\'e, an automated Natural Language Processing
(NLP) system that measures the proportion of women and men quoted daily in six
Canadian French-language media outlets. We outline the system's architecture
and detail the challenges we overcame to address French-specific issues, in
particular regarding coreference resolution, a new contribution to the NLP
literature on French. We also showcase statistics covering over one year's
worth of data (282,512 news articles). Our results highlight the
underrepresentation of women in news stories, while also illustrating the
application of modern NLP methods to measure gender representation and address
societal issues.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 21:33:59 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Soumah",
"Valentin-Gabriel",
""
],
[
"Rao",
"Prashanth",
""
],
[
"Eibl",
"Philipp",
""
],
[
"Taboada",
"Maite",
""
]
] |
new_dataset
| 0.990866 |
2304.10068
|
Joel Dabrowski Dr
|
Joel Janek Dabrowski and Ashfaqur Rahman
|
Fruit Picker Activity Recognition with Wearable Sensors and Machine
Learning
|
Accepted at IEEE International Joint Conference on Neural Networks
(IJCNN) conference, 2023
| null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper we present a novel application of detecting fruit picker
activities based on time series data generated from wearable sensors. During
harvesting, fruit pickers pick fruit into wearable bags and empty these bags
into harvesting bins located in the orchard. Once full, these bins are quickly
transported to a cooled pack house to improve the shelf life of picked fruits.
For farmers and managers, the knowledge of when a picker bag is emptied is
important for managing harvesting bins more effectively to minimise the time
the picked fruit is left out in the heat (resulting in reduced shelf life). We
propose a means to detect these bag-emptying events using human activity
recognition with wearable sensors and machine learning methods. We develop a
semi-supervised approach to labelling the data. A feature-based machine
learning ensemble model and a deep recurrent convolutional neural network are
developed and tested on a real-world dataset. When compared, the neural network
achieves 86% detection accuracy.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 03:38:08 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Dabrowski",
"Joel Janek",
""
],
[
"Rahman",
"Ashfaqur",
""
]
] |
new_dataset
| 0.99869 |
2304.10113
|
Goirik Chakrabarty
|
Goirik Chakrabarty, Manogna Sreenivas and Soma Biswas
|
SATA: Source Anchoring and Target Alignment Network for Continual Test
Time Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adapting a trained model to perform satisfactorily on continually changing
testing domains/environments is an important and challenging task. In this
work, we propose a novel framework, SATA, which aims to satisfy the following
characteristics required for online adaptation: 1) can work seamlessly with
different (preferably small) batch sizes to reduce latency; 2) should continue
to work well for the source domain; 3) should have minimal tunable
hyper-parameters and storage requirements. Given a pre-trained network trained
on source domain data, the proposed SATA framework modifies the batch-norm
affine parameters using source anchoring based self-distillation. This ensures
that the model incorporates the knowledge of the newly encountered domains,
without catastrophically forgetting about the previously seen ones. We also
propose a source-prototype driven contrastive alignment to ensure natural
grouping of the target samples, while maintaining the already learnt semantic
information. Extensive evaluation on three benchmark datasets under challenging
settings justifies the effectiveness of SATA for real-world applications.
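
A minimal sketch of the batch-norm-affine adaptation idea is given below,
assuming the frozen source model itself serves as the distillation anchor; the
source-prototype contrastive term and other details of SATA are omitted.

```python
import copy
import torch
import torch.nn.functional as F
from torch import nn

def adapt_bn_affine(model, target_batches, lr=1e-3):
    """Sketch: update only BatchNorm affine parameters on unlabeled target
    data, distilling toward the predictions of a frozen source anchor."""
    anchor = copy.deepcopy(model).eval()
    for p in list(anchor.parameters()) + list(model.parameters()):
        p.requires_grad_(False)
    bn_params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.requires_grad_(True)
            m.bias.requires_grad_(True)
            bn_params += [m.weight, m.bias]
    opt = torch.optim.SGD(bn_params, lr=lr)
    for x in target_batches:            # unlabeled target-domain images
        with torch.no_grad():
            teacher = anchor(x).softmax(-1)
        loss = F.kl_div(model(x).log_softmax(-1), teacher, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```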
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 06:38:33 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Chakrabarty",
"Goirik",
""
],
[
"Sreenivas",
"Manogna",
""
],
[
"Biswas",
"Soma",
""
]
] |
new_dataset
| 0.992213 |
2304.10154
|
Andrea Fusiello
|
Abdul Salam Rasmi Asraf Ali and Andrea Fusiello and Claudio Landi and
Cristina Sarti and Anneke Annassia Putri Siswadi
|
Motion Artifacts Detection in Short-scan Dental CBCT Reconstructions
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cone Beam Computed Tomography (CBCT) is widely used in dentistry for
diagnostics and treatment planning. CBCT imaging has a long acquisition time,
and consequently the patient is likely to move. This motion causes significant
artifacts in the reconstructed data which may lead to misdiagnosis. Existing
motion correction algorithms only address this issue partially, struggling with
inconsistencies due to truncation, accuracy, and execution speed. On the other
hand, a short-scan reconstruction using a subset of motion-free projections
with appropriate weighting methods can have a sufficient clinical image quality
for most diagnostic purposes. Therefore, a framework is used in this study to
extract the motion-free part of the scanned projections with which a clean
short-scan volume can be reconstructed without using correction algorithms.
Motion artifacts are detected using deep learning with a slice-based prediction
scheme followed by volume averaging to get the final result. A realistic motion
simulation strategy and data augmentation have been implemented to address
data scarcity. The framework has been validated by testing it on real
motion-affected data while the model was trained only on simulated motion
data. This demonstrates the feasibility of applying the proposed framework to
a broad variety of motion cases in further research.
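
The slice-based prediction with volume averaging reduces to a few lines; in
the sketch below the sigmoid-of-variance lambda is a dummy stand-in for the
trained slice classifier, which is purely an assumption for illustration.

```python
import numpy as np

def volume_motion_score(volume, slice_model):
    """Average per-slice motion probabilities over the whole reconstruction."""
    return float(np.mean([slice_model(volume[z]) for z in range(volume.shape[0])]))

slice_model = lambda s: 1.0 / (1.0 + np.exp(-(s.std() - 1.0)))  # dummy classifier
volume = np.random.rand(64, 128, 128)                           # z, y, x
flagged = volume_motion_score(volume, slice_model) > 0.5        # motion-affected?
print(flagged)
```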
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 08:28:44 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Ali",
"Abdul Salam Rasmi Asraf",
""
],
[
"Fusiello",
"Andrea",
""
],
[
"Landi",
"Claudio",
""
],
[
"Sarti",
"Cristina",
""
],
[
"Siswadi",
"Anneke Annassia Putri",
""
]
] |
new_dataset
| 0.996005 |
2304.10201
|
Savvas Papaioannou
|
Savvas Papaioannou, Panayiotis Kolios, Theocharis Theocharides,
Christos G. Panayiotou and Marios M. Polycarpou
|
UAV-based Receding Horizon Control for 3D Inspection Planning
|
2022 International Conference on Unmanned Aircraft Systems (ICUAS),
21-24 June 2022, Dubrovnik, Croatia
| null |
10.1109/ICUAS54217.2022.9836051
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, unmanned aerial vehicles or UAVs are being used for a wide range of
tasks, including infrastructure inspection, automated monitoring and coverage.
This paper investigates the problem of 3D inspection planning with an
autonomous UAV agent which is subject to dynamical and sensing constraints. We
propose a receding horizon 3D inspection planning control approach for
generating optimal trajectories which enable an autonomous UAV agent to inspect
a finite number of feature-points scattered on the surface of a cuboid-like
structure of interest. The inspection planning problem is formulated as a
constrained open-loop optimal control problem and is solved using mixed integer
programming (MIP) optimization. Quantitative and qualitative evaluation
demonstrates the effectiveness of the proposed approach.
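
The receding-horizon principle used above replans an open-loop trajectory at
every step but executes only its first input; the toy one-dimensional Python
loop below illustrates this, with a trivial stand-in for the paper's MIP
solver.

```python
def solve_ocp(x, target, horizon):
    """Toy 'optimal control' stand-in: plan unit-capped moves toward a target."""
    plan, xt = [], x
    for _ in range(horizon):
        u = max(-1.0, min(1.0, target - xt))
        plan.append(u)
        xt += u
    return plan

x, target = 0.0, 7.5
for _ in range(12):                    # receding-horizon loop
    plan = solve_ocp(x, target, horizon=3)
    x += plan[0]                       # execute only the first control input
print(round(x, 2))                     # converges to 7.5
```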
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 10:42:18 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Papaioannou",
"Savvas",
""
],
[
"Kolios",
"Panayiotis",
""
],
[
"Theocharides",
"Theocharis",
""
],
[
"Panayiotou",
"Christos G.",
""
],
[
"Polycarpou",
"Marios M.",
""
]
] |
new_dataset
| 0.998315 |
2304.10211
|
Sami Barchid
|
Sami Barchid, Benjamin Allaert, Amel Aissaoui, Jos\'e Mennesson,
Chaabane Dj\'eraba
|
Spiking-Fer: Spiking Neural Network for Facial Expression Recognition
With Event Cameras
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Facial Expression Recognition (FER) is an active research domain that has
shown great progress recently, notably thanks to the use of large deep learning
models. However, such approaches are particularly energy intensive, which makes
their deployment difficult for edge devices. To address this issue, Spiking
Neural Networks (SNNs) coupled with event cameras are a promising alternative,
capable of processing sparse and asynchronous events with lower energy
consumption. In this paper, we establish the first use of event cameras for
FER, named "Event-based FER", and propose the first related benchmarks by
converting popular video FER datasets to event streams. To deal with this new
task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it
against a similar Artificial Neural Network (ANN). Experiments show that the
proposed approach achieves comparable performance to the ANN architecture,
while consuming less energy by orders of magnitude (up to 65.39x). In addition,
an experimental study of various event-based data augmentation techniques is
performed to provide insights into the efficient transformations specific to
event-based FER.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 10:59:56 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Barchid",
"Sami",
""
],
[
"Allaert",
"Benjamin",
""
],
[
"Aissaoui",
"Amel",
""
],
[
"Mennesson",
"José",
""
],
[
"Djéraba",
"Chaabane",
""
]
] |
new_dataset
| 0.989975 |
2304.10256
|
Kaushal Goyal
|
Dr. Velmathi G, Kaushal Goyal
|
Indian Sign Language Recognition Using Mediapipe Holistic
|
16 pages, 22 figures
| null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deaf individuals confront significant communication obstacles on a daily
basis. Their inability to hear makes it difficult for them to communicate with
those who do not understand sign language. Moreover, it presents difficulties
in educational, occupational, and social contexts. By providing alternative
communication channels, technology can play a crucial role in overcoming these
obstacles. One such technology that can facilitate communication between deaf
and hearing individuals is sign language recognition. We will create a robust
system for sign language recognition in order to convert Indian Sign Language
to text or speech. We will evaluate the proposed system and compare CNN and
LSTM models. Since there are both static and gesture sign languages, a robust
model is required to distinguish between them. In this study, we found that a
CNN model captures letters and characters better than an LSTM model for
static sign language recognition, while the LSTM outperforms the CNN on
gestural sign language phrases and sentences by tracking the hands, face, and
pose. The creation of a text-to-sign language paradigm is essential, since it
will enhance the communication skills of the sign-language-dependent deaf and
hard-of-hearing population. Sign-to-text translation is only one side of
communication, and not all deaf or hard-of-hearing people are proficient in
reading or writing text.
Some may have difficulty comprehending written language due to educational or
literacy issues. Therefore, a text-to-sign language paradigm would allow them
to comprehend text-based information and participate in a variety of social,
educational, and professional settings.
Keywords: deaf and hard-of-hearing, DHH, Indian sign language, CNN, LSTM,
static and gesture sign languages, text-to-sign language model, MediaPipe
Holistic, sign language recognition, SLR, SLT
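
As an illustration of the per-frame features MediaPipe Holistic exposes for
such a pipeline, a minimal sketch follows; the flattened, zero-padded feature
layout is our assumption rather than the paper's exact preprocessing.

```python
import cv2
import mediapipe as mp
import numpy as np

holistic = mp.solutions.holistic.Holistic(static_image_mode=False)

def frame_keypoints(bgr_frame):
    """Flatten pose, face and hand landmarks into a single feature vector
    (zero-filled when a part is not detected), e.g. as per-frame LSTM input."""
    res = holistic.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    def flat(lms, n):
        if lms is None:
            return np.zeros(n * 3)
        return np.array([[p.x, p.y, p.z] for p in lms.landmark]).flatten()
    return np.concatenate([flat(res.pose_landmarks, 33),
                           flat(res.face_landmarks, 468),
                           flat(res.left_hand_landmarks, 21),
                           flat(res.right_hand_landmarks, 21)])
```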
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 12:25:47 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"G",
"Dr. Velmathi",
""
],
[
"Goyal",
"Kaushal",
""
]
] |
new_dataset
| 0.999048 |
2304.10282
|
Omar Hashash
|
Omar Hashash, Christina Chaccour, Walid Saad, Tao Yu, Kei Sakaguchi,
Merouane Debbah
|
The Seven Worlds and Experiences of the Wireless Metaverse: Challenges
and Opportunities
| null | null | null | null |
cs.IT cs.AI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The wireless metaverse will create diverse user experiences at the
intersection of the physical, digital, and virtual worlds. These experiences
will enable novel interactions between the constituents (e.g., extended reality
(XR) users and avatars) of the three worlds. However, remarkably, to date,
there is no holistic vision that identifies the full set of metaverse worlds,
constituents, and experiences, and the implications of their associated
interactions on next-generation communication and computing systems. In this
paper, we present a holistic vision of a limitless, wireless metaverse that
distills the metaverse into an intersection of seven worlds and experiences
that include the: i) physical, digital, and virtual worlds, along with the ii)
cyber, extended, live, and parallel experiences. We then articulate how these
experiences bring forth interactions between diverse metaverse constituents,
namely, a) humans and avatars and b) connected intelligence systems and their
digital twins (DTs). Then, we explore the wireless, computing, and artificial
intelligence (AI) challenges that must be addressed to establish
metaverse-ready networks that support these experiences and interactions. We
particularly highlight the need for end-to-end synchronization of DTs, and the
role of human-level AI and reasoning abilities for cognitive avatars. Moreover,
we articulate a series of open questions that should ignite the quest for the
future metaverse. We conclude with a set of recommendations to deploy the
limitless metaverse over future wireless systems.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 13:04:52 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Hashash",
"Omar",
""
],
[
"Chaccour",
"Christina",
""
],
[
"Saad",
"Walid",
""
],
[
"Yu",
"Tao",
""
],
[
"Sakaguchi",
"Kei",
""
],
[
"Debbah",
"Merouane",
""
]
] |
new_dataset
| 0.980967 |
2304.10313
|
Georgios Spathoulas
|
Lydia Negka, Angeliki Katsika, Georgios Spathoulas, Vassilis
Plagianakos
|
ORIGAMI: A flexible state channels design for public blockchain systems
|
33 pages, 12 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public blockchain systems offer security guarantees that cannot be matched by
any centralised system. This offering has attracted a lot of interest and has
exposed a significant limitation of most blockchain designs with regards to
scalability. One of the proposed scaling solutions is state channels, which
enable serving suitable applications with a minimal number of transactions.
Existing state channel designs set multiple compatibility requirements for
applications to be deployed. Origami is a novel state channel design that
removes most of the requirements of existing approaches while also offering a
number of new features. Origami enables dynamic groups of users to interact in
an unordered way completely off-chain after an initial on-boarding on-chain
transaction. The proposed design is analysed in detail and compared to existing
schemes, while a formal security analysis validates the security properties it
offers.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 13:44:15 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Negka",
"Lydia",
""
],
[
"Katsika",
"Angeliki",
""
],
[
"Spathoulas",
"Georgios",
""
],
[
"Plagianakos",
"Vassilis",
""
]
] |
new_dataset
| 0.999227 |
2304.10348
|
Bata Vasic Dr
|
Bata Vasic, Nithin Raveendran and Bane Vasic
|
Neuro-OSVETA: A Robust Watermarking of 3D Meshes
|
10 pages, 5 figures
|
Proceedings of the International Telemetering Conference (ITC
2019), ISSN 1546-2188, vol. 55, pp. 387 - 396, Las Vegas, NV, USA, October 21
- 24, 2019
| null | null |
cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The best practical watermarking schemes for copyright protection of 3D meshes
are required to be blind and robust to attacks and errors. In this paper, we
present the latest developments in 3D blind watermarking with a special
emphasis on our Ordered Statistics Vertex Extraction and Tracing Algorithm
(OSVETA) and its improvements. OSVETA is based on a combination of
quantization index modulation (QIM) and error correction coding, using novel
ways of judiciously selecting mesh vertices that are stable under mesh
simplification. The technique we propose in this paper offers a systematic
method for vertex selection based on neural networks, replacing the heuristic
approach of OSVETA. The Neuro-OSVETA enables a more precise mesh geometry
estimation and better curvature and topological feature estimation. These
enhancements result in a more accurate identification of stable vertices
resulting in significant reduction of deletion probability.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 14:39:24 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Vasc",
"Bata",
""
],
[
"Raveendran",
"Nithin",
""
],
[
"Vasic",
"Bane",
""
]
] |
new_dataset
| 0.99836 |
2304.10381
|
Diego Figueira
|
Diego Figueira, Santiago Figueira, Edwin Pin
|
PDL on Steroids: on Expressive Extensions of PDL with Intersection and
Converse
| null | null | null | null |
cs.LO cs.AI cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce CPDL+, a family of expressive logics rooted in Propositional
Dynamic Logic (PDL). In terms of expressive power, CPDL+ strictly contains PDL
extended with intersection and converse (a.k.a. ICPDL) as well as Conjunctive
Queries (CQ), Conjunctive Regular Path Queries (CRPQ), or some known extensions
thereof (Regular Queries and CQPDL). We investigate the expressive power,
characterization of bisimulation, satisfiability, and model checking for CPDL+.
We argue that natural subclasses of CPDL+ can be defined in terms of the
tree-width of the underlying graphs of the formulas. We show that the class of
CPDL+ formulas of tree-width 2 is equivalent to ICPDL, and that it also
coincides with CPDL+ formulas of tree-width 1. However, beyond tree-width 2,
incrementing the tree-width strictly increases the expressive power. We
characterize the expressive power for every class of fixed tree-width formulas
in terms of a bisimulation game with pebbles. Based on this characterization,
we show that CPDL+ has a tree-like model property. We prove that the
satisfiability problem is decidable in 2ExpTime on fixed tree-width formulas,
coinciding with the complexity of ICPDL. We also exhibit classes for which
satisfiability is reduced to ExpTime. Finally, we establish that the model
checking problem for fixed tree-width formulas is in PTime, contrary to the
full class CPDL+.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 15:21:01 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Figueira",
"Diego",
""
],
[
"Figueira",
"Santiago",
""
],
[
"Pin",
"Edwin",
""
]
] |
new_dataset
| 0.99124 |
2304.10391
|
Avital Boruchovsky
|
Avital Boruchovsky, Daniella Bar-Lev and Eitan Yaakobi
|
DNA-Correcting Codes: End-to-end Correction in DNA Storage Systems
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces a new solution to DNA storage that integrates all three
steps of retrieval, namely clustering, reconstruction, and error correction.
DNA-correcting codes are presented as a unique solution to the problem of
ensuring that the output of the storage system is unique for any valid set of
input strands. To this end, we introduce a novel distance metric to capture the
unique behavior of the DNA storage system and provide necessary and sufficient
conditions for DNA-correcting codes. The paper also includes several upper
bounds and constructions of DNA-correcting codes.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 15:27:14 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Boruchovsky",
"Avital",
""
],
[
"Bar-Lev",
"Daniella",
""
],
[
"Yaakobi",
"Eitan",
""
]
] |
new_dataset
| 0.999111 |
2304.10392
|
Quyet V. Do
|
Tianqing Fang, Quyet V. Do, Sehyun Choi, Weiqi Wang, Yangqiu Song
|
CKBP v2: An Expert-Annotated Evaluation Set for Commonsense Knowledge
Base Population
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Populating Commonsense Knowledge Bases (CSKB) is an important yet hard task
in NLP, as it tackles knowledge from external sources with unseen events and
entities. Fang et al. (2021a) proposed a CSKB Population benchmark with an
evaluation set CKBP v1. However, CKBP v1 adopts crowdsourced annotations that
suffer from a substantial fraction of incorrect answers, and the evaluation set
is not well-aligned with the external knowledge source as a result of random
sampling. In this paper, we introduce CKBP v2, a new high-quality CSKB
Population benchmark, which addresses the two mentioned problems by using
experts instead of crowd-sourced annotation and by adding diversified
adversarial samples to make the evaluation set more representative. We conduct
extensive experiments comparing state-of-the-art methods for CSKB Population on
the new evaluation set for future research comparisons. Empirical results show
that the population task is still challenging, even for large language models
(LLM) such as ChatGPT. Codes and data are available at
https://github.com/HKUST-KnowComp/CSKB-Population.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 15:27:29 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Fang",
"Tianqing",
""
],
[
"Do",
"Quyet V.",
""
],
[
"Choi",
"Sehyun",
""
],
[
"Wang",
"Weiqi",
""
],
[
"Song",
"Yangqiu",
""
]
] |
new_dataset
| 0.972316 |
2304.10415
|
Zhengyu Liang
|
Yingqian Wang, Longguang Wang, Zhengyu Liang, Jungang Yang, Radu
Timofte, Yulan Guo
|
NTIRE 2023 Challenge on Light Field Image Super-Resolution: Dataset,
Methods and Results
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this report, we summarize the first NTIRE challenge on light field (LF)
image super-resolution (SR), which aims at super-resolving LF images under the
standard bicubic degradation with a magnification factor of 4. This challenge
develops a new LF dataset called NTIRE-2023 for validation and test, and
provides a toolbox called BasicLFSR to facilitate model development. Compared
with single image SR, the major challenge of LF image SR lies in how to exploit
complementary angular information from plenty of views with varying
disparities. In total, 148 participants have registered for the challenge, and 11
teams have successfully submitted results with PSNR scores higher than the
baseline method LF-InterNet \cite{LF-InterNet}. These newly developed methods
have set new state-of-the-art in LF image SR, e.g., the winning method achieves
around 1 dB PSNR improvement over the existing state-of-the-art method DistgSSR
\cite{DistgLF}. We report the solutions proposed by the participants, and
summarize their common trends and useful tricks. We hope this challenge can
stimulate future research and inspire new ideas in LF image SR.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 15:59:31 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Wang",
"Yingqian",
""
],
[
"Wang",
"Longguang",
""
],
[
"Liang",
"Zhengyu",
""
],
[
"Yang",
"Jungang",
""
],
[
"Timofte",
"Radu",
""
],
[
"Guo",
"Yulan",
""
]
] |
new_dataset
| 0.99974 |
2304.10448
|
Riccardo Spezialetti
|
Marco Toschi, Riccardo De Matteo, Riccardo Spezialetti, Daniele De
Gregorio, Luigi Di Stefano, Samuele Salti
|
ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of
Real World Objects
|
Accepted at CVPR 2023 as a highlight
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we focus on the problem of rendering novel views from a Neural
Radiance Field (NeRF) under unobserved light conditions. To this end, we
introduce a novel dataset, dubbed ReNe (Relighting NeRF), framing real world
objects under one-light-at-time (OLAT) conditions, annotated with accurate
ground-truth camera and light poses. Our acquisition pipeline leverages two
robotic arms holding, respectively, a camera and an omni-directional point-wise
light source. We release a total of 20 scenes depicting a variety of objects
with complex geometry and challenging materials. Each scene includes 2000
images, acquired from 50 different points of view under 40 different OLAT
conditions. By leveraging the dataset, we perform an ablation study on the
relighting capability of variants of the vanilla NeRF architecture and identify
a lightweight architecture that can render novel views of an object under novel
light conditions, which we use to establish a non-trivial baseline for the
dataset. Dataset and benchmark are available at
https://eyecan-ai.github.io/rene.
|
[
{
"version": "v1",
"created": "Thu, 20 Apr 2023 16:43:58 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Toschi",
"Marco",
""
],
[
"De Matteo",
"Riccardo",
""
],
[
"Spezialetti",
"Riccardo",
""
],
[
"De Gregorio",
"Daniele",
""
],
[
"Di Stefano",
"Luigi",
""
],
[
"Salti",
"Samuele",
""
]
] |
new_dataset
| 0.999736 |
2304.10495
|
Luis Camacho Prof.
|
Luis Camacho
|
A primer on getting neologisms from foreign languages to under-resourced
languages
|
13 pages, 3 tables
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Mainly due to lack of support, most under-resourced languages have a reduced
lexicon in most realms and domains of increasing importance, so their
speakers need to augment it significantly. Although neologisms should arise
from the languages themselves, external sources are widely accepted. However,
we dispute the "common sense" of using the imposed official languages, which
are most probably a legacy of colonialism, as the only source, and we propose
to introduce neologisms from any language as long as these neologisms "sound
like" native words of the target languages.
|
[
{
"version": "v1",
"created": "Tue, 7 Mar 2023 15:10:37 GMT"
}
] | 2023-04-21T00:00:00 |
[
[
"Camacho",
"Luis",
""
]
] |
new_dataset
| 0.966575 |
1604.01673
|
Antti Kuusisto
|
Antti Kuusisto
|
On the uniform one-dimensional fragment
| null | null | null | null |
cs.LO cs.AI math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The uniform one-dimensional fragment of first-order logic, U1, is a formalism
that extends two-variable logic in a natural way to contexts with relations of
all arities. We survey properties of U1 and investigate its relationship to
description logics designed to accommodate higher arity relations, with
particular attention given to DLR_reg. We also define a description logic
version of a variant of U1 and prove a range of new results concerning the
expressivity of U1 and related logics.
|
[
{
"version": "v1",
"created": "Wed, 6 Apr 2016 16:03:42 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2016 15:09:02 GMT"
},
{
"version": "v3",
"created": "Wed, 19 Apr 2023 17:46:40 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Kuusisto",
"Antti",
""
]
] |
new_dataset
| 0.999211 |
2202.03936
|
Madhurananda Pahar
|
Madhurananda Pahar, Igor Miranda, Andreas Diacon and Thomas Niesler
|
Accelerometer-based Bed Occupancy Detection for Automatic, Non-invasive
Long-term Cough Monitoring
| null |
IEEE Access, vol. 11, pp. 30739-30752, 2023
|
10.1109/ACCESS.2023.3261557
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new machine learning based bed-occupancy detection system that
uses the accelerometer signal captured by a bed-attached consumer smartphone.
Automatic bed-occupancy detection is necessary for automatic long-term cough
monitoring, since the time during which the monitored patient occupies the bed is
required to accurately calculate a cough rate. Accelerometer measurements are
more cost effective and less intrusive than alternatives such as video
monitoring or pressure sensors. A 249-hour dataset of manually-labelled
acceleration signals gathered from seven patients undergoing treatment for
tuberculosis (TB) was compiled for experimentation. These signals are
characterised by brief activity bursts interspersed with long periods of little
or no activity, even when the bed is occupied. To process them effectively, we
propose an architecture consisting of three interconnected components. An
occupancy-change detector locates instances at which bed occupancy is likely to
have changed, an occupancy-interval detector classifies periods between
detected occupancy changes and an occupancy-state detector corrects
falsely-identified occupancy changes. Using long short-term memory (LSTM)
networks, this architecture was demonstrated to achieve an AUC of 0.94. When
integrated into a complete cough monitoring system, the daily cough rate of a
patient undergoing TB treatment was determined over a period of 14 days. As the
colony forming unit (CFU) counts decreased and the time to positivity (TPP)
increased, the measured cough rate decreased, indicating effective TB
treatment. This provides a first indication that automatic cough monitoring
based on bed-mounted accelerometer measurements may present a non-invasive,
non-intrusive and cost-effective means of monitoring long-term recovery of TB
patients.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 15:38:34 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Mar 2022 17:56:19 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Pahar",
"Madhurananda",
""
],
[
"Miranda",
"Igor",
""
],
[
"Diacon",
"Andreas",
""
],
[
"Niesler",
"Thomas",
""
]
] |
new_dataset
| 0.999621 |
2202.13876
|
Zhengyun Zhao
|
Zhengyun Zhao, Qiao Jin, Fangyuan Chen, Tuorui Peng, Sheng Yu
|
PMC-Patients: A Large-scale Dataset of Patient Summaries and Relations
for Benchmarking Retrieval-based Clinical Decision Support Systems
|
35 pages, 3 figures, 5 tables. Dataset and code are available at
https://github.com/pmc-patients/pmc-patients
| null | null | null |
cs.CL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Objective: Retrieval-based Clinical Decision Support (ReCDS) can aid clinical
workflow by providing relevant literature and similar patients for a given
patient. However, the development of ReCDS systems has been severely obstructed
by the lack of diverse patient collections and publicly available large-scale
patient-level annotation datasets. In this paper, we aim to define and
benchmark two ReCDS tasks: Patient-to-Article Retrieval (ReCDS-PAR) and
Patient-to-Patient Retrieval (ReCDS-PPR) using a novel dataset called
PMC-Patients. Methods: We extract patient summaries from PubMed Central
articles using simple heuristics and utilize the PubMed citation graph to
define patient-article relevance and patient-patient similarity. We also
implement and evaluate several ReCDS systems on the PMC-Patients benchmarks,
including sparse retrievers, dense retrievers, and nearest neighbor retrievers.
We conduct several case studies to show the clinical utility of PMC-Patients.
Results: PMC-Patients contains 167k patient summaries with 3.1M patient-article
relevance annotations and 293k patient-patient similarity annotations, which is
the largest-scale resource for ReCDS and also one of the largest patient
collections. Human evaluation and analysis show that PMC-Patients is a diverse
dataset with high-quality annotations. The evaluation of various ReCDS systems
shows that the PMC-Patients benchmark is challenging and calls for further
research. Conclusion: We present PMC-Patients, a large-scale, diverse, and
publicly available patient summary dataset with the largest-scale patient-level
relation annotations. Based on PMC-Patients, we formally define two benchmark
tasks for ReCDS systems and evaluate various existing retrieval methods.
PMC-Patients can largely facilitate methodology research on ReCDS systems and
shows real-world clinical utility.
|
[
{
"version": "v1",
"created": "Mon, 28 Feb 2022 15:24:33 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jan 2023 04:14:52 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Apr 2023 07:32:25 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Apr 2023 03:24:56 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Zhao",
"Zhengyun",
""
],
[
"Jin",
"Qiao",
""
],
[
"Chen",
"Fangyuan",
""
],
[
"Peng",
"Tuorui",
""
],
[
"Yu",
"Sheng",
""
]
] |
new_dataset
| 0.999778 |
2205.13213
|
Bohan Zhuang
|
Zizheng Pan, Jianfei Cai, Bohan Zhuang
|
Fast Vision Transformers with HiLo Attention
|
NeurIPS 2022 camera ready
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Vision Transformers (ViTs) have triggered the most recent and significant
breakthroughs in computer vision. Their efficient designs are mostly guided by
the indirect metric of computational complexity, i.e., FLOPs, which, however,
has a clear gap with direct metrics such as throughput. Thus, we propose to use
the direct speed evaluation on the target platform as the design principle for
efficient ViTs. Particularly, we introduce LITv2, a simple and effective ViT
which performs favourably against the existing state-of-the-art methods across
a spectrum of different model sizes with faster speed. At the core of LITv2 is
a novel self-attention mechanism, which we dub HiLo. HiLo is inspired by the
insight that high frequencies in an image capture local fine details and low
frequencies focus on global structures, whereas a multi-head self-attention
layer neglects the characteristic of different frequencies. Therefore, we
propose to disentangle the high/low frequency patterns in an attention layer by
separating the heads into two groups, where one group encodes high frequencies
via self-attention within each local window, and another group encodes low
frequencies by performing global attention between the average-pooled
low-frequency keys and values from each window and each query position in the
input feature map. Benefiting from the efficient design for both groups, we
show that HiLo is superior to the existing attention mechanisms by
comprehensively benchmarking FLOPs, speed and memory consumption on GPUs and
CPUs. For example, HiLo is 1.4x faster than spatial reduction attention and
1.6x faster than local window attention on CPUs. Powered by HiLo, LITv2 serves
as a strong backbone for mainstream vision tasks including image
classification, dense detection and segmentation. Code is available at
https://github.com/ziplab/LITv2.
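
As a concrete illustration of the head split described above, here is a
compact PyTorch sketch; it assumes H and W are divisible by the window size,
treats alpha as the fraction of heads assigned to the low-frequency branch,
and omits positional bias, dropout, and other details of the full LITv2
implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attend(q, k, v):
    # q: (B*, h, Nq, d); k, v: (B*, h, Nk, d)
    w = (q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5
    return w.softmax(dim=-1) @ v

class HiLoSketch(nn.Module):
    def __init__(self, dim, heads=8, s=2, alpha=0.5):
        super().__init__()
        self.lh = int(heads * alpha)        # low-frequency (global) heads
        self.hh = heads - self.lh           # high-frequency (local) heads
        self.d = dim // heads
        self.s = s
        self.hi_qkv = nn.Linear(dim, 3 * self.hh * self.d)
        self.lo_q = nn.Linear(dim, self.lh * self.d)
        self.lo_kv = nn.Linear(dim, 2 * self.lh * self.d)
        self.proj = nn.Linear(heads * self.d, dim)

    def forward(self, x, H, W):
        B, N, C = x.shape
        s, d, hh, lh = self.s, self.d, self.hh, self.lh
        # High-frequency branch: self-attention inside each s x s window.
        qkv = self.hi_qkv(x).view(B, H // s, s, W // s, s, 3, hh, d)
        qkv = qkv.permute(5, 0, 1, 3, 6, 2, 4, 7)           # 3,B,Hw,Ww,h,s,s,d
        q, k, v = qkv.reshape(3, -1, hh, s * s, d)
        hi = attend(q, k, v)                                 # (B*Hw*Ww, hh, s*s, d)
        hi = hi.view(B, H // s, W // s, hh, s, s, d)
        hi = hi.permute(0, 1, 4, 2, 5, 3, 6).reshape(B, N, hh * d)
        # Low-frequency branch: full-resolution queries, window-pooled keys/values.
        q = self.lo_q(x).view(B, N, lh, d).transpose(1, 2)   # (B, lh, N, d)
        pooled = F.avg_pool2d(x.transpose(1, 2).reshape(B, C, H, W), s)
        pooled = pooled.flatten(2).transpose(1, 2)           # (B, N/s^2, C)
        k, v = self.lo_kv(pooled).view(B, -1, 2, lh, d).permute(2, 0, 3, 1, 4)
        lo = attend(q, k, v).transpose(1, 2).reshape(B, N, lh * d)
        return self.proj(torch.cat([hi, lo], dim=-1))

x = torch.randn(2, 14 * 14, 64)
print(HiLoSketch(64).forward(x, 14, 14).shape)  # torch.Size([2, 196, 64])
```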
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 08:16:14 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Sep 2022 06:24:35 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Oct 2022 03:37:47 GMT"
},
{
"version": "v4",
"created": "Thu, 19 Jan 2023 01:12:54 GMT"
},
{
"version": "v5",
"created": "Wed, 19 Apr 2023 12:04:13 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Pan",
"Zizheng",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Zhuang",
"Bohan",
""
]
] |
new_dataset
| 0.99915 |
2206.00052
|
Akshita Jha
|
Akshita Jha, and Chandan K. Reddy
|
CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming
Language Models
|
AAAI Conference on Artificial Intelligence (AAAI) 2023
| null | null | null |
cs.CL cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Pre-trained programming language (PL) models (such as CodeT5, CodeBERT,
GraphCodeBERT, etc.) have the potential to automate software engineering tasks
involving code understanding and code generation. However, these models operate
in the natural channel of code, i.e., they are primarily concerned with the
human understanding of the code. They are not robust to changes in the input
and thus, are potentially susceptible to adversarial attacks in the natural
channel. We propose, CodeAttack, a simple yet effective black-box attack model
that uses code structure to generate effective, efficient, and imperceptible
adversarial code samples and demonstrates the vulnerabilities of the
state-of-the-art PL models to code-specific adversarial attacks. We evaluate
the transferability of CodeAttack on several code-code (translation and repair)
and code-NL (summarization) tasks across different programming languages.
CodeAttack outperforms state-of-the-art adversarial NLP attack models to
achieve the best overall drop in performance while being more efficient,
imperceptible, consistent, and fluent. The code can be found at
https://github.com/reddy-lab-code-research/CodeAttack.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2022 18:40:01 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2022 05:07:45 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Apr 2023 22:12:55 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Jha",
"Akshita",
""
],
[
"Reddy",
"Chandan K.",
""
]
] |
new_dataset
| 0.999814 |
2208.05972
|
Eshwar Jagadeesh Savitha
|
Eshwar J. Savitha and Roger A. Sauer
|
A new anisotropic bending model for nonlinear shells: Comparison with
existing models and isogeometric finite element implementation
| null | null |
10.1016/j.ijsolstr.2023.112169
| null |
cs.CE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A new nonlinear hyperelastic bending model for shells formulated directly in
surface form is presented, and compared to four prominently used bending
models. Through an essential set of elementary nonlinear bending test cases,
the stresses and moments of each model are examined analytically. Only the
proposed bending model passes all the test cases while the other bending models
either fail or only pass the test cases for small deformations. The proposed
new bending model can handle large deformations and initially curved surfaces.
It is based on the principal curvatures and their directions in the initial
configuration, and it thus can have different bending moduli along those
directions. These characteristics make it flexible in modeling a given
material, while it does not suffer from the pathologies of existing bending
models. Further, the bending models are compared computationally through four
classical benchmark examples and one contact example. As the underlying shell
theory is based on Kirchhoff-Love kinematics, isogeometric NURBS shape
functions are used to discretize the shell surface. The linearization and
efficient finite element implementation of the proposed new model are also
provided.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 17:33:41 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 17:15:34 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Apr 2023 19:54:13 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Savitha",
"Eshwar J.",
""
],
[
"Sauer",
"Roger A.",
""
]
] |
new_dataset
| 0.951577 |
2210.04333
|
James Motes
|
James Motes, Tan Chen, Timothy Bretl, Marco Morales, Nancy M. Amato
|
Hypergraph-based Multi-Robot Task and Motion Planning
|
This work has been submitted for review
| null | null | null |
cs.RO cs.AI cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
We present a multi-robot task and motion planning method that, when applied
to the rearrangement of objects by manipulators, results in solution times up
to three orders of magnitude faster than existing methods and successfully
plans for problems with up to twenty objects, more than three times as many
objects as comparable methods. We achieve this improvement by decomposing the
planning space to consider manipulators alone, objects, and manipulators
holding objects. We represent this decomposition with a hypergraph where
vertices are decomposed elements of the planning spaces and hyperarcs are
transitions between elements. Existing methods use graph-based representations
where vertices are full composite spaces and edges are transitions between
these. Using the hypergraph reduces the representation size of the planning
space: for multi-manipulator object rearrangement, the number of hypergraph
vertices scales linearly with the number of either robots or objects, while the
number of hyperarcs scales quadratically with the number of robots and linearly
with the number of objects. In contrast, the number of vertices and edges in
graph-based representations scales exponentially in the number of robots and
objects. We show that similar gains can be achieved for other multi-robot task
and motion planning problems.
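
To make the stated scaling concrete, a back-of-the-envelope count following
the described decomposition is sketched below; the exact counting model,
especially the hyperarc term, is our illustrative assumption rather than the
paper's formula.

```python
def hypergraph_size(r, o):
    # Vertices: each robot space, each object space, each robot-holding-object
    # space -- linear in r for fixed o and vice versa.
    vertices = r + o + r * o
    # Transitions such as object handoffs pair two robots per object:
    # quadratic in robots, linear in objects (illustrative model).
    hyperarcs = r * (r - 1) * o
    return vertices, hyperarcs

def composite_graph_vertices(r, modes_per_robot=3):
    # Graph-based alternative: vertices are full composite states,
    # so the count grows exponentially with the number of robots.
    return modes_per_robot ** r

for r, o in [(2, 5), (4, 10), (8, 20)]:
    print(r, o, hypergraph_size(r, o), composite_graph_vertices(r))
```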
|
[
{
"version": "v1",
"created": "Sun, 9 Oct 2022 19:43:21 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 11:04:29 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Motes",
"James",
""
],
[
"Chen",
"Tan",
""
],
[
"Bretl",
"Timothy",
""
],
[
"Morales",
"Marco",
""
],
[
"Amato",
"Nancy M.",
""
]
] |
new_dataset
| 0.99594 |
2210.05038
|
Pedro Rodriguez
|
Pedro Rodriguez, Mahmoud Azab, Becka Silvert, Renato Sanchez, Linzy
Labson, Hardik Shah and Seungwhan Moon
|
Fighting FIRe with FIRE: Assessing the Validity of Text-to-Video
Retrieval Benchmarks
|
EACL 2023 Camera Ready
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Searching troves of videos with textual descriptions is a core multimodal
retrieval task. Owing to the lack of a purpose-built dataset for text-to-video
retrieval, video captioning datasets have been re-purposed to evaluate models
by (1) treating captions as positive matches to their respective videos and (2)
assuming all other videos to be negatives. However, this methodology leads to a
fundamental flaw during evaluation: since captions are marked as relevant only
to their original video, many alternate videos also match the caption, which
introduces false-negative caption-video pairs. We show that when these false
negatives are corrected, a recent state-of-the-art model gains 25\% recall
points -- a difference that threatens the validity of the benchmark itself. To
diagnose and mitigate this issue, we annotate and release 683K additional
caption-video pairs. Using these, we recompute effectiveness scores for three
models on two standard benchmarks (MSR-VTT and MSVD). We find that (1) the
recomputed metrics are up to 25\% recall points higher for the best models, (2)
these benchmarks are nearing saturation for Recall@10, (3) caption length
(generality) is related to the number of positives, and (4) annotation costs
can be mitigated through sampling. We recommend retiring these benchmarks in
their current form, and we make recommendations for future text-to-video
retrieval benchmarks.
|
[
{
"version": "v1",
"created": "Mon, 10 Oct 2022 22:45:06 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 03:50:48 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Rodriguez",
"Pedro",
""
],
[
"Azab",
"Mahmoud",
""
],
[
"Silvert",
"Becka",
""
],
[
"Sanchez",
"Renato",
""
],
[
"Labson",
"Linzy",
""
],
[
"Shah",
"Hardik",
""
],
[
"Moon",
"Seungwhan",
""
]
] |
new_dataset
| 0.999774 |
2211.05340
|
Vijaya Yajnanarayana Ph.D
|
Vijaya Yajnanarayana, Henk Wymeersch
|
Multistatic Sensing of Passive Targets Using 6G Cellular Infrastructure
|
To appear in IEEE EuCNC 2023
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sensing using cellular infrastructure may be one of the defining features of
sixth generation (6G) wireless systems. Wideband 6G communication channels
operating at higher frequency bands (upper mmWave bands) are better modeled
using clustered geometric channel models. In this paper, we propose methods for
detecting passive targets and estimating their positions using a communication
deployment, without any assistance from the target. A novel AI architecture
called CsiSenseNet is developed for this purpose. We analyze the resolution,
coverage and position uncertainty for practical indoor deployments. Using the
proposed method, we show that a human-sized target can be sensed with high
accuracy and sub-meter positioning errors in a practical indoor deployment
scenario.
|
[
{
"version": "v1",
"created": "Thu, 10 Nov 2022 04:45:48 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 05:06:15 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Yajnanarayana",
"Vijaya",
""
],
[
"Wymeersch",
"Henk",
""
]
] |
new_dataset
| 0.984335 |
2211.09791
|
Tiancai Wang
|
Yuang Zhang, Tiancai Wang, Xiangyu Zhang
|
MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained
Object Detectors
|
Accepted by CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose MOTRv2, a simple yet effective pipeline to
bootstrap end-to-end multi-object tracking with a pretrained object detector.
Existing end-to-end methods, MOTR and TrackFormer are inferior to their
tracking-by-detection counterparts mainly due to their poor detection
performance. We aim to improve MOTR by elegantly incorporating an extra object
detector. We first adopt the anchor formulation of queries and then use an
extra object detector to generate proposals as anchors, providing a detection
prior to MOTR. This simple modification greatly eases the conflict between
jointly learning the detection and association tasks in MOTR. MOTRv2 keeps
the query propagation feature and scales well on large-scale benchmarks.
MOTRv2 takes
1st place (73.4% HOTA on DanceTrack) in the 1st Multiple People Tracking in
Group Dance Challenge. Moreover, MOTRv2 reaches state-of-the-art performance on
the BDD100K dataset. We hope this simple and effective pipeline can provide
some new insights to the end-to-end MOT community. Code is available at
\url{https://github.com/megvii-research/MOTRv2}.
|
[
{
"version": "v1",
"created": "Thu, 17 Nov 2022 18:57:12 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 07:28:54 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Zhang",
"Yuang",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Zhang",
"Xiangyu",
""
]
] |
new_dataset
| 0.999651 |
2211.12979
|
Anatol Garioud
|
Anatol Garioud, St\'ephane Peillet, Eva Bookjans, S\'ebastien
Giordano, Boris Wattrelos
|
FLAIR #1: semantic segmentation and domain adaptation dataset
|
Data access update
| null |
10.13140/RG.2.2.30183.73128/1
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The French National Institute of Geographical and Forest Information (IGN)
has the mission to document and measure land-cover on French territory and
provides referential geographical datasets, including high-resolution aerial
images and topographic maps. The monitoring of land-cover plays a crucial role
in land management and planning initiatives, which can have significant
socio-economic and environmental impact. Together with remote sensing
technologies, artificial intelligence (AI) promises to become a powerful tool
in determining land-cover and its evolution. IGN is currently exploring the
potential of AI in the production of high-resolution land cover maps. Notably,
deep learning methods are employed to obtain a semantic segmentation of aerial
images. However, territories as large as France imply heterogeneous contexts:
variations in landscapes and image acquisition make it challenging to provide
uniform, reliable and accurate results across all of France. The FLAIR-one
dataset presented is part of the dataset currently used at IGN to establish the
French national reference land cover map "Occupation du sol \`a grande
\'echelle" (OCS- GE).
|
[
{
"version": "v1",
"created": "Wed, 23 Nov 2022 14:38:59 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Nov 2022 08:45:24 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Nov 2022 18:05:30 GMT"
},
{
"version": "v4",
"created": "Sun, 4 Dec 2022 00:42:14 GMT"
},
{
"version": "v5",
"created": "Wed, 19 Apr 2023 08:42:41 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Garioud",
"Anatol",
""
],
[
"Peillet",
"Stéphane",
""
],
[
"Bookjans",
"Eva",
""
],
[
"Giordano",
"Sébastien",
""
],
[
"Wattrelos",
"Boris",
""
]
] |
new_dataset
| 0.999668 |
2212.04638
|
Aoyang Liu
|
Yansong Tang, Jinpeng Liu, Aoyang Liu, Bin Yang, Wenxun Dai, Yongming
Rao, Jiwen Lu, Jie Zhou, Xiu Li
|
FLAG3D: A 3D Fitness Activity Dataset with Language Instruction
|
Accepted to CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With its continuously thriving popularity around the world, fitness activity
analysis has become an emerging research topic in computer vision. While a
variety of new tasks and algorithms have been proposed recently, there is a
growing hunger for data resources involving high-quality data, fine-grained
labels, and diverse environments. In this paper, we present FLAG3D, a
large-scale 3D fitness activity dataset with language instruction containing
180K sequences of 60 categories. FLAG3D features the following three aspects:
1) accurate and dense 3D human poses captured from an advanced MoCap system to
handle the complex activity and large movement, 2) detailed and professional
language instruction to describe how to perform a specific activity, 3)
versatile video resources from a high-tech MoCap system, rendering software,
and cost-effective smartphones in natural environments. Extensive experiments
and in-depth analysis show that FLAG3D contributes great research value for
various challenges, such as cross-domain human action recognition, dynamic
human mesh recovery, and language-guided human action generation. Our dataset
and source code are publicly available at https://andytang15.github.io/FLAG3D.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 02:33:33 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 13:31:03 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Tang",
"Yansong",
""
],
[
"Liu",
"Jinpeng",
""
],
[
"Liu",
"Aoyang",
""
],
[
"Yang",
"Bin",
""
],
[
"Dai",
"Wenxun",
""
],
[
"Rao",
"Yongming",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
],
[
"Li",
"Xiu",
""
]
] |
new_dataset
| 0.999885 |
2301.00337
|
Young-Ho Kim
|
Christian DeBuys and Florin C. Ghesu and Jagadeesan Jayender and Reza
Langari and Young-Ho Kim
|
Separable Tendon-Driven Robotic Manipulator with a Long, Flexible,
Passive Proximal Section
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work tackles practical issues which arise when using a tendon-driven
robotic manipulator (TDRM) with a long, flexible, passive proximal section in
medical applications. Tendon-driven devices are preferred in medicine for their
improved outcomes via minimally invasive procedures, but TDRMs come with unique
challenges such as sterilization and reuse, simultaneous control of tendons,
hysteresis in the tendon-sheath mechanism, and unmodeled effects of the
proximal section shape. A separable TDRM which overcomes difficulties in
actuation and sterilization is introduced, in which the body containing the
electronics is reusable and the remainder is disposable. An open-loop redundant
controller which resolves the redundancy in the kinematics is developed. Simple
linear hysteresis compensation and re-tension compensation based on the
physical properties of the device are proposed. The controller and compensation
methods are evaluated on a testbed for a straight proximal section, a curved
proximal section at various static angles, and a proximal section which
dynamically changes angles; and overall, distal tip error was reduced.
|
[
{
"version": "v1",
"created": "Sun, 1 Jan 2023 03:31:15 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 20:33:39 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"DeBuys",
"Christian",
""
],
[
"Ghesu",
"Florin C.",
""
],
[
"Jayender",
"Jagadeesan",
""
],
[
"Langari",
"Reza",
""
],
[
"Kim",
"Young-Ho",
""
]
] |
new_dataset
| 0.997943 |
2303.03129
|
Tuomas Välimäki
|
Tuomas Välimäki, Bharath Garigipati and Reza Ghabcheloo
|
Motion-based extrinsic sensor-to-sensor calibration: Effect of reference
frame selection for new and existing methods
| null |
Sensors 2023, 23(7), 3740
|
10.3390/s23073740
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper studies the effect of reference frame selection in
sensor-to-sensor extrinsic calibration when formulated as a motion-based
hand-eye calibration problem. Different reference selection options are tested
under varying noise conditions in simulation, and the findings are validated
with real data from the KITTI dataset. We propose two nonlinear cost functions
for optimization and compare them with four state-of-the-art methods. One of
the proposed cost functions incorporates outlier rejection to improve
calibration performance and was shown to significantly improve performance in
the presence of outliers, and either match or outperform the other algorithms
in other noise conditions. However, the performance gain from reference frame
selection was deemed larger than that from algorithm selection. In addition, we
show that with realistic noise, the reference frame selection method commonly
used in the literature is inferior to other tested options, and that relative
error metrics are not reliable for indicating which method achieves the best
calibration performance.
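
For context, motion-based hand-eye calibration is typically posed over motion
pairs (A_i, B_i) as AX = XB; below is a minimal Python sketch of the rotational
part of such a nonlinear cost, assuming an axis-angle parameterization. It is
not the paper's exact cost function, which additionally incorporates outlier
rejection.

import numpy as np

def rodrigues(r):
    # Axis-angle vector -> rotation matrix via Rodrigues' formula.
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def handeye_rotation_cost(r, RA_list, RB_list):
    # Sum of squared Frobenius residuals ||R_Ai X - X R_Bi||_F^2 over
    # corresponding sensor-motion pairs; minimize over r with any
    # off-the-shelf nonlinear least-squares solver.
    X = rodrigues(r)
    return sum(np.linalg.norm(RA @ X - X @ RB, 'fro') ** 2
               for RA, RB in zip(RA_list, RB_list))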
|
[
{
"version": "v1",
"created": "Mon, 6 Mar 2023 13:44:23 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Välimäki",
"Tuomas",
""
],
[
"Garigipati",
"Bharath",
""
],
[
"Ghabcheloo",
"Reza",
""
]
] |
new_dataset
| 0.999438 |
2303.06962
|
Tao Wang
|
Tao Wang, Jie Lv, Haonan Tong, Changsheng You, Changchuan Yin
|
A Novel Two-Layer Codebook Based Near-Field Beam Training for
Intelligent Reflecting Surface
|
6 pages, 4 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study codebook-based near-field beam training for
intelligent reflecting surface (IRS)-aided wireless systems. In the considered
model, the near-field beam training is critical to focus signals at the
location of user equipment (UE) to obtain prominent IRS array gain. However,
existing codebook schemes cannot achieve low training overhead and high
receiving power simultaneously. To tackle this issue, a novel two-layer
codebook based beam training scheme is proposed. The layer-1 codebook is
designed based on the omnidirectionality of a random-phase beam pattern, which
estimates the UE distance with training overhead equivalent to that of one DFT
codeword. Then, based on the estimated UE distance, the layer-2 codebook is
generated to scan candidate UE locations and obtain the optimal codeword for
IRS beamforming. Numerical results show that compared with benchmarks, the
proposed two-layer beam training scheme achieves more accurate UE distance and
angle estimation, higher data rate, and smaller training overhead.
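
To make the codebook ingredients concrete, here is a small sketch of the two
patterns named above: a standard far-field DFT codebook and a random-phase
(near-omnidirectional) codeword. The paper's actual layer-2 near-field
codewords, which additionally encode candidate UE distances, are not
reproduced here.

import numpy as np

def dft_codebook(n):
    # Columns are the n standard DFT beamforming codewords for an
    # n-element uniform array (far-field construction).
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)

def random_phase_codeword(n, seed=0):
    # Random phases yield a near-omnidirectional beam pattern, the
    # property the layer-1 distance estimation stage relies on.
    rng = np.random.default_rng(seed)
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))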
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 10:04:46 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 01:43:57 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Wang",
"Tao",
""
],
[
"Lv",
"Jie",
""
],
[
"Tong",
"Haonan",
""
],
[
"You",
"Changsheng",
""
],
[
"Yin",
"Changchuan",
""
]
] |
new_dataset
| 0.98346 |
2303.14933
|
Zicheng Zhang
|
Zicheng Zhang, Wei Wu, Wei Sun, Dangyang Tu, Wei Lu, Xiongkuo Min,
Ying Chen, Guangtao Zhai
|
MD-VQA: Multi-Dimensional Quality Assessment for UGC Live Videos
|
Accepted to CVPR2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
User-generated content (UGC) live videos are often degraded by various
distortions during capture procedures and thus exhibit diverse visual
qualities. Such source videos are further compressed and transcoded by media
server providers before being distributed to end-users. Because of the
flourishing of UGC live videos, effective video quality assessment (VQA) tools
are needed to monitor and perceptually optimize live streaming videos in the
distributing process. In this paper, we address \textbf{UGC Live VQA} problems
by constructing a first-of-a-kind subjective UGC Live VQA database and
developing an effective evaluation tool. Concretely, 418 source UGC videos are
collected in real live streaming scenarios and 3,762 compressed ones at
different bit rates are generated for the subsequent subjective VQA
experiments. Based on the built database, we develop a
\underline{M}ulti-\underline{D}imensional \underline{VQA} (\textbf{MD-VQA})
evaluator to measure the visual quality of UGC live videos from semantic,
distortion, and motion aspects respectively. Extensive experimental results
show that MD-VQA achieves state-of-the-art performance on both our UGC Live VQA
database and existing compressed UGC VQA databases.
|
[
{
"version": "v1",
"created": "Mon, 27 Mar 2023 06:17:10 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2023 07:51:02 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Zhang",
"Zicheng",
""
],
[
"Wu",
"Wei",
""
],
[
"Sun",
"Wei",
""
],
[
"Tu",
"Dangyang",
""
],
[
"Lu",
"Wei",
""
],
[
"Min",
"Xiongkuo",
""
],
[
"Chen",
"Ying",
""
],
[
"Zhai",
"Guangtao",
""
]
] |
new_dataset
| 0.973251 |
2304.09181
|
Shantanu Mandal
|
Shantanu Mandal, Adhrik Chethan, Vahid Janfaza, S M Farabi Mahmud,
Todd A Anderson, Javier Turek, Jesmin Jahan Tithi, Abdullah Muzahid
|
Large Language Models Based Automatic Synthesis of Software
Specifications
| null | null | null | null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Software configurations play a crucial role in determining the behavior of
software systems. In order to ensure safe and error-free operation, it is
necessary to identify the correct configurations, along with their valid bounds
and rules, which are commonly referred to as software specifications. As
software systems grow in complexity and scale, the number of configurations and
associated specifications required to ensure the correct operation can become
large and prohibitively difficult to manipulate manually. Due to the fast pace
of software development, it is often the case that correct software
specifications are not thoroughly checked or validated within the software
itself. Rather, they are frequently discussed and documented in a variety of
external sources, including software manuals, code comments, and online
discussion forums. Therefore, it is hard for the system administrator to know
the correct specifications of configurations due to the lack of clarity,
organization, and a centralized unified source to look at. To address this
challenge, we propose SpecSyn, a framework that leverages a state-of-the-art
large language model to automatically synthesize software specifications from
natural language sources. Our approach formulates software specification
synthesis as a sequence-to-sequence learning problem and investigates the
extraction of specifications from large contextual texts. This is the first
work that uses a large language model for end-to-end specification synthesis
from natural language texts. Empirical results demonstrate that our system
outperforms the prior state-of-the-art specification synthesis tool by 21% in
terms of F1 score and can find specifications from single as well as multiple
sentences.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 01:22:44 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Mandal",
"Shantanu",
""
],
[
"Chethan",
"Adhrik",
""
],
[
"Janfaza",
"Vahid",
""
],
[
"Mahmud",
"S M Farabi",
""
],
[
"Anderson",
"Todd A",
""
],
[
"Turek",
"Javier",
""
],
[
"Tithi",
"Jesmin Jahan",
""
],
[
"Muzahid",
"Abdullah",
""
]
] |
new_dataset
| 0.95239 |
2304.09285
|
Benjamin Killeen
|
Benjamin D. Killeen, Han Zhang, Jan Mangulabnan, Mehran Armand, Russel
H. Taylor, Greg Osgood, Mathias Unberath
|
Pelphix: Surgical Phase Recognition from X-ray Images in Percutaneous
Pelvic Fixation
| null | null | null | null |
cs.LG cs.AI cs.CV q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Surgical phase recognition (SPR) is a crucial element in the digital
transformation of the modern operating theater. While SPR based on video
sources is well-established, incorporation of interventional X-ray sequences
has not yet been explored. This paper presents Pelphix, a first approach to SPR
for X-ray-guided percutaneous pelvic fracture fixation, which models the
procedure at four levels of granularity -- corridor, activity, view, and frame
value -- simulating the pelvic fracture fixation workflow as a Markov process
to provide fully annotated training data. Using added supervision from
detection of bony corridors, tools, and anatomy, we learn image representations
that are fed into a transformer model to regress surgical phases at the four
granularity levels. Our approach demonstrates the feasibility of X-ray-based
SPR, achieving an average accuracy of 93.8% on simulated sequences and 67.57%
on cadaver data across all granularity levels, with up to 88% accuracy for the
target corridor in real data. This work constitutes the first step toward SPR
for the X-ray domain, establishing an approach to categorizing phases in
X-ray-guided surgery, simulating realistic image sequences to enable machine
learning model development, and demonstrating that this approach is feasible
for the analysis of real procedures. As X-ray-based SPR continues to mature, it
will benefit procedures in orthopedic surgery, angiography, and interventional
radiology by equipping intelligent surgical systems with situational awareness
in the operating room.
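
To illustrate the Markov-process simulation mentioned above, the snippet below
samples a phase sequence from a hand-made transition table. The phase names
and probabilities are hypothetical placeholders, not the paper's actual
workflow model, which is defined over corridors, activities, views, and
frames.

import random

TRANSITIONS = {  # hypothetical coarse phases and probabilities
    'position_wire': {'position_wire': 0.6, 'insert_screw': 0.4},
    'insert_screw': {'insert_screw': 0.5, 'position_wire': 0.2, 'done': 0.3},
}

def sample_workflow(start='position_wire', max_steps=50):
    # Walk the Markov chain; every sampled state is a free annotation
    # for the corresponding simulated X-ray frame.
    seq, state = [start], start
    while state != 'done' and len(seq) < max_steps:
        nxt = TRANSITIONS[state]
        state = random.choices(list(nxt), weights=list(nxt.values()))[0]
        seq.append(state)
    return seq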
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 20:48:14 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Killeen",
"Benjamin D.",
""
],
[
"Zhang",
"Han",
""
],
[
"Mangulabnan",
"Jan",
""
],
[
"Armand",
"Mehran",
""
],
[
"Taylor",
"Russel H.",
""
],
[
"Osgood",
"Greg",
""
],
[
"Unberath",
"Mathias",
""
]
] |
new_dataset
| 0.999217 |
2304.09286
|
Young-Ho Kim
|
Young-Ho Kim and Èric Lluch and Gulsun Mehmet and Florin C. Ghesu
and Ankur Kapoor
|
AI-based Agents for Automated Robotic Endovascular Guidewire
Manipulation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Endovascular guidewire manipulation is essential for minimally-invasive
clinical applications (Percutaneous Coronary Intervention (PCI), Mechanical
thrombectomy techniques for acute ischemic stroke (AIS), or Transjugular
intrahepatic portosystemic shunt (TIPS)). All procedures commonly require 3D
vessel geometries from 3D CTA (Computed Tomography Angiography) images. During
these procedures, the clinician generally places a guiding catheter in the
ostium of the relevant vessel and then manipulates a wire through the catheter
and across the blockage. The clinician only uses X-ray fluoroscopy
intermittently to visualize and guide the catheter, guidewire, and other
devices. However, clinicians still passively control guidewires/catheters by
relying on limited indirect observation (i.e., 2D partial view of devices, and
intermittent updates due to radiation limit) from X-ray fluoroscopy. Modeling
and controlling the guidewire manipulation in coronary vessels remains
challenging because of the complicated interaction between guidewire motions
with different physical properties (i.e., loads, coating) and vessel geometries
with lumen conditions resulting in a highly non-linear system. This paper
introduces a scalable learning pipeline to train AI-based agent models toward
automated endovascular predictive device controls. First, we create a scalable
environment by pre-processing 3D CTA images, providing patient-specific 3D
vessel geometry and the centerline of the coronary. Next, we apply a large
quantity of randomly generated motion sequences from the proximal end to
generate wire states associated with each environment using a physics-based
device simulator. Then, we reformulate the control problem to a
sequence-to-sequence learning problem, in which we use a Transformer-based
model, trained to handle non-linear sequential forward/inverse transition
functions.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 20:53:25 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Kim",
"Young-Ho",
""
],
[
"Lluch",
"Èric",
""
],
[
"Mehmet",
"Gulsun",
""
],
[
"Ghesu",
"Florin C.",
""
],
[
"Kapoor",
"Ankur",
""
]
] |
new_dataset
| 0.995341 |
2304.09299
|
Sam Ross
|
Sam Ross, Nicole Sullivan, Jina Yoon
|
Virtual Fidgets: Opportunities and Design Principles for Bringing
Fidgeting to Online Learning
|
6 pages, 3 figures, CHI LBW 2023
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present design guidelines for incorporating fidgeting into the virtual
world as a tool for students in online lectures. Fidgeting is associated with
increased attention and self-regulation, and has the potential to help students
focus. Currently there are no fidgets, physical or virtual, designed for
preserving attention specifically in online learning environments, and no
heuristics for designing fidgets within this domain. We identify three virtual
fidget proxies to serve as design probes for studying student experiences with
virtual fidgeting. Through a study of eight students using our virtual fidget
proxies in online lectures, we identify eight emergent themes that encompass
student experience with virtual fidgeting in lectures. Based on these themes,
we present four principles for designing domain-specific virtual fidgets for
online lectures. We identify that virtual fidgets for lectures should be
context-aware, visually appealing, easy to adopt, and physically interactive.
|
[
{
"version": "v1",
"created": "Tue, 18 Apr 2023 21:03:30 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Ross",
"Sam",
""
],
[
"Sullivan",
"Nicole",
""
],
[
"Yoon",
"Jina",
""
]
] |
new_dataset
| 0.993795 |
2304.09370
|
Theodore Tyler
|
Ted Tyler, Vaibhav Malhotra, Adam Montague, Zhigen Zhao, Frank L.
Hammond III, and Ye Zhao
|
Integrating Reconfigurable Foot Design, Multi-modal Contact Sensing, and
Terrain Classification for Bipedal Locomotion
|
7 pages, 6 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability of bipedal robots to adapt to diverse and unstructured terrain
conditions is crucial for their deployment in real-world environments. To this
end, we present a novel, bio-inspired robot foot design with stabilizing tarsal
segments and a multifarious sensor suite involving acoustic, capacitive,
tactile, temperature, and acceleration sensors. A real-time signal processing
and terrain classification system is developed and evaluated. The sensed
terrain information is used to control actuated segments of the foot, leading
to improved ground contact and stability. The proposed framework highlights the
potential of the sensor-integrated adaptive foot for intelligent and adaptive
locomotion.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 01:53:30 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Tyler",
"Ted",
""
],
[
"Malhotra",
"Vaibhav",
""
],
[
"Montague",
"Adam",
""
],
[
"Zhao",
"Zhigen",
""
],
[
"Hammond",
"Frank L.",
"III"
],
[
"Zhao",
"Ye",
""
]
] |
new_dataset
| 0.9991 |
2304.09384
|
Chrystian Chrystian
|
Chrystian, Wahyono
|
SP-BatikGAN: An Efficient Generative Adversarial Network for Symmetric
Pattern Generation
| null | null | null | null |
cs.CV cs.MM eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Following the controversy around AI art, our research focuses on bringing AI for
all, particularly for artists, to create AI arts with limited data and
settings. We are interested in geometrically symmetric pattern generation,
which appears on many artworks such as Portuguese, Moroccan tiles, and Batik, a
cultural heritage in Southeast Asia. Symmetric pattern generation is a complex
problem, with prior research creating too-specific models for certain patterns
only. We publicly provide the first-ever set of 1,216 high-quality symmetric
patterns, taken straight from design files, for this task. We then formulate
symmetric pattern
enforcement (SPE) loss to leverage underlying symmetric-based structures that
exist on current image distributions. Our SPE improves and accelerates training
on any GAN configuration, and, with efficient attention, SP-BatikGAN compared
to FastGAN, the state-of-the-art GAN for limited setting, improves the FID
score from 110.11 to 90.76, an 18% decrease, and model diversity recall score
from 0.047 to 0.204, a 334% increase.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 02:38:11 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Chrystian",
"",
""
],
[
"Wahyono",
"",
""
]
] |
new_dataset
| 0.992634 |
2304.09395
|
Yan Jin
|
Xuanhao Pan, Yan Jin, Yuandong Ding, Mingxiao Feng, Li Zhao, Lei Song,
Jiang Bian
|
H-TSP: Hierarchically Solving the Large-Scale Travelling Salesman
Problem
|
Accepted by AAAI 2023, February 2023
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an end-to-end learning framework based on hierarchical
reinforcement learning, called H-TSP, for addressing the large-scale Travelling
Salesman Problem (TSP). The proposed H-TSP constructs a solution of a TSP
instance from scratch, relying on two components: the upper-level
policy chooses a small subset of nodes (up to 200 in our experiment) from all
nodes that are to be traversed, while the lower-level policy takes the chosen
nodes as input and outputs a tour connecting them to the existing partial route
(initially only containing the depot). After jointly training the upper-level
and lower-level policies, our approach can directly generate solutions for the
given TSP instances without relying on any time-consuming search procedures. To
demonstrate the effectiveness of the proposed approach, we have conducted extensive
experiments on randomly generated TSP instances with different numbers of
nodes. We show that H-TSP can achieve comparable results (gap 3.42% vs. 7.32%)
as SOTA search-based approaches, and more importantly, we reduce the time
consumption up to two orders of magnitude (3.32s vs. 395.85s). To the best of
our knowledge, H-TSP is the first end-to-end deep reinforcement learning
approach that can scale to TSP instances of up to 10000 nodes. Although there
are still gaps to SOTA results with respect to solution quality, we believe
that H-TSP will be useful for practical applications, particularly those that
are time-sensitive, e.g., on-call routing and ride-hailing services.
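
As a sketch of the two-level decomposition just described, the Python snippet
below replaces both learned policies with greedy heuristics; it illustrates
only the construction scheme, since in H-TSP both levels are neural policies
trained with RL.

import numpy as np

def order_subset(coords, subset, start):
    # Stand-in lower-level policy: connect the chosen nodes greedily,
    # starting from the current endpoint of the partial tour.
    order, cur, remaining = [], start, set(subset)
    while remaining:
        cur = min(remaining,
                  key=lambda j: np.linalg.norm(coords[j] - coords[cur]))
        order.append(cur)
        remaining.discard(cur)
    return order

def hierarchical_tsp(coords, k=200):
    # Stand-in upper-level policy: repeatedly pick the k unvisited nodes
    # nearest to the tour's endpoint, then let the lower level extend
    # the partial tour through them.
    tour, unvisited = [0], set(range(1, len(coords)))
    while unvisited:
        end = tour[-1]
        subset = sorted(unvisited,
                        key=lambda j: np.linalg.norm(coords[j] - coords[end]))[:k]
        tour += order_subset(coords, subset, end)
        unvisited -= set(subset)
    return tour + [0]  # close the tour at the depot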
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 03:10:30 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Pan",
"Xuanhao",
""
],
[
"Jin",
"Yan",
""
],
[
"Ding",
"Yuandong",
""
],
[
"Feng",
"Mingxiao",
""
],
[
"Zhao",
"Li",
""
],
[
"Song",
"Lei",
""
],
[
"Bian",
"Jiang",
""
]
] |
new_dataset
| 0.999678 |
2304.09400
|
Qianqian Zhang
|
Qianqian Zhang, Hu Zhou, Ying-Chang Liang, Sumei Sun, Wei Zhang, and
H. Vincent Poor
|
On the Capacity Region of Reconfigurable Intelligent Surface Assisted
Symbiotic Radios
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we are interested in reconfigurable intelligent surface
(RIS)-assisted symbiotic radio (SR) systems, where an RIS assists a primary
transmission by passive beamforming and simultaneously acts as an information
transmitter by periodically adjusting its reflecting coefficients. The above
modulation scheme innately enables a new multiplicative multiple access channel
(M-MAC), where the primary and secondary signals are superposed in a
multiplicative and additive manner. To pursue the fundamental performance
limits of the M-MAC, we focus on the characterization of the capacity region of
such systems. Due to the passive nature of RISs, the transmitted signal of the
RIS should satisfy the peak power constraint. Under this constraint at the RIS
as well as the average power constraint at the primary transmitter (PTx), we
analyze the capacity-achieving distributions of the transmitted signals and
characterize the capacity region of the M-MAC. Then, theoretical analysis is
performed to reveal insights into the RIS-assisted SR. It is observed that: 1)
the capacity region of the M-MAC is strictly convex and larger than that of the
conventional TDMA scheme; 2) the secondary transmission can achieve the maximum
rate when the PTx transmits constant envelope signals; and 3) the sum rate
achieves its maximum when the PTx transmits Gaussian signals and the RIS
transmits constant envelope signals. Finally, extensive numerical results
are provided to evaluate the performance of the RIS-assisted SR and verify the
accuracy of our theoretical analysis.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 03:28:32 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Zhang",
"Qianqian",
""
],
[
"Zhou",
"Hu",
""
],
[
"Liang",
"Ying-Chang",
""
],
[
"Sun",
"Sumei",
""
],
[
"Zhang",
"Wei",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.972346 |
2304.09411
|
Yi Zheng
|
Yi Zheng, Aasheesh Kolli, Shaizeen Aga
|
Egalitarian ORAM: Wear-Leveling for ORAM
| null | null | null | null |
cs.AR
|
http://creativecommons.org/publicdomain/zero/1.0/
|
While non-volatile memories (NVMs) provide several desirable characteristics,
such as better density than DRAM, comparable energy efficiency, DRAM-like
performance, and disk-like durability, the limited endurance these memories
manifest remains a challenge. Indeed, the endurance constraints of NVMs can
prevent solutions that are commonly employed for other mainstream memories like
DRAM from being carried over as-is to NVMs. Specifically, in this work we
observe that the Oblivious RAM (ORAM) primitive, the state-of-the-art solution
to tackle memory bus side channel vulnerability, while widely studied for
DRAMs, is particularly challenging to implement as-is for NVMs, as it severely
affects their endurance. This is because the inherent nature of the ORAM
primitive causes an order of magnitude increase in write traffic and,
furthermore, causes some regions of memory to be written far more often than
others. This non-uniform write traffic manifested by the ORAM primitive stands
to severely affect the lifetime of non-volatile memories (1% of the baseline
without ORAM), making it impractical to address this security vulnerability.
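
A toy experiment illustrates the non-uniform write traffic described above for
a Path-ORAM-style tree, where every access rewrites a root-to-leaf path. It is
a sketch of the problem setting only, not of the wear-leveling scheme the
paper proposes.

import random
from collections import Counter

def path_oram_write_skew(levels=10, accesses=100_000):
    # Every access rewrites one root-to-leaf path, so nodes near the
    # root absorb exponentially more writes than leaf-level nodes.
    writes = Counter()
    for _ in range(accesses):
        leaf = random.getrandbits(levels - 1)   # random leaf index
        node = 1                                # heap-style root id
        writes[node] += 1
        for lvl in range(levels - 1):
            node = 2 * node + ((leaf >> (levels - 2 - lvl)) & 1)
            writes[node] += 1
    return writes

Running this shows the root absorbing every access while each leaf-level node
sees roughly accesses / 2^(levels-1) writes, which is exactly the skew a
wear-leveling scheme must flatten.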
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 03:56:45 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Zheng",
"Yi",
""
],
[
"Kolli",
"Aasheesh",
""
],
[
"Aga",
"Shaizeen",
""
]
] |
new_dataset
| 0.982122 |
2304.09412
|
Snehesh Shrestha
|
Snehesh Shrestha, Ishan Tamrakar, Cornelia Fermuller, Yiannis
Aloimonos
|
hDesigner: Real-Time Haptic Feedback Pattern Designer
| null | null | null | null |
cs.HC cs.MM cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Haptic sensing can provide a new dimension to enhance people's musical and
cinematic experiences. However, designing a haptic pattern is neither intuitive
nor trivial. Imagined haptic patterns tend to be different from experienced
ones. As a result, researchers use simple step-curve patterns to create haptic
stimuli. To this end, we designed and developed an intuitive haptic pattern
designer that lets you rapidly prototype creative patterns. Our simple
architecture, wireless connectivity, and easy-to-program communication protocol
make it modular and easy to scale. In this demo, workshop participants can
select from a library of haptic patterns and design new ones. They can feel the
pattern as they make changes in the user interface. With this new workflow,
researchers and artists can design and rapidly test haptic patterns for
downstream tasks such as research experiments or create new musical and
cinematic experiences. More details about the project are available at
https://www.snehesh.com/hDesigner
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 04:00:48 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Shrestha",
"Snehesh",
""
],
[
"Tamrakar",
"Ishan",
""
],
[
"Fermuller",
"Cornelia",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
new_dataset
| 0.999499 |
2304.09421
|
Zhao Kang
|
Quanjiang Guo, Zhao Kang, Ling Tian, Zhouguo Chen
|
TieFake: Title-Text Similarity and Emotion-Aware Fake News Detection
|
Appear on IJCNN 2023
| null | null | null |
cs.CL cs.CV cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fake news detection aims to detect fake news widely spreading on social media
platforms, which can negatively influence the public and the government. Many
approaches have been developed to exploit relevant information from news
images, text, or videos. However, these methods may suffer from the following
limitations: (1) ignore the inherent emotional information of the news, which
could be beneficial since it contains the subjective intentions of the authors;
(2) pay little attention to the relation (similarity) between the title and
textual information in news articles, which often use irrelevant titles to
attract readers' attention. To this end, we propose a novel Title-Text
similarity and emotion-aware Fake news detection (TieFake) method by jointly
modeling the multi-modal context information and the author sentiment in a
unified framework. Specifically, we respectively employ BERT and ResNeSt to
learn the representations for text and images, and utilize publisher emotion
extractor to capture the author's subjective emotion in the news content. We
also propose a scale-dot product attention mechanism to capture the similarity
between title features and textual features. Experiments are conducted on two
publicly available multi-modal datasets, and the results demonstrate that our
proposed method can significantly improve the performance of fake news
detection. Our code is available at https://github.com/UESTC-GQJ/TieFake.
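
As a sketch of the scale-dot product attention between title and textual
features mentioned above, assuming batched features of equal width d (a
minimal illustration, not the full TieFake network):

import torch.nn.functional as F

def title_text_attention(title_feats, text_feats):
    # Title features act as queries, textual features as keys/values:
    # shapes [B, Lt, d] and [B, Ls, d], output [B, Lt, d].
    d = title_feats.size(-1)
    scores = title_feats @ text_feats.transpose(-2, -1) / d ** 0.5
    attn = F.softmax(scores, dim=-1)
    return attn @ text_feats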
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 04:47:36 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Guo",
"Quanjiang",
""
],
[
"Kang",
"Zhao",
""
],
[
"Tian",
"Ling",
""
],
[
"Chen",
"Zhouguo",
""
]
] |
new_dataset
| 0.997762 |
2304.09448
|
Yao Mu Mark
|
Yao Mu, Shunyu Yao, Mingyu Ding, Ping Luo, Chuang Gan
|
EC^2: Emergent Communication for Embodied Control
|
Published in CVPR2023
| null | null | null |
cs.LG cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Embodied control requires agents to leverage multi-modal pre-training to
quickly learn how to act in new environments, where video demonstrations
contain visual and motion details needed for low-level perception and control,
and language instructions support generalization with abstract, symbolic
structures. While recent approaches apply contrastive learning to force
alignment between the two modalities, we hypothesize that better modeling of
their complementary differences can lead to more holistic representations for
downstream adaption. To this end, we propose Emergent Communication for
Embodied Control (EC^2), a novel scheme to pre-train video-language
representations for few-shot embodied control. The key idea is to learn an
unsupervised "language" of videos via emergent communication, which bridges the
semantics of video details and structures of natural language. We learn
embodied representations of video trajectories, emergent language, and natural
language using a language model, which is then used to finetune a lightweight
policy network for downstream control. Through extensive experiments in
Metaworld and Franka Kitchen embodied benchmarks, EC^2 is shown to consistently
outperform previous contrastive learning methods for both videos and texts as
task inputs. Further ablations confirm the importance of the emergent language,
which is beneficial for both video and language learning, and significantly
superior to using pre-trained video captions. We also present a quantitative
and qualitative analysis of the emergent language and discuss future directions
toward better understanding and leveraging emergent communication in embodied
tasks.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 06:36:02 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Mu",
"Yao",
""
],
[
"Yao",
"Shunyu",
""
],
[
"Ding",
"Mingyu",
""
],
[
"Luo",
"Ping",
""
],
[
"Gan",
"Chuang",
""
]
] |
new_dataset
| 0.96837 |
2304.09463
|
Zhuo Chen
|
Zhuo Chen, Xudong Xu, Yichao Yan, Ye Pan, Wenhan Zhu, Wayne Wu, Bo Dai
and Xiaokang Yang
|
HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Portrait stylization is a long-standing task enabling extensive applications.
Although 2D-based methods have made great progress in recent years, real-world
applications such as metaverse and games often demand 3D content. On the other
hand, the requirement of 3D data, which is costly to acquire, significantly
impedes the development of 3D portrait stylization methods. In this paper,
inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D
fields as the intermediate representation for rendering 2D images, we propose a
novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait
stylization. At the core of our method is a hyper-network learned to manipulate
the parameters of the generator in a single forward pass. It not only offers a
strong capacity to handle multiple styles with a single model, but also enables
flexible fine-grained stylization that affects only texture, shape, or local
part of the portrait. While the use of 3D-aware GANs bypasses the requirement
of 3D data, we further alleviate the necessity of style images with the CLIP
model being the stylization guidance. We conduct an extensive set of
experiments across the style, attribute, and shape, and meanwhile, measure the
3D consistency. These experiments demonstrate the superior capability of our
HyperStyle3D model in rendering 3D-consistent images in diverse styles,
deforming the face shape, and editing various attributes.
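
The hypernetwork mechanism above can be illustrated with a toy module that
maps a style embedding (e.g., a CLIP text embedding) to additive weight
offsets for one generator layer; this is a hypothetical simplification, not
the paper's architecture.

import math
import torch.nn as nn

class ToyHyperNet(nn.Module):
    # Hypothetical, simplified hypernetwork: one forward pass turns a
    # style embedding into offsets for a target weight tensor, so a
    # single model can serve many styles.
    def __init__(self, emb_dim, target_shape):
        super().__init__()
        self.target_shape = tuple(target_shape)
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, 256), nn.ReLU(),
            nn.Linear(256, math.prod(self.target_shape)))

    def forward(self, style_emb, target_weight):
        delta = self.mlp(style_emb).view(self.target_shape)
        return target_weight + delta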
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 07:22:05 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Chen",
"Zhuo",
""
],
[
"Xu",
"Xudong",
""
],
[
"Yan",
"Yichao",
""
],
[
"Pan",
"Ye",
""
],
[
"Zhu",
"Wenhan",
""
],
[
"Wu",
"Wayne",
""
],
[
"Dai",
"Bo",
""
],
[
"Yang",
"Xiaokang",
""
]
] |
new_dataset
| 0.963067 |
2304.09468
|
Ali AlQahtani
|
Hosam Alamleh, Ali Abdullah S. AlQahtani, Baker Al Smadi
|
Secure Mobile Payment Architecture Enabling Multi-factor Authentication
| null | null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
The rise of smartphones has led to a significant increase in the usage of
mobile payments. Mobile payments allow individuals to access financial
resources and make transactions through their mobile devices while on the go.
However, the current mobile payment systems were designed to align with
traditional payment structures, which limits the full potential of smartphones,
including their security features. This has become a major concern in the
rapidly growing mobile payment market. To address these security concerns, in
this paper we propose a new mobile payment architecture. This architecture
leverages the advanced capabilities of modern smartphones to verify various
aspects of a payment, such as funds, biometrics, location, and others. The
proposed system aims to guarantee the legitimacy of transactions and protect
against identity theft by verifying multiple elements of a payment. The
security of mobile payment systems is crucial, given the rapid growth of the
market. Evaluating mobile payment systems based on their authentication,
encryption, and fraud detection capabilities is of utmost importance. The
proposed architecture provides a secure mobile payment solution that enhances
the overall payment experience by taking advantage of the advanced capabilities
of modern smartphones. This will not only improve the security of mobile
payments but also offer a more user-friendly payment experience for consumers.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 07:30:18 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Alamleh",
"Hosam",
""
],
[
"AlQahtani",
"Ali Abdullah S.",
""
],
[
"Smadi",
"Baker Al",
""
]
] |
new_dataset
| 0.997263 |
2304.09469
|
Adriel Isaiah Amoguis
|
Adriel Isaiah V. Amoguis, Gian Joseph B. Madrid, Benito Miguel D.
Flores IV, Macario O. Cordel II
|
Baybayin Character Instance Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Philippine Government recently passed the "National Writing System Act,"
which promotes using Baybayin in Philippine texts. In support of this effort to
promote the use of Baybayin, we present a computer vision system which can aid
individuals who cannot easily read Baybayin script. In this paper, we survey
the existing methods of identifying Baybayin scripts using computer vision and
machine learning techniques and discuss their capabilities and limitations.
Further, we propose a Baybayin Optical Character Instance Segmentation and
Classification model using state-of-the-art Convolutional Neural Networks
(CNNs) that detect Baybayin character instances in an image then outputs the
Latin alphabet counterparts of each character instance in the image. Most
existing systems are limited to character-level image classification and often
misclassify characters with diacritics or fail to support them natively. In
addition, these models often have specific input requirements that limit them
to classifying Baybayin text in a controlled setting, such as limitations in
clarity and contrast, among others. To our knowledge, our proposed method is
the first end-to-end character instance detection model for Baybayin, achieving
a mAP50 score of 93.30%, mAP50-95 score of 80.50%, and F1-Score of 84.84%.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 07:35:41 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Amoguis",
"Adriel Isaiah V.",
""
],
[
"Madrid",
"Gian Joseph B.",
""
],
[
"Flores",
"Benito Miguel D.",
"IV"
],
[
"Cordel",
"Macario O.",
"II"
]
] |
new_dataset
| 0.999568 |
2304.09574
|
Sudhakar Singh
|
Amisha Gangwar, Sudhakar Singh, Richa Mishra, Shiv Prakash
|
The State-of-the-Art in Air Pollution Monitoring and Forecasting Systems
using IoT, Big Data, and Machine Learning
|
30 pages, 11 figures, Wireless Personal Communications. Wireless Pers
Commun (2023)
| null |
10.1007/s11277-023-10351-1
|
WIRE-D-22-01442-R1
|
cs.LG cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The quality of air is closely linked with the life quality of humans,
plantations, and wildlife. It needs to be monitored and preserved continuously.
Transportation, industries, construction sites, generators, fireworks, and
waste burning account for a major share of air quality degradation. These
sources are required to be used in a safe and controlled manner. Using
traditional laboratory analysis or installing bulky and expensive models every
few miles is no longer efficient. Smart devices are needed for collecting and
analyzing air data. The quality of air depends on various factors, including
location, traffic, and time. Recent research uses machine learning
algorithms, big data technologies, and the Internet of Things to propose a
stable and efficient model for the stated purpose. This review paper focuses on
studying and compiling recent research in this field and emphasizes the data
sources, monitoring, and forecasting models. The main objective of this paper
is to provide insight into the research underway to improve the various
aspects of air pollution models. Further, it also casts light on the various
research issues and challenges.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 11:24:53 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Gangwar",
"Amisha",
""
],
[
"Singh",
"Sudhakar",
""
],
[
"Mishra",
"Richa",
""
],
[
"Prakash",
"Shiv",
""
]
] |
new_dataset
| 0.953228 |
2304.09588
|
Yu Guo
|
Yu Guo, Ryan Wen Liu, Jiangtian Nie, Lingjuan Lyu, Zehui Xiong, Jiawen
Kang, Han Yu, Dusit Niyato
|
DADFNet: Dual Attention and Dual Frequency-Guided Dehazing Network for
Video-Empowered Intelligent Transportation
|
This paper is accepted by AAAI 2022 Workshop: AI for Transportation
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual surveillance technology is an indispensable functional component of
advanced traffic management systems. It has been applied to perform traffic
supervision tasks, such as object detection, tracking and recognition. However,
adverse weather conditions, e.g., fog, haze and mist, pose severe challenges
for video-based transportation surveillance. To eliminate the influences of
adverse weather conditions, we propose a dual attention and dual
frequency-guided dehazing network (termed DADFNet) for real-time visibility
enhancement. It consists of a dual attention module (DAM) and a high-low
frequency-guided sub-net (HLFN) to jointly consider the attention and frequency
mapping to guide haze-free scene reconstruction. Extensive experiments on both
synthetic and real-world images demonstrate the superiority of DADFNet over
state-of-the-art methods in terms of visibility enhancement and improvement in
detection accuracy. Furthermore, DADFNet only takes $6.3$ ms to process a 1,920
* 1,080 image on the 2080 Ti GPU, making it highly efficient for deployment in
intelligent transportation systems.
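
As an illustration of the high/low frequency guidance idea above, one simple
decomposition is a depthwise Gaussian blur, sketched below; this is an assumed
realization for intuition, not the paper's actual HLFN decomposition.

import torch
import torch.nn.functional as F

def split_frequencies(img, k=9, sigma=3.0):
    # Depthwise Gaussian blur gives the low-frequency component; the
    # residual carries high-frequency detail (img: [B, C, H, W]).
    x = torch.arange(k, dtype=torch.float32) - k // 2
    g = torch.exp(-x ** 2 / (2.0 * sigma ** 2))
    g2d = g[:, None] * g[None, :]
    g2d = (g2d / g2d.sum()).repeat(img.size(1), 1, 1, 1)
    low = F.conv2d(img, g2d, padding=k // 2, groups=img.size(1))
    return low, img - low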
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 11:55:30 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Guo",
"Yu",
""
],
[
"Liu",
"Ryan Wen",
""
],
[
"Nie",
"Jiangtian",
""
],
[
"Lyu",
"Lingjuan",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Yu",
"Han",
""
],
[
"Niyato",
"Dusit",
""
]
] |
new_dataset
| 0.999192 |
2304.09609
|
Wendong Zhang
|
Wendong Zhang
|
MMDR: A Result Feature Fusion Object Detection Approach for Autonomous
System
|
9 pages, 12 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection has been extensively utilized in autonomous systems in
recent years, encompassing both 2D and 3D object detection. Recent research in
this field has primarily centered around multimodal approaches for addressing
this issue. In this paper, a multimodal fusion approach based on result
feature-level fusion is proposed. This method utilizes the outcome features
generated from single modality sources, and fuses them for downstream
tasks. Based on this method, a new post-fusing network is proposed for
multimodal object detection, which leverages the single modality outcomes as
features. The proposed approach, called Multi-Modal Detector based on Result
features (MMDR), is designed to work for both 2D and 3D object detection tasks.
Compared to previous multimodal models, the proposed approach in this paper
performs feature fusion at a later stage, enabling better representation of the
deep-level features of single modality sources. Additionally, the MMDR model
incorporates shallow global features during the feature fusion stage, endowing
the model with the ability to perceive background information and the overall
input, thereby avoiding issues such as missed detections.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 12:28:42 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Zhang",
"Wendong",
""
]
] |
new_dataset
| 0.983558 |
2304.09639
|
Alessandro Ronca
|
Alessandro Ronca
|
The Krohn-Rhodes Logics
| null | null | null | null |
cs.LO cs.AI cs.FL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present a new family of modal temporal logics of the past, obtained by
extending Past LTL with a rich set of temporal operators based on the theory by
Krohn and Rhodes for automata cascades. The theory says that every automaton
can be expressed as a cascade of some basic automata called prime automata.
They are the building blocks of all automata, analogously to prime numbers
being the building blocks of all natural numbers. We show that Past LTL
corresponds to cascades of one kind of prime automata called flip-flops. In
particular, the temporal operators of Past LTL are captured by flip-flops, and
they cannot capture any other prime automaton, confining the expressivity
within the star-free regular languages. We propose novel temporal operators
that can capture other prime automata, and hence extend the expressivity of
Past LTL. There are infinitely many such operators, and they yield an infinite number
of logics capturing an infinite number of distinct fragments of the regular
languages. The result is a yet unexplored landscape of extensions of Past LTL,
that we call Krohn-Rhodes Logics, each of them with the potential of matching
the expressivity required by specific applications.
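
As a small illustrative sketch of the flip-flop connection (not taken from
the paper): the Past LTL operator "since" can be evaluated over a finite trace
by a single two-state flip-flop, where the right argument sets the state and a
violation of the left argument resets it.

def since(trace, phi, psi):
    # Evaluate "phi Since psi" at the end of a finite trace with a
    # flip-flop: psi sets the state, a violation of phi resets it, and
    # otherwise the state is preserved (the flip-flop's identity input).
    state = False
    for step in trace:
        if psi(step):
            state = True
        elif not phi(step):
            state = False
    return state

# Example: "request Since grant" over a toy trace of dicts.
trace = [{'grant': True, 'request': True},
         {'grant': False, 'request': True}]
print(since(trace, lambda s: s['request'], lambda s: s['grant']))  # True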
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 13:24:04 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Ronca",
"Alessandro",
""
]
] |
new_dataset
| 0.996893 |
2304.09653
|
Sitong Wang
|
Sitong Wang, Samia Menon, Tao Long, Keren Henderson, Dingzeyu Li,
Kevin Crowston, Mark Hansen, Jeffrey V. Nickerson, Lydia B. Chilton
|
ReelFramer: Co-creating News Reels on Social Media with Generative AI
| null | null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Short videos on social media are a prime way many young people find and
consume content. News outlets would like to reach audiences through news reels,
but currently struggle to translate traditional journalistic formats into the
short, entertaining videos that match the style of the platform. There are many
ways to frame a reel-style narrative around a news story, and selecting one is
a challenge. Different news stories call for different framings, and require a
different trade-off between entertainment and information. We present a system
called ReelFramer that uses text and image generation to help journalists
explore multiple narrative framings for a story, then generate scripts,
character boards and storyboards they can edit and iterate on. A user study of
five graduate students in journalism-related fields found the system greatly
eased the burden of transforming a written story into a reel, and that
exploring framings to find the right one was a rewarding process.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 13:44:35 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Wang",
"Sitong",
""
],
[
"Menon",
"Samia",
""
],
[
"Long",
"Tao",
""
],
[
"Henderson",
"Keren",
""
],
[
"Li",
"Dingzeyu",
""
],
[
"Crowston",
"Kevin",
""
],
[
"Hansen",
"Mark",
""
],
[
"Nickerson",
"Jeffrey V.",
""
],
[
"Chilton",
"Lydia B.",
""
]
] |
new_dataset
| 0.999675 |
2304.09660
|
Liang Zhang
|
Liang Zhang, Anwen Hu, Jing Zhang, Shuo Hu, Qin Jin
|
MPMQA: Multimodal Question Answering on Product Manuals
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual contents, such as illustrations and images, play a big role in product
manual understanding. Existing Product Manual Question Answering (PMQA)
datasets tend to ignore visual contents and only retain textual parts. In this
work, to emphasize the importance of multimodal contents, we propose a
Multimodal Product Manual Question Answering (MPMQA) task. For each question,
MPMQA requires the model not only to process multimodal contents but also to
provide multimodal answers. To support MPMQA, a large-scale dataset PM209 is
constructed with human annotations, which contains 209 product manuals from 27
well-known consumer electronic brands. Human annotations include 6 types of
semantic regions for manual contents and 22,021 pairs of question and answer.
Especially, each answer consists of a textual sentence and related visual
regions from manuals. Taking into account the length of product manuals and the
fact that a question is always related to a small number of pages, MPMQA can be
naturally split into two subtasks: retrieving most related pages and then
generating multimodal answers. We further propose a unified model that can
perform these two subtasks all together and achieve comparable performance with
multiple task-specific models. The PM209 dataset is available at
https://github.com/AIM3-RUC/MPMQA.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 13:48:14 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Zhang",
"Liang",
""
],
[
"Hu",
"Anwen",
""
],
[
"Zhang",
"Jing",
""
],
[
"Hu",
"Shuo",
""
],
[
"Jin",
"Qin",
""
]
] |
new_dataset
| 0.999023 |
2304.09787
|
Seung Wook Kim
|
Seung Wook Kim, Bradley Brown, Kangxue Yin, Karsten Kreis, Katja
Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, Sanja Fidler
|
NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion
Models
|
CVPR 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatically generating high-quality real world 3D scenes is of enormous
interest for applications such as virtual reality and robotics simulation.
Towards this goal, we introduce NeuralField-LDM, a generative model capable of
synthesizing complex 3D environments. We leverage Latent Diffusion Models that
have been successfully utilized for efficient high-quality 2D content creation.
We first train a scene auto-encoder to express a set of image and pose pairs as
a neural field, represented as density and feature voxel grids that can be
projected to produce novel views of the scene. To further compress this
representation, we train a latent-autoencoder that maps the voxel grids to a
set of latent representations. A hierarchical diffusion model is then fit to
the latents to complete the scene generation pipeline. We achieve a substantial
improvement over existing state-of-the-art scene generation models.
Additionally, we show how NeuralField-LDM can be used for a variety of 3D
content creation applications, including conditional scene generation, scene
inpainting and scene style manipulation.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 16:13:21 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Kim",
"Seung Wook",
""
],
[
"Brown",
"Bradley",
""
],
[
"Yin",
"Kangxue",
""
],
[
"Kreis",
"Karsten",
""
],
[
"Schwarz",
"Katja",
""
],
[
"Li",
"Daiqing",
""
],
[
"Rombach",
"Robin",
""
],
[
"Torralba",
"Antonio",
""
],
[
"Fidler",
"Sanja",
""
]
] |
new_dataset
| 0.993329 |
2304.09831
|
Kyle Stachowicz
|
Kyle Stachowicz, Dhruv Shah, Arjun Bhorkar, Ilya Kostrikov, Sergey
Levine
|
FastRLAP: A System for Learning High-Speed Driving via Deep RL and
Autonomous Practicing
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a system that enables an autonomous small-scale RC car to drive
aggressively from visual observations using reinforcement learning (RL). Our
system, FastRLAP (faster lap), trains autonomously in the real world, without
human interventions, and without requiring any simulation or expert
demonstrations. Our system integrates a number of important components to make
this possible: we initialize the representations for the RL policy and value
function from a large prior dataset of other robots navigating in other
environments (at low speed), which provides a navigation-relevant
representation. From here, a sample-efficient online RL method uses a single
low-speed user-provided demonstration to determine the desired driving course,
extracts a set of navigational checkpoints, and autonomously practices driving
through these checkpoints, resetting automatically on collision or failure.
Perhaps surprisingly, we find that with appropriate initialization and choice
of algorithm, our system can learn to drive over a variety of racing courses
with less than 20 minutes of online training. The resulting policies exhibit
emergent aggressive driving skills, such as timing braking and acceleration
around turns and avoiding areas which impede the robot's motion, approaching
the performance of a human driver using a similar first-person interface over
the course of training.
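
One simple reading of the checkpoint-extraction step is an equal-arc-length
resampling of the demonstrated lap, sketched below; the exact procedure is an
assumption, and the learned RL policy and value components are not shown.

import numpy as np

def extract_checkpoints(demo_xy, num_checkpoints=10):
    # demo_xy: [T, 2] positions from the single low-speed demonstration.
    pos = np.asarray(demo_xy, dtype=float)
    seg = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    targets = np.linspace(0.0, s[-1], num_checkpoints, endpoint=False)
    idx = np.searchsorted(s, targets)
    return pos[idx]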
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 17:33:47 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Stachowicz",
"Kyle",
""
],
[
"Shah",
"Dhruv",
""
],
[
"Bhorkar",
"Arjun",
""
],
[
"Kostrikov",
"Ilya",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.99978 |
2304.09856
|
Xianbiao Qi
|
Xianbiao Qi, Jianan Wang, Yihao Chen, Yukai Shi, Lei Zhang
|
LipsFormer: Introducing Lipschitz Continuity to Vision Transformers
|
To appear in ICLR 2023, our code will be public at
https://github.com/IDEA-Research/LipsFormer
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a Lipschitz continuous Transformer, called LipsFormer, to pursue
training stability both theoretically and empirically for Transformer-based
models. In contrast to previous practical tricks that address training
instability by learning rate warmup, layer normalization, attention
formulation, and weight initialization, we show that Lipschitz continuity is a
more essential property to ensure training stability. In LipsFormer, we replace
unstable Transformer component modules with Lipschitz continuous counterparts:
CenterNorm instead of LayerNorm, spectral initialization instead of Xavier
initialization, scaled cosine similarity attention instead of dot-product
attention, and weighted residual shortcut. We prove that these introduced
modules are Lipschitz continuous and derive an upper bound on the Lipschitz
constant of LipsFormer. Our experiments show that LipsFormer allows stable
training of deep Transformer architectures without the need of careful learning
rate tuning such as warmup, yielding a faster convergence and better
generalization. As a result, on the ImageNet 1K dataset, LipsFormer-Swin-Tiny
based on Swin Transformer training for 300 epochs can obtain 82.7\% without any
learning rate warmup. Moreover, LipsFormer-CSwin-Tiny, based on CSwin, training
for 300 epochs achieves a top-1 accuracy of 83.5\% with 4.7G FLOPs and 24M
parameters. The code will be released at
\url{https://github.com/IDEA-Research/LipsFormer}.
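
For intuition, here are minimal sketches of two of the replacements named
above; the simplified forms below omit the paper's exact scaling constants and
learnable parameters.

import torch.nn.functional as F

def center_norm(x):
    # Mean-centering without dividing by the (unbounded-gradient)
    # standard deviation -- the core of the CenterNorm idea.
    return x - x.mean(dim=-1, keepdim=True)

def cosine_attention(q, k, v, tau=10.0):
    # Cosine similarities are bounded in [-1, 1], which keeps the
    # attention map Lipschitz, unlike raw dot products.
    qn = F.normalize(q, dim=-1)
    kn = F.normalize(k, dim=-1)
    attn = F.softmax(tau * (qn @ kn.transpose(-2, -1)), dim=-1)
    return attn @ v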
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 17:59:39 GMT"
}
] | 2023-04-20T00:00:00 |
[
[
"Qi",
"Xianbiao",
""
],
[
"Wang",
"Jianan",
""
],
[
"Chen",
"Yihao",
""
],
[
"Shi",
"Yukai",
""
],
[
"Zhang",
"Lei",
""
]
] |
new_dataset
| 0.956404 |
2010.00600
|
Emilio Ferrara
|
Emily Chen, Ashok Deb, Emilio Ferrara
|
#Election2020: The First Public Twitter Dataset on the 2020 US
Presidential Election
|
Our dataset is available at:
https://github.com/echen102/us-pres-elections-2020
| null |
10.1007/s42001-021-00117-9
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The integrity of democratic political discourse is at the core of guaranteeing
free and fair elections. With social media often dictating the tones and trends
of politics-related discussion, it is of paramount importance to be able to
study online chatter, especially in the run-up to important voting events, like
in the case of the upcoming November 3, 2020 U.S. Presidential Election.
Limited access to social media data is often the first barrier to impede,
hinder, or slow down progress, and ultimately our understanding of online
political discourse. To mitigate this issue and try to empower the
Computational Social Science research community, we decided to publicly release
a massive-scale, longitudinal dataset of U.S. politics- and election-related
tweets. This multilingual dataset that we have been collecting for over one
year encompasses hundreds of millions of tweets and tracks all salient U.S.
politics trends, actors, and events between 2019 and 2020. It predates and
spans the whole period of Republican and Democratic primaries, with real-time
tracking of all presidential contenders on both sides of the aisle. After that,
it focuses on presidential and vice-presidential candidates. Our dataset
release is curated, documented and will be constantly updated on a
weekly basis, until the November 3, 2020 election and beyond. We hope that the
academic community, computational journalists, and research practitioners alike
will all take advantage of our dataset to study relevant scientific and social
issues, including problems like misinformation, information manipulation,
interference, and distortion of online political discourse that have been
prevalent in the context of recent election events in the United States and
worldwide.
Our dataset is available at:
https://github.com/echen102/us-pres-elections-2020
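
For researchers consuming the release: Twitter's terms generally require
distributing tweet IDs rather than full tweets, so a typical workflow is to
collect the IDs and re-hydrate them. The sketch below assumes one ID per line
in the repository's text files; verify the layout against the repository's
README.

from pathlib import Path

def load_tweet_ids(repo_dir):
    # Gather all tweet IDs from the cloned dataset repository.
    ids = []
    for txt in sorted(Path(repo_dir).rglob('*.txt')):
        with txt.open() as f:
            ids.extend(line.strip() for line in f if line.strip())
    return ids

# The collected IDs can then be re-hydrated into full tweets with a tool
# such as twarc, subject to Twitter/X API access.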
|
[
{
"version": "v1",
"created": "Thu, 1 Oct 2020 18:00:03 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Chen",
"Emily",
""
],
[
"Deb",
"Ashok",
""
],
[
"Ferrara",
"Emilio",
""
]
] |
new_dataset
| 0.999833 |
2103.01403
|
Qing Li
|
Qing Li, Siyuan Huang, Yining Hong, Yixin Zhu, Ying Nian Wu, Song-Chun
Zhu
|
A Minimalist Dataset for Systematic Generalization of Perception,
Syntax, and Semantics
|
ICLR 2023. website: https://liqing-ustc.github.io/HINT
| null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Inspired by humans' exceptional ability to master arithmetic and generalize
to new problems, we present a new dataset, Handwritten arithmetic with INTegers
(HINT), to examine machines' capability of learning generalizable concepts at
three levels: perception, syntax, and semantics. In HINT, machines are tasked
with learning how concepts are perceived from raw signals such as images (i.e.,
perception), how multiple concepts are structurally combined to form a valid
expression (i.e., syntax), and how concepts are realized to afford various
reasoning tasks (i.e., semantics), all in a weakly supervised manner. Focusing
on systematic generalization, we carefully design a five-fold test set to
evaluate both the interpolation and the extrapolation of learned concepts
w.r.t. the three levels. Further, we design a few-shot learning split to
determine whether or not models can rapidly learn new concepts and generalize
them to more complex scenarios. To comprehend existing models' limitations, we
undertake extensive experiments with various sequence-to-sequence models,
including RNNs, Transformers, and GPT-3 (with the chain of thought prompting).
The results indicate that current models struggle to extrapolate to long-range
syntactic dependency and semantics. Models exhibit a considerable gap toward
human-level generalization when evaluated with new concepts in a few-shot
setting. Moreover, we discover that it is infeasible to solve HINT by merely
scaling up the dataset and the model size; this strategy contributes little to
the extrapolation of syntax and semantics. Finally, in zero-shot GPT-3
experiments, the chain of thought prompting exhibits impressive results and
significantly boosts the test accuracy. We believe the HINT dataset and the
experimental findings are of great interest to the learning community on
systematic generalization.
|
[
{
"version": "v1",
"created": "Tue, 2 Mar 2021 01:32:54 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 02:16:59 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Apr 2023 07:54:24 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Li",
"Qing",
""
],
[
"Huang",
"Siyuan",
""
],
[
"Hong",
"Yining",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Wu",
"Ying Nian",
""
],
[
"Zhu",
"Song-Chun",
""
]
] |
new_dataset
| 0.999857 |
2203.07908
|
Josip Šarić
|
Josip Šarić, Marin Oršić, Siniša Šegvić
|
Panoptic SwiftNet: Pyramidal Fusion for Real-time Panoptic Segmentation
|
Code available at: https://github.com/jsaric/panoptic-swiftnet
|
Remote Sensing. 2023, 15(8), 1968;
|
10.3390/rs15081968
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dense panoptic prediction is a key ingredient in many existing applications
such as autonomous driving, automated warehouses or remote sensing. Many of
these applications require fast inference over large input resolutions on
affordable or even embedded hardware. We propose to achieve this goal by
trading off backbone capacity for multi-scale feature extraction. In comparison
with contemporaneous approaches to panoptic segmentation, the main novelties of
our method are efficient scale-equivariant feature extraction, cross-scale
upsampling through pyramidal fusion and boundary-aware learning of
pixel-to-instance assignment. The proposed method is very well suited for
remote sensing imagery due to the huge number of pixels in typical city-wide
and region-wide datasets. We present panoptic experiments on Cityscapes,
Vistas, COCO and the BSB-Aerial dataset. Our models outperform the state of the
art on the BSB-Aerial dataset while being able to process more than a hundred
1MPx images per second on a RTX3090 GPU with FP16 precision and TensorRT
optimization.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 13:47:40 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 14:46:07 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Šarić",
"Josip",
""
],
[
"Oršić",
"Marin",
""
],
[
"Šegvić",
"Siniša",
""
]
] |
new_dataset
| 0.985067 |
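The cross-scale upsampling through pyramidal fusion described above can be sketched as a shared backbone applied to an image pyramid, with coarser feature maps upsampled and summed into finer ones. The layer sizes and channel counts below are illustrative placeholders, not the Panoptic SwiftNet architecture.

```python
# Hedged sketch of pyramidal fusion: run a shared backbone on an image
# pyramid and fuse coarse features into finer ones by upsampling + summation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidalFusion(nn.Module):
    def __init__(self, channels: int = 128, num_levels: int = 3):
        super().__init__()
        # Shared lightweight backbone applied at every pyramid level.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.num_levels = num_levels

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Build the image pyramid by repeated 2x downsampling.
        pyramid = [image]
        for _ in range(self.num_levels - 1):
            pyramid.append(F.avg_pool2d(pyramid[-1], kernel_size=2))
        feats = [self.backbone(x) for x in pyramid]
        # Fuse coarse-to-fine: upsample the coarser map and add it in.
        fused = feats[-1]
        for f in reversed(feats[:-1]):
            fused = f + F.interpolate(fused, size=f.shape[-2:],
                                      mode="bilinear", align_corners=False)
        return fused  # feature map at the finest backbone resolution

x = torch.randn(1, 3, 256, 256)
print(PyramidalFusion()(x).shape)  # torch.Size([1, 128, 64, 64])
```

Sharing one backbone across pyramid levels is what trades backbone capacity for multi-scale coverage: parameters stay constant while the effective receptive field grows with each coarser level.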
2204.06776
|
Haolong Li
|
Haolong Li and Joerg Stueckler
|
Visual-Inertial Odometry with Online Calibration of Velocity-Control
Based Kinematic Motion Models
|
Accepted by IEEE Robotics and Automation Letters (RA-L) 2022
| null |
10.1109/LRA.2022.3169837
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual-inertial odometry (VIO) is an important technology for autonomous
robots with power and payload constraints. In this paper, we propose a novel
approach for VIO with stereo cameras which integrates and calibrates the
velocity-control based kinematic motion model of wheeled mobile robots online.
Including such a motion model can help to improve the accuracy of VIO. Unlike
several previous approaches that integrate wheel odometer measurements for
this purpose, our method does not require wheel encoders and can be applied
whenever the robot motion can be modeled with a velocity-control based
kinematic motion model. We use radial basis function (RBF) kernels to
compensate for the time delay and deviations between control commands and
actual robot motion. The motion model is calibrated online by the VIO system
and can be used as a forward model for motion control and planning. We evaluate
our approach with data obtained in variously sized indoor environments,
demonstrate improvements over a pure VIO method, and evaluate the prediction
accuracy of the online calibrated model.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 06:21:12 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2022 15:50:59 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Apr 2023 09:45:42 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Li",
"Haolong",
""
],
[
"Stueckler",
"Joerg",
""
]
] |
new_dataset
| 0.958961 |
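The RBF-based delay compensation described above can be illustrated by modeling the effective body velocity as an RBF-weighted blend of past velocity commands. The kernel centers, widths, and weights below are placeholders standing in for parameters the VIO backend would calibrate online; this is a sketch of the general idea, not the paper's estimator.

```python
# Hedged sketch of RBF delay compensation: the effective velocity is a
# weighted combination of past commands, with weights given by radial basis
# functions over the time lag. All parameters here are illustrative.
import numpy as np

def rbf(lag: np.ndarray, center: float, width: float) -> np.ndarray:
    return np.exp(-((lag - center) ** 2) / (2.0 * width ** 2))

def predicted_velocity(t, cmd_times, cmd_vels, centers, widths, weights):
    """Blend past commands according to RBF weights over the lag t - t_cmd."""
    lags = t - cmd_times
    phi = np.stack([w * rbf(lags, c, s)
                    for w, c, s in zip(weights, centers, widths)]).sum(axis=0)
    phi = np.where(lags >= 0.0, phi, 0.0)        # causal: ignore future commands
    denom = phi.sum()
    return float(phi @ cmd_vels / denom) if denom > 1e-9 else 0.0

# Commands at 10 Hz; model a ~0.15 s actuation delay with two basis functions.
cmd_times = np.arange(0.0, 1.0, 0.1)
cmd_vels = np.where(cmd_times < 0.5, 0.0, 0.5)   # step from 0 to 0.5 m/s
centers = np.array([0.1, 0.2])
widths = np.array([0.05, 0.05])
weights = np.array([0.6, 0.4])
print(predicted_velocity(0.55, cmd_times, cmd_vels, centers, widths, weights))
```

In the paper's setting the weights would be optimized jointly with the VIO state, so the same calibrated model can serve as a forward model for planning.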
2207.01249
|
Bohan Yang
|
Bohan Yang, Bo Lu, Wei Chen, Fangxun Zhong, and Yun-Hui Liu
|
Model-Free 3D Shape Control of Deformable Objects Using Novel Features
Based on Modal Analysis
|
Accepted by and to appear in the IEEE Transactions on Robotics. IEEE
copyright
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shape control of deformable objects is a challenging and important robotic
problem. This paper proposes a model-free controller using novel 3D global
deformation features based on modal analysis. Unlike most existing controllers
using geometric features, our controller employs a physically-based deformation
feature by decoupling 3D global deformation into low-frequency mode shapes.
Although modal analysis is widely adopted in computer vision and simulation, it
has not been used in robotic deformation control. We develop a new model-free
framework for modal-based deformation control under robot manipulation.
Physical interpretation of mode shapes enables us to formulate an analytical
deformation Jacobian matrix mapping the robot manipulation onto changes of the
modal features. In the Jacobian matrix, unknown geometry and physical
properties of the object are treated as low-dimensional modal parameters which
can be used to linearly parameterize the closed-loop system. Thus, an adaptive
controller with proven stability can be designed to deform the object while
online estimating the modal parameters. Simulations and experiments are
conducted using linear, planar, and solid objects under different settings. The
results not only confirm the superior performance of our controller but also
demonstrate its advantages over the baseline method.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2022 08:15:10 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Apr 2023 08:23:02 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Yang",
"Bohan",
""
],
[
"Lu",
"Bo",
""
],
[
"Chen",
"Wei",
""
],
[
"Zhong",
"Fangxun",
""
],
[
"Liu",
"Yun-Hui",
""
]
] |
new_dataset
| 0.995755 |
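The modal decoupling step described above can be sketched generically: build a graph Laplacian over the object's tracked points, take its low-frequency eigenvectors as mode shapes, and project observed displacements onto them to obtain low-dimensional modal features. The neighborhood radius and number of modes below are illustrative, and the paper's deformation Jacobian and adaptive control law are not reproduced here.

```python
# Hedged sketch of modal feature extraction: low-frequency eigenvectors of a
# graph Laplacian act as mode shapes; projecting the displacement field onto
# them yields a low-dimensional deformation feature.
import numpy as np

def modal_features(rest: np.ndarray, deformed: np.ndarray, k: int = 4,
                   radius: float = 0.3) -> np.ndarray:
    """rest, deformed: (N, 3) point sets in correspondence; returns (k, 3)."""
    n = rest.shape[0]
    # Adjacency from pairwise distances in the rest configuration.
    d = np.linalg.norm(rest[:, None, :] - rest[None, :, :], axis=-1)
    w = (d < radius).astype(float) - np.eye(n)
    lap = np.diag(w.sum(axis=1)) - w              # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(lap)        # ascending eigenvalues
    modes = eigvecs[:, 1:k + 1]                   # skip the constant mode
    return modes.T @ (deformed - rest)            # project displacement field

rng = np.random.default_rng(0)
rest = rng.uniform(0, 1, size=(50, 3))
deformed = rest + 0.05 * np.sin(2 * np.pi * rest[:, :1])  # smooth bend
print(modal_features(rest, deformed).shape)       # (4, 3)
```

Because smooth, large-scale deformations concentrate in the first few modes, a controller acting on these features need only estimate a handful of modal parameters rather than a full physical model.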
2208.08636
|
Haoran Xie
|
Yichen Peng, Chunqi Zhao, Haoran Xie, Tsukasa Fukusato, Kazunori
Miyata, Takeo Igarashi
|
DualMotion: Global-to-Local Casual Motion Design for Character
Animations
|
10 pages, 10 figures, under submission; video:
https://youtu.be/-tk8q8LSiL0
| null |
10.1587/transinf.2022IIP0011
| null |
cs.GR cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Animating 3D characters using motion capture data requires basic expertise
and manual labor. To support creativity in animation design and make it
easier for casual users, we present DualMotion, a sketch-based interface that
takes rough sketches as input for designing daily-life character animations
such as walking and jumping. Our approach combines global motions of the
lower limbs with local motions of the upper limbs from a database through a
two-stage design strategy. Users design a motion by first drawing a rough
trajectory of a body/lower-limb movement in the global design stage; the
upper-limb motions are then designed by drawing several additional relative
motion trajectories in the local design stage. A user study verifies the
effectiveness and convenience of the proposed system in creative activities.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 05:11:11 GMT"
}
] | 2023-04-19T00:00:00 |
[
[
"Peng",
"Yichen",
""
],
[
"Zhao",
"Chunqi",
""
],
[
"Xie",
"Haoran",
""
],
[
"Fukusato",
"Tsukasa",
""
],
[
"Miyata",
"Kazunori",
""
],
[
"Igarashi",
"Takeo",
""
]
] |
new_dataset
| 0.999059 |
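One plausible building block for such a sketch-based interface is retrieving the database motion whose trajectory best matches the user's rough stroke; a minimal dynamic-time-warping matcher is sketched below. The toy trajectory database is an assumption for illustration, not DualMotion's actual motion data or matching method.

```python
# Hedged sketch of trajectory retrieval for a sketch-based motion interface:
# compare the user's rough 2D stroke against stored trajectories with dynamic
# time warping and return the closest clip. The database is a toy placeholder.
import numpy as np

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between two polylines of shape (n, 2) and (m, 2)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[n, m])

t = np.linspace(0, 1, 30)[:, None]
database = {
    "walk_straight": np.hstack([t, np.zeros_like(t)]),
    "walk_arc":      np.hstack([t, 0.3 * t ** 2]),
    "jump_forward":  np.hstack([t, 0.4 * np.sin(np.pi * t)]),
}
sketch = np.hstack([t, 0.25 * t ** 2])            # user's rough curve
best = min(database, key=lambda k: dtw(sketch, database[k]))
print(best)                                       # expected: "walk_arc"
```

DTW tolerates the uneven drawing speed of a rough stroke, which is why it is a common choice for matching hand-drawn trajectories against timed motion data.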