id (string, 9-10) | submitter (string, 2-52, nullable) | authors (string, 4-6.51k) | title (string, 4-246) | comments (string, 1-523, nullable) | journal-ref (string, 4-345, nullable) | doi (string, 11-120, nullable) | report-no (string, 2-243, nullable) | categories (string, 5-98) | license (string, 9 classes) | abstract (string, 33-3.33k) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.05518
|
Frederike D\"umbgen
|
Frederike D\"umbgen, Mohammed A. Shalaby, Connor Holmes, Charles C.
Cossette, James R. Forbes, Jerome Le Ny, Timothy D. Barfoot
|
STAR-loc: Dataset for STereo And Range-based localization
|
15 pages, 15 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This document contains a detailed description of the STAR-loc dataset. For a
quick starting guide please refer to the associated Github repository
(https://github.com/utiasASRL/starloc). The dataset consists of stereo camera
data (rectified/raw images and inertial measurement unit measurements) and
ultra-wideband (UWB) data (range measurements) collected on a sensor rig in a
Vicon motion capture arena. The UWB anchors and visual landmarks (Apriltags)
are of known position, so the dataset can be used for both localization and
Simultaneous Localization and Mapping (SLAM).
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 15:01:54 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Dümbgen",
"Frederike",
""
],
[
"Shalaby",
"Mohammed A.",
""
],
[
"Holmes",
"Connor",
""
],
[
"Cossette",
"Charles C.",
""
],
[
"Forbes",
"James R.",
""
],
[
"Ny",
"Jerome Le",
""
],
[
"Barfoot",
"Timothy D.",
""
]
] |
new_dataset
| 0.999848 |
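
The STAR-loc record above notes that the UWB anchors and visual landmarks are at known positions, which is what makes range-based localization possible. As a loose, generic illustration of that idea (not the dataset's own tooling), the sketch below estimates a tag position from anchor ranges with nonlinear least squares; the anchor coordinates and range values are invented placeholders, and numpy/scipy are assumed to be available.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical UWB anchor positions (metres) and range measurements.
# These numbers are illustrative only, not values from the STAR-loc dataset.
anchors = np.array([[0.0, 0.0, 2.0],
                    [5.0, 0.0, 2.0],
                    [5.0, 5.0, 2.0],
                    [0.0, 5.0, 2.0]])
ranges = np.array([3.6, 4.1, 4.4, 3.9])  # one measured range per anchor

def residuals(p):
    """Difference between predicted anchor distances and measured ranges."""
    return np.linalg.norm(anchors - p, axis=1) - ranges

# Solve for the tag position, starting from the centroid of the anchors.
sol = least_squares(residuals, x0=anchors.mean(axis=0))
print("estimated position:", sol.x)
```
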
2309.05534
|
Chengyu Wang
|
Chengyu Wang, Zhongjie Duan, Bingyan Liu, Xinyi Zou, Cen Chen, Kui
Jia, Jun Huang
|
PAI-Diffusion: Constructing and Serving a Family of Open Chinese
Diffusion Models for Text-to-image Synthesis on the Cloud
| null | null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-to-image synthesis for the Chinese language poses unique challenges due
to its large vocabulary size and intricate character relationships. While
existing diffusion models have shown promise in generating images from textual
descriptions, they often neglect domain-specific contexts and lack robustness
in handling the Chinese language. This paper introduces PAI-Diffusion, a
comprehensive framework that addresses these limitations. PAI-Diffusion
incorporates both general and domain-specific Chinese diffusion models,
enabling the generation of contextually relevant images. It explores the
potential of using LoRA and ControlNet for fine-grained image style transfer
and image editing, empowering users with enhanced control over image
generation. Moreover, PAI-Diffusion seamlessly integrates with Alibaba Cloud's
Machine Learning Platform for AI, providing accessible and scalable solutions.
All the Chinese diffusion model checkpoints, LoRAs, and ControlNets, including
domain-specific ones, are publicly available. A user-friendly Chinese WebUI and
the diffusers-api elastic inference toolkit, also open-sourced, further
facilitate the easy deployment of PAI-Diffusion models in various environments,
making it a valuable resource for Chinese text-to-image synthesis.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 15:18:28 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Wang",
"Chengyu",
""
],
[
"Duan",
"Zhongjie",
""
],
[
"Liu",
"Bingyan",
""
],
[
"Zou",
"Xinyi",
""
],
[
"Chen",
"Cen",
""
],
[
"Jia",
"Kui",
""
],
[
"Huang",
"Jun",
""
]
] |
new_dataset
| 0.952893 |
2309.05537
|
Mohamed Chahine Ghanem Dr
|
Mohamed Chahine Ghanem, Patrick Mulvihill, Karim Ouazzane, Ramzi
Djemai, Dipo Dunsin
|
D2WFP: A Novel Protocol for Forensically Identifying, Extracting, and
Analysing Deep and Dark Web Browsing Activities
| null | null | null | null |
cs.CR cs.IR cs.NI cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
The use of the un-indexed web, commonly known as the deep web and dark web,
to commit or facilitate criminal activity has drastically increased over the
past decade. The dark web is an infamously dangerous place where all kinds of
criminal activities take place [1-2]. Despite advances in web forensics
techniques, tools, and methodologies, few studies have formally tackled deep
and dark web forensics and the technical differences in terms of investigative
techniques and artefact identification and extraction. This research proposes
a novel and comprehensive protocol to guide and assist digital forensics
professionals in investigating crimes committed on or via the deep and dark
web. The protocol, named D2WFP, establishes a new sequential approach for
performing investigative activities by observing the order of volatility and
implementing a systemic approach covering all browsing-related hives and
artefacts, which ultimately improves accuracy and effectiveness. Rigorous
quantitative and qualitative research has been conducted by assessing D2WFP
following a scientifically sound and comprehensive process in different
scenarios, and the obtained results show an apparent increase in the number of
artefacts recovered when adopting D2WFP, which outperforms current industry and
open-source browsing forensics tools. The second contribution of D2WFP is the
robust formulation of artefact correlation and cross-validation within D2WFP,
which enables digital forensics professionals to better document and structure
their analysis of host-based deep and dark web browsing artefacts.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 15:19:57 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Ghanem",
"Mohamed Chahine",
""
],
[
"Mulvihill",
"Patrick",
""
],
[
"Ouazzane",
"Karim",
""
],
[
"Djemai",
"Ramzi",
""
],
[
"Dunsin",
"Dipo",
""
]
] |
new_dataset
| 0.999553 |
2309.05542
|
Liam Dugan
|
Andrew Zhu, Liam Dugan, Alyssa Hwang, Chris Callison-Burch
|
Kani: A Lightweight and Highly Hackable Framework for Building Language
Model Applications
|
In submission to NLP-OSS
| null | null | null |
cs.SE cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language model applications are becoming increasingly popular and complex,
often including features like tool usage and retrieval augmentation. However,
existing frameworks for such applications are often opinionated, deciding for
developers how their prompts ought to be formatted and imposing limitations on
customizability and reproducibility. To solve this we present Kani: a
lightweight, flexible, and model-agnostic open-source framework for building
language model applications. Kani helps developers implement a variety of
complex features by supporting the core building blocks of chat interaction:
model interfacing, chat management, and robust function calling. All Kani core
functions are easily overridable and well documented to empower developers to
customize functionality for their own needs. Kani thus serves as a useful tool
for researchers, hobbyists, and industry professionals alike to accelerate
their development while retaining interoperability and fine-grained control.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 15:27:59 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Zhu",
"Andrew",
""
],
[
"Dugan",
"Liam",
""
],
[
"Hwang",
"Alyssa",
""
],
[
"Callison-Burch",
"Chris",
""
]
] |
new_dataset
| 0.997268 |
2309.05551
|
Giuseppe Cartella
|
Giuseppe Cartella, Alberto Baldrati, Davide Morelli, Marcella Cornia,
Marco Bertini, Rita Cucchiara
|
OpenFashionCLIP: Vision-and-Language Contrastive Learning with
Open-Source Fashion Data
|
International Conference on Image Analysis and Processing (ICIAP)
2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The inexorable growth of online shopping and e-commerce demands scalable and
robust machine learning-based solutions to accommodate customer requirements.
In the context of automatic tagging classification and multimodal retrieval,
prior works either defined supervised learning approaches with limited
generalization capability or adopted more reusable CLIP-based techniques that
are, however, trained on closed-source data. In this work, we propose
OpenFashionCLIP, a vision-and-language
contrastive learning method that only adopts open-source fashion data stemming
from diverse domains, and characterized by varying degrees of specificity. Our
approach is extensively validated across several tasks and benchmarks, and
experimental results highlight a significant out-of-domain generalization
capability and consistent improvements over state-of-the-art methods both in
terms of accuracy and recall. Source code and trained models are publicly
available at: https://github.com/aimagelab/open-fashion-clip.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 15:36:03 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Cartella",
"Giuseppe",
""
],
[
"Baldrati",
"Alberto",
""
],
[
"Morelli",
"Davide",
""
],
[
"Cornia",
"Marcella",
""
],
[
"Bertini",
"Marco",
""
],
[
"Cucchiara",
"Rita",
""
]
] |
new_dataset
| 0.991176 |
2309.05573
|
Youquan Liu
|
Youquan Liu, Runnan Chen, Xin Li, Lingdong Kong, Yuchen Yang, Zhaoyang
Xia, Yeqi Bai, Xinge Zhu, Yuexin Ma, Yikang Li, Yu Qiao, Yuenan Hou
|
UniSeg: A Unified Multi-Modal LiDAR Segmentation Network and the
OpenPCSeg Codebase
|
ICCV 2023; 21 pages; 9 figures; 18 tables; Code at
https://github.com/PJLab-ADG/PCSeg
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point-, voxel-, and range-views are three representative forms of point
clouds. All of them have accurate 3D measurements but lack color and texture
information. RGB images are a natural complement to these point cloud views,
and fully utilizing their comprehensive information enables more robust
perception. In this paper, we present a unified multi-modal LiDAR segmentation
network, termed UniSeg, which leverages the information of RGB images and three
views of the point cloud, and accomplishes semantic segmentation and panoptic
segmentation simultaneously. Specifically, we first design the Learnable
cross-Modal Association (LMA) module to automatically fuse voxel-view and
range-view features with image features, which fully utilize the rich semantic
information of images and are robust to calibration errors. Then, the enhanced
voxel-view and range-view features are transformed to the point space, where
three views of point cloud features are further fused adaptively by the
Learnable cross-View Association module (LVA). Notably, UniSeg achieves
promising results in three public benchmarks, i.e., SemanticKITTI, nuScenes,
and Waymo Open Dataset (WOD); it ranks 1st on two challenges of two benchmarks,
including the LiDAR semantic segmentation challenge of nuScenes and panoptic
segmentation challenges of SemanticKITTI. Besides, we construct the OpenPCSeg
codebase, which is the largest and most comprehensive outdoor LiDAR
segmentation codebase. It contains most of the popular outdoor LiDAR
segmentation algorithms and provides reproducible implementations. The
OpenPCSeg codebase will be made publicly available at
https://github.com/PJLab-ADG/PCSeg.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 16:00:22 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Liu",
"Youquan",
""
],
[
"Chen",
"Runnan",
""
],
[
"Li",
"Xin",
""
],
[
"Kong",
"Lingdong",
""
],
[
"Yang",
"Yuchen",
""
],
[
"Xia",
"Zhaoyang",
""
],
[
"Bai",
"Yeqi",
""
],
[
"Zhu",
"Xinge",
""
],
[
"Ma",
"Yuexin",
""
],
[
"Li",
"Yikang",
""
],
[
"Qiao",
"Yu",
""
],
[
"Hou",
"Yuenan",
""
]
] |
new_dataset
| 0.994012 |
2309.05645
|
William Beksi
|
Jordan A. James, Heather K. Manching, Matthew R. Mattia, Kim D.
Bowman, Amanda M. Hulse-Kemp, William J. Beksi
|
CitDet: A Benchmark Dataset for Citrus Fruit Detection
|
Submitted to IEEE Robotics and Automation Letters (RA-L)
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this letter, we present a new dataset to advance the state of the art in
detecting citrus fruit and accurately estimating yield on trees affected by the
Huanglongbing (HLB) disease in orchard environments via imaging. Despite the
fact that significant progress has been made in solving the fruit detection
problem, the lack of publicly available datasets has complicated direct
comparison of results. For instance, citrus detection has long been of interest
in the agricultural research community, yet there is an absence of work,
particularly involving public datasets of citrus affected by HLB. To address
this issue, we enhance state-of-the-art object detection methods for use in
typical orchard settings. Concretely, we provide high-resolution images of
citrus trees located in an area known to be highly affected by HLB, along with
high-quality bounding box annotations of citrus fruit. Fruit on both the trees
and the ground are labeled to allow for identification of fruit location, which
contributes to advancements in yield estimation and potential measure of HLB
impact via fruit drop. The dataset consists of over 32,000 bounding box
annotations for fruit instances contained in 579 high-resolution images. In
summary, our contributions are the following: (i) we introduce a novel dataset
along with baseline performance benchmarks on multiple contemporary object
detection algorithms, (ii) we show the ability to accurately capture fruit
location on the tree or on the ground, and finally (iii) we present a correlation of our
results with yield estimations.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 17:37:08 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"James",
"Jordan A.",
""
],
[
"Manching",
"Heather K.",
""
],
[
"Mattia",
"Matthew R.",
""
],
[
"Bowman",
"Kim D.",
""
],
[
"Hulse-Kemp",
"Amanda M.",
""
],
[
"Beksi",
"William J.",
""
]
] |
new_dataset
| 0.999862 |
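
Benchmarks like the CitDet entry above are typically scored with bounding-box overlap metrics. Purely as background, here is a minimal sketch of the standard intersection-over-union computation such evaluations build on; it is generic and not CitDet's official evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes are disjoint).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Example: a predicted fruit box versus a ground-truth annotation.
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # about 0.143
```
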
2309.05655
|
Binghao Huang
|
Binghao Huang, Yuanpei Chen, Tianyu Wang, Yuzhe Qin, Yaodong Yang,
Nikolay Atanasov, Xiaolong Wang
|
Dynamic Handover: Throw and Catch with Bimanual Hands
|
Accepted at CoRL 2023.
https://binghao-huang.github.io/dynamic_handover/
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans throw and catch objects all the time. However, such a seemingly common
skill introduces a lot of challenges for robots to achieve: The robots need to
operate such dynamic actions at high-speed, collaborate precisely, and interact
with diverse objects. In this paper, we design a system with two multi-finger
hands attached to robot arms to solve this problem. We train our system using
Multi-Agent Reinforcement Learning in simulation and perform Sim2Real transfer
to deploy on the real robots. To overcome the Sim2Real gap, we provide multiple
novel algorithm designs including learning a trajectory prediction model for
the object. Such a model gives the robot catcher a real-time estimate of where
the object is heading so that it can react accordingly. We conduct our
experiments with multiple objects in the real-world system, and show
significant improvements over multiple baselines. Our project page is available
at \url{https://binghao-huang.github.io/dynamic_handover/}.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 17:49:25 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Huang",
"Binghao",
""
],
[
"Chen",
"Yuanpei",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Qin",
"Yuzhe",
""
],
[
"Yang",
"Yaodong",
""
],
[
"Atanasov",
"Nikolay",
""
],
[
"Wang",
"Xiaolong",
""
]
] |
new_dataset
| 0.992524 |
2309.05662
|
Hongyu Li
|
Hongyu Li, Snehal Dikhale, Soshi Iba, Nawid Jamali
|
ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation with Shape
Completion
|
Accepted by RA-L
| null |
10.1109/LRA.2023.3313941
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, we introduce ViHOPE, a novel framework for estimating the 6D
pose of an in-hand object using visuotactile perception. Our key insight is
that the accuracy of the 6D object pose estimate can be improved by explicitly
completing the shape of the object. To this end, we introduce a novel
visuotactile shape completion module that uses a conditional Generative
Adversarial Network to complete the shape of an in-hand object based on
volumetric representation. This approach improves over prior works that
directly regress visuotactile observations to a 6D pose. By explicitly
completing the shape of the in-hand object and jointly optimizing the shape
completion and pose estimation tasks, we improve the accuracy of the 6D object
pose estimate. We train and test our model on a synthetic dataset and compare
it with the state-of-the-art. In the visuotactile shape completion task, we
outperform the state-of-the-art by 265% using the Intersection of Union metric
and achieve 88% lower Chamfer Distance. In the visuotactile pose estimation
task, we present results that suggest our framework reduces position and
angular errors by 35% and 64%, respectively. Furthermore, we ablate our
framework to confirm the gain on the 6D object pose estimate from explicitly
completing the shape. Ultimately, we show that our framework produces models
that are robust to sim-to-real transfer on a real-world robot platform.
|
[
{
"version": "v1",
"created": "Mon, 11 Sep 2023 17:58:14 GMT"
}
] | 2023-09-12T00:00:00 |
[
[
"Li",
"Hongyu",
""
],
[
"Dikhale",
"Snehal",
""
],
[
"Iba",
"Soshi",
""
],
[
"Jamali",
"Nawid",
""
]
] |
new_dataset
| 0.999145 |
2205.10018
|
Ze Wang
|
Guogang Liao, Xuejian Li, Ze Wang, Fan Yang, Muzhi Guan, Bingqi Zhu,
Yongkang Wang, Xingxing Wang, Dong Wang
|
NMA: Neural Multi-slot Auctions with Externalities for Online
Advertising
|
10 pages, 3 figures
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online advertising driven by auctions brings billions of dollars in revenue
for social networking services and e-commerce platforms. GSP auctions, which
are simple and easy to understand for advertisers, have almost become the
benchmark for ad auction mechanisms in the industry. However, most GSP-based
industrial practices assume that the user click only relies on the ad itself,
which overlook the effect of external items, referred to as externalities.
Recently, DNA has attempted to upgrade GSP with deep neural networks and models
local externalities to some extent. However, it only considers set-level
contexts from auctions and ignores the order and displayed position of ads,
which is still suboptimal. Although VCG-based multi-slot auctions (e.g., VCG,
WVCG) make it theoretically possible to model global externalities (e.g., the
order and positions of ads and so on), they lack an efficient balance of both
revenue and social welfare. In this paper, we propose novel auction mechanisms
named Neural Multi-slot Auctions (NMA) to tackle the above-mentioned
challenges. Specifically, we model the global externalities effectively with a
context-aware list-wise prediction module to achieve better performance. We
design a list-wise deep rank module to guarantee incentive compatibility in
end-to-end learning. Furthermore, we propose an auxiliary loss for social
welfare to effectively reduce the decline of social welfare while maximizing
revenue. Experiment results on both offline large-scale datasets and online A/B
tests demonstrate that NMA obtains higher revenue with balanced social welfare
than other existing auction mechanisms (i.e., GSP, DNA, WVCG) in industrial
practice, and we have successfully deployed NMA on Meituan food delivery
platform.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 08:21:59 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Feb 2023 08:48:29 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Sep 2023 08:21:07 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Liao",
"Guogang",
""
],
[
"Li",
"Xuejian",
""
],
[
"Wang",
"Ze",
""
],
[
"Yang",
"Fan",
""
],
[
"Guan",
"Muzhi",
""
],
[
"Zhu",
"Bingqi",
""
],
[
"Wang",
"Yongkang",
""
],
[
"Wang",
"Xingxing",
""
],
[
"Wang",
"Dong",
""
]
] |
new_dataset
| 0.993316 |
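
The NMA entry above repeatedly contrasts itself with GSP auctions. For readers who want that baseline made concrete, below is a minimal sketch of the textbook generalized second-price rule (rank by bid, pay the next-highest bid); it ignores quality scores and externalities and is not the NMA mechanism itself.

```python
def gsp(bids, num_slots):
    """Textbook generalized second-price auction (simplified).

    bids: dict mapping advertiser -> bid. Each of the top `num_slots`
    bidders wins a slot and pays the bid of the next-highest bidder
    (0 if there is none). Quality scores and externalities are ignored.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for slot in range(min(num_slots, len(ranked))):
        advertiser, _ = ranked[slot]
        next_bid = ranked[slot + 1][1] if slot + 1 < len(ranked) else 0.0
        outcome.append((slot, advertiser, next_bid))
    return outcome

print(gsp({"a": 3.0, "b": 5.0, "c": 1.5, "d": 2.0}, num_slots=2))
# [(0, 'b', 3.0), (1, 'a', 2.0)]
```
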
2301.05880
|
Hongpeng Lin
|
Hongpeng Lin, Ludan Ruan, Wenke Xia, Peiyu Liu, Jingyuan Wen, Yixin
Xu, Di Hu, Ruihua Song, Wayne Xin Zhao, Qin Jin and Zhiwu Lu
|
TikTalk: A Video-Based Dialogue Dataset for Multi-Modal Chitchat in Real
World
|
Accepted to ACM Multimedia 2023
| null |
10.1145/3581783.3612425
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To facilitate the research on intelligent and human-like chatbots with
multi-modal context, we introduce a new video-based multi-modal dialogue
dataset, called TikTalk. We collect 38K videos from a popular video-sharing
platform, along with 367K conversations posted by users beneath them. Users
engage in spontaneous conversations based on their multi-modal experiences from
watching videos, which helps recreate real-world chitchat context. Compared to
previous multi-modal dialogue datasets, the richer context types in TikTalk
lead to more diverse conversations, but also increase the difficulty in
capturing human interests from intricate multi-modal information to generate
personalized responses. Moreover, external knowledge is more frequently evoked
in our dataset. These facts reveal new challenges for multi-modal dialogue
models. We quantitatively demonstrate the characteristics of TikTalk, propose a
video-based multi-modal chitchat task, and evaluate several dialogue baselines.
Experimental results indicate that the models incorporating large language
models (LLM) can generate more diverse responses, while the model utilizing
knowledge graphs to introduce external knowledge performs the best overall.
Furthermore, no existing model can solve all the above challenges well. There
is still a large room for future improvements, even for LLM with visual
extensions. Our dataset is available at
\url{https://ruc-aimind.github.io/projects/TikTalk/}.
|
[
{
"version": "v1",
"created": "Sat, 14 Jan 2023 10:18:22 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Aug 2023 10:36:44 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Sep 2023 10:03:16 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Lin",
"Hongpeng",
""
],
[
"Ruan",
"Ludan",
""
],
[
"Xia",
"Wenke",
""
],
[
"Liu",
"Peiyu",
""
],
[
"Wen",
"Jingyuan",
""
],
[
"Xu",
"Yixin",
""
],
[
"Hu",
"Di",
""
],
[
"Song",
"Ruihua",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Jin",
"Qin",
""
],
[
"Lu",
"Zhiwu",
""
]
] |
new_dataset
| 0.999907 |
2303.04781
|
Yanhao Yang
|
Yanhao Yang, Joseph Norby, Justin K. Yim, Aaron M. Johnson
|
Proprioception and Tail Control Enable Extreme Terrain Traversal by
Quadruped Robots
|
8 pages, 9 figures, accepted to IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS) 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Legged robots leverage ground contacts and the reaction forces they provide
to achieve agile locomotion. However, uncertainty coupled with contact
discontinuities can lead to failure, especially in real-world environments with
unexpected height variations such as rocky hills or curbs. To enable dynamic
traversal of extreme terrain, this work introduces 1) a proprioception-based
gait planner for estimating unknown hybrid events due to elevation changes and
responding by modifying contact schedules and planned footholds online, and 2)
a two-degree-of-freedom tail for improving contact-independent control and a
corresponding decoupled control scheme for better versatility and efficiency.
Simulation results show that the gait planner significantly improves stability
under unforeseen terrain height changes compared to methods that assume fixed
contact schedules and footholds. Further, tests have shown that the tail is
particularly effective at maintaining stability when encountering a terrain
change with an initial angular disturbance. The results show that these
approaches work synergistically to stabilize locomotion with elevation changes
up to 1.5 times the leg length and tilted initial states.
|
[
{
"version": "v1",
"created": "Wed, 8 Mar 2023 18:28:29 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 04:23:19 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Yang",
"Yanhao",
""
],
[
"Norby",
"Joseph",
""
],
[
"Yim",
"Justin K.",
""
],
[
"Johnson",
"Aaron M.",
""
]
] |
new_dataset
| 0.998094 |
2303.07169
|
Stanis{\l}aw Wo\'zniak
|
Axel von Arnim, Jules Lecomte, Naima Elosegui Borras, Stanislaw
Wozniak, Angeliki Pantazi
|
Dynamic Event-based Optical Identification and Communication
|
10 pages, 7 figures and 1 table
| null | null | null |
cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical identification is often done with spatial or temporal visual pattern
recognition and localization. Temporal pattern recognition, depending on the
technology, involves a trade-off between communication frequency, range and
accurate tracking. We propose a solution with light-emitting beacons that
improves this trade-off by exploiting fast event-based cameras and, for
tracking, sparse neuromorphic optical flow computed with spiking neurons. The
system is embedded in a simulated drone and evaluated in an asset monitoring
use case. It is robust to relative movements and enables simultaneous
communication with, and tracking of, multiple moving beacons. Finally, in a
hardware lab prototype, we demonstrate for the first time beacon tracking
performed simultaneously with state-of-the-art frequency communication in the
kHz range.
|
[
{
"version": "v1",
"created": "Mon, 13 Mar 2023 15:12:30 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Mar 2023 21:39:04 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Sep 2023 11:29:25 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"von Arnim",
"Axel",
""
],
[
"Lecomte",
"Jules",
""
],
[
"Borras",
"Naima Elosegui",
""
],
[
"Wozniak",
"Stanislaw",
""
],
[
"Pantazi",
"Angeliki",
""
]
] |
new_dataset
| 0.997798 |
2304.11397
|
Xianhao Chen
|
Xianhao Chen, Yiqin Deng, Haichuan Ding, Guanqiao Qu, Haixia Zhang,
Pan Li, Yuguang Fang
|
Vehicle as a Service (VaaS): Leverage Vehicles to Build Service Networks
and Capabilities for Smart Cities
|
32 pages, 11 figures
| null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart cities demand resources for rich immersive sensing, ubiquitous
communications, powerful computing, large storage, and high intelligence
(SCCSI) to support various kinds of applications, such as public safety,
connected and autonomous driving, smart and connected health, and smart living.
At the same time, it is widely recognized that vehicles such as autonomous
cars, equipped with significantly powerful SCCSI capabilities, will become
ubiquitous in future smart cities. By observing the convergence of these two
trends, this article advocates the use of vehicles to build a cost-effective
service network, called the Vehicle as a Service (VaaS) paradigm, where
vehicles empowered with SCCSI capability form a web of mobile servers and
communicators to provide SCCSI services in smart cities. Towards this
direction, we first examine the potential use cases in smart cities and
possible upgrades required for the transition from traditional vehicular ad hoc
networks (VANETs) to VaaS. Then, we will introduce the system architecture of
the VaaS paradigm and discuss how it can provide SCCSI services in future smart
cities. Finally, we identify the open problems of this paradigm
and future research directions, including architectural design, service
provisioning, incentive design, and security & privacy. We expect that this
paper paves the way towards developing a cost-effective and sustainable
approach for building smart cities.
|
[
{
"version": "v1",
"created": "Sat, 22 Apr 2023 13:13:53 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 08:56:06 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Chen",
"Xianhao",
""
],
[
"Deng",
"Yiqin",
""
],
[
"Ding",
"Haichuan",
""
],
[
"Qu",
"Guanqiao",
""
],
[
"Zhang",
"Haixia",
""
],
[
"Li",
"Pan",
""
],
[
"Fang",
"Yuguang",
""
]
] |
new_dataset
| 0.996791 |
2305.03689
|
Arijit Ray
|
Arijit Ray, Filip Radenovic, Abhimanyu Dubey, Bryan A. Plummer, Ranjay
Krishna, Kate Saenko
|
COLA: A Benchmark for Compositional Text-to-image Retrieval
|
Under review. Webpage: https://github.com/arijitray1993/COLA
| null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Compositional reasoning is a hallmark of human visual intelligence; yet
despite the size of large vision-language models, they struggle to represent
simple compositions by combining objects with their attributes. To measure this
lack of compositional capability, we design Cola, a text-to-image retrieval
benchmark to Compose Objects Localized with Attributes. To solve Cola, a model
must retrieve images with the correct configuration of attributes and objects,
and avoid choosing a distractor image with the same objects and attributes but
in the wrong configuration. Cola contains about 1.2k composed queries of 168
objects and 197 attributes on around 30K images. Our human evaluation finds
that Cola is 83.33% accurate, similar to contemporary compositionality
benchmarks. Using Cola as a testbed, we explore empirical modeling designs to
adapt pre-trained vision-language models to reason compositionally. We explore
6 adaptation strategies on 2 seminal vision-language models, using
compositionality-centric test benchmarks - Cola and CREPE. We find the optimal
adaptation strategy is to train a multimodal attention layer that jointly
attends over the frozen pre-trained image and language features. Surprisingly,
training multimodal layers on CLIP performs better than tuning a larger FLAVA
model with already pre-trained multimodal layers. Furthermore, our adaptation
strategy improves CLIP and FLAVA to comparable levels, suggesting that training
multimodal layers using contrastive attribute-object data is key, as opposed to
using them pre-trained. Lastly, we show that Cola is harder than a closely
related contemporary benchmark, CREPE, since simpler fine-tuning strategies
without multimodal layers suffice on CREPE, but not on Cola. However, we still
see a significant gap between our best adaptation and human accuracy,
suggesting considerable room for further research.
|
[
{
"version": "v1",
"created": "Fri, 5 May 2023 17:00:16 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 02:46:19 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Ray",
"Arijit",
""
],
[
"Radenovic",
"Filip",
""
],
[
"Dubey",
"Abhimanyu",
""
],
[
"Plummer",
"Bryan A.",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Saenko",
"Kate",
""
]
] |
new_dataset
| 0.999829 |
2306.12241
|
Quanyi Li
|
Quanyi Li, Zhenghao Peng, Lan Feng, Zhizheng Liu, Chenda Duan, Wenjie
Mo, Bolei Zhou
|
ScenarioNet: Open-Source Platform for Large-Scale Traffic Scenario
Simulation and Modeling
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale driving datasets such as Waymo Open Dataset and nuScenes
substantially accelerate autonomous driving research, especially for perception
tasks such as 3D detection and trajectory forecasting. Since the driving logs
in these datasets contain HD maps and detailed object annotations which
accurately reflect the real-world complexity of traffic behaviors, we can
harvest a massive number of complex traffic scenarios and recreate their
digital twins in simulation. Compared to the hand-crafted scenarios often used
in existing simulators, data-driven scenarios collected from the real world can
facilitate many research opportunities in machine learning and autonomous
driving. In this work, we present ScenarioNet, an open-source platform for
large-scale traffic scenario modeling and simulation. ScenarioNet defines a
unified scenario description format and collects a large-scale repository of
real-world traffic scenarios from the heterogeneous data in various driving
datasets including Waymo, nuScenes, Lyft L5, and nuPlan datasets. These
scenarios can be further replayed and interacted with in multiple views from
Bird-Eye-View layout to realistic 3D rendering in MetaDrive simulator. This
provides a benchmark for evaluating the safety of autonomous driving stacks in
simulation before their real-world deployment. We further demonstrate the
strengths of ScenarioNet on large-scale scenario generation, imitation
learning, and reinforcement learning in both single-agent and multi-agent
settings. Code, demo videos, and website are available at
https://metadriverse.github.io/scenarionet.
|
[
{
"version": "v1",
"created": "Wed, 21 Jun 2023 13:00:16 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2023 11:27:26 GMT"
},
{
"version": "v3",
"created": "Sun, 25 Jun 2023 19:09:46 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Jul 2023 13:50:25 GMT"
},
{
"version": "v5",
"created": "Fri, 8 Sep 2023 13:33:10 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Li",
"Quanyi",
""
],
[
"Peng",
"Zhenghao",
""
],
[
"Feng",
"Lan",
""
],
[
"Liu",
"Zhizheng",
""
],
[
"Duan",
"Chenda",
""
],
[
"Mo",
"Wenjie",
""
],
[
"Zhou",
"Bolei",
""
]
] |
new_dataset
| 0.971522 |
2307.12720
|
Ahmed Hareedy
|
Iven Guzel, Do\u{g}ukan \"Ozbayrak, Robert Calderbank, Ahmed Hareedy
|
Eliminating Media Noise While Preserving Storage Capacity:
Reconfigurable Constrained Codes for Two-Dimensional Magnetic Recording
|
27 pages (single column), 12 figures, submitted to the IEEE
Transactions on Information Theory (TIT)
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Magnetic recording devices are still competitive in the storage density race
thanks to new technologies such as two-dimensional magnetic recording (TDMR).
Error-prone patterns where a bit is surrounded by complementary bits at the
four positions with Manhattan distance $1$ on the TDMR grid are called plus
isolation (PIS) patterns. Recently, we introduced optimal plus LOCO (OP-LOCO)
codes that prevent these patterns from being written. However, as the device
ages, error-prone patterns where a bit is surrounded by complementary bits at
only three positions with Manhattan distance $1$ emerge, and we call these
incomplete PIS (IPIS) patterns. In this paper, we present capacity-achieving
codes that forbid both PIS and IPIS patterns in TDMR systems with wide read
heads. We collectively call these patterns rotated T isolation (RTIS) patterns,
and we call the new codes optimal T LOCO (OT-LOCO) codes. We analyze OT-LOCO
codes and derive their encoding-decoding rule. Simulation results demonstrate
that OT-LOCO codes entirely eliminate media noise at practical TD densities. We
suggest using OP-LOCO codes early in the device lifetime, then reconfiguring to
OT-LOCO codes later on. Moreover, we introduce another coding scheme to remove
RTIS patterns which offers lower complexity, lower error propagation, and track
separation.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 12:02:53 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 14:48:17 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Guzel",
"Iven",
""
],
[
"Özbayrak",
"Doğukan",
""
],
[
"Calderbank",
"Robert",
""
],
[
"Hareedy",
"Ahmed",
""
]
] |
new_dataset
| 0.999144 |
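
The abstract above defines a PIS pattern as a bit whose four Manhattan-distance-1 neighbours on the TDMR grid all hold the complementary bit. The small sketch below flags such positions in a binary array; it only illustrates that definition and has nothing to do with the OP-LOCO/OT-LOCO code constructions themselves.

```python
import numpy as np

def pis_positions(grid):
    """Return (row, col) positions whose four Manhattan-distance-1 neighbours
    all hold the complementary bit (the PIS patterns described above).
    Border cells are skipped because they lack four neighbours."""
    grid = np.asarray(grid)
    hits = []
    for r in range(1, grid.shape[0] - 1):
        for c in range(1, grid.shape[1] - 1):
            centre = grid[r, c]
            neighbours = [grid[r - 1, c], grid[r + 1, c],
                          grid[r, c - 1], grid[r, c + 1]]
            if all(n == 1 - centre for n in neighbours):
                hits.append((r, c))
    return hits

example = [[0, 1, 0],
           [1, 0, 1],
           [0, 1, 0]]
print(pis_positions(example))  # [(1, 1)]: the centre 0 is surrounded by 1s
```
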
2308.01317
|
Andrew Sellergren
|
Shawn Xu, Lin Yang, Christopher Kelly, Marcin Sieniek, Timo
Kohlberger, Martin Ma, Wei-Hung Weng, Atilla Kiraly, Sahar Kazemzadeh, Zakkai
Melamed, Jungyeon Park, Patricia Strachan, Yun Liu, Chuck Lau, Preeti Singh,
Christina Chen, Mozziyar Etemadi, Sreenivasa Raju Kalidindi, Yossi Matias,
Katherine Chou, Greg S. Corrado, Shravya Shetty, Daniel Tse, Shruthi
Prabhakara, Daniel Golden, Rory Pilgrim, Krish Eswaran, Andrew Sellergren
|
ELIXR: Towards a general purpose X-ray artificial intelligence system
through alignment of large language models and radiology vision encoders
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present an approach, which we call Embeddings for
Language/Image-aligned X-Rays, or ELIXR, that leverages a language-aligned
image encoder combined or grafted onto a fixed LLM, PaLM 2, to perform a broad
range of chest X-ray tasks. We train this lightweight adapter architecture
using images paired with corresponding free-text radiology reports from the
MIMIC-CXR dataset. ELIXR achieved state-of-the-art performance on zero-shot
chest X-ray (CXR) classification (mean AUC of 0.850 across 13 findings),
data-efficient CXR classification (mean AUCs of 0.893 and 0.898 across five
findings (atelectasis, cardiomegaly, consolidation, pleural effusion, and
pulmonary edema) for 1% (~2,200 images) and 10% (~22,000 images) training
data), and semantic search (0.76 normalized discounted cumulative gain (NDCG)
across nineteen queries, including perfect retrieval on twelve of them).
Compared to existing data-efficient methods including supervised contrastive
learning (SupCon), ELIXR required two orders of magnitude less data to reach
similar performance. ELIXR also showed promise on CXR vision-language tasks,
demonstrating overall accuracies of 58.7% and 62.5% on visual question
answering and report quality assurance tasks, respectively. These results
suggest that ELIXR is a robust and versatile approach to CXR AI.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 17:59:45 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 23:07:51 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Xu",
"Shawn",
""
],
[
"Yang",
"Lin",
""
],
[
"Kelly",
"Christopher",
""
],
[
"Sieniek",
"Marcin",
""
],
[
"Kohlberger",
"Timo",
""
],
[
"Ma",
"Martin",
""
],
[
"Weng",
"Wei-Hung",
""
],
[
"Kiraly",
"Atilla",
""
],
[
"Kazemzadeh",
"Sahar",
""
],
[
"Melamed",
"Zakkai",
""
],
[
"Park",
"Jungyeon",
""
],
[
"Strachan",
"Patricia",
""
],
[
"Liu",
"Yun",
""
],
[
"Lau",
"Chuck",
""
],
[
"Singh",
"Preeti",
""
],
[
"Chen",
"Christina",
""
],
[
"Etemadi",
"Mozziyar",
""
],
[
"Kalidindi",
"Sreenivasa Raju",
""
],
[
"Matias",
"Yossi",
""
],
[
"Chou",
"Katherine",
""
],
[
"Corrado",
"Greg S.",
""
],
[
"Shetty",
"Shravya",
""
],
[
"Tse",
"Daniel",
""
],
[
"Prabhakara",
"Shruthi",
""
],
[
"Golden",
"Daniel",
""
],
[
"Pilgrim",
"Rory",
""
],
[
"Eswaran",
"Krish",
""
],
[
"Sellergren",
"Andrew",
""
]
] |
new_dataset
| 0.981872 |
2308.03075
|
Karl Bringmann
|
Karl Bringmann
|
Knapsack with Small Items in Near-Quadratic Time
|
28 pages
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The Bounded Knapsack problem is one of the most fundamental NP-complete
problems at the intersection of computer science, optimization, and operations
research. A recent line of research worked towards understanding the complexity
of pseudopolynomial-time algorithms for Bounded Knapsack parameterized by the
maximum item weight $w_{\mathrm{max}}$ and the number of items $n$. A
conditional lower bound rules out that Bounded Knapsack can be solved in time
$O((n+w_{\mathrm{max}})^{2-\delta})$ for any $\delta > 0$ [Cygan, Mucha,
Wegrzycki, Wlodarczyk'17, K\"unnemann, Paturi, Schneider'17]. This raised the
question whether Bounded Knapsack can be solved in time $\tilde
O((n+w_{\mathrm{max}})^2)$. The quest to resolve this question led to
algorithms that run in time $\tilde O(n^3 w_{\mathrm{max}}^2)$ [Tamir'09],
$\tilde O(n^2 w_{\mathrm{max}}^2)$ and $\tilde O(n w_{\mathrm{max}}^3)$
[Bateni, Hajiaghayi, Seddighin, Stein'18], $O(n^2 w_{\mathrm{max}}^2)$ and
$\tilde O(n w_{\mathrm{max}}^2)$ [Eisenbrand and Weismantel'18], $O(n +
w_{\mathrm{max}}^3)$ [Polak, Rohwedder, Wegrzycki'21], and very recently
$\tilde O(n + w_{\mathrm{max}}^{12/5})$ [Chen, Lian, Mao, Zhang'23].
In this paper we resolve this question by designing an algorithm for Bounded
Knapsack with running time $\tilde O(n + w_{\mathrm{max}}^2)$, which is
conditionally near-optimal.
|
[
{
"version": "v1",
"created": "Sun, 6 Aug 2023 10:07:03 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 16:54:13 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Bringmann",
"Karl",
""
]
] |
new_dataset
| 0.998743 |
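
For context on the running-time bounds quoted in the Bounded Knapsack entry above, here is the standard pseudopolynomial dynamic program that those bounds improve on; it is the textbook baseline, not the paper's near-quadratic algorithm.

```python
def bounded_knapsack(items, capacity):
    """Classic DP baseline for Bounded Knapsack.

    items: list of (value, weight, copies) triples.
    Returns the maximum total value with total weight at most `capacity`.
    Runs in O(capacity * total number of copies), far from the paper's
    near-quadratic bound, but simple and exact.
    """
    best = [0] * (capacity + 1)
    for value, weight, copies in items:
        for _ in range(copies):          # naive copy expansion
            for cap in range(capacity, weight - 1, -1):
                best[cap] = max(best[cap], best[cap - weight] + value)
    return best[capacity]

print(bounded_knapsack([(10, 3, 2), (7, 2, 3)], capacity=8))  # 27
```
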
2308.04170
|
Ali Muzaffar
|
Ali Muzaffar, Hani Ragab Hassen, Hind Zantout, Michael A Lones
|
DroidDissector: A Static and Dynamic Analysis Tool for Android Malware
Detection
| null | null |
10.1007/978-3-031-40598-3_1
| null |
cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
DroidDissector is an extraction tool for both static and dynamic features.
The aim is to provide Android malware researchers and analysts with an
integrated tool that can extract all of the most widely used features in
Android malware detection from one location. The static analysis module
extracts features from both the manifest file and the source code of the
application to obtain a broad array of features that include permissions, API
call graphs and opcodes. The dynamic analysis module runs on the latest version
of Android and analyses the complete behaviour of an application by tracking
the system calls used, network traffic generated, API calls used and log files
produced by the application.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 09:59:56 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 10:54:12 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Muzaffar",
"Ali",
""
],
[
"Hassen",
"Hani Ragab",
""
],
[
"Zantout",
"Hind",
""
],
[
"Lones",
"Michael A",
""
]
] |
new_dataset
| 0.999733 |
2308.06603
|
Tonghui Zou
|
Tonghui Zou and Lei Chen
|
LadleNet: Translating Thermal Infrared Images to Visible Light Images
Using A Scalable Two-stage U-Net
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The translation of thermal infrared (TIR) images to visible light (VI) images
presents a challenging task with potential applications spanning various
domains such as TIR-VI image registration and fusion. Leveraging supplementary
information derived from TIR image conversions can significantly enhance model
performance and generalization across these applications. However, prevailing
issues within this field include suboptimal image fidelity and limited model
scalability. In this paper, we introduce an algorithm, LadleNet, based on the
U-Net architecture. LadleNet employs a two-stage U-Net concatenation structure,
augmented with skip connections and refined feature aggregation techniques,
resulting in a substantial enhancement in model performance. LadleNet comprises
'Handle' and 'Bowl' modules: the Handle module facilitates the
construction of an abstract semantic space, while the Bowl module decodes this
semantic space to yield mapped VI images. The Handle module exhibits
extensibility by allowing the substitution of its network architecture with
semantic segmentation networks, thereby establishing more abstract semantic
spaces to bolster model performance. Consequently, we propose LadleNet+, which
replaces LadleNet's Handle module with the pre-trained DeepLabv3+ network,
thereby endowing the model with enhanced semantic space construction
capabilities. The proposed method is evaluated and tested on the KAIST dataset,
accompanied by quantitative and qualitative analyses. Compared to existing
methodologies, our approach achieves state-of-the-art performance in terms of
image clarity and perceptual quality. The source code will be made available at
https://github.com/Ach-1914/LadleNet/tree/main/.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 16:14:44 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 13:03:24 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Zou",
"Tonghui",
""
],
[
"Chen",
"Lei",
""
]
] |
new_dataset
| 0.953099 |
2308.14076
|
Chiranjibi Sitaula
|
Chiranjibi Sitaula, Jagannath Aryal and Avik Bhattacharya
|
A Novel Multi-scale Attention Feature Extraction Block for Aerial Remote
Sensing Image Classification
|
The paper is under review in IEEE Geoscience and Remote Sensing
Letters Journal (IEEE-GRSL). This version may be deleted and/or updated based
on the journal's policy
|
IEEE Geoscience and Remote Sensing Letters, 2023
|
10.1109/LGRS.2023.3312643
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classification of very high-resolution (VHR) aerial remote sensing (RS)
images is a well-established research area in the remote sensing community as
it provides valuable spatial information for decision-making. Existing works on
VHR aerial RS image classification produce an excellent classification
performance; nevertheless, they have a limited capability to well-represent VHR
RS images having complex and small objects, thereby leading to performance
instability. As such, we propose a novel plug-and-play multi-scale attention
feature extraction block (MSAFEB) based on multi-scale convolution at two
levels with skip connection, producing discriminative/salient information at a
deeper/finer level. The experimental study on two benchmark VHR aerial RS image
datasets (AID and NWPU) demonstrates that our proposal achieves a
stable/consistent performance (minimum standard deviation of $0.002$) and
competent overall classification performance (AID: 95.85\% and NWPU: 94.09\%).
|
[
{
"version": "v1",
"created": "Sun, 27 Aug 2023 11:49:46 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Sitaula",
"Chiranjibi",
""
],
[
"Aryal",
"Jagannath",
""
],
[
"Bhattacharya",
"Avik",
""
]
] |
new_dataset
| 0.997725 |
2309.00066
|
Varun Sundar
|
Varun Sundar, Andrei Ardelean, Tristan Swedish, Claudio Bruschini,
Edoardo Charbon and Mohit Gupta
|
SoDaCam: Software-defined Cameras via Single-Photon Imaging
|
Accepted at ICCV 2023 (oral). Project webpage can be found at
https://wisionlab.com/project/sodacam/
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Reinterpretable cameras are defined by their post-processing capabilities
that exceed traditional imaging. We present "SoDaCam" that provides
reinterpretable cameras at the granularity of photons, from photon-cubes
acquired by single-photon devices. Photon-cubes represent the spatio-temporal
detections of photons as a sequence of binary frames, at frame-rates as high as
100 kHz. We show that simple transformations of the photon-cube, or photon-cube
projections, provide the functionality of numerous imaging systems including:
exposure bracketing, flutter shutter cameras, video compressive systems, event
cameras, and even cameras that move during exposure. Our photon-cube
projections offer the flexibility of being software-defined constructs that are
only limited by what is computable, and shot-noise. We exploit this flexibility
to provide new capabilities for the emulated cameras. As an added benefit, our
projections provide camera-dependent compression of photon-cubes, which we
demonstrate using an implementation of our projections on a novel compute
architecture that is designed for single-photon imaging.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 18:13:01 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 14:15:50 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Sundar",
"Varun",
""
],
[
"Ardelean",
"Andrei",
""
],
[
"Swedish",
"Tristan",
""
],
[
"Bruschini",
"Claudio",
""
],
[
"Charbon",
"Edoardo",
""
],
[
"Gupta",
"Mohit",
""
]
] |
new_dataset
| 0.998836 |
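
The SoDaCam entry above describes photon-cubes as sequences of binary frames whose simple transformations emulate different cameras. The toy numpy sketch below shows two transformations in that spirit (summing frames for exposure bracketing, and differencing windowed sums as a crude change signal); it uses made-up data and only illustrates the idea, not the paper's actual projections.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy photon-cube: 1000 binary frames of 32x32 photon detections.
photon_cube = rng.binomial(1, 0.02, size=(1000, 32, 32)).astype(np.uint8)

def exposure_bracket(cube, frame_counts=(100, 300, 1000)):
    """Emulate exposure bracketing by summing different numbers of frames."""
    return [cube[:n].sum(axis=0) for n in frame_counts]

def temporal_difference(cube, window=100):
    """Crude change signal: difference of photon counts in adjacent windows,
    loosely in the spirit of event-like readouts."""
    first = cube[:window].sum(axis=0).astype(np.int32)
    second = cube[window:2 * window].sum(axis=0).astype(np.int32)
    return second - first

short, medium, full = exposure_bracket(photon_cube)
print(short.max(), medium.max(), full.max())
print(temporal_difference(photon_cube).std())
```
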
2309.01855
|
Dan Casas
|
Dan Casas, Marc Comino-Trinidad
|
SMPLitex: A Generative Model and Dataset for 3D Human Texture Estimation
from Single Image
|
Accepted at BMVC 2023. Project website:
https://dancasas.github.io/projects/SMPLitex
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose SMPLitex, a method for estimating and manipulating the complete 3D
appearance of humans captured from a single image. SMPLitex builds upon the
recently proposed generative models for 2D images, and extends their use to the
3D domain through pixel-to-surface correspondences computed on the input image.
To this end, we first train a generative model for complete 3D human
appearance, and then fit it into the input image by conditioning the generative
model to the visible parts of the subject. Furthermore, we propose a new
dataset of high-quality human textures built by sampling SMPLitex conditioned
on subject descriptions and images. We quantitatively and qualitatively
evaluate our method in 3 publicly available datasets, demonstrating that
SMPLitex significantly outperforms existing methods for human texture
estimation while allowing for a wider variety of tasks such as editing,
synthesis, and manipulation.
|
[
{
"version": "v1",
"created": "Mon, 4 Sep 2023 23:05:41 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 20:44:23 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Casas",
"Dan",
""
],
[
"Comino-Trinidad",
"Marc",
""
]
] |
new_dataset
| 0.99979 |
2309.03912
|
Thomas Mejstrik
|
Thomas Mejstrik
|
__host__ __device__ -- Generic programming in Cuda
|
First draft
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present patterns for Cuda/C++ to write safe generic code that works on both
the host and the device side. Writing templated functions in Cuda/C++ both for
the CPU and the GPU bears the problem that in general both __host__ and
__device__ functions are instantiated, which leads to lots of compiler warnings
or errors.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 22:08:11 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Mejstrik",
"Thomas",
""
]
] |
new_dataset
| 0.999594 |
2309.03914
|
Tao Xiao
|
Tao Xiao, Christoph Treude, Hideaki Hata, Kenichi Matsumoto
|
DevGPT: Studying Developer-ChatGPT Conversations
|
MSR 2024 Mining Challenge Proposal
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The emergence of large language models (LLMs) such as ChatGPT has disrupted
the landscape of software development. Many studies are investigating the
quality of responses generated by ChatGPT, the efficacy of various prompting
techniques, and its comparative performance in programming contests, to name a
few examples. Yet, we know very little about how ChatGPT is actually used by
software developers. What questions do developers present to ChatGPT? What are
the dynamics of these interactions? What is the backdrop against which these
conversations are held, and how do the conversations feed back into the
artifacts of their work? To close this gap, we introduce DevGPT, a curated
dataset which encompasses 17,913 prompts and ChatGPT's responses including
11,751 code snippets, coupled with the corresponding software development
artifacts -- ranging from source code, commits, issues, pull requests, to
discussions and Hacker News threads -- to enable the analysis of the context
and implications of these developer interactions with ChatGPT.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 06:55:40 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Xiao",
"Tao",
""
],
[
"Treude",
"Christoph",
""
],
[
"Hata",
"Hideaki",
""
],
[
"Matsumoto",
"Kenichi",
""
]
] |
new_dataset
| 0.995476 |
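
The DevGPT entry above describes records that pair prompts and ChatGPT responses with code snippets and linked development artifacts. Purely to illustrate how one might iterate over such records, the sketch below parses a hypothetical JSON layout; the field names (prompt, response, code_snippets, linked_artifact) are placeholders and are not the actual DevGPT schema.

```python
import json

# Hypothetical records; field names are placeholders, not the real DevGPT schema.
raw = """[
  {"prompt": "How do I parse JSON in Python?",
   "response": "Use the json module.",
   "code_snippets": ["import json; data = json.loads(text)"],
   "linked_artifact": "https://example.com/some-issue"}
]"""

conversations = json.loads(raw)
snippet_total = sum(len(c.get("code_snippets", [])) for c in conversations)
print(f"{len(conversations)} conversations, {snippet_total} code snippets")
```
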
2309.03921
|
William Theisen
|
William Theisen and Walter Scheirer
|
C-CLIP: Contrastive Image-Text Encoders to Close the
Descriptive-Commentative Gap
|
11 Pages, 5 Figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The interplay between the image and comment on a social media post is one of
high importance for understanding its overall message. Recent strides in
multimodal embedding models, namely CLIP, have provided an avenue forward in
relating image and text. However the current training regime for CLIP models is
insufficient for matching content found on social media, regardless of site or
language. Current CLIP training data is based on what we call ``descriptive''
text: text in which an image is merely described. This is something rarely seen
on social media, where the vast majority of text content is ``commentative'' in
nature. The captions provide commentary and broader context related to the
image, rather than describing what is in it. Current CLIP models perform poorly
on retrieval tasks where image-caption pairs display a commentative
relationship. Closing this gap would be beneficial for several important
application areas related to social media. For instance, it would allow groups
focused on Open-Source Intelligence Operations (OSINT) to further aid efforts
during disaster events, such as the ongoing Russian invasion of Ukraine, by
easily exposing data to non-technical users for discovery and analysis. In
order to close this gap we demonstrate that training contrastive image-text
encoders on explicitly commentative pairs results in large improvements in
retrieval results, with the results extending across a variety of non-English
languages.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 19:03:49 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Theisen",
"William",
""
],
[
"Scheirer",
"Walter",
""
]
] |
new_dataset
| 0.998758 |
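
The C-CLIP entry above is about training contrastive image-text encoders on commentative pairs. As a reference for the underlying objective, here is a generic CLIP-style symmetric contrastive loss sketch (assuming PyTorch is available); it shows the standard formulation, not the paper's training code.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) tensors; row i of each is a matched pair.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.size(0))            # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy batch of 4 random embedding pairs.
loss = clip_style_loss(torch.randn(4, 512), torch.randn(4, 512))
print(loss.item())
```
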
2309.03933
|
Robin Courant
|
Robin Courant, Xi Wang, Marc Christie and Vicky Kalogeiton
|
BluNF: Blueprint Neural Field
|
ICCV-W (AI3DCC) 2023. Project page with videos and code:
https://www.lix.polytechnique.fr/vista/projects/2023_iccvw_courant/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Radiance Fields (NeRFs) have revolutionized scene novel view
synthesis, offering visually realistic, precise, and robust implicit
reconstructions. While recent approaches enable NeRF editing, such as object
removal, 3D shape modification, or material property manipulation, the manual
annotation prior to such edits makes the process tedious. Additionally,
traditional 2D interaction tools lack an accurate sense of 3D space, preventing
precise manipulation and editing of scenes. In this paper, we introduce a novel
approach, called Blueprint Neural Field (BluNF), to address these editing
issues. BluNF provides a robust and user-friendly 2D blueprint, enabling
intuitive scene editing. By leveraging implicit neural representation, BluNF
constructs a blueprint of a scene using prior semantic and depth information.
The generated blueprint allows effortless editing and manipulation of NeRF
representations. We demonstrate BluNF's editability through an intuitive
click-and-change mechanism, enabling 3D manipulations, such as masking,
appearance modification, and object removal. Our approach significantly
contributes to visual content creation, paving the way for further research in
this area.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 17:53:25 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Courant",
"Robin",
""
],
[
"Wang",
"Xi",
""
],
[
"Christie",
"Marc",
""
],
[
"Kalogeiton",
"Vicky",
""
]
] |
new_dataset
| 0.998214 |
2309.04068
|
Xiaoqiang Wang
|
Xiaoqiang Wang, Yue Su, Dabin Zheng, Wei Lu
|
Two classes of reducible cyclic codes with large minimum symbol-pair
distances
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The high-density data storage technology aims to design high-capacity storage
at a relatively low cost. In order to achieve this goal, symbol-pair codes were
proposed by Cassuto and Blaum \cite{CB10,CB11} to handle channels that output
pairs of overlapping symbols. Such a channel is called symbol-pair read
channel, which introduce new concept called symbol-pair weight and minimum
symbol-pair distance. In this paper, we consider the parameters of two classes
of reducible cyclic codes under the symbol-pair metric. Based on the theory of
cyclotomic numbers and Gaussian period over finite fields, we show the possible
symbol-pair weights of these codes. Their minimum symbol-pair distances are
twice the minimum Hamming distances under some conditions. Moreover, we obtain
some three symbol-pair weight codes and determine their symbol-pair weight
distribution. A class of MDS symbol-pair codes is also established. Among other
results, we determine the values of some generalized cyclotomic numbers.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 01:50:13 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Wang",
"Xiaoqiang",
""
],
[
"Su",
"Yue",
""
],
[
"Zheng",
"Dabin",
""
],
[
"Lu",
"Wei",
""
]
] |
new_dataset
| 0.963403 |
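
The entry above works in the symbol-pair metric. To make the basic definitions concrete, the sketch below computes the cyclic pair-read vector, symbol-pair weight, and symbol-pair distance following the standard Cassuto-Blaum definitions; it is illustrative only and unrelated to the paper's code constructions.

```python
def pair_read(x):
    """Cyclic pair-read vector: ((x0,x1), (x1,x2), ..., (x_{n-1},x0))."""
    n = len(x)
    return [(x[i], x[(i + 1) % n]) for i in range(n)]

def symbol_pair_weight(x):
    """Number of nonzero symbol pairs in the pair-read vector."""
    return sum(1 for pair in pair_read(x) if pair != (0, 0))

def symbol_pair_distance(x, y):
    """Hamming distance between the two pair-read vectors."""
    return sum(1 for a, b in zip(pair_read(x), pair_read(y)) if a != b)

x = [1, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0]
print(symbol_pair_weight(x))       # 2: pairs (1,0) and (0,1) are nonzero
print(symbol_pair_distance(x, y))  # 2: one differing symbol affects two pairs
```
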
2309.04220
|
Junfeng Cheng
|
Junfeng Cheng, Mingdong Wu, Ruiyuan Zhang, Guanqi Zhan, Chao Wu, Hao
Dong
|
Score-PA: Score-based 3D Part Assembly
|
BMVC 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous 3D part assembly is a challenging task in the areas of robotics
and 3D computer vision. This task aims to assemble individual components into a
complete shape without relying on predefined instructions. In this paper, we
formulate this task from a novel generative perspective, introducing the
Score-based 3D Part Assembly framework (Score-PA) for 3D part assembly.
Score-based methods are, however, typically time-consuming during the inference
stage. To address this issue, we introduce a novel algorithm called the Fast
Predictor-Corrector Sampler (FPC) that accelerates the sampling process within
the framework. We employ various metrics to assess assembly quality and
diversity, and our evaluation results demonstrate that our algorithm
outperforms existing state-of-the-art approaches. We release our code at
https://github.com/J-F-Cheng/Score-PA_Score-based-3D-Part-Assembly.
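For readers unfamiliar with the sampling loop being accelerated, below is a
hedged sketch of a generic predictor-corrector sampler on a toy 1D Gaussian
target. The noise schedule, the corrector step-size heuristic, and the
closed-form score are illustrative assumptions; they do not reproduce the
paper's FPC sampler or its learned pose network.

```python
# Generic predictor-corrector sampling for a score-based model (toy 1D example).
import numpy as np

rng = np.random.default_rng(0)

def score(x, sigma):
    # Exact score of N(0, 1) data convolved with Gaussian noise of std sigma.
    return -x / (1.0 + sigma**2)

sigmas = np.geomspace(10.0, 0.01, 100)       # noise schedule (assumed)
x = sigmas[0] * rng.standard_normal(1000)    # start from the wide prior
snr = 0.16                                   # corrector signal-to-noise ratio (assumed)

for i in range(len(sigmas) - 1):
    s, s_next = sigmas[i], sigmas[i + 1]
    # Predictor: one reverse-diffusion (Euler) step down the noise schedule.
    x = x + (s**2 - s_next**2) * score(x, s) \
        + np.sqrt(s**2 - s_next**2) * rng.standard_normal(x.shape)
    # Corrector: one Langevin MCMC step at the new noise level.
    g = score(x, s_next)
    step = 2.0 * (snr / (np.sqrt(np.mean(g**2)) + 1e-12))**2   # step-size heuristic
    x = x + step * g + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)

print(x.mean(), x.std())   # samples should approach the unit-variance target
```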
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 09:10:03 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Cheng",
"Junfeng",
""
],
[
"Wu",
"Mingdong",
""
],
[
"Zhang",
"Ruiyuan",
""
],
[
"Zhan",
"Guanqi",
""
],
[
"Wu",
"Chao",
""
],
[
"Dong",
"Hao",
""
]
] |
new_dataset
| 0.998838 |
2309.04221
|
Thach V. Bui
|
Thach V. Bui, Jonathan Scarlett
|
Concomitant Group Testing
|
15 pages, 3 figures, 1 table
| null | null | null |
cs.IT cs.LG math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a variation of the group testing problem
capturing the idea that a positive test requires a combination of multiple
``types'' of item. Specifically, we assume that there are multiple disjoint
\emph{semi-defective sets}, and a test is positive if and only if it contains
at least one item from each of these sets. The goal is to reliably identify all
of the semi-defective sets using as few tests as possible, and we refer to this
problem as \textit{Concomitant Group Testing} (ConcGT). We derive a variety of
algorithms for this task, focusing primarily on the case that there are two
semi-defective sets. Our algorithms are distinguished by (i) whether they are
deterministic (zero-error) or randomized (small-error), and (ii) whether they
are non-adaptive, fully adaptive, or have limited adaptivity (e.g., 2 or 3
stages). Both our deterministic adaptive algorithm and our randomized
algorithms (non-adaptive or limited adaptivity) are order-optimal in broad
scaling regimes of interest, and improve significantly over baseline results
that are based on solving a more general problem as an intermediate step (e.g.,
hypergraph learning).
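For intuition about the test model defined above, the snippet below simulates
the outcome of a single test: it is positive only if the test pool intersects
every semi-defective set. The item universe, the two example sets, and the
random pool are arbitrary illustrative choices, not the paper's algorithms.

```python
# Illustrative ConcGT test-outcome model (two semi-defective sets assumed).
import random

def test_outcome(test, semi_defective_sets):
    """A test is positive iff it contains at least one item from EACH set."""
    return all(test & s for s in semi_defective_sets)

sets = [{3, 17, 42}, {8, 55}]               # two disjoint semi-defective sets (assumed)
print(test_outcome({3, 8, 99}, sets))       # True: the test hits both sets
print(test_outcome({3, 17, 99}, sets))      # False: the test misses the second set

random.seed(0)
pool = set(random.sample(range(100), 30))   # one random non-adaptive test of size 30
print(test_outcome(pool, sets))
```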
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 09:11:12 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Bui",
"Thach V.",
""
],
[
"Scarlett",
"Jonathan",
""
]
] |
new_dataset
| 0.959663 |
2309.04228
|
Felix Rosberg
|
Felix Rosberg, Eren Erdal Aksoy, Cristofer Englund, Fernando
Alonso-Fernandez
|
FIVA: Facial Image and Video Anonymization and Anonymization Defense
|
Accepted to ICCVW 2023 - DFAD 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we present a new approach for facial anonymization in images
and videos, abbreviated as FIVA. Our proposed method is able to maintain the
same face anonymization consistently over frames with our suggested
identity tracking and to guarantee a strong difference from the original face.
FIVA allows for zero true positives at a false acceptance rate of 0.001. Our work
considers the important security issue of reconstruction attacks and
investigates adversarial noise, uniform noise, and parameter noise to disrupt
reconstruction attacks. In this regard, we apply different defense and
protection methods against these privacy threats to demonstrate the scalability
of FIVA. On top of this, we also show that reconstruction attack models can be
used for detection of deep fakes. Last but not least, we provide experimental
results showing how FIVA can even enable face swapping, which is purely trained
on a single target image.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 09:34:48 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Rosberg",
"Felix",
""
],
[
"Aksoy",
"Eren Erdal",
""
],
[
"Englund",
"Cristofer",
""
],
[
"Alonso-Fernandez",
"Fernando",
""
]
] |
new_dataset
| 0.995816 |
2309.04245
|
Kinga Skorupska
|
Bartosz Muczy\'nski, Kinga Skorupska, Katarzyna Abramczuk, Cezary
Biele, Zbigniew Bohdanowicz, Daniel Cnotkowski, Jazmin Collins, Wies{\l}aw
Kope\'c, Jaros{\l}aw Kowalski, Grzegorz Pochwatko, Thomas Logan
|
VR Accessibility in Distance Adult Education
|
7 pages, 1 figure
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As virtual reality (VR) technology becomes more pervasive, it continues to
find multiple new uses beyond research laboratories. One of them is distance
adult education -- the potential of VR to provide valuable education
experiences is massive, despite the current barriers to its widespread
application. Nevertheless, recent trends demonstrate clearly that VR is on the
rise in education settings, and VR-only courses are becoming more popular
across the globe. This trend will continue as more affordable VR solutions are
released commercially, increasing the number of education institutions that
benefit from the technology. No accessibility guidelines exist at present that
are created specifically for the design, development, and use of VR hardware
and software in distance education. The purpose of this workshop is to address
this niche. It gathers researchers and practitioners who are interested in
education and intend to work together to formulate a set of practical
guidelines for the use of VR in distance adult education to make it accessible
to a wider range of people.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 10:21:51 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Muczyński",
"Bartosz",
""
],
[
"Skorupska",
"Kinga",
""
],
[
"Abramczuk",
"Katarzyna",
""
],
[
"Biele",
"Cezary",
""
],
[
"Bohdanowicz",
"Zbigniew",
""
],
[
"Cnotkowski",
"Daniel",
""
],
[
"Collins",
"Jazmin",
""
],
[
"Kopeć",
"Wiesław",
""
],
[
"Kowalski",
"Jarosław",
""
],
[
"Pochwatko",
"Grzegorz",
""
],
[
"Logan",
"Thomas",
""
]
] |
new_dataset
| 0.993159 |
2309.04295
|
Chengwu Liu
|
Chengwu Liu, Jianhao Shen, Huajian Xin, Zhengying Liu, Ye Yuan,
Haiming Wang, Wei Ju, Chuanyang Zheng, Yichun Yin, Lin Li, Ming Zhang, Qun
Liu
|
FIMO: A Challenge Formal Dataset for Automated Theorem Proving
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present FIMO, an innovative dataset comprising formal mathematical problem
statements sourced from the International Mathematical Olympiad (IMO)
Shortlisted Problems. Designed to facilitate advanced automated theorem proving
at the IMO level, FIMO is currently tailored for the Lean formal language. It
comprises 149 formal problem statements, accompanied by both informal problem
descriptions and their corresponding LaTeX-based informal proofs. Through
initial experiments involving GPT-4, our findings underscore the existing
limitations in current methodologies, indicating a substantial journey ahead
before achieving satisfactory IMO-level automated theorem proving outcomes.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 12:34:28 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Liu",
"Chengwu",
""
],
[
"Shen",
"Jianhao",
""
],
[
"Xin",
"Huajian",
""
],
[
"Liu",
"Zhengying",
""
],
[
"Yuan",
"Ye",
""
],
[
"Wang",
"Haiming",
""
],
[
"Ju",
"Wei",
""
],
[
"Zheng",
"Chuanyang",
""
],
[
"Yin",
"Yichun",
""
],
[
"Li",
"Lin",
""
],
[
"Zhang",
"Ming",
""
],
[
"Liu",
"Qun",
""
]
] |
new_dataset
| 0.999835 |
2309.04347
|
Weixing Zhang
|
Weixing Zhang, Jan-Philipp Stegh\"ofer, Regina Hebig, Daniel Str\"uber
|
A Rapid Prototyping Language Workbench for Textual DSLs based on Xtext:
Vision and Progress
|
6 pages, 3 figures
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Metamodel-based DSL development in language workbenches like Xtext allows
language engineers to focus more on metamodels and domain concepts rather than
grammar details. However, the grammar generated from metamodels often requires
manual modification, which can be tedious and time-consuming. Especially when
it comes to rapid prototyping and language evolution, the grammar is
generated repeatedly, which means that language engineers need to repeat such
manual modifications over and over. Previous work introduced GrammarOptimizer,
which automatically improves the generated grammar using optimization rules.
However, the optimization rules need to be configured manually, which lacks
user-friendliness and convenience. In this paper, we present our vision for and
current progress towards a language workbench that integrates
GrammarOptimizer's grammar optimization rules to support rapid prototyping and
evolution of metamodel-based languages. It provides a visual configuration of
optimization rules and a real-time preview of the effects of grammar
optimization to address the limitations of GrammarOptimizer. Furthermore, it
supports the inference of a grammar based on examples from model instances and
offers a selection of language styles. These features aim to enhance the
automation level of metamodel-based DSL development with Xtext and assist
language engineers in iterative development and rapid prototyping. Our paper
discusses the potential and applications of this language workbench, as well as
how it fills the gaps in existing language workbenches.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 14:17:00 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Zhang",
"Weixing",
""
],
[
"Steghöfer",
"Jan-Philipp",
""
],
[
"Hebig",
"Regina",
""
],
[
"Strüber",
"Daniel",
""
]
] |
new_dataset
| 0.994764 |
2309.04372
|
Sijia Li
|
Sijia Li, Chen Chen, Haonan Lu
|
MoEController: Instruction-based Arbitrary Image Manipulation with
Mixture-of-Expert Controllers
|
5 pages,6 figures
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion-model-based text-guided image generation has recently made
astounding progress, producing fascinating results in open-domain image
manipulation tasks. Few models, however, currently have complete zero-shot
capabilities for both global and local image editing due to the complexity and
diversity of image manipulation tasks. In this work, we propose a method with
mixture-of-expert (MOE) controllers to align the text-guided capacity of
diffusion models with different kinds of human instructions, enabling our model
to handle various open-domain image manipulation tasks with natural language
instructions. First, we use large language models (ChatGPT) and conditional
image synthesis models (ControlNet) to generate a large-scale global image
transfer dataset in addition to the instruction-based local image editing
dataset. Then, using an MOE technique and task-specific adaptation training on
a large-scale dataset, our conditional diffusion model can edit images globally
and locally. Extensive experiments demonstrate that our approach performs
surprisingly well on various image manipulation tasks when dealing with
open-domain images and arbitrary human instructions. Please refer to our
project page: [https://oppo-mente-lab.github.io/moe_controller/]
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 15:06:05 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Li",
"Sijia",
""
],
[
"Chen",
"Chen",
""
],
[
"Lu",
"Haonan",
""
]
] |
new_dataset
| 0.993161 |
2309.04379
|
Dongming Wu
|
Dongming Wu, Wencheng Han, Tiancai Wang, Yingfei Liu, Xiangyu Zhang,
Jianbing Shen
|
Language Prompt for Autonomous Driving
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new trend in the computer vision community is to capture objects of
interest following flexible human command represented by a natural language
prompt. However, the progress of using language prompts in driving scenarios is
stuck in a bottleneck due to the scarcity of paired prompt-instance data. To
address this challenge, we propose the first object-centric language prompt set
for driving scenes within 3D, multi-view, and multi-frame space, named
NuPrompt. It expands the nuScenes dataset by constructing a total of 35,367
language descriptions, each referring to an average of 5.3 object tracks. Based
on the object-text pairs from the new benchmark, we formulate a new
prompt-based driving task, i.e., employing a language prompt to predict the
described object trajectory across views and frames. Furthermore, we provide a
simple end-to-end baseline model based on Transformer, named PromptTrack.
Experiments show that our PromptTrack achieves impressive performance on
NuPrompt. We hope this work can provide more new insights for the autonomous
driving community. Dataset and Code will be made public at
\href{https://github.com/wudongming97/Prompt4Driving}{https://github.com/wudongming97/Prompt4Driving}.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 15:21:07 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Wu",
"Dongming",
""
],
[
"Han",
"Wencheng",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Liu",
"Yingfei",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Shen",
"Jianbing",
""
]
] |
new_dataset
| 0.999779 |
2309.04422
|
Thomas Huang
|
Thomas E. Huang, Yifan Liu, Luc Van Gool, Fisher Yu
|
Video Task Decathlon: Unifying Image and Video Tasks in Autonomous
Driving
|
ICCV 2023, project page at https://www.vis.xyz/pub/vtd
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Performing multiple heterogeneous visual tasks in dynamic scenes is a
hallmark of human perception capability. Despite remarkable progress in image
and video recognition via representation learning, current research still
focuses on designing specialized networks for singular, homogeneous, or simple
combinations of tasks. We instead explore the construction of a unified model
for major image and video recognition tasks in autonomous driving with diverse
input and output structures. To enable such an investigation, we design a new
challenge, Video Task Decathlon (VTD), which includes ten representative image
and video tasks spanning classification, segmentation, localization, and
association of objects and pixels. On VTD, we develop our unified network,
VTDNet, that uses a single structure and a single set of weights for all ten
tasks. VTDNet groups similar tasks and employs task interaction stages to
exchange information within and between task groups. Given the impracticality
of labeling all tasks on all frames, and the performance degradation associated
with joint training of many tasks, we design a Curriculum training,
Pseudo-labeling, and Fine-tuning (CPF) scheme to successfully train VTDNet on
all tasks and mitigate performance loss. Armed with CPF, VTDNet significantly
outperforms its single-task counterparts on most tasks with only 20% overall
computations. VTD is a promising new direction for exploring the unification of
perception tasks in autonomous driving.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 16:33:27 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Huang",
"Thomas E.",
""
],
[
"Liu",
"Yifan",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Yu",
"Fisher",
""
]
] |
new_dataset
| 0.996907 |
2309.04437
|
Brandon Zhao
|
Brandon Zhao, Aviad Levis, Liam Connor, Pratul P. Srinivasan,
Katherine L. Bouman
|
Single View Refractive Index Tomography with Neural Fields
| null | null | null | null |
cs.CV astro-ph.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Refractive Index Tomography is an inverse problem in which we seek to
reconstruct a scene's 3D refractive field from 2D projected image measurements.
The refractive field is not visible itself, but instead affects how the path of
a light ray is continuously curved as it travels through space. Refractive
fields appear across a wide variety of scientific applications, from
translucent cell samples in microscopy to fields of dark matter bending light
from faraway galaxies. This problem poses a unique challenge because the
refractive field directly affects the path that light takes, making its
recovery a non-linear problem. In addition, in contrast with traditional
tomography, we seek to recover the refractive field using a projected image
from only a single viewpoint by leveraging knowledge of light sources scattered
throughout the medium. In this work, we introduce a method that uses a
coordinate-based neural network to model the underlying continuous refractive
field in a scene. We then use explicit modeling of rays' 3D spatial curvature
to optimize the parameters of this network, reconstructing refractive fields
with an analysis-by-synthesis approach. The efficacy of our approach is
demonstrated by recovering refractive fields in simulation, and analyzing how
recovery is affected by the light source distribution. We then test our method
on a simulated dark matter mapping problem, where we recover the refractive
field underlying a realistic simulated dark matter distribution.
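As a toy version of the forward model described in this abstract, the sketch
below integrates the classical ray equation d/ds(n dr/ds) = grad n through a
smooth 2D refractive field. The Gaussian bump field, the Euler integrator, and
the step size are illustrative assumptions; the paper instead models the field
with a coordinate-based neural network and fits it to image measurements.

```python
# Toy 2D ray bending through an assumed analytic refractive field.
import numpy as np

def n(p):                                   # assumed field: small Gaussian bump
    return 1.0 + 0.1 * np.exp(-np.sum((p - np.array([0.5, 0.5]))**2) / 0.05)

def grad_n(p, eps=1e-5):                    # finite-difference gradient of n
    g = np.zeros(2)
    for k in range(2):
        dp = np.zeros(2)
        dp[k] = eps
        g[k] = (n(p + dp) - n(p - dp)) / (2 * eps)
    return g

p = np.array([0.0, 0.45])                   # ray start
d = np.array([1.0, 0.0])                    # initial unit direction
ds = 1e-3                                   # arc-length step
for _ in range(1000):
    v = n(p) * d                            # "momentum" n * dr/ds
    v = v + ds * grad_n(p)                  # d/ds (n dr/ds) = grad n
    d = v / np.linalg.norm(v)               # renormalize the direction
    p = p + ds * d                          # advance the ray
print(p)                                    # exit point, bent slightly toward the bump
```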
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 17:01:34 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Zhao",
"Brandon",
""
],
[
"Levis",
"Aviad",
""
],
[
"Connor",
"Liam",
""
],
[
"Srinivasan",
"Pratul P.",
""
],
[
"Bouman",
"Katherine L.",
""
]
] |
new_dataset
| 0.993132 |
2309.04453
|
Chris Hayner
|
Daniel Broyles, Christopher R. Hayner, Karen Leung
|
WiSARD: A Labeled Visual and Thermal Image Dataset for Wilderness Search
and Rescue
| null |
2022 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Kyoto, Japan, 2022, pp. 9467-9474
|
10.1109/IROS47612.2022.9981298
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sensor-equipped unoccupied aerial vehicles (UAVs) have the potential to help
reduce search times and alleviate safety risks for first responders carrying
out Wilderness Search and Rescue (WiSAR) operations, the process of finding and
rescuing person(s) lost in wilderness areas. Unfortunately, visual sensors
alone do not address the need for robustness across all the possible terrains,
weather, and lighting conditions that WiSAR operations can be conducted in. The
use of multi-modal sensors, specifically visual-thermal cameras, is critical in
enabling WiSAR UAVs to perform in diverse operating conditions. However, due to
the unique challenges posed by the wilderness context, existing dataset
benchmarks are inadequate for developing vision-based algorithms for autonomous
WiSAR UAVs. To this end, we present WiSARD, a dataset with roughly 56,000
labeled visual and thermal images collected from UAV flights in various
terrains, seasons, weather, and lighting conditions. To the best of our
knowledge, WiSARD is the first large-scale dataset collected with multi-modal
sensors for autonomous WiSAR operations. We envision that our dataset will
provide researchers with a diverse and challenging benchmark that can test the
robustness of their algorithms when applied to real-world (life-saving)
applications.
|
[
{
"version": "v1",
"created": "Fri, 8 Sep 2023 17:22:26 GMT"
}
] | 2023-09-11T00:00:00 |
[
[
"Broyles",
"Daniel",
""
],
[
"Hayner",
"Christopher R.",
""
],
[
"Leung",
"Karen",
""
]
] |
new_dataset
| 0.999879 |
2203.05893
|
Baihong Lin
|
Hanxing Chi, Baihong Lin, Jun Hu, Liang Wang
|
DRTAM: Dual Rank-1 Tensor Attention Module
|
There exist some problems with the experiments. Besides, we find that
the structure of DRTAM can be optimized
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, attention mechanisms have been extensively investigated in computer
vision, but few of them show excellent performance on both large and mobile
networks. This paper proposes Dual Rank-1 Tensor Attention Module (DRTAM), a
novel residual-attention-learning-guided attention module for feed-forward
convolutional neural networks. Given a 3D feature tensor map, DRTAM first
generates three 2D feature descriptors along the three axes. Then, using these
three descriptors, DRTAM sequentially infers two rank-1 tensor attention maps,
the initial attention map and the complement attention map, then combines and
multiplies them with the input feature map for adaptive feature refinement (see
Fig. 1(c)). To generate the two attention maps, DRTAM introduces a rank-1 tensor
attention module (RTAM) and a residual descriptors extraction module (RDEM):
RTAM divides each 2D feature descriptor into several chunks and generates the
three factor vectors of a rank-1 tensor attention map by employing strip pooling
on each chunk, so that local and long-range contextual information can be
captured along the three dimensions, respectively; RDEM generates three 2D
feature descriptors of the residual feature to produce the complement attention
map, using the three factor vectors of the initial attention map and the three
descriptors of the input feature. Extensive experimental results on ImageNet-1K,
MS COCO and PASCAL VOC demonstrate that DRTAM achieves competitive performance
on both large and mobile networks compared with other state-of-the-art attention
modules.
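To make the rank-1 structure above concrete, the sketch below forms a full 3D
attention map as the outer product of three factor vectors and uses it to
refine a feature tensor. Plain mean pooling stands in for the paper's chunked
strip pooling, and the shapes are arbitrary, so this is not the DRTAM
implementation.

```python
# Hedged sketch of a rank-1 tensor attention map built from three factor vectors.
import numpy as np

def rank1_attention(feat):
    """feat: (C, H, W) tensor -> (C, H, W) rank-1 attention map in (0, 1)."""
    c = feat.mean(axis=(1, 2))              # factor vector along channels
    h = feat.mean(axis=(0, 2))              # factor vector along height
    w = feat.mean(axis=(0, 1))              # factor vector along width
    attn = np.einsum("c,h,w->chw", c, h, w) # rank-1 outer product
    return 1.0 / (1.0 + np.exp(-attn))      # sigmoid gate

feat = np.random.rand(16, 8, 8).astype(np.float32)
refined = feat * rank1_attention(feat)      # adaptive feature refinement
print(refined.shape)
```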
|
[
{
"version": "v1",
"created": "Fri, 11 Mar 2022 12:52:44 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 05:59:53 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Chi",
"Hanxing",
""
],
[
"Lin",
"Baihong",
""
],
[
"Hu",
"Jun",
""
],
[
"Wang",
"Liang",
""
]
] |
new_dataset
| 0.989061 |
2205.12215
|
Gabriele Sarti
|
Gabriele Sarti, Arianna Bisazza, Ana Guerberof Arenas, Antonio Toral
|
DivEMT: Neural Machine Translation Post-Editing Effort Across
Typologically Diverse Languages
|
EMNLP 2022, materials: https://github.com/gsarti/divemt
|
Proceedings of EMNLP (2022) 7795-7816
|
10.18653/v1/2022.emnlp-main.532
| null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce DivEMT, the first publicly available post-editing study of
Neural Machine Translation (NMT) over a typologically diverse set of target
languages. Using a strictly controlled setup, 18 professional translators were
instructed to translate or post-edit the same set of English documents into
Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process,
their edits, keystrokes, editing times and pauses were recorded, enabling an
in-depth, cross-lingual evaluation of NMT quality and post-editing
effectiveness. Using this new dataset, we assess the impact of two
state-of-the-art NMT systems, Google Translate and the multilingual mBART-50
model, on translation productivity. We find that post-editing is consistently
faster than translation from scratch. However, the magnitude of productivity
gains varies widely across systems and languages, highlighting major
disparities in post-editing effectiveness for languages at different degrees of
typological relatedness to English, even when controlling for system
architecture and training data size. We publicly release the complete dataset
including all collected behavioral data, to foster new research on the
translation capabilities of NMT systems for typologically diverse languages.
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 17:22:52 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Oct 2022 16:38:00 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Sarti",
"Gabriele",
""
],
[
"Bisazza",
"Arianna",
""
],
[
"Arenas",
"Ana Guerberof",
""
],
[
"Toral",
"Antonio",
""
]
] |
new_dataset
| 0.980987 |
2206.03223
|
Alberto Pretto
|
Henrik Andreasson, Giorgio Grisetti, Todor Stoyanov, and Alberto
Pretto
|
Sensors for Mobile Robots
|
This chapter appears in: Ang, M.H., Khatib, O., Siciliano, B. (eds)
Encyclopedia of Robotics. Springer, Berlin, Heidelberg
|
In: Ang, M.H., Khatib, O., Siciliano, B. (eds) Encyclopedia of
Robotics. Springer, Berlin, Heidelberg (2023)
|
10.1007/978-3-642-41610-1_159-1
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A sensor is a device that converts a physical parameter or an environmental
characteristic (e.g., temperature, distance, speed, etc.) into a signal that
can be digitally measured and processed to perform specific tasks. Mobile
robots need sensors to measure properties of their environment, thus allowing
for safe navigation, complex perception and corresponding actions, and
effective interactions with other agents that populate it. Sensors used by
mobile robots range from simple tactile sensors, such as bumpers, to complex
vision-based sensors such as structured light RGB-D cameras. All of them
provide a digital output (e.g., a string, a set of values, a matrix, etc.) that
can be processed by the robot's computer. Such output is typically obtained by
discretizing one or more analog electrical signals by using an Analog to
Digital Converter (ADC) included in the sensor. In this chapter we present the
most common sensors used in mobile robotics, providing an introduction to their
taxonomy, basic features, and specifications. The description of the
functionalities and the types of applications follows a bottom-up approach: the
basic principles and components on which the sensors are based are presented
before describing real-world sensors, which are generally based on multiple
technologies and basic devices.
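As a small worked example of the analog-to-digital conversion step mentioned
above, the function below maps a clamped input voltage to an n-bit integer
code. The 3.3 V reference and 10-bit resolution are assumed values typical of
hobby-grade boards, not tied to any particular sensor in the chapter.

```python
# Idealized n-bit ADC: voltage in [0, v_ref] -> integer code in [0, 2**bits - 1].
def adc_code(voltage, v_ref=3.3, bits=10):
    voltage = min(max(voltage, 0.0), v_ref)      # clamp to the ADC input range
    return round(voltage / v_ref * (2**bits - 1))

print(adc_code(1.2))    # a 1.2 V analog reading maps to code 372
```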
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 12:14:23 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 16:00:18 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Sep 2023 16:48:17 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Andreasson",
"Henrik",
""
],
[
"Grisetti",
"Giorgio",
""
],
[
"Stoyanov",
"Todor",
""
],
[
"Pretto",
"Alberto",
""
]
] |
new_dataset
| 0.9987 |
2208.01708
|
Vinay Ummadi Mr
|
Vinay Ummadi, Aravind Gundlapalle, Althaf Shaik, Shaik Mohammad Rafi B
|
Autonomous Agriculture Robot for Smart Farming
|
Due to author interest conflicts
| null | null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This project aims to develop and demonstrate an intelligent ground robot,
referred to as the Agriculture Application Robot (AAR), capable of conducting
semi-autonomous farm operations for different low-height vegetable crops. AAR is
a lightweight, solar-electric powered robot that uses intelligent perception for
detecting and classifying plants and their characteristics. The system also has
a robotic arm for the autonomous weed-cutting process. The robot can spray
fertilizer, insecticide, herbicide, and other fluids onto targets such as crops,
weeds, and other pests. In addition, it provides information for future research
into higher-level tasks such as yield estimation and crop and soil health
monitoring. We present the design of the robot and the associated experiments,
which show promising results in real-world environments.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 19:38:48 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 06:07:17 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Ummadi",
"Vinay",
""
],
[
"Gundlapalle",
"Aravind",
""
],
[
"Shaik",
"Althaf",
""
],
[
"B",
"Shaik Mohammad Rafi",
""
]
] |
new_dataset
| 0.999491 |
2210.17040
|
Jia Li
|
Jia Li, Ge Li, Zhuo Li, Zhi Jin, Xing Hu, Kechi Zhang, Zhiyi Fu
|
CodeEditor: Learning to Edit Source Code with Pre-trained Models
|
Accepted by the ACM Transactions on Software Engineering and
Methodology (TOSEM)
| null |
10.1145/3597207
| null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developers often perform repetitive code editing activities for various
reasons (e.g., code refactoring) during software development. Pre-trained code
editing models have achieved state-of-the-art (SOTA) results. Pre-trained
models are first pre-trained with pre-training tasks and fine-tuned with the
code editing task. Existing pre-training tasks are mainly code infilling tasks
(e.g., masked language modeling), which are derived from the natural language
processing field and are not designed for automatic code editing.
This paper proposes a novel pre-training task specialized in code editing and
presents an effective pre-trained code editing model named CodeEditor. Our
pre-training task further improves the performance and generalization ability
of code editing models. Specifically, we collect a large number of real-world code
snippets as the ground truth and use a powerful generator to rewrite them into
mutated versions. Then, we pre-train our CodeEditor to edit mutated versions
into the corresponding ground truth, to learn edit patterns. We conduct
experiments on four code editing datasets and evaluate the pre-trained
CodeEditor in three settings. (1) In the fine-tuning setting, we train the
pre-trained CodeEditor with four datasets and evaluate it on the test data.
CodeEditor outperforms the SOTA baselines by 15%, 25.5%, 9.4%, and 26.6% on the
four datasets, respectively. (2) In the few-shot setting, we train the pre-trained CodeEditor
with limited data and evaluate it on the test data. CodeEditor substantially
performs better than all baselines. (3) In the zero-shot setting, CodeEditor
correctly edits 1,113 programs, while the SOTA baselines cannot.
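To make the data construction behind the pre-training task concrete, here is a
hedged sketch that produces (mutated, ground-truth) training pairs. The naive
random token replacement is only a stand-in for the paper's powerful generator,
and the toy snippet and vocabulary are arbitrary.

```python
# Hedged sketch: build (mutated, original) pairs for an edit-style pre-training task.
import random

random.seed(0)

def mutate(tokens, p=0.15, vocab=("x", "y", "tmp", "0", "1")):
    """Randomly replace a fraction of tokens to create a plausible 'wrong' version."""
    return [random.choice(vocab) if random.random() < p else t for t in tokens]

ground_truth = "int add ( int a , int b ) { return a + b ; }".split()
pairs = [(mutate(ground_truth), ground_truth) for _ in range(3)]
for mutated, target in pairs:
    print("INPUT :", " ".join(mutated))
    print("TARGET:", " ".join(target))
# A model pre-trained on such pairs learns to map mutated code back to the
# original, i.e., it learns edit patterns before fine-tuning on real edits.
```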
|
[
{
"version": "v1",
"created": "Mon, 31 Oct 2022 03:26:33 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Aug 2023 08:38:17 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Sep 2023 11:35:51 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Li",
"Jia",
""
],
[
"Li",
"Ge",
""
],
[
"Li",
"Zhuo",
""
],
[
"Jin",
"Zhi",
""
],
[
"Hu",
"Xing",
""
],
[
"Zhang",
"Kechi",
""
],
[
"Fu",
"Zhiyi",
""
]
] |
new_dataset
| 0.99586 |
2302.10602
|
Hu Gao
|
Hu Gao and Zhihui Li and Depeng Dang and Ning Wang and Jingfan Yang
|
SU-Net: Pose estimation network for non-cooperative spacecraft on-orbit
|
We need to overhaul the paper and innovate
| null |
10.1038/s41598-023-38974-1
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spacecraft pose estimation plays a vital role in many on-orbit space
missions, such as rendezvous and docking, debris removal, and on-orbit
maintenance. At present, space images exhibit widely varying lighting
conditions, high contrast, and low resolution, so pose estimation of space
objects is more challenging than that of objects on Earth. In this paper, we
analyze the radar image characteristics of spacecraft on-orbit and then propose
a new deep learning network structure named Dense Residual U-shaped Network
(DR-U-Net) to extract image features. We further introduce a novel neural
network based on DR-U-Net, namely the Spacecraft U-shaped Network (SU-Net), to
achieve end-to-end pose estimation for non-cooperative spacecraft.
Specifically, SU-Net first preprocesses the image of the non-cooperative
spacecraft, and transfer learning is then used for pre-training. Subsequently,
in order to address radar image blur and the difficulty of recognizing
spacecraft contours, we add residual connections and dense connections to the
U-Net backbone, yielding DR-U-Net. In this way, the feature loss and the
complexity of the model are reduced, and the degradation of the deep neural
network during training is avoided. Finally, a feedforward neural network layer
is used for pose estimation of the non-cooperative spacecraft on-orbit.
Experiments show that the proposed method does not rely on hand-crafted,
object-specific features, that the model is robust, and that its accuracy
outperforms state-of-the-art pose estimation methods. The absolute error ranges
from 0.1557 to 0.4491, the mean error is about 0.302, and the standard deviation
is about 0.065.
|
[
{
"version": "v1",
"created": "Tue, 21 Feb 2023 11:14:01 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2023 09:32:24 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Gao",
"Hu",
""
],
[
"Li",
"Zhihui",
""
],
[
"Dang",
"Depeng",
""
],
[
"Wang",
"Ning",
""
],
[
"Yang",
"Jingfan",
""
]
] |
new_dataset
| 0.999178 |
2303.09681
|
Shihao Zou
|
Shihao Zou, Yuxuan Mu, Xinxin Zuo, Sen Wang, Li Cheng
|
Event-based Human Pose Tracking by Spiking Spatiotemporal Transformer
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event camera, as an emerging biologically-inspired vision sensor for
capturing motion dynamics, presents new potential for 3D human pose tracking,
or video-based 3D human pose estimation. However, existing works in pose
tracking either require the presence of additional gray-scale images to
establish a solid starting pose, or ignore the temporal dependencies altogether
by collapsing segments of event streams to form static event frames.
Meanwhile, although the effectiveness of Artificial Neural Networks (ANNs,
a.k.a. dense deep learning) has been showcased in many event-based tasks, the
use of ANNs tends to neglect the fact that compared to the dense frame-based
image sequences, the occurrence of events from an event camera is
spatiotemporally much sparser. Motivated by the above-mentioned issues, we
present in this paper a dedicated end-to-end sparse deep learning approach for
event-based pose tracking: 1) to our knowledge this is the first time that 3D
human pose tracking is obtained from events only, thus eliminating the need to
access any frame-based images as part of the input; 2) our approach is based
entirely upon the framework of Spiking Neural Networks (SNNs), which consists
of Spike-Element-Wise (SEW) ResNet and a novel Spiking Spatiotemporal
Transformer; 3) a large-scale synthetic dataset is constructed that features a
broad and diverse set of annotated 3D human motions, as well as longer hours of
event stream data, named SynEventHPD. Empirical experiments demonstrate that,
with superior performance over the state-of-the-art (SOTA) ANNs counterparts,
our approach also achieves a significant computation reduction of 80% in FLOPS.
Furthermore, our proposed method also outperforms SOTA SNNs in the regression
task of human pose tracking. Our implementation is available at
https://github.com/JimmyZou/HumanPoseTracking_SNN and dataset will be released
upon paper acceptance.
|
[
{
"version": "v1",
"created": "Thu, 16 Mar 2023 22:56:12 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2023 02:31:52 GMT"
},
{
"version": "v3",
"created": "Wed, 10 May 2023 23:50:23 GMT"
},
{
"version": "v4",
"created": "Wed, 6 Sep 2023 21:34:59 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Zou",
"Shihao",
""
],
[
"Mu",
"Yuxuan",
""
],
[
"Zuo",
"Xinxin",
""
],
[
"Wang",
"Sen",
""
],
[
"Cheng",
"Li",
""
]
] |
new_dataset
| 0.996934 |
2303.10606
|
Mehrdad RafiePour
|
Mehrdad Rafiepour, Javad Salimi Sartakhti
|
CTRAN: CNN-Transformer-based Network for Natural Language Understanding
| null |
Engineering Applications Of Artificial Intelligence, Volume 126,
Part C, 2023
|
10.1016/j.engappai.2023.107013
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intent-detection and slot-filling are the two main tasks in natural language
understanding. In this study, we propose CTRAN, a novel encoder-decoder
CNN-Transformer-based architecture for intent-detection and slot-filling. In
the encoder, we use BERT, followed by several convolutional layers, and
rearrange the output using window feature sequence. We use stacked Transformer
encoders after the window feature sequence. For the intent-detection decoder,
we utilize self-attention followed by a linear layer. In the slot-filling
decoder, we introduce the aligned Transformer decoder, which utilizes a zero
diagonal mask, aligning output tags with input tokens. We apply our network on
ATIS and SNIPS, and surpass the current state-of-the-art in slot-filling on
both datasets. Furthermore, we incorporate the language model as word
embeddings, and show that this strategy yields a better result when compared to
the language model as an encoder.
|
[
{
"version": "v1",
"created": "Sun, 19 Mar 2023 08:57:39 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Rafiepour",
"Mehrdad",
""
],
[
"Sartakhti",
"Javad Salimi",
""
]
] |
new_dataset
| 0.999219 |
2305.12259
|
Harish Kumar Dureppagari
|
Harish K. Dureppagari, Chiranjib Saha, Harpreet S. Dhillon, R. Michael
Buehrer
|
NTN-based 6G Localization: Vision, Role of LEOs, and Open Problems
|
7 pages, 6 figures, submitted to IEEE Wireless Communications
Magazine
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Since the introduction of 5G Release 18, non-terrestrial networks (NTNs)
based positioning has garnered significant interest due to its numerous
applications, including emergency services, lawful intercept, and charging and
tariff services. This release considers single low-earth-orbit (LEO)
positioning explicitly for $\textit{location verification}$ purposes, which
requires a fairly coarse location estimate. To understand the future trajectory
of NTN-based localization in 6G, we first provide a comprehensive overview of
the evolution of 3rd Generation Partnership Project (3GPP) localization
techniques, with specific emphasis on the current activities in 5G related to
NTN location verification. We then delineate the suitability of LEOs for
location-based services and emphasize increased interest in LEO-based
positioning. In order to provide support for more accurate positioning in 6G
using LEOs, we identify two NTN positioning systems that are likely study items
for 6G: (i) multi-LEO positioning, and (ii) augmenting single-LEO and multi-LEO
setups with Global Navigation Satellite System (GNSS), especially when an
insufficient number of GNSS satellites (such as 2) are visible. We evaluate the
accuracy of both systems through a 3GPP-compliant simulation study using a
Cram\'{e}r-Rao lower bound (CRLB) analysis. Our findings suggest that NTN
technology has significant potential to provide accurate positioning of UEs in
scenarios where GNSS signals may be weak or unavailable, but there are
technical challenges in accommodating these solutions in 3GPP. We conclude with
a discussion on the research landscape and key open problems related to
NTN-based positioning.
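For readers unfamiliar with CRLB analysis, the snippet below computes a generic
Cramér-Rao lower bound for range-based (TOA-style) positioning from a few
anchors. The anchor geometry, the ranging noise, and the omission of clock bias
are simplifying assumptions and do not reproduce the paper's 3GPP-compliant
simulation study.

```python
# Generic position-error CRLB for range measurements with Gaussian noise.
import numpy as np

ue = np.array([0.0, 0.0, 0.0])                      # user position (km)
anchors = np.array([[600.0, 200.0, 800.0],          # assumed LEO-like anchors (km)
                    [-500.0, 400.0, 900.0],
                    [100.0, -700.0, 850.0],
                    [-300.0, -300.0, 1000.0]])
sigma = 0.01                                        # ranging std dev (km), assumed

J = np.zeros((3, 3))
for p in anchors:
    a = (ue - p) / np.linalg.norm(ue - p)           # unit line-of-sight direction
    J += np.outer(a, a) / sigma**2                  # Fisher information contribution

crlb = np.linalg.inv(J)
print("position error bound (km):", np.sqrt(np.trace(crlb)))
```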
|
[
{
"version": "v1",
"created": "Sat, 20 May 2023 18:25:17 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 17:09:39 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Dureppagari",
"Harish K.",
""
],
[
"Saha",
"Chiranjib",
""
],
[
"Dhillon",
"Harpreet S.",
""
],
[
"Buehrer",
"R. Michael",
""
]
] |
new_dataset
| 0.959429 |
2306.08754
|
Sungduk Yu
|
Sungduk Yu, Walter M. Hannah, Liran Peng, Jerry Lin, Mohamed Aziz
Bhouri, Ritwik Gupta, Bj\"orn L\"utjens, Justus C. Will, Gunnar Behrens,
Julius J. M. Busecke, Nora Loose, Charles Stern, Tom Beucler, Bryce E.
Harrop, Benjamin R. Hilman, Andrea M. Jenney, Savannah L. Ferretti, Nana Liu,
Anima Anandkumar, Noah D. Brenowitz, Veronika Eyring, Nicholas Geneva, Pierre
Gentine, Stephan Mandt, Jaideep Pathak, Akshay Subramaniam, Carl Vondrick,
Rose Yu, Laure Zanna, Tian Zheng, Ryan P. Abernathey, Fiaz Ahmed, David C.
Bader, Pierre Baldi, Elizabeth A. Barnes, Christopher S. Bretherton, Peter M.
Caldwell, Wayne Chuang, Yilun Han, Yu Huang, Fernando Iglesias-Suarez, Sanket
Jantre, Karthik Kashinath, Marat Khairoutdinov, Thorsten Kurth, Nicholas J.
Lutsko, Po-Lun Ma, Griffin Mooers, J. David Neelin, David A. Randall, Sara
Shamekh, Mark A. Taylor, Nathan M. Urban, Janni Yuval, Guang J. Zhang,
Michael S. Pritchard
|
ClimSim: An open large-scale dataset for training high-resolution
physics emulators in hybrid multi-scale climate simulators
| null | null | null | null |
cs.LG physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern climate projections lack adequate spatial and temporal resolution due
to computational constraints. A consequence is inaccurate and imprecise
predictions of critical processes such as storms. Hybrid methods that combine
physics with machine learning (ML) have introduced a new generation of higher
fidelity climate simulators that can sidestep Moore's Law by outsourcing
compute-hungry, short, high-resolution simulations to ML emulators. However,
this hybrid ML-physics simulation approach requires domain-specific treatment
and has been inaccessible to ML experts because of lack of training data and
relevant, easy-to-use workflows. We present ClimSim, the largest-ever dataset
designed for hybrid ML-physics research. It comprises multi-scale climate
simulations, developed by a consortium of climate scientists and ML
researchers. It consists of 5.7 billion pairs of multivariate input and output
vectors that isolate the influence of locally-nested, high-resolution,
high-fidelity physics on a host climate simulator's macro-scale physical state.
The dataset is global in coverage, spans multiple years at high sampling
frequency, and is designed such that resulting emulators are compatible with
downstream coupling into operational climate simulators. We implement a range
of deterministic and stochastic regression baselines to highlight the ML
challenges and their scoring. The data
(https://huggingface.co/datasets/LEAP/ClimSim_high-res,
https://huggingface.co/datasets/LEAP/ClimSim_low-res, and
https://huggingface.co/datasets/LEAP/ClimSim_low-res_aqua-planet) and code
(https://leap-stc.github.io/ClimSim) are released openly to support the
development of hybrid ML-physics and high-fidelity climate simulations for the
benefit of science and society.
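A minimal, hedged way to fetch the released files with the generic Hugging Face
Hub API is sketched below; it only lists and downloads the dataset repository
named in the links above and makes no assumption about the internal file schema
(the full snapshot is large, so the low-resolution repository is a sensible
starting point).

```python
# List and (optionally) download the dataset files via the generic Hub API.
from huggingface_hub import list_repo_files, snapshot_download

repo = "LEAP/ClimSim_low-res"                      # repo id taken from the links above
files = list_repo_files(repo, repo_type="dataset")
print(f"{len(files)} files, e.g. {files[:5]}")

local_dir = snapshot_download(repo, repo_type="dataset")   # large download
print("cached at:", local_dir)
```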
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 21:26:31 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2023 15:31:38 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Sep 2023 22:56:03 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Yu",
"Sungduk",
""
],
[
"Hannah",
"Walter M.",
""
],
[
"Peng",
"Liran",
""
],
[
"Lin",
"Jerry",
""
],
[
"Bhouri",
"Mohamed Aziz",
""
],
[
"Gupta",
"Ritwik",
""
],
[
"Lütjens",
"Björn",
""
],
[
"Will",
"Justus C.",
""
],
[
"Behrens",
"Gunnar",
""
],
[
"Busecke",
"Julius J. M.",
""
],
[
"Loose",
"Nora",
""
],
[
"Stern",
"Charles",
""
],
[
"Beucler",
"Tom",
""
],
[
"Harrop",
"Bryce E.",
""
],
[
"Hilman",
"Benjamin R.",
""
],
[
"Jenney",
"Andrea M.",
""
],
[
"Ferretti",
"Savannah L.",
""
],
[
"Liu",
"Nana",
""
],
[
"Anandkumar",
"Anima",
""
],
[
"Brenowitz",
"Noah D.",
""
],
[
"Eyring",
"Veronika",
""
],
[
"Geneva",
"Nicholas",
""
],
[
"Gentine",
"Pierre",
""
],
[
"Mandt",
"Stephan",
""
],
[
"Pathak",
"Jaideep",
""
],
[
"Subramaniam",
"Akshay",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Yu",
"Rose",
""
],
[
"Zanna",
"Laure",
""
],
[
"Zheng",
"Tian",
""
],
[
"Abernathey",
"Ryan P.",
""
],
[
"Ahmed",
"Fiaz",
""
],
[
"Bader",
"David C.",
""
],
[
"Baldi",
"Pierre",
""
],
[
"Barnes",
"Elizabeth A.",
""
],
[
"Bretherton",
"Christopher S.",
""
],
[
"Caldwell",
"Peter M.",
""
],
[
"Chuang",
"Wayne",
""
],
[
"Han",
"Yilun",
""
],
[
"Huang",
"Yu",
""
],
[
"Iglesias-Suarez",
"Fernando",
""
],
[
"Jantre",
"Sanket",
""
],
[
"Kashinath",
"Karthik",
""
],
[
"Khairoutdinov",
"Marat",
""
],
[
"Kurth",
"Thorsten",
""
],
[
"Lutsko",
"Nicholas J.",
""
],
[
"Ma",
"Po-Lun",
""
],
[
"Mooers",
"Griffin",
""
],
[
"Neelin",
"J. David",
""
],
[
"Randall",
"David A.",
""
],
[
"Shamekh",
"Sara",
""
],
[
"Taylor",
"Mark A.",
""
],
[
"Urban",
"Nathan M.",
""
],
[
"Yuval",
"Janni",
""
],
[
"Zhang",
"Guang J.",
""
],
[
"Pritchard",
"Michael S.",
""
]
] |
new_dataset
| 0.999856 |
2306.12760
|
Dani Lischinski
|
Ori Gordon and Omri Avrahami and Dani Lischinski
|
Blended-NeRF: Zero-Shot Object Generation and Blending in Existing
Neural Radiance Fields
|
16 pages, 14 figures. Project page:
https://www.vision.huji.ac.il/blended-nerf/
| null | null | null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Editing a local region or a specific object in a 3D scene represented by a
NeRF or consistently blending a new realistic object into the scene is
challenging, mainly due to the implicit nature of the scene representation. We
present Blended-NeRF, a robust and flexible framework for editing a specific
region of interest in an existing NeRF scene, based on text prompts, along with
a 3D ROI box. Our method leverages a pretrained language-image model to steer
the synthesis towards a user-provided text prompt, along with a 3D MLP model
initialized on an existing NeRF scene to generate the object and blend it into
a specified region in the original scene. We allow local editing by localizing
a 3D ROI box in the input scene, and blend the content synthesized inside the
ROI with the existing scene using a novel volumetric blending technique. To
obtain natural looking and view-consistent results, we leverage existing and
new geometric priors and 3D augmentations for improving the visual fidelity of
the final result. We test our framework both qualitatively and quantitatively
on a variety of real 3D scenes and text prompts, demonstrating realistic
multi-view consistent results with much flexibility and diversity compared to
the baselines. Finally, we show the applicability of our framework for several
3D editing applications, including adding new objects to a scene,
removing/replacing/altering existing objects, and texture conversion.
|
[
{
"version": "v1",
"created": "Thu, 22 Jun 2023 09:34:55 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 10:30:10 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Gordon",
"Ori",
""
],
[
"Avrahami",
"Omri",
""
],
[
"Lischinski",
"Dani",
""
]
] |
new_dataset
| 0.993908 |
2306.13455
|
Jignyu Zhuang
|
Jingyu Zhuang, Chen Wang, Lingjie Liu, Liang Lin, Guanbin Li
|
DreamEditor: Text-Driven 3D Scene Editing with Neural Fields
|
Accepted by SIGGRAPH Asia 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural fields have achieved impressive advancements in view synthesis and
scene reconstruction. However, editing these neural fields remains challenging
due to the implicit encoding of geometry and texture information. In this
paper, we propose DreamEditor, a novel framework that enables users to perform
controlled editing of neural fields using text prompts. By representing scenes
as mesh-based neural fields, DreamEditor allows localized editing within
specific regions. DreamEditor utilizes the text encoder of a pretrained
text-to-Image diffusion model to automatically identify the regions to be
edited based on the semantics of the text prompts. Subsequently, DreamEditor
optimizes the editing region and aligns its geometry and texture with the text
prompts through score distillation sampling [29]. Extensive experiments have
demonstrated that DreamEditor can accurately edit neural fields of real-world
scenes according to the given text prompts while ensuring consistency in
irrelevant areas. DreamEditor generates highly realistic textures and geometry,
significantly surpassing previous works in both quantitative and qualitative
evaluations.
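Since the method leans on score distillation sampling, here is a hedged NumPy
sketch of a single generic SDS gradient step. The stub denoiser, the timestep,
and the weighting are placeholders; a real pipeline would back-propagate this
gradient through the mesh-based neural field renderer using a pretrained
text-to-image diffusion model.

```python
# Generic score-distillation-sampling (SDS) gradient for one rendered image.
import numpy as np

rng = np.random.default_rng(0)

def eps_pred(x_t, t, prompt):                # ASSUMED stub for a pretrained denoiser
    return 0.1 * x_t                         # placeholder noise estimate

def sds_gradient(x, t, alpha_bar, prompt):
    """Gradient of the SDS loss with respect to the rendered image x."""
    eps = rng.standard_normal(x.shape)
    x_t = np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * eps   # noised render
    w = 1.0 - alpha_bar                                             # common weighting choice
    return w * (eps_pred(x_t, t, prompt) - eps)                     # pushed back to the renderer

x = rng.standard_normal((3, 64, 64))         # stand-in for a rendered view
g = sds_gradient(x, t=500, alpha_bar=0.5, prompt="a stone statue")
print(g.shape)
```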
|
[
{
"version": "v1",
"created": "Fri, 23 Jun 2023 11:53:43 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 10:38:04 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Sep 2023 13:01:27 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Zhuang",
"Jingyu",
""
],
[
"Wang",
"Chen",
""
],
[
"Liu",
"Lingjie",
""
],
[
"Lin",
"Liang",
""
],
[
"Li",
"Guanbin",
""
]
] |
new_dataset
| 0.998862 |
2307.02321
|
Amelie Royer
|
Jakob Drachmann Havtorn and Amelie Royer and Tijmen Blankevoort and
Babak Ehteshami Bejnordi
|
MSViT: Dynamic Mixed-Scale Tokenization for Vision Transformers
|
ICCV Workshops 2023; Code for the Generalized Batch-Shaping loss is
available at https://github.com/Qualcomm-AI-research/batchshaping
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The input tokens to Vision Transformers carry little semantic meaning as they
are defined as regular equal-sized patches of the input image, regardless of
its content. However, processing uniform background areas of an image should
not necessitate as much compute as dense, cluttered areas. To address this
issue, we propose a dynamic mixed-scale tokenization scheme for ViT, MSViT. Our
method introduces a conditional gating mechanism that selects the optimal token
scale for every image region, such that the number of tokens is dynamically
determined per input. In addition, to enhance the conditional behavior of the
gate during training, we introduce a novel generalization of the batch-shaping
loss. We show that our gating module is able to learn meaningful semantics
despite operating locally at the coarse patch-level. The proposed gating module
is lightweight, agnostic to the choice of transformer backbone, and trained
within a few epochs with little training overhead. Furthermore, in contrast to
token pruning, MSViT does not lose information about the input, thus can be
readily applied for dense tasks. We validate MSViT on the tasks of
classification and segmentation where it leads to improved accuracy-complexity
trade-off.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 14:22:31 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 09:36:16 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Havtorn",
"Jakob Drachmann",
""
],
[
"Royer",
"Amelie",
""
],
[
"Blankevoort",
"Tijmen",
""
],
[
"Bejnordi",
"Babak Ehteshami",
""
]
] |
new_dataset
| 0.995859 |
2307.05766
|
Chantal Pellegrini
|
Chantal Pellegrini, Matthias Keicher, Ege \"Ozsoy, Nassir Navab
|
Rad-ReStruct: A Novel VQA Benchmark and Method for Structured Radiology
Reporting
|
accepted at MICCAI 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radiology reporting is a crucial part of the communication between
radiologists and other medical professionals, but it can be time-consuming and
error-prone. One approach to alleviate this is structured reporting, which
saves time and enables a more accurate evaluation than free-text reports.
However, there is limited research on automating structured reporting, and no
public benchmark is available for evaluating and comparing different methods.
To close this gap, we introduce Rad-ReStruct, a new benchmark dataset that
provides fine-grained, hierarchically ordered annotations in the form of
structured reports for X-Ray images. We model the structured reporting task as
hierarchical visual question answering (VQA) and propose hi-VQA, a novel method
that considers prior context in the form of previously asked questions and
answers for populating a structured radiology report. Our experiments show that
hi-VQA achieves competitive performance to the state-of-the-art on the medical
VQA benchmark VQARad while performing best among methods without
domain-specific vision-language pretraining and provides a strong baseline on
Rad-ReStruct. Our work represents a significant step towards the automated
population of structured radiology reports and provides a valuable first
benchmark for future research in this area. Our dataset and code are available
at https://github.com/ChantalMP/Rad-ReStruct.
|
[
{
"version": "v1",
"created": "Tue, 11 Jul 2023 19:47:05 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Jul 2023 15:28:18 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Jul 2023 08:48:22 GMT"
},
{
"version": "v4",
"created": "Thu, 7 Sep 2023 10:00:08 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Pellegrini",
"Chantal",
""
],
[
"Keicher",
"Matthias",
""
],
[
"Özsoy",
"Ege",
""
],
[
"Navab",
"Nassir",
""
]
] |
new_dataset
| 0.996902 |
2308.03944
|
Ahmed Agiza
|
Ahmed Agiza, Rajarshi Roy, Teodor Dumitru Ene, Saad Godil, Sherief
Reda, Bryan Catanzaro
|
GraPhSyM: Graph Physical Synthesis Model
|
Accepted at Proceedings of the 42nd International Conference on
Computer-Aided Design (ICCAD), 2023
| null | null | null |
cs.LG cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce GraPhSyM, a Graph Attention Network (GATv2) model
for fast and accurate estimation of post-physical synthesis circuit delay and
area metrics from pre-physical synthesis circuit netlists. Once trained,
GraPhSyM provides accurate visibility of final design metrics to early EDA
stages, such as logic synthesis, without running the slow physical synthesis
flow, enabling global co-optimization across stages. Additionally, the swift
and precise feedback provided by GraPhSyM is instrumental for
machine-learning-based EDA optimization frameworks. Given a gate-level netlist
of a circuit represented as a graph, GraPhSyM utilizes graph structure,
connectivity, and electrical property features to predict the impact of
physical synthesis transformations such as buffer insertion and gate sizing.
When trained on a dataset of 6000 prefix adder designs synthesized at an
aggressive delay target, GraPhSyM can accurately predict the post-synthesis
delay (98.3%) and area (96.1%) metrics of unseen adders with a fast 0.22s
inference time. Furthermore, we illustrate the compositionality of GraPhSyM by
employing the model trained on a fixed delay target to accurately anticipate
post-synthesis metrics at a variety of unseen delay targets. Lastly, we report
promising generalization capabilities of the GraPhSyM model when it is
evaluated on circuits different from the adders it was exclusively trained on.
The results show the potential for GraPhSyM to serve as a powerful tool for
advanced optimization techniques and as an oracle for EDA machine learning
frameworks.
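The sketch below shows the general shape of a GATv2-based regressor over a
gate-level netlist graph, in the spirit of the model described above; it
requires torch and torch_geometric, and the feature sizes, pooling, and
two-target head (delay, area) are assumptions rather than the released GraPhSyM
architecture.

```python
# Hedged sketch: graph attention regression of (delay, area) from a netlist graph.
import torch
from torch_geometric.nn import GATv2Conv, global_mean_pool

class NetlistRegressor(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=64, heads=4):
        super().__init__()
        self.conv1 = GATv2Conv(in_dim, hidden, heads=heads)
        self.conv2 = GATv2Conv(hidden * heads, hidden, heads=1)
        self.head = torch.nn.Linear(hidden, 2)            # predicts (delay, area)

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        return self.head(global_mean_pool(x, batch))      # one prediction per circuit

x = torch.randn(5, 16)                                    # 5 gates, 16-dim features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])   # toy connectivity
batch = torch.zeros(5, dtype=torch.long)                  # all gates in one circuit
print(NetlistRegressor()(x, edge_index, batch).shape)     # torch.Size([1, 2])
```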
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 23:19:34 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 15:59:20 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Agiza",
"Ahmed",
""
],
[
"Roy",
"Rajarshi",
""
],
[
"Ene",
"Teodor Dumitru",
""
],
[
"Godil",
"Saad",
""
],
[
"Reda",
"Sherief",
""
],
[
"Catanzaro",
"Bryan",
""
]
] |
new_dataset
| 0.999308 |
2308.13754
|
Jia Li
|
Jia Li, Chongyang Tao, Zhi Jin, Fang Liu, Jia Li, Ge Li
|
ZC3: Zero-Shot Cross-Language Code Clone Detection
|
Accepted by the 38th IEEE/ACM International Conference on Automated
Software Engineering (ASE 2023)
| null | null | null |
cs.SE cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developers introduce code clones to improve programming productivity. Many
existing studies have achieved impressive performance in monolingual code clone
detection. However, during software development, more and more developers write
semantically equivalent programs with different languages to support different
platforms and help developers translate projects from one language to another.
Considering that collecting cross-language parallel data, especially for
low-resource languages, is expensive and time-consuming, designing an
effective cross-language model that does not rely on any parallel data is a
significant problem. In this paper, we propose a novel method named ZC3 for
Zero-shot Cross-language Code Clone detection. ZC3 designs the contrastive
snippet prediction to form an isomorphic representation space among different
programming languages. Based on this, ZC3 exploits domain-aware learning and
cycle consistency learning to further constrain the model to generate
representations that are aligned among different languages while being
diacritical for different types of clones. To evaluate our approach, we conduct
extensive experiments on four representative cross-language clone detection
datasets. Experimental results show that ZC3 outperforms the state-of-the-art
baselines by 67.12%, 51.39%, 14.85%, and 53.01% in MAP score, respectively.
We further investigate the representational distribution of different languages
and discuss the effectiveness of our method.
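For orientation, the snippet below shows a generic InfoNCE-style contrastive
objective of the kind that cross-language representation learners build on; it
is an illustrative stand-in, not ZC3's exact contrastive snippet prediction
loss, and the embeddings are random placeholders for encoder outputs.

```python
# Generic InfoNCE loss over paired cross-language snippet embeddings.
import torch
import torch.nn.functional as F

def info_nce(z_a, z_b, temperature=0.07):
    """z_a[i] and z_b[i] embed semantically equivalent snippets in two languages;
    every other pair in the batch serves as a negative."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature         # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0))          # the matching index is the positive
    return F.cross_entropy(logits, targets)

z_java = torch.randn(8, 256)                     # stand-ins for encoder outputs
z_python = torch.randn(8, 256)
print(info_nce(z_java, z_python))
```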
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 03:48:10 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 11:22:59 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Li",
"Jia",
""
],
[
"Tao",
"Chongyang",
""
],
[
"Jin",
"Zhi",
""
],
[
"Liu",
"Fang",
""
],
[
"Li",
"Jia",
""
],
[
"Li",
"Ge",
""
]
] |
new_dataset
| 0.996147 |
2308.13775
|
Jia Li
|
Jia Li, Yongmin Li, Ge Li, Xing Hu, Xin Xia, Zhi Jin
|
EditSum: A Retrieve-and-Edit Framework for Source Code Summarization
|
Accepted by the 36th IEEE/ACM International Conference on Automated
Software Engineering (ASE 2021)
| null |
10.1109/ASE51524.2021.9678724
| null |
cs.SE cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing studies show that code summaries help developers understand and
maintain source code. Unfortunately, these summaries are often missing or
outdated in software projects. Code summarization aims to generate natural
language descriptions automatically for source code. Code summaries are highly
structured and have repetitive patterns. Besides the patternized words, a code
summary also contains important keywords, which are the key to reflecting the
functionality of the code. However, the state-of-the-art approaches perform
poorly on predicting the keywords, which leads to the generated summaries
suffering a loss in informativeness. To alleviate this problem, this paper
proposes a novel retrieve-and-edit approach named EditSum for code
summarization. Specifically, EditSum first retrieves a similar code snippet
from a pre-defined corpus and treats its summary as a prototype summary to
learn the pattern. Then, EditSum edits the prototype automatically to combine
the pattern in the prototype with the semantic information of input code. Our
motivation is that the retrieved prototype provides a good starting point for
post-generation because the summaries of similar code snippets often have the
same pattern. The post-editing process further reuses the patternized words in
the prototype and generates keywords based on the semantic information of input
code. We conduct experiments on a large-scale Java corpus and experimental
results demonstrate that EditSum outperforms the state-of-the-art approaches by
a substantial margin. The human evaluation also proves the summaries generated
by EditSum are more informative and useful. We also verify that EditSum
performs well on predicting the patternized words and keywords.
|
[
{
"version": "v1",
"created": "Sat, 26 Aug 2023 05:48:57 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 11:19:30 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Li",
"Jia",
""
],
[
"Li",
"Yongmin",
""
],
[
"Li",
"Ge",
""
],
[
"Hu",
"Xing",
""
],
[
"Xia",
"Xin",
""
],
[
"Jin",
"Zhi",
""
]
] |
new_dataset
| 0.997363 |
2308.16360
|
Yuhang Zhou
|
Yuhang Zhou, Xuan Lu, Ge Gao, Qiaozhu Mei, Wei Ai
|
Emoji Promotes Developer Participation and Issue Resolution on GitHub
|
12 pages, 5 figures. To be published in the 18th International AAAI
Conference on Web and Social Media (ICWSM 2024)
| null | null | null |
cs.CY cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Although remote working has been increasingly adopted during the pandemic, many
are concerned about its low efficiency. Text-based communication lacks
non-verbal cues such as facial expressions and body language, which hinders
effective communication and negatively impacts work outcomes. Prevalent on
social media platforms, emojis, as alternative non-verbal cues, are gaining
popularity in virtual workspaces as well. In this
paper, we study how emoji usage influences developer participation and issue
resolution in virtual workspaces. To this end, we collect GitHub issues for a
one-year period and apply causal inference techniques to measure the causal
effect of emojis on the outcome of issues, controlling for confounders such as
issue content, repository, and author information. We find that emojis can
significantly reduce the resolution time of issues and attract more user
participation. We also compare the heterogeneous effect on different types of
issues. These findings deepen our understanding of the developer communities,
and they provide design implications on how to facilitate interactions and
broaden developer participation.
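One common way to estimate such a treatment effect while controlling for confounders is inverse-propensity weighting; the sketch below illustrates the idea on synthetic data. The feature names, the simulated outcome, and this particular estimator are assumptions for illustration, not necessarily the technique used in the paper.

```python
# Hedged sketch: inverse-propensity-weighted estimate of a treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
confounders = rng.normal(size=(n, 3))          # e.g., issue length, repo size, author history
uses_emoji = rng.binomial(1, 0.4, size=n)      # treatment indicator
resolution_days = rng.exponential(10, size=n)  # outcome (lower is better)

# 1. Model the probability of "treatment" (emoji use) given the confounders.
propensity = LogisticRegression().fit(confounders, uses_emoji).predict_proba(confounders)[:, 1]

# 2. Reweight units so treated and untreated groups become comparable.
w = np.where(uses_emoji == 1, 1.0 / propensity, 1.0 / (1.0 - propensity))

# 3. The weighted difference in mean outcomes approximates the average treatment effect.
treated = uses_emoji == 1
ate = (np.average(resolution_days[treated], weights=w[treated]) -
       np.average(resolution_days[~treated], weights=w[~treated]))
print(f"Estimated effect of emoji use on resolution time: {ate:.2f} days")
```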
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 23:26:33 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 13:06:17 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Zhou",
"Yuhang",
""
],
[
"Lu",
"Xuan",
""
],
[
"Gao",
"Ge",
""
],
[
"Mei",
"Qiaozhu",
""
],
[
"Ai",
"Wei",
""
]
] |
new_dataset
| 0.999188 |
2309.00237
|
Sunjun Kweon
|
Sunjun Kweon, Junu Kim, Jiyoun Kim, Sujeong Im, Eunbyeol Cho, Seongsu
Bae, Jungwoo Oh, Gyubok Lee, Jong Hak Moon, Seng Chan You, Seungjin Baek,
Chang Hoon Han, Yoon Bin Jung, Yohan Jo, Edward Choi
|
Publicly Shareable Clinical Large Language Model Built on Synthetic
Clinical Notes
|
https://github.com/starmpcc/Asclepius
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The development of large language models tailored for handling patients'
clinical notes is often hindered by the limited accessibility and usability of
these notes due to strict privacy regulations. To address these challenges, we
first create synthetic large-scale clinical notes using publicly available case
reports extracted from biomedical literature. We then use these synthetic notes
to train our specialized clinical large language model, Asclepius. While
Asclepius is trained on synthetic data, we assess its potential performance in
real-world applications by evaluating it using real clinical notes. We
benchmark Asclepius against several other large language models, including
GPT-3.5-turbo and other open-source alternatives. To further validate our
approach using synthetic notes, we also compare Asclepius with its variants
trained on real clinical notes. Our findings convincingly demonstrate that
synthetic clinical notes can serve as viable substitutes for real ones when
constructing high-performing clinical language models. This conclusion is
supported by detailed evaluations conducted by both GPT-4 and medical
professionals. All resources including weights, codes, and data used in the
development of Asclepius are made publicly accessible for future research.
|
[
{
"version": "v1",
"created": "Fri, 1 Sep 2023 04:01:20 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Sep 2023 18:11:15 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Kweon",
"Sunjun",
""
],
[
"Kim",
"Junu",
""
],
[
"Kim",
"Jiyoun",
""
],
[
"Im",
"Sujeong",
""
],
[
"Cho",
"Eunbyeol",
""
],
[
"Bae",
"Seongsu",
""
],
[
"Oh",
"Jungwoo",
""
],
[
"Lee",
"Gyubok",
""
],
[
"Moon",
"Jong Hak",
""
],
[
"You",
"Seng Chan",
""
],
[
"Baek",
"Seungjin",
""
],
[
"Han",
"Chang Hoon",
""
],
[
"Jung",
"Yoon Bin",
""
],
[
"Jo",
"Yohan",
""
],
[
"Choi",
"Edward",
""
]
] |
new_dataset
| 0.987266 |
2309.01671
|
Tim Hegemann
|
Tim Hegemann and Alexander Wolff
|
A Simple Pipeline for Orthogonal Graph Drawing
|
Appears in the Proceedings of the 31st International Symposium on
Graph Drawing and Network Visualization (GD 2023)
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Orthogonal graph drawing has many applications, e.g., for laying out UML
diagrams or cable plans. In this paper, we present a new pipeline that draws
multigraphs orthogonally, using few bends, few crossings, and small area. Our
pipeline computes an initial graph layout, then removes overlaps between the
rectangular nodes, routes the edges, orders the edges, and nudges them, that
is, moves edge segments in order to balance the inter-edge distances. Our
pipeline is flexible and integrates well with existing approaches. Our main
contribution is (i) an effective edge-nudging algorithm that is based on linear
programming, (ii) a selection of simple algorithms that together produce
competitive results, and (iii) an extensive experimental comparison of our
pipeline with existing approaches using standard benchmark sets and metrics.
|
[
{
"version": "v1",
"created": "Mon, 4 Sep 2023 15:35:23 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 11:57:42 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Hegemann",
"Tim",
""
],
[
"Wolff",
"Alexander",
""
]
] |
new_dataset
| 0.98998 |
2309.02721
|
Yuchen Cui
|
Li-Heng Lin, Yuchen Cui, Yilun Hao, Fei Xia, Dorsa Sadigh
|
Gesture-Informed Robot Assistance via Foundation Models
|
CoRL 2023
| null | null | null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Gestures serve as a fundamental and significant mode of non-verbal
communication among humans. Deictic gestures (such as pointing towards an
object), in particular, offer valuable means of efficiently expressing intent
in situations where language is inaccessible, restricted, or highly
specialized. As a result, it is essential for robots to comprehend gestures in
order to infer human intentions and establish more effective coordination with
them. Prior work often relies on a rigid, hand-coded library of gestures along
with their meanings. However, interpretation of gestures is often
context-dependent, requiring more flexibility and common-sense reasoning. In
this work, we propose a framework, GIRAF, for more flexibly interpreting
gesture and language instructions by leveraging the power of large language
models. Our framework is able to accurately infer human intent and
contextualize the meaning of their gestures for more effective human-robot
collaboration. We instantiate the framework for interpreting deictic gestures
in table-top manipulation tasks and demonstrate that it is both effective and
preferred by users, achieving 70% higher success rates than the baseline. We
further demonstrate GIRAF's ability on reasoning about diverse types of
gestures by curating a GestureInstruct dataset consisting of 36 different task
scenarios. GIRAF achieved 81% success rate on finding the correct plan for
tasks in GestureInstruct. Website: https://tinyurl.com/giraf23
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 05:10:17 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Sep 2023 05:38:15 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Lin",
"Li-Heng",
""
],
[
"Cui",
"Yuchen",
""
],
[
"Hao",
"Yilun",
""
],
[
"Xia",
"Fei",
""
],
[
"Sadigh",
"Dorsa",
""
]
] |
new_dataset
| 0.998947 |
2309.03204
|
Zihan Yin
|
Zihan Yin, Annewsha Datta, Shwetha Vijayakumar, Ajey Jacob, Akhilesh
Jaiswal
|
A 9 Transistor SRAM Featuring Array-level XOR Parallelism with Secure
Data Toggling Operation
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Security and energy-efficiency are critical for computing applications in
general and for edge applications in particular. Digital in-Memory Computing
(IMC) in SRAM cells have widely been studied to accelerate inference tasks to
maximize both throughput and energy efficiency for intelligent computing at the
edge. XOR operations have been of particular interest due to their wide
applicability in numerous applications that include binary neural networks and
encryption. However, existing IMC circuits for XOR acceleration are limited to
two rows in a memory array and extending the XOR parallelism to multiple rows
in an SRAM array has remained elusive. Further, SRAM is prone to both data
imprinting and data remanence issues, which limit its security. Based on a
commercial GlobalFoundries 22nm node, we propose a novel 9T SRAM cell such that
multiple rows of data (the entire array) can be XORed
in a massively parallel single cycle fashion. The new cell also supports
data-toggling within the SRAM cell efficiently to circumvent imprinting attacks
and erase the SRAM value in case of remanence attack.
|
[
{
"version": "v1",
"created": "Sat, 12 Aug 2023 00:46:00 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Yin",
"Zihan",
""
],
[
"Datta",
"Annewsha",
""
],
[
"Vijayakumar",
"Shwetha",
""
],
[
"Jacob",
"Ajey",
""
],
[
"Jaiswal",
"Akhilesh",
""
]
] |
new_dataset
| 0.999584 |
2309.03216
|
Bikram Koirala
|
Bikram Koirala, Behnood Rasti, Zakaria Bnoulkacem, Andrea de Lima
Ribeiro, Yuleika Madriz, Erik Herrmann, Arthur Gestels, Thomas De Kerf,
Sandra Lorenz, Margret Fuchs, Koen Janssens, Gunther Steenackers, Richard
Gloaguen, and Paul Scheunders
|
A Multisensor Hyperspectral Benchmark Dataset For Unmixing of Intimate
Mixtures
|
Currently, this paper is under review in IEEE
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical hyperspectral cameras capture the spectral reflectance of materials.
Since many materials behave as heterogeneous intimate mixtures with which each
photon interacts differently, the relationship between spectral reflectance and
material composition is very complex. Quantitative validation of spectral
unmixing algorithms requires high-quality ground truth fractional abundance
data, which are very difficult to obtain. In this work, we generated a
comprehensive laboratory ground truth dataset of intimately mixed mineral
powders. For this, five clay powders (Kaolin, Roof clay, Red clay, mixed clay,
and Calcium hydroxide) were mixed homogeneously to prepare 325 samples of 60
binary, 150 ternary, 100 quaternary, and 15 quinary mixtures. Thirteen
different hyperspectral sensors have been used to acquire the reflectance
spectra of these mixtures in the visible, near, short, mid, and long-wavelength
infrared regions (350-15385 nm). Overlaps in wavelength regions due to the
operational ranges of each sensor and variations in acquisition conditions
resulted in a large amount of spectral variability.
Ground truth composition is given by construction, but to verify that the
generated samples are sufficiently homogeneous, XRD and XRF elemental analysis
is performed. We believe these data will be beneficial for validating advanced
methods for nonlinear unmixing and material composition estimation, including
studying spectral variability and training supervised unmixing approaches. The
datasets can be downloaded from the following link:
https://github.com/VisionlabUA/Multisensor_datasets.
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 11:48:36 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Koirala",
"Bikram",
""
],
[
"Rasti",
"Behnood",
""
],
[
"Bnoulkacem",
"Zakaria",
""
],
[
"Ribeiro",
"Andrea de Lima",
""
],
[
"Madriz",
"Yuleika",
""
],
[
"Herrmann",
"Erik",
""
],
[
"Gestels",
"Arthur",
""
],
[
"De Kerf",
"Thomas",
""
],
[
"Lorenz",
"Sandra",
""
],
[
"Fuchs",
"Margret",
""
],
[
"Janssens",
"Koen",
""
],
[
"Steenackers",
"Gunther",
""
],
[
"Gloaguen",
"Richard",
""
],
[
"Scheunders",
"Paul",
""
]
] |
new_dataset
| 0.999769 |
2309.03221
|
Shyam Narayanan
|
Shyam Narayanan, Matteo Cartiglia, Arianna Rubino, Charles Lego,
Charlotte Frenkel, Giacomo Indiveri
|
SPAIC: A sub-$\mu$W/Channel, 16-Channel General-Purpose Event-Based
Analog Front-End with Dual-Mode Encoders
|
5 pages, 10 figures, Accepted for lecture at IEEE BioCAS Conference
2023
| null | null | null |
cs.AR cs.ET cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Low-power event-based analog front-ends (AFE) are a crucial component
required to build efficient end-to-end neuromorphic processing systems for edge
computing. Although several neuromorphic chips have been developed for
implementing spiking neural networks (SNNs) and solving a wide range of sensory
processing tasks, there are only a few general-purpose analog front-end devices
that can be used to convert analog sensory signals into spikes and interfaced
to neuromorphic processors. In this work, we present a novel, highly
configurable analog front-end chip, denoted as SPAIC (signal-to-spike converter
for analog AI computation), that offers a general-purpose dual-mode analog
signal-to-spike encoding with delta modulation and pulse frequency modulation,
with tunable frequency bands. The ASIC is designed in a 180 nm process. It
supports and encodes a wide variety of signals spanning 4 orders of magnitude
in frequency, and provides an event-based output that is compatible with
existing neuromorphic processors. We validated the ASIC for its functions and
present initial silicon measurement results characterizing the basic building
blocks of the chip.
|
[
{
"version": "v1",
"created": "Thu, 31 Aug 2023 19:53:04 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Narayanan",
"Shyam",
""
],
[
"Cartiglia",
"Matteo",
""
],
[
"Rubino",
"Arianna",
""
],
[
"Lego",
"Charles",
""
],
[
"Frenkel",
"Charlotte",
""
],
[
"Indiveri",
"Giacomo",
""
]
] |
new_dataset
| 0.988505 |
2309.03233
|
G\"ozel Shakeri
|
Marco Druschba, G\"ozel Shakeri
|
Scale-Score: Food Label to Support Nutritious and Sustainable Online
Grocery Shopping
|
ICT4S 2023; extended abstract
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
To empower online grocery shoppers in making nutritionally and
environmentally informed decisions, we investigate the efficacy of the
Scale-Score, a label combining nutritional and environmental information to
highlight a product's benefit to both the consumer's and the planet's health,
without obscuring either information. We conducted an experimental study in a
mock online grocery environment, and assessed label efficacy. We find that the
Scale-Score supports nutritious purchases, yet needs improvement in its support
for sustainable choices. Our research provides first insights into the design
considerations and performance of a combined yet disjoint food label.
|
[
{
"version": "v1",
"created": "Tue, 5 Sep 2023 06:57:52 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Druschba",
"Marco",
""
],
[
"Shakeri",
"Gözel",
""
]
] |
new_dataset
| 0.990887 |
2309.03251
|
Hao Dong
|
Hao Dong, Pengyang Wang, Meng Xiao, Zhiyuan Ning, Pengfei Wang,
Yuanchun Zhou
|
Temporal Inductive Path Neural Network for Temporal Knowledge Graph
Reasoning
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Temporal Knowledge Graph (TKG) is an extension of traditional Knowledge Graph
(KG) that incorporates the dimension of time. Reasoning on TKGs is a crucial
task that aims to predict future facts based on historical occurrences. The key
challenge lies in uncovering structural dependencies within historical
subgraphs and temporal patterns. Most existing approaches model TKGs relying on
entity modeling, as nodes in the graph play a crucial role in knowledge
representation. However, the real-world scenario often involves an extensive
number of entities, with new entities emerging over time. This makes it
challenging for entity-dependent methods to cope with extensive volumes of
entities, and effectively handling newly emerging entities also becomes a
significant challenge. Therefore, we propose Temporal Inductive Path Neural
Network (TiPNN), which models historical information in an entity-independent
perspective. Specifically, TiPNN adopts a unified graph, namely history
temporal graph, to comprehensively capture and encapsulate information from
history. Subsequently, we utilize the defined query-aware temporal paths to
model historical path information related to queries on history temporal graph
for the reasoning. Extensive experiments illustrate that the proposed model not
only attains significant performance enhancements but also handles inductive
settings, while additionally facilitating the provision of reasoning evidence
through history temporal graphs.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 17:37:40 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Dong",
"Hao",
""
],
[
"Wang",
"Pengyang",
""
],
[
"Xiao",
"Meng",
""
],
[
"Ning",
"Zhiyuan",
""
],
[
"Wang",
"Pengfei",
""
],
[
"Zhou",
"Yuanchun",
""
]
] |
new_dataset
| 0.975775 |
2309.03294
|
Soumyadeep Dey
|
Sidharth Anand, Barsha Mitra, Soumyadeep Dey, Abhinav Rao, Rupsa Dhar
and Jaideep Vaidya
|
MALITE: Lightweight Malware Detection and Classification for Constrained
Devices
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Today, malware is one of the primary cyberthreats to organizations. Malware
has pervaded almost every type of computing device including the ones having
limited memory, battery and computation power such as mobile phones, tablets
and embedded devices like Internet-of-Things (IoT) devices. Consequently, the
privacy and security of the malware infected systems and devices have been
heavily jeopardized. In recent years, researchers have leveraged machine
learning based strategies for malware detection and classification. Malware
analysis approaches can only be employed in resource constrained environments
if the methods are lightweight in nature. In this paper, we present MALITE, a
lightweight malware analysis system, that can classify various malware families
and distinguish between benign and malicious binaries. MALITE converts a binary
into a grayscale or an RGB image and employs malware analysis strategies that
consume little memory and battery power and are computationally inexpensive.
We have designed MALITE-MN, a lightweight neural network based architecture and
MALITE-HRF, an ultra lightweight random forest based method that uses histogram
features extracted by a sliding window. We evaluate the performance of both on
six publicly available datasets (Malimg, Microsoft BIG, Dumpware10, MOTIF,
Drebin and CICAndMal2017), and compare them to four state-of-the-art malware
classification techniques. The results show that MALITE-MN and MALITE-HRF not
only accurately identify and classify malware but also respectively consume
several orders of magnitude lower resources (in terms of both memory as well as
computation capabilities), making them much more suitable for resource
constrained environments.
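A minimal sketch of the MALITE-HRF-style pipeline described above follows: render a binary as a grayscale image, extract sliding-window histogram features, and classify with a random forest. The image width, window size, histogram bins, and the random placeholder binaries and labels are illustrative assumptions.

```python
# Sketch: binary -> grayscale image -> sliding-window histograms -> random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def binary_to_image(raw: bytes, width: int = 64) -> np.ndarray:
    """Interpret raw bytes as rows of pixels, padding the last row with zeros."""
    buf = np.frombuffer(raw, dtype=np.uint8)
    rows = int(np.ceil(buf.size / width))
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:buf.size] = buf
    return padded.reshape(rows, width)

def histogram_features(img: np.ndarray, window: int = 16, bins: int = 16) -> np.ndarray:
    """Average byte-value histograms computed over a sliding row window."""
    feats = []
    for start in range(0, img.shape[0], window):
        block = img[start:start + window]
        hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
        feats.append(hist)
    return np.mean(feats, axis=0)  # fixed-length vector regardless of file size

# Toy training data: random "binaries" with placeholder labels, for illustration only.
rng = np.random.default_rng(0)
X = [histogram_features(binary_to_image(rng.bytes(4096))) for _ in range(20)]
y = rng.integers(0, 2, size=20)  # 0 = benign, 1 = malicious (placeholder labels)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([X[0]]))
```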
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 18:17:38 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Anand",
"Sidharth",
""
],
[
"Mitra",
"Barsha",
""
],
[
"Dey",
"Soumyadeep",
""
],
[
"Rao",
"Abhinav",
""
],
[
"Dhar",
"Rupsa",
""
],
[
"Vaidya",
"Jaideep",
""
]
] |
new_dataset
| 0.999782 |
2309.03298
|
Claire Arthur
|
Claire Arthur, Frank Lehman, John McNamara
|
Presenting the SWTC: A Symbolic Corpus of Themes from John Williams'
Star Wars Episodes I-IX
|
Corpus report (5000 words)
| null | null | null |
cs.SD cs.SC eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents a new symbolic corpus of musical themes from the complete
Star Wars trilogies (Episodes I-IX) by John Williams. The corpus files are made
available in multiple formats (.krn, .sib, and .musicxml) and include melodic,
harmonic, and formal information. The Star Wars Thematic Corpus (SWTC) contains
a total of 64 distinctive, recurring, and symbolically meaningful themes and
motifs, commonly referred to as leitmotifs. Through this corpus we also
introduce a new Humdrum standard for non-functional harmony encodings, **harte,
based on Harte (2005, 2010). This report details the motivation, describes the
transcription and encoding processes, and provides some brief summary
statistics. While relatively small in scale, the SWTC represents a unified
collection from one of the most prolific and influential composers of the 20th
century, and the under-studied subset of film and multimedia musical material
in general. We hope the SWTC will provide insights into John Williams'
compositional style, as well as prove useful in comparisons against other
thematic corpora from film and beyond.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 18:21:55 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Arthur",
"Claire",
""
],
[
"Lehman",
"Frank",
""
],
[
"McNamara",
"John",
""
]
] |
new_dataset
| 0.999726 |
2309.03356
|
David Usevitch
|
Alireza Alamdar, David E. Usevitch, Jiahao Wu, Russell H. Taylor,
Peter Gehlbach, Iulian Iordachita
|
Steady-Hand Eye Robot 3.0: Optimization and Benchtop Evaluation for
Subretinal Injection
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Subretinal injection methods and other procedures for treating retinal
conditions and diseases (many considered incurable) have been limited in scope
due to limited human motor control. This study demonstrates the next
generation, cooperatively controlled Steady-Hand Eye Robot (SHER 3.0), a
precise and intuitive-to-use robotic platform achieving clinical standards for
targeting accuracy and resolution for subretinal injections. The system design
and basic kinematics are reported and a deflection model for the incorporated
delta stage and validation experiments are presented. This model optimizes the
delta stage parameters, maximizing the global conditioning index and minimizing
torsional compliance. Five tests measuring accuracy, repeatability, and
deflection show the optimized stage design achieves a tip accuracy of <30
$\mu$m, tip repeatability of 9.3 $\mu$m and 0.02{\deg}, and deflections between
20-350 $\mu$m/N. Future work will use updated control models to refine tip
positioning outcomes and will be tested on in vivo animal models.
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 20:43:06 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Alamdar",
"Alireza",
""
],
[
"Usevitch",
"David E.",
""
],
[
"Wu",
"Jiahao",
""
],
[
"Taylor",
"Russell H.",
""
],
[
"Gehlbach",
"Peter",
""
],
[
"Iordachita",
"Iulian",
""
]
] |
new_dataset
| 0.987271 |
2309.03401
|
Allen Jiang
|
Yalong Jiang, Changkang Li
|
Reasonable Anomaly Detection in Long Sequences
|
8 pages, 1 figure
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video anomaly detection is a challenging task due to the lack of approaches for
representing samples. The visual representations of most existing approaches
are limited to short-term sequences of observations, which cannot provide
enough clues for reasonable detections. In this paper, we propose to represent
the motion patterns of objects completely by learning from long-term sequences.
First, a Stacked State Machine (SSM) model is proposed to represent the
temporal dependencies that are consistent across long-range observations. The
SSM model then predicts future states based on past ones; the divergence
between the predictions, which follow inherent normal patterns, and the
observations identifies anomalies that violate normal motion patterns.
Extensive experiments are carried out to evaluate the proposed approach on the
dataset and existing ones. Improvements over state-of-the-art methods can be
observed. Our code is available at
https://github.com/AllenYLJiang/Anomaly-Detection-in-Sequences.
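The prediction-based scoring idea above can be sketched generically as follows: a model predicts the next state from past states, and the prediction error serves as the anomaly score. The linear predictor, window length, and synthetic trajectories are illustrative stand-ins for the paper's Stacked State Machine and real video data.

```python
# Generic sketch of prediction-based anomaly scoring on state sequences.
import numpy as np
from sklearn.linear_model import Ridge

def make_windows(states: np.ndarray, history: int = 8):
    """Split a (T, D) state sequence into (history window, next state) pairs."""
    X = [states[t - history:t].ravel() for t in range(history, len(states))]
    y = states[history:]
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
normal = np.cumsum(rng.normal(0, 0.1, size=(500, 4)), axis=0)  # smooth "normal" motion
X, y = make_windows(normal)
predictor = Ridge(alpha=1.0).fit(X, y)

def anomaly_scores(states: np.ndarray, history: int = 8) -> np.ndarray:
    Xq, yq = make_windows(states, history)
    # High divergence from the predicted "normal" dynamics indicates an anomaly.
    return np.linalg.norm(predictor.predict(Xq) - yq, axis=1)

test = normal.copy()
test[300:310] += 5.0  # inject an abrupt, abnormal motion
print(anomaly_scores(test)[280:320].round(2))
```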
|
[
{
"version": "v1",
"created": "Wed, 6 Sep 2023 23:35:55 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Jiang",
"Yalong",
""
],
[
"Li",
"Changkang",
""
]
] |
new_dataset
| 0.990983 |
2309.03412
|
Masahiro Suzuki
|
Masahiro Suzuki, Masanori Hirano, Hiroki Sakaji
|
From Base to Conversational: Japanese Instruction Dataset and Tuning
Large Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Instruction tuning is essential for large language models (LLMs) to become
interactive. While many instruction tuning datasets exist in English, there is
a noticeable lack in other languages. Also, their effectiveness has not been
well verified in non-English languages. We construct a Japanese instruction
dataset by expanding and filtering existing datasets and apply the dataset to a
Japanese pre-trained base model. We performed Low-Rank Adaptation (LoRA) tuning
on both Japanese and English existing models using our instruction dataset. We
evaluated these models from both quantitative and qualitative perspectives. As
a result, the effectiveness of Japanese instruction datasets is confirmed. The
results also indicate that even with relatively small LLMs, performances in
downstream tasks would be improved through instruction tuning. Our instruction
dataset, tuned models, and implementation are publicly available online.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 00:14:37 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Suzuki",
"Masahiro",
""
],
[
"Hirano",
"Masanori",
""
],
[
"Sakaji",
"Hiroki",
""
]
] |
new_dataset
| 0.999478 |
2309.03436
|
Trinh Van Chien
|
Trinh Van Chien and Lam Thanh Tu and Waqas Khalid and Heejung Yu and
Symeon Chatzinotas and Marco Di Renzo
|
RIS-Assisted Wireless Communications: Long-Term versus Short-Term Phase
Shift Designs
|
14 pages, 7 figures. Submitted for possible publication
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surface (RIS) has recently gained significant
interest as an emerging technology for future wireless networks thanks to its
potential for improving the coverage probability in challenging propagation
environments. This paper studies an RIS-assisted propagation environment, where
a source transmits data to a destination in the presence of a weak direct link.
We analyze and compare RIS designs based on long-term and short-term channel
statistics in terms of coverage probability and ergodic rate. For the
considered optimization designs, we derive closed-form expressions for the
coverage probability and ergodic rate, which explicitly unveil the impact of
both the propagation environment and the RIS on the system performance. Besides
the optimization of the RIS phase profile, we formulate an RIS placement
optimization problem with the aim of maximizing the coverage probability by
relying only on partial channel state information. An efficient algorithm is
proposed based on the gradient ascent method. Simulation results are
illustrated in order to corroborate the analytical framework and findings. The
proposed RIS phase profile is shown to outperform several heuristic benchmarks
in terms of outage probability and ergodic rate. In addition, the proposed RIS
placement strategy provides an extra degree of freedom that remarkably improves
system performance.
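The placement optimization described above can be illustrated with a generic gradient-ascent sketch. The coverage function below is a placeholder surrogate, not the paper's closed-form expression, and the geometry, step size, and iteration count are illustrative assumptions.

```python
# Sketch: gradient ascent on a coverage surrogate with respect to the RIS position.
import numpy as np

source = np.array([0.0, 0.0])
destination = np.array([100.0, 0.0])

def coverage(ris_pos: np.ndarray) -> float:
    """Placeholder surrogate: decays with the product of the two hop distances,
    mimicking the double path loss of the cascaded source-RIS-destination link."""
    d1 = np.linalg.norm(ris_pos - source)
    d2 = np.linalg.norm(ris_pos - destination)
    return 1.0 / (1.0 + (d1 * d2 / 500.0) ** 2)

def numerical_grad(f, x, eps=1e-4):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

ris = np.array([30.0, 40.0])            # initial RIS placement
for _ in range(200):                     # gradient ascent on the coverage surrogate
    ris = ris + 50.0 * numerical_grad(coverage, ris)
print("Optimized RIS position:", ris.round(2), "coverage:", round(coverage(ris), 4))
```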
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 01:38:03 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Van Chien",
"Trinh",
""
],
[
"Tu",
"Lam Thanh",
""
],
[
"Khalid",
"Waqas",
""
],
[
"Yu",
"Heejung",
""
],
[
"Chatzinotas",
"Symeon",
""
],
[
"Di Renzo",
"Marco",
""
]
] |
new_dataset
| 0.992373 |
2309.03453
|
Yuan Liu
|
Yuan Liu and Cheng Lin and Zijiao Zeng and Xiaoxiao Long and Lingjie
Liu and Taku Komura and Wenping Wang
|
SyncDreamer: Generating Multiview-consistent Images from a Single-view
Image
|
Project page: https://liuyuan-pal.github.io/SyncDreamer/
| null | null | null |
cs.CV cs.AI cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a novel diffusion model, called SyncDreamer, that generates
multiview-consistent images from a single-view image. Using pretrained
large-scale 2D diffusion models, recent work Zero123 demonstrates the ability
to generate plausible novel views from a single-view image of an object.
However, maintaining consistency in geometry and colors for the generated
images remains a challenge. To address this issue, we propose a synchronized
multiview diffusion model that models the joint probability distribution of
multiview images, enabling the generation of multiview-consistent images in a
single reverse process. SyncDreamer synchronizes the intermediate states of all
the generated images at every step of the reverse process through a 3D-aware
feature attention mechanism that correlates the corresponding features across
different views. Experiments show that SyncDreamer generates images with high
consistency across different views, thus making it well-suited for various 3D
generation tasks such as novel-view-synthesis, text-to-3D, and image-to-3D.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 02:28:04 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Liu",
"Yuan",
""
],
[
"Lin",
"Cheng",
""
],
[
"Zeng",
"Zijiao",
""
],
[
"Long",
"Xiaoxiao",
""
],
[
"Liu",
"Lingjie",
""
],
[
"Komura",
"Taku",
""
],
[
"Wang",
"Wenping",
""
]
] |
new_dataset
| 0.998451 |
2309.03468
|
Nikhil Raghuraman
|
Nikhil Raghuraman, Adam W. Harley, Leonidas Guibas
|
Cross-Image Context Matters for Bongard Problems
|
Main paper: 7 pages, Appendix: 10 pages, 30 figures. Code:
https://github.com/nraghuraman/bongard-context
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current machine learning methods struggle to solve Bongard problems, which
are a type of IQ test that requires deriving an abstract "concept" from a set
of positive and negative "support" images, and then classifying whether or not
a new query image depicts the key concept. On Bongard-HOI, a benchmark for
natural-image Bongard problems, existing methods have only reached 66% accuracy
(where chance is 50%). Low accuracy is often attributed to neural nets' lack of
ability to find human-like symbolic rules. In this work, we point out that many
existing methods are forfeiting accuracy due to a much simpler problem: they do
not incorporate information contained in the support set as a whole, and rely
instead on information extracted from individual supports. This is a critical
issue, because unlike in few-shot learning tasks concerning object
classification, the "key concept" in a typical Bongard problem can only be
distinguished using multiple positives and multiple negatives. We explore a
variety of simple methods to take this cross-image context into account, and
demonstrate substantial gains over prior methods, leading to new
state-of-the-art performance on Bongard-LOGO (75.3%) and Bongard-HOI (72.45%)
and strong performance on the original Bongard problem set (60.84%).
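The cross-image context idea above can be sketched with a prototype-based decision rule: aggregate all positive and all negative support embeddings and classify the query by comparing distances to the two prototypes. The random embeddings stand in for features from any image encoder; the aggregation rule is one simple option, not the paper's exact method.

```python
# Sketch: use the whole support set jointly via positive/negative prototypes.
import numpy as np

rng = np.random.default_rng(0)
dim = 128
pos_support = rng.normal(loc=0.5, size=(6, dim))   # images depicting the concept
neg_support = rng.normal(loc=-0.5, size=(6, dim))  # images not depicting it
query = rng.normal(loc=0.5, size=dim)

def classify_with_context(query, pos_support, neg_support):
    """Prototype-based decision that uses all supports, not each one in isolation."""
    pos_proto = pos_support.mean(axis=0)
    neg_proto = neg_support.mean(axis=0)
    d_pos = np.linalg.norm(query - pos_proto)
    d_neg = np.linalg.norm(query - neg_proto)
    return "concept present" if d_pos < d_neg else "concept absent"

print(classify_with_context(query, pos_support, neg_support))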
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 03:33:49 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Raghuraman",
"Nikhil",
""
],
[
"Harley",
"Adam W.",
""
],
[
"Guibas",
"Leonidas",
""
]
] |
new_dataset
| 0.995217 |
2309.03480
|
Keita Emura
|
Kota Chin, Keita Emura, Kazumasa Omote
|
An Anonymous yet Accountable Contract Wallet System using Account
Abstraction
|
8 pages, 4 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Account abstraction allows a contract wallet to initiate transaction
execution. Thus, account abstraction is useful for preserving the privacy of
externally owned accounts (EOAs) because it can remove the transaction issued
from an EOA to the contract wallet and hide who issued the transaction by
additionally employing anonymous authentication procedures such as ring
signatures. However, unconditional anonymity is undesirable in practice because
it prevents revealing who is accountable for a problem when one arises. Thus,
maintaining a balance between anonymity and accountability is important.
In this paper, we propose an anonymous yet accountable contract wallet
system. In addition to account abstraction, the proposed system also utilizes
accountable ring signatures (Bootle et al., ESORICS 2015). The proposed system
provides (1) anonymity of a transaction issuer that hides who agreed with
running the contract wallet, and (2) accountability of the issuer, which allows
the issuer to prove they agreed with running the contract wallet. Moreover, due
to a security requirement of accountable ring signatures, the transaction
issuer cannot claim that someone else issued the transaction. This
functionality allows us to clarify the accountability involved in issuing a
transaction. In addition, the proposed system allows an issuer to employ a
typical signature scheme, e.g., ECDSA, together with the ring signature scheme.
This functionality can be considered an extension of the common
multi-signatures that require a certain number of ECDSA signatures to run a
contract wallet. The proposed system was implemented using zkSync (Solidity).
We discuss several potential applications of the proposed system, i.e., medical
information sharing and asset management.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 04:54:19 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Chin",
"Kota",
""
],
[
"Emura",
"Keita",
""
],
[
"Omote",
"Kazumasa",
""
]
] |
new_dataset
| 0.989892 |
2309.03483
|
Clarence Lee
|
Clarence Lee, M Ganesh Kumar, Cheston Tan
|
DetermiNet: A Large-Scale Diagnostic Dataset for Complex
Visually-Grounded Referencing using Determiners
|
10 pages, 6 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art visual grounding models can achieve high detection accuracy,
but they are not designed to distinguish between all objects versus only
certain objects of interest. In natural language, in order to specify a
particular object or set of objects of interest, humans use determiners such as
"my", "either" and "those". Determiners, as an important word class, are a type
of schema in natural language about the reference or quantity of the noun.
Existing grounded referencing datasets place much less emphasis on determiners,
compared to other word classes such as nouns, verbs and adjectives. This makes
it difficult to develop models that understand the full variety and complexity
of object referencing. Thus, we have developed and released the DetermiNet
dataset, which comprises 250,000 synthetically generated images and captions
based on 25 determiners. The task is to predict bounding boxes to identify
objects of interest, constrained by the semantics of the given determiner. We
find that current state-of-the-art visual grounding models do not perform well
on the dataset, highlighting the limitations of existing models on reference
and quantification tasks.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 05:13:52 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Lee",
"Clarence",
""
],
[
"Kumar",
"M Ganesh",
""
],
[
"Tan",
"Cheston",
""
]
] |
new_dataset
| 0.999816 |
2309.03496
|
Peng Chen
|
Peng Chen, Yuxuan Xie, Yunlong Lyu, Yuxiao Wang, and Hao Chen
|
HOPPER: Interpretative Fuzzing for Libraries
|
To appear in the ACM CCS 2023
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the fact that the state-of-the-art fuzzers can generate inputs
efficiently, existing fuzz drivers still cannot adequately cover entries in
libraries. Most of these fuzz drivers are crafted manually by developers, and
their quality depends on the developers' understanding of the code. Existing
works have attempted to automate the generation of fuzz drivers by learning API
usage from code and execution traces. However, the generated fuzz drivers are
limited to a few specific call sequences by the code being learned. To address
these challenges, we present HOPPER, which can fuzz libraries without requiring
any domain knowledge to craft fuzz drivers. It transforms the problem of
library fuzzing into the problem of interpreter fuzzing. The interpreters
linked against libraries under test can interpret the inputs that describe
arbitrary API usage. To generate semantically correct inputs for the
interpreter, HOPPER learns the intra- and inter-API constraints in the
libraries and mutates the program with grammar awareness. We implemented HOPPER
and evaluated its effectiveness on 11 real-world libraries against manually
crafted fuzzers and other automatic solutions. Our results show that HOPPER
greatly outperformed the other fuzzers in both code coverage and bug finding,
having uncovered 25 previously unknown bugs that the other fuzzers could not find.
Moreover, we have demonstrated that the proposed intra- and inter-API
constraint learning methods can correctly learn constraints implied by the
library and, therefore, significantly improve the fuzzing efficiency. The
experiment results indicate that HOPPER is able to explore a vast range of API
usages for library fuzzing out of the box.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 06:11:18 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Chen",
"Peng",
""
],
[
"Xie",
"Yuxuan",
""
],
[
"Lyu",
"Yunlong",
""
],
[
"Wang",
"Yuxiao",
""
],
[
"Chen",
"Hao",
""
]
] |
new_dataset
| 0.955415 |
2309.03522
|
Ziming Li
|
Boyuan Chen, Junkun Long, Wenxuan Zheng, Yuzheng Wu, Ziming Li, Yue
Li, Hai-Ning Liang
|
AR.S.Space: An AR Casual Game for Social Engagement in Work Environments
|
2023 ISMAR Student Competition
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In social situations, individuals often encounter communication challenges,
particularly when adapting to new environments. While some studies have
acknowledged the potential of AR social games to aid in effective socialization
to some extent, little attention has been given to AR HMD-based games
specifically designed to facilitate social interactions. In response, we
propose AR.S.Space, an AR HMD-based social game that employs augmented reality
features to engage users with virtual social agents through asynchronous
communication. The game aims to mitigate the unease associated with initial
social interactions and foster long-term connections. To assess its efficacy, a
user study was conducted within a specific scenario (an office space),
gathering quantitative data and qualitative feedback through questionnaires and
interviews. The findings highlight the game's potential to enhance
socialization in small-scale environments. Moreover, the study offers valuable
design guidelines for future research and the application of AR social games in
similar settings.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 07:06:30 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Chen",
"Boyuan",
""
],
[
"Long",
"Junkun",
""
],
[
"Zheng",
"Wenxuan",
""
],
[
"Wu",
"Yuzheng",
""
],
[
"Li",
"Ziming",
""
],
[
"Li",
"Yue",
""
],
[
"Liang",
"Hai-Ning",
""
]
] |
new_dataset
| 0.998883 |
2309.03544
|
Zeeshan Ali Haq
|
Mohd Ashhad, Omar Ahmed, Sooraj K. Ambat, Zeeshan Ali Haq, Mansaf Alam
|
MVD:A Novel Methodology and Dataset for Acoustic Vehicle Type
Classification
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Rising urban populations have led to a surge in vehicle use and made traffic
monitoring and management indispensable. Acoustic traffic monitoring (ATM)
offers a cost-effective and efficient alternative to more computationally
expensive methods of monitoring traffic such as those involving computer vision
technologies. In this paper, we present MVD and MVDA: two open datasets for the
development of acoustic traffic monitoring and vehicle-type classification
algorithms, which contain audio recordings of moving vehicles. The datasets
contain four classes: Trucks, Cars, Motorbikes, and a No-vehicle class.
Additionally, we propose a novel and efficient way to accurately classify these
acoustic signals using cepstrum and spectrum based local and global audio
features, and a multi-input neural network. Experimental results show that our
methodology improves upon the established baselines of previous works and
achieves an accuracy of 91.98% and 96.66% on MVD and MVDA Datasets,
respectively. Finally, the proposed model was deployed through an Android
application to make it accessible for testing and demonstrate its efficacy.
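A hedged sketch of a multi-input classifier in the spirit of the approach above is given below: one branch for local (cepstrum-like) features and one for global (spectrum-level) features, merged before a softmax over the four classes. The feature dimensions, layer sizes, and random placeholder inputs are illustrative assumptions, not the paper's architecture.

```python
# Sketch: two-branch (local + global feature) vehicle-type classifier.
import numpy as np
from tensorflow.keras import layers, Model

local_in = layers.Input(shape=(40,), name="cepstral_features")   # e.g., averaged MFCCs
global_in = layers.Input(shape=(64,), name="spectral_features")  # e.g., spectrum statistics

x1 = layers.Dense(64, activation="relu")(local_in)
x2 = layers.Dense(64, activation="relu")(global_in)
merged = layers.Concatenate()([x1, x2])
out = layers.Dense(4, activation="softmax", name="vehicle_class")(merged)  # truck/car/bike/none

model = Model(inputs=[local_in, global_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for extracted audio features.
rng = np.random.default_rng(0)
X_local, X_global = rng.normal(size=(32, 40)), rng.normal(size=(32, 64))
y = rng.integers(0, 4, size=32)
model.fit([X_local, X_global], y, epochs=1, verbose=0)
```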
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 08:02:57 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Ashhad",
"Mohd",
""
],
[
"Ahmed",
"Omar",
""
],
[
"Ambat",
"Sooraj K.",
""
],
[
"Haq",
"Zeeshan Ali",
""
],
[
"Alam",
"Mansaf",
""
]
] |
new_dataset
| 0.999794 |
2309.03548
|
Xiaohan Cui
|
Xiaohan Cui, Long Ma, Tengyu Ma, Jinyuan Liu, Xin Fan, Risheng Liu
|
Trash to Treasure: Low-Light Object Detection via
Decomposition-and-Aggregation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection in low-light scenarios has attracted much attention in the
past few years. A mainstream and representative scheme introduces enhancers as
the pre-processing for regular detectors. However, because of the disparity in
task objectives between the enhancer and the detector, this paradigm cannot
perform at its best. In this work, we aim to unlock the potential of the
enhancer + detector combination. Different from existing works, we extend the
illumination-based
enhancers (our newly designed or existing) as a scene decomposition module,
whose removed illumination is exploited as the auxiliary in the detector for
extracting detection-friendly features. A semantic aggregation module is
further established for integrating multi-scale scene-related semantic
information in the context space. In effect, the proposed scheme successfully
transforms the "trash" (i.e., the ignored illumination in the detector) into
the "treasure" for the detector. Plenty of experiments are conducted to reveal
our superiority against other state-of-the-art methods. The code will be public
if it is accepted.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 08:11:47 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Cui",
"Xiaohan",
""
],
[
"Ma",
"Long",
""
],
[
"Ma",
"Tengyu",
""
],
[
"Liu",
"Jinyuan",
""
],
[
"Fan",
"Xin",
""
],
[
"Liu",
"Risheng",
""
]
] |
new_dataset
| 0.950517 |
2309.03566
|
Jens Kanstrup Larsen
|
Jens Kanstrup Larsen, Roberto Guanciale, Philipp Haller, Alceste
Scalas
|
P4R-Type: a Verified API for P4 Control Plane Programs (Technical
Report)
|
82 pages, 27 figures, extended version of paper to be published at
OOPSLA 2023
| null |
10.1145/3622866
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Software-Defined Networking (SDN) significantly simplifies programming,
reconfiguring, and optimizing network devices, such as switches and routers.
The de facto standard for programming SDN devices is the P4 language. However,
the flexibility and power of P4, and SDN more generally, gives rise to
important risks. As a number of incidents at major cloud providers have shown,
errors in SDN programs can compromise the availability of networks, leaving
them in a non-functional state. The focus of this paper is errors in
control-plane programs that interact with P4-enabled network devices via the
standardized P4Runtime API. For clients of the P4Runtime API it is easy to make
mistakes that lead to catastrophic failures, despite the use of Google's
Protocol Buffers as an interface definition language.
This paper proposes P4R-Type, a novel verified P4Runtime API for Scala that
performs static checks for P4 control plane operations, ruling out mismatches
between P4 tables, allowed actions, and action parameters. As a formal
foundation of P4R-Type, we present the $F_{\text{P4R}}$ calculus and its typing
system, which ensure that well-typed programs never get stuck by issuing
invalid P4Runtime operations. We evaluate the safety and flexibility of
P4R-Type with 3 case studies. To the best of our knowledge, this is the first
work that formalises P4Runtime control plane applications, and a typing
discipline ensuring the correctness of P4Runtime operations.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 08:52:49 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Larsen",
"Jens Kanstrup",
""
],
[
"Guanciale",
"Roberto",
""
],
[
"Haller",
"Philipp",
""
],
[
"Scalas",
"Alceste",
""
]
] |
new_dataset
| 0.997016 |
2309.03579
|
Ajitesh Srivastava
|
Ajitesh Srivastava
|
DTW+S: Shape-based Comparison of Time-series with Ordered Local Trend
|
11 pages, 13 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Measuring distance or similarity between time-series data is a fundamental
aspect of many applications including classification and clustering. Existing
measures may fail to capture similarities due to local trends (shapes) and may
even produce misleading results. Our goal is to develop a measure that looks
for similar trends occurring around similar times and is easily interpretable
for researchers in applied domains. This is particularly useful for
applications where time-series have a sequence of meaningful local trends that
are ordered, such as in epidemics (a surge to an increase to a peak to a
decrease). We propose a novel measure, DTW+S, which creates an interpretable
"closeness-preserving" matrix representation of the time-series, where each
column represents local trends, and then it applies Dynamic Time Warping to
compute distances between these matrices. We present a theoretical analysis
that supports the choice of this representation. We demonstrate the utility of
DTW+S in ensemble building and clustering of epidemic curves. We also
demonstrate that our approach results in better classification compared to
Dynamic Time Warping for a class of datasets, particularly when local trends
rather than scale play a decisive role.
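A minimal sketch of the DTW+S idea described above follows: represent each time-series as a matrix whose columns summarize local trends, then compare the matrices with Dynamic Time Warping using a column-wise distance. Binning windowed slopes into a small shape vocabulary is an illustrative choice of local-trend descriptor, and the thresholds and toy epidemic curves are assumptions.

```python
# Sketch: local-trend matrix representation + DTW over matrix columns.
import numpy as np

def trend_matrix(series: np.ndarray, window: int = 5, n_shapes: int = 3) -> np.ndarray:
    """One column per window; rows indicate decreasing / flat / increasing trends."""
    cols = []
    for start in range(0, len(series) - window + 1, window):
        seg = series[start:start + window]
        slope = np.polyfit(np.arange(window), seg, 1)[0]
        col = np.zeros(n_shapes)
        col[0 if slope < -0.05 else (2 if slope > 0.05 else 1)] = 1.0
        cols.append(col)
    return np.stack(cols, axis=1)  # shape: (n_shapes, n_windows)

def dtw_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Classic DTW over matrix columns with Euclidean column distance."""
    n, m = A.shape[1], B.shape[1]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[:, i - 1] - B[:, j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 60)
epidemic_a = np.exp(-((t - 0.4) / 0.1) ** 2)   # surge, peak, decrease
epidemic_b = np.exp(-((t - 0.55) / 0.1) ** 2)  # same shape, shifted in time
print(round(dtw_distance(trend_matrix(epidemic_a), trend_matrix(epidemic_b)), 3))
```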
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 09:18:12 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Srivastava",
"Ajitesh",
""
]
] |
new_dataset
| 0.998767 |
2309.03584
|
Tobias Pfandzelter
|
Tobias Pfandzelter and David Bermbach
|
Enoki: Stateful Distributed FaaS from Edge to Cloud
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Function-as-a-Service (FaaS) is a promising paradigm for applications
distributed across the edge-cloud continuum. FaaS functions are stateless by
nature, leading to high elasticity and transparent invocation. Supporting
stateful applications, however, requires integrating data storage in FaaS,
which is not trivial in an edge-cloud environment.
We propose Enoki, an architecture for stateful FaaS computing replicated
across the edge-cloud continuum. Enoki integrates a replicated key-value store
with single-node FaaS systems at edge and cloud nodes in order to provide
low-latency local data access for functions without breaking the abstraction of
the FaaS programming model. We evaluate Enoki with microbenchmarks on an
open-source prototype and demonstrate building a stateful FaaS application with
multiple functions distributed over edge and cloud.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 09:25:03 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Pfandzelter",
"Tobias",
""
],
[
"Bermbach",
"David",
""
]
] |
new_dataset
| 0.952484 |
2309.03595
|
Carolina Camassa
|
Claudia Biancotti, Carolina Camassa
|
Loquacity and Visible Emotion: ChatGPT as a Policy Advisor
|
33 pages
| null | null | null |
cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
ChatGPT, a software seeking to simulate human conversational abilities, is
attracting increasing attention. It is sometimes portrayed as a groundbreaking
productivity aid, including for creative work. In this paper, we run an
experiment to assess its potential in complex writing tasks. We ask the
software to compose a policy brief for the Board of the Bank of Italy. We find
that ChatGPT can accelerate workflows by providing well-structured content
suggestions, and by producing extensive, linguistically correct text in a
matter of seconds. It does, however, require a significant amount of expert
supervision, which partially offsets productivity gains. If the app is used
naively, output can be incorrect, superficial, or irrelevant. Superficiality is
an especially problematic limitation in the context of policy advice intended
for high-level audiences.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 09:40:12 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Biancotti",
"Claudia",
""
],
[
"Camassa",
"Carolina",
""
]
] |
new_dataset
| 0.997721 |
2309.03607
|
Francesco Marchiori
|
Francesco Marchiori, Mauro Conti
|
Your Battery Is a Blast! Safeguarding Against Counterfeit Batteries with
Authentication
|
18 pages, 11 figures
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Lithium-ion (Li-ion) batteries are the primary power source in various
applications due to their high energy and power density. Their market was
estimated to be up to 48 billion U.S. dollars in 2022. However, the widespread
adoption of Li-ion batteries has resulted in counterfeit cell production, which
can pose safety hazards to users. Counterfeit cells can cause explosions or
fires, and their prevalence in the market makes it difficult for users to
detect fake cells. Indeed, current battery authentication methods can be
susceptible to advanced counterfeiting techniques and are often not adaptable
to various cells and systems. In this paper, we improve the state of the art on
battery authentication by proposing two novel methodologies, DCAuth and
EISthentication, which leverage the internal characteristics of each cell
through Machine Learning models. Our methods automatically authenticate
lithium-ion battery models and architectures using data from their regular
usage without the need for any external device. They are also resilient to the
most common and critical counterfeit practices and can scale to several
batteries and devices. To evaluate the effectiveness of our proposed
methodologies, we analyze time-series data from a total of 20 datasets that we
have processed to extract meaningful features for our analysis. Our methods
achieve high accuracy in battery authentication for both architectures (up to
0.99) and models (up to 0.96). Moreover, our methods offer comparable
identification performances. By using our proposed methodologies, manufacturers
can ensure that devices only use legitimate batteries, guaranteeing the
operational state of any system and safety measures for the users.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 10:02:59 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Marchiori",
"Francesco",
""
],
[
"Conti",
"Mauro",
""
]
] |
new_dataset
| 0.999492 |
2309.03617
|
Edoardo Manino
|
Edoardo Manino, Rafael S\'a Menezes, Fedor Shmarov, Lucas C. Cordeiro
|
NeuroCodeBench: a plain C neural network benchmark for software
verification
|
Submitted to the 2023 AFRiTS workshop
| null | null | null |
cs.SE cs.AI cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Safety-critical systems with neural network components require strong
guarantees. While existing neural network verification techniques have shown
great progress towards this goal, they cannot prove the absence of software
faults in the network implementation. This paper presents NeuroCodeBench - a
verification benchmark for neural network code written in plain C. It contains
32 neural networks with 607 safety properties divided into 6 categories: maths
library, activation functions, error-correcting networks, transfer function
approximation, probability density estimation and reinforcement learning. Our
preliminary evaluation shows that state-of-the-art software verifiers struggle
to provide correct verdicts, due to their incomplete support of the standard C
mathematical library and the complexity of larger neural networks.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 10:19:33 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Manino",
"Edoardo",
""
],
[
"Menezes",
"Rafael Sá",
""
],
[
"Shmarov",
"Fedor",
""
],
[
"Cordeiro",
"Lucas C.",
""
]
] |
new_dataset
| 0.998721 |
2309.03643
|
Wenbo Guo
|
Wenbo Guo
|
High-Speed (7,2) Compressor Using A Fast Carry-Generation Logic based on
Sorting Network
|
3 pages, 4 figures
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Fast binary compressors are the main components of many basic digital
calculation units. In this paper, a high-speed (7,2) compressor with fast
carry-generation logic is proposed. The carry-generation logic is based on a
sorting network and can generate a carry bit within 2 logic stages rather than
the 3 stages of conventional schoolbook full adders. Combined with the adjusted
full-adder logic, the proposed (7,2) compressor requires only 11 basic logic
stages. The new design was tested on a binary array with 7 rows and 8 columns,
and the results show that it achieves higher performance than previous designs.
The method is suitable for high-performance multiplier designs and other
cryptographic hardware blocks.
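A behavioral sketch of the sorting-network carry idea follows: for seven equally weighted input bits, the weight-4 bit of the ones-count equals the 4th-largest input after sorting, so a sorting/selection network can produce that carry directly. This is a software reference model for checking the output encoding, not the paper's gate-level circuit, and the remaining output bits are computed behaviorally.

```python
# Behavioral reference model of a (7,2)-style compressor with sorting-based carry.
from itertools import product

def compress_7_2(bits):
    """Return (carry4, carry2, sum1) such that 4*carry4 + 2*carry2 + sum1 == popcount."""
    assert len(bits) == 7 and all(b in (0, 1) for b in bits)
    ordered = sorted(bits, reverse=True)
    carry4 = ordered[3]          # 1 iff at least four inputs are 1 (selection by sorting)
    count = sum(bits)
    carry2 = (count >> 1) & 1    # remaining output weights computed behaviorally
    sum1 = count & 1
    return carry4, carry2, sum1

# Exhaustive check over all 2^7 input combinations.
for bits in product((0, 1), repeat=7):
    c4, c2, s = compress_7_2(bits)
    assert 4 * c4 + 2 * c2 + s == sum(bits)
print("Encoding verified for all 128 input patterns.")
```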
|
[
{
"version": "v1",
"created": "Wed, 30 Aug 2023 05:08:25 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Guo",
"Wenbo",
""
]
] |
new_dataset
| 0.993868 |
2309.03658
|
Liming Zhou
|
Liming Zhou and Xiaowei Xu and Xiaodong Wang
|
BNS-Net: A Dual-channel Sarcasm Detection Method Considering
Behavior-level and Sentence-level Conflicts
|
11 pages, 5 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sarcasm detection is a binary classification task that aims to determine
whether a given utterance is sarcastic. Over the past decade, sarcasm detection
has evolved from classical pattern recognition to deep learning approaches,
where features such as user profile, punctuation and sentiment words have been
commonly employed for sarcasm detection. In real-life sarcastic expressions,
behaviors without explicit sentimental cues often serve as carriers of implicit
sentimental meanings. Motivated by this observation, we proposed a dual-channel
sarcasm detection model named BNS-Net. The model considers behavior and
sentence conflicts in two channels. Channel 1: Behavior-level Conflict Channel
reconstructs the text based on core verbs while leveraging the modified
attention mechanism to highlight conflict information. Channel 2:
Sentence-level Conflict Channel introduces external sentiment knowledge to
segment the text into explicit and implicit sentences, capturing conflicts
between them. To validate the effectiveness of BNS-Net, several comparative and
ablation experiments are conducted on three public sarcasm datasets. The
analysis and evaluation of experimental results demonstrate that the BNS-Net
effectively identifies sarcasm in text and achieves the state-of-the-art
performance.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 11:55:11 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Zhou",
"Liming",
""
],
[
"Xu",
"Xiaowei",
""
],
[
"Wang",
"Xiaodong",
""
]
] |
new_dataset
| 0.999411 |
2309.03661
|
Ting Liu
|
Ting Liu, Wansen Wu, Yue Hu, Youkai Wang, Kai Xu, Quanjun Yin
|
Prompt-based Context- and Domain-aware Pretraining for Vision and
Language Navigation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With strong representation capabilities, pretrained vision-language models
are widely used in vision and language navigation (VLN). However, most of them
are trained on web-crawled general-purpose datasets, which incurs a
considerable domain gap when used for VLN tasks. Another challenge for VLN is
how the agent understands the contextual relations between actions on a
trajectory and performs cross-modal alignment sequentially. In this paper, we
propose a novel Prompt-bAsed coNtext- and Domain-Aware (PANDA) pretraining
framework to address these problems. It performs prompting in two stages. In
the domain-aware stage, we apply a low-cost prompt tuning paradigm to learn
soft visual prompts from an in-domain dataset for equipping the pretrained
models with object-level and scene-level cross-modal alignment in VLN tasks.
Furthermore, in the context-aware stage, we design a set of hard context
prompts to capture the sequence-level semantics and instill both out-of-context
and contextual knowledge in the instruction into cross-modal representations.
They enable further tuning of the pretrained models via contrastive learning.
Experimental results on both R2R and REVERIE show the superiority of PANDA
compared to previous state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 11:58:34 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Liu",
"Ting",
""
],
[
"Wu",
"Wansen",
""
],
[
"Hu",
"Yue",
""
],
[
"Wang",
"Youkai",
""
],
[
"Xu",
"Kai",
""
],
[
"Yin",
"Quanjun",
""
]
] |
new_dataset
| 0.99824 |
2309.03664
|
Davide Moroni
|
Francesco Conti, Martina Banchelli, Valentina Bessi, Cristina Cecchi,
Fabrizio Chiti, Sara Colantonio, Cristiano D'Andrea, Marella de Angelis,
Davide Moroni, Benedetta Nacmias, Maria Antonietta Pascali, Sandro Sorbi and
Paolo Matteini
|
Alzheimer Disease Detection from Raman Spectroscopy of the Cerebrospinal
Fluid via Topological Machine Learning
|
Accepted for inclusion in AITA 2023 (http://aita.isti.cnr.it/)
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The cerebrospinal fluid (CSF) of 19 subjects who received a clinical
diagnosis of Alzheimer's disease (AD), as well as of 5 pathological controls,
has been collected and analysed by Raman spectroscopy (RS). We investigated
whether the raw and preprocessed Raman spectra could be used to distinguish AD
from controls. First, we applied standard Machine Learning (ML) methods
obtaining unsatisfactory results. Then, we applied ML to a set of topological
descriptors extracted from raw spectra, achieving a very good classification
accuracy (>87%). Although our results are preliminary, they indicate that RS
and topological analysis together may provide an effective combination to
confirm or disprove a clinical diagnosis of AD. The next steps will include
enlarging the dataset of CSF samples to validate the proposed method better
and, possibly, to understand if topological data analysis could support the
characterization of AD subtypes.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 12:01:01 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Conti",
"Francesco",
""
],
[
"Banchelli",
"Martina",
""
],
[
"Bessi",
"Valentina",
""
],
[
"Cecchi",
"Cristina",
""
],
[
"Chiti",
"Fabrizio",
""
],
[
"Colantonio",
"Sara",
""
],
[
"D'Andrea",
"Cristiano",
""
],
[
"de Angelis",
"Marella",
""
],
[
"Moroni",
"Davide",
""
],
[
"Nacmias",
"Benedetta",
""
],
[
"Pascali",
"Maria Antonietta",
""
],
[
"Sorbi",
"Sandro",
""
],
[
"Matteini",
"Paolo",
""
]
] |
new_dataset
| 0.997557 |
2309.03683
|
Anirudha Bhattacharjee
|
Ratnangshu Das, Yashaswi Sinha, Anirudha Bhattacharjee, and Bishakh
Bhattacharya
|
An anthropomorphic continuum robotic neck actuated by SMA spring-based
multipennate muscle architecture
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a novel Shape Memory Alloy spring actuated continuum
robotic neck that derives inspiration from pennate muscle architecture. The
proposed design has 2DOF, and experimental studies reveal that the designed
joint can replicate the human head's anthropomorphic range of motion. We
enumerate the analytical modelling for SMA actuators and the kinematic model of
the proposed design configuration. A series of experiments were conducted to
assess the performance of the anthropomorphic neck by measuring the range of
motion with varying input currents. Furthermore, the experiments were conducted
to validate the analytical model of the SMA Multiphysics and the continuum
backbone. Existing humanoid necks have been powered by conventional actuators
that have relatively low energy efficiency and are prone to wear. The current
research envisages the application of nonconventional actuators, such as SMA
springs in a specific geometric configuration, which yield a high
power-to-weight ratio and deliver smooth motion for continuum robots, as
demonstrated in this work.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 12:45:19 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Das",
"Ratnangshu",
""
],
[
"Sinha",
"Yashaswi",
""
],
[
"Bhattacharjee",
"Anirudha",
""
],
[
"Bhattacharya",
"Bishakh",
""
]
] |
new_dataset
| 0.997365 |
2309.03685
|
Nicolas Hubert
|
Nicolas Hubert, Pierre Monnin, Mathieu d'Aquin, Armelle Brun, Davy
Monticolo
|
PyGraft: Configurable Generation of Schemas and Knowledge Graphs at Your
Fingertips
| null | null | null | null |
cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge graphs (KGs) have emerged as a prominent data representation and
management paradigm. Being usually underpinned by a schema (e.g. an ontology),
KGs capture not only factual information but also contextual knowledge. In some
tasks, a few KGs established themselves as standard benchmarks. However, recent
works outline that relying on a limited collection of datasets is not
sufficient to assess the generalization capability of an approach. In some
data-sensitive fields such as education or medicine, access to public datasets
is even more limited. To remedy the aforementioned issues, we release PyGraft,
a Python-based tool that generates highly customized, domain-agnostic schemas
and knowledge graphs. The synthesized schemas encompass various RDFS and OWL
constructs, while the synthesized KGs emulate the characteristics and scale of
real-world KGs. Logical consistency of the generated resources is ultimately
ensured by running a description logic (DL) reasoner. By providing a way of
generating both a schema and KG in a single pipeline, PyGraft's aim is to
empower the generation of a more diverse array of KGs for benchmarking novel
approaches in areas such as graph-based machine learning (ML), or more
generally KG processing. In graph-based ML in particular, this should foster a
more holistic evaluation of model performance and generalization capability,
thereby going beyond the limited collection of available benchmarks. PyGraft is
available at: https://github.com/nicolas-hbt/pygraft.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 13:00:09 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Hubert",
"Nicolas",
""
],
[
"Monnin",
"Pierre",
""
],
[
"d'Aquin",
"Mathieu",
""
],
[
"Brun",
"Armelle",
""
],
[
"Monticolo",
"Davy",
""
]
] |
new_dataset
| 0.960215 |
2309.03725
|
Shyam Ayyasamy
|
Shyam A, Aparna Purayath, Keerthivasan S, Akash S M, Aswathaman
Govindaraju, Manojkumar Lakshmanan, and Mohanasankar Sivaprakasam
|
Immersive Virtual Reality Platform for Robot-Assisted Antenatal
Ultrasound Scanning
|
The paper was accepted and presented at IEEE ROMAN 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Maternal health remains a pervasive challenge in developing and
underdeveloped countries. Inadequate access to basic antenatal Ultrasound (US)
examinations, limited resources such as primary health services and
infrastructure, and lack of skilled healthcare professionals are the major
concerns. To improve the quality of maternal care, robot-assisted antenatal US
systems with teleoperable and autonomous capabilities were introduced. However,
the existing teleoperation systems rely on standard video stream-based
approaches that are constrained by limited immersion and scene awareness. Also,
there is no prior work on autonomous antenatal robotic US systems that automate
standardized scanning protocols. To that end, this paper introduces a novel
Virtual Reality (VR) platform for robotic antenatal ultrasound, which enables
sonologists to control a robotic arm over a wired network. The effectiveness of
the system is enhanced by providing a reconstructed 3D view of the environment
and immersing the user in a VR space. Also, the system facilitates a better
understanding of the anatomical surfaces to perform pragmatic scans using 3D
models. Further, the proposed robotic system also has autonomous capabilities;
under the supervision of the sonologist, it can perform the standard six-step
approach for obstetric US scanning recommended by the ISUOG. Using a 23-week
fetal phantom, the proposed system was demonstrated to technology and academia
experts at MEDICA 2022 as a part of the KUKA Innovation Award. Their positive
feedback supports the feasibility of the system and provides insight into the
improvements needed to make it a clinically viable system.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 14:12:04 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"A",
"Shyam",
""
],
[
"Purayath",
"Aparna",
""
],
[
"S",
"Keerthivasan",
""
],
[
"M",
"Akash S",
""
],
[
"Govindaraju",
"Aswathaman",
""
],
[
"Lakshmanan",
"Manojkumar",
""
],
[
"Sivaprakasam",
"Mohanasankar",
""
]
] |
new_dataset
| 0.992561 |
2309.03728
|
Moni Naor
|
Moni Naor and Eugene Pekel
|
Adjacency Sketches in Adversarial Environments
| null | null | null | null |
cs.DS cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An adjacency sketching or implicit labeling scheme for a family $\cal F$ of
graphs is a method that defines for any $n$-vertex graph $G \in \cal F$ an assignment
of labels to each vertex in $G$, so that the labels of two vertices tell you
whether or not they are adjacent. The goal is to come up with labeling schemes
that use as few bits as possible to represent the labels. By using randomness
when assigning labels, it is sometimes possible to produce adjacency sketches
with much smaller label sizes, but this comes at the cost of introducing some
probability of error. Both deterministic and randomized labeling schemes have
been extensively studied, as they have applications for distributed data
structures and deeper connections to universal graphs and communication
complexity. The main question of interest is which graph families have schemes
using short labels, usually $O(\log n)$ in the deterministic case or constant
for randomized sketches.
In this work we consider the resilience of probabilistic adjacency sketches
against an adversary making adaptive queries to the labels. This differs from
the previously analyzed probabilistic setting, which is "one shot". We show
that in the adaptive adversarial case the size of the labels is tightly related
to the maximal degree of the graphs in $\cal F$. This results in a stronger
characterization compared to what is known in the non-adversarial setting. In
more detail, we construct sketches that fail with probability $\varepsilon$ for
graphs with maximal degree $d$ using $2d\log (1/\varepsilon)$ bit labels and
show that this is roughly the best that can be done for any specific graph of
maximal degree $d$, e.g., a $d$-ary tree.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 14:13:44 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Naor",
"Moni",
""
],
[
"Pekel",
"Eugene",
""
]
] |
new_dataset
| 0.996185 |
2309.03755
|
Qiang Huang
|
Yihao Ang, Qiang Huang, Yifan Bao, Anthony K. H. Tung, Zhiyong Huang
|
TSGBench: Time Series Generation Benchmark
|
14 pages, 8 figures, and 4 tables
| null | null | null |
cs.LG cs.AI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synthetic Time Series Generation (TSG) is crucial in a range of applications,
including data augmentation, anomaly detection, and privacy preservation.
Although significant strides have been made in this field, existing methods
exhibit three key limitations: (1) They often benchmark against similar model
types, constraining a holistic view of performance capabilities. (2) The use of
specialized synthetic and private datasets introduces biases and hampers
generalizability. (3) Ambiguous evaluation measures, often tied to custom
networks or downstream tasks, hinder consistent and fair comparison.
To overcome these limitations, we introduce \textsf{TSGBench}, the inaugural
TSG Benchmark, designed for a unified and comprehensive assessment of TSG
methods. It comprises three modules: (1) a curated collection of publicly
available, real-world datasets tailored for TSG, together with a standardized
preprocessing pipeline; (2) a comprehensive suite of evaluation measures including
vanilla measures, new distance-based assessments, and visualization tools; (3)
a pioneering generalization test rooted in Domain Adaptation (DA), compatible
with all methods. We have conducted extensive experiments across ten real-world
datasets from diverse domains, utilizing ten advanced TSG methods and twelve
evaluation measures, all gauged through \textsf{TSGBench}. The results
highlight its remarkable efficacy and consistency. More importantly,
\textsf{TSGBench} delivers a statistical breakdown of method rankings,
illuminating performance variations across different datasets and measures, and
offering nuanced insights into the effectiveness of each method.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 14:51:42 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Ang",
"Yihao",
""
],
[
"Huang",
"Qiang",
""
],
[
"Bao",
"Yifan",
""
],
[
"Tung",
"Anthony K. H.",
""
],
[
"Huang",
"Zhiyong",
""
]
] |
new_dataset
| 0.979298 |
2309.03763
|
Johannes Flotzinger
|
Johannes Flotzinger, Philipp J. R\"osch, Norbert Oswald, Thomas Braml
|
dacl1k: Real-World Bridge Damage Dataset Putting Open-Source Data to the
Test
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognising reinforced concrete defects (RCDs) is a crucial element for
determining the structural integrity, traffic safety and durability of bridges.
However, most of the existing datasets in the RCD domain are derived from a
small number of bridges acquired in specific camera poses, lighting conditions
and with fixed hardware. These limitations question the usability of models
trained on such open-source data in real-world scenarios. We address this
problem by testing such models on our "dacl1k" dataset, a highly diverse RCD
dataset for multi-label classification based on building inspections including
1,474 images. To this end, we trained the models on different combinations of
open-source data (meta datasets) which were subsequently evaluated both
extrinsically and intrinsically. During extrinsic evaluation, we report metrics
on dacl1k and the meta datasets. The performance analysis on dacl1k shows
practical usability of the meta data, with the best model achieving an Exact
Match Ratio of 32%. Additionally, we conduct an intrinsic evaluation by
clustering the bottleneck features of the best model from the extrinsic
evaluation in order to determine whether the model has learned to distinguish
the datasets or the classes (RCDs), the latter being the desired outcome. The
dacl1k dataset and our trained
models will be made publicly available, enabling researchers and practitioners
to put their models to the real-world test.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 15:05:35 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Flotzinger",
"Johannes",
""
],
[
"Rösch",
"Philipp J.",
""
],
[
"Oswald",
"Norbert",
""
],
[
"Braml",
"Thomas",
""
]
] |
new_dataset
| 0.999821 |
2309.03771
|
Zeping Sui
|
Zeping Sui, Hongming Zhang, Sumei Sun, Lie-Liang Yang, Lajos Hanzo
|
Space-Time Shift Keying Aided OTFS Modulation for Orthogonal Multiple
Access
|
Accepted by IEEE Transactions on Communications
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Space-time shift keying-aided orthogonal time frequency space
modulation-based multiple access (STSK-OTFS-MA) is proposed for reliable uplink
transmission in high-Doppler scenarios. As a beneficial feature of our
STSK-OTFS-MA system, extra information bits are mapped onto the indices of the
active dispersion matrices, which allows the system to enjoy the joint benefits
of both STSK and OTFS signalling. Because the time-, space- and DD-domain
degrees of freedom are jointly exploited, our STSK-OTFS-MA scheme
achieves increased diversity and coding gains. To mitigate the potentially
excessive detection complexity, the sparse structure of the equivalent
transmitted symbol vector is exploited, resulting in a pair of low-complexity
near-maximum likelihood (ML) multiuser detection algorithms. Explicitly, we
conceive a progressive residual check-based greedy detector (PRCGD) and an
iterative reduced-space check-based detector (IRCD). Then, we derive both the
unconditional single-user pairwise error probability (SU-UPEP) and a tight bit
error ratio (BER) union-bound for our single-user STSK-OTFS-MA system employing
the ML detector. Furthermore, the discrete-input continuous-output memoryless
channel (DCMC) capacity of the proposed system is derived. The optimal
dispersion matrices (DMs) are designed based on the maximum attainable
diversity and coding gain metrics. Finally, it is demonstrated that our
STSK-OTFS-MA system achieves both a lower BER and a higher DCMC capacity than
its conventional spatial modulation (SM) and orthogonal frequency-division
multiplexing (OFDM) counterparts. As a benefit, the proposed system strikes
compelling BER vs. system complexity and BER vs. detection complexity
trade-offs.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 15:20:21 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Sui",
"Zeping",
""
],
[
"Zhang",
"Hongming",
""
],
[
"Sun",
"Sumei",
""
],
[
"Yang",
"Lie-Liang",
""
],
[
"Hanzo",
"Lajos",
""
]
] |
new_dataset
| 0.990821 |
2309.03799
|
Linh Trinh
|
Linh Trinh, Bach Ha, Tu Tran
|
FisheyePP4AV: A privacy-preserving method for autonomous vehicles on
fisheye camera images
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In many parts of the world, the use of vast amounts of data collected on
public roadways for autonomous driving has increased. There is an urgent need
for effective solutions to detect and anonymize pedestrian faces and nearby car
license plates in actual road-driving scenarios. As more data is
collected, privacy concerns regarding it increase, including but not limited to
pedestrian faces and surrounding vehicle license plates. Normal and fisheye
cameras are the two common camera types that are typically mounted on
collection vehicles. Owing to their complex distortion models, fisheye camera
images are deformed compared with regular images, which causes many deep
learning models to perform poorly on standard computer vision tasks. In this
work, we pay particular attention to protecting privacy while still adhering to
several laws for fisheye camera photos taken by driverless vehicles. First, we suggest
a framework for extracting face and plate identification knowledge from several
teacher models. Our second suggestion is to transform both the image and the
label from a regular image to fisheye-like data using a varied and realistic
fisheye transformation. Finally, we run a test using the open-source PP4AV
dataset. The experimental findings demonstrated that our model outperformed
baseline methods when trained on data from autonomous vehicles, even when the
data were softly labeled. The implementation code is available at our github:
https://github.com/khaclinh/FisheyePP4AV.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 15:51:31 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Trinh",
"Linh",
""
],
[
"Ha",
"Bach",
""
],
[
"Tran",
"Tu",
""
]
] |
new_dataset
| 0.999371 |
2309.03812
|
Salehe Erfanian Ebadi
|
Francesco Picetti, Shrinath Deshpande, Jonathan Leban, Soroosh
Shahtalebi, Jay Patel, Peifeng Jing, Chunpu Wang, Charles Metze III, Cameron
Sun, Cera Laidlaw, James Warren, Kathy Huynh, River Page, Jonathan Hogins,
Adam Crespi, Sujoy Ganguly, Salehe Erfanian Ebadi
|
AnthroNet: Conditional Generation of Humans via Anthropometrics
|
AnthroNet's Unity data generator source code is available at:
https://unity-technologies.github.io/AnthroNet/
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel human body model formulated by an extensive set of
anthropometric measurements, which is capable of generating a wide range of
human body shapes and poses. The proposed model enables direct modeling of
specific human identities through a deep generative architecture, which can
produce humans in any arbitrary pose. It is the first of its kind to have been
trained end-to-end using only synthetically generated data, which not only
provides highly accurate human mesh representations but also allows for precise
anthropometry of the body. Moreover, using a highly diverse animation library,
we articulated our synthetic humans' body and hands to maximize the diversity
of the learnable priors for model training. Our model was trained on a dataset
of $100k$ procedurally-generated posed human meshes and their corresponding
anthropometric measurements. Our synthetic data generator can be used to
generate millions of unique human identities and poses for non-commercial
academic research purposes.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 16:09:06 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Picetti",
"Francesco",
""
],
[
"Deshpande",
"Shrinath",
""
],
[
"Leban",
"Jonathan",
""
],
[
"Shahtalebi",
"Soroosh",
""
],
[
"Patel",
"Jay",
""
],
[
"Jing",
"Peifeng",
""
],
[
"Wang",
"Chunpu",
""
],
[
"Metze",
"Charles",
"III"
],
[
"Sun",
"Cameron",
""
],
[
"Laidlaw",
"Cera",
""
],
[
"Warren",
"James",
""
],
[
"Huynh",
"Kathy",
""
],
[
"Page",
"River",
""
],
[
"Hogins",
"Jonathan",
""
],
[
"Crespi",
"Adam",
""
],
[
"Ganguly",
"Sujoy",
""
],
[
"Ebadi",
"Salehe Erfanian",
""
]
] |
new_dataset
| 0.991807 |
2309.03815
|
Guokai Zhang
|
An-An Liu, Guokai Zhang, Yuting Su, Ning Xu, Yongdong Zhang, and
Lanjun Wang
|
T2IW: Joint Text to Image & Watermark Generation
| null | null | null | null |
cs.CV cs.MM eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent developments in text-conditioned image generative models have
revolutionized the production of realistic results. Unfortunately, this has
also led to an increase in privacy violations and the spread of false
information, which creates a need for traceability, privacy protection, and
other security measures. However, existing text-to-image paradigms lack the
technical capabilities to link traceable messages with image generation. In
this study, we introduce a novel task for the joint generation of text to image
and watermark (T2IW). This T2IW scheme ensures minimal damage to image quality
when generating a compound image by forcing the semantic feature and the
watermark signal to be compatible at the pixel level. Additionally, by utilizing
principles from Shannon information theory and non-cooperative game theory, we
are able to separate the revealed image and the revealed watermark from the
compound image. Furthermore, we strengthen the watermark robustness of our
approach by subjecting the compound image to various post-processing attacks,
with minimal pixel distortion observed in the revealed watermark. Extensive
experiments have demonstrated remarkable achievements in image quality,
watermark invisibility, and watermark robustness, supported by our proposed set
of evaluation metrics.
|
[
{
"version": "v1",
"created": "Thu, 7 Sep 2023 16:12:06 GMT"
}
] | 2023-09-08T00:00:00 |
[
[
"Liu",
"An-An",
""
],
[
"Zhang",
"Guokai",
""
],
[
"Su",
"Yuting",
""
],
[
"Xu",
"Ning",
""
],
[
"Zhang",
"Yongdong",
""
],
[
"Wang",
"Lanjun",
""
]
] |
new_dataset
| 0.968105 |