Column | Type | Notes
---|---|---
id | string | lengths 9-10
submitter | string | lengths 2-52, nullable
authors | string | lengths 4-6.51k
title | string | lengths 4-246
comments | string | lengths 1-523, nullable
journal-ref | string | lengths 4-345, nullable
doi | string | lengths 11-120, nullable
report-no | string | lengths 2-243, nullable
categories | string | lengths 5-98
license | string | 9 classes
abstract | string | lengths 33-3.33k
versions | list |
update_date | timestamp[s] |
authors_parsed | list |
prediction | string | 1 class
probability | float64 | range 0.95-1

The records below list these fields in this order, separated by "|".
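For orientation only, here is a minimal sketch of how records with this schema might be loaded and filtered; the JSON Lines file name and the use of pandas are assumptions, not part of the dataset description.

```python
# Minimal sketch, assuming the records are stored as JSON Lines with the
# columns listed above; the file name "arxiv_predictions.jsonl" is hypothetical.
import pandas as pd

df = pd.read_json("arxiv_predictions.jsonl", lines=True)

# Keep high-confidence "new_dataset" predictions in computer-vision categories.
mask = (df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)
cv = df[mask & df["categories"].str.contains("cs.CV", regex=False)]
print(cv[["id", "title", "probability"]].head())
```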
2301.03946
|
Catherine Han
|
Catherine Han, Joseph Seering, Deepak Kumar, Jeffrey T. Hancock, Zakir
Durumeric
|
Hate Raids on Twitch: Echoes of the Past, New Modalities, and
Implications for Platform Governance
| null | null | null | null |
cs.CY cs.CR cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the summer of 2021, users on the livestreaming platform Twitch were
targeted by a wave of "hate raids," a form of attack that overwhelms a
streamer's chatroom with hateful messages, often through the use of bots and
automation. Using a mixed-methods approach, we combine a quantitative
measurement of attacks across the platform with interviews of streamers and
third-party bot developers. We present evidence that confirms that some hate
raids were highly-targeted, hate-driven attacks, but we also observe another
mode of hate raid similar to networked harassment and specific forms of
subcultural trolling. We show that the streamers who self-identify as LGBTQ+
and/or Black were disproportionately targeted and that hate raid messages were
most commonly rooted in anti-Black racism and antisemitism. We also document
how these attacks elicited rapid community responses in both bolstering
reactive moderation and developing proactive mitigations for future attacks. We
conclude by discussing how platforms can better prepare for attacks and protect
at-risk communities while considering the division of labor between community
moderators, tool-builders, and platforms.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 13:00:14 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jan 2023 01:02:16 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Han",
"Catherine",
""
],
[
"Seering",
"Joseph",
""
],
[
"Kumar",
"Deepak",
""
],
[
"Hancock",
"Jeffrey T.",
""
],
[
"Durumeric",
"Zakir",
""
]
] |
new_dataset
| 0.998604 |
2301.04460
|
Julius Kirkegaard
|
Albert Alonso and Julius B. Kirkegaard
|
Fast spline detection in high density microscopy data
| null | null | null | null |
cs.CV cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Computer-aided analysis of biological microscopy data has seen a massive
improvement with the utilization of general-purpose deep learning techniques.
Yet, in microscopy studies of multi-organism systems, the problem of collision
and overlap remains challenging. This is particularly true for systems composed
of slender bodies such as crawling nematodes, swimming spermatozoa, or the
beating of eukaryotic or prokaryotic flagella. Here, we develop a novel
end-to-end deep learning approach to extract precise shape trajectories of
generally motile and overlapping splines. Our method works in low resolution
settings where feature keypoints are hard to define and detect. Detection is
fast and we demonstrate the ability to track thousands of overlapping organisms
simultaneously. While our approach is agnostic to the area of application, we
present it in the setting of, and exemplify its usability on, dense experiments
of crawling Caenorhabditis elegans. The model training is achieved purely on
synthetic data, utilizing a physics-based model for nematode motility, and we
demonstrate the model's ability to generalize from simulations to experimental
videos.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 13:40:05 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Jan 2023 10:05:00 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Alonso",
"Albert",
""
],
[
"Kirkegaard",
"Julius B.",
""
]
] |
new_dataset
| 0.997817 |
2301.05277
|
Debasree Das
|
Debasree Das, Sandip Chakraborty, Bivas Mitra
|
DriCon: On-device Just-in-Time Context Characterization for Unexpected
Driving Events
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Driving is a complex task carried out under the influence of diverse spatial
objects and their temporal interactions. Therefore, a sudden fluctuation in
driving behavior can be due to either a lack of driving skill or the effect of
various on-road spatial factors such as pedestrian movements, peer vehicles'
actions, etc. Therefore, understanding the context behind a degraded driving
behavior just-in-time is necessary to ensure on-road safety. In this paper, we
develop a system called DriCon that exploits the information acquired
from a dashboard-mounted edge device to understand the context in terms of
micro-events from a diverse set of on-road spatial factors and in-vehicle
driving maneuvers taken. DriCon uses the live in-house testbed and the
largest publicly available driving dataset to generate human interpretable
explanations for the unexpected driving events. It also provides better insight
than existing driving behavior characterization techniques, with an improved
similarity of $80$\% over $50$ hours of driving data.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 19:55:33 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Das",
"Debasree",
""
],
[
"Chakraborty",
"Sandip",
""
],
[
"Mitra",
"Bivas",
""
]
] |
new_dataset
| 0.99974 |
2301.05402
|
Evan Crothers
|
Evan Crothers, Herna Viktor, Nathalie Japkowicz
|
In BLOOM: Creativity and Affinity in Artificial Lyrics and Art
|
Accepted to AAAI2023 creativeAI workshop
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We apply a large multilingual language model (BLOOM-176B) in open-ended
generation of Chinese song lyrics, and evaluate the resulting lyrics for
coherence and creativity using human reviewers. We find that current
computational metrics for evaluating large language model outputs (MAUVE) have
limitations in evaluation of creative writing. We note that the human concept
of creativity requires lyrics to be both comprehensible and distinctive -- and
that humans assess certain types of machine-generated lyrics to score more
highly than real lyrics by popular artists. Inspired by the inherently
multimodal nature of album releases, we leverage a Chinese-language stable
diffusion model to produce high-quality lyric-guided album art, demonstrating a
creative approach for an artist seeking inspiration for an album or single.
Finally, we introduce the MojimLyrics dataset, a Chinese-language dataset of
popular song lyrics for future research.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 06:22:22 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Crothers",
"Evan",
""
],
[
"Viktor",
"Herna",
""
],
[
"Japkowicz",
"Nathalie",
""
]
] |
new_dataset
| 0.982519 |
2301.05434
|
Esha Pahwa
|
Esha Pahwa, Achleshwar Luthra, Pratik Narang
|
LVRNet: Lightweight Image Restoration for Aerial Images under Low
Visibility
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Learning to recover clear images from images having a combination of
degrading factors is a challenging task. That being said, autonomous
surveillance in low visibility conditions caused by high pollution/smoke, poor
air quality index, low light, atmospheric scattering, and haze during a
blizzard becomes even more important to prevent accidents. It is thus crucial
to form a solution that can result in a high-quality image and is efficient
enough to be deployed for everyday use. However, the lack of proper datasets
available to tackle this task limits the performance of the previous methods
proposed. To this end, we generate the LowVis-AFO dataset, containing 3647
paired dark-hazy and clear images. We also introduce a lightweight deep
learning model called Low-Visibility Restoration Network (LVRNet). It
outperforms previous image restoration methods with low latency, achieving a
PSNR value of 25.744 and an SSIM of 0.905, making our approach scalable and
ready for practical use. The code and data can be found at
https://github.com/Achleshwar/LVRNet.
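For context, the PSNR reported above is the standard peak signal-to-noise ratio; as a worked definition (a textbook formula, not taken from the paper), with $\mathrm{MAX}$ the peak pixel value (e.g. 255 for 8-bit images), $x$ the reference image and $\hat{x}$ the restored image:
$\mathrm{PSNR} = 10\log_{10}\left(\mathrm{MAX}^2/\mathrm{MSE}\right)$, where $\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}(x_i-\hat{x}_i)^2$.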
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 08:43:11 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Pahwa",
"Esha",
""
],
[
"Luthra",
"Achleshwar",
""
],
[
"Narang",
"Pratik",
""
]
] |
new_dataset
| 0.995176 |
2301.05455
|
Jan Martin Nordbotten
|
Jan Martin Nordbotten, Benyamine Benali, Jakub Wiktor Both, Bergit
Brattek{\aa}s, Erlend Storvik, Martin A. Fern{\o}
|
DarSIA: An open-source Python toolbox for two-scale image processing of
dynamics in porous media
| null | null | null | null |
cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding porous media flow is inherently a multi-scale challenge, where
at the core lies the aggregation of pore-level processes to a continuum, or
Darcy-scale, description. This challenge is directly mirrored in image
processing, where grains and interfaces may be clearly visible, yet continuous
parameters are desirable to measure. Classical image processing is poorly
adapted to this setting, as most techniques do not explicitly utilize the fact
that the image contains explicit physical processes.
Here, we adapt classical image processing concepts to what we define as
physical images of porous materials and processes within them. This is realized
through the development of a new open-source image analysis toolbox
specifically adapted to time-series of images of porous materials.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 09:48:36 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Nordbotten",
"Jan Martin",
""
],
[
"Benali",
"Benyamine",
""
],
[
"Both",
"Jakub Wiktor",
""
],
[
"Brattekås",
"Bergit",
""
],
[
"Storvik",
"Erlend",
""
],
[
"Fernø",
"Martin A.",
""
]
] |
new_dataset
| 0.993537 |
2301.05530
|
Ruby Kumari
|
Ruby Kumari, Jai Gopal Pandey, Abhijit Karmakar
|
An RTL Implementation of the Data Encryption Standard (DES)
|
10 Pages with 7 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Data Encryption Standard (DES) is based on the Feistel block cipher,
developed in 1971 by IBM cryptography researcher Horst Feistel. DES uses 16
rounds of the Feistel structure. But with the changes in recent years, the
internet is starting to be used more to connect devices to each other. These
devices can range from powerful computing devices, such as desktop computers
and tablets, to resource-constrained devices. When it comes to these
constrained devices, cryptography algorithms that use a different key for each
round fail to provide the necessary security and performance.
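As a reminder of the structure under discussion, here is a minimal sketch of a Feistel network; the round function F and the 32-bit halves are illustrative placeholders, not the actual DES primitives or key schedule.

```python
# Minimal sketch of a Feistel network, assuming 32-bit halves; F is a
# placeholder round function, not the real DES f-function or key schedule.
def feistel_round(left: int, right: int, round_key: int) -> tuple[int, int]:
    def F(half: int, key: int) -> int:            # placeholder mixing function
        return ((half * 0x9E3779B1) ^ key) & 0xFFFFFFFF
    return right, left ^ F(right, round_key)      # swap halves and mix

def feistel_encrypt(left: int, right: int, round_keys: list[int]) -> tuple[int, int]:
    for k in round_keys:                          # DES uses 16 rounds
        left, right = feistel_round(left, right, k)
    return left, right
```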
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 13:20:38 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Kumari",
"Ruby",
""
],
[
"Pandey",
"Jai Gopal",
""
],
[
"Karmakar",
"Abhijit",
""
]
] |
new_dataset
| 0.973765 |
2301.05538
|
Zitai Chen
|
Zitai Chen, David Oswald
|
PMFault: Faulting and Bricking Server CPUs through Management Interfaces
|
For demo and source code, visit https://zt-chen.github.io/PMFault/
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Apart from the actual CPU, modern server motherboards contain other auxiliary
components, for example voltage regulators for power management. Those are
connected to the CPU and the separate Baseboard Management Controller (BMC) via
the I2C-based PMBus.
In this paper, using the case study of the widely used Supermicro X11SSL
motherboard, we show how remotely exploitable software weaknesses in the BMC
(or other processors with PMBus access) can be used to access the PMBus and
then perform hardware-based fault injection attacks on the main CPU. The
underlying weaknesses include insecure firmware encryption and signing
mechanisms, a lack of authentication for the firmware upgrade process and the
IPMI KCS control interface, as well as the motherboard design (with the PMBus
connected to the BMC and SMBus by default).
First, we show that undervolting through the PMBus allows breaking the
integrity guarantees of SGX enclaves, bypassing Intel's countermeasures against
previous undervolting attacks like Plundervolt/V0ltPwn. Second, we
experimentally show that overvolting outside the specified range has the
potential of permanently damaging Intel Xeon CPUs, rendering the server
inoperable. We assess the impact of our findings on other server motherboards
made by Supermicro and ASRock.
Our attacks, dubbed PMFault, can be carried out by a privileged software
adversary and do not require physical access to the server motherboard or
knowledge of the BMC login credentials.
We responsibly disclosed the issues reported in this paper to Supermicro and
discuss possible countermeasures at different levels. To the best of our
knowledge, the 12th generation of Supermicro motherboards, which was designed
before we reported PMFault to Supermicro, is not vulnerable.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 13:36:28 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Chen",
"Zitai",
""
],
[
"Oswald",
"David",
""
]
] |
new_dataset
| 0.996134 |
2301.05550
|
Paul Jungeblut
|
Nicholas Bieker, Thomas Bl\"asius, Emil Dohse, Paul Jungeblut
|
Recognizing Unit Disk Graphs in Hyperbolic Geometry is
$\exists\mathbb{R}$-Complete
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph G is a (Euclidean) unit disk graph if it is the intersection graph of
unit disks in the Euclidean plane $\mathbb{R}^2$. Recognizing them is known to
be $\exists\mathbb{R}$-complete, i.e., as hard as solving a system of
polynomial inequalities. In this note we describe a simple framework to
translate $\exists\mathbb{R}$-hardness reductions from the Euclidean plane
$\mathbb{R}^2$ to the hyperbolic plane $\mathbb{H}^2$. We apply our framework
to prove that the recognition of unit disk graphs in the hyperbolic plane is
also $\exists\mathbb{R}$-complete.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 13:55:03 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Bieker",
"Nicholas",
""
],
[
"Bläsius",
"Thomas",
""
],
[
"Dohse",
"Emil",
""
],
[
"Jungeblut",
"Paul",
""
]
] |
new_dataset
| 0.970502 |
2301.05565
|
Xiang Li
|
Li Xiang, He Miao, Luo Haibo, Xiao Jiajie
|
DINF: Dynamic Instance Noise Filter for Occluded Pedestrian Detection
|
15 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Occlusion issue is the biggest challenge in pedestrian detection. RCNN-based
detectors extract instance features by cropping rectangle regions of interest
in the feature maps. However, the visible pixels of the occluded objects are
limited, making the rectangle instance feature mixed with a lot of
instance-irrelevant noise information. Besides, by counting the number of
instances with different degrees of overlap of CrowdHuman dataset, we find that
the number of severely overlapping objects and the number of slightly
overlapping objects are unbalanced, which may exacerbate the challenges posed
by occlusion issues. Regarding the noise issue, from the perspective of
denoising, an iterable dynamic instance noise filter (DINF) is proposed for the
RCNN-based pedestrian detectors to improve the signal-noise ratio of the
instance feature. Simulating the wavelet denoising process, we use the instance
feature vector to generate dynamic convolutional kernels to transform the RoIs
features to a domain in which the near-zero values represent the noise
information. Then, soft thresholding with channel-wise adaptive thresholds is
applied to convert the near-zero values to zero to filter out noise
information. For the imbalance issue, we propose an IoU-Focal factor (IFF) to
modulate the contributions of the well-regressed boxes and the bad-regressed
boxes to the loss in the training process, paying more attention to the
minority severely overlapping objects. Extensive experiments conducted on
CrowdHuman and CityPersons demonstrate that our methods can help RCNN-based
pedestrian detectors achieve state-of-the-art performance.
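The soft thresholding mentioned above is the standard shrinkage operator from wavelet denoising; here is a minimal sketch with channel-wise thresholds (illustrative only, not the authors' implementation).

```python
# Minimal sketch of channel-wise soft thresholding as used in wavelet-style
# denoising; features has shape (channels, height, width), thresholds (channels,).
import numpy as np

def soft_threshold(features: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    t = thresholds[:, None, None]                  # broadcast one threshold per channel
    return np.sign(features) * np.maximum(np.abs(features) - t, 0.0)
```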
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 14:12:36 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Xiang",
"Li",
""
],
[
"Miao",
"He",
""
],
[
"Haibo",
"Luo",
""
],
[
"Jiajie",
"Xiao",
""
]
] |
new_dataset
| 0.997982 |
2301.05586
|
Bo Zhang
|
Chuyi Li, Lulu Li, Yifei Geng, Hongliang Jiang, Meng Cheng, Bo Zhang,
Zaidan Ke, Xiaoming Xu, Xiangxiang Chu
|
YOLOv6 v3.0: A Full-Scale Reloading
|
Tech Report. arXiv admin note: text overlap with arXiv:2209.02976
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The YOLO community has been in high spirits since our first two releases! By
the advent of Chinese New Year 2023, which sees the Year of the Rabbit, we
refurnish YOLOv6 with numerous novel enhancements on the network architecture
and the training scheme. This release is identified as YOLOv6 v3.0. For a
glimpse of performance, our YOLOv6-N hits 37.5% AP on the COCO dataset at a
throughput of 1187 FPS tested with an NVIDIA Tesla T4 GPU. YOLOv6-S strikes
45.0% AP at 484 FPS, outperforming other mainstream detectors at the same scale
(YOLOv5-S, YOLOv8-S, YOLOX-S and PPYOLOE-S). Whereas, YOLOv6-M/L also achieve
better accuracy performance (50.0%/52.8% respectively) than other detectors at
a similar inference speed. Additionally, with an extended backbone and neck
design, our YOLOv6-L6 achieves the state-of-the-art accuracy in real-time.
Extensive experiments are carefully conducted to validate the effectiveness of
each improving component. Our code is made available at
https://github.com/meituan/YOLOv6.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 14:46:46 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Li",
"Chuyi",
""
],
[
"Li",
"Lulu",
""
],
[
"Geng",
"Yifei",
""
],
[
"Jiang",
"Hongliang",
""
],
[
"Cheng",
"Meng",
""
],
[
"Zhang",
"Bo",
""
],
[
"Ke",
"Zaidan",
""
],
[
"Xu",
"Xiaoming",
""
],
[
"Chu",
"Xiangxiang",
""
]
] |
new_dataset
| 0.999834 |
2301.05604
|
Kangcheng Liu
|
Kangcheng Liu
|
A LiDAR-Inertial-Visual SLAM System with Loop Detection
|
2022 12th International Conference on CYBER Technology in Automation,
Control, and Intelligent Systems (IEEE Cyber Oral)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We have proposed, to the best of our knowledge, the first-of-its-kind
LiDAR-Inertial-Visual-Fused simultaneous localization and mapping (SLAM) system
with a strong place recognition capacity. Our proposed SLAM system consists
of visual-inertial odometry (VIO) and LiDAR inertial odometry (LIO) subsystems.
We propose the LIO subsystem utilizing the measurement from the LiDAR and the
inertial sensors to build the local odometry map, and propose the VIO subsystem
which takes in the visual information to construct the 2D-3D associated map.
Then, we propose an iterative Kalman Filter-based optimization function to
optimize the local project-based 2D-to-3D photo-metric error between the
projected image pixels and the local 3D points to make the robust 2D-3D
alignment. Finally, we have also proposed the back-end pose graph global
optimization and the elaborately designed loop closure detection network to
improve the accuracy of the whole SLAM system. Extensive experiments deployed
on the UGV in complicated real-world circumstances demonstrate that our
proposed LiDAR-Visual-Inertial localization system outperforms the current
state-of-the-art in terms of accuracy, efficiency, and robustness.
|
[
{
"version": "v1",
"created": "Fri, 13 Jan 2023 15:16:09 GMT"
}
] | 2023-01-16T00:00:00 |
[
[
"Liu",
"Kangcheng",
""
]
] |
new_dataset
| 0.998956 |
1810.00624
|
L. Sunil Chandran
|
L. Sunil Chandran and Talha Hashim and Dalu Jacob and Rogers Mathew
and Deepak Rajendraprasad and Nitin Singh
|
New bounds on the anti-Ramsey numbers of star graphs
|
19 pages, 3 figures
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The anti-Ramsey number $ar(G,H)$ with input graph $G$ and pattern graph $H$,
is the maximum positive integer $k$ such that there exists an edge coloring of
$G$ using $k$ colors, in which there are no rainbow subgraphs isomorphic to $H$
in $G$. ($H$ is rainbow if all its edges get distinct colors). The concept of
anti-Ramsey number was introduced by Erd\"os, Simonovits, and S\'os in 1973.
Thereafter several researchers investigated this concept in the combinatorial
setting. Recently, Feng et al. revisited the anti-Ramsey problem for the
pattern graph $K_{1,t}$ (for $t \geq 3$) purely from an algorithmic point of
view due to its applications in interference modeling of wireless networks.
They posed it as an optimization problem, the maximum edge $q$-coloring
problem. For a graph $G$ and an integer $q\geq 2$, an edge $q$-coloring of $G$
is an assignment of colors to edges of $G$, such that edges incident on a
vertex span at most $q$ distinct colors. The maximum edge $q$-coloring problem
seeks to maximize the number of colors in an edge $q$-coloring of the graph
$G$. Note that the optimum value of the edge $q$-coloring problem of $G$ equals
$ar(G,K_{1,q+1})$. In this paper, we study $ar(G,K_{1,t})$, the anti-Ramsey
number of stars, for each fixed integer $t\geq 3$, both from combinatorial and
algorithmic point of view. The first of our main results presents an upper
bound for $ar(G,K_{1,q+1})$, in terms of number of vertices and the minimum
degree of $G$. The second one improves this result for the case of
triangle-free input graphs. For a positive integer $t$, let $H_t$ denote a
subgraph of $G$ with maximum number of possible edges and maximum degree $t$.
Our third main result presents an upper bound for $ar(G,K_{1,q+1})$ in terms of
$|E(H_{q-1})|$. All our results have algorithmic consequences.
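To make the definition concrete, here is a minimal sketch that checks whether a given edge coloring is a valid edge q-coloring, i.e. whether the edges incident on every vertex span at most q distinct colors (illustrative only).

```python
# Minimal sketch: verify that an edge coloring is an edge q-coloring, i.e. the
# edges incident on every vertex span at most q distinct colors.
from collections import defaultdict

def is_edge_q_coloring(edges, coloring, q):
    """edges: iterable of (u, v) pairs; coloring: dict mapping (u, v) -> color."""
    colors_at_vertex = defaultdict(set)
    for (u, v) in edges:
        c = coloring[(u, v)]
        colors_at_vertex[u].add(c)
        colors_at_vertex[v].add(c)
    return all(len(cs) <= q for cs in colors_at_vertex.values())
```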
|
[
{
"version": "v1",
"created": "Mon, 1 Oct 2018 11:18:53 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 07:39:44 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Chandran",
"L. Sunil",
""
],
[
"Hashim",
"Talha",
""
],
[
"Jacob",
"Dalu",
""
],
[
"Mathew",
"Rogers",
""
],
[
"Rajendraprasad",
"Deepak",
""
],
[
"Singh",
"Nitin",
""
]
] |
new_dataset
| 0.979163 |
2105.11292
|
Tatsuya Iwase Ph.D.
|
Tatsuya Iwase, Sebastian Stein, Enrico H. Gerding
|
A Polynomial-time, Truthful, Individually Rational and Budget Balanced
Ridesharing Mechanism
| null |
Proceedings of the International Joint Conference on Artificial
Intelligence (IJCAI), 2021
| null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ridesharing has great potential to improve transportation efficiency while
reducing congestion and pollution. To realize this potential, mechanisms are
needed that allocate vehicles optimally and provide the right incentives to
riders. However, many existing approaches consider restricted settings (e.g.,
only one rider per vehicle or a common origin for all riders). Moreover, naive
applications of standard approaches, such as the Vickrey-Clarke-Groves or
greedy mechanisms, cannot achieve a polynomial-time, truthful, individually
rational and budget balanced mechanism. To address this, we formulate a general
ridesharing problem and apply mechanism design to develop a novel mechanism
which satisfies all four properties and whose social cost is within 8.6% of the
optimal on average.
|
[
{
"version": "v1",
"created": "Fri, 21 May 2021 08:15:26 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Jun 2021 11:32:51 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Jan 2023 03:16:37 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Iwase",
"Tatsuya",
""
],
[
"Stein",
"Sebastian",
""
],
[
"Gerding",
"Enrico H.",
""
]
] |
new_dataset
| 0.986999 |
2109.02811
|
Heeseung Bang
|
Raymond M. Zayas, Logan E. Beaver, Behdad Chalaki, Heeseung Bang,
Andreas A. Malikopoulos
|
A Digital Smart City for Emerging Mobility Systems
|
6 pages, 8 figures
|
IEEE 2nd International Conference on Digital Twins and Parallel
Intelligence (DTPI), 2022
|
10.1109/DTPI55838.2022.9998963
| null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The increasing demand for emerging mobility systems with connected and
automated vehicles has imposed the necessity for quality testing environments
to support their development. In this paper, we introduce a Unity-based virtual
simulation environment for emerging mobility systems, called the Information
and Decision Science Lab's Scaled Smart Digital City (IDS 3D City), intended to
operate alongside its physical peer and its established control framework. By
utilizing the Robot Operating System, AirSim, and Unity, we constructed a
simulation environment capable of iteratively designing experiments
significantly faster than is possible in a physical testbed. This
environment provides an intermediate step to validate the effectiveness of our
control algorithms prior to their implementation in the physical testbed. The
IDS 3D City also enables us to demonstrate that our control algorithms work
independently of the underlying vehicle dynamics, as the vehicle dynamics
introduced by AirSim operate at a different scale than our scaled smart city.
Finally, we demonstrate the behavior of our digital environment by performing
an experiment in both the virtual and physical environments and comparing their
outputs.
|
[
{
"version": "v1",
"created": "Tue, 7 Sep 2021 01:55:47 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Sep 2021 13:30:08 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Jan 2023 02:29:22 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Zayas",
"Raymond M.",
""
],
[
"Beaver",
"Logan E.",
""
],
[
"Chalaki",
"Behdad",
""
],
[
"Bang",
"Heeseung",
""
],
[
"Malikopoulos",
"Andreas A.",
""
]
] |
new_dataset
| 0.99799 |
2112.12180
|
Michal Balazia
|
Tanay Agrawal, Dhruv Agarwal, Michal Balazia, Neelabh Sinha, Francois
Bremond
|
Multimodal Personality Recognition using Cross-Attention Transformer and
Behaviour Encoding
|
Preprint. Final paper accepted at the 17th International Conference
on Computer Vision Theory and Applications (VISAPP), virtual, February, 2022.
8 pages
| null |
10.5220/0010841400003124
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Personality computing and affective computing have gained recent interest in
many research areas. The datasets for the task generally have multiple
modalities like video, audio, language and bio-signals. In this paper, we
propose a flexible model for the task which exploits all available data. The
task involves complex relations, and to avoid using a large model specifically
for video processing, we propose the use of behaviour encoding, which boosts
performance with minimal change to the model. Cross-attention using
transformers has become popular in recent times and is utilised for fusion of
different modalities. Since long-term relations may exist, breaking the input
into chunks is not desirable; thus, the proposed model processes the entire
input together. Our experiments show the importance of each of the above
contributions.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 19:14:55 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2022 22:18:25 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Jan 2023 15:01:11 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Agrawal",
"Tanay",
""
],
[
"Agarwal",
"Dhruv",
""
],
[
"Balazia",
"Michal",
""
],
[
"Sinha",
"Neelabh",
""
],
[
"Bremond",
"Francois",
""
]
] |
new_dataset
| 0.994415 |
2207.09086
|
Haitian Zeng
|
Haitian Zeng, Xin Yu, Jiaxu Miao, Yi Yang
|
MHR-Net: Multiple-Hypothesis Reconstruction of Non-Rigid Shapes from 2D
Views
|
Accepted to ECCV 2022; code: https://github.com/haitianzeng/MHR-Net
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose MHR-Net, a novel method for recovering Non-Rigid Shapes from
Motion (NRSfM). MHR-Net aims to find a set of reasonable reconstructions for a
2D view, and it also selects the most likely reconstruction from the set. To
deal with the challenging unsupervised generation of non-rigid shapes, we
develop a new Deterministic Basis and Stochastic Deformation scheme in MHR-Net.
The non-rigid shape is first expressed as the sum of a coarse shape basis and a
flexible shape deformation, then multiple hypotheses are generated with
uncertainty modeling of the deformation part. MHR-Net is optimized with
reprojection loss on the basis and the best hypothesis. Furthermore, we design
a new Procrustean Residual Loss, which reduces the rigid rotations between
similar shapes and further improves the performance. Experiments show that
MHR-Net achieves state-of-the-art reconstruction accuracy on Human3.6M, SURREAL
and 300-VW datasets.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 05:47:03 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 01:27:43 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Zeng",
"Haitian",
""
],
[
"Yu",
"Xin",
""
],
[
"Miao",
"Jiaxu",
""
],
[
"Yang",
"Yi",
""
]
] |
new_dataset
| 0.972875 |
2207.13866
|
Wen Lu
|
Zhiqi Zhang, Wen Lu, Jinshan Cao, Guangqi Xie
|
MKANet: A Lightweight Network with Sobel Boundary Loss for Efficient
Land-cover Classification of Satellite Remote Sensing Imagery
| null |
Remote Sens. 2022, 14(18), 4514
|
10.3390/rs14184514
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Land cover classification is a multi-class segmentation task to classify each
pixel into a certain natural or man-made category of the earth surface, such as
water, soil, natural vegetation, crops, and human infrastructure. Limited by
hardware computational resources and memory capacity, most existing studies
preprocessed original remote sensing images by down sampling or cropping them
into small patches less than 512*512 pixels before sending them to a deep
neural network. However, down sampling images incurs spatial detail loss,
renders small segments hard to discriminate, and reverses the spatial
resolution progress obtained by decades of effort. Cropping images
into small patches causes a loss of long-range context information, and
restoring the predicted results to their original size brings extra latency. In
response to the above weaknesses, we present an efficient lightweight semantic
segmentation network termed MKANet. Aimed at the characteristics of top view
high-resolution remote sensing imagery, MKANet utilizes sharing kernels to
simultaneously and equally handle ground segments of inconsistent scales, and
also employs a parallel and shallow architecture to boost inference speed and
readily support image patches more than 10X larger. To enhance boundary and
small segments discrimination, we also propose a method that captures category
impurity areas, exploits boundary information and exerts an extra penalty on
boundaries and small segment misjudgment. Both visual interpretations and
quantitative metrics of extensive experiments demonstrate that MKANet acquires
state-of-the-art accuracy on two land-cover classification datasets and infers
2X faster than other competitive lightweight networks. All these merits
highlight the potential of MKANet in practical applications.
|
[
{
"version": "v1",
"created": "Thu, 28 Jul 2022 03:29:08 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Zhang",
"Zhiqi",
""
],
[
"Lu",
"Wen",
""
],
[
"Cao",
"Jinshan",
""
],
[
"Xie",
"Guangqi",
""
]
] |
new_dataset
| 0.999743 |
2210.00888
|
Sungho Suh
|
Mengxi Liu, Sungho Suh, Bo Zhou, Agnes Gruenerbl and Paul Lukowicz
|
Smart-Badge: A wearable badge with multi-modal sensors for kitchen
activity recognition
|
Presented at HASCA workshop of Ubicomp2022
| null | null | null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human health is closely associated with their daily behavior and environment.
However, keeping a healthy lifestyle is still challenging for most people as it
is difficult to recognize their living behaviors and identify their surrounding
situations to take appropriate action. Human activity recognition is a
promising approach to building a behavior model of users, by which users can
get feedback about their habits and be encouraged to develop a healthier
lifestyle. In this paper, we present a smart light wearable badge with six
kinds of sensors, including an infrared array sensor MLX90640 offering
privacy-preserving, low-cost, and non-invasive features, to recognize daily
activities in a realistic unmodified kitchen environment. A multi-channel
convolutional neural network (MC-CNN) based on data and feature fusion methods
is applied to classify 14 human activities associated with potentially
unhealthy habits. Meanwhile, we evaluate the impact of the infrared array
sensor on the recognition accuracy of these activities. We demonstrate the
performance of the proposed work to detect the 14 activities performed by ten
volunteers with an average accuracy of 92.44 % and an F1 score of 88.27 %.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 12:52:46 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 14:08:51 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Liu",
"Mengxi",
""
],
[
"Suh",
"Sungho",
""
],
[
"Zhou",
"Bo",
""
],
[
"Gruenerbl",
"Agnes",
""
],
[
"Lukowicz",
"Paul",
""
]
] |
new_dataset
| 0.99971 |
2210.12817
|
Jack Stade
|
Jack Stade
|
The Point-Boundary Art Gallery Problem is $\exists\mathbb{R}$-hard
|
31 pages, 31 figures
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
We resolve the complexity of the point-boundary variant of the art gallery
problem, showing that it is $\exists\mathbb{R}$-complete, meaning that it is
equivalent under polynomial time reductions to deciding whether a system of
polynomial equations has a real solution.
Introduced by Victor Klee in 1973, the art gallery problem concerns finding
configurations of \emph{guards} which together can see every point inside of an
\emph{art gallery} shaped like a polygon. The original version of this problem
has previously been shown to be $\exists\mathbb{R}$-hard, but until now the
complexity of the variant where guards only need to guard the walls of the art
gallery was an open problem.
Our results can also be used to provide a simpler proof of the
$\exists\mathbb{R}$-hardness of the point-point art gallery problem. In
particular, we show how the algebraic constraints describing a polynomial
system of equations can occur somewhat naturally in an art gallery setting.
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 18:30:59 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Jan 2023 22:49:52 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Stade",
"Jack",
""
]
] |
new_dataset
| 0.978186 |
2212.14750
|
Thomas Kreutz
|
Thomas Kreutz, Max M\"uhlh\"auser, and Alejandro Sanchez Guinea
|
Unsupervised 4D LiDAR Moving Object Segmentation in Stationary Settings
with Multivariate Occupancy Time Series
|
Preprint, Paper has been accepted at WACV2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we address the problem of unsupervised moving object
segmentation (MOS) in 4D LiDAR data recorded from a stationary sensor, where no
ground truth annotations are involved. Deep learning-based state-of-the-art
methods for LiDAR MOS strongly depend on annotated ground truth data, which is
expensive to obtain and scarce in existence. To close this gap in the
stationary setting, we propose a novel 4D LiDAR representation based on
multivariate time series that relaxes the problem of unsupervised MOS to a time
series clustering problem. More specifically, we propose modeling the change in
occupancy of a voxel by a multivariate occupancy time series (MOTS), which
captures spatio-temporal occupancy changes on the voxel level and its
surrounding neighborhood. To perform unsupervised MOS, we train a neural
network in a self-supervised manner to encode MOTS into voxel-level feature
representations, which can be partitioned by a clustering algorithm into moving
or stationary. Experiments on stationary scenes from the Raw KITTI dataset show
that our fully unsupervised approach achieves performance that is comparable to
that of supervised state-of-the-art approaches.
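As one plausible reading of the representation described above (not the authors' code), here is a minimal sketch that builds, for a single voxel, a multivariate time series from its own occupancy and that of its 6-connected neighbors over T frames.

```python
# Minimal sketch: build a multivariate occupancy time series for one voxel from
# a boolean occupancy grid of shape (T, X, Y, Z); neighborhood is 6-connected.
# Illustrative reading of the abstract, not the authors' definition; assumes
# the voxel is not on the grid boundary.
import numpy as np

def occupancy_time_series(grid: np.ndarray, x: int, y: int, z: int) -> np.ndarray:
    offsets = [(0, 0, 0), (1, 0, 0), (-1, 0, 0),
               (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    series = [grid[:, x + dx, y + dy, z + dz] for dx, dy, dz in offsets]
    return np.stack(series, axis=1).astype(np.float32)   # shape (T, 7)
```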
|
[
{
"version": "v1",
"created": "Fri, 30 Dec 2022 14:48:14 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 12:45:24 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Kreutz",
"Thomas",
""
],
[
"Mühlhäuser",
"Max",
""
],
[
"Guinea",
"Alejandro Sanchez",
""
]
] |
new_dataset
| 0.95803 |
2301.02915
|
Robert Schilling
|
Robert Schilling, Pascal Nasahl, Martin Unterguggenberger, Stefan
Mangard
|
SFP: Providing System Call Flow Protection against Software and Fault
Attacks
|
Published at HASP22
| null | null | null |
cs.CR cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
With the improvements in computing technologies, edge devices in the
Internet-of-Things have become more complex. The enabler technology for these
complex systems are powerful application core processors with operating system
support, such as Linux. While the isolation of applications through the
operating system increases the security, the interface to the kernel poses a
new threat. Different attack vectors, including fault attacks and memory
vulnerabilities, exploit the kernel interface to escalate privileges and take
over the system.
In this work, we present SFP, a mechanism to protect the execution of system
calls against software and fault attacks providing integrity to user-kernel
transitions. SFP provides system call flow integrity by a two-step linking
approach, which links the system call and its origin to the state of
control-flow integrity. A second linking step within the kernel ensures that
the right system call is executed in the kernel. Combining both linking steps
ensures that only the correct system call is executed at the right location in
the program and cannot be skipped. Furthermore, SFP provides dynamic CFI
instrumentation and a new CFI checking policy at the edge of the kernel to
verify the control-flow state of user programs before entering the kernel. We
integrated SFP into FIPAC, a CFI protection scheme exploiting ARM pointer
authentication. Our prototype is based on a custom LLVM-based toolchain with an
instrumented runtime library combined with a custom Linux kernel to protect
system calls. The evaluation of micro- and macrobenchmarks based on SPEC 2017
show an average runtime overhead of 1.9 % and 20.6 %, which is only an increase
of 1.8 % over plain control-flow protection. This small impact on the
performance shows the efficiency of SFP for protecting all system calls and
providing integrity for the user-kernel transitions.
|
[
{
"version": "v1",
"created": "Sat, 7 Jan 2023 18:35:08 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2023 12:10:34 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Schilling",
"Robert",
""
],
[
"Nasahl",
"Pascal",
""
],
[
"Unterguggenberger",
"Martin",
""
],
[
"Mangard",
"Stefan",
""
]
] |
new_dataset
| 0.998377 |
2301.04684
|
Michael Bennington
|
Michael J. Bennington, Tuo Wang, Jiaguo Yin, Sarah Bergbreiter, Carmel
Majidi, Victoria A. Webster-Wood
|
Design and Characterization of Viscoelastic McKibben Actuators with
Tunable Force-Velocity Curves
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible. (Submitted to RoboSoft 2023)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The McKibben pneumatic artificial muscle is a commonly studied soft robotic
actuator, and its quasistatic force-length properties have been well
characterized and modeled. However, its damping and force-velocity properties
are less well studied. Understanding these properties will allow for more
robust dynamic modeling of soft robotic systems. The force-velocity response of
these actuators is of particular interest because these actuators are often
used as hardware models of skeletal muscles for bioinspired robots, and this
force-velocity relationship is fundamental to muscle physiology. In this work,
we investigated the force-velocity response of McKibben actuators and the
ability to tune this response through the use of viscoelastic polymer sheaths.
These viscoelastic McKibben actuators (VMAs) were characterized using
iso-velocity experiments inspired by skeletal muscle physiology tests. A
simplified 1D model of the actuators was developed to connect the shape of the
force-velocity curve to the material parameters of the actuator and sheaths.
Using these viscoelastic materials, we were able to modulate the shape and
magnitude of the actuators' force-velocity curves, and using the developed
model, these changes were connected back to the material properties of the
sheaths.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 19:22:12 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Bennington",
"Michael J.",
""
],
[
"Wang",
"Tuo",
""
],
[
"Yin",
"Jiaguo",
""
],
[
"Bergbreiter",
"Sarah",
""
],
[
"Majidi",
"Carmel",
""
],
[
"Webster-Wood",
"Victoria A.",
""
]
] |
new_dataset
| 0.985493 |
2301.04725
|
Georgios Drakopoulos Dr
|
Georgios Drakopoulos and Michail Marountas and Xenophon Liapakis and
Giannis Tzimas and Phivos Mylonas and Spyros Sioutas
|
Blockchain For Mobile Health Applications: Acceleration With GPU
Computing
| null | null |
10.1007/978-3-030-32622-7_36
| null |
cs.CR cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchain is a linearly linked, distributed, and very robust data structure.
Originally proposed as part of the Bitcoin distributed stack, it found a number
of applications in a number of fields, most notably in smart contracts, social
media, secure IoT, and cryptocurrency mining. It ensures data integrity by
distributing strongly encrypted data in widely redundant segments. Each new
insertion requires verification and approval by the majority of the users of
the blockchain. Both encryption and verification are computationally intensive
tasks which cannot be solved with ordinary off-the-shelf CPUs. This has
resulted in a renewed scientific interest in secure distributed communication
and coordination protocols. Mobile health applications are growing
increasingly popular and have the enormous advantage of timely diagnosis of
certain conditions. However, privacy concerns have been raised, as mobile health
applications by default have access to highly sensitive personal data. This
chapter presents concisely how blockchain can be applied to mobile health
applications in order to enhance privacy.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 21:30:43 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Drakopoulos",
"Georgios",
""
],
[
"Marountas",
"Michail",
""
],
[
"Liapakis",
"Xenophon",
""
],
[
"Tzimas",
"Giannis",
""
],
[
"Mylonas",
"Phivos",
""
],
[
"Sioutas",
"Spyros",
""
]
] |
new_dataset
| 0.995681 |
2301.04751
|
Gerald Artner
|
Gerald Artner
|
Artificial Intelligence Generated Coins for Size Comparison
| null |
Mitteilungen der \"Osterreichischen Numismatischen Gesellschaft,
vol. 62, no. 2, pp. 9-16, 2022
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Authors of scientific articles use coins in photographs as a size reference
for objects. For this purpose, coins are placed next to objects when taking the
photo. In this letter we propose a novel method that uses artificial
intelligence (AI) generated images of coins to provide a size reference in
photos. The newest generation of AI image generators is able to quickly generate realistic
high-quality images from textual descriptions. With the proposed method no
physical coin is required while taking photos. Coins can be added to photos
that contain none. Furthermore, we show how the coin motif can be matched to
the object.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 23:10:38 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Artner",
"Gerald",
""
]
] |
new_dataset
| 0.99502 |
2301.04753
|
Hadi Reisizadeh
|
Hadi Reisizadeh, Mohammad Ali Maddah-Ali, and Soheil Mohajer
|
Cache-Aided $K$-User Broadcast Channels with State Information at
Receivers
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We study a $K$-user coded-caching broadcast problem in a joint source-channel
coding framework. The transmitter observes a database of files that are being
generated at a certain rate per channel use, and each user has a cache, which
can store a fixed fraction of the generated symbols. In the delivery phase, the
transmitter broadcasts a message so that the users can decode their desired
files using the received signal and their cache content. The communication
between the transmitter and the receivers happens over a (deterministic)
\textit{time-varying} erasure broadcast channel, and the channel state
information is only available to the users. We characterize the maximum
achievable source rate for the $2$-user and the degraded $K$-user problems. We
provide an upper bound for any caching strategy's achievable source rates.
Finally, we present a linear programming formulation to show that the upper
bound is not a sharp characterization. Closing the gap between the achievable
rate and the optimum rate remains open.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 23:24:00 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Reisizadeh",
"Hadi",
""
],
[
"Maddah-Ali",
"Mohammad Ali",
""
],
[
"Mohajer",
"Soheil",
""
]
] |
new_dataset
| 0.966166 |
2301.04770
|
Yiren Liu
|
Liri Fang, Lan Li, Yiren Liu, Vetle I. Torvik, Bertram Lud\"ascher
|
KAER: A Knowledge Augmented Pre-Trained Language Model for Entity
Resolution
| null | null | null | null |
cs.CL cs.DB cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Entity resolution has been an essential and well-studied task in data
cleaning research for decades. Existing work has discussed the feasibility of
utilizing pre-trained language models to perform entity resolution and achieved
promising results. However, few works have discussed injecting domain knowledge
to improve the performance of pre-trained language models on entity resolution
tasks. In this study, we propose Knowledge Augmented Entity Resolution (KAER),
a novel framework for augmenting pre-trained language models with
external knowledge for entity resolution. We discuss the results of utilizing
different knowledge augmentation and prompting methods to improve entity
resolution performance. Our model improves on Ditto, the existing
state-of-the-art entity resolution method. In particular, 1) KAER performs more
robustly and achieves better results on "dirty data", and 2) with more general
knowledge injection, KAER outperforms the existing baseline models on the
textual dataset and dataset from the online product domain. 3) KAER achieves
competitive results on highly domain-specific datasets, such as citation
datasets, requiring the injection of expert knowledge in future work.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 00:15:40 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Fang",
"Liri",
""
],
[
"Li",
"Lan",
""
],
[
"Liu",
"Yiren",
""
],
[
"Torvik",
"Vetle I.",
""
],
[
"Ludäscher",
"Bertram",
""
]
] |
new_dataset
| 0.987901 |
2301.04841
|
Liz Izhikevich
|
Liz Izhikevich, Renata Teixeira, Zakir Durumeric
|
LZR: Identifying Unexpected Internet Services
|
In 30th USENIX Security Symposium, 2021
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Internet-wide scanning is a commonly used research technique that has helped
uncover real-world attacks, find cryptographic weaknesses, and understand both
operator and miscreant behavior. Studies that employ scanning have largely
assumed that services are hosted on their IANA-assigned ports, overlooking the
study of services on unusual ports. In this work, we investigate where Internet
services are deployed in practice and evaluate the security posture of services
on unexpected ports. We show protocol deployment is more diffuse than
previously believed and that protocols run on many additional ports beyond
their primary IANA-assigned port. For example, only 3% of HTTP and 6% of TLS
services run on ports 80 and 443, respectively. Services on non-standard ports
are more likely to be insecure, which results in studies dramatically
underestimating the security posture of Internet hosts. Building on our
observations, we introduce LZR ("Laser"), a system that identifies 99% of
identifiable unexpected services in five handshakes and dramatically reduces
the time needed to perform application-layer scans on ports with few responsive
expected services (e.g., 5500% speedup on 27017/MongoDB). We conclude with
recommendations for future studies.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 06:58:59 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Izhikevich",
"Liz",
""
],
[
"Teixeira",
"Renata",
""
],
[
"Durumeric",
"Zakir",
""
]
] |
new_dataset
| 0.99653 |
2301.04862
|
Mohammad Ghafari
|
Mohammad Mehdi Pourhashem Kallehbasti and Mohammad Ghafari
|
Naturalistic Static Program Analysis
|
The 30th IEEE International Conference on Software Analysis,
Evolution and Reengineering, March 21st-24th, 2023
| null | null | null |
cs.PL cs.CR cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Static program analysis development is a non-trivial and time-consuming task.
We present a framework through which developers can define static program
analyses in natural language. We show the application of this framework to
identify cryptography misuses in Java programs, and we discuss how it
facilitates static program analysis development for developers.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 08:13:43 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Kallehbasti",
"Mohammad Mehdi Pourhashem",
""
],
[
"Ghafari",
"Mohammad",
""
]
] |
new_dataset
| 0.997217 |
2301.04882
|
Ke Zhang
|
Ke Zhang, Xiahai Zhuang
|
ZScribbleSeg: Zen and the Art of Scribble Supervised Medical Image
Segmentation
|
31 pages, 10 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Curating a large scale fully-annotated dataset can be both labour-intensive
and expertise-demanding, especially for medical images. To alleviate this
problem, we propose to utilize solely scribble annotations for weakly
supervised segmentation. Existing solutions mainly leverage selective losses
computed solely on annotated areas and generate pseudo gold standard
segmentation by propagating labels to adjacent areas. However, these methods
could suffer from the inaccurate and sometimes unrealistic pseudo segmentation
due to the insufficient supervision and incomplete shape features. Different
from previous efforts, we first investigate the principle of ''good scribble
annotations'', which leads to efficient scribble forms via supervision
maximization and randomness simulation. Furthermore, we introduce
regularization terms to encode the spatial relationship and shape prior, where
a new formulation is developed to estimate the mixture ratios of label classes.
These ratios are critical in identifying the unlabeled pixels for each class
and correcting erroneous predictions, thus the accurate estimation lays the
foundation for the incorporation of spatial prior. Finally, we integrate the
efficient scribble supervision with the prior into a unified framework, denoted
as ZScribbleSeg, and apply the method to multiple scenarios. Leveraging only
scribble annotations, ZScribbleSeg sets a new state of the art on four
segmentation tasks using the ACDC, MSCMRseg, MyoPS and PPSS datasets.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 09:00:40 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Zhang",
"Ke",
""
],
[
"Zhuang",
"Xiahai",
""
]
] |
new_dataset
| 0.999012 |
2301.04883
|
Ryota Tanaka
|
Ryota Tanaka, Kyosuke Nishida, Kosuke Nishida, Taku Hasegawa, Itsumi
Saito, Kuniko Saito
|
SlideVQA: A Dataset for Document Visual Question Answering on Multiple
Images
|
Accepted by AAAI2023
| null | null | null |
cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visual question answering on document images that contain textual, visual,
and layout information, called document VQA, has received much attention
recently. Although many datasets have been proposed for developing document VQA
systems, most of the existing datasets focus on understanding the content
relationships within a single image and not across multiple images. In this
study, we propose a new multi-image document VQA dataset, SlideVQA, containing
2.6k+ slide decks composed of 52k+ slide images and 14.5k questions about a
slide deck. SlideVQA requires complex reasoning, including single-hop,
multi-hop, and numerical reasoning, and also provides annotated arithmetic
expressions of numerical answers for enhancing the ability of numerical
reasoning. Moreover, we developed a new end-to-end document VQA model that
treats evidence selection and question answering in a unified
sequence-to-sequence format. Experiments on SlideVQA show that our model
outperformed existing state-of-the-art QA models, but that it still has a large
gap behind human performance. We believe that our dataset will facilitate
research on document VQA.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 09:00:42 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Tanaka",
"Ryota",
""
],
[
"Nishida",
"Kyosuke",
""
],
[
"Nishida",
"Kosuke",
""
],
[
"Hasegawa",
"Taku",
""
],
[
"Saito",
"Itsumi",
""
],
[
"Saito",
"Kuniko",
""
]
] |
new_dataset
| 0.999876 |
2301.04888
|
Maximilian Sch\"offel
|
Maximilian Sch\"offel, Johannes Feldmann, Norbert Wehn
|
Code-based Cryptography in IoT: A HW/SW Co-Design of HQC
|
to be published in Proceedings of the 8th IEEE World Forum on the
Internet of Things
| null | null | null |
cs.CR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in quantum computing pose a serious threat to the security of
widely used public-key cryptosystems. Thus, new post-quantum cryptographic
algorithms have been proposed as part of the associated US NIST process to
enable secure, encrypted communication in the age of quantum computing. Many
hardware accelerators for structured lattice-based algorithms have already been
published to meet the strict power, area and latency requirements of low-power
IoT edge devices. However, the security of these algorithms is still uncertain.
Currently, many new attacks against the lattice structure are investigated to
judge on their security. In contrast, code-based algorithms, which rely on
deeply explored security metrics and are appealing candidates in the NIST
process, have not yet been investigated to the same depth in the context of IoT
due to the computational complexity and memory footprint of state-of-the-art
software implementations.
In this paper, we present to the best of our knowledge the first HW/SW
co-design based implementation of the code-based Hamming Quasi Cyclic
Key-Encapsulation Mechanism. We profile and evaluate this algorithm in order to
explore the trade-off between software optimizations, tightly coupled hardware
acceleration by instruction set extension and modular, loosely coupled
accelerators. We provide detailed results on the energy consumption and
performance of our design and compare it to existing implementations of
lattice- and code-based algorithms. The design was implemented in two
technologies: FPGA and ASIC. Our results show that code-based algorithms are
valid alternatives in low-power IoT from an implementation perspective.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 09:05:06 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Schöffel",
"Maximilian",
""
],
[
"Feldmann",
"Johannes",
""
],
[
"Wehn",
"Norbert",
""
]
] |
new_dataset
| 0.991717 |
2301.04962
|
Hossein Hassani
|
Sazan Salar and Hossein Hassani
|
A Dataset of Kurdish (Sorani) Named Entities -- An Amendment to
Kurdish-BLARK Named Entities
|
The dataset is available at
https://github.com/KurdishBLARK/KurdishNamedEntities
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Named Entity Recognition (NER) is one of the essential applications of
Natural Language Processing (NLP). It is also an instrument that plays a
significant role in many other NLP applications, such as Machine Translation
(MT), Information Retrieval (IR), and Part of Speech Tagging (POST). Kurdish is
an under-resourced language from the NLP perspective. Particularly, in all the
categories, the lack of NER resources hinders other aspects of Kurdish
processing. In this work, we present a data set that covers several categories
of NEs in Kurdish (Sorani). The dataset is a significant amendment to a
previously developed dataset in the Kurdish BLARK (Basic Language Resource
Kit). It covers 11 categories and 33261 entries in total. The dataset is
publicly available for non-commercial use under CC BY-NC-SA 4.0 license at
https://kurdishblark.github.io/.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 12:13:44 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Salar",
"Sazan",
""
],
[
"Hassani",
"Hossein",
""
]
] |
new_dataset
| 0.999776 |
2301.04976
|
Andreas Haahr Larsen PhD
|
Andreas Haahr Larsen, Emre Brookes, Martin Cramer Pedersen and Jacob
Judas Kain Kirkensgaard
|
Shape2SAS -- a web application to simulate small-angle scattering data
and pair distance distributions from user-defined shapes
| null | null | null | null |
cs.GR physics.bio-ph physics.data-an
|
http://creativecommons.org/licenses/by/4.0/
|
Shape2SAS is a web application that allows researchers and students to build
intuition and understanding of small-angle scattering. It is available at
https://somo.chem.utk.edu/shape2sas. The user defines a model of arbitrary
shape by combining geometrical subunits, and Shape2SAS then calculates and
displays the scattering intensity, the pair distance distribution as well as a
visualization of the user-defined shape. Simulated data with realistic noise
are also generated. We demonstrate how Shape2SAS can calculate and display the
different scattering patterns for various geometrical shapes, such as spheres
and cylinders. We also demonstrate how the effect of structure factors can be
visualized. Finally, we show how multi-contrast particles can readily be
generated, and how the calculated scattering may be used to validate and
visualize analytical models generated in analysis software for fitting
small-angle scattering data.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 12:37:11 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Larsen",
"Andreas Haahr",
""
],
[
"Brookes",
"Emre",
""
],
[
"Pedersen",
"Martin Cramer",
""
],
[
"Kirkensgaard",
"Jacob Judas Kain",
""
]
] |
new_dataset
| 0.998712 |
2301.05027
|
Chengzhi Wu
|
Chengzhi Wu, Linxi Qiu, Kanran Zhou, Julius Pfrommer and J\"urgen
Beyerer
|
SynMotor: A Benchmark Suite for Object Attribute Regression and
Multi-task Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop a novel benchmark suite including both a 2D
synthetic image dataset and a 3D synthetic point cloud dataset. Our work is a
sub-task in the framework of a remanufacturing project, in which small electric
motors are used as fundamental objects. Apart from the given detection,
classification, and segmentation annotations, the key objects also have
multiple learnable attributes with ground truth provided. This benchmark can be
used for computer vision tasks including 2D/3D detection, classification,
segmentation, and multi-attribute learning. It is worth mentioning that most
attributes of the motors are quantified as continuously variable rather than
binary, which makes our benchmark well-suited for the less explored regression
tasks. In addition, appropriate evaluation metrics are adopted or developed for
each task and promising baseline results are provided. We hope this benchmark
can stimulate more research efforts on the sub-domain of object attribute
learning and multi-task learning in the future.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 18:27:29 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Wu",
"Chengzhi",
""
],
[
"Qiu",
"Linxi",
""
],
[
"Zhou",
"Kanran",
""
],
[
"Pfrommer",
"Julius",
""
],
[
"Beyerer",
"Jürgen",
""
]
] |
new_dataset
| 0.999686 |
2301.05048
|
Nils Weissgerber
|
Nils Weissgerber, Thorsten Jenke, Elmar Padilla, Lilli Bruckschen
|
Open SESAME: Fighting Botnets with Seed Reconstructions of Domain
Generation Algorithms
|
12 pages, 3 pages appendix, 13 figures
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An important aspect of many botnets is their capability to generate
pseudorandom domain names using Domain Generation Algorithms (DGAs). A cyber
criminal can register such domains to establish periodically changing
rendezvous points with the bots. DGAs make use of seeds to generate sets of
domains. Seeds can easily be changed in order to generate entirely new groups
of domains while using the same underlying algorithm. While this requires very
little manual effort for an adversary, security specialists typically have to
manually reverse engineer new malware strains to reconstruct the seeds. Only
when the seed and DGA are known, past and future domains can be generated,
efficiently attributed, blocked, sinkholed or used for a take-down. Common
counters in the literature consist of databases or Machine Learning (ML) based
detectors to keep track of past and future domains of known DGAs and to
identify DGA-generated domain names, respectively. However, database-based
approaches cannot detect domains generated by new DGAs, and ML approaches
cannot generate future domain names. In this paper, we introduce SESAME, a system
that combines the two above-mentioned approaches and contains a module for
automatic Seed Reconstruction, which is, to our knowledge, the first of its
kind. It is used to automatically classify domain names, rate their novelty,
and determine the seeds of the underlying DGAs. SESAME consists of multiple
DGA-specific Seed Reconstructors and is designed to work purely based on domain
names, as they are easily obtainable from observing the network traffic. We
evaluated our approach on 20.8 gigabytes of DNS-lookups. Thereby, we identified
17 DGAs, of which 4 were entirely new to us.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 14:25:31 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Weissgerber",
"Nils",
""
],
[
"Jenke",
"Thorsten",
""
],
[
"Padilla",
"Elmar",
""
],
[
"Bruckschen",
"Lilli",
""
]
] |
new_dataset
| 0.99269 |
2301.05070
|
Daniel Eldan
|
Eldan R. Daniel
|
Wildfire Smoke Detection with Computer Vision
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Wildfires are becoming more frequent and their effects more devastating every
day. Climate change has directly and indirectly affected their occurrence, and
social phenomena have increased people's vulnerability to them. Consequently,
and given that wildfires are inevitable, it is important to have early warning
systems that allow a timely and effective response.
  Artificial intelligence, machine learning and Computer Vision offer an
effective and achievable alternative for the timely detection of wildfires,
reducing the risk of disasters. YOLOv7 offers a simple, fast, and efficient
algorithm for training object detection models, which can be used for the early
detection of smoke columns in the initial stages of wildfires. The developed
model showed promising results, achieving an F1 score of 0.74 at a confidence
level of 0.298; that is, a higher score was obtained at lower confidence
levels, i.e., under conditions that favor false positives. The metrics
demonstrate the resilience and effectiveness of the model in
detecting smoke columns.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 15:12:56 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Daniel",
"Eldan R.",
""
]
] |
new_dataset
| 0.998805 |
2301.05108
|
Ibrahim Abdelaziz
|
Wenting Zhao, Ibrahim Abdelaziz, Julian Dolby, Kavitha Srinivas,
Mossad Helali, Essam Mansour
|
Serenity: Library Based Python Code Analysis for Code Completion and
Automated Machine Learning
| null | null | null | null |
cs.PL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamically typed languages such as Python have become very popular. Among
other strengths, Python's dynamic nature and its straightforward linking to
native code have made it the de-facto language for many research areas such as
Artificial Intelligence. This flexibility, however, makes static analysis very
hard. While creating a sound, or a soundy, analysis for Python remains an open
problem, we present in this work Serenity, a framework for static analysis of
Python that turns out to be sufficient for some tasks. The Serenity framework
exploits two basic mechanisms: (a) reliance on dynamic dispatch at the core of
language translation, and (b) extreme abstraction of libraries, to generate an
abstraction of the code. We demonstrate the efficiency and usefulness of
Serenity's analysis in two applications: code completion and automated machine
learning. In these two applications, we demonstrate that such analysis has a
strong signal, and can be leveraged to establish state-of-the-art performance,
comparable to neural models and dynamic analysis respectively.
|
[
{
"version": "v1",
"created": "Thu, 5 Jan 2023 02:09:08 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Zhao",
"Wenting",
""
],
[
"Abdelaziz",
"Ibrahim",
""
],
[
"Dolby",
"Julian",
""
],
[
"Srinivas",
"Kavitha",
""
],
[
"Helali",
"Mossad",
""
],
[
"Mansour",
"Essam",
""
]
] |
new_dataset
| 0.990169 |
2301.05137
|
Vitaliy Kurlin
|
Olga Anosova and Vitaliy Kurlin
|
Density functions of periodic sequences of continuous events
|
16 pages, 12 figures, the latest version is maintained at
http://kurlin.org/projects/periodic-geometry/densities-sequences-intervals.pdf.
arXiv admin note: text overlap with arXiv:2205.02226
| null | null | null |
cs.CG math.MG
|
http://creativecommons.org/licenses/by/4.0/
|
Periodic Geometry studies isometry invariants of periodic point sets that are
also continuous under perturbations. The motivations come from periodic
crystals whose structures are determined in a rigid form but any minimal cells
can discontinuously change due to small noise in measurements. For any integer
k>=0, the density function of a periodic set S was previously defined as the
fractional volume of all k-fold intersections (within a minimal cell) of balls
that have a variable radius t and centers at all points of S. This paper
introduces the density functions for periodic sets of points with different
initial radii motivated by atomic radii of chemical elements and by continuous
events occupying disjoint intervals in time series. The contributions are
explicit descriptions of the densities for periodic sequences of intervals. The
new densities are strictly stronger and distinguish periodic sequences that
have identical densities in the case of zero radii.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 16:44:29 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Anosova",
"Olga",
""
],
[
"Kurlin",
"Vitaliy",
""
]
] |
new_dataset
| 0.997667 |
2301.05154
|
Jack Urbanek
|
Jack Urbanek and Pratik Ringshia
|
Mephisto: A Framework for Portable, Reproducible, and Iterative
Crowdsourcing
| null | null | null | null |
cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Mephisto, a framework to make crowdsourcing for research more
reproducible, transparent, and collaborative. Mephisto provides abstractions
that cover a broad set of task designs and data collection workflows, and
provides a simple user experience to make best-practices easy defaults. In this
whitepaper we discuss the current state of data collection and annotation in ML
research, establish the motivation for building a shared framework to enable
researchers to create and open-source data collection and annotation tools as
part of their publication, and outline a set of suggested requirements for a
system to facilitate these goals. We then step through our resolution in
Mephisto, explaining the abstractions we use, our design decisions around the
user experience, and share implementation details and where they align with the
original motivations. We also discuss current limitations, as well as future
work towards continuing to deliver on the framework's initial goals. Mephisto
is available as an open source project, and its documentation can be found at
www.mephisto.ai.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 17:14:51 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Urbanek",
"Jack",
""
],
[
"Ringshia",
"Pratik",
""
]
] |
new_dataset
| 0.990295 |
2301.05174
|
Mariya Hendriksen
|
Mariya Hendriksen, Svitlana Vakulenko, Ernst Kuiper, Maarten de Rijke
|
Scene-centric vs. Object-centric Image-Text Cross-modal Retrieval: A
Reproducibility Study
|
18 pages, accepted as a reproducibility paper at ECIR 2023
| null | null | null |
cs.IR cs.CV cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Most approaches to cross-modal retrieval (CMR) focus either on object-centric
datasets, meaning that each document depicts or describes a single object, or
on scene-centric datasets, meaning that each image depicts or describes a
complex scene that involves multiple objects and relations between them. We
posit that a robust CMR model should generalize well across both dataset types.
Despite recent advances in CMR, the reproducibility of the results and their
generalizability across different dataset types has not been studied before. We
address this gap and focus on the reproducibility of the state-of-the-art CMR
results when evaluated on object-centric and scene-centric datasets. We select
two state-of-the-art CMR models with different architectures: (i) CLIP; and
(ii) X-VLM. Additionally, we select two scene-centric datasets, and three
object-centric datasets, and determine the relative performance of the selected
models on these datasets. We focus on reproducibility, replicability, and
generalizability of the outcomes of previously published CMR experiments. We
discover that the experiments are not fully reproducible and replicable.
Besides, the relative performance results partially generalize across
object-centric and scene-centric datasets. On top of that, the scores obtained
on object-centric datasets are much lower than the scores obtained on
scene-centric datasets. For reproducibility and transparency we make our source
code and the trained models publicly available.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 18:00:00 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Hendriksen",
"Mariya",
""
],
[
"Vakulenko",
"Svitlana",
""
],
[
"Kuiper",
"Ernst",
""
],
[
"de Rijke",
"Maarten",
""
]
] |
new_dataset
| 0.997529 |
2301.05218
|
Saurab Dulal
|
Saurab Dulal, Lan Wang
|
NDNSD: Service Publishing and Discovery in NDN
|
MILCOM-2022
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Service discovery is a crucial component in today's massively distributed
applications. In this paper, we propose NDNSD -- a fully distributed and
general-purpose service discovery protocol for Named Data Networking (NDN). By
leveraging NDN's data synchronization capability, NDNSD offers a high-level API
for service publishing and discovery. We present NDNSD's main design features
including hierarchical naming, service information specification, and service
accessibility. We also implemented
two other discovery schemes, one reactive and one proactive, and compared them
with NDNSD. Our evaluation shows that NDNSD achieves (a) lower latency, lower
overhead, and same reliability compared to the reactive scheme, and (b)
comparable latency, lower overhead at larger scale, and higher reliability
compared to the proactive scheme.
|
[
{
"version": "v1",
"created": "Thu, 12 Jan 2023 18:58:24 GMT"
}
] | 2023-01-13T00:00:00 |
[
[
"Dulal",
"Saurab",
""
],
[
"Wang",
"Lan",
""
]
] |
new_dataset
| 0.997905 |
2107.05475
|
Fei Shen
|
Fei Shen, Yi Xie, Jianqing Zhu, Xiaobin Zhu, and Huanqiang Zeng
|
GiT: Graph Interactive Transformer for Vehicle Re-identification
|
Accepted in IEEE TIP 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers are more and more popular in computer vision, which treat an
image as a sequence of patches and learn robust global features from the
sequence. However, pure transformers are not entirely suitable for vehicle
re-identification because vehicle re-identification requires both robust global
features and discriminative local features. For that, a graph interactive
transformer (GiT) is proposed in this paper. In the macro view, a list of GiT
blocks are stacked to build a vehicle re-identification model, where graphs
extract discriminative local features within patches and transformers extract
robust global features among patches. In the micro view, graphs and
transformers interact, bringing effective cooperation between local and global
features. Specifically, the current graph is embedded after the previous
level's graph and transformer, while the current transformer is embedded after
the current graph and the previous level's transformer. In addition to the
interaction between graphs and transformers, the graph is a
newly-designed local correction graph, which learns discriminative local
features within a patch by exploring nodes' relationships. Extensive
experiments on three large-scale vehicle re-identification datasets demonstrate
that our GiT method is superior to state-of-the-art vehicle re-identification
approaches.
|
[
{
"version": "v1",
"created": "Mon, 12 Jul 2021 14:43:44 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2023 14:41:46 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Jan 2023 03:25:22 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Shen",
"Fei",
""
],
[
"Xie",
"Yi",
""
],
[
"Zhu",
"Jianqing",
""
],
[
"Zhu",
"Xiaobin",
""
],
[
"Zeng",
"Huanqiang",
""
]
] |
new_dataset
| 0.999292 |
2108.09117
|
Miguel \'Angel Mu\~noz-Ba\~n\'on
|
Miguel Angel Munoz-Banon, Edison Velasco-Sanchez, Francisco A.
Candelas and Fernando Torres
|
OpenStreetMap-based Autonomous Navigation With LiDAR Naive-Valley-Path
Obstacle Avoidance
|
This paper is in its second revision for publication at IEEE
Transactions on Intelligent Transportation Systems (T-ITS)
|
IEEE Transactions on Intelligent Transportation Systems, vol. 23,
no. 12, pp. 24428-24438, Dec. 2022
|
10.1109/TITS.2022.3208829
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/publicdomain/zero/1.0/
|
OpenStreetMaps (OSM) is currently studied as the environment representation
for autonomous navigation. It provides advantages such as global consistency, a
lightweight map construction process, and a wide variety of publicly available
road information. However, the location of this information is usually not
very accurate locally.
In this paper, we present a complete autonomous navigation pipeline using OSM
information as the environment representation for global planning. To
compensate for its low local accuracy, we offer the novel LiDAR-based
Naive-Valley-Path (NVP) method, which exploits the concept of "valley" areas to
infer the local path that is always furthest from obstacles. This behavior allows navigation always
through the center of trafficable areas following the road's shape
independently of OSM error. Furthermore, NVP is a naive method that is highly
sample-time-efficient. This time efficiency also enables obstacle avoidance,
even for dynamic objects.
We demonstrate the system's robustness in our research platform BLUE, driving
autonomously across the University of Alicante Scientific Park for more than 20
km with an average error of 0.24 meters relative to the road's center and an
average sample time of 19.8 ms. Our vehicle avoids static obstacles on the road and
even dynamic ones, such as vehicles and pedestrians.
|
[
{
"version": "v1",
"created": "Fri, 20 Aug 2021 11:27:52 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Dec 2021 18:51:12 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Jan 2022 11:32:03 GMT"
},
{
"version": "v4",
"created": "Thu, 30 Jun 2022 09:38:47 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Munoz-Banon",
"Miguel Angel",
""
],
[
"Velasco-Sanchez",
"Edison",
""
],
[
"Candelas",
"Francisco A.",
""
],
[
"Torres",
"Fernando",
""
]
] |
new_dataset
| 0.983657 |
2111.02394
|
Zhe Chen
|
Zhe Chen, Jiahao Wang, Wenhai Wang, Guo Chen, Enze Xie, Ping Luo, Tong
Lu
|
FAST: Faster Arbitrarily-Shaped Text Detector with Minimalist Kernel
Representation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an accurate and efficient scene text detection framework, termed
FAST (i.e., faster arbitrarily-shaped text detector). Different from recent
advanced text detectors that used complicated post-processing and hand-crafted
network architectures, resulting in low inference speed, FAST has two new
designs. (1) We design a minimalist kernel representation (with only 1-channel
output) to model text with arbitrary shape, as well as a GPU-parallel
post-processing to efficiently assemble text lines with a negligible time
overhead. (2) We search the network architecture tailored for text detection,
leading to more powerful features than most networks that are searched for
image classification. Benefiting from these two designs, FAST achieves an
excellent trade-off between accuracy and efficiency on several challenging
datasets, including Total Text, CTW1500, ICDAR 2015, and MSRA-TD500. For
example, FAST-T yields 81.6% F-measure at 152 FPS on Total-Text, outperforming
the previous fastest method by 1.7 points and 70 FPS in terms of accuracy and
speed. With TensorRT optimization, the inference speed can be further
accelerated to over 600 FPS. Code and models will be released at
https://github.com/czczup/FAST.
|
[
{
"version": "v1",
"created": "Wed, 3 Nov 2021 17:58:47 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Jan 2023 14:04:01 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Chen",
"Zhe",
""
],
[
"Wang",
"Jiahao",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Chen",
"Guo",
""
],
[
"Xie",
"Enze",
""
],
[
"Luo",
"Ping",
""
],
[
"Lu",
"Tong",
""
]
] |
new_dataset
| 0.991788 |
2202.03879
|
Nisar Ahmed
|
Nisar Ahmed, Shahzad Asif
|
BIQ2021: A Large-Scale Blind Image Quality Assessment Database
|
Journal of Electronic Imaging, Vol. 31, Issue 5: 16 pages
|
Journal of Electronic Imaging 31(5), 053010 (13 September 2022)
|
10.1117/1.JEI.31.5.053010
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
The assessment of the perceptual quality of digital images is becoming
increasingly important as a result of the widespread use of digital multimedia
devices. Smartphones and high-speed internet are just two examples of
technologies that have multiplied the amount of multimedia content available.
Thus, obtaining a representative dataset, which is required for objective
quality assessment training, is a significant challenge. The Blind Image
Quality Assessment Database, BIQ2021, is presented in this article. By
selecting images with naturally occurring distortions and reliable labeling,
the dataset addresses the challenge of obtaining representative images for
no-reference image quality assessment. The dataset consists of three sets of
images: those taken without the intention of using them for image quality
assessment, those taken with intentionally introduced natural distortions, and
those taken from an open-source image-sharing platform. We attempt to
maintain a diverse collection of images from various devices, containing a
variety of different types of objects and varying degrees of foreground and
background information. To obtain reliable scores, these images are
subjectively scored in a laboratory environment using a single stimulus method.
The database contains information about subjective scoring, human subject
statistics, and the standard deviation of each image. The dataset's Mean
Opinion Scores (MOS) make it useful for assessing visual quality. Additionally,
the proposed database is used to evaluate existing blind image quality
assessment approaches, and the scores are analyzed using Pearson and Spearman's
correlation coefficients. The image database and MOS are freely available for
use and benchmarking.
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 14:07:38 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2023 21:31:46 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Ahmed",
"Nisar",
""
],
[
"Asif",
"Shahzad",
""
]
] |
new_dataset
| 0.999814 |
2203.12273
|
Denis Coquenet
|
Denis Coquenet and Cl\'ement Chatelain and Thierry Paquet
|
DAN: a Segmentation-free Document Attention Network for Handwritten
Document Recognition
| null |
IEEE Transactions on Pattern Analysis and Machine Intelligence
2023
|
10.1109/TPAMI.2023.3235826
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unconstrained handwritten text recognition is a challenging computer vision
task. It is traditionally handled by a two-step approach, combining line
segmentation followed by text line recognition. For the first time, we propose
an end-to-end segmentation-free architecture for the task of handwritten
document recognition: the Document Attention Network. In addition to text
recognition, the model is trained to label text parts using begin and end tags
in an XML-like fashion. This model is made up of an FCN encoder for feature
extraction and a stack of transformer decoder layers for a recurrent
token-by-token prediction process. It takes whole text documents as input and
sequentially outputs characters, as well as logical layout tokens. Contrary to
the existing segmentation-based approaches, the model is trained without using
any segmentation label. We achieve competitive results on the READ 2016 dataset
at page level, as well as double-page level with a CER of 3.43% and 3.70%,
respectively. We also provide results for the RIMES 2009 dataset at page level,
reaching 4.54% of CER.
We provide all source code and pre-trained model weights at
https://github.com/FactoDeepLearning/DAN.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2022 08:40:42 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2022 09:26:23 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Aug 2022 15:28:39 GMT"
},
{
"version": "v4",
"created": "Tue, 13 Dec 2022 10:06:59 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Coquenet",
"Denis",
""
],
[
"Chatelain",
"Clément",
""
],
[
"Paquet",
"Thierry",
""
]
] |
new_dataset
| 0.999314 |
2204.10777
|
Xu Shen
|
Xu Shen, Matthew Lacayo, Nidhir Guggilla, Francesco Borrelli
|
ParkPredict+: Multimodal Intent and Motion Prediction for Vehicles in
Parking Lots with CNN and Transformer
|
Published at IEEE ITSC 2022
| null |
10.1109/ITSC55140.2022.9922162
| null |
cs.CV cs.AI cs.LG cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of multimodal intent and trajectory prediction for human-driven
vehicles in parking lots is addressed in this paper. Using models designed with
CNN and Transformer networks, we extract temporal-spatial and contextual
information from trajectory history and local bird's eye view (BEV) semantic
images, and generate predictions about intent distribution and future
trajectory sequences. Our methods outperform existing models in accuracy, while
allowing an arbitrary number of modes, encoding complex multi-agent scenarios,
and adapting to different parking maps. To train and evaluate our method, we
present the first public 4K video dataset of human driving in parking lots with
accurate annotation, high frame rate, and rich traffic scenarios.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2022 01:54:25 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2023 23:39:42 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Shen",
"Xu",
""
],
[
"Lacayo",
"Matthew",
""
],
[
"Guggilla",
"Nidhir",
""
],
[
"Borrelli",
"Francesco",
""
]
] |
new_dataset
| 0.978969 |
2205.09255
|
Taylor Howell
|
Taylor A. Howell, Simon Le Cleac'h, Kevin Tracy, and Zachary
Manchester
|
CALIPSO: A Differentiable Solver for Trajectory Optimization with Conic
and Complementarity Constraints
|
Fixes and minor reformatting
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
We present a new solver for non-convex trajectory optimization problems that
is specialized for robotics applications. CALIPSO, or the Conic Augmented
Lagrangian Interior-Point SOlver, combines several strategies for constrained
numerical optimization to natively handle second-order cones and
complementarity constraints. It reliably solves challenging motion-planning
problems that include contact-implicit formulations of impacts and Coulomb
friction and state-triggered constraints where general-purpose non-convex
solvers like SNOPT and Ipopt fail to converge. Additionally, CALIPSO supports
efficient differentiation of solutions with respect to problem data, enabling
bi-level optimization applications like auto-tuning of feedback policies.
Reliable convergence of the solver is demonstrated on a range of problems from
manipulation, locomotion, and aerospace domains. An open-source implementation
of this solver is available.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2022 00:19:46 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jun 2022 18:28:44 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Jan 2023 23:35:55 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Howell",
"Taylor A.",
""
],
[
"Cleac'h",
"Simon Le",
""
],
[
"Tracy",
"Kevin",
""
],
[
"Manchester",
"Zachary",
""
]
] |
new_dataset
| 0.999434 |
2206.10340
|
Jai Prakash
|
Jai Prakash, Michele Vignati, Edoardo Sabbioni, and Federico Cheli
|
Vehicle Teleoperation: Successive Reference-Pose Tracking
|
VPPC2022 conference submitted
| null |
10.1109/VPPC55846.2022.10003367
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Vehicle teleoperation is an interesting feature in many fields. A typical
problem of teleoperation is communication time delay which, together with
actuator saturation and environmental disturbances, can cause the vehicle to
deviate from the target trajectory imposed by the human operator, who sends
the vehicle a steering wheel angle reference and a speed/acceleration
reference. With predictive techniques, the time delay can be accounted for to
a sufficient extent. However, in the presence of disturbances, and due to the
absence of instantaneous haptic and visual feedback, the steering command
transmitted by the human operator to the vehicle does not account for the
disturbances observed by the vehicle. To improve reference tracking without
losing promptness in driving control, a reference trajectory in the form of
successive reference poses can be transmitted to the vehicle instead of
steering commands. We introduce this new concept, namely 'successive
reference-pose tracking (SRPT)', to improve path tracking in vehicle
teleoperation. This paper discusses the feasibility and advantages of this new
method compared to the Smith predictor control approach. Simulations are
performed in the SIMULINK environment, where a 14-dof vehicle model is
controlled with Smith and SRPT controllers in the presence of variable network
delay. The scenarios for performance comparison are low-adhesion ground,
strong lateral wind, and steer-rate-demanding maneuvers. The simulation
results show a significant improvement in reference tracking with the SRPT
approach.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 15:12:19 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Prakash",
"Jai",
""
],
[
"Vignati",
"Michele",
""
],
[
"Sabbioni",
"Edoardo",
""
],
[
"Cheli",
"Federico",
""
]
] |
new_dataset
| 0.999644 |
2207.02535
|
Yang Li
|
Yang Li, Shixin Zhu
|
On Galois hulls of linear codes and new entanglement-assisted quantum
error-correcting codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Galois hull of a linear code is the intersection of itself and its Galois
dual code, which has attracted the interest of researchers in recent years. In
this paper, we study Galois hulls of linear codes. Firstly, the symmetry of the
dimensions of Galois hulls of linear codes is found. Some new necessary and
sufficient conditions for linear codes being Galois self-orthogonal codes,
Galois self-dual codes, and Galois linear complementary dual codes are
characterized. Then, we propose explicit methods to construct Galois
self-orthogonal codes of larger length from given Galois self-orthogonal codes.
As an application, linear codes of larger length with Galois hulls of arbitrary
dimensions are further derived. Focusing on the Hermitian inner product, two
new classes of Hermitian self-orthogonal maximum distance separable (MDS) codes
are also constructed. Finally, applying all the results to the construction of
entanglement-assisted quantum error-correcting codes (EAQECCs), many new
$q$-ary or $\sqrt{q}$-ary EAQECCs and MDS EAQECCs with rates greater than or
equal to $\frac{1}{2}$ and positive net rates can be obtained. Moreover, the
minimum distance of many $\sqrt{q}$-ary MDS EAQECCs of length $n>\sqrt{q}+1$ is
greater than or equal to $\lceil \frac{\sqrt{q}}{2} \rceil$.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 09:28:41 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 09:03:16 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Aug 2022 14:26:02 GMT"
},
{
"version": "v4",
"created": "Fri, 23 Sep 2022 12:36:27 GMT"
},
{
"version": "v5",
"created": "Sun, 23 Oct 2022 13:22:37 GMT"
},
{
"version": "v6",
"created": "Wed, 11 Jan 2023 15:00:30 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Li",
"Yang",
""
],
[
"Zhu",
"Shixin",
""
]
] |
new_dataset
| 0.997155 |
2209.12062
|
Lionel Tabourier
|
Maximilien Danisch, Ioannis Panagiotas, Lionel Tabourier
|
Compressing bipartite graphs with a dual reordering scheme
| null | null | null | null |
cs.SI cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
In order to manage massive graphs in practice, it is often necessary to
resort to graph compression, which aims at reducing the memory used when
storing and processing the graph. Efficient compression methods have been
proposed in the literature, especially for web graphs. In most cases, they are
combined with a vertex reordering pre-processing step which significantly
improves the compression rate. However, these techniques are not as efficient
when considering other kinds of graphs. In this paper, we focus on the class of
bipartite graphs and adapt the vertex reordering phase to their specific
structure by proposing a dual reordering scheme. By reordering each group of
vertices in the purpose of minimizing a specific score, we show that we can
reach better compression rates. We also suggest that this approach can be
further refined to make the node orderings more adapted to the compression
phase that follows the ordering phase.
|
[
{
"version": "v1",
"created": "Sat, 24 Sep 2022 18:19:13 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Jan 2023 17:47:49 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Jan 2023 14:46:16 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Danisch",
"Maximilien",
""
],
[
"Panagiotas",
"Ioannis",
""
],
[
"Tabourier",
"Lionel",
""
]
] |
new_dataset
| 0.988102 |
2301.04195
|
Mayank Mittal
|
Mayank Mittal, Calvin Yu, Qinxi Yu, Jingzhou Liu, Nikita Rudin, David
Hoeller, Jia Lin Yuan, Pooria Poorsarvi Tehrani, Ritvik Singh, Yunrong Guo,
Hammad Mazhar, Ajay Mandlekar, Buck Babich, Gavriel State, Marco Hutter,
Animesh Garg
|
ORBIT: A Unified Simulation Framework for Interactive Robot Learning
Environments
|
Project website: https://isaac-orbit.github.io/
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present ORBIT, a unified and modular framework for robot learning powered
by NVIDIA Isaac Sim. It offers a modular design to easily and efficiently
create robotic environments with photo-realistic scenes and fast and accurate
rigid and deformable body simulation. With ORBIT, we provide a suite of
benchmark tasks of varying difficulty -- from single-stage cabinet opening and
cloth folding to multi-stage tasks such as room reorganization. To support
working with diverse observations and action spaces, we include fixed-arm and
mobile manipulators with different physically-based sensors and motion
generators. ORBIT allows training reinforcement learning policies and
collecting large demonstration datasets from hand-crafted or expert solutions
in a matter of minutes by leveraging GPU-based parallelization. In summary, we
offer an open-sourced framework that readily comes with 16 robotic platforms, 4
sensor modalities, 10 motion generators, more than 20 benchmark tasks, and
wrappers to 4 learning libraries. With this framework, we aim to support
various research areas, including representation learning, reinforcement
learning, imitation learning, and task and motion planning. We hope it helps
establish interdisciplinary collaborations in these communities, and its
modularity makes it easily extensible for more tasks and applications in the
future. For videos, documentation, and code: https://isaac-orbit.github.io/.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 20:19:17 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Mittal",
"Mayank",
""
],
[
"Yu",
"Calvin",
""
],
[
"Yu",
"Qinxi",
""
],
[
"Liu",
"Jingzhou",
""
],
[
"Rudin",
"Nikita",
""
],
[
"Hoeller",
"David",
""
],
[
"Yuan",
"Jia Lin",
""
],
[
"Tehrani",
"Pooria Poorsarvi",
""
],
[
"Singh",
"Ritvik",
""
],
[
"Guo",
"Yunrong",
""
],
[
"Mazhar",
"Hammad",
""
],
[
"Mandlekar",
"Ajay",
""
],
[
"Babich",
"Buck",
""
],
[
"State",
"Gavriel",
""
],
[
"Hutter",
"Marco",
""
],
[
"Garg",
"Animesh",
""
]
] |
new_dataset
| 0.99528 |
2301.04288
|
Van Thong Huynh
|
Van Thong Huynh, Hyung-Jeong Yang, Guee-Sang Lee, Soo-Hyung Kim
|
Generic Event Boundary Detection in Video with Pyramid Features
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Generic event boundary detection (GEBD) aims to split video into chunks at a
broad and diverse set of actions as humans naturally perceive event boundaries.
In this study, we present an approach that considers the correlation between
neighbor frames with pyramid feature maps in both spatial and temporal
dimensions to construct a framework for localizing generic events in video. The
features at multiple spatial dimensions of a pre-trained ResNet-50 are
exploited with different views in the temporal dimension to form a temporal
pyramid feature map. Based on that, the similarity between neighbor frames is
calculated and projected to build a temporal pyramid similarity feature vector.
A decoder with 1D convolution operations is used to decode these similarities
to a new representation that incorporates their temporal relationship for later
boundary score estimation. Extensive experiments conducted on the GEBD
benchmark dataset show the effectiveness of our system and its variations, in
which we outperformed the state-of-the-art approaches. Additional experiments
on TAPOS dataset, which contains long-form videos with Olympic sport actions,
demonstrated the effectiveness of our study compared to others.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 03:29:27 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Huynh",
"Van Thong",
""
],
[
"Yang",
"Hyung-Jeong",
""
],
[
"Lee",
"Guee-Sang",
""
],
[
"Kim",
"Soo-Hyung",
""
]
] |
new_dataset
| 0.999166 |
2301.04350
|
Ali Gholami Rudi
|
Ali Gholami Rudi
|
Maximum Centre-Disjoint Mergeable Disks
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
Given a set of disks on the plane, the goal of the problem studied in this
paper is to choose a subset of these disks such that none of its members
contains the centre of any other. Each disk not in this subset must be merged
with one of its nearby disks, that is, increasing the latter's radius. We prove
that this problem is NP-hard. We also present polynomial-time algorithms for
the special case in which the centres of all disks are on a line.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 07:59:07 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Rudi",
"Ali Gholami",
""
]
] |
new_dataset
| 0.955771 |
2301.04402
|
Fernando Alonso-Fernandez
|
Fernando Alonso-Fernandez, Julian Fierrez-Aguilar, Javier
Ortega-Garcia, Joaquin Gonzalez-Rodriguez
|
Secure access system using signature verification over tablet PC
|
Published at IEEE Aerospace and Electronic Systems Magazine
| null |
10.1109/MAES.2007.351725
| null |
cs.CR cs.CV eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low-cost portable devices capable of capturing signature signals are being
increasingly used. Additionally, the social and legal acceptance of the written
signature for authentication purposes is opening a range of new applications.
We describe a highly versatile and scalable prototype for Web-based secure
access using signature verification. The proposed architecture can be easily
extended to work with different kinds of sensors and large-scale databases.
Several remarks are also given on security and privacy of network-based
signature verification.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 11:05:47 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Alonso-Fernandez",
"Fernando",
""
],
[
"Fierrez-Aguilar",
"Julian",
""
],
[
"Ortega-Garcia",
"Javier",
""
],
[
"Gonzalez-Rodriguez",
"Joaquin",
""
]
] |
new_dataset
| 0.988862 |
2301.04408
|
Michael Bommarito Ii
|
Jillian Bommarito, Michael Bommarito, Daniel Martin Katz, Jessica Katz
|
GPT as Knowledge Worker: A Zero-Shot Evaluation of (AI)CPA Capabilities
|
Source code and data available in online SI at
https://github.com/mjbommar/gpt-as-knowledge-worker
| null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The global economy is increasingly dependent on knowledge workers to meet the
needs of public and private organizations. While there is no single definition
of knowledge work, organizations and industry groups still attempt to measure
individuals' capability to engage in it. The most comprehensive assessment of
capability readiness for professional knowledge workers is the Uniform CPA
Examination developed by the American Institute of Certified Public Accountants
(AICPA). In this paper, we experimentally evaluate OpenAI's `text-davinci-003`
and prior versions of GPT on both a sample Regulation (REG) exam and an
assessment of over 200 multiple-choice questions based on the AICPA Blueprints
for legal, financial, accounting, technology, and ethical tasks. First, we find
that `text-davinci-003` achieves a correct rate of 14.4% on a sample REG exam
section, significantly underperforming human capabilities on quantitative
reasoning in zero-shot prompts. Second, `text-davinci-003` appears to be
approaching human-level performance on the Remembering & Understanding and
Application skill levels in the Exam absent calculation. For best prompt and
parameters, the model answers 57.6% of questions correctly, significantly
better than the 25% guessing rate, and its top two answers are correct 82.1% of
the time, indicating strong non-entailment. Finally, we find that recent
generations of GPT-3 demonstrate material improvements on this assessment,
rising from 30% for `text-davinci-001` to 57% for `text-davinci-003`. These
findings strongly suggest that large language models have the potential to
transform the quality and efficiency of future knowledge work.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 11:30:42 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Bommarito",
"Jillian",
""
],
[
"Bommarito",
"Michael",
""
],
[
"Katz",
"Daniel Martin",
""
],
[
"Katz",
"Jessica",
""
]
] |
new_dataset
| 0.995096 |
2301.04521
|
Kuncahyo Setyo Nugroho
|
Kuncahyo Setyo Nugroho, Ismail Akbar, Affi Nizar Suksmawati, Istiadi
|
Deteksi Depresi dan Kecemasan Pengguna Twitter Menggunakan Bidirectional
LSTM
|
in indonesian language, The 4th Conference on Innovation and
Application of Science and Technology (CIASTECH) 2021
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The most common mental disorders experienced by a person in daily life are
depression and anxiety. Social stigma makes people with depression and anxiety
neglected by their surroundings. Therefore, they turn to social media like
Twitter for support. Detecting users with potential depression and anxiety
disorders through textual data is not easy because they do not explicitly
discuss their mental state. It takes a model that can identify potential users
who experience depression and anxiety on textual data to get treatment earlier.
Text classification techniques can achieve this. One approach that can be used
is LSTM as an RNN architecture development in dealing with vanishing gradient
problems. Standard LSTM does not capture enough information because it can only
read sentences from one direction. Meanwhile, Bidirectional LSTM (BiLSTM) is a
two-way LSTM that can capture information without ignoring the context and
meaning of a sentence. The proposed BiLSTM model outperforms all traditional
machine learning models and the standard LSTM. Based on the test results, the
highest accuracy obtained by BiLSTM reached 94.12%. This study has succeeded in
developing a model for the detection of depression and anxiety in Twitter
users.
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 15:37:48 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Nugroho",
"Kuncahyo Setyo",
""
],
[
"Akbar",
"Ismail",
""
],
[
"Suksmawati",
"Affi Nizar",
""
],
[
"Istiadi",
"",
""
]
] |
new_dataset
| 0.99931 |
2301.04591
|
Arup Kumar Sarker
|
Arup Kumar Sarker, Md Khairul Islam, Yuan Tian
|
MVAM: Multi-variant Attacks on Memory for IoT Trust Computing
|
12 pages, 6 figures, 6 code blocks
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the significant development of the Internet of Things and low-cost cloud
services, the sensory and data processing requirements of IoT systems are
continually increasing. TrustZone is a hardware-protected Trusted Execution
Environment (TEE) for ARM processors specifically designed for IoT handheld
systems. It provides memory isolation techniques to protect trusted application
data from being exploited by malicious entities. In this work, we focus on
identifying different vulnerabilities of the TrustZone extension of ARM
Cortex-M processors. Then design and implement a threat model to execute those
attacks. We have found that TrustZone is vulnerable to buffer overflow-based
attacks. We have used this to create an attack called MOFlow and successfully
leaked the data of another trusted app. This is done by intentionally
overflowing the memory of one app to access the encrypted memory of other apps
inside the secure world. We have also found that, by not validating the input
parameters in the entry function, TrustZone has exposed a security weakness. We
call this the Achilles heel and present an attack model showing how to exploit this
weakness too. Our proposed novel attacks are implemented and successfully
tested on two recent ARM Cortex-M processors available on the market (M23 and
M33).
|
[
{
"version": "v1",
"created": "Wed, 11 Jan 2023 17:38:40 GMT"
}
] | 2023-01-12T00:00:00 |
[
[
"Sarker",
"Arup Kumar",
""
],
[
"Islam",
"Md Khairul",
""
],
[
"Tian",
"Yuan",
""
]
] |
new_dataset
| 0.978377 |
2001.01258
|
Vegard Antun
|
Nina M. Gottschling, Vegard Antun, Anders C. Hansen and Ben Adcock
|
The troublesome kernel -- On hallucinations, no free lunches and the
accuracy-stability trade-off in inverse problems
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Methods inspired by Artificial Intelligence (AI) are starting to
fundamentally change computational science and engineering through breakthrough
performances on challenging problems. However, reliability and trustworthiness
of such techniques is becoming a major concern. In inverse problems in imaging,
the focus of this paper, there is increasing empirical evidence that methods
may suffer from hallucinations, i.e., false, but realistic-looking artifacts;
instability, i.e., sensitivity to perturbations in the data; and unpredictable
generalization, i.e., excellent performance on some images, but significant
deterioration on others. This paper presents a theoretical foundation for these
phenomena. We give a mathematical framework describing how and when such
effects arise in arbitrary reconstruction methods, not just AI-inspired
techniques. Several of our results take the form of 'no free lunch' theorems.
Specifically, we show that (i) methods that overperform on a single image can
wrongly transfer details from one image to another, creating a hallucination,
(ii) methods that overperform on two or more images can hallucinate or be
unstable, (iii) optimizing the accuracy-stability trade-off is generally
difficult, (iv) hallucinations and instabilities, if they occur, are not rare
events, and may be encouraged by standard training, (v) it may be impossible to
construct optimal reconstruction maps for certain problems, (vi) standard
methods to improve reliability (e.g., regularization or adversarial training)
may themselves lead to unstable problems. Our results trace these effects to
the kernel of the forward operator. They assert that such effects can be
avoided only if information about the kernel is encoded into the reconstruction
procedure. Based on this, this work aims to spur research into new ways to
develop robust and reliable AI-inspired methods for inverse problems in
imaging.
|
[
{
"version": "v1",
"created": "Sun, 5 Jan 2020 15:30:23 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2023 14:09:43 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Gottschling",
"Nina M.",
""
],
[
"Antun",
"Vegard",
""
],
[
"Hansen",
"Anders C.",
""
],
[
"Adcock",
"Ben",
""
]
] |
new_dataset
| 0.964965 |
2001.08922
|
Ming-Chang Lee
|
Ming-Chang Lee, Jia-Chun Lin, and Ernst Gunnar Gran
|
RePAD: Real-time Proactive Anomaly Detection for Time Series
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During the past decade, many anomaly detection approaches have been
introduced in different fields such as network monitoring, fraud detection, and
intrusion detection. However, they require an understanding of the data pattern and
often need a long off-line period to build a model or network for the target
data. Providing real-time and proactive anomaly detection for streaming time
series without human intervention and domain knowledge is highly valuable since
it greatly reduces human effort and enables appropriate countermeasures to be
undertaken before a disastrous damage, failure, or other harmful event occurs.
However, this issue has not been well studied yet. To address it, this paper
proposes RePAD, which is a Real-time Proactive Anomaly Detection algorithm for
streaming time series based on Long Short-Term Memory (LSTM). RePAD utilizes
short-term historic data points to predict and determine whether or not the
upcoming data point is a sign that an anomaly is likely to happen in the near
future. By dynamically adjusting the detection threshold over time, RePAD is
able to tolerate minor pattern change in time series and detect anomalies
either proactively or on time. Experiments based on two time series datasets
collected from the Numenta Anomaly Benchmark demonstrate that RePAD is able to
proactively detect anomalies and provide early warnings in real time without
human intervention and domain knowledge.
|
[
{
"version": "v1",
"created": "Fri, 24 Jan 2020 09:13:33 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Feb 2020 10:05:32 GMT"
},
{
"version": "v3",
"created": "Sat, 7 Mar 2020 13:48:49 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Oct 2021 12:27:36 GMT"
},
{
"version": "v5",
"created": "Sun, 4 Dec 2022 23:11:55 GMT"
},
{
"version": "v6",
"created": "Fri, 30 Dec 2022 18:47:32 GMT"
},
{
"version": "v7",
"created": "Thu, 5 Jan 2023 10:51:14 GMT"
},
{
"version": "v8",
"created": "Mon, 9 Jan 2023 23:34:54 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Lee",
"Ming-Chang",
""
],
[
"Lin",
"Jia-Chun",
""
],
[
"Gran",
"Ernst Gunnar",
""
]
] |
new_dataset
| 0.996066 |
2202.11271
|
Dhruv Shah
|
Dhruv Shah, Sergey Levine
|
ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints
|
Best Systems Paper Finalist at XVII Robotics: Science and Systems
(RSS 2022), New York City, USA. Project page
https://sites.google.com/view/viking-release
| null |
10.15607/RSS.2022.XVIII.019
| null |
cs.RO cs.AI cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Robotic navigation has been approached as a problem of 3D reconstruction and
planning, as well as an end-to-end learning problem. However, long-range
navigation requires both planning and reasoning about local traversability, as
well as being able to utilize general knowledge about global geography, in the
form of a roadmap, GPS, or other side information providing important cues. In
this work, we propose an approach that integrates learning and planning, and
can utilize side information such as schematic roadmaps, satellite maps and GPS
coordinates as a planning heuristic, without relying on them being accurate.
Our method, ViKiNG, incorporates a local traversability model, which looks at
the robot's current camera observation and a potential subgoal to infer how
easily that subgoal can be reached, as well as a heuristic model, which looks
at overhead maps for hints and attempts to evaluate the appropriateness of
these subgoals in order to reach the goal. These models are used by a heuristic
planner to identify the best waypoint in order to reach the final destination.
Our method performs no explicit geometric reconstruction, utilizing only a
topological representation of the environment. Despite having never seen
trajectories longer than 80 meters in its training dataset, ViKiNG can leverage
its image-based learned controller and goal-directed heuristic to navigate to
goals up to 3 kilometers away in previously unseen environments, and exhibit
complex behaviors such as probing potential paths and backtracking when they
are found to be non-viable. ViKiNG is also robust to unreliable maps and GPS,
since the low-level controller ultimately makes decisions based on egocentric
image observations, using maps only as planning heuristics. For videos of our
experiments, please check out our project page
https://sites.google.com/view/viking-release.
|
[
{
"version": "v1",
"created": "Wed, 23 Feb 2022 02:14:23 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2022 22:50:36 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Jan 2023 02:23:07 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Shah",
"Dhruv",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.999092 |
2205.11236
|
Samy Tindel
|
Sheng Zhang, Guang Lin, Samy Tindel
|
2-d signature of images and texture classification
| null | null |
10.1098/rspa.2022.0346
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a proper notion of 2-dimensional signature for images. This
object is inspired by the so-called rough paths theory, and it captures many
essential features of a 2-dimensional object such as an image. It thus serves
as a low-dimensional feature for pattern classification. Here we implement a
simple procedure for texture classification. In this context, we show that a
low dimensional set of features based on signatures produces an excellent
accuracy.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2022 20:46:24 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Zhang",
"Sheng",
""
],
[
"Lin",
"Guang",
""
],
[
"Tindel",
"Samy",
""
]
] |
new_dataset
| 0.961946 |
2205.13064
|
Joao Lucas Rulff Da Costa
|
Joao Rulff, Fabio Miranda, Maryam Hosseini, Marcos Lage, Mark
Cartwright, Graham Dove, Juan Bello, Claudio T. Silva
|
Urban Rhapsody: Large-scale exploration of urban soundscapes
|
Accepted at EuroVis 2022. Source code available at:
https://github.com/VIDA-NYU/Urban-Rhapsody
| null |
10.1111/cgf.14534
| null |
cs.CY cs.HC cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Noise is one of the primary quality-of-life issues in urban environments. In
addition to annoyance, noise negatively impacts public health and educational
performance. While low-cost sensors can be deployed to monitor ambient noise
levels at high temporal resolutions, the amount of data they produce and the
complexity of these data pose significant analytical challenges. One way to
address these challenges is through machine listening techniques, which are
used to extract features in attempts to classify the source of noise and
understand temporal patterns of a city's noise situation. However, the
overwhelming number of noise sources in the urban environment and the scarcity
of labeled data makes it nearly impossible to create classification models with
large enough vocabularies that capture the true dynamism of urban soundscapes.
In this paper, we first identify a set of requirements in the yet unexplored
domain of urban soundscape exploration. To satisfy the requirements and tackle
the identified challenges, we propose Urban Rhapsody, a framework that combines
state-of-the-art audio representation, machine learning, and visual analytics
to allow users to interactively create classification models, understand noise
patterns of a city, and quickly retrieve and label audio excerpts in order to
create a large high-precision annotated database of urban sound recordings. We
demonstrate the tool's utility through case studies performed by domain experts
using data generated over the five-year deployment of a one-of-a-kind sensor
network in New York City.
|
[
{
"version": "v1",
"created": "Wed, 25 May 2022 22:02:36 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Rulff",
"Joao",
""
],
[
"Miranda",
"Fabio",
""
],
[
"Hosseini",
"Maryam",
""
],
[
"Lage",
"Marcos",
""
],
[
"Cartwright",
"Mark",
""
],
[
"Dove",
"Graham",
""
],
[
"Bello",
"Juan",
""
],
[
"Silva",
"Claudio T.",
""
]
] |
new_dataset
| 0.992625 |
2207.09744
|
Jianrong Yao
|
Yansong Gao, Jianrong Yao, Lihui Pang, Wei Yang, Anmin Fu, Said F.
Al-Sarawi, and Derek Abbott
|
MLMSA: Multi-Label Multi-Side-Channel-Information enabled Deep Learning
Attacks on APUF Variants
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To improve the modeling resilience of silicon strong physical unclonable
functions (PUFs), in particular, the APUFs, that yield a very large number of
challenge response pairs (CRPs), a number of composited APUF variants such as
XOR-APUF, interpose-PUF (iPUF), feed-forward APUF (FF-APUF),and OAX-APUF have
been devised. When examining their security in terms of modeling resilience,
utilizing multiple information sources such as power side channel information
(SCI) and/or reliability SCI given a challenge is under-explored, which poses a
challenge to their supposed modeling resilience in practice. Building upon
multi-label/head deep learning model architecture, this work proposes
Multi-Label Multi-Side-channel-information enabled deep learning Attacks
(MLMSA) to thoroughly evaluate the modeling resilience of the aforementioned APUF
variants. Despite its simplicity, MLMSA can successfully break large-scale
APUF variants, which has not previously been achieved. More precisely, the
MLMSA breaks 128-stage 30-XOR-APUF, (9, 9)- and (2, 18)-iPUFs, and (2, 2,
30)-OAX-APUF when CRPs, power SCI and reliability SCI are concurrently used. It
breaks 128-stage 12-XOR-APUF and (2, 2, 9)-OAX-APUF even when only the
easy-to-obtain reliability SCI and CRPs are exploited. The 128-stage six-loop
FF-APUF and one-loop 20-XOR-FF-APUF can be broken by simultaneously using
reliability SCI and CRPs. All these attacks are normally completed within an
hour with a standard personal computer. Therefore, MLMSA is a useful technique
for evaluating other existing or any emerging strong PUF designs.
|
[
{
"version": "v1",
"created": "Wed, 20 Jul 2022 08:42:52 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2023 12:33:40 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Gao",
"Yansong",
""
],
[
"Yao",
"Jianrong",
""
],
[
"Pang",
"Lihui",
""
],
[
"Yang",
"Wei",
""
],
[
"Fu",
"Anmin",
""
],
[
"Al-Sarawi",
"Said F.",
""
],
[
"Abbott",
"Derek",
""
]
] |
new_dataset
| 0.986165 |
2207.12297
|
Nicola Capece PhD
|
Gilda Manfredi, Nicola Capece, Ugo Erra, and Monica Gruosso
|
TreeSketchNet: From Sketch To 3D Tree Parameters Generation
| null |
ACM Transactions on Intelligent Systems and Technology, 09 January
2023
|
10.1145/3579831
| null |
cs.CV cs.AI cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D modeling of non-linear objects from stylized sketches is a challenge even
for experts in Computer Graphics (CG). The extrapolation of objects parameters
from a stylized sketch is a very complex and cumbersome task. In the present
study, we propose a broker system that mediates between the modeler and the 3D
modelling software and can transform a stylized sketch of a tree into a
complete 3D model. The input sketches do not need to be accurate or detailed,
and only need to represent a rudimentary outline of the tree that the modeler
wishes to 3D-model. Our approach is based on a well-defined Deep Neural Network
(DNN) architecture, which we call TreeSketchNet (TSN), based on convolutions and
able to generate Weber and Penn parameters that can be interpreted by the
modelling software to generate a 3D model of a tree starting from a simple
sketch. The training dataset consists of Synthetically-Generated
(SG) sketches that are associated with Weber-Penn parameters
generated by a dedicated Blender modelling software add-on. The accuracy of the
proposed method is demonstrated by testing the TSN with both synthetic and
hand-made sketches. Finally, we provide a qualitative analysis of our results,
by evaluating the coherence of the predicted parameters with several
distinguishing features.
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 16:08:05 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Oct 2022 16:33:47 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Manfredi",
"Gilda",
""
],
[
"Capece",
"Nicola",
""
],
[
"Erra",
"Ugo",
""
],
[
"Gruosso",
"Monica",
""
]
] |
new_dataset
| 0.997501 |
2208.01230
|
Chao Yan
|
Chao Yan, Yao Yan, Zhiyu Wan, Ziqi Zhang, Larsson Omberg, Justin
Guinney, Sean D. Mooney, Bradley A. Malin
|
A Multifaceted Benchmarking of Synthetic Electronic Health Record
Generation Models
| null | null |
10.1038/s41467-022-35295-1
| null |
cs.LG cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Synthetic health data have the potential to mitigate privacy concerns when
sharing data to support biomedical research and the development of innovative
healthcare applications. Modern approaches for data generation based on machine
learning, generative adversarial networks (GAN) methods in particular, continue
to evolve and demonstrate remarkable potential. Yet there is a lack of a
systematic assessment framework to benchmark methods as they emerge and
determine which methods are most appropriate for which use cases. In this work,
we introduce a generalizable benchmarking framework to appraise key
characteristics of synthetic health data with respect to utility and privacy
metrics. We apply the framework to evaluate synthetic data generation methods
for electronic health records (EHRs) data from two large academic medical
centers with respect to several use cases. The results illustrate that there is
a utility-privacy tradeoff for sharing synthetic EHR data. The results further
indicate that no method is unequivocally the best on all criteria in each use
case, which makes it evident why synthetic data generation methods need to be
assessed in context.
|
[
{
"version": "v1",
"created": "Tue, 2 Aug 2022 03:44:45 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Yan",
"Chao",
""
],
[
"Yan",
"Yao",
""
],
[
"Wan",
"Zhiyu",
""
],
[
"Zhang",
"Ziqi",
""
],
[
"Omberg",
"Larsson",
""
],
[
"Guinney",
"Justin",
""
],
[
"Mooney",
"Sean D.",
""
],
[
"Malin",
"Bradley A.",
""
]
] |
new_dataset
| 0.956355 |
2208.11012
|
Martinus Grady Naftali
|
Martinus Grady Naftali, Jason Sebastian Sulistyawan, and Kelvin Julian
|
AniWho : A Quick and Accurate Way to Classify Anime Character Faces in
Images
|
11 pages, 26 figures, 8 tables
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to classify Japanese animation-style character faces, this paper
attempts to delve further into the many models currently available, including
InceptionV3, InceptionResNetV2, MobileNetV2, and EfficientNet, employing
transfer learning. This paper demonstrates that EfficientNet-B7, which achieves
a top-1 accuracy of 85.08%, has the highest accuracy rate. MobileNetV2, which
achieves a less accurate result with a top-1 accuracy of 81.92%, benefits from
a significantly faster inference time and fewer required parameters. However,
the experiments show that MobileNetV2 is prone to overfitting; EfficientNet-B0
fixes the overfitting issue at the cost of slightly slower inference than
MobileNetV2, while achieving a slightly more accurate result, with a top-1
accuracy of 83.46%. This
paper also uses a few-shot learning architecture called Prototypical Networks,
which offers an adequate substitute for conventional transfer learning
techniques.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 14:50:01 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 08:33:17 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Jan 2023 13:44:47 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Naftali",
"Martinus Grady",
""
],
[
"Sulistyawan",
"Jason Sebastian",
""
],
[
"Julian",
"Kelvin",
""
]
] |
new_dataset
| 0.996461 |
2210.12115
|
Zillur Rahman
|
Steven Nguyen, Zillur Rahman, Brendan Tan Morris
|
Pedestrian Emergency Braking in Ten Weeks
|
Accepted for publication, 6 pages
|
2022 IEEE International Conference on Vehicular Electronics and
Safety (ICVES)
|
10.1109/ICVES56941.2022.9987182
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In the last decade, research in the field of autonomous vehicles has grown
immensely, and there is a wealth of information available for researchers to
rapidly establish an autonomous vehicle platform for basic maneuvers. In this
paper, we design, implement, and test, in ten weeks, a PD approach to
longitudinal control for pedestrian emergency braking. We also propose a
lateral controller with a similar design for future testing in lane following.
Using widely available tools, we demonstrate the safety of the vehicle in
pedestrian emergency braking scenarios.
|
[
{
"version": "v1",
"created": "Fri, 21 Oct 2022 17:16:25 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Nguyen",
"Steven",
""
],
[
"Rahman",
"Zillur",
""
],
[
"Morris",
"Brendan Tan",
""
]
] |
new_dataset
| 0.999579 |
2210.12777
|
Fenglin Liu
|
Fenglin Liu, Bang Yang, Chenyu You, Xian Wu, Shen Ge, Zhangdaihong
Liu, Xu Sun, Yang Yang, David A. Clifton
|
Generating Accurate and Faithful Discharge Instructions: Task, Dataset,
and Model
|
Accepted by NeurIPS 2022. (Thirty-sixth Conference on Neural
Information Processing Systems, https://openreview.net/forum?id=dp0zWsdOV1h)
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The "Patient Instruction" (PI), known as "Discharge Instruction", which
contains critical instructional information provided both to carers and to the
patient at the time of discharge, is essential for the patient to manage their
condition outside hospital. An accurate and easy-to-follow PI can improve the
self-management of patients which can in turn reduce hospital readmission
rates. However, writing an appropriate PI can be extremely time-consuming for
physicians, and is subject to being incomplete or error-prone for (potentially
overworked) physicians. Therefore, we propose a new task that can provide an
objective means of avoiding incompleteness, while reducing clinical workload:
the automatic generation of the PI, which is imagined as being a document that
the clinician can review, modify, and approve as necessary (rather than taking
the human "out of the loop"). We build a benchmark clinical dataset and propose
the Re3Writer, which imitates the working patterns of physicians to first
retrieve related working experience from historical PIs written by physicians,
then reason related medical knowledge. Finally, it refines the retrieved
working experience and reasoned medical knowledge to extract useful
information, which is used to generate the PI for previously-unseen patients
according to their health records during hospitalization. Our experiments show
that, using our method, the performance of five different models can be
substantially boosted across all metrics, with up to 20%, 11%, and 19% relative
improvements in BLEU-4, ROUGE-L, and METEOR, respectively. Meanwhile, we show
results from human evaluations to measure the effectiveness in terms of its
usefulness for clinical practice. The code is available at
https://github.com/AI-in-Hospitals/Patient-Instructions
|
[
{
"version": "v1",
"created": "Sun, 23 Oct 2022 16:34:39 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jan 2023 16:00:01 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Liu",
"Fenglin",
""
],
[
"Yang",
"Bang",
""
],
[
"You",
"Chenyu",
""
],
[
"Wu",
"Xian",
""
],
[
"Ge",
"Shen",
""
],
[
"Liu",
"Zhangdaihong",
""
],
[
"Sun",
"Xu",
""
],
[
"Yang",
"Yang",
""
],
[
"Clifton",
"David A.",
""
]
] |
new_dataset
| 0.999673 |
2212.02635
|
Peilin Zhong
|
CJ Carey, Jonathan Halcrow, Rajesh Jayaram, Vahab Mirrokni, Warren
Schudy, Peilin Zhong
|
Stars: Tera-Scale Graph Building for Clustering and Graph Learning
|
NeurIPS 2022
| null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A fundamental procedure in the analysis of massive datasets is the
construction of similarity graphs. Such graphs play a key role for many
downstream tasks, including clustering, classification, graph learning, and
nearest neighbor search. For these tasks, it is critical to build graphs which
are sparse yet still representative of the underlying data. The benefits of
sparsity are twofold: firstly, constructing dense graphs is infeasible in
practice for large datasets, and secondly, the runtime of downstream tasks is
directly influenced by the sparsity of the similarity graph. In this work, we
present $\textit{Stars}$: a highly scalable method for building extremely
sparse graphs via two-hop spanners, which are graphs where similar points are
connected by a path of length at most two. Stars can construct two-hop spanners
with significantly fewer similarity comparisons, which are a major bottleneck
for learning based models where comparisons are expensive to evaluate.
Theoretically, we demonstrate that Stars builds a graph in nearly-linear time,
where approximate nearest neighbors are contained within two-hop neighborhoods.
In practice, we have deployed Stars for multiple data sets allowing for graph
building at the $\textit{Tera-Scale}$, i.e., for graphs with tens of trillions
of edges. We evaluate the performance of Stars for clustering and graph
learning, and demonstrate 10- to 1000-fold improvements in pairwise similarity
comparisons compared to different baselines, and a 2- to 10-fold improvement in
running time without quality loss.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2022 22:43:26 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2023 22:23:38 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Carey",
"CJ",
""
],
[
"Halcrow",
"Jonathan",
""
],
[
"Jayaram",
"Rajesh",
""
],
[
"Mirrokni",
"Vahab",
""
],
[
"Schudy",
"Warren",
""
],
[
"Zhong",
"Peilin",
""
]
] |
new_dataset
| 0.996511 |
2212.07181
|
Waseem Shariff Mr
|
Waseem Shariff, Muhammad Ali Farooq, Joe Lemley and Peter Corcoran
|
Event-based YOLO Object Detection: Proof of Concept for Forward
Perception System
|
7 pages, 9 figures, ICMV conference 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Neuromorphic vision, or event vision, is an advanced vision technology in which,
in contrast to a visible-light camera that outputs pixels, the event sensor
generates neuromorphic events every time there is a brightness change that
exceeds a specific threshold in the field of view (FOV). This study focuses on
leveraging neuromorphic event data for roadside object detection. This is a
proof of concept towards building artificial intelligence (AI) based pipelines
which can be used for forward perception systems for advanced vehicular
applications. The focus is on building efficient state-of-the-art object
detection networks with better inference results for fast-moving forward
perception using an event camera. In this article, the event-simulated A2D2
dataset is manually annotated and trained on two different YOLOv5 networks
(small and large variants). To further assess its robustness, single model
testing and ensemble model testing are carried out.
|
[
{
"version": "v1",
"created": "Wed, 14 Dec 2022 12:12:29 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2023 12:22:07 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Jan 2023 12:02:54 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Shariff",
"Waseem",
""
],
[
"Farooq",
"Muhammad Ali",
""
],
[
"Lemley",
"Joe",
""
],
[
"Corcoran",
"Peter",
""
]
] |
new_dataset
| 0.997377 |
2212.13993
|
Mansoor Ali
|
Mansoor Ali, Faisal Naeem, Georges Kaddoum, and Ekram Hossain
|
Metaverse Communications, Networking, Security, and Applications:
Research Issues, State-of-the-Art, and Future Directions
| null | null | null | null |
cs.CR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Metaverse is an evolving orchestrator of the next-generation Internet
architecture that produces an immersive and self-adapting virtual world in
which humans perform activities similar to those in the real world, such as
playing sports, doing work, and socializing. It is becoming a reality and is
driven by ever-evolving advanced technologies such as extended reality,
artificial intelligence, and blockchain. In this context, Metaverse will play
an essential role in developing smart cities, which becomes more evident in the
post-COVID-19 pandemic metropolitan setting. However, the new paradigm imposes
new challenges, such as the novel privacy and security threats that can
emerge in the digital Metaverse ecosystem. Moreover, it requires the
convergence of several media types with the capability to quickly process
massive amounts of data to keep the residents safe and well-informed, which can
raise issues related to scalability and interoperability. In light of this,
this research study aims to review the literature on the state of the art of
integrating the Metaverse architecture concepts in smart cities. First, this
paper presents the theoretical architecture of Metaverse and discusses
international companies' interest in this emerging technology. It also examines
the notion of Metaverse relevant to virtual reality, identifies the prevalent
threats, and determines the importance of communication infrastructure in
information gathering for efficient Metaverse operation. Next, the notion of
blockchain technologies is discussed regarding privacy preservation and how it
can provide tamper-proof content sharing among Metaverse users. Finally, the
application of distributed Metaverse for social good is highlighted.
|
[
{
"version": "v1",
"created": "Sun, 25 Dec 2022 03:37:35 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2023 20:15:29 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Ali",
"Mansoor",
""
],
[
"Naeem",
"Faisal",
""
],
[
"Kaddoum",
"Georges",
""
],
[
"Hossain",
"Ekram",
""
]
] |
new_dataset
| 0.99722 |
2301.03594
|
Jack Sturgess
|
Jack Sturgess, Simon Birnbach, Simon Eberz, Ivan Martinovic
|
RingAuth: Wearable Authentication using a Smart Ring
|
arXiv admin note: text overlap with arXiv:2202.01736
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we show that by using inertial sensor data generated by a
smart ring, worn on the finger, the user can be authenticated when making
mobile payments or when knocking on a door (for access control). The proposed
system can be deployed purely in software and does not require updates to
existing payment terminals or infrastructure. We also demonstrate that smart
ring data can authenticate smartwatch gestures, and vice versa, allowing either
device to act as an implicit second factor for the other. To validate the
system, we conduct a user study (n=21) to collect inertial sensor data from
users as they perform gestures, and we evaluate the system against an active
impersonation attacker. Based on this data, we develop payment and access
control authentication models for which we achieve EERs of 0.04 and 0.02,
respectively.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 23:32:21 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Sturgess",
"Jack",
""
],
[
"Birnbach",
"Simon",
""
],
[
"Eberz",
"Simon",
""
],
[
"Martinovic",
"Ivan",
""
]
] |
new_dataset
| 0.999796 |
2301.03641
|
Peng Hu
|
Peng Hu
|
SatNetOps: Toward Multi-Layer Networking for Satellite Network
Operations
| null | null | null | null |
cs.NI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in low-Earth-orbit (LEO) satellites aim to bring
resilient, ubiquitous, and high-quality service to future Internet
infrastructure. However, the soaring number of space assets, increasing
dynamics of LEO satellites and expanding dimensions of network threats call for
an enhanced approach to efficient satellite operations. To address these
pressing challenges, we propose an approach for satellite network operations
based on multi-layer satellite networking (MLSN), called "SatNetOps". Two
SatNetOps schemes are proposed, referred to as LEO-LEO MLSN (LLM) and GEO-LEO
MLSN (GLM). The performance of the proposed schemes is evaluated in 24-hr
satellite scenarios with typical payload setups in simulations, where the key
metrics such as latency and reliability are discussed with the consideration of
the Consultative Committee for Space Data Systems (CCSDS) standard-compliant
telemetry and telecommand missions. While the SatNetOps approach is promising,
we also analyze the factors affecting the performance of the LLM and GLM
schemes. Discussions of the results and concluding remarks are provided at the
end.
|
[
{
"version": "v1",
"created": "Mon, 9 Jan 2023 19:25:19 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Hu",
"Peng",
""
]
] |
new_dataset
| 0.955671 |
2301.03734
|
Sifei Luan
|
Frank Sifei Luan, Stephanie Wang, Samyukta Yagati, Sean Kim, Kenneth
Lien, Isaac Ong, Tony Hong, SangBin Cho, Eric Liang, Ion Stoica
|
Exoshuffle-CloudSort
| null | null | null | null |
cs.DC cs.OS
|
http://creativecommons.org/licenses/by/4.0/
|
We present Exoshuffle-CloudSort, a sorting application running on top of Ray
using the Exoshuffle architecture. Exoshuffle-CloudSort runs on Amazon EC2,
with input and output data stored on Amazon S3. Using 40 i4i.4xlarge workers,
Exoshuffle-CloudSort completes the 100 TB CloudSort Benchmark (Indy category)
in 5378 seconds, with an average total cost of $97.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 00:43:32 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Luan",
"Frank Sifei",
""
],
[
"Wang",
"Stephanie",
""
],
[
"Yagati",
"Samyukta",
""
],
[
"Kim",
"Sean",
""
],
[
"Lien",
"Kenneth",
""
],
[
"Ong",
"Isaac",
""
],
[
"Hong",
"Tony",
""
],
[
"Cho",
"SangBin",
""
],
[
"Liang",
"Eric",
""
],
[
"Stoica",
"Ion",
""
]
] |
new_dataset
| 0.998534 |
2301.03771
|
David Noever
|
Forrest McKee, David Noever
|
Chatbots in a Honeypot World
| null | null | null | null |
cs.CR cs.CY cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Question-and-answer agents like ChatGPT offer a novel tool for use as a
potential honeypot interface in cyber security. By imitating Linux, Mac, and
Windows terminal commands and providing an interface for TeamViewer, nmap, and
ping, it is possible to create a dynamic environment that can adapt to the
actions of attackers and provide insight into their tactics, techniques, and
procedures (TTPs). The paper illustrates ten diverse tasks that a
conversational agent or large language model might answer appropriately in
response to the actions of a command-line attacker. The original result features feasibility
studies for ten model tasks meant for defensive teams to mimic expected
honeypot interfaces with minimal risks. Ultimately, the usefulness outside of
forensic activities stems from whether the dynamic honeypot can extend the
time-to-conquer or otherwise delay attacker timelines short of reaching key
network assets like databases or confidential information. While ongoing
maintenance and monitoring may be required, ChatGPT's ability to detect and
deflect malicious activity makes it a valuable option for organizations seeking
to enhance their cyber security posture. Future work will focus on
cybersecurity layers, including perimeter security, host virus detection, and
data security.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 03:43:35 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"McKee",
"Forrest",
""
],
[
"Noever",
"David",
""
]
] |
new_dataset
| 0.980742 |
2301.03831
|
Lin Song
|
Lin Song, Songyang Zhang, Songtao Liu, Zeming Li, Xuming He, Hongbin
Sun, Jian Sun, Nanning Zheng
|
Dynamic Grained Encoder for Vision Transformers
|
Accepted by NeurIPS2021
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers, the de-facto standard for language modeling, have recently been
applied to vision tasks. This paper introduces sparse queries for vision
transformers to exploit the intrinsic spatial redundancy of natural images and
save computational costs. Specifically, we propose a Dynamic Grained Encoder
for vision transformers, which can adaptively assign a suitable number of
queries to each spatial region. Thus it achieves a fine-grained representation
in discriminative regions while keeping high efficiency. Besides, the dynamic
grained encoder is compatible with most vision transformer frameworks. Without
bells and whistles, our encoder allows the state-of-the-art vision transformers
to reduce computational complexity by 40%-60% while maintaining comparable
performance on image classification. Extensive experiments on object detection
and segmentation further demonstrate the generalizability of our approach. Code
is available at https://github.com/StevenGrove/vtpack.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 07:55:29 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Song",
"Lin",
""
],
[
"Zhang",
"Songyang",
""
],
[
"Liu",
"Songtao",
""
],
[
"Li",
"Zeming",
""
],
[
"He",
"Xuming",
""
],
[
"Sun",
"Hongbin",
""
],
[
"Sun",
"Jian",
""
],
[
"Zheng",
"Nanning",
""
]
] |
new_dataset
| 0.998186 |
2301.03899
|
Rakesh Kumar
|
Truls Asheim, Boris Grot, Rakesh Kumar
|
A Storage-Effective BTB Organization for Servers
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Many contemporary applications feature multi-megabyte instruction footprints
that overwhelm the capacity of branch target buffers (BTB) and instruction
caches (L1-I), causing frequent front-end stalls that inevitably hurt
performance. BTB capacity is crucial for performance as a sufficiently large
BTB enables the front-end to accurately resolve the upcoming execution path and
steer instruction fetch appropriately. Moreover, it also enables highly
effective fetch-directed instruction prefetching that can eliminate a large
portion of L1-I misses. For these reasons, commercial processors allocate vast
amounts of storage capacity to BTBs.
This work aims to reduce BTB storage requirements by optimizing the
organization of BTB entries. Our key insight is that storing branch target
offsets, instead of full or compressed targets, can drastically reduce BTB
storage cost as the vast majority of dynamic branches have short offsets
requiring just a handful of bits to encode. Based on this insight, we size the
ways of a set associative BTB to hold different numbers of target offset bits
such that each way stores offsets within a particular range. Doing so enables a
dramatic reduction in storage for target addresses. Our final design, called
BTB-X, uses an 8-way set associative BTB with differently sized ways that
enables it to track about 2.24x more branches than a conventional BTB and 1.3x
more branches than a storage-optimized state-of-the-art BTB organization,
called PDede, with the same storage budget.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 10:52:19 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Asheim",
"Truls",
""
],
[
"Grot",
"Boris",
""
],
[
"Kumar",
"Rakesh",
""
]
] |
new_dataset
| 0.990834 |
2301.03971
|
Yifan Wang
|
Megan Dare, Valentina Fajardo Diaz, Averie Ho Zoen So, Yifan Wang,
Shibingfeng Zhang
|
Unsupervised Mandarin-Cantonese Machine Translation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advancements in unsupervised machine translation have enabled the development
of machine translation systems that can translate between languages for which
there is not an abundance of parallel data available. We explored unsupervised
machine translation between Mandarin Chinese and Cantonese. Despite the vast
number of native speakers of Cantonese, there is still no large-scale corpus
for the language, due to the fact that Cantonese is primarily used for oral
communication. The key contributions of our project include: 1. The creation of
a new corpus containing approximately 1 million Cantonese sentences, and 2. A
large-scale comparison across different model architectures, tokenization
schemes, and embedding structures. Our best model trained with character-based
tokenization and a Transformer architecture achieved a character-level BLEU of
25.1 when translating from Mandarin to Cantonese and of 24.4 when translating
from Cantonese to Mandarin. In this paper we discuss our research process,
experiments, and results.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 14:09:40 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Dare",
"Megan",
""
],
[
"Diaz",
"Valentina Fajardo",
""
],
[
"So",
"Averie Ho Zoen",
""
],
[
"Wang",
"Yifan",
""
],
[
"Zhang",
"Shibingfeng",
""
]
] |
new_dataset
| 0.999021 |
2301.04037
|
Mohammadreza Shetab-Bushehri
|
Mohammadreza Shetab-Bushehri, Miguel Aranda, Youcef Mezouar, Adrien
Bartoli, Erol Ozgur
|
ROBUSfT: Robust Real-Time Shape-from-Template, a C++ Library
|
19 Pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Tracking the 3D shape of a deforming object using only monocular 2D vision is
a challenging problem. This is because one should (i) infer the 3D shape from a
2D image, which is a severely underconstrained problem, and (ii) implement the
whole solution pipeline in real-time. The pipeline typically requires feature
detection and matching, mismatch filtering, 3D shape inference and feature
tracking algorithms. We propose ROBUSfT, a conventional pipeline based on a
template containing the object's rest shape, texture map, and deformation law.
ROBUSfT is ready-to-use, wide-baseline, capable of handling large deformations,
fast up to 30 fps, free of training, and robust against partial occlusions and
discontinuities in video frames. It outperforms the state-of-the-art methods in
challenging datasets. ROBUSfT is implemented as a publicly available C++
library and we provide a tutorial on how to use it in
https://github.com/mrshetab/ROBUSfT
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 15:39:02 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Shetab-Bushehri",
"Mohammadreza",
""
],
[
"Aranda",
"Miguel",
""
],
[
"Mezouar",
"Youcef",
""
],
[
"Bartoli",
"Adrien",
""
],
[
"Ozgur",
"Erol",
""
]
] |
new_dataset
| 0.996668 |
2301.04060
|
Xavier Allamigeon
|
Xavier Allamigeon, Quentin Canu and Pierre-Yves Strub
|
A Formal Disproof of the Hirsch Conjecture
|
15 pages, 6 figures, 1 table. To appear in the proceedings of CPP'23
| null | null | null |
cs.LO math.CO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The purpose of this paper is the formal verification of a counterexample of
Santos et al. to the so-called Hirsch Conjecture on the diameter of polytopes
(bounded convex polyhedra). In contrast with the pen-and-paper proof, our
approach is entirely computational: we implement in Coq and prove correct an
algorithm that explicitly computes, within the proof assistant, vertex-edge
graphs of polytopes as well as their diameter. The originality of this
certificate-based algorithm is to achieve a tradeoff between simplicity and
efficiency.
Simplicity is crucial in obtaining the proof of correctness of the algorithm.
This proof splits into the correctness of an abstract algorithm stated over
proof-oriented data types and the correspondence with a low-level
implementation over computation-oriented data types. A special effort has been
made to reduce the algorithm to a small sequence of elementary operations
(e.g., matrix multiplications, basic routines on sets and graphs), in order to
make the derivation of the correctness of the low-level implementation more
transparent.
Efficiency allows us to scale up to polytopes with a challenging
combinatorics. For instance, we formally check the two counterexamples of
Matschke, Santos and Weibel to the Hirsch conjecture, respectively 20- and
23-dimensional polytopes with 36 425 and 73 224 vertices involving rational
coefficients with up to 40 digits in their numerator and denominator. We also
illustrate the performance of the method by computing the list of vertices or
the diameter of well-known classes of polytopes, such as (polars of) cyclic
polytopes involved in McMullen's Upper Bound Theorem.
|
[
{
"version": "v1",
"created": "Tue, 10 Jan 2023 16:24:58 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Allamigeon",
"Xavier",
""
],
[
"Canu",
"Quentin",
""
],
[
"Strub",
"Pierre-Yves",
""
]
] |
new_dataset
| 0.998198 |
2301.04120
|
Yu-Wen Chen
|
Yu-Wen Chen, Hsin-Min Wang, Yu Tsao
|
BASPRO: a balanced script producer for speech corpus collection based on
the genetic algorithm
|
accepted by APSIPA Transactions on Signal and Information Processing
| null | null | null |
cs.NE cs.AI cs.CL cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The performance of speech-processing models is heavily influenced by the
speech corpus that is used for training and evaluation. In this study, we
propose the BAlanced Script PROducer (BASPRO) system, which can automatically
construct a phonetically balanced and rich set of Chinese sentences for
collecting Mandarin Chinese speech data. First, we used pretrained natural
language processing systems to extract ten-character candidate sentences from a
large corpus of Chinese news texts. Then, we applied a genetic algorithm-based
method to select 20 phonetically balanced sentence sets, each containing 20
sentences, from the candidate sentences. Using BASPRO, we obtained a recording
script called TMNews, which contains 400 ten-character sentences. TMNews covers
84% of the syllables used in the real world. Moreover, the syllable
distribution has 0.96 cosine similarity to the real-world syllable
distribution. We converted the script into a speech corpus using two
text-to-speech systems. Using the designed speech corpus, we tested the
performances of speech enhancement (SE) and automatic speech recognition (ASR),
which are among the most important regression- and classification-based speech
processing tasks, respectively. The experimental results show that the SE and
ASR models trained on the designed speech corpus outperform their counterparts
trained on a randomly composed speech corpus.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 02:05:30 GMT"
}
] | 2023-01-11T00:00:00 |
[
[
"Chen",
"Yu-Wen",
""
],
[
"Wang",
"Hsin-Min",
""
],
[
"Tsao",
"Yu",
""
]
] |
new_dataset
| 0.9974 |
2008.06397
|
Dylan Shah
|
Dylan S. Shah (1), Joshua P. Powers (2), Liana G. Tilton (1), Sam
Kriegman (2), Josh Bongard (2), and Rebecca Kramer-Bottiglio (1) ((1) Yale
University, (2) University of Vermont)
|
A soft robot that adapts to environments through shape change
|
25 Pages, 5 figures. Published at Nature Machine Intelligence, Vol.
2. (2020). For definitive version, see https://rdcu.be/cbuUW
| null |
10.1038/s42256-020-00263-1
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many organisms, including various species of spiders and caterpillars, change
their shape to switch gaits and adapt to different environments. Recent
technological advances, ranging from stretchable circuits to highly deformable
soft robots, have begun to make shape-changing robots a possibility. However,
it is currently unclear how and when shape change should occur, and what
capabilities could be gained, leading to a wide range of unsolved design and
control problems. To begin addressing these questions, here we simulate,
design, and build a soft robot that utilizes shape change to achieve locomotion
over both a flat and inclined surface. Modeling this robot in simulation, we
explore its capabilities in two environments and demonstrate the existence of
environment-specific shapes and gaits that successfully transfer to the
physical hardware. We found that the shape-changing robot traverses these
environments better than an equivalent but non-morphing robot, in simulation
and reality.
|
[
{
"version": "v1",
"created": "Fri, 14 Aug 2020 14:49:31 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Oct 2020 03:59:26 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Dec 2020 20:30:38 GMT"
},
{
"version": "v4",
"created": "Mon, 25 Jul 2022 16:25:21 GMT"
},
{
"version": "v5",
"created": "Mon, 9 Jan 2023 03:27:31 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Shah",
"Dylan S.",
""
],
[
"Powers",
"Joshua P.",
""
],
[
"Tilton",
"Liana G.",
""
],
[
"Kriegman",
"Sam",
""
],
[
"Bongard",
"Josh",
""
],
[
"Kramer-Bottiglio",
"Rebecca",
""
]
] |
new_dataset
| 0.992173 |
2103.15066
|
Fang Wu
|
Fang Wu, Stan Z. Li
|
InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence
Insertion Problem?
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sentence insertion is an interesting NLP problem but has received insufficient
attention. Existing approaches in sentence ordering, text coherence, and
question answering are neither suitable nor good enough at solving it. To
bridge this gap, we propose InsertGNN, a simple yet effective model that
represents the problem as a graph and adopts a hierarchical graph neural
network (GNN) to learn the connection between sentences. We evaluate our method
in our newly collected TOEFL dataset and further verify its effectiveness on
the larger arXiv dataset using cross-domain learning. Extensive experiments
demonstrate that InsertGNN outperforms all baselines by a large margin with an
accuracy of 70%, rivaling the average human test scores.
|
[
{
"version": "v1",
"created": "Sun, 28 Mar 2021 06:50:31 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Jan 2023 05:24:34 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Wu",
"Fang",
""
],
[
"Li",
"Stan Z.",
""
]
] |
new_dataset
| 0.998833 |
2105.07172
|
Masoud Hayeri Khyavi
|
Masoud Hayeri Khyavi
|
Rescue Network: Using UAVs (drones) in Earthquake Crisis Management
| null | null | null | null |
cs.NI cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Earthquakes are natural disasters that can be neither controlled nor predicted
with certainty. Since preventing an earthquake is impossible, preventing its
damage is also difficult. Unfortunately, after each earthquake and its financial
and human losses, the initial panic of the population triggers a second wave of
accidents and damage. The rush of confused people trying to escape cities,
streets, and houses is a serious problem. Besides training in seismic areas,
which is very important, and observing security arrangements and safety
principles in construction, instructing the public is also essential. In
addition to searching for and rescuing people who are trapped under debris or
otherwise in danger, looting of the damaged area is another important issue
after each earthquake. Thus, a solution is proposed that uses modern technology
to reduce the threats of natural disasters, including earthquakes. Today, UAVs
are already being used in natural disasters and accidents. To this end, and
considering the ever-increasing developments in network and communication
technologies, including IoT and the cloud, an efficient design is presented that
increases the rescue rate of living creatures in natural disasters, can help
save human lives, and can mitigate subsequent consequences within seconds. In
this study, the focus is on the time of occurrence of the earthquake and the
period immediately after it.
|
[
{
"version": "v1",
"created": "Sat, 15 May 2021 08:19:41 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Jan 2023 11:15:56 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Khyavi",
"Masoud Hayeri",
""
]
] |
new_dataset
| 0.995522 |
2108.13167
|
Sushil Mahavir Varma
|
Sushil Mahavir Varma, Siva Theja Maguluri
|
Transportation Polytope and its Applications in Parallel Server Systems
|
56 pages, 10 Figures
| null | null | null |
cs.NI math.CO math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A parallel server system is a stochastic processing network with applications
in manufacturing, supply chain, ride-hailing, call centers, etc. Heterogeneous
customers arrive in the system, and only a subset of servers can serve any
customer type given by the flexibility graph. The goal of the system operator
is to minimize the delay that depends on the scheduling policy and the
flexibility graph. A long line of literature focuses on designing near-optimal
scheduling policies given a flexibility graph. On the contrary, we fix the
scheduling policy to be the so-called MaxWeight scheduling given its superior
delay performance and focus on designing near-optimal, sparse flexibility
graphs. Our contributions are threefold.
First, we analyze the expected delay in the heavy-traffic asymptotic regime
in terms of the properties of the flexibility graph and use this result to
translate the design question in terms of transportation polytope, the
deterministic equivalent of parallel server queues. Second, we design the
sparsest flexibility graph that achieves a given delay performance and show
the robustness of the design to demand uncertainty. Third, given that the budget
to add edges arrives sequentially in time, we present the optimal schedule for
adding them to the flexibility graph. These results are obtained by proving new
results for transportation polytopes and are of independent interest. In
particular, translating the difficulties to a simpler model, i.e.
transportation polytope, allows us to develop a unified framework to answer
several design questions.
|
[
{
"version": "v1",
"created": "Wed, 11 Aug 2021 16:16:01 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2023 22:07:07 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Varma",
"Sushil Mahavir",
""
],
[
"Maguluri",
"Siva Theja",
""
]
] |
new_dataset
| 0.998767 |
2109.00881
|
Bowei Chen
|
Jingmin Huang and Bowei Chen and Lan Luo and Shigang Yue and Iadh
Ounis
|
DVM-CAR: A large-scale automotive dataset for visual marketing research
and applications
|
Proceedings of IEEE International Conference on Big Data, pp.
4130-4137, 2022
| null | null |
978-1-6654-8045-1/22
|
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
There is a growing interest in product aesthetics analytics and design.
However, the lack of available large-scale data that covers various variables
and information is one of the biggest challenges faced by analysts and
researchers. In this paper, we present our multidisciplinary initiative of
developing a comprehensive automotive dataset from different online sources and
formats. Specifically, the created dataset contains 1.4 million images from 899
car models and their corresponding model specifications and sales information
over more than ten years in the UK market. Our work makes significant
contributions to: (i) research and applications in the automotive industry;
(ii) big data creation and sharing; (iii) database design; and (iv) data
fusion. Apart from our motivation, technical details and data structure, we
further present three simple examples to demonstrate how our data can be used
in business research and applications.
|
[
{
"version": "v1",
"created": "Tue, 10 Aug 2021 12:48:58 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2023 01:59:32 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Jan 2023 15:36:23 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Huang",
"Jingmin",
""
],
[
"Chen",
"Bowei",
""
],
[
"Luo",
"Lan",
""
],
[
"Yue",
"Shigang",
""
],
[
"Ounis",
"Iadh",
""
]
] |
new_dataset
| 0.999869 |
2109.15017
|
Matteo Pagin
|
Matteo Pagin, Tommaso Zugno, Marco Giordani, Louis-Adrien Dufrene,
Quentin Lampin, Michele Zorzi
|
5G NR-Light at Millimeter Waves: Design Guidelines for Mid-Market IoT
Use Cases
|
Changed title, revised article and submitted to a different venue
| null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
5th generation (5G) systems have been designed with three main objectives in
mind: increasing throughput, reducing latency, and enabling reliable
communications. To meet these (often conflicting) constraints, the 3GPP
released a set of specifications for 5G NR, one of the main innovations being
the support for communications in the millimeter wave (mmWave) bands. However,
how to implement lower complexity, energy efficient, mid-market Internet of
Things (IoT) applications is still an ongoing investigation, currently led by
the 3GPP, which is extending the NR standard with NR-Light specifications to
support devices with reduced capabilities (REDCAP). While REDCAP devices may
also operate at mmWaves to improve the network performance, hardware/software
simplifications are needed to support balanced and mixed requirements compared
to 5G NR systems. In this context, the contributions of this paper are
threefold. First, we present some NR-Light use cases for which the support of
the mmWave bands is desirable. Second, we describe how 5G NR can be simplified
to achieve NR-Light requirements and expectations. Finally, we evaluate via
simulation the performance of NR-Light devices operating at mmWaves in an
industrial IoT setup, in terms of cost and complexity, throughput, and latency.
|
[
{
"version": "v1",
"created": "Thu, 30 Sep 2021 11:14:33 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Dec 2022 07:02:47 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Jan 2023 09:02:18 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Pagin",
"Matteo",
""
],
[
"Zugno",
"Tommaso",
""
],
[
"Giordani",
"Marco",
""
],
[
"Dufrene",
"Louis-Adrien",
""
],
[
"Lampin",
"Quentin",
""
],
[
"Zorzi",
"Michele",
""
]
] |
new_dataset
| 0.955026 |
2201.07754
|
Navid Rekabsaz
|
Klara Krieg and Emilia Parada-Cabaleiro and Gertraud Medicus and Oleg
Lesota and Markus Schedl and Navid Rekabsaz
|
Grep-BiasIR: A Dataset for Investigating Gender Representation-Bias in
Information Retrieval Results
|
CHIIR 2023
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The content provided by information retrieval (IR) systems can reflect
existing societal biases and stereotypes. Such biases in retrieval results can
lead to further establishing and strengthening stereotypes in society and also
in the systems. To facilitate the studies of gender bias in the retrieval
results of IR systems, we introduce Gender Representation-Bias for Information
Retrieval (Grep-BiasIR), a novel thoroughly-audited dataset consisting of 118
bias-sensitive neutral search queries. The set of queries covers a wide range
of gender-related topics, for which a biased representation of genders in the
search result can be considered as socially problematic. Each query is
accompanied by one relevant and one non-relevant document, where the document
is also provided in three variations of female, male, and neutral. The dataset
is available at https://github.com/KlaraKrieg/GrepBiasIR.
|
[
{
"version": "v1",
"created": "Wed, 19 Jan 2022 17:50:18 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 14:14:59 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Jan 2023 15:33:05 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Krieg",
"Klara",
""
],
[
"Parada-Cabaleiro",
"Emilia",
""
],
[
"Medicus",
"Gertraud",
""
],
[
"Lesota",
"Oleg",
""
],
[
"Schedl",
"Markus",
""
],
[
"Rekabsaz",
"Navid",
""
]
] |
new_dataset
| 0.985826 |
2202.00248
|
Tania Sidana
|
Tania Sidana and Navin Kashyap
|
Entanglement-Assisted Quantum Error-Correcting Codes over Local
Frobenius Rings
|
Extended version of the ISIT 2022 paper, DOI:
10.1109/ISIT50566.2022.9834381. Additions and corrections made in version v4.
In particular, Section 6 is added and Section 3 is rewritten
| null | null | null |
cs.IT math.IT quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we provide a framework for constructing entanglement-assisted
quantum error-correcting codes (EAQECCs) from classical additive codes over a
finite commutative local Frobenius ring $\mathcal{R}$. At the heart of the
framework, and this is one of the main technical contributions of our paper, is
a procedure to construct, for an additive code $\mathcal{C}$ over
$\mathcal{R}$, a generating set for $\mathcal{C}$ that is in standard form,
meaning that it consists purely of isotropic generators and hyperbolic pairs.
Moreover, when $\mathcal{R}$ is a Galois ring, we give an exact expression for
the minimum number of pairs of maximally entangled qudits required to construct
an EAQECC from an additive code over $\mathcal{R}$, which significantly extends
known results for EAQECCs over finite fields. We also demonstrate how adding
extra coordinates to an additive code can give us a certain degree of
flexibility in determining the parameters of the EAQECCs that result from our
construction.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 06:58:56 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2022 09:47:42 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Apr 2022 06:20:52 GMT"
},
{
"version": "v4",
"created": "Sun, 8 Jan 2023 09:25:07 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Sidana",
"Tania",
""
],
[
"Kashyap",
"Navin",
""
]
] |
new_dataset
| 0.999695 |
2202.02673
|
Philipp del Hougne
|
Rashid Faqiri, Chlo\'e Saigre-Tardif, George C. Alexandropoulos, Nir
Shlezinger, Mohammadreza F. Imani, Philipp del Hougne
|
PhysFad: Physics-Based End-to-End Channel Modeling of RIS-Parametrized
Environments with Adjustable Fading
|
30 pages, 7 figures, submitted to an IEEE Journal
|
IEEE Trans. Wirel. Commun. 22, 580-595 (2023)
|
10.1109/TWC.2022.3196834
| null |
cs.IT eess.SP math.IT physics.app-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Programmable radio environments parametrized by reconfigurable intelligent
surfaces (RISs) are emerging as a new wireless communications paradigm, but
currently used channel models for the design and analysis of signal-processing
algorithms cannot include fading in a manner that is faithful to the underlying
wave physics. To overcome this roadblock, we introduce a physics-based
end-to-end model of RIS-parametrized wireless channels with adjustable fading
(coined PhysFad) which is based on a first-principles coupled-dipole formalism.
PhysFad naturally incorporates the notions of space and causality, dispersion
(i.e., frequency selectivity) and the intertwinement of each RIS element's
phase and amplitude response, as well as any arising mutual coupling effects
including long-range mesoscopic correlations. PhysFad offers the to-date
missing tuning knob for adjustable fading. We thoroughly characterize PhysFad
and demonstrate its capabilities for a prototypical problem of RIS-enabled
over-the-air channel equalization in rich-scattering wireless communications.
We also share a user-friendly version of our code to help the community
transition towards physics-based models with adjustable fading.
|
[
{
"version": "v1",
"created": "Sun, 6 Feb 2022 01:31:33 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Faqiri",
"Rashid",
""
],
[
"Saigre-Tardif",
"Chloé",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Shlezinger",
"Nir",
""
],
[
"Imani",
"Mohammadreza F.",
""
],
[
"del Hougne",
"Philipp",
""
]
] |
new_dataset
| 0.994253 |
2204.13640
|
Chandranshu Gupta
|
Chandranshu Gupta and Gaurav Varshney
|
An Improved Authentication Scheme for BLE Devices with no I/O
Capabilities
| null |
Computer Communications, Volume 200, 2023, Pages 42-53
|
10.1016/j.comcom.2023.01.001
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bluetooth Low Energy (BLE) devices have become very popular because of their
low energy consumption and hence prolonged battery life. They are being used
in smart wearable devices, smart home automation systems, beacons, and many more
areas. BLE uses pairing mechanisms to achieve a level of peer entity
authentication as well as encryption. Although a set of pairing mechanisms is
available, BLE devices having no keyboard or display mechanism (and hence using
the Just Works pairing) are still vulnerable. In this paper,
we propose and implement a light-weight digital certificate based
authentication mechanism for the BLE devices making use of Just Works model.
The proposed model is an add-on to the already existing pairing mechanism and
therefore can be easily incorporated in the existing BLE stack. To counter the
existing Man-in-The-Middle attack scenario in Just Works pairing (device
spoofing), our proposed model allows the client and peripheral to make use of
the popular Public Key Infrastructure (PKI) to establish peer entity
authentication and a secure cryptographic tunnel for communication. We have
also developed a lightweight BLE profiled digital certificate containing the
bare minimum fields required for resource constrained devices, which
significantly reduces the memory (about 90% reduction) and energy consumption.
We have experimentally evaluated the energy consumption of the device using the
proposed pairing mechanism to demonstrate that the model can be easily deployed
with less changes to the power requirements of the chips. The model has been
formally verified using automatic verification tool for protocol testing.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2022 16:58:51 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Gupta",
"Chandranshu",
""
],
[
"Varshney",
"Gaurav",
""
]
] |
new_dataset
| 0.995539 |
2205.08314
|
Yepeng Ding
|
Yepeng Ding and Hiroyuki Sato
|
Self-Sovereign Identity as a Service: Architecture in Practice
| null | null |
10.1109/COMPSAC54236.2022.00244
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-sovereign identity (SSI) has gained a large amount of interest. It
enables physical entities to retain ownership and control of their digital
identities, which naturally forms a conceptual decentralized architecture. With
the support of the distributed ledger technology (DLT), it is possible to
implement this conceptual decentralized architecture in practice and further
bring technical advantages such as privacy protection, security enhancement,
high availability. However, developing such a relatively new identity model
entails high costs, risks, and uncertainty. To facilitate the use of the DLT-based
SSI in practice, we formulate Self-Sovereign Identity as a Service (SSIaaS), a
concept that enables a system, especially a system cluster, to readily adopt
SSI as its identity model for identification, authentication, and
authorization. We propose a practical architecture by elaborating the service
concept, SSI, and DLT to implement SSIaaS platforms and SSI services. Besides,
we present an architecture for constructing and customizing SSI services with a
set of architectural patterns and provide corresponding evaluations.
Furthermore, we demonstrate the feasibility of our proposed architecture in
practice with Selfid, an SSIaaS platform based on our proposed architecture.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 13:13:06 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jun 2022 08:37:33 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Ding",
"Yepeng",
""
],
[
"Sato",
"Hiroyuki",
""
]
] |
new_dataset
| 0.985905 |
2205.13277
|
Duygu Sesver
|
Duygu Sesver, Alp Eren Gen\c{c}o\u{g}lu, \c{C}a\u{g}r{\i} Emre
Y{\i}ld{\i}z, Zehra G\"unindi, Faeze Habibi, Ziya Ata Yaz{\i}c{\i}, Haz{\i}m
Kemal Ekenel
|
VIDI: A Video Dataset of Incidents
| null |
2022 IEEE 14th Image, Video, and Multidimensional Signal
Processing Workshop (IVMSP)
|
10.1109/IVMSP54334.2022.9816319
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Automatic detection of natural disasters and incidents has become more
important as a tool for fast response. There have been many studies to detect
incidents using still images and text. However, the number of approaches that
exploit temporal information is rather limited. One of the main reasons for
this is that a diverse video dataset with various incident types does not
exist. To address this need, in this paper we present a video dataset, Video
Dataset of Incidents, VIDI, that contains 4,534 video clips corresponding to 43
incident categories. Each incident class has around 100 videos with a duration
of ten seconds on average. To increase diversity, the videos have been searched
in several languages. To assess the performance of the recent state-of-the-art
approaches, Vision Transformer and TimeSformer, as well as to explore the
contribution of video-based information for incident classification, we
performed benchmark experiments on the VIDI and Incidents Dataset. We have
shown that the recent methods improve the incident classification accuracy. We
have found that employing video data is very beneficial for the task. By using
the video data, the top-1 accuracy is increased to 76.56% from 67.37%, which
was obtained using a single frame. VIDI will be made publicly available.
Additional materials can be found at the following link:
https://github.com/vididataset/VIDI.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 11:30:59 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Sesver",
"Duygu",
""
],
[
"Gençoğlu",
"Alp Eren",
""
],
[
"Yıldız",
"Çağrı Emre",
""
],
[
"Günindi",
"Zehra",
""
],
[
"Habibi",
"Faeze",
""
],
[
"Yazıcı",
"Ziya Ata",
""
],
[
"Ekenel",
"Hazım Kemal",
""
]
] |
new_dataset
| 0.999863 |
2205.14276
|
Jan Thorben Frank
|
J. Thorben Frank, Oliver T. Unke, Klaus-Robert M\"uller
|
So3krates: Equivariant attention for interactions on arbitrary
length-scales in molecular systems
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The application of machine learning methods in quantum chemistry has enabled
the study of numerous chemical phenomena, which are computationally intractable
with traditional ab-initio methods. However, some quantum mechanical properties
of molecules and materials depend on non-local electronic effects, which are
often neglected due to the difficulty of modeling them efficiently. This work
proposes a modified attention mechanism adapted to the underlying physics,
which makes it possible to recover the relevant non-local effects. Namely, we
introduce
spherical harmonic coordinates (SPHCs) to reflect higher-order geometric
information for each atom in a molecule, enabling a non-local formulation of
attention in the SPHC space. Our proposed model So3krates - a self-attention
based message passing neural network - uncouples geometric information from
atomic features, making them independently amenable to attention mechanisms.
In this way, we construct spherical filters, which extend the concept of
continuous filters in Euclidean space to SPHC space and serve as the foundation
for a
spherical self-attention mechanism. We show that in contrast to other published
methods, So3krates is able to describe non-local quantum mechanical effects
over arbitrary length scales. Further, we find evidence that the inclusion of
higher-order geometric correlations increases data efficiency and improves
generalization. So3krates matches or exceeds state-of-the-art performance on
popular benchmarks, notably, requiring a significantly lower number of
parameters (0.25 - 0.4x) while at the same time giving a substantial speedup (6
- 14x for training and 2 - 11x for inference) compared to other models.
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 00:01:30 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 17:50:01 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Jan 2023 13:38:04 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Frank",
"J. Thorben",
""
],
[
"Unke",
"Oliver T.",
""
],
[
"Müller",
"Klaus-Robert",
""
]
] |
new_dataset
| 0.997787 |
2207.10974
|
Anastasija Nikiforova
|
Anastasija Nikiforova
|
Open data hackathon as a tool for increased engagement of Generation Z:
to hack or not to hack?
| null |
Springer, Cham, 2023
|
10.1007/978-3-031-22950-3_13
| null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
A hackathon is known as a form of civic innovation in which participants
representing citizens can point out existing problems or social needs and
propose a solution. Given the high social, technical, and economic potential of
open government data, the concept of open data hackathons is becoming popular
around the world. This concept has become popular in Latvia with the annual
hackathons organized for a specific cluster of citizens called Generation Z.
Contrary to the general opinion, the organizer suggests that the main goal of
open data hackathons, raising awareness of OGD, has been achieved, and there has
been a debate about whether to continue them. This study presents the latest
findings on the role of open data hackathons and the benefits they can bring to
society, participants, and government alike.
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 09:42:13 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jan 2023 12:47:40 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Nikiforova",
"Anastasija",
""
]
] |
new_dataset
| 0.988028 |
2208.02764
|
Yiyou Sun
|
Yiyou Sun and Yixuan Li
|
OpenCon: Open-world Contrastive Learning
|
Accepted at TMLR
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning models deployed in the wild naturally encounter unlabeled
samples from both known and novel classes. Challenges arise in learning from
both the labeled and unlabeled data, in an open-world semi-supervised manner.
In this paper, we introduce a new learning framework, open-world contrastive
learning (OpenCon). OpenCon tackles the challenges of learning compact
representations for both known and novel classes and facilitates novelty
discovery along the way. We demonstrate the effectiveness of OpenCon on
challenging benchmark datasets and establish competitive performance. On the
ImageNet dataset, OpenCon significantly outperforms the current best method by
11.9% and 7.4% on novel and overall classification accuracy, respectively.
Theoretically, OpenCon can be rigorously interpreted from an EM algorithm
perspective--minimizing our contrastive loss partially maximizes the likelihood
by clustering similar samples in the embedding space. The code is available at
https://github.com/deeplearning-wisc/opencon.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 16:48:02 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Jan 2023 18:27:39 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Sun",
"Yiyou",
""
],
[
"Li",
"Yixuan",
""
]
] |
new_dataset
| 0.986363 |
2209.03830
|
Andrea Galimberti
|
Gabriele Montanaro, Andrea Galimberti, Ernesto Colizzi, Davide Zoni
|
Hardware-Software Co-Design of BIKE with HLS-Generated Accelerators
| null |
2022 29th IEEE International Conference on Electronics, Circuits
and Systems (ICECS), 2022, pp. 1-4
|
10.1109/ICECS202256217.2022.9970992
| null |
cs.AR cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In order to mitigate the security threat of quantum computers, NIST is
undertaking a process to standardize post-quantum cryptosystems, aiming to
assess their security and speed up their adoption in production scenarios.
Several hardware and software implementations have been proposed for each
candidate, while only a few target heterogeneous platforms featuring CPUs and
FPGAs. This work presents a HW/SW co-design of BIKE for embedded platforms
featuring both CPUs and small FPGAs and employs high-level synthesis (HLS) to
timely deliver the hardware accelerators. In contrast to state-of-the-art
solutions targeting performance-optimized HLS accelerators, the proposed
solution targets the small FPGAs implemented in the heterogeneous platforms for
embedded systems. Compared to the software-only execution of BIKE, the
experimental results collected on the systems-on-chip of the entire Xilinx
Zynq-7000 family highlight a performance speedup ranging from 1.37x, on Z-7010,
to 2.78x, on Z-7020.
|
[
{
"version": "v1",
"created": "Thu, 8 Sep 2022 14:08:56 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Jan 2023 20:16:33 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Montanaro",
"Gabriele",
""
],
[
"Galimberti",
"Andrea",
""
],
[
"Colizzi",
"Ernesto",
""
],
[
"Zoni",
"Davide",
""
]
] |
new_dataset
| 0.997219 |
2209.05247
|
Jannik Z\"urn
|
Jannik Z\"urn, Sebastian Weber, Wolfram Burgard
|
TrackletMapper: Ground Surface Segmentation and Mapping from Traffic
Participant Trajectories
|
19 pages, 14 figures, CoRL 2022 v4 (updated acknowledgements)
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Robustly classifying ground infrastructure such as roads and street crossings
is an essential task for mobile robots operating alongside pedestrians. While
many semantic segmentation datasets are available for autonomous vehicles,
models trained on such datasets exhibit a large domain gap when deployed on
robots operating in pedestrian spaces. Manually annotating images recorded from
pedestrian viewpoints is both expensive and time-consuming. To overcome this
challenge, we propose TrackletMapper, a framework for annotating ground surface
types such as sidewalks, roads, and street crossings from object tracklets
without requiring human-annotated data. To this end, we project the robot
ego-trajectory and the paths of other traffic participants into the ego-view
camera images, creating sparse semantic annotations for multiple types of
ground surfaces from which a ground segmentation model can be trained. We
further show that the model can be self-distilled for additional performance
benefits by aggregating a ground surface map and projecting it into the camera
images, creating a denser set of training annotations compared to the sparse
tracklet annotations. We qualitatively and quantitatively attest our findings
on a novel large-scale dataset for mobile robots operating in pedestrian areas.
Code and dataset will be made available at
http://trackletmapper.cs.uni-freiburg.de.
|
[
{
"version": "v1",
"created": "Mon, 12 Sep 2022 13:43:10 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 07:54:09 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Oct 2022 07:37:14 GMT"
},
{
"version": "v4",
"created": "Sun, 8 Jan 2023 16:18:11 GMT"
}
] | 2023-01-10T00:00:00 |
[
[
"Zürn",
"Jannik",
""
],
[
"Weber",
"Sebastian",
""
],
[
"Burgard",
"Wolfram",
""
]
] |
new_dataset
| 0.998721 |