id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2205.13102
|
Patrick Kin Man Tung
|
Patrick Kin Man Tung, Amalia Yunita Halim, Huixin Wang, Anne Rich,
Christopher Marjo, Klaus Regenauer-Lieb
|
Deep-XFCT: Deep learning 3D-mineral liberation analysis with micro X-ray
fluorescence and computed tomography
|
24 pages, 10 figures
|
Energies 2022, 15(15), 5326
|
10.3390/en15155326
| null |
cs.LG physics.data-an
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid development of X-ray micro-computed tomography (micro-CT) opens new
opportunities for 3D analysis of particle and grain-size characterisation,
determination of particle densities and shape factors, estimation of mineral
associations and liberation and locking. Current practices in mineral
liberation analysis are based on 2D representations leading to systematic
errors in the extrapolation to volumetric properties. New quantitative methods
based on tomographic data are therefore urgently required for characterisation
of mineral deposits, mineral processing, characterisation of tailings, rock
typing, stratigraphic refinement, and reservoir characterisation, with
applications in the resource industry and the environmental and material
sciences. To date, no
simple non-destructive method exists for 3D mineral liberation analysis. We
present a new development based on combining micro-CT with micro-X-ray
fluorescence (micro-XRF) using deep learning. We demonstrate successful
semi-automated multi-modal analysis of a crystalline magmatic rock where the
new technique overcomes the difficult task of differentiating feldspar from
quartz in micro-CT data sets. The approach is universal and can be extended to
any multi-modal and multi-instrument analysis for further refinement. We
conclude that the combination of micro-CT and micro-XRF already provides a new
opportunity for robust 3D mineral liberation analysis in both field and
laboratory applications.
|
[
{
"version": "v1",
"created": "Thu, 26 May 2022 01:35:58 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Tung",
"Patrick Kin Man",
""
],
[
"Halim",
"Amalia Yunita",
""
],
[
"Wang",
"Huixin",
""
],
[
"Rich",
"Anne",
""
],
[
"Marjo",
"Christopher",
""
],
[
"Regenauer-Lieb",
"Klaus",
""
]
] |
new_dataset
| 0.974738 |
2208.05664
|
Cunsheng Ding
|
Zhonghua Sun, Cunsheng Ding, Xiaoqiang Wang
|
Two Classes of Constacyclic Codes with Variable Parameters
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constacyclic codes over finite fields are a family of linear codes and
contain cyclic codes as a subclass. Constacyclic codes are related to many
areas of mathematics and outperform cyclic codes in several aspects. Hence,
constacyclic codes are of theoretical importance. On the other hand,
constacyclic codes are important in practice, as they have rich algebraic
structures and may have efficient decoding algorithms. In this paper, two
classes of constacyclic codes are constructed using a general construction of
constacyclic codes with cyclic codes. The first class of constacyclic codes is
motivated by the punctured Dilix cyclic codes and the second class is motivated
by the punctured generalised Reed-Muller codes. The two classes of constacyclic
codes contain optimal linear codes. The parameters of the two classes of
constacyclic codes are analysed and some open problems are presented in this
paper.
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 06:45:13 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2022 00:59:26 GMT"
},
{
"version": "v3",
"created": "Fri, 26 Aug 2022 07:55:53 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Sun",
"Zhonghua",
""
],
[
"Ding",
"Cunsheng",
""
],
[
"Wang",
"Xiaoqiang",
""
]
] |
new_dataset
| 0.997944 |
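For reference, the standard definition behind the abstract above can be stated compactly; this is the general textbook notion of a constacyclic code, not the paper's specific constructions:

```latex
% A \lambda-constacyclic code C of length n over F_q is a linear code closed
% under the \lambda-twisted cyclic shift (\lambda = 1 recovers cyclic codes):
\[
  (c_0, c_1, \ldots, c_{n-1}) \in \mathcal{C}
  \;\Longrightarrow\;
  (\lambda c_{n-1}, c_0, \ldots, c_{n-2}) \in \mathcal{C}.
\]
% Equivalently, C corresponds to an ideal of the quotient ring
% \mathbb{F}_q[x] / (x^n - \lambda).
```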
2208.12195
|
Meir Goldenberg
|
Meir Goldenberg
|
ExpoCloud: a Framework for Time and Budget-Effective Parameter Space
Explorations Using a Cloud Compute Engine
|
Added acknowledgement of funding
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large parameter space explorations are among the most time consuming yet
critically important tasks in many fields of modern research. ExpoCloud enables
the researcher to harness cloud compute resources to achieve time and
budget-effective large-scale concurrent parameter space explorations.
ExpoCloud enables the maximal possible level of concurrency by creating compute
instances on the fly, saves money by terminating unneeded instances, and saves
both time and money by avoiding the exploration of parameter settings that are
as hard as or harder than settings whose exploration has already timed out.
Effective fault tolerance mechanisms make ExpoCloud
suitable for large experiments.
ExpoCloud provides an interface that allows its use under various cloud
environments. As a proof of concept, we implemented a class supporting the
Google Compute Engine (GCE). We also implemented a class that simulates a cloud
environment on the local machine, thereby facilitating further development of
ExpoCloud.
The article describes ExpoCloud's features and provides a usage example. The
software is well documented and is available under the MIT license.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 16:32:44 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Aug 2022 08:56:57 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Goldenberg",
"Meir",
""
]
] |
new_dataset
| 0.99767 |
2208.12250
|
Dylan Turpin
|
Dylan Turpin, Liquan Wang, Eric Heiden, Yun-Chun Chen, Miles Macklin,
Stavros Tsogkas, Sven Dickinson, Animesh Garg
|
Grasp'D: Differentiable Contact-rich Grasp Synthesis for Multi-fingered
Hands
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The study of hand-object interaction requires generating viable grasp poses
for high-dimensional multi-finger models, often relying on analytic grasp
synthesis which tends to produce brittle and unnatural results. This paper
presents Grasp'D, an approach for grasp synthesis with a differentiable contact
simulation from both known models as well as visual inputs. We use
gradient-based methods as an alternative to sampling-based grasp synthesis,
which fails without simplifying assumptions, such as pre-specified contact
locations and eigengrasps. Such assumptions limit grasp discovery and, in
particular, exclude high-contact power grasps. In contrast, our
simulation-based approach allows for stable, efficient, physically realistic,
high-contact grasp synthesis, even for gripper morphologies with high degrees
of freedom. We identify and address challenges in making grasp simulation
amenable to gradient-based optimization, such as non-smooth object surface
geometry, contact sparsity, and a rugged optimization landscape. Grasp'D
compares favorably to analytic grasp synthesis on human and robotic hand
models, and resultant grasps achieve over 4x denser contact, leading to
significantly higher grasp stability. Video and code available at
https://graspd-eccv22.github.io/.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 17:50:16 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Aug 2022 03:53:51 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Turpin",
"Dylan",
""
],
[
"Wang",
"Liquan",
""
],
[
"Heiden",
"Eric",
""
],
[
"Chen",
"Yun-Chun",
""
],
[
"Macklin",
"Miles",
""
],
[
"Tsogkas",
"Stavros",
""
],
[
"Dickinson",
"Sven",
""
],
[
"Garg",
"Animesh",
""
]
] |
new_dataset
| 0.951487 |
2208.12385
|
Wanming Hao
|
Wanming Hao, Xiaobei You, Fuhui Zhou, Zheng Chu, Gangcan Sun, Pei Xiao
|
The Far-/Near-Field Beam Squint and Solutions for THz Intelligent
Reflecting Surface Communications
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Terahertz (THz) and intelligent reflecting surface (IRS) have been regarded
as two promising technologies to improve the capacity and coverage for future
6G networks. An IRS is usually equipped with large-scale elements when
implemented at THz frequencies. In this case, the near-field model and beam
squint should be considered. Therefore, in this paper, we investigate the
far-field and near-field beam squint problems in THz IRS communications for the
first time. The far-field and near-field channel models are constructed based
on the different electromagnetic radiation characteristics. Next, we first
analyze the far-field beam squint and its effect on the beam gain based on the
cascaded base station (BS)-IRS-user channel model, and then the near-field case
is studied. To overcome the far-field and near-field beam squint effects, we
propose to apply delay adjustable metasurface (DAM) to IRS, and develop a
scheme of optimizing the reflecting phase shifts and time delays of IRS
elements, which effectively eliminates the beam gain loss caused by beam
squint. Finally, simulations are conducted to demonstrate the effectiveness of
our proposed schemes in combating near- and far-field beam squint.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 00:42:08 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Hao",
"Wanming",
""
],
[
"You",
"Xiaobei",
""
],
[
"Zhou",
"Fuhui",
""
],
[
"Chu",
"Zheng",
""
],
[
"Sun",
"Gangcan",
""
],
[
"Xiao",
"Pei",
""
]
] |
new_dataset
| 0.989009 |
2208.12449
|
Mahathir Almashor
|
Mahathir Almashor, Ejaz Ahmed, Benjamin Pick, Sharif Abuadbba, Jason
Xue, Raj Gaire, Shuo Wang, Seyit Camtepe, Surya Nepal
|
Unraveling Threat Intelligence Through the Lens of Malicious URL
Campaigns
|
arXiv admin note: text overlap with arXiv:2108.12726
| null | null | null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The daily deluge of alerts is a sombre reality for Security Operations Centre
(SOC) personnel worldwide. They are at the forefront of an organisation's
cybersecurity infrastructure, and face the unenviable task of prioritising
threats amongst a flood of abstruse alerts triggered by their Security
Information and Event Management (SIEM) systems. URLs found within malicious
communications form the bulk of such alerts, and pinpointing pertinent patterns
within them allows teams to rapidly deescalate potential or extant threats.
This need for vigilance has been traditionally filled with machine-learning
based log analysis tools and anomaly detection concepts. To sidestep machine
learning approaches, we instead propose to analyse suspicious URLs from SIEM
alerts via the perspective of malicious URL campaigns. By first grouping URLs
within 311M records gathered from VirusTotal into 2.6M suspicious clusters, we
thereafter discovered 77.8K malicious campaigns. Corroborating our suspicions,
we found 9.9M unique URLs attributable to 18.3K multi-URL campaigns, and that
worryingly, only 2.97% of campaigns were found by security vendors. We also
confer insights on evasive tactics such as ever lengthier URLs and more diverse
domain names, with selected case studies exposing other adversarial techniques.
By characterising the concerted campaigns driving these URL alerts, we hope to
inform SOC teams of current threat trends, and thus arm them with better threat
intelligence.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 06:10:13 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Almashor",
"Mahathir",
""
],
[
"Ahmed",
"Ejaz",
""
],
[
"Pick",
"Benjamin",
""
],
[
"Abuadbba",
"Sharif",
""
],
[
"Xue",
"Jason",
""
],
[
"Gaire",
"Raj",
""
],
[
"Wang",
"Shuo",
""
],
[
"Camtepe",
"Seyit",
""
],
[
"Nepal",
"Surya",
""
]
] |
new_dataset
| 0.968111 |
2208.12454
|
Anastasiia Tkalich
|
Anastasiia Tkalich, Darja Smite, Nina Haugland Andersen and Nils Brede
Moe
|
What happens to psychological safety when going remote?
| null | null |
10.13140/RG.2.2.29567.89767
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Psychological safety is a precondition for learning and success in software
teams. Companies such as SavingsBank, which is discussed in this article, have
developed good practices to facilitate psychological safety, most of which
depend on face-to-face interaction. However, what happens to psychological
safety when working remotely? In this article, we explore how Norwegian
software developers experienced pandemic and post-pandemic remote work and
describe simple behaviors and attitudes related to psychological safety. We pay
special attention to the hybrid work mode, in which team members alternate days
in the office with days working from home. Our key takeaway is that spontaneous
interaction in the office facilitates psychological safety, while remote work
increases the thresholds for both spontaneous interaction and psychological
safety. We recommend that software teams synchronize their office presence to
increase chances for spontaneous interaction in the office while benefitting
from focused work while at home.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 06:31:57 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Tkalich",
"Anastasiia",
""
],
[
"Smite",
"Darja",
""
],
[
"Andersen",
"Nina Haugland",
""
],
[
"Moe",
"Nils Brede",
""
]
] |
new_dataset
| 0.967699 |
2208.12458
|
Akira Imakura
|
Akira Imakura, Masateru Kihira, Yukihiko Okada, Tetsuya Sakurai
|
Another Use of SMOTE for Interpretable Data Collaboration Analysis
|
19 pages, 3 figures, 7 tables
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, data collaboration (DC) analysis has been developed for
privacy-preserving integrated analysis across multiple institutions. DC
analysis centralizes individually constructed dimensionality-reduced
intermediate representations and realizes integrated analysis via collaboration
representations without sharing the original data. To construct the
collaboration representations, each institution generates and shares a
shareable anchor dataset and centralizes its intermediate representation.
Although a random anchor dataset functions well for DC analysis in general,
using an anchor dataset whose distribution is close to that of the raw dataset
is expected to improve the recognition performance, particularly for the
interpretable DC analysis. Based on an extension of the synthetic minority
over-sampling technique (SMOTE), this study proposes an anchor data
construction technique to improve the recognition performance without
increasing the risk of data leakage. Numerical results demonstrate the
efficiency of the proposed SMOTE-based method over the existing anchor data
constructions for artificial and real-world datasets. Specifically, the
proposed method achieves 9- and 38-percentage-point performance improvements in
accuracy and essential feature selection, respectively, over existing methods
for an income dataset. The proposed method provides
another use of SMOTE not for imbalanced data classifications but for a key
technology of privacy-preserving integrated analysis.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 06:39:13 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Imakura",
"Akira",
""
],
[
"Kihira",
"Masateru",
""
],
[
"Okada",
"Yukihiko",
""
],
[
"Sakurai",
"Tetsuya",
""
]
] |
new_dataset
| 0.978923 |
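The paper's anchor construction and its leakage safeguards are not reproduced here; as a minimal sketch of the core SMOTE operation it builds on (interpolating a sampled point toward one of its k nearest neighbours), assuming numpy and scikit-learn:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_like_anchor(X, n_anchor, k=5, seed=0):
    """Generate synthetic anchor points by interpolating each sampled point
    toward one of its k nearest neighbours (the core SMOTE operation)."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                        # idx[:, 0] is the point itself
    base = rng.integers(0, len(X), n_anchor)         # which points to start from
    neigh = idx[base, rng.integers(1, k + 1, n_anchor)]
    gap = rng.random((n_anchor, 1))                  # interpolation coefficients
    return X[base] + gap * (X[neigh] - X[base])
```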
2208.12484
|
Sangjun Han
|
Sangjun Han, Taeil Hur, Youngmi Hur
|
Laplacian Pyramid-like Autoencoder
|
20 pages, 3 figures, 5 tables, Science and Information Conference
2022, Intelligent Computing
|
Intelligent Computing, SAI 2022. Lecture Notes in Networks and
Systems, vol 507, pp 59-78
|
10.1007/978-3-031-10464-0_5
| null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we develop the Laplacian pyramid-like autoencoder (LPAE) by
adding the Laplacian pyramid (LP) concept widely used to analyze images in
Signal Processing. LPAE decomposes an image into the approximation image and
the detail image in the encoder part and then tries to reconstruct the original
image in the decoder part using the two components. We use LPAE for experiments
on classifications and super-resolution areas. Using the detail image and the
smaller-sized approximation image as inputs of a classification network, our
LPAE makes the model lighter. Moreover, we show that the performance of the
connected classification networks has remained substantially high. In a
super-resolution area, we show that the decoder part produces a high-quality
reconstruction image when set up to resemble the structure of the LP. Consequently,
LPAE improves the original results by combining the decoder part of the
autoencoder and the super-resolution network.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 07:45:06 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Han",
"Sangjun",
""
],
[
"Hur",
"Taeil",
""
],
[
"Hur",
"Youngmi",
""
]
] |
new_dataset
| 0.967931 |
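For context, the classical one-level Laplacian pyramid split that LPAE mirrors in its encoder can be written in a few lines; this is an OpenCV-based sketch of the LP concept, not the authors' network:

```python
import cv2
import numpy as np

def lp_decompose(img):
    """One-level Laplacian pyramid: approximation + detail components."""
    approx = cv2.pyrDown(img)                                   # blur + downsample
    up = cv2.pyrUp(approx, dstsize=(img.shape[1], img.shape[0]))
    detail = img.astype(np.float32) - up.astype(np.float32)
    return approx, detail

def lp_reconstruct(approx, detail):
    """Exact inverse of lp_decompose."""
    up = cv2.pyrUp(approx, dstsize=(detail.shape[1], detail.shape[0]))
    return up.astype(np.float32) + detail
```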
2208.12500
|
Anupam Kumar Gupta
|
Anupam K. Gupta, Alex Church, Nathan F. Lepora
|
Semi-Supervised Disentanglement of Tactile Contact~Geometry from
Sliding-Induced Shear
|
7 pages, 3 figures, accepted for publication in IROS 2022
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The sense of touch is fundamental to human dexterity. When mimicked in
robotic touch, particularly by use of soft optical tactile sensors, it suffers
from distortion due to motion-dependent shear. This complicates tactile tasks
like shape reconstruction and exploration that require information about
contact geometry. In this work, we pursue a semi-supervised approach to remove
shear while preserving contact-only information. We validate our approach by
showing a match between the model-generated unsheared images with their
counterparts from vertically tapping onto the object. The model-generated
unsheared images give a faithful reconstruction of the contact geometry otherwise
masked by shear, along with a robust estimation of object pose that is then used
for sliding exploration and full reconstruction of several planar shapes. We show
that our semi-supervised approach achieves comparable performance to its fully
supervised counterpart across all validation tasks with an order of magnitude
less supervision. The semi-supervised method is thus more computational and
labeled sample-efficient. We expect it will have broad applicability to wide
range of complex tactile exploration and manipulation tasks performed via a
shear-sensitive sense of touch.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 08:30:19 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Gupta",
"Anupam K.",
""
],
[
"Church",
"Alex",
""
],
[
"Lepora",
"Nathan F.",
""
]
] |
new_dataset
| 0.997551 |
2208.12542
|
Yaping Zhao
|
Yaping Zhao, Shuhui Shi, Ramgopal Ravi, Zhongrui Wang, Edmund Y. Lam,
Jichang Zhao
|
H4M: Heterogeneous, Multi-source, Multi-modal, Multi-view and
Multi-distributional Dataset for Socioeconomic Analytics in the Case of
Beijing
|
Accepted by IEEE DSAA 2022. 10 pages, 10 figures
| null | null | null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The study of socioeconomic status has been reformed by the availability of
digital records containing data on real estate, points of interest, traffic and
social media trends such as micro-blogging. In this paper, we describe a
heterogeneous, multi-source, multi-modal, multi-view and multi-distributional
dataset named "H4M". The mixed dataset contains data on real estate
transactions, points of interest, traffic patterns and micro-blogging trends
from Beijing, China. The unique composition of H4M makes it an ideal test bed
for methodologies and approaches aimed at studying and solving problems related
to real estate, traffic, urban mobility planning, social sentiment analysis
etc. The dataset is available at: https://indigopurple.github.io/H4M/index.html
|
[
{
"version": "v1",
"created": "Thu, 11 Aug 2022 13:57:57 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Zhao",
"Yaping",
""
],
[
"Shi",
"Shuhui",
""
],
[
"Ravi",
"Ramgopal",
""
],
[
"Wang",
"Zhongrui",
""
],
[
"Lam",
"Edmund Y.",
""
],
[
"Zhao",
"Jichang",
""
]
] |
new_dataset
| 0.998627 |
2208.12617
|
Antong Zhang
|
Antong Zhang, Jiani Yang, Yangcheng Luo, Siteng Fan
|
2060: Civilization, Energy, and Progression of Mankind on the Kardashev
Scale
|
4 Figures
| null | null | null |
cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Energy has been propelling the development of human civilization for
millennia, and technologies acquiring energy beyond human and animal power have
been continuously advanced and transformed. In 1964, the Kardashev Scale was
proposed to quantify the relationship between energy consumption and the
development of civilizations. Human civilization presently stands at Type
0.7276 on this scale. Projecting future energy consumption, estimating the
change in its constituent structure, and evaluating the influence of possible
technological revolutions are critical in the context of civilization
development. In this study, we use two machine learning models, random forest
(RF) and autoregressive integrated moving average (ARIMA), to simulate and
predict energy consumption on a global scale. We further project the position
of human civilization on the Kardashev Scale in 2060. The result shows that the
global energy consumption is expected to reach 928-940 EJ in 2060, with a total
growth of over 50% in the coming 40 years, and our civilization is expected to
achieve Type 0.7474 on the Kardashev Scale, still far away from a Type 1
civilization. Additionally, we discuss the potential energy segmentation change
before 2060 and present the influence of the advent of nuclear fusion in this
context.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 23:31:15 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Zhang",
"Antong",
""
],
[
"Yang",
"Jiani",
""
],
[
"Luo",
"Yangcheng",
""
],
[
"Fan",
"Siteng",
""
]
] |
new_dataset
| 0.96436 |
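The abstract's figures are consistent with Sagan's interpolation of the Kardashev Scale, K = (log10(P) - 6) / 10 with P in watts. A quick check, using the 934 EJ/year midpoint of the projected range (our own choice):

```python
import math

def kardashev(power_watts):
    """Sagan's interpolation of the Kardashev Scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6.0) / 10.0

EJ = 1e18                                  # joules
SECONDS_PER_YEAR = 365.25 * 24 * 3600
p_2060 = 934 * EJ / SECONDS_PER_YEAR       # ~2.96e13 W average power
print(round(kardashev(p_2060), 4))         # ~0.7471, near the projected 0.7474
```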
2208.12619
|
Dea Editya
|
Dea Avega Editya
|
Tinjauan atas Efektivitas Penggunaan Key Opinion Leader (KOL) dalam
Penjualan Surat Utang Negara Ritel seri SBR011 (A Review of the Effectiveness
of Using Key Opinion Leaders (KOL) in Selling the SBR011 Series of Retail
Government Bonds)
|
15 pages, 7 figures, in Indonesian
| null | null | null |
cs.CY cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The Indonesian Ministry of Finance endorsed 10 Key Opinion Leaders (KOLs) to
help promote the government retail bond SBR011 during the selling period of 25
May-16 June 2022. This study analyzed the effectiveness of the endorsement
using several indicators: engagement rate, enthusiasm rate, and sentiment
analysis of feedback from the KOLs' audiences. Data were gathered from
Instagram and TikTok, the social media platforms used by the KOLs to post their
marketing content. This paper finds that the endorsement was quite effective in
promoting SBR011 and yielded mostly positive feedback on the marketing
campaign.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 03:38:53 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Editya",
"Dea Avega",
""
]
] |
new_dataset
| 0.991222 |
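The abstract names its indicators without defining them; as a rough illustration only, a common engagement-rate definition (interactions per follower) looks like this, with all field names and values being assumptions:

```python
def engagement_rate(likes, comments, shares, followers):
    """Common definition: total interactions divided by follower count.
    The paper's exact indicator definitions are not given in the abstract."""
    return (likes + comments + shares) / followers

# Hypothetical post: 1200 likes, 150 comments, 80 shares, 50,000 followers
print(f"{engagement_rate(1200, 150, 80, 50_000):.2%}")  # -> 2.86%
```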
2208.12628
|
Martin Kolář
|
Martin Kolář
|
PNPCoin: Distributed Computing on Bitcoin infrastructure
|
4 page version, 1 figure, AGI conference submission format
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Research and applications in Machine Learning are limited by computational
resources, while 1% of the world's electricity goes into calculating 34 billion
billion SHA-256 hashes per second, four orders of magnitude more than the 200
petaflop power of the world's most powerful supercomputer. The work presented
here describes how a simple soft fork on Bitcoin can adapt these incomparable
resources to a global distributed computer. By creating an infrastructure and
ledger fully compatible with blockchain technology, the hashes can be replaced
with stochastic optimizations such as Deep Net training, inverse problems such
as GANs, and arbitrary NP computations.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 12:43:09 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Kolář",
"Martin",
""
]
] |
new_dataset
| 0.998106 |
2208.12634
|
Nandini Ramesh
|
Ram M. Kripa, Nandini Ramesh, William R. Boos
|
Wrangler for the Emergency Events Database: A Tool for Geocoding and
Analysis of a Global Disaster Dataset
|
13 pages, 4 figures, 4 tables. Submitted to the Journal of Open
Research Software
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is an increasing need for precise location information on historical
disasters, such as mass casualty events caused by weather or earthquakes, but
existing disaster datasets often do not provide geographic coordinates of past
events. Here we describe a new tool, the Wrangler for the Emergency Events
Database (WEED), that associates latitude and longitude coordinates with
entries in the widely used Emergency Events Database (EM-DAT). WEED takes as
input records from EM-DAT, and geocodes the list of cities, states, and other
location types associated with a given disaster using the R language with the
GeoNames web service. Error processing is performed, and users are given the
ability to customize the logic used in geocoding; the open-source nature of the
tool also allows more general customization or extension by users. This tool
provides researchers the ability to easily prepare EM-DAT data for analysis
with geophysical, hydrological, and other geospatial variables.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 12:50:37 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Kripa",
"Ram M.",
""
],
[
"Ramesh",
"Nandini",
""
],
[
"Boos",
"William R.",
""
]
] |
new_dataset
| 0.999871 |
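WEED itself is written in R, per the abstract. Purely to illustrate the geocoding step it performs, here is a hedged Python sketch using geopy's GeoNames wrapper; the location-field format and function name are assumptions:

```python
from geopy.geocoders import GeoNames

def geocode_locations(location_field, username):
    """Split an EM-DAT style comma-separated location field and geocode each
    place via the GeoNames web service (`username` is a GeoNames account)."""
    geo = GeoNames(username=username)
    coords = {}
    for place in (p.strip() for p in location_field.split(",") if p.strip()):
        hit = geo.geocode(place)
        coords[place] = (hit.latitude, hit.longitude) if hit else None
    return coords

# e.g. geocode_locations("Dhaka, Chittagong", username="your_geonames_user")
```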
2208.12640
|
Soheyl Massoudi
|
Soheyl Massoudi, Jürg Schiffmann
|
ARRID: ANN-based Rotordynamics for Robust and Integrated Design
|
Submitted to Machine Learning in Computational Design Workshop of the
39th International Conference on Machine Learning, 2022, 4 pages, 1 figure
| null | null | null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The purpose of this study is to introduce ANN-based software for the fast
evaluation of rotordynamics in the context of robust and integrated design. It
is based on a surrogate model made of ensembles of artificial neural networks
running in a Bokeh web application. The use of a surrogate model has sped up
the computation by three orders of magnitude compared to the current models.
ARRID offers fast performance information, including the effect of
manufacturing deviations. As such, it helps the designer to make optimal design
choices early in the design process. The designer can manipulate the parameters
of the design and the operating conditions to obtain performance information in
a matter of seconds.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 16:08:05 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Massoudi",
"Soheyl",
""
],
[
"Schiffmann",
"Jürg",
""
]
] |
new_dataset
| 0.984014 |
2208.12646
|
Keisuke Fujii
|
Tomohiro Suzuki, Kazuya Takeda, Keisuke Fujii
|
Automatic detection of faults in race walking from a smartphone camera:
a comparison of an Olympic medalist and university athletes
|
16 pages, 9 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Automatic fault detection is a major challenge in many sports. In race
walking, referees visually judge faults according to the rules. Hence, ensuring
objectivity and fairness while judging is important. To address this issue,
some studies have attempted to use sensors and machine learning to
automatically detect faults. However, there are problems associated with sensor
attachments and equipment such as a high-speed camera, which conflict with the
visual judgement of referees, and the interpretability of the fault detection
models. In this study, we propose a fault detection system based on non-contact
measurement. We use pose estimation and machine learning models trained on the
judgements of multiple qualified referees to realize fair fault judgement. We
verified the system using smartphone videos of normal race walking and of
walking with intentional faults by several athletes, including a medalist of
the Tokyo Olympics. The validation results show that the proposed system
detected faults with an average accuracy of over 90%. We also revealed that the
machine learning model detects faults according to the rules of race walking.
In addition, the intentional faulty walking movement of the medalist was
different from that of university walkers. This finding informs the realization of
a more general fault detection model. The code and data are available at
https://github.com/SZucchini/racewalk-aijudge.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 07:04:36 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Suzuki",
"Tomohiro",
""
],
[
"Takeda",
"Kazuya",
""
],
[
"Fujii",
"Keisuke",
""
]
] |
new_dataset
| 0.978241 |
2208.12655
|
Xiaoyu Lin
|
Xiaoyu Lin
|
Towards Robust Drone Vision in the Wild
|
Master's thesis
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The past few years have witnessed the burst of drone-based applications where
computer vision plays an essential role. However, most public drone-based
vision datasets focus on detection and tracking. On the other hand, the
performance of most existing image super-resolution methods is sensitive to the
dataset, specifically, the degradation model between high-resolution and
low-resolution images. In this thesis, we propose the first image
super-resolution dataset for drone vision. Image pairs are captured by two
cameras on the drone with different focal lengths. We collect data at different
altitudes and then propose pre-processing steps to align image pairs. Extensive
empirical studies show domain gaps exist among images captured at different
altitudes. Meanwhile, the performance of pretrained image super-resolution
networks also suffers a drop on our dataset and varies among altitudes.
Finally, we propose two methods to build a robust image super-resolution
network at different altitudes. The first feeds altitude information into the
network through altitude-aware layers. The second uses one-shot learning to
quickly adapt the super-resolution model to unknown altitudes. Our results
reveal that the proposed methods can efficiently improve the performance of
super-resolution networks at varying altitudes.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 18:19:19 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Lin",
"Xiaoyu",
""
]
] |
new_dataset
| 0.989234 |
2208.12657
|
Hao Bian
|
Chen Yang, Wang Ziyue, Fang Zijie, Bian Hao, Zhang Yongbing
|
Multi tasks RetinaNet for mitosis detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The count of mitotic cells is a key feature in tumor diagnosis. However, due
to the variability of mitotic cell morphology, detecting mitotic cells in tumor
tissues is a highly challenging task. At the same time, although advanced deep
learning methods have achieved great success in cell detection, their
performance is often unsatisfactory when the test data come from another domain
(i.e., different tumor types and different scanners). Therefore, it is
necessary to develop algorithms for detecting mitotic cells that are robust in
domain-shift scenarios. Our work further proposes a foreground detection and
tumor classification task based on the baseline (RetinaNet), and utilizes data
augmentation to improve the domain generalization performance of our model. We
achieve state-of-the-art performance (F1 score: 0.5809) on the challenging
preliminary test dataset.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 13:06:54 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Yang",
"Chen",
""
],
[
"Ziyue",
"Wang",
""
],
[
"Zijie",
"Fang",
""
],
[
"Hao",
"Bian",
""
],
[
"Yongbing",
"Zhang",
""
]
] |
new_dataset
| 0.987381 |
2208.12711
|
Saihao Huang
|
Saihao Huang, Lijie Wang, Zhenghua Li, Zeyang Liu, Chenhui Dou, Fukang
Yan, Xinyan Xiao, Hua Wu, Min Zhang
|
SeSQL: Yet Another Large-scale Session-level Chinese Text-to-SQL Dataset
|
12 pages,4 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the first session-level Chinese dataset, CHASE contains two separate
parts, i.e., 2,003 sessions manually constructed from scratch (CHASE-C), and
3,456 sessions translated from English SParC (CHASE-T). We find the two parts
are highly discrepant and incompatible as training and evaluation data. In this
work, we present SeSQL, yet another large-scale session-level text-to-SQL
dataset in Chinese, consisting of 5,028 sessions all manually constructed from
scratch. In order to guarantee data quality, we adopt an iterative annotation
workflow to facilitate intense and in-time review of previous-round natural
language (NL) questions and SQL queries. Moreover, by completing all
context-dependent NL questions, we obtain 27,012 context-independent
question/SQL pairs, allowing SeSQL to be used as the largest dataset for
single-round multi-DB text-to-SQL parsing. We conduct benchmark session-level
text-to-SQL parsing experiments on SeSQL by employing three competitive
session-level parsers, and present a detailed analysis.
|
[
{
"version": "v1",
"created": "Fri, 26 Aug 2022 15:11:10 GMT"
}
] | 2022-08-29T00:00:00 |
[
[
"Huang",
"Saihao",
""
],
[
"Wang",
"Lijie",
""
],
[
"Li",
"Zhenghua",
""
],
[
"Liu",
"Zeyang",
""
],
[
"Dou",
"Chenhui",
""
],
[
"Yan",
"Fukang",
""
],
[
"Xiao",
"Xinyan",
""
],
[
"Wu",
"Hua",
""
],
[
"Zhang",
"Min",
""
]
] |
new_dataset
| 0.999849 |
2110.10659
|
Nawras Alnaasan
|
Nawras Alnaasan, Arpan Jain, Aamir Shafi, Hari Subramoni, and
Dhabaleswar K Panda
|
OMB-Py: Python Micro-Benchmarks for Evaluating Performance of MPI
Libraries on HPC Systems
| null | null | null | null |
cs.DC cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Python has become a dominant programming language for emerging areas like
Machine Learning (ML), Deep Learning (DL), and Data Science (DS). An attractive
feature of Python is that it provides easy-to-use programming interface while
allowing library developers to enhance performance of their applications by
harnessing the computing power offered by High Performance Computing (HPC)
platforms. Efficient communication is key to scaling applications on parallel
systems, which is typically enabled by the Message Passing Interface (MPI)
standard and compliant libraries on HPC hardware. mpi4py is a Python-based
communication library that provides an MPI-like interface for Python
applications allowing application developers to utilize parallel processing
elements including GPUs. However, there is currently no benchmark suite to
evaluate communication performance of mpi4py -- and Python MPI codes in general
-- on modern HPC systems. In order to bridge this gap, we propose OMB-Py --
Python extensions to the open-source OSU Micro-Benchmark (OMB) suite -- aimed
to evaluate communication performance of MPI-based parallel applications in
Python. To the best of our knowledge, OMB-Py is the first communication
benchmark suite for parallel Python applications. OMB-Py consists of a variety
of point-to-point and collective communication benchmark tests that are
implemented for a range of popular Python libraries including NumPy, CuPy,
Numba, and PyCUDA. Our evaluation reveals that mpi4py introduces a small
overhead when compared to native MPI libraries. We plan to publicly release
OMB-Py to benefit the Python HPC community.
|
[
{
"version": "v1",
"created": "Wed, 20 Oct 2021 16:59:14 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 18:05:57 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Alnaasan",
"Nawras",
""
],
[
"Jain",
"Arpan",
""
],
[
"Shafi",
"Aamir",
""
],
[
"Subramoni",
"Hari",
""
],
[
"Panda",
"Dhabaleswar K",
""
]
] |
new_dataset
| 0.997343 |
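OMB-Py's own code is not shown here; as an illustration of the kind of point-to-point test such a suite runs, a minimal mpi4py ping-pong latency benchmark (message size and iteration count are arbitrary choices):

```python
# Run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1024, dtype=np.uint8)   # 1 KiB message
iters = 1000

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = MPI.Wtime()

if rank == 0:
    print(f"avg one-way latency: {(t1 - t0) / (2 * iters) * 1e6:.2f} us")
```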
2203.07845
|
Yuanhan Zhang
|
Yuanhan Zhang, Qinghong Sun, Yichun Zhou, Zexin He, Zhenfei Yin, Kun
Wang, Lu Sheng, Yu Qiao, Jing Shao, Ziwei Liu
|
Bamboo: Building Mega-Scale Vision Dataset Continually with
Human-Machine Synergy
|
Bamboo is available at https://github.com/ZhangYuanhan-AI/Bamboo
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Large-scale datasets play a vital role in computer vision. But current
datasets are annotated blindly without differentiation to samples, making the
data collection inefficient and unscalable. The open question is how to build a
mega-scale dataset actively. Although advanced active learning algorithms might
be the answer, we experimentally found that they are lame in the realistic
annotation scenario where out-of-distribution data is extensive. This work thus
proposes a novel active learning framework for realistic dataset annotation.
Equipped with this framework, we build a high-quality vision dataset -- Bamboo,
which consists of 69M image classification annotations with 119K categories and
28M object bounding box annotations with 809 categories. We organize these
categories by a hierarchical taxonomy integrated from several knowledge bases.
The classification annotations are four times larger than ImageNet22K, and that
of detection is three times larger than Objects365. Compared to ImageNet22K and
Objects365, models pre-trained on Bamboo achieve superior performance across
various downstream tasks (6.2% gains on classification and 2.1% gains on
detection). We believe our active learning framework and Bamboo are essential
for future work.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 13:01:00 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 01:41:45 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Zhang",
"Yuanhan",
""
],
[
"Sun",
"Qinghong",
""
],
[
"Zhou",
"Yichun",
""
],
[
"He",
"Zexin",
""
],
[
"Yin",
"Zhenfei",
""
],
[
"Wang",
"Kun",
""
],
[
"Sheng",
"Lu",
""
],
[
"Qiao",
"Yu",
""
],
[
"Shao",
"Jing",
""
],
[
"Liu",
"Ziwei",
""
]
] |
new_dataset
| 0.979326 |
2205.08128
|
Francesco Ranzato
|
Marco Milanese and Francesco Ranzato
|
Local Completeness Logic on Kleene Algebra with Tests
| null | null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Local Completeness Logic (LCL) has been put forward as a program logic for
proving both the correctness and incorrectness of program specifications. LCL
is an abstract logic, parameterized by an abstract domain that allows combining
over- and under-approximations of program behaviors. It turns out that LCL
instantiated to the trivial singleton abstraction boils down to O'Hearn
incorrectness logic, which allows us to prove the presence of program bugs. It
has been recently proved that suitable extensions of Kleene algebra with tests
(KAT) allow representing both O'Hearn incorrectness and Hoare correctness
program logics within the same equational framework. In this work, we
generalize this result by showing how KATs extended either with a modal diamond
operator or with a top element are able to represent the local completeness
logic LCL. This is achieved by studying how these extended KATs can be endowed
with an abstract domain so as to define the validity of
correctness/incorrectness LCL triples and to show that the LCL proof system is
logically sound and, under some hypotheses, complete.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2022 06:58:07 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Aug 2022 08:15:36 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Milanese",
"Marco",
""
],
[
"Ranzato",
"Francesco",
""
]
] |
new_dataset
| 0.962887 |
2207.13629
|
Cagri Kilic
|
Cagri Kilic, Yu Gu, Jason N. Gross
|
Proprioceptive Slip Detection for Planetary Rovers in Perceptually
Degraded Extraterrestrial Environments
|
24 pages, 28 figures. Accepted for publication in Field Robotics
| null |
10.55417/fr.2022054
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Slip detection is of fundamental importance for the safety and efficiency of
rovers driving on the surface of extraterrestrial bodies. Current planetary
rover slip detection systems rely on visual perception on the assumption that
sufficient visual features can be acquired in the environment. However,
visual-based methods are prone to suffer in perceptually degraded planetary
environments with dominant low terrain features such as regolith, glacial
terrain, salt-evaporites, and poor lighting conditions such as dark caves and
permanently shadowed regions. Relying only on visual sensors for slip detection
also requires additional computational power and reduces the rover traversal
rate. This paper answers the question of how to detect wheel slippage of a
planetary rover without depending on visual perception. In this respect, we
propose a slip detection system that obtains its information from a
proprioceptive localization framework that is capable of providing reliable,
continuous, and computationally efficient state estimation over hundreds of
meters. This is accomplished by using zero velocity update, zero angular rate
update, and non-holonomic constraints as pseudo-measurement updates on an
inertial navigation system framework. The proposed method is evaluated on
actual hardware and field-tested in a planetary-analog environment. The method
achieves greater than 92% slip detection accuracy for distances around 150 m
using only an IMU and wheel encoders.
|
[
{
"version": "v1",
"created": "Wed, 27 Jul 2022 16:44:48 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2022 19:10:47 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Kilic",
"Cagri",
""
],
[
"Gu",
"Yu",
""
],
[
"Gross",
"Jason N.",
""
]
] |
new_dataset
| 0.999509 |
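The zero-velocity pseudo-measurement named in this abstract follows a standard Kalman-filter update pattern. A minimal sketch is below; the state layout, the stationarity detector, and all thresholds are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def zero_velocity_update(x_pred, P, H, R):
    """Kalman pseudo-measurement update: while the rover is stationary we
    'observe' zero velocity. x_pred: predicted state, P: its covariance,
    H: matrix selecting the velocity states, R: pseudo-measurement noise."""
    y = -H @ x_pred                      # innovation against the zero measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P
    return x_new, P_new

def is_stationary(gyro_window, thresh=1e-3):
    """Hypothetical stop detector: low gyroscope energy over a short window."""
    return float(np.mean(gyro_window ** 2)) < thresh
```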
2208.07929
|
Hayat Ullah Mr
|
James Wensel, Hayat Ullah, Arslan Munir
|
ViT-ReT: Vision and Recurrent Transformer Neural Networks for Human
Activity Recognition in Videos
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Human activity recognition is an emerging and important area in computer
vision which seeks to determine the activity an individual or group of
individuals are performing. The applications of this field range from
generating highlight videos in sports, to intelligent surveillance and gesture
recognition. Most activity recognition systems rely on a combination of
convolutional neural networks (CNNs) to perform feature extraction from the
data and recurrent neural networks (RNNs) to determine the time dependent
nature of the data. This paper proposes and designs two transformer neural
networks for human activity recognition: a recurrent transformer (ReT), a
specialized neural network used to make predictions on sequences of data, as
well as a vision transformer (ViT), a transformer optimized for extracting
salient features from images, to improve speed and scalability of activity
recognition. We have provided an extensive comparison of the proposed
transformer neural networks with the contemporary CNN and RNN-based human
activity recognition models in terms of speed and accuracy.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 20:03:53 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Aug 2022 01:42:24 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Wensel",
"James",
""
],
[
"Ullah",
"Hayat",
""
],
[
"Munir",
"Arslan",
""
]
] |
new_dataset
| 0.99701 |
2208.11065
|
Philippe Mongeon
|
Philippe Mongeon, Timothy D. Bowman, Rodrigo Costas
|
An open dataset of scholars on Twitter
| null | null | null | null |
cs.DL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
The role played by research scholars in the dissemination of scientific
knowledge on social media has always been a central topic in social media
metrics (altmetrics) research. Different approaches have been implemented to
identify and characterize active scholars on social media platforms like
Twitter. Some limitations of past approaches were their complexity and, most
importantly, their reliance on licensed scientometric and altmetric data. The
emergence of new open data sources like OpenAlex or Crossref Event Data
provides opportunities to identify scholars on social media using only open
data. This paper presents a novel and simple approach to match authors from
OpenAlex with Twitter users identified in Crossref Event Data. The matching
procedure is described and validated with ORCID data. The new approach matches
nearly 500,000 scholars with their Twitter accounts, with high precision and
moderate recall. The dataset of matched scholars is
described and made openly available to the scientific community to empower more
advanced studies of the interactions of research scholars on Twitter.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 16:16:41 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 20:40:11 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Mongeon",
"Philippe",
""
],
[
"Bowman",
"Timothy D.",
""
],
[
"Costas",
"Rodrigo",
""
]
] |
new_dataset
| 0.999251 |
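The matching procedure is only summarized in the abstract. As an illustration of the name normalization such author-to-account matching typically requires, a hedged sketch; the authors' actual procedure, validated against ORCID, may differ substantially:

```python
import unicodedata

def normalize(name):
    """Fold accents, lowercase, and collapse punctuation for loose matching."""
    folded = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return " ".join(folded.lower().replace(".", " ").split())

def same_person(openalex_name, twitter_display_name):
    """Exact match on normalized names (one plausible first-pass criterion)."""
    return normalize(openalex_name) == normalize(twitter_display_name)

print(same_person("José Á. García", "jose a garcia"))  # True
```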
2208.11533
|
Hyejin Park
|
Hye-Jin Park, Young-Ju Choi, Young-Woon Lee, Byung-Gyu Kim
|
ssFPN: Scale Sequence (S^2) Feature Based-Feature Pyramid Network for
Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature Pyramid Network (FPN) has been an essential module for object
detection models to consider various scales of an object. However, average
precision (AP) on small objects is relatively lower than AP on medium and large
objects, because the deeper layers of a CNN cause information loss as the
feature extraction level increases. We propose a new scale sequence (S^2)
feature extraction for FPN to strengthen the feature information of small
objects. We consider the FPN structure as scale-space and extract the scale
sequence (S^2) feature by 3D convolution on the level axis of FPN. It is
basically a scale-invariant feature and is built on the high-resolution pyramid
feature map for small objects. Furthermore, the proposed S^2 feature can be
extended to most object detection models based on FPN. We demonstrate that the
proposed S^2 feature can improve the performance of both one-stage and
two-stage detectors on the MS COCO dataset. Based on the proposed S^2 feature,
we achieve up to 1.3% and 1.1% AP improvement for YOLOv4-P5 and YOLOv4-P6,
respectively. For Faster R-CNN and Mask R-CNN, we observe up to 2.0% and 1.6%
AP improvement with the suggested S^2 feature, respectively.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 13:29:12 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Aug 2022 04:22:56 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Park",
"Hye-Jin",
""
],
[
"Choi",
"Young-Ju",
""
],
[
"Lee",
"Young-Woon",
""
],
[
"Kim",
"Byung-Gyu",
""
]
] |
new_dataset
| 0.986793 |
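A sketch of the stated core idea, a 3D convolution along the FPN level axis; the kernel size, the resizing mode, and how the resulting S^2 feature is merged back into the detector are assumptions not specified in the abstract:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleSequence(nn.Module):
    """Resize FPN levels to the highest resolution, stack them on a new
    'level' axis, and convolve along that axis with a 3D kernel."""
    def __init__(self, channels):
        super().__init__()
        self.conv3d = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, pyramid):                    # list of (B, C, Hi, Wi)
        h, w = pyramid[0].shape[-2:]               # first level = highest resolution
        levels = [F.interpolate(p, size=(h, w), mode="nearest") for p in pyramid]
        x = torch.stack(levels, dim=2)             # (B, C, L, H, W)
        return self.conv3d(x).mean(dim=2)          # collapse level axis -> (B, C, H, W)
```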
2208.11822
|
Shoaib Meraj Sami
|
Shoaib Meraj Sami, John McCauley, Sobhan Soleymani, Nasser Nasrabadi,
Jeremy Dawson
|
Benchmarking Human Face Similarity Using Identical Twins
|
34 pages, 48 figures, Accepted in IET Biometrics Journal (5th August
2022)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The problem of distinguishing identical twins and non-twin look-alikes in
automated facial recognition (FR) applications has become increasingly
important with the widespread adoption of facial biometrics. Due to the high
facial similarity of both identical twins and look-alikes, these face pairs
represent the hardest cases presented to facial recognition tools. This work
presents an application of one of the largest twin datasets compiled to date to
address two FR challenges: 1) determining a baseline measure of facial
similarity between identical twins and 2) applying this similarity measure to
determine the impact of doppelgangers, or look-alikes, on FR performance for
large face datasets. The facial similarity measure is determined via a deep
convolutional neural network. This network is trained on a tailored
verification task designed to encourage the network to group together highly
similar face pairs in the embedding space and achieves a test AUC of 0.9799.
The proposed network provides a quantitative similarity score for any two given
faces and has been applied to large-scale face datasets to identify similar
face pairs. An additional analysis which correlates the comparison score
returned by a facial recognition tool and the similarity score returned by the
proposed network has also been performed.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 01:45:02 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Sami",
"Shoaib Meraj",
""
],
[
"McCauley",
"John",
""
],
[
"Soleymani",
"Sobhan",
""
],
[
"Nasrabadi",
"Nasser",
""
],
[
"Dawson",
"Jeremy",
""
]
] |
new_dataset
| 0.972684 |
2208.11836
|
Mingqi Shao
|
Mingqi Shao, Chongkun Xia, Dongxu Duan, Xueqian Wang
|
Polarimetric Inverse Rendering for Transparent Shapes Reconstruction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a novel method for the detailed reconstruction of
transparent objects by exploiting polarimetric cues. Most of the existing
methods usually lack sufficient constraints and suffer from the over-smooth
problem. Hence, we introduce polarization information as a complementary cue.
We implicitly represent the object's geometry as a neural network, while the
polarization renderer is capable of rendering the object's polarization images
from the given shape and illumination configuration. Direct comparison of the
rendered polarization images to the real-world captured images will have
additional errors due to the transmission in the transparent object. To address
this issue, the concept of reflection percentage which represents the
proportion of the reflection component is introduced. The reflection percentage
is calculated by a ray tracer and then used for weighting the polarization
loss. We build a polarization dataset for multi-view transparent shapes
reconstruction to verify our method. The experimental results show that our
method is capable of recovering detailed shapes and improving the
reconstruction quality of transparent objects. Our dataset and code will be
publicly available at https://github.com/shaomq2187/TransPIR.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 02:52:31 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Shao",
"Mingqi",
""
],
[
"Xia",
"Chongkun",
""
],
[
"Duan",
"Dongxu",
""
],
[
"Wang",
"Xueqian",
""
]
] |
new_dataset
| 0.976705 |
2208.11865
|
Jianhao Jiao
|
Jianhao Jiao, Hexiang Wei, Tianshuai Hu, Xiangcheng Hu, Yilong Zhu,
Zhijian He, Jin Wu, Jingwen Yu, Xupeng Xie, Huaiyang Huang, Ruoyu Geng, Lujia
Wang, Ming Liu
|
FusionPortable: A Multi-Sensor Campus-Scene Dataset for Evaluation of
Localization and Mapping Accuracy on Diverse Platforms
|
IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS) 2022, 6 pages, 6 figures. URL:
https://ram-lab.com/file/site/multi-sensor-dataset
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Combining multiple sensors enables a robot to maximize its perceptual
awareness of environments and enhance its robustness to external disturbance,
crucial to robotic navigation. This paper proposes the FusionPortable
benchmark, a complete multi-sensor dataset with a diverse set of sequences for
mobile robots. This paper presents three contributions. We first advance a
portable and versatile multi-sensor suite that offers rich sensory
measurements: 10Hz LiDAR point clouds, 20Hz stereo frame images, high-rate and
asynchronous events from stereo event cameras, 200Hz inertial readings from an
IMU, and 10Hz GPS signal. Sensors are already temporally synchronized in
hardware. This device is lightweight, self-contained, and has plug-and-play
support for mobile robots. Second, we construct a dataset by collecting 17
sequences that cover a variety of environments on the campus by exploiting
multiple robot platforms for data collection. Some sequences are challenging to
existing SLAM algorithms. Third, we provide ground truth for the decoupled
evaluation of localization and mapping performance. We additionally evaluate
state-of-the-art SLAM approaches and identify their limitations. The dataset,
consisting of raw sensor measurements, ground truth, calibration data, and
evaluated algorithms, will be released:
https://ram-lab.com/file/site/multi-sensor-dataset.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 04:27:28 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Jiao",
"Jianhao",
""
],
[
"Wei",
"Hexiang",
""
],
[
"Hu",
"Tianshuai",
""
],
[
"Hu",
"Xiangcheng",
""
],
[
"Zhu",
"Yilong",
""
],
[
"He",
"Zhijian",
""
],
[
"Wu",
"Jin",
""
],
[
"Yu",
"Jingwen",
""
],
[
"Xie",
"Xupeng",
""
],
[
"Huang",
"Huaiyang",
""
],
[
"Geng",
"Ruoyu",
""
],
[
"Wang",
"Lujia",
""
],
[
"Liu",
"Ming",
""
]
] |
new_dataset
| 0.999774 |
2208.11877
|
Yunpu Zhang
|
Yunpu Zhang and Changsheng You
|
Multi-Hop Beam Routing for Hybrid Active/Passive IRS Aided Wireless
Communications
|
accepted to IEEE Globecom 2022
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior studies on intelligent reflecting surface (IRS) have mostly considered
wireless communication systems aided by a single passive IRS, which, however,
has limited control over wireless propagation environment and suffers from
product-distance path-loss. To address these issues, we propose in this paper a
new hybrid active/passive IRS aided wireless communication system, where an
active IRS and multiple passive IRSs are deployed to assist the communication
between a base station (BS) and a remote user in complex environment, by
establishing a multihop reflection path across active/passive IRSs. In
particular, the active IRS enables signal reflection with power amplification,
thus effectively compensating for the severe path-loss in the multi-reflection
path. To maximize the achievable rate at the user, we first design the optimal
beamforming of the BS and selected (active/passive) IRSs for a given
multi-reflection path, and then propose an efficient algorithm to obtain the
optimal multi-reflection path by using the path decomposition method and graph
theory. We show that the active IRS should be selected to establish the beam
routing path when its amplification power and/or number of active reflecting
elements are sufficiently large. Last, numerical results demonstrate the
effectiveness of the proposed hybrid active/passive IRS beam routing design as
compared to the benchmark scheme with passive IRSs only.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 05:33:04 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Zhang",
"Yunpu",
""
],
[
"You",
"Changsheng",
""
]
] |
new_dataset
| 0.996706 |
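The abstract's path-selection step can be illustrated with a standard graph reduction: maximizing a product of per-hop gains is equivalent to minimizing the sum of negative log-gains. The sketch below uses networkx; the node names and gain values are invented for illustration and are unrelated to the paper's channel model:

```python
import math
import networkx as nx

def best_reflection_path(gains, src, dst):
    """Pick the multi-hop path maximizing the product of per-hop power gains
    by minimizing the sum of -log(gain). Bellman-Ford is used so that gains
    above 1 (active amplification -> negative weights) are still handled."""
    g = nx.DiGraph()
    for (a, b), gain in gains.items():
        g.add_edge(a, b, weight=-math.log(gain))
    return nx.shortest_path(g, src, dst, weight="weight", method="bellman-ford")

hops = {("BS", "IRS1"): 0.20, ("IRS1", "user"): 0.10,
        ("BS", "IRS2"): 0.05, ("IRS2", "user"): 0.50}
print(best_reflection_path(hops, "BS", "user"))  # ['BS', 'IRS2', 'user']
```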
2208.11960
|
Bao Yiming
|
Yiming Bao, Xu Zhao and Dahong Qian
|
FusePose: IMU-Vision Sensor Fusion in Kinematic Space for Parametric
Human Pose Estimation
|
11 pages,8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There exist challenging problems in the 3D human pose estimation task, such as
poor performance caused by occlusion and self-occlusion. Recently, IMU-vision
sensor fusion has been regarded as valuable for solving these problems. However,
previous research on the fusion of heterogeneous IMU and vision data fails to
adequately utilize either IMU raw data or reliable
high-level vision features. To facilitate a more efficient sensor fusion, in
this work we propose a framework called \emph{FusePose} under a parametric
human kinematic model. Specifically, we aggregate different information of IMU
or vision data and introduce three distinctive sensor fusion approaches:
NaiveFuse, KineFuse and AdaDeepFuse. NaiveFuse serves as a basic approach that
only fuses simplified IMU data and the estimated 3D pose in Euclidean space. In
kinematic space, KineFuse integrates the calibrated and aligned IMU raw data
with converted 3D pose parameters. AdaDeepFuse further develops this kinematic
fusion process into an adaptive and end-to-end trainable manner.
Comprehensive experiments with ablation studies demonstrate the rationality and
superiority of the proposed framework. The performance of 3D human pose
estimation is improved compared to the baseline result. On Total Capture
dataset, KineFuse surpasses previous state-of-the-art which uses IMU only for
testing by 8.6\%. AdaDeepFuse surpasses state-of-the-art which uses IMU for
both training and testing by 8.5\%. Moreover, we validate the generalization
capability of our framework through experiments on Human3.6M dataset.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 09:35:27 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Bao",
"Yiming",
""
],
[
"Zhao",
"Xu",
""
],
[
"Qian",
"Dahong",
""
]
] |
new_dataset
| 0.982009 |
2208.12003
|
Philipp Jeitner
|
Philipp Jeitner, Haya Shulman, Lucas Teichmann, Michael Waidner
|
XDRI Attacks - and - How to Enhance Resilience of Residential Routers
|
31th USENIX Security Symposium (USENIX Security 22), 2022
| null | null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore the security of residential routers and find a range of critical
vulnerabilities. Our evaluations show that 10 out of 36 popular routers are
vulnerable to injections of fake records via misinterpretation of special
characters. We also find that in 15 of the 36 routers the mechanisms that are
meant to prevent cache poisoning attacks can be circumvented. In our
Internet-wide study with an advertisement network, we identified and analyzed
976 residential routers used by web clients, out of which more than 95% were
found vulnerable to our attacks. Overall, vulnerable routers are prevalent and
are distributed among 177 countries and 4830 networks. To understand the core
factors causing the vulnerabilities we perform black- and white-box analyses of
the routers. We find that many problems can be attributed to incorrect
assumptions on the protocols' behaviour and the Internet, misunderstanding of
the standard recommendations, bugs, and simplified DNS software
implementations. We provide recommendations to mitigate our attacks. We also
set up a tool to enable everyone to evaluate the security of their routers at
https://xdi-attack.net/.
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 11:13:01 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Jeitner",
"Philipp",
""
],
[
"Shulman",
"Haya",
""
],
[
"Teichmann",
"Lucas",
""
],
[
"Waidner",
"Michael",
""
]
] |
new_dataset
| 0.975527 |
2208.12111
|
Gloria Gori
|
Alessandro Fantechi, Gloria Gori, Marco Papini
|
Runtime reliability monitoring for complex fault-tolerance policies
| null | null | null | null |
cs.SE cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Reliability of complex Cyber-Physical Systems is necessary to guarantee
availability and/or safety of the provided services. Diverse and complex fault
tolerance policies are adopted to enhance reliability; these include a varied
mix of redundancy and dynamic reconfiguration to address hardware reliability,
as well as specific software reliability techniques such as diversity or
software rejuvenation. These complex policies call for flexible runtime health
checks of system executions that go beyond conventional runtime monitoring of
pre-programmed health conditions, also in order to minimize maintenance costs.
Defining a suitable monitoring model when applying this method to complex
systems is still a challenge. In this paper we propose a novel approach,
Reliability Based Monitoring (RBM), for flexible runtime monitoring of
reliability in complex systems, which exploits a hierarchical reliability
model periodically applied to runtime diagnostics data: this allows
maintenance activities aimed at preventing failures to be planned dynamically.
As a proof of concept, we show how to apply RBM to a 2oo3 software system
implementing different fault-tolerant policies.
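For readers unfamiliar with the 2oo3 arrangement mentioned above, a short sketch of the standard 2-out-of-3 reliability formula (textbook background, not the paper's hierarchical model):

```python
# A 2oo3 system works when at least two of three replicas work.
# With independent component reliability R, the classic result is
# R_sys = 3*R**2 - 2*R**3.
def reliability_2oo3(r: float) -> float:
    return 3 * r**2 - 2 * r**3

for r in (0.90, 0.95, 0.99):
    print(r, round(reliability_2oo3(r), 6))  # 0.972, 0.99275, 0.999702
# 2oo3 beats a single component whenever R > 0.5.
```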
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 14:17:29 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Fantechi",
"Alessandro",
""
],
[
"Gori",
"Gloria",
""
],
[
"Papini",
"Marco",
""
]
] |
new_dataset
| 0.996276 |
2208.12181
|
Ahmet Soyyigit
|
Ahmet Soyyigit, Shuochao Yao, Heechul Yun
|
Anytime-Lidar: Deadline-aware 3D Object Detection
|
RTCSA 2022
| null |
10.1109/RTCSA55878.2022.00010
| null |
cs.CV cs.AI cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this work, we present a novel scheduling framework enabling anytime
perception for deep neural network (DNN) based 3D object detection pipelines.
We focus on computationally expensive region proposal network (RPN) and
per-category multi-head detector components, which are common in 3D object
detection pipelines, and make them deadline-aware. We propose a scheduling
algorithm, which intelligently selects the subset of the components to make
effective time and accuracy trade-off on the fly. We minimize accuracy loss of
skipping some of the neural network sub-components by projecting previously
detected objects onto the current scene through estimations. We apply our
approach to a state-of-the-art 3D object detection network, PointPillars, and
evaluate its performance on Jetson Xavier AGX using the nuScenes dataset.
Compared to the baselines, our approach significantly improves the network's
accuracy under various deadline constraints.
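A toy sketch of the deadline-aware head selection described above: greedily keep the most important per-category heads whose predicted cost fits the remaining time budget. Head names and costs are invented; the paper's scheduler also trades accuracy via the RPN and projects past detections for skipped components.

```python
# Per-category detection-head costs (ms); illustrative assumptions.
HEAD_COST_MS = {"car": 12.0, "pedestrian": 9.0,
                "cyclist": 7.0, "cone": 4.0}

def select_heads(deadline_ms, elapsed_ms, priority):
    budget = deadline_ms - elapsed_ms
    chosen = []
    for head in priority:             # most important first
        if HEAD_COST_MS[head] <= budget:
            chosen.append(head)
            budget -= HEAD_COST_MS[head]
    return chosen  # skipped heads rely on projected past detections

print(select_heads(33.3, 15.0, ["car", "pedestrian", "cyclist", "cone"]))
# ['car', 'cone']
```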
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 16:07:10 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Soyyigit",
"Ahmet",
""
],
[
"Yao",
"Shuochao",
""
],
[
"Yun",
"Heechul",
""
]
] |
new_dataset
| 0.991157 |
2208.12187
|
Lucas Hofer
|
Lucas R. Hofer, Milan Krstaji\'c, Robert P. Smith
|
JAXFit: Trust Region Method for Nonlinear Least-Squares Curve Fitting on
the GPU
| null | null | null | null |
cs.LG cs.NA math.NA stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We implement a trust region method on the GPU for nonlinear least squares
curve fitting problems using a new deep learning Python library called JAX. Our
open source package, JAXFit, works for both unconstrained and constrained curve
fitting problems and allows the fit functions to be defined in Python alone --
without any specialized knowledge of either the GPU or CUDA programming. Since
JAXFit runs on the GPU, it is much faster than CPU based libraries and even
other GPU based libraries, despite being very easy to use. Additionally, due to
JAX's deep learning foundations, the Jacobian in JAXFit's trust region
algorithm is calculated with automatic differentiation, rather than using
derivative approximations or requiring the user to define the fit function's
partial derivatives.
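A minimal sketch of the autodiff-for-Jacobian idea (not JAXFit's actual API): JAX computes the exact residual Jacobian, which then feeds a damped Gauss-Newton step of the kind used inside trust region solvers. The model, data and damping value are placeholders.

```python
import jax
import jax.numpy as jnp

def model(params, x):                  # hypothetical fit function
    a, b = params
    return a * jnp.exp(-b * x)

def residuals(params, x, y):
    return model(params, x) - y

x = jnp.linspace(0.0, 4.0, 50)
y = 2.0 * jnp.exp(-1.3 * x)
p0 = jnp.array([1.0, 1.0])

J = jax.jacfwd(residuals)(p0, x, y)    # exact Jacobian via autodiff
r = residuals(p0, x, y)
# One damped Gauss-Newton step (damping 1e-3 is an arbitrary choice).
step = jnp.linalg.solve(J.T @ J + 1e-3 * jnp.eye(2), -J.T @ r)
print(p0 + step)
```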
|
[
{
"version": "v1",
"created": "Thu, 25 Aug 2022 16:13:29 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Hofer",
"Lucas R.",
""
],
[
"Krstajić",
"Milan",
""
],
[
"Smith",
"Robert P.",
""
]
] |
new_dataset
| 0.958842 |
2208.12223
|
Timotheus Kampik
|
Diana Sola, Christian Warmuth, Bernhard Sch\"afer, Peyman Badakhshan,
Jana-Rebecca Rehse, Timotheus Kampik
|
SAP Signavio Academic Models: A Large Process Model Dataset
| null | null | null | null |
cs.OH cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce the SAP Signavio Academic Models (SAP-SAM)
dataset, a collection of hundreds of thousands of business models, mainly
process models in BPMN notation. The model collection is a subset of the models
that were created over the course of roughly a decade on academic.signavio.com,
a free-of-charge software-as-a-service platform that researchers, teachers, and
students can use to create business (process) models. We provide a preliminary
analysis of the model collection, as well as recommendations on how to work
with it. In addition, we discuss potential use cases and limitations of the
model collection from academic and industry perspectives.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 12:50:04 GMT"
}
] | 2022-08-26T00:00:00 |
[
[
"Sola",
"Diana",
""
],
[
"Warmuth",
"Christian",
""
],
[
"Schäfer",
"Bernhard",
""
],
[
"Badakhshan",
"Peyman",
""
],
[
"Rehse",
"Jana-Rebecca",
""
],
[
"Kampik",
"Timotheus",
""
]
] |
new_dataset
| 0.999028 |
2006.15136
|
Matilde Marcolli
|
Yuri Manin and Matilde Marcolli
|
Homotopy Theoretic and Categorical Models of Neural Information Networks
|
105 pages LaTeX, v2: same content with different order of exposition
and added details
| null | null | null |
cs.LO cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we develop a novel mathematical formalism for the modeling of
neural information networks endowed with additional structure in the form of
assignments of resources, either computational or metabolic or informational.
The starting point for this construction is the notion of summing functors and
of Segal's Gamma-spaces in homotopy theory. The main results in this paper
include functorial assignments of concurrent/distributed computing
architectures and associated binary codes to networks and their subsystems, a
categorical form of the Hopfield network dynamics, which recovers the usual
Hopfield equations when applied to a suitable category of weighted codes, a
functorial assignment to networks of corresponding information structures and
information cohomology, and a cohomological version of integrated information.
|
[
{
"version": "v1",
"created": "Tue, 23 Jun 2020 12:29:37 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 01:57:49 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Manin",
"Yuri",
""
],
[
"Marcolli",
"Matilde",
""
]
] |
new_dataset
| 0.983946 |
2007.01113
|
Diego Ruano
|
Carlos Galindo, Fernando Hernando and Diego Ruano
|
Entanglement-Assisted Quantum Error Correcting Codes From RS Codes and
BCH Codes with Extension Degree 2
| null |
Quantum Information Processing 20, 158 (2021)
|
10.1007/s11128-021-03101-4
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entanglement-assisted quantum error correcting codes (EAQECCs) constructed
from Reed-Solomon codes and BCH codes are considered in this work. We provide
a complete and explicit formula for the parameters of EAQECCs coming
from any Reed-Solomon code, for the Hermitian metric, and from any BCH code
with extension degree $2$ and consecutive cyclotomic cosets, for both the
Euclidean and the Hermitian metric. The main task in this work is the
computation of a completely general formula for $c$, the minimum number of
required maximally entangled quantum states.
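For background (our addition, not the paper's closed form): in the entanglement-assisted formalism, the ebit count of a code obtained from a classical code with parity-check matrix H is commonly expressed as a matrix rank; in the Hermitian case one has

```latex
% Background identity from the EA formalism (Hermitian case), quoted
% here as context; the paper's contribution is an explicit evaluation
% of such quantities for the RS and BCH families considered above.
c = \operatorname{rank}\!\left( H\, H^{\dagger} \right)
```

where the dagger denotes the conjugate transpose of H.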
|
[
{
"version": "v1",
"created": "Thu, 2 Jul 2020 14:09:27 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Apr 2021 10:35:03 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Galindo",
"Carlos",
""
],
[
"Hernando",
"Fernando",
""
],
[
"Ruano",
"Diego",
""
]
] |
new_dataset
| 0.999854 |
2010.01235
|
Zhili Chen Prof.
|
Zhili Chen, Yuting Wang, Tianjiao Ni
|
DCDChain: A Credible Architecture of Digital Copyright Detection Based
on Blockchain
|
5 figures
|
Submission to Journal of Surveillance, Security and Safety (JSSS),
2022
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Copyright detection is an effective method to prevent piracy. However,
untrustworthy detection parties may lead to falsified detection results. Due to
its credibility and tamper resistance, blockchain has been applied to copyright
protection. Previous works mainly utilized blockchain for reliable copyright
information storage or copyrighted digital media trading. As far as we know,
the problem of credible copyright detection has not been addressed. In this
paper, we propose a credible copyright detection architecture based on the
blockchain, called DCDChain. In this architecture, the detection agency first
detects copyrights off the chain, then uploads the detection records to the
blockchain. Since data on the blockchain are publicly accessible, media
providers can verify the correctness of the copyright detection, and appeal to
a smart contract if there is any dissent. The smart contract then arbitrates
the disputes by verifying the correctness of detection on the chain. The
detect-verify-and-arbitrate mechanism guarantees the credibility of copyright
detection. Security analysis and experimental simulations show that the digital
copyright detection architecture is credible, secure and efficient. The
proposed credible copyright detection scheme is highly important for copyright
protection. Future work is to improve the scheme by designing more effective
locality-sensitive hashing algorithms for various digital media.
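A toy, heavily simplified rendition of the detect-verify-arbitrate loop described above; the record layout, hash choice and arbitration rule are our assumptions, not DCDChain's protocol.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

chain = []  # stand-in for tamper-resistant on-chain storage

def detect_and_record(media: bytes, verdict: str) -> int:
    # Off-chain detection happens first; only the record goes on chain.
    chain.append({"media_hash": digest(media), "verdict": verdict})
    return len(chain) - 1

def appeal(record_id: int, media: bytes, claimed: str) -> str:
    rec = chain[record_id]
    if rec["media_hash"] != digest(media):
        return "reject: evidence does not match the on-chain record"
    # The smart contract re-checks the disputed detection on chain.
    return "uphold" if rec["verdict"] == claimed else "overturn"

rid = detect_and_record(b"movie-bytes", "pirated")
print(appeal(rid, b"movie-bytes", "pirated"))  # uphold
```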
|
[
{
"version": "v1",
"created": "Sat, 3 Oct 2020 00:24:50 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 12:08:03 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Chen",
"Zhili",
""
],
[
"Wang",
"Yuting",
""
],
[
"Ni",
"Tianjiao",
""
]
] |
new_dataset
| 0.998302 |
2102.01124
|
Sukanya Pandey
|
Sukanya Pandey and Vibha Sahlot
|
Role Coloring Bipartite Graphs
|
17 pages including references, 5 figures
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
A k-role coloring of a graph G is an assignment of k colors to the vertices
of G such that if any two vertices are assigned the same color, then their
neighborhoods are assigned the same set of colors. By definition, every graph on
n vertices admits an n-role coloring. While for every graph on n vertices, it
is trivial to decide if it admits a 1-role coloring, determining whether a
graph admits a k-role coloring is a notoriously hard problem for k greater than
1. In fact, it is known that k-Role coloring is NP-complete for k greater than
1 on arbitrary graphs. There has been extensive research on the complexity of
k-role coloring on various hereditary graph classes. Furthering this direction
of research, we show that k-Role coloring is NP-complete on bipartite graphs
for k greater than 2 (while it is trivial for k = 2). We complement the
hardness result by characterizing 3-role colorable bipartite chain graphs,
leading to a polynomial-time algorithm for 3-Role coloring for this class of
graphs. We further show that 2-Role coloring is NP-complete for graphs that are
d vertices or edges away from the class of bipartite graphs, even when d = 1.
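The defining property is easy to state in code; below is a small verifier (not a solver, since the decision problem is NP-complete for k greater than 1) written directly from the definition above.

```python
import networkx as nx

def is_role_coloring(G, color):
    """Equally colored vertices must see identical sets of neighbor
    colors; `color` maps each vertex to its color."""
    seen = {}
    for v in G.nodes:
        nbr_colors = frozenset(color[n] for n in G.neighbors(v))
        if seen.setdefault(color[v], nbr_colors) != nbr_colors:
            return False
    return True

# The 4-cycle admits a 2-role coloring by alternating colors.
C4 = nx.cycle_graph(4)
print(is_role_coloring(C4, {0: 0, 1: 1, 2: 0, 3: 1}))  # True
```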
|
[
{
"version": "v1",
"created": "Mon, 1 Feb 2021 19:38:03 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 13:59:04 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Pandey",
"Sukanya",
""
],
[
"Sahlot",
"Vibha",
""
]
] |
new_dataset
| 0.999735 |
2203.15351
|
Quentin Duchemin
|
Quentin Duchemin (LAMA), Yohann de Castro (ICJ)
|
Random Geometric Graph: Some recent developments and perspectives
|
This is a research report that is part of a Chapter of a PhD thesis.
An updated version will be available soon
| null | null | null |
cs.SI math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Random Geometric Graph (RGG) is a random graph model for network data
with an underlying spatial representation. Geometry endows RGGs with a rich
dependence structure and often leads to desirable properties of real-world
networks such as the small-world phenomenon and clustering. Originally
introduced to model wireless communication networks, RGGs are now very popular
with applications ranging from network user profiling to protein-protein
interactions in biology. RGGs are also of purely theoretical interest since the
underlying geometry gives rise to challenging mathematical questions. Their
resolutions involve results from probability, statistics, combinatorics or
information theory, placing RGGs at the intersection of a large span of
research communities. This paper surveys the recent developments in RGGs from
the lens of high dimensional settings and non-parametric inference. We also
explain how this model differs from classical community based random graph
models and we review recent works that try to take the best of both worlds. As
a by-product, we expose the scope of the mathematical tools used in the proofs.
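A one-look illustration of the model surveyed above, assuming the standard networkx generator: n points uniform in the unit square, with an edge whenever two points lie within distance r. Parameters are arbitrary.

```python
import networkx as nx

# Sample a 2D RGG and observe the geometry-induced clustering that the
# survey highlights (two neighbors of a point tend to be close to each
# other, so triangles abound).
G = nx.random_geometric_graph(n=200, radius=0.1, dim=2, seed=0)
print(G.number_of_edges(), nx.average_clustering(G))
```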
|
[
{
"version": "v1",
"created": "Tue, 29 Mar 2022 08:48:55 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 08:31:07 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Duchemin",
"Quentin",
"",
"LAMA"
],
[
"de Castro",
"Yohann",
"",
"ICJ"
]
] |
new_dataset
| 0.975378 |
2204.05799
|
Anastasiia Kornilova
|
Anastasiia Kornilova, Dmitrii Iarosh, Denis Kukushkin, Nikolai
Goncharov, Pavel Mokeev, Arthur Saliou, Gonzalo Ferrer
|
EVOPS Benchmark: Evaluation of Plane Segmentation from RGBD and LiDAR
Data
|
Accepted to IROS'2022
| null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper provides the EVOPS dataset for plane segmentation from 3D data,
both from RGBD images and LiDAR point clouds. We have designed two annotation
methodologies (RGBD and LiDAR) running on well-known and widely-used datasets
for SLAM evaluation and we have provided a complete set of benchmarking tools
including point, plane and segmentation metrics. The data include a total of
10k RGBD and 7k LiDAR frames over different selected scenes, which consist of
high-quality segmented planes. The experiments report the quality of SOTA
methods for RGBD plane segmentation on our annotated data. We also provide a
learnable baseline for plane segmentation in LiDAR point clouds. All
labeled data and benchmark tools used have been made publicly available at
https://evops.netlify.app/.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2022 13:34:40 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 11:01:14 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Kornilova",
"Anastasiia",
""
],
[
"Iarosh",
"Dmitrii",
""
],
[
"Kukushkin",
"Denis",
""
],
[
"Goncharov",
"Nikolai",
""
],
[
"Mokeev",
"Pavel",
""
],
[
"Saliou",
"Arthur",
""
],
[
"Ferrer",
"Gonzalo",
""
]
] |
new_dataset
| 0.999182 |
2206.15241
|
Taehyeon Kim
|
Taehyeon Kim, Namgyu Ho, Donggyu Kim, Se-Young Yun
|
Benchmark Dataset for Precipitation Forecasting by Post-Processing the
Numerical Weather Prediction
|
Under Review on NeurIPS 22 Benchmark Dataset Track
| null | null | null |
cs.LG physics.ao-ph
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Precipitation forecasting is an important scientific challenge that has
wide-reaching impacts on society. Historically, this challenge has been tackled
using numerical weather prediction (NWP) models, grounded on physics-based
simulations. Recently, many works have proposed an alternative approach, using
end-to-end deep learning (DL) models to replace physics-based NWP models. While
these DL methods show improved performance and computational efficiency, they
exhibit limitations in long-term forecasting and lack explainability. In
this work, we present a hybrid NWP-DL workflow to fill the gap between
standalone NWP and DL approaches. Under this workflow, the outputs of NWP
models are fed into a deep neural network, which post-processes the data to
yield a refined precipitation forecast. The deep model is trained with
supervision, using Automatic Weather Station (AWS) observations as ground-truth
labels. This can achieve the best of both worlds, and can even benefit from
future improvements in NWP technology. To facilitate study in this direction,
we present a novel dataset focused on the Korean Peninsula, termed KoMet (Korea
Meteorological Dataset), comprised of NWP outputs and AWS observations. For the
NWP model, the Global Data Assimilation and Prediction Systems-Korea Integrated
Model (GDAPS-KIM) is utilized. We provide analysis on a comprehensive set of
baseline methods aimed at addressing the challenges of KoMet, including the
sparsity of AWS observations and class imbalance. To lower the barrier to entry
and encourage further study, we also provide an extensive open-source Python
package for data processing and model development. Our benchmark data and code
are available at https://github.com/osilab-kaist/KoMet-Benchmark-Dataset.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2022 12:41:32 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 06:04:37 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Kim",
"Taehyeon",
""
],
[
"Ho",
"Namgyu",
""
],
[
"Kim",
"Donggyu",
""
],
[
"Yun",
"Se-Young",
""
]
] |
new_dataset
| 0.9997 |
2208.11077
|
Sridhar Mahadevan
|
Sridhar Mahadevan
|
Categoroids: Universal Conditional Independence
|
26 pages
| null | null | null |
cs.AI cs.LG math.CT
|
http://creativecommons.org/licenses/by/4.0/
|
Conditional independence has been widely used in AI, causal inference,
machine learning, and statistics. We introduce categoroids, an algebraic
structure for characterizing universal properties of conditional independence.
Categoroids are defined as a hybrid of two categories: one encoding a
preordered lattice structure defined by objects and arrows between them; the
second dual parameterization involves trigonoidal objects and morphisms
defining a conditional independence structure, with bridge morphisms providing
the interface between the binary and ternary structures. We illustrate
categoroids using three well-known examples of axiom sets: graphoids,
integer-valued multisets, and separoids. Functoroids map one categoroid to
another, preserving the relationships defined by all three types of arrows in
the co-domain categoroid. We describe a natural transformation across
functoroids, which is natural across regular objects and trigonoidal objects,
to construct universal representations of conditional independence. We use
adjunctions and monads between categoroids to abstractly characterize
faithfulness of graphical and non-graphical representations of conditional
independence.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 16:49:09 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2022 03:33:32 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Mahadevan",
"Sridhar",
""
]
] |
new_dataset
| 0.991709 |
2208.11144
|
Xingyu Liu
|
Xingyu "Bruce" Liu, Ruolin Wang, Dingzeyu Li, Xiang 'Anthony' Chen,
Amy Pavel
|
CrossA11y: Identifying Video Accessibility Issues via Cross-modal
Grounding
| null | null |
10.1145/3526113.3545703
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Authors make their videos visually accessible by adding audio descriptions
(AD), and auditorily accessible by adding closed captions (CC). However,
creating AD and CC is challenging and tedious, especially for non-professional
describers and captioners, due to the difficulty of identifying accessibility
problems in videos. A video author will have to watch the video through and
manually check for inaccessible information frame-by-frame, for both visual and
auditory modalities. In this paper, we present CrossA11y, a system that helps
authors efficiently detect and address visual and auditory accessibility issues
in videos. Using cross-modal grounding analysis, CrossA11y automatically
measures accessibility of visual and audio segments in a video by checking for
modality asymmetries. CrossA11y then displays these segments and surfaces
visual and audio accessibility issues in a unified interface, making it
intuitive to locate, review, script AD/CC in-place, and preview the described
and captioned video immediately. We demonstrate the effectiveness of CrossA11y
through a lab study with 11 participants, comparing it to an existing baseline.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 18:08:13 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Liu",
"Xingyu \"Bruce\"",
""
],
[
"Wang",
"Ruolin",
""
],
[
"Li",
"Dingzeyu",
""
],
[
"Chen",
"Xiang 'Anthony'",
""
],
[
"Pavel",
"Amy",
""
]
] |
new_dataset
| 0.999334 |
2208.11170
|
Yuhang Zhao
|
Kexin Zhang, Elmira Deldari, Zhicong Lu, Yaxing Yao, Yuhang Zhao
|
"It's Just Part of Me:" Understanding Avatar Diversity and
Self-presentation of People with Disabilities in Social Virtual Reality
|
The 24th International ACM SIGACCESS Conference on Computers and
Accessibility (ASSETS '22), 16 pages, 4 figures
| null |
10.1145/3517428.3544829
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In social Virtual Reality (VR), users are embodied in avatars and interact
with other users in a face-to-face manner using avatars as the medium. With the
advent of social VR, people with disabilities (PWD) have shown an increasing
presence on this new social media. With their unique disability identity, it is
not clear how PWD perceive their avatars and whether and how they prefer to
disclose their disability when presenting themselves in social VR. We fill this
gap by exploring PWD's avatar perception and disability disclosure preferences
in social VR. Our study involved two steps. We first conducted a systematic
review of fifteen popular social VR applications to evaluate their avatar
diversity and accessibility support. We then conducted an in-depth interview
study with 19 participants who had different disabilities to understand their
avatar experiences. Our research revealed a number of disability disclosure
preferences and strategies adopted by PWD (e.g., reflecting selective
disabilities, presenting a capable self). We also identified several challenges
faced by PWD during their avatar customization process. We discuss the design
implications to promote avatar accessibility and diversity for future social VR
platforms.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 19:56:26 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Zhang",
"Kexin",
""
],
[
"Deldari",
"Elmira",
""
],
[
"Lu",
"Zhicong",
""
],
[
"Yao",
"Yaxing",
""
],
[
"Zhao",
"Yuhang",
""
]
] |
new_dataset
| 0.996074 |
2208.11253
|
Min Wang
|
Min Wang, Ata Mahjoubfar, Anupama Joshi
|
FashionVQA: A Domain-Specific Visual Question Answering System
| null | null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans apprehend the world through various sensory modalities, yet language
is their predominant communication channel. Machine learning systems need to
draw on the same multimodal richness to have informed discourses with humans in
natural language; this is particularly true for systems specialized in
visually-dense information, such as dialogue, recommendation, and search
engines for clothing. To this end, we train a visual question answering (VQA)
system to answer complex natural language questions about apparel in fashion
photoshoot images. The key to the successful training of our VQA model is the
automatic creation of a visual question-answering dataset with 168 million
samples from item attributes of 207 thousand images using diverse templates.
The sample generation employs a strategy that considers the difficulty of the
question-answer pairs to emphasize challenging concepts. Contrary to the recent
trends in using several datasets for pretraining the visual question answering
models, we focused on keeping the dataset fixed while training various models
from scratch to isolate the improvements from model architecture changes. We
see that using the same transformer for encoding the question and decoding the
answer, as in language models, achieves maximum accuracy, showing that visual
language models (VLMs) make the best visual question answering systems for our
dataset. The accuracy of the best model surpasses the human expert level, even
when answering human-generated questions that are not confined to the template
formats. Our approach for generating a large-scale multimodal domain-specific
dataset provides a path for training specialized models capable of
communicating in natural language. The training of such domain-expert models,
e.g., our fashion VLM model, cannot rely solely on the large-scale
general-purpose datasets collected from the web.
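A sketch of template-driven QA generation as described above; the templates and the attribute dictionary are invented stand-ins for the paper's 168-million-sample pipeline.

```python
import random

TEMPLATES = [
    ("What color is the {item}?", "color"),
    ("What material is the {item} made of?", "material"),
]

def make_qa(item, attrs, rng):
    template, key = rng.choice(TEMPLATES)
    return template.format(item=item), attrs[key]

rng = random.Random(0)
print(make_qa("dress", {"color": "red", "material": "silk"}, rng))
```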
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 01:18:13 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Wang",
"Min",
""
],
[
"Mahjoubfar",
"Ata",
""
],
[
"Joshi",
"Anupama",
""
]
] |
new_dataset
| 0.997614 |
2208.11258
|
Wonhui Park
|
Wonhui Park, Dongkwon Jin, Chang-Su Kim
|
Applying Eigencontours to PolarMask-Based Instance Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Eigencontours are the first data-driven contour descriptors based on singular
value decomposition. Building on the implementation of ESE-Seg, eigencontours
were successfully applied to the instance segmentation task. In this report, we
incorporate eigencontours into the PolarMask network for instance segmentation.
Experimental results demonstrate that the proposed algorithm yields better
results than PolarMask on two instance segmentation datasets of COCO2017 and
SBD. Also, we analyze the characteristics of eigencontours qualitatively. Our
codes are available at https://github.com/dnjs3594/Eigencontours.
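The SVD construction behind eigencontours can be sketched in a few lines: stack contour descriptors as columns, keep the leading left singular vectors as a data-driven basis, and represent each contour by a handful of coefficients. Shapes and data below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
contours = rng.random((360, 1000))   # 1000 contours, 360 rays each

U, S, Vt = np.linalg.svd(contours, full_matrices=False)
basis = U[:, :8]                     # first 8 "eigencontours"

coeff = basis.T @ contours[:, 0]     # 8 numbers instead of 360
recon = basis @ coeff
print(np.linalg.norm(contours[:, 0] - recon))  # approximation error
```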
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 01:33:18 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Park",
"Wonhui",
""
],
[
"Jin",
"Dongkwon",
""
],
[
"Kim",
"Chang-Su",
""
]
] |
new_dataset
| 0.962568 |
2208.11313
|
Jun-Sang Yoo
|
Jun-Sang Yoo, Dong-Wook Kim, Yucheng Lu, and Seung-Won Jung
|
RZSR: Reference-based Zero-Shot Super-Resolution with Depth Guided
Self-Exemplars
|
Accepted by IEEE Transactions on Multimedia (TMM)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent methods for single image super-resolution (SISR) have demonstrated
outstanding performance in generating high-resolution (HR) images from
low-resolution (LR) images. However, most of these methods show their
superiority using synthetically generated LR images, and their generalizability
to real-world images is often not satisfactory. In this paper, we pay attention
to two well-known strategies developed for robust super-resolution (SR), i.e.,
reference-based SR (RefSR) and zero-shot SR (ZSSR), and propose an integrated
solution, called reference-based zero-shot SR (RZSR). Following the principle
of ZSSR, we train an image-specific SR network at test time using training
samples extracted only from the input image itself. To advance ZSSR, we obtain
reference image patches with rich textures and high-frequency details which are
also extracted only from the input image using cross-scale matching. To this
end, we construct an internal reference dataset and retrieve reference image
patches from the dataset using depth information. Using LR patches and their
corresponding HR reference patches, we train a RefSR network that is embodied
with a non-local attention module. Experimental results demonstrate the
superiority of the proposed RZSR compared to the previous ZSSR methods and
robustness to unseen images compared to other fully supervised SISR methods.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 05:48:17 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Yoo",
"Jun-Sang",
""
],
[
"Kim",
"Dong-Wook",
""
],
[
"Lu",
"Yucheng",
""
],
[
"Jung",
"Seung-Won",
""
]
] |
new_dataset
| 0.995282 |
2208.11405
|
Gorka Velez Ph.D.
|
\'Angel Mart\'in, Daniel Mej\'ias, Zaloa Fern\'andez, Roberto Viola,
Josu P\'erez, Mikel Garc\'ia, Gorka Velez, Jon Montalb\'an and Pablo Angueira
|
Adaptive QoS of WebRTC for Vehicular Media Communications
| null |
2022 IEEE International Symposium on Broadband Multimedia Systems
and Broadcasting (BMSB), 2022, pp. 1-6
|
10.1109/BMSB55706.2022.9828782
| null |
cs.NI cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Vehicles shipping sensors for onboard systems are gaining connectivity. This
enables information sharing to realize a more comprehensive understanding of
the environment. However, peer communication through public cellular networks
brings multiple networking hurdles to address, requiring in-network systems to
relay communications and connect parties that cannot connect directly. Web
Real-Time Communication (WebRTC) is a good candidate for media streaming across
vehicles, as it enables low-latency communication while bringing standard
protocols for the security handshake, public IP discovery, and Network
Address Translation (NAT) traversal. However, the end-to-end Quality of Service
(QoS) adaptation in an infrastructure where transmission and reception are
decoupled by a relay, needs a mechanism to adapt the video stream to the
network capacity efficiently. To this end, this paper investigates a mechanism
to apply changes on resolution, framerate and bitrate by exploiting the Real
Time Transport Control Protocol (RTCP) metrics, such as bandwidth and
round-trip time. The solution aims to ensure that the receiving onboard system
gets relevant information in time. The impact on end-to-end throughput
efficiency and reaction time when applying different approaches to QoS
adaptation are analyzed in a real 5G testbed.
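A hedged sketch of the adaptation logic studied above: map RTCP-style feedback (estimated bandwidth, RTT) to a rung of a resolution/framerate/bitrate ladder. The ladder, headroom factors and thresholds are our assumptions, not the paper's tuned policy.

```python
LADDER = [  # (width, height, fps, bitrate_kbps), best first
    (1280, 720, 30, 2500),
    (854, 480, 30, 1200),
    (640, 360, 15, 600),
]

def pick_profile(est_bw_kbps, rtt_ms):
    headroom = 0.8 if rtt_ms < 150 else 0.6  # back off on high RTT
    budget = est_bw_kbps * headroom
    for profile in LADDER:
        if profile[3] <= budget:
            return profile
    return LADDER[-1]  # lowest rung keeps the stream alive

print(pick_profile(est_bw_kbps=1600, rtt_ms=90))  # 480p tier
```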
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 09:51:59 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Martín",
"Ángel",
""
],
[
"Mejías",
"Daniel",
""
],
[
"Fernández",
"Zaloa",
""
],
[
"Viola",
"Roberto",
""
],
[
"Pérez",
"Josu",
""
],
[
"García",
"Mikel",
""
],
[
"Velez",
"Gorka",
""
],
[
"Montalbán",
"Jon",
""
],
[
"Angueira",
"Pablo",
""
]
] |
new_dataset
| 0.992666 |
2208.11422
|
Changqing Su
|
C.Q. Su, Y.H Gao, Y Zhou, Y.Q Sun, C.G Yan, H.B Yin, B Xiong
|
AutoDeconJ: a GPU accelerated ImageJ plugin for 3D light field
deconvolution with optimal iteration numbers predicting
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Light field microscopy is a compact solution to high-speed 3D fluorescence
imaging. Usually, we need to do 3D deconvolution to the captured raw data.
Although there are deep neural network methods that can accelerate the
reconstruction process, the model is not universally applicable for all system
parameters. Here, we develop AutoDeconJ, a GPU accelerated ImageJ plugin for
4.4x faster and accurate deconvolution of light field microscopy data. We
further propose an image quality metric for the deconvolution process, aiding
in automatically determining the optimal number of iterations with higher
reconstruction accuracy and fewer artifacts.
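Abstracting away the optics, the "optimal iteration number" idea is a stopping rule over an iterative solver; below is a generic sketch whose plateau metric (relative update norm) is our placeholder for the paper's image-quality metric.

```python
import numpy as np

def iterate_until_plateau(update_step, x0, tol=1e-3, max_iter=100):
    x = x0
    for k in range(max_iter):
        x_new = update_step(x)
        rel = np.linalg.norm(x_new - x) / (np.linalg.norm(x) + 1e-12)
        if rel < tol:
            return x_new, k + 1   # predicted "optimal" iteration count
        x = x_new
    return x, max_iter

# Toy contraction converging to the all-ones vector.
x, n_iter = iterate_until_plateau(lambda x: 0.5 * (x + 1.0), np.zeros(4))
print(n_iter, x)
```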
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 10:41:40 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Su",
"C. Q.",
""
],
[
"Gao",
"Y. H",
""
],
[
"Zhou",
"Y",
""
],
[
"Sun",
"Y. Q",
""
],
[
"Yan",
"C. G",
""
],
[
"Yin",
"H. B",
""
],
[
"Xiong",
"B",
""
]
] |
new_dataset
| 0.976932 |
2208.11434
|
Cheng Han
|
Cheng Han, Qichao Zhao, Shuyi Zhang, Yinzi Chen, Zhenlin Zhang, Jinwei
Yuan
|
YOLOPv2: Better, Faster, Stronger for Panoptic Driving Perception
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Over the last decade, multi-tasking learning approaches have achieved
promising results in solving panoptic driving perception problems, providing
both high-precision and high-efficiency performance. It has become a popular
paradigm when designing networks for real-time practical autonomous driving
systems, where computation resources are limited. This paper proposes an
effective and efficient multi-task learning network to simultaneously perform
the tasks of traffic object detection, drivable road area segmentation and lane
detection. Our model achieved the new state-of-the-art (SOTA) performance in
terms of accuracy and speed on the challenging BDD100K dataset. Especially, the
inference time is reduced by half compared to the previous SOTA model. Code
will be released in the near future.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 11:00:27 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Han",
"Cheng",
""
],
[
"Zhao",
"Qichao",
""
],
[
"Zhang",
"Shuyi",
""
],
[
"Chen",
"Yinzi",
""
],
[
"Zhang",
"Zhenlin",
""
],
[
"Yuan",
"Jinwei",
""
]
] |
new_dataset
| 0.980942 |
2208.11466
|
Jinge Wu
|
Jinge Wu, Rowena Smith, Honghan Wu
|
Adverse Childhood Experiences Identification from Clinical Notes with
Ontologies and NLP
| null | null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Adverse Childhood Experiences (ACEs) are defined as a collection of highly
stressful, and potentially traumatic, events or circumstances that occur
throughout childhood and/or adolescence. They have been shown to be associated
with increased risks of mental health diseases or other abnormal behaviours in
later life. However, the identification of ACEs from free-text Electronic
Health Records (EHRs) with Natural Language Processing (NLP) is challenging
because (a) there are no NLP-ready ACE ontologies; (b) there are limited cases
available for machine learning, necessitating data annotation by clinical
experts. We are currently developing a tool that would use NLP techniques to
assist us in surfacing ACEs from clinical notes. This will enable further
research on identifying evidence of the relationship between ACEs and the
subsequent developments of mental illness (e.g., addictions) in large-scale and
longitudinal free-text EHRs, which has previously not been possible.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 12:17:32 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Wu",
"Jinge",
""
],
[
"Smith",
"Rowena",
""
],
[
"Wu",
"Honghan",
""
]
] |
new_dataset
| 0.994457 |
2208.11500
|
Seungwon Song
|
Seungwon Song, Hyungtae Lim, Alex Junho Lee and Hyun Myung
|
DynaVINS: A Visual-Inertial SLAM for Dynamic Environments
|
8 pages, accepted to IEEE RA-L (August 22, 2022)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Visual inertial odometry and SLAM algorithms are widely used in various
fields, such as service robots, drones, and autonomous vehicles. Most of the
SLAM algorithms are based on the assumption that landmarks are static. However, in
the real-world, various dynamic objects exist, and they degrade the pose
estimation accuracy. In addition, temporarily static objects, which are static
during observation but move when they are out of sight, trigger false positive
loop closings. To overcome these problems, we propose a novel visual-inertial
SLAM framework, called DynaVINS, which is robust against both dynamic objects
and temporarily static objects. In our framework, we first present a robust
bundle adjustment that could reject the features from dynamic objects by
leveraging pose priors estimated by the IMU preintegration. Then, a keyframe
grouping and a multi-hypothesis-based constraints grouping methods are proposed
to reduce the effect of temporarily static objects in the loop closing.
Subsequently, we evaluated our method in a public dataset that contains
numerous dynamic objects. Finally, the experimental results corroborate that
our DynaVINS has promising performance compared with other state-of-the-art
methods by successfully rejecting the effect of dynamic and temporarily static
objects. Our code is available at https://github.com/url-kaist/dynaVINS.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 12:50:37 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Song",
"Seungwon",
""
],
[
"Lim",
"Hyungtae",
""
],
[
"Lee",
"Alex Junho",
""
],
[
"Myung",
"Hyun",
""
]
] |
new_dataset
| 0.99121 |
2208.11527
|
Darshan Ganganna Ravindra
|
Darshan Ganganna Ravindra, Laslo Dinges, Al-Hamadi Ayoub, and Vasili
Baranau
|
Fast and Precise Binary Instance Segmentation of 2D Objects for
Automotive Applications
|
4 pages, 4 figures, WSCG 2022 conference [WSCG 2022 Proceedings, CSRN
3201, ISSN 2464-4617]
|
Journal of WSCG, Vol.30, 2022, 302-305 ISSN 1213-6972
|
10.24132/csrn.3201.38
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we focus on improving binary 2D instance segmentation to
assist humans in labeling ground truth datasets with polygons. Human labelers
just have to draw boxes around objects, and polygons are generated
automatically. To be useful, our system has to run on CPUs in real-time. The
most usual approach for binary instance segmentation involves encoder-decoder
networks. This report evaluates state-of-the-art encoder-decoder networks and
proposes a method for improving instance segmentation quality using these
networks. Alongside network architecture improvements, our proposed method
relies upon providing extra information to the network input, so-called extreme
points, i.e. the outermost points on the object silhouette. The user can label
them instead of a bounding box almost as quickly. The bounding box can be
deduced from the extreme points as well. This method produces better IoU
compared to other state-of-the-art encoder-decoder networks and also runs fast
enough when it is deployed on a CPU.
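Extreme points are cheap to derive from any ground-truth mask, which is why they add little labeling burden; a minimal sketch (our helper, not the paper's code):

```python
import numpy as np

def extreme_points(mask):
    """Outermost silhouette points of a binary mask as (row, col)."""
    ys, xs = np.nonzero(mask)
    return {
        "top":    (ys.min(), xs[ys.argmin()]),
        "bottom": (ys.max(), xs[ys.argmax()]),
        "left":   (ys[xs.argmin()], xs.min()),
        "right":  (ys[xs.argmax()], xs.max()),
    }

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 3:7] = True
print(extreme_points(mask))  # the bounding box is recoverable too
```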
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 13:19:34 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Ravindra",
"Darshan Ganganna",
""
],
[
"Dinges",
"Laslo",
""
],
[
"Ayoub",
"Al-Hamadi",
""
],
[
"Baranau",
"Vasili",
""
]
] |
new_dataset
| 0.969943 |
2208.11537
|
Yoonwoo Jeong
|
Yoonwoo Jeong, Seungjoo Shin, Junha Lee, Christopher Choy, Animashree
Anandkumar, Minsu Cho, Jaesik Park
|
PeRFception: Perception using Radiance Fields
|
Project Page: https://postech-cvlab.github.io/PeRFception/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The recent progress in implicit 3D representation, i.e., Neural Radiance
Fields (NeRFs), has made accurate and photorealistic 3D reconstruction possible
in a differentiable manner. This new representation can effectively convey the
information of hundreds of high-resolution images in one compact format and
allows photorealistic synthesis of novel views. In this work, using the variant
of NeRF called Plenoxels, we create the first large-scale implicit
representation datasets for perception tasks, called the PeRFception, which
consists of two parts that incorporate both object-centric and scene-centric
scans for classification and segmentation. It shows a significant memory
compression rate (96.4\%) from the original dataset, while containing both 2D
and 3D information in a unified form. We construct the classification and
segmentation models that directly take as input this implicit format and also
propose a novel augmentation technique to avoid overfitting on backgrounds of
images. The code and data are publicly available in
https://postech-cvlab.github.io/PeRFception .
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 13:32:46 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Jeong",
"Yoonwoo",
""
],
[
"Shin",
"Seungjoo",
""
],
[
"Lee",
"Junha",
""
],
[
"Choy",
"Christopher",
""
],
[
"Anandkumar",
"Animashree",
""
],
[
"Cho",
"Minsu",
""
],
[
"Park",
"Jaesik",
""
]
] |
new_dataset
| 0.99738 |
2208.11688
|
Jake Gonzalez
|
Jake Gonzalez, Ngan V.T. Nguyen, and Tommy Dang
|
VisFCAC: An Interactive Family Clinical Attribute Comparison
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents VisFCAC, a visual analysis system that displays family
structures along with clinical attribute of family members to effectively
uncover patterns related to suicide deaths for submission to the BioVis 2020
Data Challenge. VisFCAC facilitates pattern tracing to offer insight on
potential clinical attributes that might connect suicide deaths while also
attempting to offer insight to prevent future suicides by at-risk people with
similar detected patterns. This paper lays out an approach to compare family
members within a family structure to uncover patterns that may appear in
clinical diagnosis data. This approach also compares two different families and
their family structures to see whether there are patterns in suicide cases
amongst clinical attributes outside family structures. Our solution implements
a radial tree to display family structures with clinical attributes displayed
on radial charts to provide in depth visual analysis and offer a comprehensive
insight for underlying pattern discovery.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 17:50:24 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Gonzalez",
"Jake",
""
],
[
"Nguyen",
"Ngan V. T.",
""
],
[
"Dang",
"Tommy",
""
]
] |
new_dataset
| 0.998021 |
2208.11695
|
David Rolnick
|
Alexandra Sasha Luccioni and David Rolnick
|
Bugs in the Data: How ImageNet Misrepresents Biodiversity
| null | null | null | null |
cs.CV cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ImageNet-1k is a dataset often used for benchmarking machine learning (ML)
models and evaluating tasks such as image recognition and object detection.
Wild animals make up 27% of ImageNet-1k but, unlike classes representing people
and objects, these data have not been closely scrutinized. In the current
paper, we analyze the 13,450 images from 269 classes that represent wild
animals in the ImageNet-1k validation set, with the participation of expert
ecologists. We find that many of the classes are ill-defined or overlapping,
and that 12% of the images are incorrectly labeled, with some classes having
>90% of images incorrect. We also find that both the wildlife-related labels
and images included in ImageNet-1k present significant geographical and
cultural biases, as well as ambiguities such as artificial animals, multiple
species in the same image, or the presence of humans. Our findings highlight
serious issues with the extensive use of this dataset for evaluating ML
systems, the use of such algorithms in wildlife-related tasks, and more broadly
the ways in which ML datasets are commonly created and curated.
|
[
{
"version": "v1",
"created": "Wed, 24 Aug 2022 17:55:48 GMT"
}
] | 2022-08-25T00:00:00 |
[
[
"Luccioni",
"Alexandra Sasha",
""
],
[
"Rolnick",
"David",
""
]
] |
new_dataset
| 0.998603 |
1911.02637
|
Jun Rekimoto
|
Jun Rekimoto
|
Homo Cyberneticus: The Era of Human-AI Integration
| null |
ACM UIST 2019
|
10.1145/1235
| null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article is submitted and accepted as ACM UIST 2019 Visions. UIST Visions
is a venue for forward thinking ideas to inspire the community. The goal is not
to report research but to project and propose new research directions. This
article, entitled "Homo Cyberneticus: The Era of Human-AI Integration",
proposes HCI research directions, namely human-augmentation and
human-AI-integration.
|
[
{
"version": "v1",
"created": "Mon, 21 Oct 2019 12:30:17 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Rekimoto",
"Jun",
""
]
] |
new_dataset
| 0.997511 |
2102.05586
|
Yuki Matsuda
|
Yuki Matsuda, Shogo Kawanaka, Hirohiko Suwa, Yutaka Arakawa, Keiichi
Yasumoto
|
ParmoSense: A Scenario-based Participatory Mobile Urban Sensing Platform
with User Motivation Engine
|
24 pages, 9 figures
|
Sensors and Materials, Vol.34, No.8, pp.3063-3091, 2022
|
10.18494/SAM3961
| null |
cs.SI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rapid proliferation of mobile devices with various sensors have enabled
Participatory Mobile Sensing (PMS). Several PMS platforms provide multiple
functions for various sensing purposes, but they suffer from open issues:
limited use of their functions for a specific scenario/case and
requiring technical knowledge for organizers. In this paper, we propose a novel
PMS platform named ParmoSense for easily and flexibly collecting urban
environmental information. To reduce the burden on both organizers and
participants, in ParmoSense, we employ two novel features: modularization of
functions and scenario-based PMS system description. For modularization, we
provide the essential PMS functions as modules which can be easily chosen and
combined for sensing in different scenarios. The scenario-based description
feature allows organizers to easily and quickly set up a new participatory
sensing instance and participants to easily install the corresponding scenario
and participate in the sensing. Moreover, ParmoSense provides GUI tools as well
for creating and distributing PMS system easily, editing and visualizing
collected data quickly. It also provides multiple functions for encouraging
participants' motivation for sustainable operation of the system. Through
performance comparison with existing PMS platforms, we confirmed ParmoSense
shows the best cost-performance from the perspective of the workload for
preparing a PMS system and the variety of functions. In addition, to evaluate the
availability and usability of ParmoSense, we conducted 19 case studies, which
have different locations, scales, and purposes, over 4 years with cooperation
from ordinary citizens. Through the case studies and the questionnaire survey
for participants and organizers, we confirmed that ParmoSense can be easily
operated and participated by ordinary citizens including non-technical persons.
|
[
{
"version": "v1",
"created": "Wed, 10 Feb 2021 17:32:31 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Matsuda",
"Yuki",
""
],
[
"Kawanaka",
"Shogo",
""
],
[
"Suwa",
"Hirohiko",
""
],
[
"Arakawa",
"Yutaka",
""
],
[
"Yasumoto",
"Keiichi",
""
]
] |
new_dataset
| 0.999222 |
2102.10925
|
Ivan Jericevich
|
Ivan Jericevich and Dharmesh Sing and Tim Gebbie
|
CoinTossX: An open-source low-latency high-throughput matching engine
|
21 pages, 10 figures, 5 tables
|
SoftwareX Volume 19, July 2022, 101136
|
10.1016/j.softx.2022.101136
| null |
cs.DC cs.MA q-fin.CP q-fin.TR
|
http://creativecommons.org/licenses/by/4.0/
|
We deploy and demonstrate the CoinTossX low-latency, high-throughput,
open-source matching engine with orders sent using the Julia and Python
languages. We show how this can be deployed for small-scale local desktop
testing and discuss larger-scale, but still local, hosting with multiple traded
instruments managed concurrently and by multiple clients. We then
demonstrate a cloud based deployment using Microsoft Azure, with large-scale
industrial and simulation research use cases in mind. The system is exposed and
interacted with via sockets using UDP SBE message protocols and can be
monitored using a simple web browser interface using HTTP. We give examples
showing how orders can be sent to the system and market data feeds monitored
using the Julia and Python languages. The system is developed in Java with
orders submitted as binary encodings (SBE) via UDP protocols using the Aeron
Media Driver as the low-latency, high throughput message transport. The system
separates the order-generation and simulation environments e.g. agent-based
model simulation, from the matching of orders, data-feeds and various
modularised components of the order-book system. This ensures a more natural
and realistic asynchronicity between events generating orders, and the events
associated with order-book dynamics and market data-feeds. We promote the use
of Julia as the preferred order submission and simulation environment.
|
[
{
"version": "v1",
"created": "Mon, 22 Feb 2021 11:50:34 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Jericevich",
"Ivan",
""
],
[
"Sing",
"Dharmesh",
""
],
[
"Gebbie",
"Tim",
""
]
] |
new_dataset
| 0.998955 |
2110.06800
|
Harrison Lee
|
Harrison Lee and Raghav Gupta and Abhinav Rastogi and Yuan Cao and Bin
Zhang and Yonghui Wu
|
SGD-X: A Benchmark for Robust Generalization in Schema-Guided Dialogue
Systems
|
AAAI 2022
|
Lee, H., Gupta, R., Rastogi, A., Cao, Y., Zhang, B., & Wu, Y.
(2022). SGD-X: A Benchmark for Robust Generalization in Schema-Guided
Dialogue Systems. Proceedings of the AAAI Conference on Artificial
Intelligence, 36(10), 10938-10946
|
10.1609/aaai.v36i10.21341
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Zero/few-shot transfer to unseen services is a critical challenge in
task-oriented dialogue research. The Schema-Guided Dialogue (SGD) dataset
introduced a paradigm for enabling models to support any service in zero-shot
through schemas, which describe service APIs to models in natural language. We
explore the robustness of dialogue systems to linguistic variations in schemas
by designing SGD-X - a benchmark extending SGD with semantically similar yet
stylistically diverse variants for every schema. We observe that two top state
tracking models fail to generalize well across schema variants, measured by
joint goal accuracy and a novel metric for measuring schema sensitivity.
Additionally, we present a simple model-agnostic data augmentation method to
improve schema robustness.
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 15:38:29 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 21:30:41 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Aug 2022 17:28:58 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Lee",
"Harrison",
""
],
[
"Gupta",
"Raghav",
""
],
[
"Rastogi",
"Abhinav",
""
],
[
"Cao",
"Yuan",
""
],
[
"Zhang",
"Bin",
""
],
[
"Wu",
"Yonghui",
""
]
] |
new_dataset
| 0.998975 |
2111.12527
|
Junhao Zhang
|
David Junhao Zhang, Kunchang Li, Yali Wang, Yunpeng Chen, Shashwat
Chandra, Yu Qiao, Luoqi Liu, Mike Zheng Shou
|
MorphMLP: An Efficient MLP-Like Backbone for Spatial-Temporal
Representation Learning
|
ECCV2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, MLP-Like networks have been revived for image recognition. However,
whether it is possible to build a generic MLP-Like architecture in the video domain
has not been explored, due to complex spatial-temporal modeling with large
computation burden. To fill this gap, we present an efficient self-attention
free backbone, namely MorphMLP, which flexibly leverages the concise
Fully-Connected (FC) layer for video representation learning. Specifically, a
MorphMLP block consists of two key layers in sequence, i.e., MorphFC_s and
MorphFC_t, for spatial and temporal modeling respectively. MorphFC_s can
effectively capture core semantics in each frame, by progressive token
interaction along both height and width dimensions. Alternatively, MorphFC_t
can adaptively learn long-term dependency over frames, by temporal token
aggregation on each spatial location. With such multi-dimension and multi-scale
factorization, our MorphMLP block can achieve a great accuracy-computation
balance. Finally, we evaluate our MorphMLP on a number of popular video
benchmarks. Compared with the recent state-of-the-art models, MorphMLP
significantly reduces computation but with better accuracy, e.g., MorphMLP-S
only uses 50% GFLOPs of VideoSwin-T but achieves 0.9% top-1 improvement on
Kinetics400, under ImageNet1K pretraining. MorphMLP-B only uses 43% GFLOPs of
MViT-B but achieves 2.4% top-1 improvement on SSV2, even though MorphMLP-B is
pretrained on ImageNet1K while MViT-B is pretrained on Kinetics400. Moreover,
our method adapted to the image domain outperforms previous SOTA MLP-Like
architectures. Code is available at https://github.com/MTLab/MorphMLP.
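To make the factorization concrete, a stripped-down PyTorch sketch of a fully connected layer mixing tokens along a single spatial dimension, the basic ingredient that MorphFC_s/MorphFC_t factorize over height, width and time. The chunking and the actual block design are simplified away here.

```python
import torch
import torch.nn as nn

class FCAlongHeight(nn.Module):
    """Mix tokens along H with one shared FC layer (illustrative)."""
    def __init__(self, height):
        super().__init__()
        self.fc = nn.Linear(height, height)

    def forward(self, x):              # x: (B, C, H, W)
        x = x.permute(0, 1, 3, 2)      # move H to the last axis
        x = self.fc(x)                 # token interaction along H
        return x.permute(0, 1, 3, 2)

print(FCAlongHeight(14)(torch.randn(2, 96, 14, 14)).shape)
```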
|
[
{
"version": "v1",
"created": "Wed, 24 Nov 2021 14:52:20 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2022 07:21:36 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Aug 2022 12:05:19 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Zhang",
"David Junhao",
""
],
[
"Li",
"Kunchang",
""
],
[
"Wang",
"Yali",
""
],
[
"Chen",
"Yunpeng",
""
],
[
"Chandra",
"Shashwat",
""
],
[
"Qiao",
"Yu",
""
],
[
"Liu",
"Luoqi",
""
],
[
"Shou",
"Mike Zheng",
""
]
] |
new_dataset
| 0.990795 |
2112.00064
|
Ahmad Biniaz
|
Ahmad Biniaz
|
Acute Tours in the Plane
|
Appeared in SoCG 2022. A special thanks to the anonymous SoCG 2022
reviewer who meticulously verified our proof, and provided valuable feedback
that reduced the number of subcases to two (which was three in our original
proof) and improved the bound on n to 20 (which was 36 originally)
| null | null | null |
cs.CG cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
We confirm the following conjecture of Fekete and Woeginger from 1997: for
any sufficiently large even number $n$, every set of $n$ points in the plane
can be connected by a spanning tour (Hamiltonian cycle) consisting of
straight-line edges such that the angle between any two consecutive edges is at
most $\pi/2$. Our proof is constructive and suggests a simple $O(n\log n)$-time
algorithm for finding such a tour. The previous best-known upper bound on the
angle is $2\pi/3$, and it is due to Dumitrescu, Pach and T\'oth (2009).
|
[
{
"version": "v1",
"created": "Tue, 30 Nov 2021 19:41:42 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 06:10:23 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Biniaz",
"Ahmad",
""
]
] |
new_dataset
| 0.991709 |
2112.02080
|
Roberto Mag\'an-Carri\'on Dr.
|
Roberto Mag\'an-Carri\'on, Daniel Urda, Ignacio D\'iaz-Cano, Bernab\'e
Dorronsoro
|
Improving the Reliability of Network Intrusion Detection Systems through
Dataset Integration
|
Submitted to the IEEE Transactions on Emerging Topics in Computing
journal
|
IEEE Transactions on Emerging Topics in Computing, Early Access,
2022
|
10.1109/TETC.2022.3178283
| null |
cs.LG cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents Reliable-NIDS (R-NIDS), a novel methodology for Machine
Learning (ML) based Network Intrusion Detection Systems (NIDSs) that allows ML
models to work on integrated datasets, empowering the learning process with
diverse information from different datasets. Therefore, R-NIDS targets the
design of more robust models that generalize better than traditional
approaches. We also propose a new dataset, called UNK21. It is built from three
of the most well-known network datasets (UGR'16, UNSW-NB15 and NSL-KDD), each
one gathered from its own network environment, with different features and
classes, by using a data aggregation approach present in R-NIDS. Following
R-NIDS, in this work we propose to build two well-known ML models (a linear and
a non-linear one) based on the information of three of the most common datasets
in the literature for NIDS evaluation, those integrated in UNK21. The results
that the proposed methodology offers show how these two ML models trained as a
NIDS solution could benefit from this approach, being able to generalize better
when training on the newly proposed UNK21 dataset. Furthermore, these results
are carefully analyzed with statistical tools that provide high confidence on
our conclusions.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 09:30:18 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Magán-Carrión",
"Roberto",
""
],
[
"Urda",
"Daniel",
""
],
[
"Díaz-Cano",
"Ignacio",
""
],
[
"Dorronsoro",
"Bernabé",
""
]
] |
new_dataset
| 0.999106 |
2201.09337
|
Yuri Passos
|
Yuri Tavares dos Passos, Xavier Duquesne, Leandro Soriano Marcolino
|
Congestion control algorithms for robotic swarms with a common target
based on the throughput of the target area
|
Corrections were made to the TRVF algorithm and the text, and new
references were added
| null | null | null |
cs.RO cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
When a large number of robots try to reach a common area, congestion occurs,
causing severe delays. To minimise congestion in a robotic swarm system,
traffic control algorithms must be employed in a decentralised manner. Based on
strategies aimed to maximise the throughput of the common target area, we
developed two novel algorithms for robots using artificial potential fields for
obstacle avoidance and navigation. One algorithm is inspired by creating a
queue to get to the target area (Single Queue Former -- SQF), while the other
makes the robots touch the boundary of the circular area by using vector fields
(Touch and Run Vector Fields -- TRVF). We performed simulation experiments to
show that the proposed algorithms are bounded by the throughput of their
inspired theoretical strategies and compare the two novel algorithms with
state-of-the-art algorithms for the same problem (PCC, EE and PCC-EE). The SQF
algorithm significantly outperforms all other algorithms for a large number of
robots or when the circular target region radius is small. TRVF, on the other
hand, is better than SQF only for a limited number of robots and outperforms
only PCC for numerous robots. However, it allows us to analyse the potential
impacts on the throughput when transferring an idea from a theoretical strategy
to a concrete algorithm that considers changing linear speeds and distances
between robots.
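For context on the navigation substrate both algorithms build on, here is a textbook artificial-potential-field attraction term; the exact field shapes used by SQF and TRVF are specific to the paper and not reproduced here.

def attractive_force(pos, goal, gain=1.0):
    # Linear attraction pulling a robot at 'pos' toward the target 'goal';
    # a full controller would add repulsive terms around obstacles and
    # other robots on top of this.
    return (gain * (goal[0] - pos[0]), gain * (goal[1] - pos[1]))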
|
[
{
"version": "v1",
"created": "Sun, 23 Jan 2022 18:25:46 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jan 2022 21:07:28 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Aug 2022 09:39:21 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Passos",
"Yuri Tavares dos",
""
],
[
"Duquesne",
"Xavier",
""
],
[
"Marcolino",
"Leandro Soriano",
""
]
] |
new_dataset
| 0.960825 |
2202.13413
|
Karsten Paul
|
Karsten Paul, Roger A. Sauer
|
An isogeometric finite element formulation for boundary and shell
viscoelasticity based on a multiplicative surface deformation split
|
In this version, parts of the introduction and conclusion are
rewritten, remarks 1.1, 1.2 and 3.1 are added, the title is slightly
modified, the list of highlights is updated, and minor typos are fixed
|
Int. J. Numer. Methods Eng. (2022)
|
10.1002/nme.7080
| null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a numerical formulation to model isotropic viscoelastic
material behavior for membranes and thin shells. The surface and the shell
theory are formulated within a curvilinear coordinate system, which allows the
representation of general surfaces and deformations. The kinematics follow from
Kirchhoff-Love theory and the discretization makes use of isogeometric shape
functions. A multiplicative split of the surface deformation gradient is
employed, such that an intermediate surface configuration is introduced. The
surface metric and curvature of this intermediate configuration follow from the
solution of nonlinear evolution laws - ordinary differential equations (ODEs) -
that stem from a generalized viscoelastic solid model. The evolution laws are
integrated numerically with the implicit Euler scheme and linearized within the
Newton-Raphson scheme of the nonlinear finite element framework. The
implementation of membrane and bending viscosity is verified with the help of
analytical solutions and shows ideal convergence behavior. The chosen numerical
examples capture large deformations and typical viscoelasticity behavior, such
as creep, relaxation, and strain rate dependence. It is also shown that the
proposed formulation can be straightforwardly applied to model boundary
viscoelasticity of 3D bodies.
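The evolution-law integration can be illustrated with a scalar (1D) analogue of the tensor-valued ODEs; this is our simplification, not the paper's surface formulation.

def implicit_euler_step(eps_v, eps_new, dt, tau):
    # One implicit Euler step for d(eps_v)/dt = (eps - eps_v) / tau,
    # the Maxwell-type relaxation law driving the intermediate
    # configuration; unconditionally stable for any dt > 0.
    return (eps_v + (dt / tau) * eps_new) / (1.0 + dt / tau)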
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 18:07:27 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2022 11:45:02 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Aug 2022 13:32:26 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Paul",
"Karsten",
""
],
[
"Sauer",
"Roger A.",
""
]
] |
new_dataset
| 0.972321 |
2203.16095
|
Guoxin Kang
|
Guoxin Kang, Lei Wang, Wanling Gao, Fei Tang, and Jianfeng Zhan
|
OLxPBench: Real-time, Semantically Consistent, and Domain-specific are
Essential in Benchmarking, Designing, and Implementing HTAP Systems
|
Accepted to ICDE 2022. International Open Benchmark Council
(BenchCouncil) sets up the OLxPBench homepage at
https://www.benchcouncil.org/olxpbench/
| null |
10.1109/ICDE53745.2022.00182
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As real-time analysis of new data becomes increasingly compelling, more
organizations deploy Hybrid Transactional/Analytical Processing (HTAP) systems
to support real-time queries on data recently generated by online transaction
processing. This paper argues that real-time queries, semantically consistent
schema, and domain-specific workloads are essential in benchmarking, designing,
and implementing HTAP systems. However, most state-of-the-art and
state-of-the-practice benchmarks ignore those critical factors. Hence, they are
incommensurable and, at worst, misleading in benchmarking, designing, and
implementing HTAP systems. This paper presents OLxPBench, a composite HTAP
benchmark suite. OLxPBench proposes: (1) the abstraction of a hybrid
transaction, performing a real-time query in-between an online transaction, to
model a widely-observed behavior pattern -- making a quick decision while
consulting real-time analysis; (2) a semantically consistent schema to express
the relationships between OLTP and OLAP schema; (3) the combination of
domain-specific and general benchmarks to characterize diverse application
scenarios with varying resource demands. Our evaluations justify the three
design decisions of OLxPBench and pinpoint the bottlenecks of two mainstream
distributed HTAP DBMSs. International Open Benchmark Council (BenchCouncil)
sets up the OLxPBench homepage at https://www.benchcouncil.org/olxpbench/. Its
source code is available from https://github.com/BenchCouncil/olxpbench.git.
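The hybrid-transaction abstraction can be sketched in Python DB-API style as an analytical read executed in-between the writes of one online transaction; the statements and connection here are placeholders, not OLxPBench code.

def hybrid_transaction(conn, pre_stmts, realtime_query, post_stmts):
    cur = conn.cursor()
    for stmt in pre_stmts:           # first part of the online transaction
        cur.execute(stmt)
    cur.execute(realtime_query)      # consult real-time analysis mid-flight
    analysis = cur.fetchall()
    for stmt in post_stmts:          # decision may depend on 'analysis'
        cur.execute(stmt)
    conn.commit()                    # commit reads and writes as one unit
    return analysis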
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 06:52:19 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 07:57:29 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Kang",
"Guoxin",
""
],
[
"Wang",
"Lei",
""
],
[
"Gao",
"Wanling",
""
],
[
"Tang",
"Fei",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.986761 |
2208.04052
|
Peter De Roovere
|
Peter De Roovere, Steven Moonen, Nick Michiels, Francis Wyffels
|
Dataset of Industrial Metal Objects
|
7 pages, 9 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a diverse dataset of industrial metal objects. These objects are
symmetric, textureless and highly reflective, leading to challenging conditions
not captured in existing datasets. Our dataset contains both real-world and
synthetic multi-view RGB images with 6D object pose labels. Real-world data is
obtained by recording multi-view images of scenes with varying object shapes,
materials, carriers, compositions and lighting conditions. This results in over
30,000 images, accurately labelled using a new public tool. Synthetic data is
obtained by carefully simulating real-world conditions and varying them in a
controlled and realistic way. This leads to over 500,000 synthetic images. The
close correspondence between synthetic and real-world data, and controlled
variations, will facilitate sim-to-real research. Our dataset's size and
challenging nature will facilitate research on various computer vision tasks
involving reflective materials. The dataset and accompanying resources are made
available on the project website at https://pderoovere.github.io/dimo.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 10:49:06 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"De Roovere",
"Peter",
""
],
[
"Moonen",
"Steven",
""
],
[
"Michiels",
"Nick",
""
],
[
"Wyffels",
"Francis",
""
]
] |
new_dataset
| 0.999816 |
2208.07841
|
Pedro David Carneiro Neto
|
Pedro C. Neto, Tiago Gon\c{c}alves, Marco Huber, Naser Damer, Ana F.
Sequeira, Jaime S. Cardoso
|
OrthoMAD: Morphing Attack Detection Through Orthogonal Identity
Disentanglement
|
Accepted at BIOSIG 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Morphing attacks are one of the many threats that are constantly affecting
deep face recognition systems. A morphing attack consists of selecting two
faces from different individuals and fusing them into a final image that
contains the
identity information of both. In this work, we propose a novel regularisation
term that takes into account the identity information present in both faces and
promotes the creation of two orthogonal latent vectors. We evaluate our
proposed method (OrthoMAD) in five different types of morphing in the FRLL
dataset and evaluate the performance of our model when trained on five distinct
datasets. With a small ResNet-18 as the backbone, we achieve state-of-the-art
results in the majority of the experiments, and competitive results in the
others. The code of this paper will be publicly available.
|
[
{
"version": "v1",
"created": "Tue, 16 Aug 2022 16:55:12 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 10:44:56 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Neto",
"Pedro C.",
""
],
[
"Gonçalves",
"Tiago",
""
],
[
"Huber",
"Marco",
""
],
[
"Damer",
"Naser",
""
],
[
"Sequeira",
"Ana F.",
""
],
[
"Cardoso",
"Jaime S.",
""
]
] |
new_dataset
| 0.992291 |
2208.08696
|
Chongming Gao
|
Chongming Gao, Shijun Li, Yuan Zhang, Jiawei Chen, Biao Li, Wenqiang
Lei, Peng Jiang, Xiangnan He
|
KuaiRand: An Unbiased Sequential Recommendation Dataset with Randomly
Exposed Videos
|
CIKM '22 Resource Paper. Dataset Webpage: https://kuairand.com
| null |
10.1145/3511808.3557624
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommender systems deployed in real-world applications can have inherent
exposure bias, which leads to the biased logged data plaguing the researchers.
A fundamental way to address this thorny problem is to collect users'
interactions on randomly exposed items, i.e., the missing-at-random data. A few
works have asked certain users to rate or select randomly recommended items,
e.g., Yahoo!, Coat, and OpenBandit. However, these datasets are either too
small in size or lack key information, such as unique user ID or the features
of users/items. In this work, we present KuaiRand, an unbiased sequential
recommendation dataset containing millions of intervened interactions on
randomly exposed videos, collected from the video-sharing mobile App, Kuaishou.
Different from existing datasets, KuaiRand records 12 kinds of user feedback
signals (e.g., click, like, and view time) on randomly exposed videos inserted
in the recommendation feeds in two weeks. To facilitate model learning, we
further collect rich features of users and items as well as users' behavior
history. By releasing this dataset, we enable, for the first time, research on
advanced debiasing in large-scale recommendation scenarios. Also, with
its distinctive features, KuaiRand can support various other research
directions such as interactive recommendation, long sequential behavior
modeling, and multi-task learning. The dataset and its updates will be available
at https://kuairand.com.
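A hypothetical loading sketch for unbiased evaluation on such data; the column names ('is_rand', 'user_id', 'video_id', 'is_click') are illustrative, not the released schema, which should be taken from https://kuairand.com.

import pandas as pd

def random_exposure_slice(log_path):
    # Keep only the randomly exposed interactions, i.e. the
    # missing-at-random portion usable for unbiased offline evaluation.
    df = pd.read_csv(log_path)
    return df[df["is_rand"] == 1][["user_id", "video_id", "is_click"]]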
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 08:18:27 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2022 16:05:14 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Gao",
"Chongming",
""
],
[
"Li",
"Shijun",
""
],
[
"Zhang",
"Yuan",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Li",
"Biao",
""
],
[
"Lei",
"Wenqiang",
""
],
[
"Jiang",
"Peng",
""
],
[
"He",
"Xiangnan",
""
]
] |
new_dataset
| 0.963149 |
2208.09829
|
Peter De Roovere
|
Peter De Roovere, Rembert Daems, Jonathan Croenen, Taoufik Bourgana,
Joris de Hoog and Francis Wyffels
|
CenDerNet: Center and Curvature Representations for Render-and-Compare
6D Pose Estimation
|
19 pages, 14 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce CenDerNet, a framework for 6D pose estimation from multi-view
images based on center and curvature representations. Finding precise poses for
reflective, textureless objects is a key challenge for industrial robotics. Our
approach consists of three stages: First, a fully convolutional neural network
predicts center and curvature heatmaps for each view; Second, center heatmaps
are used to detect object instances and find their 3D centers; Third, 6D object
poses are estimated using 3D centers and curvature heatmaps. By jointly
optimizing poses across views using a render-and-compare approach, our method
naturally handles occlusions and object symmetries. We show that CenDerNet
outperforms previous methods on two industry-relevant datasets: DIMO and
T-LESS.
|
[
{
"version": "v1",
"created": "Sun, 21 Aug 2022 07:37:04 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"De Roovere",
"Peter",
""
],
[
"Daems",
"Rembert",
""
],
[
"Croenen",
"Jonathan",
""
],
[
"Bourgana",
"Taoufik",
""
],
[
"de Hoog",
"Joris",
""
],
[
"Wyffels",
"Francis",
""
]
] |
new_dataset
| 0.998778 |
2208.10564
|
Aidan Boyd
|
Aidan Boyd, Jeremy Speth, Lucas Parzianello, Kevin Bowyer, Adam Czajka
|
State Of The Art In Open-Set Iris Presentation Attack Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research in presentation attack detection (PAD) for iris recognition has
largely moved beyond evaluation in "closed-set" scenarios, to emphasize ability
to generalize to presentation attack types not present in the training data.
This paper offers several contributions to understand and extend the
state-of-the-art in open-set iris PAD. First, it describes the most
authoritative evaluation to date of iris PAD. We have curated the largest
publicly-available image dataset for this problem, drawing from 26 benchmarks
previously released by various groups, and adding 150,000 images being released
with the journal version of this paper, to create a set of 450,000 images
representing authentic irises and seven types of presentation attack instrument
(PAI). We formulate a leave-one-PAI-out evaluation protocol, and show that even
the best algorithms in the closed-set evaluations exhibit catastrophic failures
on multiple attack types in the open-set scenario. This includes algorithms
performing well in the most recent LivDet-Iris 2020 competition, which may come
from the fact that the LivDet-Iris protocol emphasizes sequestered images
rather than unseen attack types. Second, we evaluate the accuracy of five
open-source iris presentation attack detection algorithms available today, one of which
is newly-proposed in this paper, and build an ensemble method that beats the
winner of the LivDet-Iris 2020 by a substantial margin. This paper demonstrates
that closed-set iris PAD, when all PAIs are known during training, is a solved
problem, with multiple algorithms showing very high accuracy, while open-set
iris PAD, when evaluated correctly, is far from being solved. The newly-created
dataset, new open-source algorithms, and evaluation protocol, made publicly
available with the journal version of this paper, provide the experimental
artifacts that researchers can use to measure progress on this important
problem.
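The leave-one-PAI-out protocol itself is easy to state in code; train_fn and eval_fn below are placeholders for any PAD model and metric.

def leave_one_pai_out(bona_fide, attacks_by_pai, train_fn, eval_fn):
    # attacks_by_pai maps each presentation attack instrument (PAI) name
    # to its samples; each round holds one PAI out entirely from training.
    results = {}
    for held_out, test_attacks in attacks_by_pai.items():
        train_attacks = [s for pai, samples in attacks_by_pai.items()
                         if pai != held_out for s in samples]
        model = train_fn(bona_fide, train_attacks)
        results[held_out] = eval_fn(model, bona_fide, test_attacks)
    return results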
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 19:40:59 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Boyd",
"Aidan",
""
],
[
"Speth",
"Jeremy",
""
],
[
"Parzianello",
"Lucas",
""
],
[
"Bowyer",
"Kevin",
""
],
[
"Czajka",
"Adam",
""
]
] |
new_dataset
| 0.999657 |
2208.10581
|
Abigail Wolf
|
Abigail Wolf, Shannon Brooks-Lehnert, and Keigo Hirakawa
|
EBSnoR: Event-Based Snow Removal by Optimal Dwell Time Thresholding
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose an Event-Based Snow Removal algorithm called EBSnoR. We developed
a technique to measure the dwell time of snowflakes on a pixel using
event-based camera data, which is used to carry out a Neyman-Pearson hypothesis
test to partition the event stream into snowflake and background events. The
effectiveness of the proposed EBSnoR was verified on a new dataset called
UDayton22EBSnow, comprising recordings from a front-facing event-based camera in a car driving
through snow with manually annotated bounding boxes around surrounding
vehicles. Qualitatively, EBSnoR correctly identifies events corresponding to
snowflakes; and quantitatively, EBSnoR-preprocessed event data improved the
performance of event-based car detection algorithms.
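A simplified sketch of dwell-time thresholding on an event stream; treating short dwells as snowflakes and approximating a dwell as the ON-to-OFF gap at a pixel are our assumptions, and the paper derives the threshold from a Neyman-Pearson test rather than fixing it by hand.

def label_events(events, tau):
    # events: (t, x, y, polarity) tuples sorted by time; polarity > 0 is ON.
    last_on = {}
    labels = []
    for t, x, y, p in events:
        label = "background"
        if p > 0:
            last_on[(x, y)] = t
        elif (x, y) in last_on and t - last_on.pop((x, y)) < tau:
            label = "snowflake"   # short dwell: transient occluder
        labels.append(label)
    return labels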
|
[
{
"version": "v1",
"created": "Mon, 22 Aug 2022 20:25:39 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Wolf",
"Abigail",
""
],
[
"Brooks-Lehnert",
"Shannon",
""
],
[
"Hirakawa",
"Keigo",
""
]
] |
new_dataset
| 0.999721 |
2208.10682
|
Jing Zhu
|
Jing Zhu, Danai Koutra, Mark Heimann
|
CAPER: Coarsen, Align, Project, Refine - A General Multilevel Framework
for Network Alignment
|
CIKM 2022
| null |
10.1145/3511808.3557563
| null |
cs.SI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Network alignment, or the task of finding corresponding nodes in different
networks, is an important problem formulation in many application domains. We
propose CAPER, a multilevel alignment framework that Coarsens the input graphs,
Aligns the coarsened graphs, Projects the alignment solution to finer levels
and Refines the alignment solution. We show that CAPER can improve upon many
different existing network alignment algorithms by enforcing alignment
consistency across multiple graph resolutions: nodes matched at finer levels
should also be matched at coarser levels. CAPER also accelerates the use of
slower network alignment methods, at the modest cost of linear-time coarsening
and refinement steps, by allowing them to be run on smaller coarsened versions
of the input graphs. Experiments show that CAPER can improve upon diverse
network alignment methods by an average of 33% in accuracy and/or an order of
magnitude faster in runtime.
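The four-stage structure is framework-level and can be sketched generically; the callables stand in for whatever coarsening scheme, base aligner, projection, and refinement one plugs in.

def caper(g1, g2, coarsen, align, project, refine, levels=3):
    pyramid = [(g1, g2)]
    for _ in range(levels - 1):                     # Coarsen
        g1, g2 = coarsen(g1), coarsen(g2)
        pyramid.append((g1, g2))
    matching = align(*pyramid[-1])                  # Align coarsest pair
    for fine1, fine2 in reversed(pyramid[:-1]):
        matching = project(matching, fine1, fine2)  # Project to finer level
        matching = refine(matching, fine1, fine2)   # Refine
    return matching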
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 02:04:56 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Zhu",
"Jing",
""
],
[
"Koutra",
"Danai",
""
],
[
"Heimann",
"Mark",
""
]
] |
new_dataset
| 0.993464 |
2208.10738
|
Marco Pesavento
|
Marco Pesavento, Marco Volino and Adrian Hilton
|
Super-resolution 3D Human Shape from a Single Low-Resolution Image
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel framework to reconstruct super-resolution human shape from
a single low-resolution input image. The approach overcomes limitations of
existing approaches that reconstruct 3D human shape from a single image, which
require high-resolution images together with auxiliary data such as surface
normal or a parametric model to reconstruct high-detail shape. The proposed
framework represents the reconstructed shape with a high-detail implicit
function. Analogous to the objective of 2D image super-resolution, the approach
learns the mapping from a low-resolution shape to its high-resolution
counterpart and it is applied to reconstruct 3D shape detail from
low-resolution images. The approach is trained end-to-end employing a novel
loss function which estimates the information lost between a low and
high-resolution representation of the same 3D surface shape. Evaluation for
single image reconstruction of clothed people demonstrates that our method
achieves high-detail surface reconstruction from low-resolution images without
auxiliary data. Extensive experiments show that the proposed approach can
estimate super-resolution human geometries with a significantly higher level of
detail than that obtained with previous approaches when applied to
low-resolution images.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 05:24:39 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Pesavento",
"Marco",
""
],
[
"Volino",
"Marco",
""
],
[
"Hilton",
"Adrian",
""
]
] |
new_dataset
| 0.997653 |
2208.10769
|
Zhangyang Xiong
|
Zhangyang Xiong, Dong Du, Yushuang Wu, Jingqi Dong, Di Kang, Linchao
Bao, and Xiaoguang Han
|
PIFu for the Real World: A Self-supervised Framework to Reconstruct
Dressed Human from Single-view Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is very challenging to accurately reconstruct, from a single image, the
sophisticated human geometry arising from various poses and garments. Recently, works based
on pixel-aligned implicit function (PIFu) have made a big step and achieved
state-of-the-art fidelity on image-based 3D human digitization. However, the
training of PIFu relies heavily on expensive and limited 3D ground truth data
(i.e. synthetic data), thus hindering its generalization to more diverse real
world images. In this work, we propose an end-to-end self-supervised network
named SelfPIFu to utilize abundant and diverse in-the-wild images, resulting in
largely improved reconstructions when tested on unconstrained in-the-wild
images. At the core of SelfPIFu is the depth-guided volume-/surface-aware
signed distance fields (SDF) learning, which enables self-supervised learning
of a PIFu without access to GT mesh. The whole framework consists of a normal
estimator, a depth estimator, and an SDF-based PIFu, and better utilizes extra
depth GT during training. Extensive experiments demonstrate the effectiveness
of our self-supervised framework and the superiority of using depth as input.
On synthetic data, our Intersection-Over-Union (IoU) reaches 93.5%, 18%
higher compared with PIFuHD. For in-the-wild images, we conduct user studies on
the reconstructed results; the selection rate of our results is over 68%
compared with other state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 07:00:44 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Xiong",
"Zhangyang",
""
],
[
"Du",
"Dong",
""
],
[
"Wu",
"Yushuang",
""
],
[
"Dong",
"Jingqi",
""
],
[
"Kang",
"Di",
""
],
[
"Bao",
"Linchao",
""
],
[
"Han",
"Xiaoguang",
""
]
] |
new_dataset
| 0.991332 |
2208.10773
|
Svetlana Pavlitskaya
|
Svetlana Pavlitskaya, Nikolai Polley, Michael Weber, J. Marius
Z\"ollner
|
Adversarial Vulnerability of Temporal Feature Networks for Object
Detection
|
Accepted for publication at ECCV 2022 SAIAD workshop
| null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Taking into account information across the temporal domain helps to improve
environment perception in autonomous driving. However, it has not been studied
so far whether temporally fused neural networks are vulnerable to deliberately
generated perturbations, i.e. adversarial attacks, or whether temporal history
is an inherent defense against them. In this work, we study whether temporal
feature networks for object detection are vulnerable to universal adversarial
attacks. We evaluate attacks of two types: imperceptible noise for the whole
image and locally-bound adversarial patch. In both cases, perturbations are
generated in a white-box manner using PGD. Our experiments confirm that
attacking even a portion of a temporal input suffices to fool the network. We
visually assess generated perturbations to gain insights into the functioning
of attacks. To enhance the robustness, we apply adversarial training using
K-PGD with K=5. Our experiments on the KITTI and nuScenes datasets demonstrate that a model
robustified via K-PGD is able to withstand the studied attacks while keeping
the mAP-based performance comparable to that of an unattacked model.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 07:08:54 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Pavlitskaya",
"Svetlana",
""
],
[
"Polley",
"Nikolai",
""
],
[
"Weber",
"Michael",
""
],
[
"Zöllner",
"J. Marius",
""
]
] |
new_dataset
| 0.985917 |
2208.10801
|
Bhavesh Laddagiri
|
Yash Raj and Bhavesh Laddagiri
|
MATra: A Multilingual Attentive Transliteration System for Indian
Scripts
|
10 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Transliteration is a task in the domain of NLP where the output word is a
similar-sounding word written using the letters of another language. Today
this system has been developed for several language pairs that involve English
as either the source or the target language and deployed in several places like Google
Translate and chatbots. However, there is very little research done in the
field of transliteration between Indic languages. This paper
demonstrates a multilingual model based on transformers (with some
modifications) that can give noticeably higher performance and accuracy than
all existing models in this domain and get much better results than
state-of-the-art models. This paper shows a model that can perform
transliteration between any pair among the following five languages - English,
Hindi, Bengali, Kannada and Tamil. It is applicable in scenarios where language
is a barrier to communication in any written task. The model beats the
state-of-the-art (for all pairs among the five mentioned languages - English,
Hindi, Bengali, Kannada, and Tamil) and achieves a top-1 accuracy score of
80.7%, about 29.5% higher than the best current results. Furthermore, the model
achieves 93.5% in terms of Phonetic Accuracy (transliteration is primarily a
phonetic/sound-based task).
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 08:14:29 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Raj",
"Yash",
""
],
[
"Laddagiri",
"Bhavesh",
""
]
] |
new_dataset
| 0.999383 |
2208.10839
|
Wouter Jansen
|
Wouter Jansen, Dennis Laurijssen, Robin Kerstens, Walter Daems, Jan
Steckel
|
In-Air Imaging Sonar Sensor Network with Real-Time Processing Using GPUs
|
2019 International Conference on P2P, Parallel, Grid, Cloud and
Internet Computing
| null |
10.1007/978-3-030-33509-0_67
| null |
cs.CV cs.NI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For autonomous navigation and robotic applications, sensing the environment
correctly is crucial. Many sensing modalities for this purpose exist. In recent
years, one such modality that is being used is in-air imaging sonar. It is
ideal in complex environments with rough conditions such as dust or fog.
However, as with most sensing modalities, to sense the full environment
around the mobile platform, multiple such sensors are needed to capture the
full 360-degree range. Currently the processing algorithms used to create this
data are insufficient to do so for multiple sensors at a reasonably fast update
rate. Furthermore, a flexible and robust framework is needed to easily
implement multiple imaging sonar sensors into any setup and serve multiple
application types for the data. In this paper we present a sensor network
framework designed for this novel sensing modality. Furthermore, an
implementation of the processing algorithm on a Graphics Processing Unit is
proposed to potentially decrease the computing time to allow for real-time
processing of one or more imaging sonar sensors at a sufficiently high update
rate.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 09:46:18 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Jansen",
"Wouter",
""
],
[
"Laurijssen",
"Dennis",
""
],
[
"Kerstens",
"Robin",
""
],
[
"Daems",
"Walter",
""
],
[
"Steckel",
"Jan",
""
]
] |
new_dataset
| 0.986862 |
2208.10867
|
ZongHeng Wei
|
Qinglin Liu, Zhiyong Lin, Zongheng Wei, Jianfeng Wen, Congming Yi and
Hai Liu
|
A Quinary Coding and Matrix Structure-based Channel Hopping Algorithm
for Blind Rendezvous in Cognitive Radio Networks
|
10 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The multi-channel blind rendezvous problem in distributed cognitive radio
networks (DCRNs) refers to how users in the network can hop to the same channel
at the same time slot without any prior knowledge (i.e., each user is unaware
of other users' information). The channel hopping (CH) technique is a typical
solution to this blind rendezvous problem. In this paper, we propose a quinary
coding and matrix structure-based CH algorithm called QCMS-CH. The QCMS-CH
algorithm can guarantee the rendezvous of users using only one cognitive radio
in the scenario of the asynchronous clock (i.e., arbitrary time drift between
the users), heterogeneous channels (i.e., the available channel sets of users
are distinct), and symmetric roles (i.e., all users play the same role). The
QCMS-CH algorithm first represents a randomly selected channel (denoted by R)
as a fixed-length quaternary number. Then it encodes the quaternary number into
a quinary bootstrapping sequence according to a carefully designed
quaternary-quinary coding table with the prefix "R00". Finally, it builds a CH
matrix column by column according to the bootstrapping sequence and six
different types of elaborately generated subsequences. The user can access the
CH matrix row by row and accordingly perform its channel hopping to attempt to
rendezvous with other users. We prove the correctness of QCMS-CH and derive an
upper bound on its Maximum Time-to-Rendezvous (MTTR). Simulation results show
that the QCMS-CH algorithm outperforms the state-of-the-art in terms of the
MTTR and the Expected Time-to-Rendezvous (ETTR).
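The first step of the construction, representing the selected channel R as a fixed-length quaternary number, is shown below; the quaternary-to-quinary coding table and the "R00" prefix are specific to the paper and not reproduced here.

def to_fixed_quaternary(r, length):
    # Base-4 digits of channel index r, most significant digit first,
    # zero-padded to 'length' digits, e.g. to_fixed_quaternary(11, 3)
    # returns [0, 2, 3].
    digits = []
    for _ in range(length):
        digits.append(r % 4)
        r //= 4
    return digits[::-1]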
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 10:48:36 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Liu",
"Qinglin",
""
],
[
"Lin",
"Zhiyong",
""
],
[
"Wei",
"Zongheng",
""
],
[
"Wen",
"Jianfeng",
""
],
[
"Yi",
"Congming",
""
],
[
"Liu",
"Hai",
""
]
] |
new_dataset
| 0.996561 |
2208.10906
|
Haoran Xie
|
Haoran Xie, Keisuke Arihara, Syuhei Sato, Kazunori Miyata
|
DualSmoke: Sketch-Based Smoke Illustration Design with Two-Stage
Generative Model
|
13 pages, 17 figures, video is here
https://www.youtube.com/watch?v=1zQFaxBMgTA
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
The dynamic effects of smoke are impressive in illustration design, but it is
a troublesome and challenging issue for common users to design the smoke effect
without domain knowledge of fluid simulations. In this work, we propose
DualSmoke, a two-stage global-to-local generation framework for interactive
smoke illustration design. For the global stage, the proposed approach utilizes
fluid patterns to generate Lagrangian coherent structures from the user's
hand-drawn sketches. For the local stage, the detailed flow patterns are
obtained from the generated coherent structure. Finally, we apply the guiding
force field to the smoke simulator to design the desired smoke illustration. To
construct the training dataset, DualSmoke generates flow patterns using the
finite-time Lyapunov exponents of the velocity fields. The synthetic sketch
data is generated from the flow patterns by skeleton extraction. From our user
study, it is verified that the proposed design interface can provide various
smoke illustration designs with good user usability. Our code is available at:
https://github.com/shasph/DualSmoke
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 12:30:32 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Xie",
"Haoran",
""
],
[
"Arihara",
"Keisuke",
""
],
[
"Sato",
"Syuhei",
""
],
[
"Miyata",
"Kazunori",
""
]
] |
new_dataset
| 0.989879 |
2208.10918
|
Cathy Jiao
|
Jessica Huynh, Shikib Mehri, Cathy Jiao and Maxine Eskenazi
|
The DialPort tools
|
Accepted to SIGDIAL 2022
| null | null | null |
cs.HC cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The DialPort project http://dialport.org/, funded by the National Science
Foundation (NSF), covers a group of tools and services that aim at fulfilling
the needs of the dialog research community. Over the course of six years,
several offerings have been created, including the DialPort Portal and
DialCrowd. This paper describes these contributions, which will be demoed at
SIGDIAL, including implementation, prior studies, corresponding discoveries,
and the locations at which the tools will remain freely available to the
community going forward.
|
[
{
"version": "v1",
"created": "Thu, 18 Aug 2022 19:22:36 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Huynh",
"Jessica",
""
],
[
"Mehri",
"Shikib",
""
],
[
"Jiao",
"Cathy",
""
],
[
"Eskenazi",
"Maxine",
""
]
] |
new_dataset
| 0.997052 |
2208.10922
|
Dongchan Min
|
Dongchan Min, Minyoung Song, Sung Ju Hwang
|
StyleTalker: One-shot Style-based Audio-driven Talking Head Video
Generation
| null | null | null | null |
cs.CV cs.LG eess.AS eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose StyleTalker, a novel audio-driven talking head generation model
that can synthesize a video of a talking person from a single reference image
with accurately audio-synced lip shapes, realistic head poses, and eye blinks.
Specifically, by leveraging a pretrained image generator and an image encoder,
we estimate the latent codes of the talking head video that faithfully reflects
the given audio. This is made possible with several newly devised components:
1) A contrastive lip-sync discriminator for accurate lip synchronization, 2) A
conditional sequential variational autoencoder that learns the latent motion
space disentangled from the lip movements, such that we can independently
manipulate the motions and lip movements while preserving the identity, and 3) an
auto-regressive prior augmented with normalizing flow to learn a complex
audio-to-motion multi-modal latent space. Equipped with these components,
StyleTalker can generate talking head videos not only in a motion-controllable
way when another motion source video is given but also in a completely
audio-driven manner by inferring realistic motions from the input audio.
Through extensive experiments and user studies, we show that our model is able
to synthesize talking head videos with impressive perceptual quality which are
accurately lip-synced with the input audios, largely outperforming
state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 12:49:01 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Min",
"Dongchan",
""
],
[
"Song",
"Minyoung",
""
],
[
"Hwang",
"Sung Ju",
""
]
] |
new_dataset
| 0.994664 |
2208.10926
|
John Jenq
|
Sagina Athikkal and John Jenq
|
Voice Chatbot for Hospitality
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
A chatbot is a machine with the ability to answer automatically through a
conversational interface, and it is considered one of the most exceptional and
promising expressions of human-computer interaction.
Voice-based chatbots or artificial intelligence devices transform
human-computer bidirectional interactions that allow users to navigate an
interactive voice response system with their voice generally using natural
language. In this paper, we focus on voice based chatbots for mediating
interactions between hotels and guests from both the hospitality technology
providers' and guests' perspectives. We developed a hotel web application with
the capability to receive voice input. The application was developed with a
speech recognition and deep synthesis API for voice-to-text and text-to-voice
conversion, and a closed-domain question answering NLP solution was used to
retrieve the answers.
|
[
{
"version": "v1",
"created": "Sat, 13 Aug 2022 18:41:35 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Athikkal",
"Sagina",
""
],
[
"Jenq",
"John",
""
]
] |
new_dataset
| 0.998782 |
2208.10928
|
Yoichi Yamazaki
|
Yoichi Yamazaki, Tsukuto Yamada, Hiroki Nomura, Nobuaki Hosoda, Ryoma
Kawamura, Kazuaki Takeuchi, Hiroaki Kato, Ryuma Niiyama, and Kentaro
Yoshifuji
|
Meta Avatar Robot Cafe: Linking Physical and Virtual Cybernetic Avatars
to Provide Physical Augmentation for People with Disabilities
|
SIGGRAPH '22 Emerging Technologies, 2022, 2 Pages. Project page:
https://bit.ly/metaavatarrobotcafe
| null |
10.1145/3532721.3546117
| null |
cs.HC cs.CY cs.GR cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Meta Avatar Robot Cafe is a cafe that fuses cyberspace and physical space to
create new encounters with people. We create a place where people with
disabilities who have difficulty going out can freely switch between their
physical bodies and virtual bodies, and communicate their presence and warmth
to each other.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2022 09:58:07 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Yamazaki",
"Yoichi",
""
],
[
"Yamada",
"Tsukuto",
""
],
[
"Nomura",
"Hiroki",
""
],
[
"Hosoda",
"Nobuaki",
""
],
[
"Kawamura",
"Ryoma",
""
],
[
"Takeuchi",
"Kazuaki",
""
],
[
"Kato",
"Hiroaki",
""
],
[
"Niiyama",
"Ryuma",
""
],
[
"Yoshifuji",
"Kentaro",
""
]
] |
new_dataset
| 0.997728 |
2208.10931
|
Jindan Xu
|
Jindan Xu, Chau Yuen, Chongwen Huang, Naveed Ul Hassan, George C.
Alexandropoulos, Marco Di Renzo, Merouane Debbah
|
Reconfiguring Wireless Environment via Intelligent Surfaces for 6G:
Reflection, Modulation, and Security
|
Submitted to Science China Information Sciences
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable intelligent surface (RIS) has been recognized as an essential
enabling technique for the sixth-generation (6G) mobile communication network.
Specifically, an RIS is comprised of a large number of small and low-cost
reflecting elements whose parameters are dynamically adjustable with a
programmable controller. Each of these elements can effectively reflect a
phase-shifted version of the incident electromagnetic wave. By adjusting the
wave phases in real time, the propagation environment of the reflected signals
can be dynamically reconfigured to enhance communication reliability, boost
transmission rate, expand cellular coverage, and strengthen communication
security. In this paper, we provide an overview on RIS-assisted wireless
communications. Specifically, we elaborate on the state-of-the-art enabling
techniques of RISs as well as their corresponding substantial benefits from the
perspectives of RIS reflection and RIS modulation. With these benefits, we
envision the integration of RIS into emerging applications for 6G. In addition,
communication security is of unprecedented importance in the 6G network with
ubiquitous wireless services in multifarious verticals and areas. We highlight
potential contributions of RIS to physical-layer security in terms of secrecy
rate and secrecy outage probability, exemplified by a typical case study from
both theoretical and numerical aspects. Finally, we discuss challenges and
opportunities on the deployment of RISs in practice to motivate future
research.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 13:02:30 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Xu",
"Jindan",
""
],
[
"Yuen",
"Chau",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Hassan",
"Naveed Ul",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Di Renzo",
"Marco",
""
],
[
"Debbah",
"Merouane",
""
]
] |
new_dataset
| 0.998971 |
2208.11015
|
Won-Yong Shin
|
Yu Hou, Cong Tran, Won-Yong Shin
|
META-CODE: Community Detection via Exploratory Learning in Topologically
Unknown Networks
|
31st ACM International Conference on Information and Knowledge
Management (CIKM 2022) (to appear) (Please cite our conference version.)
| null | null | null |
cs.SI cs.AI cs.IR cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
The discovery of community structures in social networks has gained
considerable attention as a fundamental problem for various network analysis
tasks. However, due to privacy concerns or access restrictions, the network
structure is often unknown, thereby rendering established community detection
approaches ineffective without costly data acquisition. To tackle this
challenge, we present META-CODE, a novel end-to-end solution for detecting
overlapping communities in networks with unknown topology via exploratory
learning aided by easy-to-collect node metadata. Specifically, META-CODE
consists of three steps: 1) initial network inference, 2) node-level
community-affiliation embedding based on graph neural networks (GNNs) trained
by our new reconstruction loss, and 3) network exploration via
community-affiliation-based node queries, where Steps 2 and 3 are performed
iteratively. Experimental results demonstrate that META-CODE exhibits (a)
superiority over benchmark methods for overlapping community detection, (b) the
effectiveness of our training model, and (c) fast network exploration.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 15:02:48 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Hou",
"Yu",
""
],
[
"Tran",
"Cong",
""
],
[
"Shin",
"Won-Yong",
""
]
] |
new_dataset
| 0.988816 |
2208.11024
|
Kiril Gashteovski
|
Haris Widjaja, Kiril Gashteovski, Wiem Ben Rim, Pengfei Liu,
Christopher Malon, Daniel Ruffinelli, Carolin Lawrence, Graham Neubig
|
KGxBoard: Explainable and Interactive Leaderboard for Evaluation of
Knowledge Graph Completion Models
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge Graphs (KGs) store information in the form of (head, predicate,
tail)-triples. To augment KGs with new knowledge, researchers proposed models
for KG Completion (KGC) tasks such as link prediction, i.e., answering (h, p, ?)
or (?, p, t) queries. Such models are usually evaluated with averaged
metrics on a held-out test set. While useful for tracking progress, averaged
single-score metrics cannot reveal what exactly a model has learned -- or
failed to learn. To address this issue, we propose KGxBoard: an interactive
framework for performing fine-grained evaluation on meaningful subsets of the
data, each of which tests individual and interpretable capabilities of a KGC
model. In our experiments, we highlight the findings that we discovered with
the use of KGxBoard, which would have been impossible to detect with standard
averaged single-score metrics.
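The core of fine-grained evaluation is scoring meaningful subsets separately, which the following generic sketch captures; bucket_fn and metric_fn are placeholders, not KGxBoard's actual API.

def bucketed_metric(test_triples, bucket_fn, metric_fn):
    # Group test examples into interpretable buckets (e.g. by relation
    # type or entity frequency) and report one score per bucket instead
    # of a single averaged number.
    buckets = {}
    for ex in test_triples:
        buckets.setdefault(bucket_fn(ex), []).append(ex)
    return {name: metric_fn(exs) for name, exs in buckets.items()}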
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 15:11:45 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Widjaja",
"Haris",
""
],
[
"Gashteovski",
"Kiril",
""
],
[
"Rim",
"Wiem Ben",
""
],
[
"Liu",
"Pengfei",
""
],
[
"Malon",
"Christopher",
""
],
[
"Ruffinelli",
"Daniel",
""
],
[
"Lawrence",
"Carolin",
""
],
[
"Neubig",
"Graham",
""
]
] |
new_dataset
| 0.991711 |
2208.11071
|
Yuhang Zhao
|
Tiger Ji, Brianna R. Cochran, Yuhang Zhao
|
VRBubble: Enhancing Peripheral Awareness of Avatars for People with
Visual Impairments in Social Virtual Reality
|
The 24th International ACM SIGACCESS Conference on Computers and
Accessibility (ASSETS '22), 17 pages, 7 figures
| null |
10.1145/3517428.3544821
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Social Virtual Reality (VR) is growing for remote socialization and
collaboration. However, current social VR applications are not accessible to
people with visual impairments (PVI) due to their focus on visual experiences.
We aim to facilitate social VR accessibility by enhancing PVI's peripheral
awareness of surrounding avatar dynamics. We designed VRBubble, an audio-based
VR technique that provides surrounding avatar information based on social
distances. Based on Hall's proxemic theory, VRBubble divides the social space
with three Bubbles -- Intimate, Conversation, and Social Bubble -- generating
spatial audio feedback to distinguish avatars in different bubbles and provide
suitable avatar information. We provide three audio alternatives: earcons,
verbal notifications, and real-world sound effects. PVI can select and combine
their preferred feedback alternatives for different avatars, bubbles, and
social contexts. We evaluated VRBubble and an audio beacon baseline with 12 PVI
in a navigation and a conversation context. We found that VRBubble
significantly enhanced participants' avatar awareness during navigation and
enabled avatar identification in both contexts. However, VRBubble was shown to
be more distracting in crowded environments.
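The bubble assignment itself reduces to distance thresholds, as in this sketch; the radii here are illustrative, not the study's calibrated values, and users may map any feedback type to each bubble.

import math

def bubble_of(listener_pos, avatar_pos, intimate=1.2, conversation=3.6):
    d = math.dist(listener_pos, avatar_pos)
    if d <= intimate:
        return "Intimate"      # e.g. real-world sound effects
    if d <= conversation:
        return "Conversation"  # e.g. verbal notifications
    return "Social"            # e.g. earcons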
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 16:27:17 GMT"
}
] | 2022-08-24T00:00:00 |
[
[
"Ji",
"Tiger",
""
],
[
"Cochran",
"Brianna R.",
""
],
[
"Zhao",
"Yuhang",
""
]
] |
new_dataset
| 0.978512 |
1910.06461
|
Yushan Li
|
Yushan Li, Jianping He, Cailian Chen, and Xinping Guan
|
Intelligent Physical Attack Against Mobile Robots With
Obstacle-Avoidance
|
19 pages; accepted by IEEE Transactions on Robotics
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The security issue of mobile robots has attracted considerable attention in
recent years. In this paper, we propose an intelligent physical attack to trap
mobile robots into a preset position by learning the obstacle-avoidance
mechanism from external observation. The salient novelty of our work lies in
revealing the possibility that physical-based attacks with intelligent and
advanced design can present real threats, even without prior knowledge of the
system dynamics or access to the internal system. This kind of attack cannot be
handled by countermeasures in traditional cyberspace security. In practice, the
cornerstone of the proposed attack is to actively explore the complex
interaction characteristic of the victim robot with the environment, and learn
the obstacle-avoidance knowledge exhibited in the limited observations of its
behaviors. Then, we propose shortest-path and hands-off attack algorithms to
find efficient attack paths from the tremendous motion space, achieving the
driving-to-trap goal with low costs in terms of path length and activity
period, respectively. The convergence of the algorithms is proved and the
attack performance bounds are further derived. Extensive simulations and
real-life experiments illustrate the effectiveness of the proposed attack,
beckoning future investigation for the new physical threats and defense on
robotic systems.
|
[
{
"version": "v1",
"created": "Mon, 14 Oct 2019 23:38:08 GMT"
},
{
"version": "v2",
"created": "Sat, 20 Aug 2022 13:39:22 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Li",
"Yushan",
""
],
[
"He",
"Jianping",
""
],
[
"Chen",
"Cailian",
""
],
[
"Guan",
"Xinping",
""
]
] |
new_dataset
| 0.991377 |
2103.04053
|
Fengbei Liu
|
Fengbei Liu, Yuanhong Chen, Yu Tian, Yuyuan Liu, Chong Wang, Vasileios
Belagiannis, Gustavo Carneiro
|
NVUM: Non-Volatile Unbiased Memory for Robust Medical Image
Classification
|
MICCAI 2022 Early Accept
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Real-world large-scale medical image analysis (MIA) datasets have three
challenges: 1) they contain noisy-labelled samples that affect training
convergence and generalisation, 2) they usually have an imbalanced distribution
of samples per class, and 3) they normally comprise a multi-label problem,
where samples can have multiple diagnoses. Current approaches are commonly
trained to solve a subset of those problems, but we are unaware of methods that
address the three problems simultaneously. In this paper, we propose a new
training module called Non-Volatile Unbiased Memory (NVUM), which
non-volatilely stores a running average of model logits for a new
regularization loss on the noisy multi-label problem. We further unbias the
classification prediction in the NVUM update to address the imbalanced
learning problem. We run extensive
experiments to evaluate NVUM on new benchmarks proposed by this paper, where
training is performed on noisy multi-label imbalanced chest X-ray (CXR)
training sets, formed by Chest-Xray14 and CheXpert, and the testing is
performed on the clean multi-label CXR datasets OpenI and PadChest. Our method
outperforms previous state-of-the-art CXR classifiers and previous methods that
can deal with noisy labels on all evaluations. Our code is available at
https://github.com/FBLADL/NVUM.
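The memory component can be pictured as a per-sample exponential moving average of logits, as below; the momentum value and the exact regularization and unbiasing terms built on top of the bank are assumptions.

import numpy as np

class LogitMemory:
    def __init__(self, num_samples, num_classes, momentum=0.9):
        self.m = momentum
        self.bank = np.zeros((num_samples, num_classes))  # persists across epochs

    def update(self, indices, logits):
        # Blend new logits into the stored running average and return it.
        self.bank[indices] = self.m * self.bank[indices] + (1 - self.m) * logits
        return self.bank[indices]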
|
[
{
"version": "v1",
"created": "Sat, 6 Mar 2021 07:42:36 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 07:11:22 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Mar 2022 09:45:16 GMT"
},
{
"version": "v4",
"created": "Thu, 9 Jun 2022 07:24:59 GMT"
},
{
"version": "v5",
"created": "Fri, 17 Jun 2022 10:15:57 GMT"
},
{
"version": "v6",
"created": "Sun, 21 Aug 2022 05:56:50 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Liu",
"Fengbei",
""
],
[
"Chen",
"Yuanhong",
""
],
[
"Tian",
"Yu",
""
],
[
"Liu",
"Yuyuan",
""
],
[
"Wang",
"Chong",
""
],
[
"Belagiannis",
"Vasileios",
""
],
[
"Carneiro",
"Gustavo",
""
]
] |
new_dataset
| 0.991069 |
2106.01917
|
Fabian Bauer-Marquart
|
Fabian Bauer-Marquart, David Boetius, Stefan Leue, Christian Schilling
|
SpecRepair: Counter-Example Guided Safety Repair of Deep Neural Networks
|
This is the extended version of a paper with the same title that
appeared at SPIN 2022
|
SPIN 2022
|
10.1007/978-3-031-15077-7_5
| null |
cs.LG cs.LO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Deep neural networks (DNNs) are increasingly applied in safety-critical
domains, such as self-driving cars, unmanned aircraft, and medical diagnosis.
It is of fundamental importance to certify the safety of these DNNs, i.e. that
they comply with a formal safety specification. While safety certification
tools exactly answer this question, they are of no help in debugging unsafe
DNNs, requiring the developer to iteratively verify and modify the DNN until
safety is eventually achieved. Hence, a repair technique needs to be developed
that can produce a safe DNN automatically. To address this need, we present
SpecRepair, a tool that efficiently eliminates counter-examples from a DNN and
produces a provably safe DNN without harming its classification accuracy.
SpecRepair combines specification-based counter-example search with resumed
training of the DNN, penalizing counter-examples and certifying the resulting
DNN. We evaluate SpecRepair's effectiveness on the ACAS Xu benchmark, a
DNN-based controller for unmanned aircraft, and two image classification
benchmarks. The results show that SpecRepair is more successful in producing
safe DNNs than comparable methods, has a shorter runtime, and produces safe
DNNs while preserving their classification accuracy.
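The verify-search-retrain loop at the heart of such counter-example guided repair is easy to sketch; the four callables are placeholders for the tool's actual components.

def repair_loop(model, verify, find_counter_examples, retrain, max_rounds=10):
    for _ in range(max_rounds):
        if verify(model):                   # certification succeeded
            return model
        cex = find_counter_examples(model)  # specification violations
        model = retrain(model, cex)         # resume training, penalizing cex
    return None                             # repair did not converge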
|
[
{
"version": "v1",
"created": "Thu, 3 Jun 2021 15:09:43 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Oct 2021 14:04:24 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Mar 2022 08:15:22 GMT"
},
{
"version": "v4",
"created": "Mon, 9 May 2022 08:07:19 GMT"
},
{
"version": "v5",
"created": "Thu, 12 May 2022 13:43:46 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Bauer-Marquart",
"Fabian",
""
],
[
"Boetius",
"David",
""
],
[
"Leue",
"Stefan",
""
],
[
"Schilling",
"Christian",
""
]
] |
new_dataset
| 0.960599 |
2111.06014
|
Mark Presten
|
Mark Presten, Yahav Avigal, Mark Theis, Satvik Sharma, Rishi Parikh,
Shrey Aeron, Sandeep Mukherjee, Sebastian Oehme, Simeon Adebola, Walter
Teitelbaum, Varun Kamat and Ken Goldberg
|
AlphaGarden: Learning to Autonomously Tend a Polyculture Garden
|
Paper revised, extended, and resubmitted. See "Automated Pruning of
Polyculture Plants."
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents AlphaGarden: an autonomous polyculture garden that prunes
and irrigates living plants in a 1.5m x 3.0m physical testbed. AlphaGarden uses
an overhead camera and sensors to track the plant distribution and soil
moisture. We model individual plant growth and interplant dynamics to train a
policy that chooses actions to maximize leaf coverage and diversity. For
autonomous pruning, AlphaGarden uses two custom-designed pruning tools and a
trained neural network to detect prune points. We present results for four
60-day garden cycles. Results suggest AlphaGarden can autonomously achieve 0.96
normalized diversity with pruning shears while maintaining an average canopy
coverage of 0.86 during the peak of the cycle. Code, datasets, and supplemental
material can be found at https://github.com/BerkeleyAutomation/AlphaGarden.
|
[
{
"version": "v1",
"created": "Thu, 11 Nov 2021 01:55:54 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 17:51:48 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Presten",
"Mark",
""
],
[
"Avigal",
"Yahav",
""
],
[
"Theis",
"Mark",
""
],
[
"Sharma",
"Satvik",
""
],
[
"Parikh",
"Rishi",
""
],
[
"Aeron",
"Shrey",
""
],
[
"Mukherjee",
"Sandeep",
""
],
[
"Oehme",
"Sebastian",
""
],
[
"Adebola",
"Simeon",
""
],
[
"Teitelbaum",
"Walter",
""
],
[
"Kamat",
"Varun",
""
],
[
"Goldberg",
"Ken",
""
]
] |
new_dataset
| 0.9988 |
2112.11258
|
Chamira Edussooriya
|
Dishanika Denipitiyage, Vinoj Jayasundara, Ranga Rodrigo, Chamira U.
S. Edussooriya
|
PointCaps: Raw Point Cloud Processing using Capsule Networks with
Euclidean Distance Routing
|
Accepted to be published in Journal of Visual Communication and Image
Representation (Elsevier), 16 Pages, 4 Figures, 5 Tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Raw point cloud processing using capsule networks is widely adopted in
classification, reconstruction, and segmentation due to its ability to preserve
spatial agreement of the input data. However, most of the existing capsule
based network approaches are computationally heavy and fail at representing the
entire point cloud as a single capsule. We address these limitations in
existing capsule network based approaches by proposing PointCaps, a novel
convolutional capsule architecture with parameter sharing. Along with
PointCaps, we propose a novel Euclidean distance routing algorithm and a
class-independent latent representation. The latent representation captures
physically interpretable geometric parameters of the point cloud. With dynamic
Euclidean routing, PointCaps represents well the spatial (point-to-part)
relationships of points. PointCaps has a significantly lower number of
parameters and requires a significantly lower number of FLOPs while achieving
better reconstruction with comparable classification and segmentation accuracy
for raw point clouds compared to state-of-the-art capsule networks.
|
[
{
"version": "v1",
"created": "Tue, 21 Dec 2021 14:34:39 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Aug 2022 02:44:48 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Denipitiyage",
"Dishanika",
""
],
[
"Jayasundara",
"Vinoj",
""
],
[
"Rodrigo",
"Ranga",
""
],
[
"Edussooriya",
"Chamira U. S.",
""
]
] |
new_dataset
| 0.976054 |
2203.00836
|
Jun Wang
|
Xianbin Ye, Ziliang Li, Fei Ma, Zongbi Yi, Pengyong Li, Jun Wang, Peng
Gao, Yixuan Qiao, Guotong Xie
|
CandidateDrug4Cancer: An Open Molecular Graph Learning Benchmark on Drug
Discovery for Cancer
|
Accepted by Workshop on Graph Learning Benchmarks, The Web Conference
2021
| null | null | null |
cs.LG q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Anti-cancer drug discoveries have been serendipitous, so we sought to present
the Open Molecular Graph Learning Benchmark, named CandidateDrug4Cancer, a
challenging and realistic benchmark dataset to facilitate scalable, robust, and
reproducible graph machine learning research for anti-cancer drug discovery.
The CandidateDrug4Cancer dataset encompasses 29 of the most-mentioned targets
for cancer, covering 54,869 cancer-related drug molecules ranging from
pre-clinical and clinical to FDA-approved. Besides building the datasets, we also
perform benchmark experiments with effective Drug Target Interaction (DTI)
prediction baselines using descriptors and expressive graph neural networks.
Experimental results suggest that CandidateDrug4Cancer presents significant
challenges for learning molecular graphs and targets in practical applications,
indicating opportunities for future research on developing candidate drugs for
treating cancers.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 03:09:50 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 03:52:54 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Ye",
"Xianbin",
""
],
[
"Li",
"Ziliang",
""
],
[
"Ma",
"Fei",
""
],
[
"Yi",
"Zongbi",
""
],
[
"Li",
"Pengyong",
""
],
[
"Wang",
"Jun",
""
],
[
"Gao",
"Peng",
""
],
[
"Qiao",
"Yixuan",
""
],
[
"Xie",
"Guotong",
""
]
] |
new_dataset
| 0.999837 |
2203.10359
|
Philippos Papaphilippou
|
Philippos Papaphilippou, Myrtle Shah
|
FPGA-extended General Purpose Computer Architecture
|
Accepted at the 18th International Symposium on Applied
Reconfigurable Computing (ARC) 2022
| null | null | null |
cs.AR cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a computer architecture, where part of the instruction
set architecture (ISA) is implemented on small highly-integrated
field-programmable gate arrays (FPGAs). Small FPGAs inside a general-purpose
processor (CPU) can be used effectively to implement custom or standardised
instructions. Our proposed architecture directly addresses related challenges for
high-end CPUs, where such highly-integrated FPGAs would have the highest
impact, such as on main memory bandwidth. This also enables
software-transparent context-switching. The simulation-based evaluation of a
dynamically reconfigurable core shows promising results approaching the
performance of an equivalent core with all enabled instructions. Finally, the
feasibility of adopting the proposed architecture in today's CPUs is studied
through prototyping fast-reconfigurable FPGAs and studying the miss
behaviour of opcodes.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2022 17:21:01 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Apr 2022 22:27:14 GMT"
},
{
"version": "v3",
"created": "Sun, 21 Aug 2022 21:00:41 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Papaphilippou",
"Philippos",
""
],
[
"Shah",
"Myrtle",
""
]
] |
new_dataset
| 0.999202 |
2203.13430
|
Satyajit Ghosh
|
Satyajit Ghosh, Aniruddha Ghosh, Bittaswer Ghosh, and Abhishek Roy
|
Plagiarism Detection in the Bengali Language: A Text Similarity-Based
Approach
|
ACCEPTED AT 3RD INTERNATIONAL CONFERENCE ON ENGINEERING AND
ADVANCEMENT IN TECHNOLOGY (ICEAT 2022)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Plagiarism means taking another person's work without giving them any credit
for it. Plagiarism is one of the most serious problems in academia and among
researchers. Multiple tools are available to detect plagiarism in a document,
but most of them are domain-specific and designed to work with English texts,
even though plagiarism is not limited to a single language. Bengali is the
most widely spoken language of Bangladesh and the second most spoken language
in India, with 300 million native speakers and 37 million second-language
speakers. Plagiarism detection requires a large corpus for comparison.
Although Bengali literature has a history of 1,300 years, most Bengali
literature books have not yet been properly digitized. As no such corpus
existed for our purpose, we collected Bengali literature books from the
National Digital Library of India, extracted their texts with a comprehensive
methodology, and constructed our own corpus. Our experiments show an average
text-extraction accuracy of 72.10%-79.89% using OCR. The Levenshtein distance
algorithm is used to determine plagiarism. We have built a web application
for end users and successfully tested it for plagiarism detection in Bengali
texts. In the future, we aim to construct a corpus with more books for more
accurate detection.
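
Since the abstract names the Levenshtein distance as the matching criterion,
here is a minimal sketch of how an edit-distance similarity score can be
computed over a pair of texts. The normalisation and the sample strings are
illustrative assumptions, not the paper's exact settings.

def levenshtein(a, b):
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    # Normalise edit distance into a 0..1 similarity score.
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

suspect = "আমার সোনার বাংলা"   # illustrative Bengali strings
source = "আমার সোনার বাংলা।"
print(similarity(suspect, source))  # flag as plagiarism above a threshold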
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 03:11:00 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2022 14:44:46 GMT"
},
{
"version": "v3",
"created": "Sat, 20 Aug 2022 15:40:27 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Ghosh",
"Satyajit",
""
],
[
"Ghosh",
"Aniruddha",
""
],
[
"Ghosh",
"Bittaswer",
""
],
[
"Roy",
"Abhishek",
""
]
] |
new_dataset
| 0.999716 |
2205.12311
|
Fabr\'icio Ceschin
|
Fabr\'icio Ceschin, Marcus Botacin, Heitor Murilo Gomes, Felipe
Pinag\'e, Luiz S. Oliveira, Andr\'e Gr\'egio
|
Fast & Furious: Modelling Malware Detection as Evolving Data Streams
| null | null |
10.1016/j.eswa.2022.118590
| null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Malware is a major threat to computer systems and imposes many challenges to
cyber security. Targeted threats, such as ransomware, cause millions of dollars
in losses every year. The constant increase of malware infections has been
motivating popular antiviruses (AVs) to develop dedicated detection strategies,
which include meticulously crafted machine learning (ML) pipelines. However,
malware developers unceasingly change their samples' features to bypass
detection. This constant evolution of malware samples causes changes to the
data distribution (i.e., concept drifts) that directly affect ML model
detection rates, something not considered in the majority of the literature.
In this work, we evaluate the impact of concept drift on malware
classifiers for two Android datasets: DREBIN (about 130K apps) and a subset of
AndroZoo (about 285K apps). We used these datasets to train an Adaptive Random
Forest (ARF) classifier, as well as a Stochastic Gradient Descent (SGD)
classifier. We also ordered all dataset samples by their VirusTotal
submission timestamp and then extracted features from their textual attributes
using two algorithms (Word2Vec and TF-IDF). Then, we conducted experiments
comparing both feature extractors, classifiers, as well as four drift detectors
(DDM, EDDM, ADWIN, and KSWIN) to determine the best approach for real
environments. Finally, we compare some possible approaches to mitigate concept
drift and propose a novel data stream pipeline that updates both the classifier
and the feature extractor. To do so, we conducted a longitudinal evaluation by
(i) classifying malware samples collected over nine years (2009-2018), (ii)
reviewing concept drift detection algorithms to attest to its pervasiveness,
(iii)
comparing distinct ML approaches to mitigate the issue, and (iv) proposing an
ML data stream pipeline that outperformed literature approaches.
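
To make the drift-detector comparison concrete, below is a minimal
pure-Python sketch of DDM, the first of the four detectors named above. The
2-sigma warning and 3-sigma drift thresholds follow the standard DDM
formulation; the binary error stream is synthetic and only illustrates an
abrupt concept change, not the paper's malware data.

import math
import random

class DDM:
    # Drift Detection Method: monitors the running error rate p and its
    # standard deviation s, and flags drift when p + s rises well above
    # the best (lowest) p_min + s_min seen so far.
    def __init__(self, min_samples=30):
        self.min_samples = min_samples
        self._reset()

    def _reset(self):
        self.n = 0
        self.p = 1.0
        self.s = 0.0
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        # Feed 1 for a misclassification, 0 for a correct prediction.
        self.n += 1
        self.p += (error - self.p) / self.n          # incremental mean
        self.s = math.sqrt(self.p * (1.0 - self.p) / self.n)
        if self.n < self.min_samples:
            return "stable"
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s  # new best operating point
        if self.p + self.s >= self.p_min + 3.0 * self.s_min:
            self._reset()                            # drift confirmed
            return "drift"
        if self.p + self.s >= self.p_min + 2.0 * self.s_min:
            return "warning"
        return "stable"

# Synthetic error stream: 5% error rate, then an abrupt jump to 50%.
random.seed(0)
stream = [int(random.random() < 0.05) for _ in range(500)]
stream += [int(random.random() < 0.50) for _ in range(200)]
ddm = DDM()
for t, err in enumerate(stream):
    if ddm.update(err) == "drift":
        print("drift detected at sample", t)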
|
[
{
"version": "v1",
"created": "Tue, 24 May 2022 18:43:40 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2022 17:22:51 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Aug 2022 03:50:31 GMT"
}
] | 2022-08-23T00:00:00 |
[
[
"Ceschin",
"Fabrício",
""
],
[
"Botacin",
"Marcus",
""
],
[
"Gomes",
"Heitor Murilo",
""
],
[
"Pinagé",
"Felipe",
""
],
[
"Oliveira",
"Luiz S.",
""
],
[
"Grégio",
"André",
""
]
] |
new_dataset
| 0.999348 |