id (string, 9-10 chars) | submitter (string, 2-52 chars, nullable) | authors (string, 4-6.51k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, nullable) | journal-ref (string, 4-345 chars, nullable) | doi (string, 11-120 chars, nullable) | report-no (string, 2-243 chars, nullable) | categories (string, 5-98 chars) | license (string, 9 classes) | abstract (string, 33-3.33k chars) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prediction (string, 1 class) | probability (float64, 0.95-1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2107.14171
|
Huayu Chen
|
Jiayi Weng, Huayu Chen, Dong Yan, Kaichao You, Alexis Duburcq, Minghao
Zhang, Yi Su, Hang Su, Jun Zhu
|
Tianshou: a Highly Modularized Deep Reinforcement Learning Library
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present Tianshou, a highly modularized Python library for
deep reinforcement learning (DRL) that uses PyTorch as its backend. Tianshou
intends to be research-friendly by providing a flexible and reliable
infrastructure of DRL algorithms. It supports online and offline training with
more than 20 classic algorithms through a unified interface. To facilitate
related research and prove Tianshou's reliability, we have released Tianshou's
benchmark of MuJoCo environments, covering eight classic algorithms with
state-of-the-art performance. We open-sourced Tianshou at
https://github.com/thu-ml/tianshou/.
|
[
{
"version": "v1",
"created": "Thu, 29 Jul 2021 16:49:03 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Sep 2021 07:17:45 GMT"
},
{
"version": "v3",
"created": "Wed, 10 Aug 2022 16:20:23 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Weng",
"Jiayi",
""
],
[
"Chen",
"Huayu",
""
],
[
"Yan",
"Dong",
""
],
[
"You",
"Kaichao",
""
],
[
"Duburcq",
"Alexis",
""
],
[
"Zhang",
"Minghao",
""
],
[
"Su",
"Yi",
""
],
[
"Su",
"Hang",
""
],
[
"Zhu",
"Jun",
""
]
] |
new_dataset
| 0.98237 |
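The Tianshou record above centers on a unified training interface, so a short usage sketch may help. This is a hedged illustration based on Tianshou's documented pre-1.0 (v0.4-era) high-level API (`Net`, `DQNPolicy`, `Collector`, `VectorReplayBuffer`, `offpolicy_trainer`); exact signatures and keyword arguments differ between releases, and all hyperparameters here are arbitrary choices, not values from the paper.

```python
# Hedged sketch of Tianshou's unified interface (v0.4-era API; names and
# keyword arguments may differ in other releases).
import gym
import torch
from tianshou.data import Collector, VectorReplayBuffer
from tianshou.env import DummyVectorEnv
from tianshou.policy import DQNPolicy
from tianshou.trainer import offpolicy_trainer
from tianshou.utils.net.common import Net

env = gym.make("CartPole-v1")
train_envs = DummyVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(8)])
test_envs = DummyVectorEnv([lambda: gym.make("CartPole-v1") for _ in range(2)])

# One network/optimizer pair; the policy object wraps the algorithm logic.
net = Net(state_shape=env.observation_space.shape,
          action_shape=env.action_space.n, hidden_sizes=[128, 128])
optim = torch.optim.Adam(net.parameters(), lr=1e-3)
policy = DQNPolicy(net, optim, discount_factor=0.99, target_update_freq=320)

# Collectors run the policy in (vectorized) envs and fill the replay buffer.
train_collector = Collector(policy, train_envs, VectorReplayBuffer(20000, 8))
test_collector = Collector(policy, test_envs)

result = offpolicy_trainer(
    policy, train_collector, test_collector,
    max_epoch=10, step_per_epoch=10000, step_per_collect=8,
    episode_per_test=10, batch_size=64, update_per_step=0.125)
print(result)
```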
2110.13846
|
Yixiao Zhang
|
Yixiao Zhang, Adam Kortylewski, Qing Liu, Seyoun Park, Benjamin Green,
Elizabeth Engle, Guillermo Almodovar, Ryan Walk, Sigfredo Soto-Diaz, Janis
Taube, Alex Szalay, and Alan Yuille
|
A Light-weight Interpretable Compositional Model for Nuclei Detection
and Weakly-Supervised Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of computational pathology has witnessed great advancements since
deep neural networks have been widely applied. These networks usually require
large amounts of annotated data to train their vast numbers of parameters.
However, it takes significant effort to annotate a large histopathology
dataset. We introduce a light-weight and interpretable model for nuclei
detection and weakly-supervised segmentation. It only requires annotations on
isolated nuclei, rather than on all nuclei in the dataset. Moreover, it is a
generative compositional model that first locates parts of the nucleus, then
learns the spatial correlation of the parts to further locate the nucleus.
This process brings interpretability into
its prediction. Empirical results on an in-house dataset show that in
detection, the proposed method achieved comparable or better performance than
its deep network counterparts, especially when the annotated data is limited.
It also outperforms popular weakly-supervised segmentation methods. The
proposed method could be an alternative solution for the data-hungry problem of
deep learning methods.
|
[
{
"version": "v1",
"created": "Tue, 26 Oct 2021 16:44:08 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Aug 2022 00:57:51 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Zhang",
"Yixiao",
""
],
[
"Kortylewski",
"Adam",
""
],
[
"Liu",
"Qing",
""
],
[
"Park",
"Seyoun",
""
],
[
"Green",
"Benjamin",
""
],
[
"Engle",
"Elizabeth",
""
],
[
"Almodovar",
"Guillermo",
""
],
[
"Walk",
"Ryan",
""
],
[
"Soto-Diaz",
"Sigfredo",
""
],
[
"Taube",
"Janis",
""
],
[
"Szalay",
"Alex",
""
],
[
"Yuille",
"Alan",
""
]
] |
new_dataset
| 0.988075 |
2203.05189
|
Yinhuai Wang
|
Yinhuai Wang, Shuzhou Yang, Yujie Hu and Jian Zhang
|
NeRFocus: Neural Radiance Field for 3D Synthetic Defocus
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural radiance fields (NeRF) bring a new wave of 3D interactive experiences.
However, defocus effects, an important part of immersive experiences, have not
been fully explored within NeRF. Some recent
NeRF-based methods generate 3D defocus effects in a post-process fashion by
utilizing multiplane technology. Still, they are either time-consuming or
memory-consuming. This paper proposes a novel thin-lens-imaging-based NeRF
framework that can directly render various 3D defocus effects, dubbed NeRFocus.
Unlike a pinhole, a thin lens refracts the rays of a scene point, so its
image on the sensor plane is scattered as a circle of confusion (CoC). A
direct solution that samples enough rays to approximate this process is
computationally expensive. Instead, we propose to invert the thin-lens imaging
process to explicitly model the beam path for each point on the sensor plane and
generalize this paradigm to the beam path of each pixel, then use the
frustum-based volume rendering to render each pixel's beam path. We further
design an efficient probabilistic training (p-training) strategy to simplify
the training process vastly. Extensive experiments demonstrate that our
NeRFocus can achieve various 3D defocus effects with adjustable camera pose,
focus distance, and aperture size. Existing NeRF can be regarded as a special
case of NeRFocus in which the aperture size is set to zero, rendering large
depth-of-field images (see the thin-lens sketch after this record).
Despite such merits, NeRFocus does not sacrifice NeRF's original performance
(e.g., training and inference time, parameter consumption, rendering quality),
which implies its great potential for broader application and further
improvement. Code and video are available at
https://github.com/wyhuai/NeRFocus.
|
[
{
"version": "v1",
"created": "Thu, 10 Mar 2022 06:59:10 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Aug 2022 06:36:19 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Wang",
"Yinhuai",
""
],
[
"Yang",
"Shuzhou",
""
],
[
"Hu",
"Yujie",
""
],
[
"Zhang",
"Jian",
""
]
] |
new_dataset
| 0.975916 |
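The NeRFocus abstract above builds on thin-lens imaging and the circle of confusion (CoC); the sketch below works through the standard textbook CoC geometry, which is our reading of the background rather than the paper's exact parameterization. All numbers (focal length, aperture, distances) are made up for illustration.

```python
# Hedged illustration of the standard thin-lens circle-of-confusion (CoC)
# geometry referenced above; textbook optics, not the paper's exact model.
# All quantities are in millimeters.

def image_distance(f, d):
    """Thin-lens equation 1/f = 1/d + 1/i, solved for the image distance i."""
    return f * d / (d - f)

def coc_diameter(f, aperture, focus_dist, depth):
    """Diameter of the blur circle on the sensor for a point at `depth`
    when the lens is focused at `focus_dist`."""
    sensor = image_distance(f, focus_dist)   # sensor plane position
    img = image_distance(f, depth)           # where the point actually focuses
    return aperture * abs(img - sensor) / img

f, aperture, focus_dist = 50.0, 25.0, 2000.0   # 50 mm lens, f/2, focused at 2 m
for depth in (1000.0, 2000.0, 4000.0, 8000.0):
    c = coc_diameter(f, aperture, focus_dist, depth)
    print(f"depth {depth/1000:.0f} m -> CoC {c:.3f} mm")
# A pinhole (aperture -> 0) gives CoC -> 0, i.e., the all-in-focus NeRF case.
```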
2204.00486
|
Yuxuan Wang
|
Yuxuan Wang, Difei Gao, Licheng Yu, Stan Weixian Lei, Matt Feiszli,
Mike Zheng Shou
|
GEB+: A Benchmark for Generic Event Boundary Captioning, Grounding and
Retrieval
|
In Proceedings of the European Conference on Computer Vision 2022
[ECCV 2022]
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Cognitive science has shown that humans perceive videos in terms of events
separated by the state changes of dominant subjects. State changes trigger new
events and are among the most useful pieces of the largely redundant
information perceived. However, previous research focuses on the overall
understanding of segments without evaluating the fine-grained status changes
inside. In this paper, we introduce a new dataset called Kinetic-GEB+. The
dataset consists of over 170k boundaries associated with captions describing
status changes in the generic events in 12K videos. Building on this new
dataset, we propose three tasks supporting the development of a more
fine-grained, robust, and human-like understanding of videos through status
changes. We evaluate many representative baselines on our dataset, where we
also design a new TPD (Temporal-based Pairwise Difference) modeling method for
visual difference and achieve significant performance improvements. The
results also show that current methods still face formidable challenges in
utilizing different granularities, representing visual difference, and
accurately localizing status changes. Further analysis shows that our dataset
can drive the development of more powerful methods to understand status
changes and thus improve video-level comprehension. The dataset is available
at
https://github.com/showlab/GEB-Plus
|
[
{
"version": "v1",
"created": "Fri, 1 Apr 2022 14:45:30 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Apr 2022 04:19:54 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Jul 2022 17:02:56 GMT"
},
{
"version": "v4",
"created": "Wed, 10 Aug 2022 15:33:03 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Wang",
"Yuxuan",
""
],
[
"Gao",
"Difei",
""
],
[
"Yu",
"Licheng",
""
],
[
"Lei",
"Stan Weixian",
""
],
[
"Feiszli",
"Matt",
""
],
[
"Shou",
"Mike Zheng",
""
]
] |
new_dataset
| 0.999816 |
2204.07268
|
Patrick Grady
|
Patrick Grady, Jeremy A. Collins, Samarth Brahmbhatt, Christopher D.
Twigg, Chengcheng Tang, James Hays, Charles C. Kemp
|
Visual Pressure Estimation and Control for Soft Robotic Grippers
|
IROS 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soft robotic grippers facilitate contact-rich manipulation, including robust
grasping of varied objects. Yet the beneficial compliance of a soft gripper
also results in significant deformation that can make precision manipulation
challenging. We present visual pressure estimation & control (VPEC), a method
that infers pressure applied by a soft gripper using an RGB image from an
external camera. We provide results for visual pressure inference when a
pneumatic gripper and a tendon-actuated gripper make contact with a flat
surface. We also show that VPEC enables precision manipulation via closed-loop
control of inferred pressure images. In our evaluation, a mobile manipulator
(Stretch RE1 from Hello Robot) uses visual servoing to make contact at a
desired pressure; follow a spatial pressure trajectory; and grasp small
low-profile objects, including a microSD card, a penny, and a pill. Overall,
our results show that visual estimates of applied pressure can enable a soft
gripper to perform precision manipulation.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2022 23:45:17 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 21:44:08 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Grady",
"Patrick",
""
],
[
"Collins",
"Jeremy A.",
""
],
[
"Brahmbhatt",
"Samarth",
""
],
[
"Twigg",
"Christopher D.",
""
],
[
"Tang",
"Chengcheng",
""
],
[
"Hays",
"James",
""
],
[
"Kemp",
"Charles C.",
""
]
] |
new_dataset
| 0.991011 |
2205.15062
|
Chen Li
|
Chen Li, Antonios Tsourdos, Weisi Guo
|
A Transistor Operations Model for Deep Learning Energy Consumption
Scaling Law
|
15 pages, 11 figures
| null | null | null |
cs.LG cs.AI cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Learning (DL) has transformed the automation of a wide range of
industries and finds increasing ubiquity in society. The high complexity of DL
models and their widespread adoption have led to global energy consumption
doubling every 3-4 months. Currently, the relationship between DL model
configuration and energy consumption is not well established. At the level of
a general computational energy model, there is a strong dependency on both the
hardware architecture (e.g., generic processors with different configurations
of inner components such as CPUs and GPUs, or programmable integrated circuits
such as FPGAs) and several interacting energy consumption aspects (e.g., data
movement, calculation, control). At the DL model level, we need to translate
non-linear activation functions and their interaction with data into
calculation tasks. Current methods mainly linearize nonlinear DL models to
approximate their theoretical FLOPs and MACs as a proxy for energy
consumption. Yet, this is inaccurate (est. 93% accuracy) due to the highly
nonlinear nature of, for example, many convolutional neural networks (CNNs).
In this paper, we develop a bottom-level Transistor Operations (TOs) method
to expose the role of non-linear activation functions and neural network
structure in energy consumption. We translate a range of feedforward and CNN
models into ALU calculation tasks and then TO steps. This is then statistically
linked to real energy consumption values via a regression model for different
hardware configurations and data sets (a minimal regression sketch follows
this record). We show that our proposed TOs method can achieve 93.61% - 99.51%
precision in predicting energy consumption.
|
[
{
"version": "v1",
"created": "Mon, 30 May 2022 12:42:33 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 21:09:34 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Li",
"Chen",
""
],
[
"Tsourdos",
"Antonios",
""
],
[
"Guo",
"Weisi",
""
]
] |
new_dataset
| 0.987915 |
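The record above statistically links transistor-operation (TO) counts to measured energy via a regression model, as referenced in the abstract. The sketch below shows one minimal way such a linear fit could look; the TO class split, counts, and energy values are placeholder assumptions, not data from the paper.

```python
# Minimal sketch of a linear regression from per-layer transistor-operation
# (TO) counts to measured energy; all numbers are made-up placeholders.
import numpy as np

# rows = DL models, columns = TO counts split by operation class
# (e.g., multiply, add, nonlinear-activation steps) -- hypothetical values.
to_counts = np.array([
    [1.2e9, 3.4e8, 5.1e7],
    [2.3e9, 6.8e8, 9.0e7],
    [0.8e9, 2.1e8, 3.3e7],
    [3.1e9, 8.9e8, 1.2e8],
])
energy_joules = np.array([10.2, 19.8, 6.9, 26.5])  # one hardware config

# Fit energy ~ to_counts @ w + b by least squares.
X = np.hstack([to_counts, np.ones((len(to_counts), 1))])
coef, *_ = np.linalg.lstsq(X, energy_joules, rcond=None)
pred = X @ coef
print("per-TO-class energy coefficients:", coef[:-1], "offset:", coef[-1])
print("relative error:", np.abs(pred - energy_joules) / energy_joules)
```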
2208.04936
|
Weiyu Zhang
|
Becky Pham, Weiyu Zhang
|
Young women's cognition of commercial digital signage in shopping malls:
A situated action approach
|
A previous version of the paper was presented in June 2016 at the
66th Annual Conference of the International Communication Association,
Fukuoka, Japan
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing literature on digital signage is growing but has not always
emphasized the cognitive processes of the audience. This research aims to
address this gap by studying how young women in Singapore cognize commercial
digital signage in shopping malls and what causes them to do so. Using cognitive
ethnography and taking the situated action approach, our findings suggest a
comprehensive list of factors, both external and internal, that influence young
women's cognition of commercial digital signage in both positive and negative
ways. The research's practical implications are discussed.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 05:44:18 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Pham",
"Becky",
""
],
[
"Zhang",
"Weiyu",
""
]
] |
new_dataset
| 0.978733 |
2208.04943
|
Huili Chen
|
Diego Garcia-soto, Huili Chen, and Farinaz Koushanfar
|
PerD: Perturbation Sensitivity-based Neural Trojan Detection Framework
on NLP Applications
| null | null | null | null |
cs.LG cs.CL cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Neural Networks (DNNs) have been shown to be susceptible to Trojan
attacks. Neural Trojan is a type of targeted poisoning attack that embeds the
backdoor into the victim and is activated by the trigger in the input space.
The increasing deployment of DNNs in critical systems and the surge in
outsourced DNN training (which makes Trojan attacks easier) make the detection
of Trojan attacks necessary. While Neural Trojan detection has been studied in
the image domain, there is a lack of solutions in the NLP domain. In this
paper, we propose a model-level Trojan detection framework by analyzing the
deviation of the model output when we introduce a specially crafted
perturbation to the input. Particularly, we extract the model's responses to
perturbed inputs as the `signature' of the model and train a meta-classifier to
determine if a model is Trojaned based on its signature. We demonstrate the
effectiveness of our proposed method on both a dataset of NLP models we create
and a public dataset of Trojaned NLP models from TrojAI. Furthermore, we
propose a lightweight variant of our detection method that reduces the
detection time while preserving the detection rates.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 22:50:03 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Garcia-soto",
"Diego",
""
],
[
"Chen",
"Huili",
""
],
[
"Koushanfar",
"Farinaz",
""
]
] |
new_dataset
| 0.997903 |
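The PerD record above describes extracting output deviations under crafted perturbations as a model "signature" and training a meta-classifier on it. Below is a toy, hedged sketch of that pipeline with stand-in linear "models" and a scikit-learn classifier; the real method operates on NLP models with specially crafted input perturbations.

```python
# Hedged toy sketch of the signature + meta-classifier pipeline; the models,
# inputs, and perturbation below are placeholders, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def signature(model_fn, inputs, perturb):
    """Deviation of model outputs under a crafted perturbation."""
    clean = np.stack([model_fn(x) for x in inputs])
    pert = np.stack([model_fn(perturb(x)) for x in inputs])
    return (pert - clean).ravel()

# Toy stand-ins: "models" are random linear maps; Trojaned ones react
# more strongly to the perturbation.
def make_model(trojaned):
    W = rng.normal(size=(4, 8))
    bias = 3.0 if trojaned else 0.0
    return lambda x: W @ x + bias * x[:4]

inputs = [rng.normal(size=8) for _ in range(16)]
perturb = lambda x: x + 0.5          # crafted input perturbation (toy)

labels = np.array([0] * 30 + [1] * 30)
sigs = np.stack([signature(make_model(bool(y)), inputs, perturb)
                 for y in labels])

meta = LogisticRegression(max_iter=1000).fit(sigs, labels)
print("meta-classifier train accuracy:", meta.score(sigs, labels))
```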
2208.04946
|
Weimin Lyu
|
Weimin Lyu, Songzhu Zheng, Tengfei Ma, Haibin Ling, Chao Chen
|
Attention Hijacking in Trojan Transformers
| null | null | null | null |
cs.LG cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trojan attacks pose a severe threat to AI systems. Transformer models have
recently gained explosive popularity, and their self-attention mechanisms are
now indisputably central. This raises a central question: Can we reveal the
Trojans through attention mechanisms in BERTs and ViTs? In this paper, we
investigate the attention hijacking pattern in Trojan AIs, i.e., the trigger
token "kidnaps" the attention weights when a specific trigger is present. We
observe the consistent attention hijacking pattern in Trojan Transformers from
both Natural Language Processing (NLP) and Computer Vision (CV) domains. This
intriguing property helps us to understand the Trojan mechanism in BERTs and
ViTs. We also propose an Attention-Hijacking Trojan Detector (AHTD) to
discriminate the Trojan AIs from the clean ones.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 04:05:04 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Lyu",
"Weimin",
""
],
[
"Zheng",
"Songzhu",
""
],
[
"Ma",
"Tengfei",
""
],
[
"Ling",
"Haibin",
""
],
[
"Chen",
"Chao",
""
]
] |
new_dataset
| 0.996159 |
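To make the "attention hijacking" idea above concrete, the sketch below computes a simple score: the average attention mass that all query positions pay to a suspected trigger token. The attention weights here are synthetic; in practice they would be read out of a BERT or ViT, and the paper's actual detector (AHTD) is more involved.

```python
# Hedged sketch of an attention-hijacking signal on synthetic attention maps.
import numpy as np

def hijack_score(attn, trigger_pos):
    """attn: (heads, seq, seq) attention weights; returns the mean attention
    that all query positions pay to the trigger token."""
    return attn[:, :, trigger_pos].mean()

rng = np.random.default_rng(1)
heads, seq, trig = 8, 12, 5

normal = rng.dirichlet(np.ones(seq), size=(heads, seq))   # rows sum to 1
hijacked = normal.copy()
hijacked[:, :, trig] += 5.0                                # trigger "kidnaps" weight
hijacked /= hijacked.sum(-1, keepdims=True)

print("clean    score:", hijack_score(normal, trig))       # roughly 1/seq
print("hijacked score:", hijack_score(hijacked, trig))     # much larger
```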
2208.04978
|
Yiwen Zhu
|
Joyce Cahoon, Wenjing Wang, Yiwen Zhu, Katherine Lin, Sean Liu,
Raymond Truong, Neetu Singh, Chengcheng Wan, Alexandra M Ciortea, Sreraman
Narasimhan, Subru Krishnan
|
Doppler: Automated SKU Recommendation in Migrating SQL Workloads to the
Cloud
| null |
Proceedings of the VLDB Endowment 15 (12), 3509-3521, 2022
|
10.14778/3554821.3554840
| null |
cs.DB
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Selecting the optimal cloud target to migrate SQL estates from on-premises to
the cloud remains a challenge. Current solutions are not only time-consuming
and error-prone, requiring significant user input, but also fail to provide
appropriate recommendations. We present Doppler, a scalable recommendation
engine that provides right-sized Azure SQL Platform-as-a-Service (PaaS)
recommendations without requiring access to sensitive customer data and
queries. Doppler introduces a novel price-performance methodology that allows
customers to get a personalized rank of relevant cloud targets solely based on
low-level resource statistics, such as latency and memory usage. Doppler
supplements this rank with internal knowledge of Azure customer behavior to
help guide new migration customers towards one optimal target. Experimental
results over a 9-month period from prospective and existing customers indicate
that Doppler can identify optimal targets and adapt to changes in customer
workloads. It has also found cost-saving opportunities among over-provisioned
cloud customers, without compromising on capacity or other requirements.
Doppler has been integrated and released in the Azure Data Migration Assistant
v5.5, which receives hundreds of assessment requests daily.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 18:07:07 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Cahoon",
"Joyce",
""
],
[
"Wang",
"Wenjing",
""
],
[
"Zhu",
"Yiwen",
""
],
[
"Lin",
"Katherine",
""
],
[
"Liu",
"Sean",
""
],
[
"Truong",
"Raymond",
""
],
[
"Singh",
"Neetu",
""
],
[
"Wan",
"Chengcheng",
""
],
[
"Ciortea",
"Alexandra M",
""
],
[
"Narasimhan",
"Sreraman",
""
],
[
"Krishnan",
"Subru",
""
]
] |
new_dataset
| 0.99958 |
2208.05065
|
Yuzhu Sun
|
Yuzhu Sun, Mien Van, Stephen McIlvanna, Sean McLoone and Dariusz
Ceglarek
|
Fixed-time Integral Sliding Mode Control for Admittance Control of a
Robot Manipulator
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes a novel fixed-time integral sliding mode controller for
admittance control to enhance physical human-robot collaboration. The proposed
method combines the benefits of compliance to external forces of admittance
control and high robustness to uncertainties of integral sliding mode control
(ISMC), such that the system can collaborate with a human partner in an
uncertain environment effectively. Firstly, a fixed-time sliding surface is
applied in the ISMC to make the tracking error of the system converge within a
fixed time regardless of the initial condition. Then, a fixed-time backstepping
controller (BSP) is integrated into the ISMC as the nominal controller to
realize global fixed-time convergence. Furthermore, to overcome the singularity
problem, a non-singular fixed-time sliding surface is designed and integrated
into the controller, which is useful in practical applications. Finally, the
proposed controller is validated for a two-link robot manipulator with
uncertainties and external human forces. The results show that the proposed
controller is superior in the sense of both tracking error and convergence
time, and at the same time, can comply with human motion in a shared workspace.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 22:47:19 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Sun",
"Yuzhu",
""
],
[
"Van",
"Mien",
""
],
[
"McIlvanna",
"Stephen",
""
],
[
"McLoone",
"Sean",
""
],
[
"Ceglarek",
"Dariusz",
""
]
] |
new_dataset
| 0.997365 |
2208.05109
|
Guangsheng Yu
|
Guangsheng Yu and Ren Ping Liu and J. Andrew Zhang and Y. Jay Guo
|
Tamperproof IoT with Blockchain
|
3 pages, 5 figures
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate the tamper-resistant property of Blockchain and its
effectiveness for IoT systems. In particular, we implemented an IoT testbed,
and built a Blockchain into the testbed. A number of tamper-resistance
experiments were conducted and analyzed to corroborate the process of block
validation in Blockchain. Our analysis and experimental results demonstrate the
tamper-resistant capability of Blockchain in securing trust in IoT systems. The
demonstration video is provided at [1].
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 02:14:28 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Yu",
"Guangsheng",
""
],
[
"Liu",
"Ren Ping",
""
],
[
"Zhang",
"J. Andrew",
""
],
[
"Guo",
"Y. Jay",
""
]
] |
new_dataset
| 0.991894 |
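The record above corroborates block validation as the source of tamper resistance. The following minimal sketch (not the testbed's code) shows the underlying mechanism: each block commits to its predecessor's hash, so modifying any stored IoT reading breaks every later link.

```python
# Minimal hash-chain sketch of blockchain tamper resistance (illustrative,
# not the paper's testbed implementation).
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(readings):
    chain, prev = [], "0" * 64
    for i, data in enumerate(readings):
        block = {"index": i, "data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def validate(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain([{"sensor": "temp", "value": v} for v in (21.0, 21.4, 22.1)])
print(validate(chain))            # True
chain[1]["data"]["value"] = 99.9  # tamper with an IoT reading
print(validate(chain))            # False: hash links after block 1 break
```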
2208.05168
|
Zheng Cao
|
Zheng Cao, Yi Zhen, Gang Fan and Sheng Gao
|
TokenPatronus: A Decentralized NFT Anti-theft Mechanism
|
submitted to CESC 2022 as a work-in-progress paper
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of the metaverse has brought tremendous evolution to
Non-Fungible Tokens (NFTs), which can certify the ownership of unique digital
assets in the cyber world. The NFT market has garnered unprecedented attention
from investors and created billions of dollars in transaction volume.
Meanwhile, securing NFTs is still a challenging issue. Recently, numerous
incidents of NFT theft have been reported, leading to incalculable losses for
holders. We propose a decentralized NFT anti-theft mechanism called
TokenPatronus, which supports the general ERC-721 standard and provides
holders with strong property protection. TokenPatronus contains pre-event
protection, in-event interruption, and post-event replevin enhancements
covering all stages of an NFT transaction. Four modules make up the
decentralized anti-theft mechanism: decentralized access control (DAC),
decentralized risk management (DRM), a decentralized arbitration system (DAS),
and the ERC-721G standard smart contract. TokenPatronus is running on the
Turtlecase NFT project on Ethereum and will support more blockchains in the
future.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 06:14:43 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Cao",
"Zheng",
""
],
[
"Zhen",
"Yi",
""
],
[
"Fan",
"Gang",
""
],
[
"Gao",
"Sheng",
""
]
] |
new_dataset
| 0.99934 |
2208.05201
|
Pengyu Wang
|
Pengyu Wang, Chaoqun Wang, Jiankun Wang and Max Q.-H. Meng
|
Quadrotor Autonomous Landing on Moving Platform
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces an autonomous take-off and landing system for a
quadrotor on a moving platform. The designed system addresses three challenging
problems:
fast pose estimation, restricted external localization, and effective obstacle
avoidance. Specifically, first, we design a landing recognition and positioning
system based on the AruCo marker to help the quadrotor quickly calculate the
relative pose; second, we leverage a gradient-based local motion planner to
generate collision-free reference trajectories rapidly for the quadrotor;
third, we build an autonomous state machine that enables the quadrotor to
complete its take-off, tracking and landing tasks in full autonomy; finally, we
conduct experiments in simulated, real-world indoor and outdoor environments to
verify the system's effectiveness and demonstrate its potential.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 07:50:17 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Wang",
"Pengyu",
""
],
[
"Wang",
"Chaoqun",
""
],
[
"Wang",
"Jiankun",
""
],
[
"Meng",
"Max Q. -H.",
""
]
] |
new_dataset
| 0.996777 |
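The landing system above computes the quadrotor's relative pose from an ArUco marker. As a hedged illustration, the sketch below uses the classic `cv2.aruco` API from `opencv-contrib-python` (reorganized in OpenCV 4.7+, where `ArucoDetector` replaces these calls); the camera intrinsics, marker size, and image path are placeholder assumptions.

```python
# Hedged sketch of ArUco-based relative pose estimation (classic cv2.aruco
# API from opencv-contrib-python < 4.7; intrinsics below are placeholders).
import cv2
import numpy as np

camera_matrix = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)
marker_len = 0.15  # marker side length in meters (assumption)

aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

frame = cv2.imread("landing_pad.png")  # placeholder image path
if frame is None:
    raise SystemExit("no input image found")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)

if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_len, camera_matrix, dist_coeffs)
    # tvecs[0] is the marker position in the camera frame: the relative
    # pose the quadrotor needs for tracking and landing.
    print("relative translation (m):", tvecs[0].ravel())
```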
2208.05216
|
Zhipeng Luo
|
Zhipeng Luo, Changqing Zhou, Liang Pan, Gongjie Zhang, Tianrui Liu,
Yueru Luo, Haiyu Zhao, Ziwei Liu, Shijian Lu
|
Exploring Point-BEV Fusion for 3D Point Cloud Object Tracking with
Transformer
|
arXiv admin note: substantial text overlap with arXiv:2112.02857
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the prevalence of LiDAR sensors in autonomous driving, 3D object
tracking has received increasing attention. In a point cloud sequence, 3D
object tracking aims to predict the location and orientation of an object in
consecutive frames given an object template. Motivated by the success of
transformers, we propose Point Tracking TRansformer (PTTR), which efficiently
predicts high-quality 3D tracking results in a coarse-to-fine manner with the
help of transformer operations. PTTR consists of three novel designs. 1)
Instead of random sampling, we design Relation-Aware Sampling to preserve
relevant points to the given template during subsampling. 2) We propose a Point
Relation Transformer for effective feature aggregation and feature matching
between the template and search region. 3) Based on the coarse tracking
results, we employ a novel Prediction Refinement Module to obtain the final
refined prediction through local feature pooling. In addition, motivated by the
favorable properties of the Bird's-Eye View (BEV) of point clouds in capturing
object motion, we further design a more advanced framework named PTTR++, which
incorporates both the point-wise view and BEV representation to exploit their
complementary effect in generating high-quality tracking results. PTTR++
substantially boosts the tracking performance on top of PTTR with low
computational overhead. Extensive experiments over multiple datasets show that
our proposed approaches achieve superior 3D tracking accuracy and efficiency.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 08:36:46 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Luo",
"Zhipeng",
""
],
[
"Zhou",
"Changqing",
""
],
[
"Pan",
"Liang",
""
],
[
"Zhang",
"Gongjie",
""
],
[
"Liu",
"Tianrui",
""
],
[
"Luo",
"Yueru",
""
],
[
"Zhao",
"Haiyu",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Lu",
"Shijian",
""
]
] |
new_dataset
| 0.99556 |
2208.05358
|
Adam Dahlgren Lindström
|
Adam Dahlgren Lindström, Savitha Sam Abraham
|
CLEVR-Math: A Dataset for Compositional Language, Visual and
Mathematical Reasoning
|
NeSy 2022, 16th International Workshop on Neural-Symbolic Learning
and Reasoning, Cumberland Lodge, Windsor, UK
| null | null | null |
cs.LG cs.CL cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce CLEVR-Math, a multi-modal math word problems dataset consisting
of simple math word problems involving addition/subtraction, represented partly
by a textual description and partly by an image illustrating the scenario. The
text describes actions performed on the scene that is depicted in the image.
Since the question posed may not be about the scene in the image, but about the
state of the scene before or after the actions are applied, the solver must
envision or imagine the state changes due to these actions. Solving these word
problems requires a combination of language, visual and mathematical reasoning.
We apply state-of-the-art neural and neuro-symbolic models for visual question
answering on CLEVR-Math and empirically evaluate their performance. Our results
show that neither method generalises to chains of operations. We discuss the
limitations
of the two in addressing the task of multi-modal word problem solving.
|
[
{
"version": "v1",
"created": "Wed, 10 Aug 2022 14:08:34 GMT"
}
] | 2022-08-11T00:00:00 |
[
[
"Lindström",
"Adam Dahlgren",
""
],
[
"Abraham",
"Savitha Sam",
""
]
] |
new_dataset
| 0.999904 |
2101.10729
|
Hyoungsung Kim
|
Hyoungsung Kim, Jehyuk Jang, Sangjun Park, Heung-no Lee
|
Ethereum ECCPoW
|
Under review at IEEE Access
|
IEEE Access, vol. 9, pp. 135942-135952, 2021
|
10.1109/ACCESS.2021.3113522
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The error-correction code based proof-of-work (ECCPoW) algorithm is based on
a low-density parity-check (LDPC) code. ECCPoW can impair ASICs through its
ability to vary the parameters of the LDPC code over time. Previous research
on the ECCPoW algorithm has presented its theory and an implementation on
Bitcoin, but it does not discuss how stable the block generation time is. This
study focuses on whether the block generation time (BGT) has a finite mean and
a distribution that is not heavy-tailed. In the ECCPoW algorithm, BGT may show
a long-tailed distribution due to the time-varying cryptographic puzzles.
Thus, it is of interest to see whether the BGT distribution is not
heavy-tailed and shows a finite mean. If the distribution is heavy-tailed,
then confirmation of a transaction cannot be guaranteed. We present the
implementation, simulation, and
validation of ECCPoW Ethereum. In implementation, we explain how the ECCPoW
algorithm is integrated into Ethereum 1.0 as a new consensus algorithm. In the
simulation, we perform a multinode simulation to show that the ECCPoW Ethereum
works well with automatic difficulty change. In the validation, we present the
statistical results of the two-sample Anderson-Darling test to show that the
distribution of BGT satisfies the necessary condition of the exponential
distribution. Our implementation is downloadable at
https://github.com/cryptoecc/ETH-ECC.
|
[
{
"version": "v1",
"created": "Tue, 26 Jan 2021 11:50:06 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Kim",
"Hyoungsung",
""
],
[
"Jang",
"Jehyuk",
""
],
[
"Park",
"Sangjun",
""
],
[
"Lee",
"Heung-no",
""
]
] |
new_dataset
| 0.995681 |
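The validation step above applies a two-sample Anderson-Darling test to block generation times (BGT). The sketch below reproduces just that statistical step with SciPy on simulated exponential BGT samples; the 13-second mean is an arbitrary stand-in, not a number from the paper.

```python
# Hedged sketch of a two-sample Anderson-Darling test on simulated block
# generation times (BGT); the values are not from the ECCPoW testbed.
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(42)
mean_bgt = 13.0                                        # assumed target seconds/block
measured_bgt = rng.exponential(mean_bgt, size=500)     # stand-in for testbed data
reference = rng.exponential(measured_bgt.mean(), size=500)

stat, _, pvalue = anderson_ksamp([measured_bgt, reference])
print(f"A-D statistic={stat:.3f}, p={pvalue:.3f}")
# A large p-value means we cannot reject that BGT follows the exponential
# shape: the necessary condition (finite mean, no heavy tail) checked above.
```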
2106.03830
|
Sascha Rothe
|
Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause,
Aliaksei Severyn
|
A Simple Recipe for Multilingual Grammatical Error Correction
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a simple recipe to train state-of-the-art multilingual
Grammatical Error Correction (GEC) models. We achieve this by first proposing a
language-agnostic method to generate a large number of synthetic examples. The
second ingredient is to use large-scale multilingual language models (up to 11B
parameters). Once fine-tuned on language-specific supervised sets we surpass
the previous state-of-the-art results on GEC benchmarks in four languages:
English, Czech, German and Russian. Having established a new set of baselines
for GEC, we make our results easily reproducible and accessible by releasing a
cLang-8 dataset. It is produced by using our best model, which we call gT5, to
clean the targets of a widely used yet noisy lang-8 dataset. cLang-8 greatly
simplifies typical GEC training pipelines composed of multiple fine-tuning
stages -- we demonstrate that performing a single fine-tuning step on cLang-8
with the off-the-shelf language models yields further accuracy improvements
over an already top-performing gT5 model for English.
|
[
{
"version": "v1",
"created": "Mon, 7 Jun 2021 17:47:04 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 14:49:30 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Rothe",
"Sascha",
""
],
[
"Mallinson",
"Jonathan",
""
],
[
"Malmi",
"Eric",
""
],
[
"Krause",
"Sebastian",
""
],
[
"Severyn",
"Aliaksei",
""
]
] |
new_dataset
| 0.9876 |
2201.02726
|
Weiqi He
|
Xinyi Yu, Weiqi He, Xuecheng Qian, Yang Yang, Linlin Ou
|
Real-time Rail Recognition Based on 3D Point Clouds
| null | null |
10.1088/1361-6501/ac750c
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate rail location is a crucial part of railway driving-support systems
for safety monitoring. LiDAR can obtain point clouds that carry 3D information
about the railway environment, even in darkness and terrible weather
conditions. In this paper, a real-time rail recognition method based on 3D
point clouds is proposed to address challenges such as the disorder, uneven
density, and large volume of the point clouds. A voxel down-sampling method is
first presented to balance the density of railway point clouds, and a pyramid
partition is designed to divide the 3D scanning area into voxels with
different volumes. Then, a feature encoding module is developed to find the
nearest neighbor points and to aggregate their local geometric features for the
center point. Finally, a multi-scale neural network is proposed to generate the
prediction results of each voxel and the rail location. The experiments are
conducted under 9 sequences of 3D point cloud data for the railway. The results
show that the method performs well in detecting straight, curved, and other
topologically complex rails.
|
[
{
"version": "v1",
"created": "Sat, 8 Jan 2022 01:42:02 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Yu",
"Xinyi",
""
],
[
"He",
"Weiqi",
""
],
[
"Qian",
"Xuecheng",
""
],
[
"Yang",
"Yang",
""
],
[
"Ou",
"Linlin",
""
]
] |
new_dataset
| 0.997091 |
2202.07824
|
Zhenhua Xu
|
Zhenhua Xu, Yuxuan Liu, Lu Gan, Yuxiang Sun, Xinyu Wu, Ming Liu and
Lujia Wang
|
RNGDet: Road Network Graph Detection by Transformer in Aerial Images
|
Accepted by IEEE Transactions on Geoscience and Remote Sensing
| null |
10.1109/TGRS.2022.3186993
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Road network graphs provide critical information for autonomous-vehicle
applications, such as drivable areas that can be used for motion planning
algorithms. To find road network graphs, manual annotation is usually
inefficient and labor-intensive. Automatically detecting road network graphs
could alleviate this issue, but existing works still have some limitations. For
example, segmentation-based approaches could not ensure satisfactory topology
correctness, and graph-based approaches could not present precise enough
detection results. To provide a solution to these problems, we propose a novel
approach based on transformers and imitation learning in this paper. Given
that high-resolution aerial images can now be easily accessed all over the
world, we make use of aerial images in our approach. Taking an aerial image as
input, our approach iteratively generates road network graphs
vertex-by-vertex. Our approach can handle complicated intersection points with
various numbers of incident road segments. We evaluate our approach on a
publicly available dataset. The superiority of our approach is demonstrated
through comparative experiments. Our work is accompanied by a demonstration
video, which is available at
\url{https://tonyxuqaq.github.io/projects/RNGDet/}.
|
[
{
"version": "v1",
"created": "Wed, 16 Feb 2022 01:59:41 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jun 2022 11:08:18 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Xu",
"Zhenhua",
""
],
[
"Liu",
"Yuxuan",
""
],
[
"Gan",
"Lu",
""
],
[
"Sun",
"Yuxiang",
""
],
[
"Wu",
"Xinyu",
""
],
[
"Liu",
"Ming",
""
],
[
"Wang",
"Lujia",
""
]
] |
new_dataset
| 0.999023 |
2203.11397
|
Rakesh Shrestha
|
Rakesh Shrestha, Siqi Hu, Minghao Gou, Ziyuan Liu, Ping Tan
|
A Real World Dataset for Multi-view 3D Reconstruction
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We present a dataset of 998 3D models of everyday tabletop objects along with
their 847,000 real world RGB and depth images. Accurate annotations of camera
poses and object poses for each image are produced in a semi-automated fashion
to facilitate the use of the dataset for myriad 3D applications such as shape
reconstruction, object pose estimation, and shape retrieval. We primarily focus
on learned multi-view 3D reconstruction, due to the lack of an appropriate
real-world benchmark for the task, and demonstrate that our dataset can fill that
gap. The entire annotated dataset along with the source code for the annotation
tools and evaluation baselines is available at
http://www.ocrtoc.org/3d-reconstruction.html.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2022 00:15:54 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 21:22:20 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Shrestha",
"Rakesh",
""
],
[
"Hu",
"Siqi",
""
],
[
"Gou",
"Minghao",
""
],
[
"Liu",
"Ziyuan",
""
],
[
"Tan",
"Ping",
""
]
] |
new_dataset
| 0.999872 |
2206.01245
|
Andrea Sipos
|
Andrea Sipos and Nima Fazeli
|
Simultaneous Contact Location and Object Pose Estimation Using
Proprioception and Tactile Feedback
|
Accepted to the 2022 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Joint estimation of grasped object pose and extrinsic contacts is central to
robust and dexterous manipulation. In this paper, we propose a novel
state-estimation algorithm that jointly estimates contact location and object
pose in 3D using exclusively proprioception and tactile feedback. Our approach
leverages two complementary particle filters: one to estimate contact location
(CPFGrasp) and another to estimate object poses (SCOPE). We implement and
evaluate our approach on real-world single-arm and dual-arm robotic systems. We
demonstrate that by bringing two objects into contact, the robots can infer
contact location and object poses simultaneously. Our proposed method can be
applied to a number of downstream tasks that require accurate pose estimates,
such as tool use and assembly. Code and data can be found at
https://github.com/MMintLab/scope.
|
[
{
"version": "v1",
"created": "Thu, 2 Jun 2022 18:40:12 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 13:48:26 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Sipos",
"Andrea",
""
],
[
"Fazeli",
"Nima",
""
]
] |
new_dataset
| 0.988124 |
2206.13660
|
Debopriya Roy Dipta
|
Debopriya Roy Dipta and Berk Gulmezoglu
|
DF-SCA: Dynamic Frequency Side Channel Attacks are Practical
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The arms race between hardware security engineers and side-channel researchers
has become more competitive with more sophisticated attacks and defenses in the
last decade. While modern hardware features improve the system performance
significantly, they may create new attack surfaces for malicious people to
extract sensitive information about users without physical access to the victim
device. Although many previously exploited hardware and OS features were
patched by OS developers and chip vendors, any feature that is accessible from
userspace applications can be exploited to perform software-based side-channel
attacks.
In this paper, we present DF-SCA, which is a software-based dynamic frequency
side-channel attack on Linux and Android OS devices. We exploit unprivileged
access to cpufreq interface that exposes real-time CPU core frequency values
directly correlated with the system utilization, creating a reliable
side-channel for attackers. We show that Dynamic Voltage and Frequency Scaling
(DVFS) feature in modern systems can be utilized to perform website
fingerprinting attacks for Google Chrome and Tor browsers on modern Intel, AMD,
and ARM architectures. We further extend our analysis to a wide selection of
scaling governors on Intel and AMD CPUs, verifying that all scaling governors
provide enough information on the visited web page. Moreover, we extract
properties of keystroke patterns from the frequency readings, which leads to
95% accuracy in distinguishing keystrokes from other activities on Android
phones. We leverage a user's inter-keystroke timings by training a k-nearest
neighbor model, which achieves an 88% first-guess password recovery rate on
the Bank of America application. Finally, we propose several countermeasures
that mask user activity to mitigate DF-SCA on Linux-based systems (a minimal
cpufreq sampling sketch follows this record).
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 22:56:47 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 21:37:38 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Dipta",
"Debopriya Roy",
""
],
[
"Gulmezoglu",
"Berk",
""
]
] |
new_dataset
| 0.999846 |
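The DF-SCA record above hinges on unprivileged reads of the Linux cpufreq sysfs interface. The sketch below samples that interface into a frequency trace; the path is the standard cpufreq location on Linux, while the sampling period and duration are arbitrary choices, and turning traces into fingerprinting features is left to the paper.

```python
# Minimal sketch of unprivileged CPU-frequency sampling via Linux sysfs.
import time

FREQ_PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

def sample_frequency(duration_s=5.0, period_s=0.005):
    trace = []
    end = time.time() + duration_s
    while time.time() < end:
        with open(FREQ_PATH) as f:
            trace.append(int(f.read()))   # current frequency in kHz
        time.sleep(period_s)
    return trace

trace = sample_frequency()
print(len(trace), "samples; min/max kHz:", min(trace), max(trace))
# Frequency swings track system utilization (DVFS); the paper turns such
# traces into website-fingerprinting and keystroke features.
```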
2207.11149
|
Kevin Galligan
|
Kevin Galligan, Muriel Médard, Ken R. Duffy
|
Block turbo decoding with ORBGRAND
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Guessing Random Additive Noise Decoding (GRAND) is a family of universal
decoding algorithms suitable for decoding any moderate redundancy code of any
length. We establish that, through the use of list decoding, soft-input
variants of GRAND can replace the Chase algorithm as the component decoder in
the turbo decoding of product codes. In addition to being able to decode
arbitrary product codes, rather than just those with dedicated hard-input
component code decoders, results show that ORBGRAND achieves a coding gain of
up to 0.7 dB over the Chase algorithm with the same list size.
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 15:41:43 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 17:05:26 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Galligan",
"Kevin",
""
],
[
"Médard",
"Muriel",
""
],
[
"Duffy",
"Ken R.",
""
]
] |
new_dataset
| 0.956458 |
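Since the record above assumes familiarity with GRAND, here is a hedged sketch of the basic idea on a (7,4) Hamming code: guess putative noise patterns in decreasing likelihood order and stop at the first guess whose removal yields a codeword. For brevity it orders guesses by Hamming weight (hard-input GRAND); ORBGRAND instead orders them by soft reliability, which this sketch does not implement.

```python
# Hedged sketch of hard-input GRAND on the (7,4) Hamming code.
import itertools
import numpy as np

# Parity-check matrix: column j is the binary representation of j+1.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def is_codeword(word):
    return not (H @ word % 2).any()

def grand_decode(received, max_weight=3):
    n = len(received)
    for w in range(max_weight + 1):                 # weight-ordered guessing
        for flips in itertools.combinations(range(n), w):
            guess = received.copy()
            guess[list(flips)] ^= 1                 # remove putative noise
            if is_codeword(guess):
                return guess
    return None

codeword = np.array([1, 0, 1, 1, 0, 1, 0])
assert is_codeword(codeword)
received = codeword.copy()
received[4] ^= 1                                    # one-bit channel error
decoded = grand_decode(received)
print("decoded:", decoded, "matches:", np.array_equal(decoded, codeword))
```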
2208.02432
|
Huy Hieu Pham
|
Anh Duy Nguyen, Thuy Dung Nguyen, Huy Hieu Pham, Thanh Hung Nguyen,
Phi Le Nguyen
|
Image-based Contextual Pill Recognition with Medical Knowledge Graph
Assistance
|
Accepted for presentation at the 14th Asian Conference on Intelligent
Information and Database Systems (ACIIDS 2022)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Identifying pills from images captured under various conditions and
backgrounds has become increasingly essential. In the literature, several
efforts have been devoted to deep learning-based approaches that tackle the
pill recognition problem. However, due to the high visual similarity between
pills, misrecognition often occurs, leaving pill recognition a challenge. To
this end, in this paper, we introduce a novel
approach named PIKA that leverages external knowledge to enhance pill
recognition accuracy. Specifically, we address a practical scenario (which we
call contextual pill recognition), aiming to identify pills in a picture of a
patient's pill intake. Firstly, we propose a novel method for modeling the
implicit association between pills in the presence of an external data source,
in this case, prescriptions. Secondly, we present a walk-based graph embedding
model that transforms from the graph space to vector space and extracts
condensed relational features of the pills. Thirdly, a final framework is
provided that leverages both image-based visual and graph-based relational
features to accomplish the pill identification task. Within this framework, the
visual representation of each pill is mapped to the graph embedding space,
which is then used to execute attention over the graph representation,
resulting in a semantically-rich context vector that aids in the final
classification. To our knowledge, this is the first study to use external
prescription data to establish associations between medicines and to classify
them using this auxiliary information. The architecture of PIKA is lightweight
and flexible enough to be incorporated into any recognition backbone. The
experimental results show that by leveraging the external knowledge graph, PIKA
can improve the recognition accuracy from 4.8% to 34.1% in terms of F1-score,
compared to baselines.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 03:55:53 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 03:34:30 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Nguyen",
"Anh Duy",
""
],
[
"Nguyen",
"Thuy Dung",
""
],
[
"Pham",
"Huy Hieu",
""
],
[
"Nguyen",
"Thanh Hung",
""
],
[
"Nguyen",
"Phi Le",
""
]
] |
new_dataset
| 0.996369 |
2208.03163
|
Nikola Simidjievski
|
Dragi Kocev, Nikola Simidjievski, Ana Kostovska, Ivica Dimitrovski,
Žiga Kokalj
|
Discover the Mysteries of the Maya: Selected Contributions from the
Machine Learning Challenge & The Discovery Challenge Workshop at ECML PKDD
2021
|
Chapter authors. Chapter 1: Matthew Painter and Iris Kramer; Chapter
2: Jürgen Landauer, Burkhard Hoppenstedt, and Johannes Allgaier; Chapter 3:
Thorben Hellweg, Stefan Oehmcke, Ankit Kariryaa, Fabian Gieseke, and
Christian Igel; Chapter 4: Christian Ayala, Carlos Aranda, and Mikel Galar
| null | null |
COBISS.SI-ID: 117741827, ISBN: 978-961-264-228-0
|
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The volume contains selected contributions from the Machine Learning
Challenge "Discover the Mysteries of the Maya", presented at the Discovery
Challenge Track of The European Conference on Machine Learning and Principles
and Practice of Knowledge Discovery in Databases (ECML PKDD 2021).
Remote sensing has greatly accelerated traditional archaeological landscape
surveys in the forested regions of the ancient Maya. Typical exploration and
discovery attempts, besides focusing on whole ancient cities, also focus on
individual buildings and structures. Recently, there have been several
successful attempts of utilizing machine learning for identifying ancient Maya
settlements. These attempts, while relevant, focus on narrow areas and rely on
high-quality aerial laser scanning (ALS) data which covers only a fraction of
the region where ancient Maya were once settled. Satellite image data, on the
other hand, produced by the European Space Agency's (ESA) Sentinel missions, is
abundant and, more importantly, publicly available. The "Discover the Mysteries
of the Maya" challenge aimed at locating and identifying ancient Maya
architectures (buildings, aguadas, and platforms) by performing integrated
image segmentation of different types of satellite imagery (from Sentinel-1 and
Sentinel-2) data and ALS (lidar) data.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2022 13:41:31 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2022 12:54:34 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Kocev",
"Dragi",
""
],
[
"Simidjievski",
"Nikola",
""
],
[
"Kostovska",
"Ana",
""
],
[
"Dimitrovski",
"Ivica",
""
],
[
"Kokalj",
"Žiga",
""
]
] |
new_dataset
| 0.992153 |
2208.03647
|
Yu Wang
|
Yifan Hu and Yu Wang
|
BSDGAN: Balancing Sensor Data Generative Adversarial Networks for Human
Activity Recognition
| null | null | null | null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of IoT technology enables a variety of sensors to be
integrated into mobile devices. Human Activity Recognition (HAR) based on
sensor data has become an active research topic in the field of machine
learning and ubiquitous computing. However, due to the inconsistent frequency
of human activities, the amount of data for each activity in the human activity
dataset is imbalanced. Considering the limited sensor resources and the high
cost of manually labeled sensor data, human activity recognition is facing the
challenge of highly imbalanced activity datasets. In this paper, we propose
Balancing Sensor Data Generative Adversarial Networks (BSDGAN) to generate
sensor data for minority human activities. The proposed BSDGAN consists of a
generator model and a discriminator model. Considering the extreme imbalance
of human activity datasets, an autoencoder is employed to initialize the
training process of BSDGAN, ensuring that the data features of each activity
can be learned.
The generated activity data is combined with the original dataset to balance
the amount of activity data across human activity classes. We deployed multiple
human activity recognition models on two publicly available imbalanced human
activity datasets, WISDM and UNIMIB. Experimental results show that the
proposed BSDGAN can effectively capture the data features of real human
activity sensor data, and generate realistic synthetic sensor data. Meanwhile,
the balanced activity dataset can effectively help the activity recognition
model to improve the recognition accuracy.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 05:48:48 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Hu",
"Yifan",
""
],
[
"Wang",
"Yu",
""
]
] |
new_dataset
| 0.988682 |
2208.03773
|
Roshan Sah
|
Roshan Sah, Raunak Srivastava, Kaushik Das
|
Design and Analysis of Cold Gas Thruster to De-Orbit the PSLV Debris
|
11 pages, 19 figures, Accepted and Published in Small Satellite
Conference 2022. link:-
https://digitalcommons.usu.edu/smallsat/2022/all2022/9/
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The primary concern in space today is the uncontrolled growth of space
debris and its probability of collision with spacecraft, particularly in the
low Earth orbit (LEO) regions. This paper aims to design an optimized
micro-propulsion system, a cold gas thruster, to de-orbit PSLV debris from 668
km to 250 km altitude after the capturing process. The propulsion system
mainly consists of a storage tank, pipes, control valves, and a
convergent-divergent nozzle. The paper describes the design of each component,
iterated continuously until the design thrust requirements are met. All
components are designed in CATIA V5, and structural analysis is done in ANSYS
for each component, showing that the cylindrical tank can withstand the high
hoop stress generated on its wall. Flow analysis of the CD nozzle is done
using the k-$\epsilon$ turbulence model; the nozzle provides the thrust
required to de-orbit the PSLV debris from the higher orbit to the lower one,
after which air drag is enough to bring it back into Earth's atmosphere, where
it burns up. Hohmann's orbit transfer method is used to de-orbit the PSLV
debris and is simulated with STK tools (a worked delta-v calculation follows
this record). The results show that our optimized thruster generates enough
thrust to de-orbit the PSLV debris to a very low orbit.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 17:06:01 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Sah",
"Roshan",
""
],
[
"Srivastava",
"Raunak",
""
],
[
"Das",
"Kaushik",
""
]
] |
new_dataset
| 0.999246 |
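The record above de-orbits the debris with a Hohmann transfer from 668 km to 250 km, as referenced in the abstract. The worked example below computes the two impulsive burns under textbook two-body assumptions (circular orbits, impulsive thrust, standard Earth constants); it is an order-of-magnitude check, not the paper's sizing analysis.

```python
# Worked Hohmann-transfer delta-v estimate for the 668 km -> 250 km de-orbit
# (textbook two-body assumptions; altitudes from the abstract).
import math

MU = 398600.4418        # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6378.137      # Earth's equatorial radius, km

r1 = R_EARTH + 668.0    # initial orbit radius, km
r2 = R_EARTH + 250.0    # target orbit radius, km
a_t = (r1 + r2) / 2.0   # semi-major axis of the transfer ellipse

v1 = math.sqrt(MU / r1)                     # circular speed at 668 km
v_t1 = math.sqrt(MU * (2 / r1 - 1 / a_t))   # transfer speed at r1 (apoapsis)
dv1 = v1 - v_t1                             # first retrograde burn

v2 = math.sqrt(MU / r2)                     # circular speed at 250 km
v_t2 = math.sqrt(MU * (2 / r2 - 1 / a_t))   # transfer speed at r2 (periapsis)
dv2 = v_t2 - v2                             # second retrograde burn

print(f"dv1 = {dv1*1000:.1f} m/s, dv2 = {dv2*1000:.1f} m/s, "
      f"total = {(dv1 + dv2)*1000:.1f} m/s")
# Roughly 0.23 km/s in total: the impulse budget the cold gas thruster must
# deliver (thruster sizing details are in the paper, not reproduced here).
```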
2208.04360
|
Jingbo Zhou
|
Jingbo Zhou, Xinjiang Lu, Yixiong Xiao, Jiantao Su, Junfu Lyu, Yanjun
Ma, Dejing Dou
|
SDWPF: A Dataset for Spatial Dynamic Wind Power Forecasting Challenge at
KDD Cup 2022
| null | null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The variability of wind power supply can present substantial challenges to
incorporating wind power into a grid system. Thus, Wind Power Forecasting (WPF)
has been widely recognized as one of the most critical issues in wind power
integration and operation. There has been an explosion of studies on wind power
forecasting problems in the past decades. Nevertheless, how to well handle the
WPF problem is still challenging, since high prediction accuracy is always
demanded to ensure grid stability and security of supply. We present a unique
Spatial Dynamic Wind Power Forecasting dataset, SDWPF, which includes the
spatial distribution of wind turbines as well as dynamic context factors. Most
existing datasets cover only a small number of wind turbines and lack the
turbines' locations and context information at a fine-grained time scale. By
contrast, SDWPF provides the wind power data of 134 wind turbines from a wind
farm over half a year, with their relative positions and internal statuses. We
use this dataset to launch the Baidu KDD
Cup 2022 to examine the limit of current WPF solutions. The dataset is released
at https://aistudio.baidu.com/aistudio/competition/detail/152/0/datasets.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 18:38:45 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Zhou",
"Jingbo",
""
],
[
"Lu",
"Xinjiang",
""
],
[
"Xiao",
"Yixiong",
""
],
[
"Su",
"Jiantao",
""
],
[
"Lyu",
"Junfu",
""
],
[
"Ma",
"Yanjun",
""
],
[
"Dou",
"Dejing",
""
]
] |
new_dataset
| 0.999735 |
2208.04361
|
Yunqing Bao
|
Yunqing Bao, Hang Dai, Abdulmotaleb Elsaddik
|
Semi-Supervised Cross-Modal Salient Object Detection with U-Structure
Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Salient Object Detection (SOD) is a popular and important topic aimed at the
precise detection and segmentation of interesting regions in images. We
integrate linguistic information into the vision-based U-Structure networks
designed for salient object detection tasks. The experiments are based on the
newly created DUTS Cross Modal (DUTS-CM) dataset, which contains both visual
and linguistic labels. We propose a new module called efficient Cross-Modal
Self-Attention (eCMSA) to combine visual and linguistic features and improve
the performance of the original U-structure networks. Meanwhile, to reduce the
heavy burden of labeling, we employ a semi-supervised learning method by
training an image caption model based on the DUTS-CM dataset, which can
automatically label other datasets like DUT-OMRON and HKU-IS. The comprehensive
experiments show that the performance of SOD can be improved with the natural
language input and is competitive compared with other SOD methods.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 18:39:37 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Bao",
"Yunqing",
""
],
[
"Dai",
"Hang",
""
],
[
"Elsaddik",
"Abdulmotaleb",
""
]
] |
new_dataset
| 0.999791 |
2208.04378
|
Zhaodong Sun
|
Zhaodong Sun, Xiaobai Li
|
Contrast-Phys: Unsupervised Video-based Remote Physiological Measurement
via Spatiotemporal Contrast
|
accepted to ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Video-based remote physiological measurement utilizes face videos to measure
the blood volume change signal, which is also called remote
photoplethysmography (rPPG). Supervised methods for rPPG measurements achieve
state-of-the-art performance. However, supervised rPPG methods require face
videos and ground truth physiological signals for model training. In this
paper, we propose an unsupervised rPPG measurement method that does not require
ground truth signals for training. We use a 3DCNN model to generate multiple
rPPG signals from each video in different spatiotemporal locations and train
the model with a contrastive loss where rPPG signals from the same video are
pulled together while those from different videos are pushed away. We test on
five public datasets, including RGB videos and NIR videos. The results show
that our method outperforms the previous unsupervised baseline and achieves
accuracies very close to the current best supervised rPPG methods on all five
datasets. Furthermore, we also demonstrate that our approach can run at a much
faster speed and is more robust to noises than the previous unsupervised
baseline. Our code is available at
https://github.com/zhaodongsun/contrast-phys.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 19:30:57 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Sun",
"Zhaodong",
""
],
[
"Li",
"Xiaobai",
""
]
] |
new_dataset
| 0.962684 |
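The Contrast-Phys record above trains with a contrastive loss that pulls together rPPG signals from the same video and pushes apart those from different videos. Below is a hedged PyTorch sketch of such an objective over power spectra; the shapes, pairing scheme, and margin are illustrative assumptions, and the paper's 3DCNN and spatiotemporal sampling are omitted.

```python
# Hedged sketch of a spatiotemporal contrastive objective over rPPG spectra;
# pairing scheme and margin are illustrative, not the paper's exact loss.
import torch
import torch.nn.functional as F

def psd(signal):
    """Normalized power spectral density of a batch of 1-D rPPG signals."""
    spec = torch.fft.rfft(signal, dim=-1).abs() ** 2
    return spec / spec.sum(dim=-1, keepdim=True)

def contrast_loss(rppg_a, rppg_b, margin=0.5):
    """rppg_a, rppg_b: (videos, samples_per_video, T) rPPG estimates drawn
    from different spatiotemporal locations of the same videos."""
    pa, pb = psd(rppg_a), psd(rppg_b)
    # positive pairs: same video, different spatiotemporal samples
    pos = (pa - pb).pow(2).sum(-1).mean()
    # negative pairs: sample 0 of each video vs. sample 0 of a shifted video
    neg_dist = (pa[:, 0] - pb.roll(1, dims=0)[:, 0]).pow(2).sum(-1)
    neg = F.relu(margin - neg_dist).mean()
    return pos + neg

a = torch.randn(4, 3, 300)   # 4 videos, 3 samples each, 300 frames
b = torch.randn(4, 3, 300)
print(contrast_loss(a, b))
```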
2208.04403
|
Eytan Adar
|
Eytan Adar, Elsie Lee-Robbins
|
Roboviz: A Game-Centered Project for Information Visualization Education
|
to appear, IEEE Vis'22
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Due to their pedagogical advantages, large final projects in information
visualization courses have become standard practice. Students take on a
client--real or simulated--a dataset, and a vague set of goals to create a
complete visualization or visual analytics product. Unfortunately, many
projects suffer from ambiguous goals, over or under-constrained client
expectations, and data constraints that have students spending their time on
non-visualization problems (e.g., data cleaning). These are important skills,
but are often secondary course objectives, and unforeseen problems can majorly
hinder students. We created an alternative for our information visualization
course: Roboviz, a real-time game for students to play by building a
visualization-focused interface. By designing the game mechanics around four
different data types, the project allows students to create a wide array of
interactive visualizations. Student teams play against their classmates with
the objective to collect the most (good) robots. The flexibility of the
strategies encourages variability, a range of approaches, and solving wicked
design constraints. We describe the construction of this game and report on
student projects over two years. We further show how the game mechanics can be
extended or adapted to other game-based projects.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 20:24:14 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Adar",
"Eytan",
""
],
[
"Lee-Robbins",
"Elsie",
""
]
] |
new_dataset
| 0.996549 |
2208.04441
|
Yonghao Xu
|
Yonghao Xu, Weikang Yu, Pedram Ghamisi, Michael Kopp, and Sepp
Hochreiter
|
Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern
Hopfield Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The synthesis of high-resolution remote sensing images based on text
descriptions has great potential in many practical application scenarios.
Although deep neural networks have achieved great success in many important
remote sensing tasks, generating realistic remote sensing images from text
descriptions is still very difficult. To address this challenge, we propose a
novel text-to-image modern Hopfield network (Txt2Img-MHN). The main idea of
Txt2Img-MHN is to conduct hierarchical prototype learning on both text and
image embeddings with modern Hopfield layers. Instead of directly learning
concrete but highly diverse text-image joint feature representations for
different semantics, Txt2Img-MHN aims to learn the most representative
prototypes from text-image embeddings, achieving a coarse-to-fine learning
strategy. These learned prototypes can then be utilized to represent more
complex semantics in the text-to-image generation task. To better evaluate the
realism and semantic consistency of the generated images, we further conduct
zero-shot classification on real remote sensing data using the classification
model trained on synthesized images. Despite its simplicity, we find that the
overall accuracy in the zero-shot classification may serve as a good metric to
evaluate the ability to generate an image from text. Extensive experiments on
the benchmark remote sensing text-image dataset demonstrate that the proposed
Txt2Img-MHN can generate more realistic remote sensing images than existing
methods. Code and pre-trained models are available online
(https://github.com/YonghaoXu/Txt2Img-MHN).
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 22:02:10 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Xu",
"Yonghao",
""
],
[
"Yu",
"Weikang",
""
],
[
"Ghamisi",
"Pedram",
""
],
[
"Kopp",
"Michael",
""
],
[
"Hochreiter",
"Sepp",
""
]
] |
new_dataset
| 0.99969 |
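A note on the Txt2Img-MHN record above: its core building block, the modern Hopfield layer, retrieves stored prototypes via a softmax-weighted update. Below is a minimal sketch of that retrieval step in PyTorch; the prototype count, dimensions, and inverse temperature beta are assumptions, and the full coarse-to-fine Txt2Img-MHN architecture is not reproduced here.

```python
import torch

def hopfield_retrieve(queries, prototypes, beta=1.0):
    """One modern Hopfield update: each query attends over the stored
    prototypes and is replaced by a softmax-weighted mixture of them."""
    attn = torch.softmax(beta * queries @ prototypes.t(), dim=-1)  # (B, K)
    return attn @ prototypes                                       # (B, d)

# Stand-in text/image embeddings (B=8 queries, K=64 prototypes, d=32).
queries, prototypes = torch.randn(8, 32), torch.randn(64, 32)
retrieved = hopfield_retrieve(queries, prototypes, beta=2.0)
```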
2208.04451
|
Matthew Brehmer
|
Brian D. Hall, Lyn Bartram, Matthew Brehmer
|
Augmented Chironomia for Presenting Data to Remote Audiences
|
To appear at the 2022 ACM Symposium on User Interface Software and
Technology (UIST, Bend, OR, Oct 29 - Nov 2, 2022). Supplemental video
available at https://vimeo.com/737703966
| null |
10.1145/3526113.3545614
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
To facilitate engaging and nuanced conversations around data, we contribute a
touchless approach to interacting directly with visualization in remote
presentations. We combine dynamic charts overlaid on a presenter's webcam feed
with continuous bimanual hand tracking, demonstrating interactions that
highlight and manipulate chart elements appearing in the foreground. These
interactions are simultaneously functional and deictic, and some allow for the
addition of "rhetorical flourish", or expressive movement used when speaking
about quantities, categories, and time intervals. We evaluated our approach in
two studies with professionals who routinely deliver and attend presentations
about data. The first study considered the presenter perspective, where 12
participants delivered presentations to a remote audience using a presentation
environment incorporating our approach. The second study considered the
audience experience of 17 participants who attended presentations supported by
our environment. Finally, we reflect on observations from these studies and
discuss related implications for engaging remote audiences in conversations
about data.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 22:27:29 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Hall",
"Brian D.",
""
],
[
"Bartram",
"Lyn",
""
],
[
"Brehmer",
"Matthew",
""
]
] |
new_dataset
| 0.997949 |
2208.04462
|
Thanh Tran
|
Thanh Tran, Sebastian Bader, Jan Lundgren
|
Denoising Induction Motor Sounds Using an Autoencoder
|
9 pages, 10 figures, conference
| null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Denoising is the process of removing noise from sound signals while improving
the quality and adequacy of the sound signals. Denoising sound has many
applications in speech processing, sound events classification, and machine
failure detection systems. This paper describes a method for creating an
autoencoder to map noisy machine sounds to clean sounds for denoising purposes.
There are several types of noise in sounds, for example, environmental noise
and frequency-dependent noise generated by signal processing methods.
Environmental noise arises from surrounding activities. In a factory,
environmental noise can be created by vehicles, drilling, people working or
talking in the survey area, wind, and flowing water. Those noises appear as
spikes in the sound record. In the scope of this paper, we demonstrate the
removal of generated noise with Gaussian distribution and the environmental
noise with a specific example of the water sink faucet noise from the induction
motor sounds. The proposed method was trained and verified on 49 normal
function sounds and 197 horizontal misalignment fault sounds from the Machinery
Fault Database (MAFAULDA). The mean square error (MSE) was used as the
assessment criteria to evaluate the similarity between denoised sounds using
the proposed autoencoder and the original sounds in the test set. The MSE is
below or equal to 0.14 when denoising both types of noise on 15 testing sounds
of the normal function category. The MSE is below or equal to 0.15 when
denoising 60 testing sounds on the horizontal misalignment fault category. The
low MSE shows that both the generated Gaussian noise and the environmental
noise were almost removed from the original sounds with the proposed trained
autoencoder.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 23:14:51 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Tran",
"Thanh",
""
],
[
"Bader",
"Sebastian",
""
],
[
"Lundgren",
"Jan",
""
]
] |
new_dataset
| 0.974333 |
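For the denoising record above, a compact sketch of the general recipe it describes: an autoencoder trained to map noisy inputs to clean targets, with MSE as both the training loss and the evaluation criterion. The layer sizes and feature dimension are placeholders; the paper's actual network for the MAFAULDA sounds may differ.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_features=513, n_hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 32), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(32, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model, loss_fn = DenoisingAE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy, clean = torch.randn(8, 513), torch.randn(8, 513)  # placeholder batch
opt.zero_grad()
loss = loss_fn(model(noisy), clean)  # train: noisy in, clean target out
loss.backward()
opt.step()
```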
2208.04484
|
Liming Ma
|
Shu Liu, Liming Ma, Tingyi Wu, and Chaoping Xing
|
Good locally repairable codes via propagation rules
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In classical coding theory, it is common to construct new codes via
propagation rules. There are various propagation rules to construct classical
block codes. However, propagation rules have not been extensively explored for
constructions of locally repairable codes. In this paper, we introduce a few
propagation rules to construct good locally repairable codes. To our surprise,
these simple propagation rules produce a few interesting results. Firstly, by
concatenating a locally repairable code as an inner code with a classical block
code as an outer code, we obtain quite a few dimension-optimal binary locally
repairable codes. Secondly, from this concatenation, we explicitly build a
family of locally repairable codes that exceeds the Zyablov-type bound.
Thirdly, by a lengthening propagation rule that adds some rows and columns from
a parity-check matrix of a given linear code, we are able to produce a family
of dimension-optimal binary locally repairable codes from the extended Hamming
codes, and to convert a classical maximum distance separable (MDS) code into a
Singleton-optimal locally repairable code. Furthermore, via the lengthening
propagation rule, we greatly simplify the construction of a family of locally
repairable codes in \cite[Theorem 5]{MX20} that breaks the asymptotic
Gilbert-Varshamov bound. In addition, we make use of three other propagation
rules to produce more dimension-optimal binary locally repairable codes.
Finally, one of the phenomena that we observe in this paper is that some
trivial propagation rules for classical block codes no longer hold for locally
repairable codes.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 01:15:21 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Liu",
"Shu",
""
],
[
"Ma",
"Liming",
""
],
[
"Wu",
"Tingyi",
""
],
[
"Xing",
"Chaoping",
""
]
] |
new_dataset
| 0.999496 |
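To make the concatenation propagation rule in the record above concrete, here is a toy GF(2) example: an outer single parity-check code whose symbols are then encoded by an inner repetition code (a trivially locally repairable code, since each bit can be repaired from its copies). This illustrates only the rule itself, not any of the paper's optimal constructions.

```python
import numpy as np

def outer_encode(msg):
    """Outer classical block code: append a single parity bit over GF(2)."""
    msg = np.asarray(msg) % 2
    return np.append(msg, msg.sum() % 2)

def inner_encode(bit, r=3):
    """Inner locally repairable code: r-fold repetition, so any erased
    copy of a symbol is repaired from the remaining r-1 copies."""
    return np.repeat(bit, r)

def concatenated_encode(msg, r=3):
    return np.concatenate([inner_encode(b, r) for b in outer_encode(msg)])

print(concatenated_encode([1, 0, 1]))  # [1 1 1 0 0 0 1 1 1 0 0 0]
```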
2208.04487
|
Andrew SaLoutos
|
Andrew SaLoutos, Elijah Stanger-Jones, Sangbae Kim
|
Fast Reflexive Grasping with a Proprioceptive Teleoperation Platform
|
To be published in IROS 2022. 8 pages, 10 figures. Supplementary
video at https://youtu.be/HsFT76add9g
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a proprioceptive teleoperation system that uses a reflexive
grasping algorithm to enhance the speed and robustness of pick-and-place tasks.
The system consists of two manipulators that use quasi-direct-drive actuation
to provide highly transparent force feedback. The end-effector has bimodal
force sensors that measure 3-axis force information and 2-dimensional contact
location. This information is used for anti-slip and re-grasping reflexes. When
the user makes contact with the desired object, the re-grasping reflex aligns
the gripper fingers with antipodal points on the object to maximize the grasp
stability. The reflex takes only 150ms to correct for inaccurate grasps chosen
by the user, so the user's motion is only minimally disturbed by the execution
of the re-grasp. Once antipodal contact is established, the anti-slip reflex
ensures that the gripper applies enough normal force to prevent the object from
slipping out of the grasp. The combination of proprioceptive manipulators and
reflexive grasping allows the user to complete teleoperated tasks with
precision at high speed.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 01:23:23 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"SaLoutos",
"Andrew",
""
],
[
"Stanger-Jones",
"Elijah",
""
],
[
"Kim",
"Sangbae",
""
]
] |
new_dataset
| 0.998454 |
2208.04547
|
Alexandru Albu
|
Ionu\c{t}-Alexandru Albu, Stelian Sp\^inu
|
Emotion Detection From Tweets Using a BERT and SVM Ensemble Model
| null |
U.P.B. Sci. Bull., Series C, Vol. 84, Iss. 1, 2022 ISSN 2286-3540
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Automatic identification of emotions expressed in Twitter data has a wide
range of applications. We create a well-balanced dataset by adding a neutral
class to a benchmark dataset consisting of four emotions: fear, sadness, joy,
and anger. On this extended dataset, we investigate the use of Support Vector
Machine (SVM) and Bidirectional Encoder Representations from Transformers
(BERT) for emotion recognition. We propose a novel ensemble model that combines
the BERT and SVM models. Experiments show that the proposed model achieves
a state-of-the-art accuracy of 0.91 on emotion recognition in tweets.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 05:32:29 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Albu",
"Ionuţ-Alexandru",
""
],
[
"Spînu",
"Stelian",
""
]
] |
new_dataset
| 0.985688 |
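For the BERT+SVM record above, a minimal soft-voting sketch of a two-model ensemble. The 50/50 probability averaging, the feature placeholders, and the five-class setup are assumptions for illustration; the paper's exact combination rule may differ.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes = 5  # fear, sadness, joy, anger, neutral

# Placeholders standing in for TF-IDF features and emotion labels.
X_train = rng.normal(size=(100, 20))
y_train = rng.integers(0, n_classes, size=100)
X_test = rng.normal(size=(10, 20))

svm = SVC(probability=True).fit(X_train, y_train)
svm_probs = svm.predict_proba(X_test)                  # (10, n_classes)

# Stand-in for softmax outputs of a fine-tuned BERT classifier.
bert_probs = rng.dirichlet(np.ones(n_classes), size=10)

ensemble_probs = 0.5 * bert_probs + 0.5 * svm_probs    # simple soft voting
print(ensemble_probs.argmax(axis=1))
```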
2208.04553
|
Heng Cong
|
Heng Cong, Mingzhu Sun, Duoying Zhou, Xin Zhao
|
Multi-target Tracking of Zebrafish based on Particle Filter
|
6 pages, 8 figures, 2016 35th Chinese Control Conference (CCC)
|
2016 35th Chinese Control Conference (CCC). IEEE, 2016:
10308-10313
|
10.1109/ChiCC.2016.7554987
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Zebrafish is an excellent model organism, which has been widely used in the
fields of biological experiments, drug screening, and swarm intelligence. In
recent years, a large number of techniques for tracking zebrafish have been
developed for the study of behaviors, which has attracted much attention from
scientists in many fields. Multi-target tracking of zebrafish still faces
many challenges. The high mobility and uncertainty make it difficult to predict
its motion; the similar appearances and texture features make it difficult to
establish an appearance model; it is even hard to link the trajectories because
of the frequent occlusion. In this paper, we use particle filter to approximate
the uncertainty of the motion. Firstly, by analyzing the motion characteristics
of zebrafish, we establish an efficient hybrid motion model to predict its
positions; then we establish an appearance model based on the predicted
positions to predict the posture of every target, meanwhile weighing the
particles by comparing the difference between the predicted and observed poses;
finally, we obtain the optimal position of each zebrafish through the weighted
position, and use the joint particle filter to process trajectory linking of
multiple zebrafish.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 06:02:55 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Cong",
"Heng",
""
],
[
"Sun",
"Mingzhu",
""
],
[
"Zhou",
"Duoying",
""
],
[
"Zhao",
"Xin",
""
]
] |
new_dataset
| 0.956468 |
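For the zebrafish record above, a generic predict-weight-resample cycle of a particle filter in NumPy. The constant-velocity motion model and Gaussian position likelihood here are simplifications; the paper's hybrid motion model and pose-based appearance weighting are not reproduced.

```python
import numpy as np

def particle_filter_step(particles, velocities, observation,
                         process_noise=1.0, obs_noise=2.0, rng=None):
    """One cycle for a single 2D target: predict, weight, resample."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    # Predict: propagate with a constant-velocity model plus process noise.
    particles = particles + velocities + rng.normal(0.0, process_noise, (n, 2))
    # Weight: Gaussian likelihood of the observed position per particle.
    sq_dist = ((particles - observation) ** 2).sum(axis=1)
    weights = np.exp(-0.5 * sq_dist / obs_noise ** 2)
    weights /= weights.sum()
    estimate = (weights[:, None] * particles).sum(axis=0)
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], velocities[idx], estimate

parts, vels = np.zeros((500, 2)), np.full((500, 2), 0.5)
parts, vels, est = particle_filter_step(parts, vels,
                                        observation=np.array([0.6, 0.4]))
```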
2208.04556
|
Zhilin Fu
|
Zhilin Fu, Sangwon Hwang, Jihwan Moon, Haibao Ren and Inkyu Lee
|
A Codebook Design for FD-MIMO Systems with Multi-Panel Array
| null | null |
10.1109/TVT.2022.3195529
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we study codebook designs for full-dimension multiple-input
multiple-output (FD-MIMO) systems with a multi-panel array (MPA). We propose
novel codebooks which allow precise beam structures for MPA FD-MIMO systems by
investigating the physical properties and alignments of the panels. We
specifically exploit the characteristic that a group of antennas in a vertical
direction exhibit more correlation than those in a horizontal direction. This
enables an economical use of feedback bits while constructing finer beams
compared to conventional codebooks. The codebook is further improved by
dynamically allocating the feedback bits on multiple parts such as beam
amplitude and co-phasing coefficients using reinforcement learning. The
numerical results confirm the effectiveness of the proposed approach in terms
of both performance and computational complexity.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 06:11:48 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Fu",
"Zhilin",
""
],
[
"Hwang",
"Sangwon",
""
],
[
"Moon",
"Jihwan",
""
],
[
"Ren",
"Haibao",
""
],
[
"Lee",
"Inkyu",
""
]
] |
new_dataset
| 0.998947 |
2208.04620
|
Francesco Bonchi
|
Marco Minici and Federico Cinus and Corrado Monti and Francesco Bonchi
and Giuseppe Manco
|
Cascade-based Echo Chamber Detection
|
Accepted for publication at ACM CIKM 2022
| null | null | null |
cs.SI cs.LG physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although echo chambers in social media have been under considerable scrutiny,
general models for their detection and analysis are missing. In this work, we
aim to fill this gap by proposing a probabilistic generative model that
explains social media footprints -- i.e., social network structure and
propagations of information -- through a set of latent communities,
characterized by a degree of echo-chamber behavior and by an opinion polarity.
Specifically, echo chambers are modeled as communities that are permeable to
pieces of information with similar ideological polarity, and impermeable to
information of opposed leaning: this allows discriminating echo chambers from
communities that lack a clear ideological alignment.
To learn the model parameters we propose a scalable, stochastic adaptation of
the Generalized Expectation Maximization algorithm that optimizes the joint
likelihood of observing social connections and information propagation.
Experiments on synthetic data show that our algorithm is able to correctly
reconstruct ground-truth latent communities with their degree of echo-chamber
behavior and opinion polarity. Experiments on real-world data about polarized
social and political debates, such as the Brexit referendum or the COVID-19
vaccine campaign, confirm the effectiveness of our proposal in detecting echo
chambers. Finally, we show how our model can improve accuracy in auxiliary
predictive tasks, such as stance detection and prediction of future
propagations.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 09:30:38 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Minici",
"Marco",
""
],
[
"Cinus",
"Federico",
""
],
[
"Monti",
"Corrado",
""
],
[
"Bonchi",
"Francesco",
""
],
[
"Manco",
"Giuseppe",
""
]
] |
new_dataset
| 0.962225 |
2208.04635
|
EPTCS
|
Matteo Cimini (University of Massachusetts Lowell, USA)
|
Lang-n-Send Extended: Sending Regular Expressions to Monitors
|
In Proceedings ICE 2022, arXiv:2208.04086
|
EPTCS 365, 2022, pp. 69-84
|
10.4204/EPTCS.365.5
| null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
In prior work, Cimini has presented Lang-n-Send, a pi-calculus with language
definitions.
In this paper, we present an extension of this calculus called Lang-n-Send+m.
First, we revise Lang-n-Send to work with transition system specifications
rather than its language specifications. This revision allows the use of
negative premises in deduction rules. Next, we extend Lang-n-Send with monitors
and with the ability to send and receive regular expressions, which then
can be used in the context of larger regular expressions to monitor the
execution of programs.
We present a reduction semantics for Lang-n-Send+m, and we offer examples
that demonstrate the scenarios that our calculus captures.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 09:54:38 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Cimini",
"Matteo",
"",
"University of Massachusetts Lowell, USA"
]
] |
new_dataset
| 0.972904 |
2208.04685
|
Vinay Chaudhri
|
Vinay K Chaudhri
|
Computable Contracts in the Financial Services Industry
| null | null | null | null |
cs.CY cs.PL q-fin.GN
|
http://creativecommons.org/licenses/by/4.0/
|
A computable contract is a contract that a computer can read, understand and
execute. The financial services industry makes extensive use of contracts, for
example, mortgage agreements, derivatives contracts, arbitration agreements,
etc. Most of these contracts exist as text documents, making it difficult to
automatically query, execute and analyze them. In this vision paper, we argue
that the use of computable contracts in the financial services industry will
lead to substantial improvements in customer experience and reductions in the
cost of legal transactions, make it easier to respond to changing laws, and
provide a much better framework for making decisions impacted by contracts.
Using a simple payment agreement, we illustrate a Contract Definition Language,
sketch several use cases and discuss their benefits to the financial services
industry.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2022 00:06:39 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Chaudhri",
"Vinay K",
""
]
] |
new_dataset
| 0.999724 |
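To ground the computable-contracts record above, a toy machine-readable payment agreement in Python. This only illustrates the general idea of a contract a program can query and execute; it is not the paper's Contract Definition Language, and all field names are invented.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PaymentAgreement:
    payer: str
    payee: str
    amount: float
    due: date

    def is_overdue(self, today: date, paid: bool) -> bool:
        """A clause the computer can evaluate directly."""
        return (not paid) and today > self.due

contract = PaymentAgreement("Alice", "Bob", 1000.0, date(2022, 12, 31))
print(contract.is_overdue(date(2023, 1, 5), paid=False))  # True
```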
2208.04688
|
Christian Colot
|
Christian Colot, Francois Robinet, Geoffrey Nichils, Raphael Frank
|
Connected Vehicle Platforms for Dynamic Insurance
|
Working paper
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following a regulatory change in Europe which mandates that car manufacturers
include an eCall system in new vehicles, many car manufacturers are adding
additional services on top, so that more and more cars become connected
vehicles and act like IoT sensors. In the following study, we analyse the
maturity level of this new technology to build insurance products that would
take vehicle usage into account. For this, the connectivity of recent cars
a-priori eligible has been first tested. Then, an ad-hoc platform has been
designed to collect driving data. In particular, 4 cars have been connected to
this platform for periods of over one month. Our results highlight that, while
this technological innovation appears very promising in the future, the
pricing, the lack of uniformity of data collected and the enrollment process
are currently three pain points that should be addressed to offer large-scale
opportunities. In the meantime, this technology might still be used for high
value use cases such as the insurance of luxurious cars.
|
[
{
"version": "v1",
"created": "Mon, 1 Aug 2022 14:30:18 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Colot",
"Christian",
""
],
[
"Robinet",
"Francois",
""
],
[
"Nichils",
"Geoffrey",
""
],
[
"Frank",
"Raphael",
""
]
] |
new_dataset
| 0.972626 |
2208.04704
|
Debesh Jha
|
Ashish Rauniyar, Desta Haileselassie Hagos, Debesh Jha, Jan Erik
H{\aa}keg{\aa}rd
|
COROID: A Crowdsourcing-based Companion Drones to Tackle Current and
Future Pandemics
|
Accepted
|
IEEE SAM, 2022
| null | null |
cs.CY cs.CV cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the current COVID-19 virus, which has already been declared a pandemic
by the World Health Organization (WHO), we are witnessing the greatest pandemic
of the decade. Millions of people are being infected, resulting in thousands of
deaths every day across the globe. Even the best healthcare-providing
countries could not handle the pandemic because of the strain of treating
thousands of patients at a time. The count of infections and
deaths is increasing at an alarming rate because of the spread of the virus. We
believe that innovative technologies could help reduce pandemics to a certain
extent until we find a definite solution from the medical field to handle and
treat such pandemic situations. Technology innovation has the potential to
introduce new technologies that could support people and society during these
difficult times. Therefore, this paper proposes the idea of using drones as a
companion to tackle current and future pandemics. Our COROID drone is based on
the principle of crowdsourcing sensor data from the public's smart devices,
which can be correlated with the readings of the infrared cameras mounted on the COROID
drones. To the best of our knowledge, this concept has yet to be investigated
either as a concept or as a product. Therefore, we believe that the COROID
drone is innovative and has a huge potential to tackle COVID-19 and future
pandemics.
|
[
{
"version": "v1",
"created": "Tue, 19 Jul 2022 18:38:03 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Rauniyar",
"Ashish",
""
],
[
"Hagos",
"Desta Haileselassie",
""
],
[
"Jha",
"Debesh",
""
],
[
"Håkegård",
"Jan Erik",
""
]
] |
new_dataset
| 0.99386 |
2208.04741
|
Miguel Pardal
|
Rui Claro and Samih Eisa and Miguel L. Pardal
|
Lisbon Hotspots: Wi-Fi access point dataset for time-bound location
proofs
|
14 pages
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Wi-Fi hotspots are a valuable resource for people on the go, especially
tourists, as they provide a means to connect personal devices to the Internet.
This extra connectivity can be helpful in many situations, e.g., to enable map
and chat applications to operate outdoors when cellular connectivity is
unavailable or is expensive. Retail stores and many public services have
recognized that hotspots have the potential to attract and retain customers, so
many of them offer free and open Wi-Fi. In busy cities, with many locals and
visitors, the number of hotspots is very significant. Some of these hotspots
are available for long periods of time, while others are short-lived. When we
have many users with devices collecting hotspot observations, they can be used
to detect the location -- using the long-lived hotspots -- and to prove the
time when the location was visited -- using the short-lived hotspots observed
by other users at the location.
In this article, we present a dataset of collected Wi-Fi data from the most
important tourist locations in the city of Lisbon, Portugal, over a period of
months, that was used to show the feasibility of using hotspot data for
location detection and proof. The obtained data and algorithms were assessed
for a specific use case: smart tourism. We also present the data model used to
store the observations and the algorithms developed to detect and prove
location of a user device at a specific time. The Lisbon Hotspots dataset,
LXspots, is made publicly available to the scientific community so that other
researchers can also make use of it to develop new and innovative mobile and
Internet of Things applications.
|
[
{
"version": "v1",
"created": "Fri, 5 Aug 2022 11:21:48 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Claro",
"Rui",
""
],
[
"Eisa",
"Samih",
""
],
[
"Pardal",
"Miguel L.",
""
]
] |
new_dataset
| 0.99987 |
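For the Lisbon Hotspots record above, a small sketch of the long-lived/short-lived split the abstract relies on: access points seen over a long window anchor location detection, while ephemeral ones serve as time-bound witnesses. The 30-day threshold and input format are assumptions, not values from the paper.

```python
from collections import defaultdict

def split_hotspots(observations, lifetime_threshold=30 * 24 * 3600):
    """observations: iterable of (bssid, unix_timestamp) pairs.
    Returns (long_lived, short_lived) sets of access-point identifiers."""
    seen = defaultdict(list)
    for bssid, ts in observations:
        seen[bssid].append(ts)
    long_lived, short_lived = set(), set()
    for bssid, stamps in seen.items():
        lifetime = max(stamps) - min(stamps)
        (long_lived if lifetime >= lifetime_threshold
         else short_lived).add(bssid)
    return long_lived, short_lived

obs = [("aa:bb", 0), ("aa:bb", 40 * 24 * 3600), ("cc:dd", 100), ("cc:dd", 200)]
print(split_hotspots(obs))  # ({'aa:bb'}, {'cc:dd'})
```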
2208.04757
|
Grzegorz Ficht
|
Grzegorz Ficht and Sven Behnke
|
Direct Centroidal Control for Balanced Humanoid Locomotion
|
25th International Conference Series on Climbing and Walking Robots
(CLAWAR), Ponta Delgada, Azores, Portugal, September 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an integrated approach to locomotion and balancing of humanoid
robots based on direct centroidal control. Our method uses a five-mass
description of a humanoid. It generates whole-body motions from desired foot
trajectories and centroidal parameters of the robot. A set of simplified models
is used to formulate general and intuitive control laws, which are then applied
in real-time for estimating and regulating the center of mass position and
orientation of the multibody's principal axes of inertia. The combination of
proposed algorithms produces a stretched-leg gait with natural-looking
upper-body motions. As only a 6-axis IMU and joint encoders are necessary for the
implementation, the portability between robots is high. Our method has been
experimentally verified using an igus Humanoid Open Platform, demonstrating
whole-body locomotion and push rejection capabilities.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 13:07:24 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Ficht",
"Grzegorz",
""
],
[
"Behnke",
"Sven",
""
]
] |
new_dataset
| 0.999625 |
2208.04799
|
Ekapol Chuangsuwanich
|
Wannaphong Phatthiyaphaibun, Chompakorn Chaksangchaichot, Peerat
Limkonchotiwat, Ekapol Chuangsuwanich, Sarana Nutanong
|
Thai Wav2Vec2.0 with CommonVoice V8
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, Automatic Speech Recognition (ASR), a system that converts audio
into text, has caught a lot of attention in the machine learning community.
Thus, a lot of publicly available models have been released on HuggingFace. However,
most of these ASR models are available in English; only a minority of the
models are available in Thai. Additionally, most of the Thai ASR models are
closed-sourced, and the performance of existing open-sourced models lacks
robustness. To address this problem, we train a new ASR model on a pre-trained
XLSR-Wav2Vec model with the Thai CommonVoice corpus V8 and train a trigram
language model to boost the performance of our ASR model. We hope that our
models will be beneficial to individuals and the ASR community in Thailand.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 14:21:48 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Phatthiyaphaibun",
"Wannaphong",
""
],
[
"Chaksangchaichot",
"Chompakorn",
""
],
[
"Limkonchotiwat",
"Peerat",
""
],
[
"Chuangsuwanich",
"Ekapol",
""
],
[
"Nutanong",
"Sarana",
""
]
] |
new_dataset
| 0.999428 |
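For the Thai Wav2Vec2.0 record above, a minimal greedy-decoding sketch with the HuggingFace transformers API. The checkpoint name is a hypothetical placeholder for the authors' released model, and the trigram language-model rescoring described in the abstract is omitted.

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

name = "some-org/wav2vec2-xlsr-thai"  # hypothetical checkpoint id
processor = Wav2Vec2Processor.from_pretrained(name)
model = Wav2Vec2ForCTC.from_pretrained(name)

audio = torch.zeros(16000)  # one second of silence as stand-in speech
inputs = processor(audio.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # (1, frames, vocab)
pred_ids = logits.argmax(dim=-1)
print(processor.batch_decode(pred_ids))  # greedy CTC transcription
```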
2208.04921
|
Weihong Lin
|
Weihong Lin, Zheng Sun, Chixiang Ma, Mingze Li, Jiawei Wang, Lei Sun,
Qiang Huo
|
TSRFormer: Table Structure Recognition with Transformers
|
Accepted by ACM MultiMedia 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new table structure recognition (TSR) approach, called
TSRFormer, to robustly recognize the structures of complex tables with
geometrical distortions from various table images. Unlike previous methods, we
formulate table separation line prediction as a line regression problem instead
of an image segmentation problem and propose a new two-stage DETR based
separator prediction approach, dubbed \textbf{Sep}arator \textbf{RE}gression
\textbf{TR}ansformer (SepRETR), to predict separation lines from table images
directly. To make the two-stage DETR framework work efficiently and effectively
for the separation line prediction task, we propose two improvements: 1) A
prior-enhanced matching strategy to solve the slow convergence issue of DETR;
2) A new cross attention module to sample features from a high-resolution
convolutional feature map directly so that high localization accuracy is
achieved with low computational cost. After separation line prediction, a
simple relation network based cell merging module is used to recover spanning
cells. With these new techniques, our TSRFormer achieves state-of-the-art
performance on several benchmark datasets, including SciTSR, PubTabNet and WTW.
Furthermore, we have validated the robustness of our approach to tables with
complex structures, borderless cells, large blank spaces, empty or spanning
cells as well as distorted or even curved shapes on a more challenging
real-world in-house dataset.
|
[
{
"version": "v1",
"created": "Tue, 9 Aug 2022 17:36:13 GMT"
}
] | 2022-08-10T00:00:00 |
[
[
"Lin",
"Weihong",
""
],
[
"Sun",
"Zheng",
""
],
[
"Ma",
"Chixiang",
""
],
[
"Li",
"Mingze",
""
],
[
"Wang",
"Jiawei",
""
],
[
"Sun",
"Lei",
""
],
[
"Huo",
"Qiang",
""
]
] |
new_dataset
| 0.998748 |
2104.14805
|
Qi Fan
|
Qi Fan, Chi-Keung Tang, Yu-Wing Tai
|
Few-Shot Video Object Detection
|
ECCV 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce Few-Shot Video Object Detection (FSVOD) with three contributions
to the real-world visual learning challenge in our highly diverse and dynamic
world: 1) a large-scale video dataset FSVOD-500 comprising 500 classes with
class-balanced videos in each category for few-shot learning; 2) a novel Tube
Proposal Network (TPN) to generate high-quality video tube proposals for
aggregating feature representation for the target video object which can be
highly dynamic; 3) a strategically improved Temporal Matching Network (TMN+)
for matching representative query tube features with better discriminative
ability thus achieving higher diversity. Our TPN and TMN+ are jointly and
end-to-end trained. Extensive experiments demonstrate that our method produces
significantly better detection results on two few-shot video object detection
datasets compared to image-based methods and other naive video-based
extensions. Codes and datasets are released at
\url{https://github.com/fanq15/FewX}.
|
[
{
"version": "v1",
"created": "Fri, 30 Apr 2021 07:38:04 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Nov 2021 08:33:25 GMT"
},
{
"version": "v3",
"created": "Sun, 7 Aug 2022 09:23:11 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Fan",
"Qi",
""
],
[
"Tang",
"Chi-Keung",
""
],
[
"Tai",
"Yu-Wing",
""
]
] |
new_dataset
| 0.985082 |
2109.02327
|
Trinh Van Chien
|
Van-Phuc Bui and Trinh Van Chien and Eva Lagunas and Jo\"el Grotz and
Symeon Chatzinotas and Bj\"orn Ottersten
|
Robust Congestion Control for Demand-Based Optimization in Precoded
Multi-Beam High Throughput Satellite Communications
|
20 pages, 12 figures, and 1 table. Accepted to publish in the IEEE
TCOM
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-throughput satellite communication systems are growing in strategic
importance thanks to their role in delivering broadband services to mobile
platforms and residences and/or businesses in rural and remote regions
globally. Although precoding has emerged as a prominent technique to meet
ever-increasing user demands, there is a lack of studies dealing with
congestion control. This paper enhances the performance of multi-beam high
throughput geostationary satellite systems under congestion, where the users'
quality of service (QoS) demands cannot be fully satisfied with limited
resources. In particular, we propose congestion control strategies, relying on
simple power control schemes. We formulate a multi-objective optimization
framework balancing the system sum-rate and the number of users satisfying
their QoS requirements. Next, we propose two novel approaches that effectively
handle the proposed multi-objective optimization problem. The former is a
model-based approach that relies on the weighted sum method to enrich the
number of satisfied users by solving a series of the sum-rate optimization
problems in an iterative manner. The latter is a data-driven approach that
offers a low-cost solution by utilizing supervised learning and exploiting the
optimization structures as continuous mappings. The proposed general framework
is evaluated for different linear precoding techniques, for which the low
computational complexity algorithms are designed. Numerical results show
that our proposed framework effectively handles the congestion issue and brings
greater improvements in rate satisfaction for many users than previous works.
Furthermore, the proposed algorithms have low run-times, which makes them
realistic for practical systems.
|
[
{
"version": "v1",
"created": "Mon, 6 Sep 2021 09:57:13 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 00:07:34 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Bui",
"Van-Phuc",
""
],
[
"Van Chien",
"Trinh",
""
],
[
"Lagunas",
"Eva",
""
],
[
"Grotz",
"Joël",
""
],
[
"Chatzinotas",
"Symeon",
""
],
[
"Ottersten",
"Björn",
""
]
] |
new_dataset
| 0.995289 |
2109.10504
|
Yongfei Liu
|
Yongfei Liu, Chenfei Wu, Shao-yen Tseng, Vasudev Lal, Xuming He, Nan
Duan
|
KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object
Knowledge Distillation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Self-supervised vision-and-language pretraining (VLP) aims to learn
transferable multi-modal representations from large-scale image-text data and
to achieve strong performances on a broad scope of vision-language tasks after
finetuning. Previous mainstream VLP approaches typically adopt a two-step
strategy relying on external object detectors to encode images in a multi-modal
Transformer framework, which suffers from a restrictive object concept space,
limited image context, and inefficient computation. In this paper, we propose an
object-aware end-to-end VLP framework, which directly feeds image grid features
from CNNs into the Transformer and learns the multi-modal representations
jointly. More importantly, we propose to perform object knowledge distillation
to facilitate learning cross-modal alignment at different semantic levels. To
achieve that, we design two novel pretext tasks by taking object features and
their semantic labels from external detectors as supervision: 1.) Object-guided
masked vision modeling task focuses on enforcing object-aware representation
learning in the multi-modal Transformer; 2.) Phrase-region alignment task aims
to improve cross-modal alignment by utilizing the similarities between noun
phrases and object labels in the linguistic space. Extensive experiments on a
wide range of vision-language tasks demonstrate the efficacy of our proposed
framework, and we achieve competitive or superior performances over the
existing pretraining strategies.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 03:38:05 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2022 03:15:01 GMT"
},
{
"version": "v3",
"created": "Sun, 7 Aug 2022 18:27:10 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Liu",
"Yongfei",
""
],
[
"Wu",
"Chenfei",
""
],
[
"Tseng",
"Shao-yen",
""
],
[
"Lal",
"Vasudev",
""
],
[
"He",
"Xuming",
""
],
[
"Duan",
"Nan",
""
]
] |
new_dataset
| 0.989545 |
2110.11488
|
AbdelRahman Abdou
|
Jegan Purushothaman, Ethan Thompson, AbdelRahman Abdou
|
Certificate Root Stores: An Area of Unity or Disparity?
| null |
USENIX Cyber Security Experimentation and Test Workshop (CSET
2022)
| null | null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Organizations like Apple, Microsoft, Mozilla and Google maintain certificate
root stores, which are used as trust anchors by their software platforms. Is
there sufficient consensus on their root-store inclusion and trust policies?
Disparities appear astounding, including in the government-owned certificates
that they trust. Such a status quo is alarming.
|
[
{
"version": "v1",
"created": "Thu, 21 Oct 2021 21:29:00 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 14:03:57 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Purushothaman",
"Jegan",
""
],
[
"Thompson",
"Ethan",
""
],
[
"Abdou",
"AbdelRahman",
""
]
] |
new_dataset
| 0.986067 |
2110.12421
|
Thomas Preu
|
Thomas Preu
|
Refuting Tianrong Lin's arXiv:2110.05942 "Resolution of The
Linear-Bounded Automata Question"
|
7 pages, refers to arXiv:2110.05942 by Tianrong Lin
| null | null | null |
cs.CC cs.FL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In the preprint mentioned in the title Mr. Tianrong claims to prove
$\textrm{NSPACE}[n]\neq\textrm{DSPACE}[n]$, resolving a longstanding open
problem in automata theory called the LBA question. He claims to achieve this
by showing more generally $\textrm{NSPACE}[S(n)]\neq\textrm{DSPACE}[S(n)]$ for
suitable $S(n)$. We demonstrate that his proof is incomplete, even wrong, and
his strategy cannot be repaired.
Updated to include recent developments in Mr. Tianrong's preprint.
|
[
{
"version": "v1",
"created": "Sun, 24 Oct 2021 12:00:58 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Aug 2022 18:57:06 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Preu",
"Thomas",
""
]
] |
new_dataset
| 0.998111 |
2110.14183
|
Souvic Chakraborty
|
Souvic Chakraborty, Pawan Goyal, Animesh Mukherjee
|
(Im)balance in the Representation of News? An Extensive Study on a
Decade Long Dataset from India
|
14 pages, submitted to IEEE TCSS
|
International Conference on Social Informatics, SocInfo, 2022
| null | null |
cs.DL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
(Im)balance in the representation of news has always been a topic of debate
in political circles.
The concept of balance has often been discussed and studied in the context of
the social responsibility theory and the prestige press in the USA. While
various qualitative, as well as quantitative measures of balance, have been
suggested in the literature, a comprehensive analysis of all these measures
across a large dataset of the post-truth era comprising different popular news
media houses and over a sufficiently long temporal scale in a non-US democratic
setting is lacking. We use this concept of balance to measure and understand
the evolution of imbalance in Indian media on various journalistic metrics on a
month-by-month basis. For this study, we amass a huge dataset of over four
million political articles from India for 9+ years and analyze the extent and
quality of coverage given to issues and political parties in the context of
contemporary influential events for three leading newspapers. We use several
state-of-the-art NLP tools to effectively understand political polarization (if
any) manifesting in these articles over time. We find that two out of the three
news outlets are more strongly clustered in their imbalance metrics. We also
observe that only a few locations are extensively covered across all the news
outlets and the situation is only slightly getting better for one of the three
news outlets. Cloze tests show that the changing landscape of events gets
reflected in all the news outlets, with border and terrorism issues dominating
in around 2010 while economic aspects like unemployment, GST, demonetization,
etc. became more dominant in the period 2014 -- 2018. Further, cloze tests
clearly portray the changing popularity profile of the political parties over
time.
|
[
{
"version": "v1",
"created": "Wed, 27 Oct 2021 05:33:09 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Chakraborty",
"Souvic",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Mukherjee",
"Animesh",
""
]
] |
new_dataset
| 0.997447 |
2110.15087
|
Benedek Rozemberczki
|
Benedek Rozemberczki and Anna Gogleva and Sebastian Nilsson and Gavin
Edwards and Andriy Nikolov and Eliseo Papa
|
MOOMIN: Deep Molecular Omics Network for Anti-Cancer Drug Combination
Therapy
| null | null | null | null |
cs.LG cs.AI cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
We propose the molecular omics network (MOOMIN), a multimodal graph neural
network used by AstraZeneca oncologists to predict the synergy of drug
combinations for cancer treatment. Our model learns drug representations at
multiple scales based on a drug-protein interaction network and metadata.
Structural properties of compounds and proteins are encoded to create vertex
features for a message-passing scheme that operates on the bipartite
interaction graph. Propagated messages form multi-resolution drug
representations, which we utilize to create drug pair descriptors. By
conditioning the drug combination representations on the cancer cell type we
define a synergy scoring function that can inductively score unseen pairs of
drugs. Experimental results on the synergy scoring task demonstrate that MOOMIN
outperforms state-of-the-art graph fingerprinting, proximity preserving node
embedding, and existing deep learning approaches. Further results establish
that the predictive performance of our model is robust to hyperparameter
changes. We demonstrate that the model makes high-quality predictions over a
wide range of cancer cell line tissues, out-of-sample predictions can be
validated with external synergy databases, and that the proposed model is data
efficient at learning.
|
[
{
"version": "v1",
"created": "Thu, 28 Oct 2021 13:10:25 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2022 13:01:17 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Aug 2022 14:15:44 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Rozemberczki",
"Benedek",
""
],
[
"Gogleva",
"Anna",
""
],
[
"Nilsson",
"Sebastian",
""
],
[
"Edwards",
"Gavin",
""
],
[
"Nikolov",
"Andriy",
""
],
[
"Papa",
"Eliseo",
""
]
] |
new_dataset
| 0.998138 |
2111.11326
|
Arthur Douillard
|
Arthur Douillard, Alexandre Ram\'e, Guillaume Couairon, Matthieu Cord
|
DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion
|
CVPR 2022, Code at https://github.com/arthurdouillard/dytox
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Deep network architectures struggle to continually learn new tasks without
forgetting the previous tasks. A recent trend indicates that dynamic
architectures based on an expansion of the parameters can reduce catastrophic
forgetting efficiently in continual learning. However, existing approaches
often require a task identifier at test-time, need complex tuning to balance
the growing number of parameters, and barely share any information across
tasks. As a result, they struggle to scale to a large number of tasks without
significant overhead. In this paper, we propose a transformer architecture
based on a dedicated encoder/decoder framework. Critically, the encoder and
decoder are shared among all tasks. Through a dynamic expansion of special
tokens, we specialize each forward of our decoder network on a task
distribution. Our strategy scales to a large number of tasks while having
negligible memory and time overheads due to strict control of the parameters
expansion. Moreover, this efficient strategy doesn't need any hyperparameter
tuning to control the network's expansion. Our model reaches excellent results
on CIFAR100 and state-of-the-art performances on the large-scale ImageNet100
and ImageNet1000 while having fewer parameters than concurrent dynamic
frameworks.
|
[
{
"version": "v1",
"created": "Mon, 22 Nov 2021 16:29:06 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2022 14:24:58 GMT"
},
{
"version": "v3",
"created": "Sun, 7 Aug 2022 15:39:31 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Douillard",
"Arthur",
""
],
[
"Ramé",
"Alexandre",
""
],
[
"Couairon",
"Guillaume",
""
],
[
"Cord",
"Matthieu",
""
]
] |
new_dataset
| 0.98354 |
2111.13327
|
Yi-Chang Chen
|
Yi-Chang Chen, Yu-Chuan Chang, Yen-Cheng Chang and Yi-Ren Yeh
|
Traditional Chinese Synthetic Datasets Verified with Labeled Data for
Scene Text Recognition
|
Accepted in ICPR Workshop DLVDR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene text recognition (STR) has been widely studied in academia and
industry. Training a text recognition model often requires a large amount of
labeled data, but data labeling can be difficult, expensive, or time-consuming,
especially for Traditional Chinese text recognition. To the best of our
knowledge, public datasets for Traditional Chinese text recognition are
lacking. This paper presents a framework for a Traditional Chinese synthetic
data engine which aims to improve text recognition model performance. We
generated over 20 million synthetic samples and collected over 7,000 manually
labeled samples (TC-STR 7k-word) as the benchmark. Experimental results show that a
text recognition model can achieve much better accuracy either by training from
scratch with our generated synthetic data or by further fine-tuning with TC-STR
7k-word.
|
[
{
"version": "v1",
"created": "Fri, 26 Nov 2021 06:27:06 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Aug 2022 06:54:24 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Chen",
"Yi-Chang",
""
],
[
"Chang",
"Yu-Chuan",
""
],
[
"Chang",
"Yen-Cheng",
""
],
[
"Yeh",
"Yi-Ren",
""
]
] |
new_dataset
| 0.984726 |
2203.01630
|
Walid Ghanem Mr
|
Walid R. Ghanem, Vahid Jamali, Malte Schellmann, Hanwen Cao, Joseph
Eichinger, and Robert Schober
|
Optimization-based Phase-shift Codebook Design for Large IRSs
|
13 pages, 4 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we focus on large intelligent reflecting surfaces (IRSs) and
propose a new codebook construction method to obtain a set of pre-designed
phase-shift configurations for the IRS unit cells. Since the complexity of
online optimization and the overhead for channel estimation scale with the size
of the phase-shift codebook, the design of small codebooks is of high
importance. We consider both continuous and discrete phase-shift designs and
formulate the codebook construction as optimization problems. To solve the
optimization problems, we propose an optimal algorithm for the discrete
phase-shift design and a low-complexity sub-optimal solution for the continuous
design. Simulation results show that the proposed algorithms facilitate the
construction of codebooks of different sizes and with different beamwidths.
Moreover, the performance of the discrete phase-shift design with 2-bit
quantization is shown to approach that of the continuous phase-shift design.
Finally, our simulation results show that the proposed designs enable large
transmit power savings compared to the existing linear and quadratic codebook
designs [1], [2].
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 10:45:05 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 16:43:33 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Ghanem",
"Walid R.",
""
],
[
"Jamali",
"Vahid",
""
],
[
"Schellmann",
"Malte",
""
],
[
"Cao",
"Hanwen",
""
],
[
"Eichinger",
"Joseph",
""
],
[
"Schober",
"Robert",
""
]
] |
new_dataset
| 0.982802 |
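For the IRS codebook record above, a small sketch of the discrete phase-shift step the abstract refers to: snapping continuous unit-cell phases onto a b-bit alphabet. This illustrates only the quantization, not the paper's optimal codebook-construction algorithms.

```python
import numpy as np

def quantize_phases(phases, bits=2):
    """Map continuous phases onto the 2**bits uniform grid over [0, 2*pi)."""
    levels = 2 ** bits
    step = 2.0 * np.pi / levels
    return (np.round(np.mod(phases, 2.0 * np.pi) / step) % levels) * step

continuous = np.random.uniform(0.0, 2.0 * np.pi, size=8)
print(quantize_phases(continuous))  # values in {0, pi/2, pi, 3*pi/2}
```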
2203.03018
|
Arman Raayatsanati
|
Aurel Appius, Erik Bauer, Marc Bl\"ochlinger, Aashi Kalra, Robin
Oberson, Arman Raayatsanati, Pascal Strauch, Sarath Suresh, Marco von Salis,
Robert K. Katzschmann
|
RAPTOR: Rapid Aerial Pickup and Transport of Objects by Robots
|
7 pages, 10 figures, accepted to IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS) 2022. Video:
https://youtu.be/KHkBlBABsC8
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rapid aerial grasping through robots can lead to many applications that
utilize fast and dynamic picking and placing of objects. Rigid grippers
traditionally used in aerial manipulators require high precision and specific
object geometries for successful grasping. We propose RAPTOR, a quadcopter
platform combined with a custom Fin Ray gripper to enable more flexible
grasping of objects with different geometries, leveraging the properties of
soft materials to increase the contact surface between the gripper and the
objects. To reduce the communication latency, we present a new lightweight
middleware solution based on Fast DDS (Data Distribution Service) as an
alternative to ROS (Robot Operating System). We show that RAPTOR achieves an
average of 83% grasping efficacy in a real-world setting for four different
object geometries while moving at an average velocity of 1 m/s during grasping.
In a high-velocity setting, RAPTOR supports up to four times the payload
compared to previous works. Our results highlight the potential of aerial
drones in automated warehouses and other manipulation applications where speed,
swiftness, and robustness are essential while operating in hard-to-reach
places.
|
[
{
"version": "v1",
"created": "Sun, 6 Mar 2022 18:05:35 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Aug 2022 18:00:43 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Appius",
"Aurel",
""
],
[
"Bauer",
"Erik",
""
],
[
"Blöchlinger",
"Marc",
""
],
[
"Kalra",
"Aashi",
""
],
[
"Oberson",
"Robin",
""
],
[
"Raayatsanati",
"Arman",
""
],
[
"Strauch",
"Pascal",
""
],
[
"Suresh",
"Sarath",
""
],
[
"von Salis",
"Marco",
""
],
[
"Katzschmann",
"Robert K.",
""
]
] |
new_dataset
| 0.993202 |
2203.08222
|
Pierre Richemond
|
Stephanie C. Y. Chan and Andrew K. Lampinen and Pierre H. Richemond
and Felix Hill
|
Zipfian environments for Reinforcement Learning
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
As humans and animals learn in the natural world, they encounter
distributions of entities, situations and events that are far from uniform.
Typically, a relatively small set of experiences are encountered frequently,
while many important experiences occur only rarely. The highly-skewed,
heavy-tailed nature of reality poses particular learning challenges that humans
and animals have met by evolving specialised memory systems. By contrast, most
popular RL environments and benchmarks involve approximately uniform variation
of properties, objects, situations or tasks. How will RL algorithms perform in
worlds (like ours) where the distribution of environment features is far less
uniform? To explore this question, we develop three complementary RL
environments where the agent's experience varies according to a Zipfian
(discrete power law) distribution. On these benchmarks, we find that standard
Deep RL architectures and algorithms acquire useful knowledge of common
situations and tasks, but fail to adequately learn about rarer ones. To
understand this failure better, we explore how different aspects of current
approaches may be adjusted to help improve performance on rare events, and show
that the RL objective function, the agent's memory system and self-supervised
learning objectives can all influence an agent's ability to learn from uncommon
experiences. Together, these results show that learning robustly from skewed
experience is a critical challenge for applying Deep RL methods beyond
simulations or laboratories, and our Zipfian environments provide a basis for
measuring future progress towards this goal.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2022 19:59:10 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 13:45:59 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Chan",
"Stephanie C. Y.",
""
],
[
"Lampinen",
"Andrew K.",
""
],
[
"Richemond",
"Pierre H.",
""
],
[
"Hill",
"Felix",
""
]
] |
new_dataset
| 0.999257 |
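For the Zipfian-environments record above, a small sketch of drawing experiences from a truncated Zipfian (discrete power-law) distribution, so that a few items dominate while most appear rarely. The item count and exponent are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, exponent = 100, 2.0

ranks = np.arange(1, n_items + 1)
probs = ranks.astype(float) ** -exponent
probs /= probs.sum()                          # truncated Zipf over n_items

episodes = rng.choice(n_items, size=10_000, p=probs)
counts = np.bincount(episodes, minlength=n_items)
print(counts[:3], counts[-3:])                # heavy head, sparse tail
```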
2203.10638
|
Runsheng Xu
|
Runsheng Xu, Hao Xiang, Zhengzhong Tu, Xin Xia, Ming-Hsuan Yang, Jiaqi
Ma
|
V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision
Transformer
|
ECCV 2022. Code: https://github.com/DerrickXuNu/v2x-vit
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we investigate the application of Vehicle-to-Everything (V2X)
communication to improve the perception performance of autonomous vehicles. We
present a robust cooperative perception framework with V2X communication using
a novel vision Transformer. Specifically, we build a holistic attention model,
namely V2X-ViT, to effectively fuse information across on-road agents (i.e.,
vehicles and infrastructure). V2X-ViT consists of alternating layers of
heterogeneous multi-agent self-attention and multi-scale window self-attention,
which captures inter-agent interaction and per-agent spatial relationships.
These key modules are designed in a unified Transformer architecture to handle
common V2X challenges, including asynchronous information sharing, pose errors,
and heterogeneity of V2X components. To validate our approach, we create a
large-scale V2X perception dataset using CARLA and OpenCDA. Extensive
experimental results demonstrate that V2X-ViT sets new state-of-the-art
performance for 3D object detection and achieves robust performance even under
harsh, noisy environments. The code is available at
https://github.com/DerrickXuNu/v2x-vit.
|
[
{
"version": "v1",
"created": "Sun, 20 Mar 2022 20:18:25 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Jul 2022 04:25:39 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Aug 2022 14:52:03 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Xu",
"Runsheng",
""
],
[
"Xiang",
"Hao",
""
],
[
"Tu",
"Zhengzhong",
""
],
[
"Xia",
"Xin",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Ma",
"Jiaqi",
""
]
] |
new_dataset
| 0.998197 |
2205.15615
|
Zixiang Ren
|
Zixiang Ren, Xianxin Song, Yuan Fang, Ling Qiu, and Jie Xu
|
Fundamental CRB-Rate Tradeoff in Multi-antenna Multicast Channel with
ISAC
|
conference
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper studies the multi-antenna multicast channel with integrated
sensing and communication (ISAC), in which a multi-antenna base station (BS)
sends common messages to a set of single-antenna communication users (CUs) and
simultaneously estimates the parameters of an extended target via radar
sensing. We investigate the fundamental performance limits of this ISAC system,
in terms of the achievable rate for communication and the estimation
Cram\'er-Rao bound (CRB) for sensing. First, we derive the optimal transmit
covariance in semi-closed form to balance the CRB-rate (C-R) tradeoff, and
accordingly characterize the outer bound of a so-called C-R region. It is shown
that the optimal transmit covariance should be of full rank, consisting of both
information-carrying and dedicated sensing signals in general. Next, we
consider a practical joint information and sensing beamforming design, and
propose an efficient approach to optimize the joint beamforming for balancing
the C-R tradeoff. Numerical results are presented to show the C-R region
achieved by the optimal transmit covariance and the joint beamforming, as
compared to other benchmark schemes.
|
[
{
"version": "v1",
"created": "Tue, 31 May 2022 09:00:44 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 03:39:01 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Ren",
"Zixiang",
""
],
[
"Song",
"Xianxin",
""
],
[
"Fang",
"Yuan",
""
],
[
"Qiu",
"Ling",
""
],
[
"Xu",
"Jie",
""
]
] |
new_dataset
| 0.98069 |
2206.07510
|
Ciaran Eising
|
Arindam Das, Sudip Das, Ganesh Sistu, Jonathan Horgan, Ujjwal
Bhattacharya, Edward Jones, Martin Glavin, and Ciar\'an Eising
|
Deep Multi-Task Networks For Occluded Pedestrian Pose Estimation
|
4 pages, 5 tables, 2 figures
|
Proceedings of the 2022 Irish Machine Vision and Image Processing
Conference
| null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Most of the existing works on pedestrian pose estimation do not consider
estimating the pose of an occluded pedestrian, as the annotations of the
occluded parts are not available in relevant automotive datasets. For example,
CityPersons, a well-known dataset for pedestrian detection in automotive scenes
does not provide pose annotations, whereas MS-COCO, a non-automotive dataset,
contains human pose estimation. In this work, we propose a multi-task framework
to extract pedestrian features through detection and instance segmentation
tasks performed separately on these two distributions. Thereafter, an encoder
learns pose specific features using an unsupervised instance-level domain
adaptation method for the pedestrian instances from both distributions. The
proposed framework improves on the state-of-the-art performance of pose
estimation, pedestrian detection, and instance segmentation.
|
[
{
"version": "v1",
"created": "Wed, 15 Jun 2022 13:09:24 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 14:03:51 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Das",
"Arindam",
""
],
[
"Das",
"Sudip",
""
],
[
"Sistu",
"Ganesh",
""
],
[
"Horgan",
"Jonathan",
""
],
[
"Bhattacharya",
"Ujjwal",
""
],
[
"Jones",
"Edward",
""
],
[
"Glavin",
"Martin",
""
],
[
"Eising",
"Ciarán",
""
]
] |
new_dataset
| 0.999408 |
2206.11619
|
Ting Zhang
|
Ivana Clairine Irsan, Ting Zhang, Ferdian Thung, David Lo, Lingxiao
Jiang
|
AutoPRTitle: A Tool for Automatic Pull Request Title Generation
|
Accepted by the ICSME'22 Tool Demonstration Track
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rise of the pull request mechanism in software development, the
quality of pull requests has gained more attention. Prior works focus on
improving the quality of pull request descriptions and several approaches have
been proposed to automatically generate pull request descriptions. As an
essential component of a pull request, pull request titles have not received a
similar level of attention. To further facilitate automation in software
development and to help developers in drafting high-quality pull request
titles, we introduce AutoPRTitle. AutoPRTitle is specifically designed to
automatically generate pull request titles. AutoPRTitle can generate a precise
and succinct pull request title based on the pull request description, commit
messages, and the associated issue titles. AutoPRTitle is built upon a
state-of-the-art text summarization model, BART, which has been pre-trained on
large-scale English corpora. We further fine-tuned BART on a pull request
dataset containing high-quality pull request titles. We implemented AutoPRTitle
as a stand-alone web application. We conducted two sets of evaluations: one
concerning model accuracy and the other concerning tool usability. For model
accuracy, BART outperforms the best baseline by 24.6%, 40.5%, and 23.3% on the
three evaluation metrics, respectively. For tool usability, the evaluators
consider our tool easy to use and useful for creating a good-quality pull
request title.
Source code: https://github.com/soarsmu/Auto-PR-Title
Video demo: https://tinyurl.com/AutoPRTitle
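A minimal sketch of the underlying idea, assuming a stock HuggingFace BART checkpoint as a stand-in for the authors' fine-tuned model; the input formatting and generation settings are illustrative, not the released tool's exact configuration.

```python
# Sketch: generate a short PR title by summarizing the concatenated PR
# description, commit messages, and linked issue title with BART.
# "facebook/bart-large-cnn" is a stand-in for the fine-tuned checkpoint.
from transformers import BartForConditionalGeneration, BartTokenizer

name = "facebook/bart-large-cnn"
tokenizer = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name)

context = (
    "Description: Fixes the crash when parsing empty config files. "
    "Commits: handle empty file; add regression test. "
    "Issue: Parser crashes on empty configuration file."
)
inputs = tokenizer(context, return_tensors="pt", truncation=True, max_length=512)
ids = model.generate(**inputs, max_length=16, num_beams=4, early_stopping=True)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```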
|
[
{
"version": "v1",
"created": "Thu, 23 Jun 2022 11:02:18 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Aug 2022 02:11:21 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Irsan",
"Ivana Clairine",
""
],
[
"Zhang",
"Ting",
""
],
[
"Thung",
"Ferdian",
""
],
[
"Lo",
"David",
""
],
[
"Jiang",
"Lingxiao",
""
]
] |
new_dataset
| 0.973121 |
2206.12972
|
Kashu Yamazaki
|
Kashu Yamazaki, Sang Truong, Khoa Vo, Michael Kidd, Chase Rainwater,
Khoa Luu, Ngan Le
|
VLCap: Vision-Language with Contrastive Learning for Coherent Video
Paragraph Captioning
|
accepted by The 29th IEEE International Conference on Image
Processing (IEEE ICIP) 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we leverage the human perception process, which involves vision
and language interaction, to generate a coherent paragraph description of
untrimmed videos. We propose vision-language (VL) features consisting of two
modalities, i.e., (i) vision modality to capture global visual content of the
entire scene and (ii) language modality to extract scene elements description
of both human and non-human objects (e.g. animals, vehicles, etc), visual and
non-visual elements (e.g. relations, activities, etc). Furthermore, we propose
to train our proposed VLCap under a contrastive learning VL loss. The
experiments and ablation studies on ActivityNet Captions and YouCookII datasets
show that our VLCap outperforms existing SOTA methods on both accuracy and
diversity metrics.
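A hedged sketch of the general form such a contrastive VL loss can take: a symmetric InfoNCE objective over paired vision/language embeddings. Dimensions and the temperature are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def vl_contrastive_loss(vis, lang, temperature=0.07):
    """vis, lang: (B, D) batches of paired vision/language embeddings."""
    vis, lang = F.normalize(vis, dim=-1), F.normalize(lang, dim=-1)
    logits = vis @ lang.t() / temperature        # (B, B) cosine similarities
    targets = torch.arange(vis.size(0))          # matching pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

loss = vl_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```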
|
[
{
"version": "v1",
"created": "Sun, 26 Jun 2022 20:51:05 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Aug 2022 19:38:10 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Yamazaki",
"Kashu",
""
],
[
"Truong",
"Sang",
""
],
[
"Vo",
"Khoa",
""
],
[
"Kidd",
"Michael",
""
],
[
"Rainwater",
"Chase",
""
],
[
"Luu",
"Khoa",
""
],
[
"Le",
"Ngan",
""
]
] |
new_dataset
| 0.999622 |
2206.13082
|
Haiyan Cen
|
Ruiming Du, Zhihong Ma, Pengyao Xie, Yong He, Haiyan Cen
|
PST: Plant Segmentation Transformer for 3D Point Clouds of rapeseed
plants at the podding stage
|
44 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segmentation of plant point clouds to obtain highly precise morphological
traits is essential for plant phenotyping. Although the fast development of
deep learning has boosted much research on segmentation of plant point clouds,
previous studies mainly focus on the hard voxelization-based or
down-sampling-based methods, which are limited to segmenting simple plant
organs. Segmentation of complex plant point clouds with a high spatial
resolution still remains challenging. In this study, we propose a deep
learning network, the plant segmentation transformer (PST), to achieve
semantic and instance segmentation of rapeseed plant point clouds acquired by
handheld laser scanning (HLS) at high spatial resolution, which can
characterize the tiny siliques that are the main targeted traits. PST is
composed of: (i) a
dynamic voxel feature encoder (DVFE) to aggregate the point features with the
raw spatial resolution; (ii) the dual window sets attention blocks to capture
the contextual information; and (iii) a dense feature propagation module to
obtain the final dense point feature map. The results show that PST and
PST-PointGroup (PG) achieved superior performance in semantic and instance
segmentation tasks. For the semantic segmentation, the mean IoU, mean
Precision, mean Recall, mean F1-score, and overall accuracy of PST were 93.96%,
97.29%, 96.52%, 96.88%, and 97.07%, achieving an improvement of 7.62%, 3.28%,
4.8%, 4.25%, and 3.88% compared to the second-best state-of-the-art network
PAConv. For instance segmentation, PST-PG reached 89.51%, 89.85%, 88.83% and
82.53% in mCov, mWCov, mPerc90, and mRec90, achieving an improvement of 2.93%,
2.21%, 1.99%, and 5.9% compared to the original PG. This study proves that the
deep-learning-based point cloud segmentation method has a great potential for
resolving dense plant point clouds with complex morphological traits.
|
[
{
"version": "v1",
"created": "Mon, 27 Jun 2022 06:56:48 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Aug 2022 10:07:27 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Du",
"Ruiming",
""
],
[
"Ma",
"Zhihong",
""
],
[
"Xie",
"Pengyao",
""
],
[
"He",
"Yong",
""
],
[
"Cen",
"Haiyan",
""
]
] |
new_dataset
| 0.999556 |
2207.11341
|
Zhongwei Qiu
|
Zhongwei Qiu, Qiansheng Yang, Jian Wang, Dongmei Fu
|
Dynamic Graph Reasoning for Multi-person 3D Pose Estimation
|
ACM Multimedia 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-person 3D pose estimation is a challenging task because of occlusion
and depth ambiguity, especially in the cases of crowd scenes. To solve these
problems, most existing methods explore modeling body context cues by enhancing
feature representation with graph neural networks or adding structural
constraints. However, these methods are not robust due to their single-root
formulation, which decodes 3D poses from a root node with a pre-defined graph.
In this paper, we propose GR-M3D, which models \textbf{M}ulti-person
\textbf{3D} pose estimation with dynamic \textbf{G}raph \textbf{R}easoning. The
decoding graph in GR-M3D is predicted instead of pre-defined. In particular, it
first generates several data maps and enhances them with a scale- and
depth-aware refinement module (SDAR). Then multiple root keypoints and dense decoding
paths for each person are estimated from these data maps. Based on them,
dynamic decoding graphs are built by assigning path weights to the decoding
paths, while the path weights are inferred from those enhanced data maps. And
this process is named dynamic graph reasoning (DGR). Finally, the 3D poses are
decoded according to dynamic decoding graphs for each detected person. GR-M3D
can adjust the structure of the decoding graph implicitly by adopting soft path
weights according to input data, which makes the decoding graphs adaptive to
different input persons and more capable of handling
occlusion and depth ambiguity than previous methods. We empirically show that
the proposed bottom-up approach even outperforms top-down methods and achieves
state-of-the-art results on three 3D pose datasets.
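A toy sketch of the soft-path-weight idea: candidate decoding paths for one joint are blended with data-dependent weights instead of committing to one pre-defined graph edge. All shapes and names below are assumptions for illustration.

```python
import torch

def decode_joint(root_xy, path_offsets, path_logits):
    """root_xy: (2,); path_offsets: (P, 2) candidate offsets to the joint;
    path_logits: (P,) scores inferred from the enhanced data maps."""
    w = torch.softmax(path_logits, dim=0)              # soft path weights
    return root_xy + (w[:, None] * path_offsets).sum(dim=0)

joint = decode_joint(torch.tensor([0.0, 0.0]),
                     torch.randn(4, 2), torch.randn(4))
```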
|
[
{
"version": "v1",
"created": "Fri, 22 Jul 2022 21:20:22 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Aug 2022 03:05:58 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Qiu",
"Zhongwei",
""
],
[
"Yang",
"Qiansheng",
""
],
[
"Wang",
"Jian",
""
],
[
"Fu",
"Dongmei",
""
]
] |
new_dataset
| 0.997015 |
2208.00031
|
D. Murugan
|
Petchiammal A, Briskline Kiruba S, D. Murugan
|
Paddy Leaf diseases identification on Infrared Images based on
Convolutional Neural Networks
|
Uploaded a different draft by mistake
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Agriculture is the mainstay of human society because it is an essential need
for every organism. Paddy cultivation is very significant for humans, largely
in the Asian continent, as rice is one of the staple foods.
However, plant diseases in agriculture lead to depletion in productivity. Plant
diseases are generally caused by pests, insects, and pathogens that decrease
productivity on a large scale if not controlled in time.
Eventually, one cannot see an increase in paddy yield. Accurate and timely
identification of plant diseases can help farmers mitigate losses due to pests
and diseases. Recently, deep learning techniques have been used to identify
paddy diseases and overcome these problems. This paper implements a
convolutional neural network (CNN) based model and tests it on a public dataset
consisting of 636 infrared image samples with five paddy disease classes and
one healthy class. The proposed model proficiently identified and classified
paddy diseases of five different types and achieved an accuracy of 88.28%.
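As a rough illustration only (the abstract does not publish the architecture), a compact PyTorch CNN for six classes on single-channel infrared inputs could look like the sketch below.

```python
import torch
import torch.nn as nn

# Illustrative six-class CNN (five diseases + healthy) for 1-channel inputs;
# not the paper's exact model.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 6),
)
logits = model(torch.randn(4, 1, 128, 128))   # batch of 4 infrared images
print(logits.shape)                           # torch.Size([4, 6])
```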
|
[
{
"version": "v1",
"created": "Fri, 29 Jul 2022 18:24:29 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Aug 2022 12:06:23 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"A",
"Petchiammal",
""
],
[
"S",
"Briskline Kiruba",
""
],
[
"Murugan",
"D.",
""
]
] |
new_dataset
| 0.999141 |
2208.02043
|
Jonathan Grizou
|
Emma Poliakova, Fraser Dempster, Abubakr Mahmood, Jonathan Grizou
|
SmartControllerJS: A JavaScript library to turn smartphones into
controllers for web-based interactive experiments
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce SmartControllerJS, a new JavaScript library for the fast,
cost-effective design of web applications controlled via everyday
smartphones. At its core, SmartControllerJS establishes a connection between
two webpages, one page running on a desktop browser and the other on the user's
smartphone. The smartphone webpage loads a controller interface allowing users
to control a web application running on their computer's browser. The
SmartControllerJS framework enables fast iteration loops when designing
interactive user experiments because it has minimal friction and allows for
scaling, while having no running costs. We first describe how this library is
built, how it can be used, and provide interactive examples. We then present
two games designed for public screens along with results from user studies
evaluating acceptability and ease of use. Finally, we implement a custom
controller based on user feedback and introduce connection monitoring tools. We
believe SmartControllerJS can accelerate the design of interactive experiments
for researchers in Human-Computer Interaction, and be a useful tool for
educational projects. All our code is available at
https://github.com/SmartControllerJS and links to all demos can be found in
Table I. To explore our demos, we recommend reading this work on a desktop
computer with your smartphone in hand.
|
[
{
"version": "v1",
"created": "Wed, 3 Aug 2022 13:11:42 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2022 14:05:43 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Poliakova",
"Emma",
""
],
[
"Dempster",
"Fraser",
""
],
[
"Mahmood",
"Abubakr",
""
],
[
"Grizou",
"Jonathan",
""
]
] |
new_dataset
| 0.998907 |
2208.03431
|
Zhongwei Qiu
|
Zhongwei Qiu, Qiansheng Yang, Jian Wang, Dongmei Fu
|
IVT: An End-to-End Instance-guided Video Transformer for 3D Pose
Estimation
|
ACM Multimedia 2022, oral
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video 3D human pose estimation aims to localize the 3D coordinates of human
joints from videos. Recent transformer-based approaches focus on capturing the
spatiotemporal information from sequential 2D poses, which cannot model the
contextual depth feature effectively since the visual depth features are lost
in the step of 2D pose estimation. In this paper, we simplify the paradigm into
an end-to-end framework, Instance-guided Video Transformer (IVT), which enables
learning spatiotemporal contextual depth information from visual features
effectively and predicts 3D poses directly from video frames. In particular, we
firstly formulate video frames as a series of instance-guided tokens and each
token is in charge of predicting the 3D pose of a human instance. These tokens
contain body structure information since they are extracted by the guidance of
joint offsets from the human center to the corresponding body joints. Then,
these tokens are sent into IVT for learning spatiotemporal contextual depth. In
addition, we propose a cross-scale instance-guided attention mechanism to
handle the varying scales among multiple persons. Finally, the 3D poses of
each person are decoded from instance-guided tokens by coordinate regression.
Experiments on three widely-used 3D pose estimation benchmarks show that the
proposed IVT achieves state-of-the-art performances.
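A rough sketch of what "extracting a token by the guidance of joint offsets" can mean in practice: pool visual features at the human center plus predicted per-joint offsets. Shapes and the mean-pooling rule are assumptions.

```python
import torch

def instance_token(feat, center, offsets):
    """feat: (C, H, W) frame features; center: (2,) x,y; offsets: (J, 2)."""
    C, H, W = feat.shape
    joints = (center + offsets).round().long()
    joints[:, 0].clamp_(0, W - 1)
    joints[:, 1].clamp_(0, H - 1)
    samples = feat[:, joints[:, 1], joints[:, 0]]   # (C, J) features at joints
    return samples.mean(dim=1)                      # (C,) instance-guided token

tok = instance_token(torch.randn(64, 32, 32),
                     torch.tensor([16.0, 16.0]), 4 * torch.randn(13, 2))
```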
|
[
{
"version": "v1",
"created": "Sat, 6 Aug 2022 02:36:33 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Qiu",
"Zhongwei",
""
],
[
"Yang",
"Qiansheng",
""
],
[
"Wang",
"Jian",
""
],
[
"Fu",
"Dongmei",
""
]
] |
new_dataset
| 0.984896 |
2208.03444
|
Shannan Guan
|
Shannan Guan, Haiyan Lu, Linchao Zhu, Gengfa Fang
|
AFE-CNN: 3D Skeleton-based Action Recognition with Action Feature
Enhancement
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing 3D skeleton-based action recognition approaches reach impressive
performance by encoding handcrafted action features to image format and
decoding by CNNs. However, such methods are limited in two ways: a) the
handcrafted action features struggle to handle challenging actions, and b)
they generally require complex CNN models to improve action recognition
accuracy, which usually incurs a heavy computational burden. To overcome these
limitations, we introduce a novel AFE-CNN, which is devoted to enhancing the
features of 3D skeleton-based actions to adapt to challenging actions. We
propose feature enhancement modules from key-joint, bone-vector, key-frame and
temporal perspectives; thus the AFE-CNN is more robust to variations in camera
view and body size, and significantly improves the recognition accuracy on
challenging actions. Moreover, our AFE-CNN adopts a light-weight CNN model to
decode images with enhanced action features, which ensures a much lower
computational burden than the state-of-the-art methods. We evaluate the AFE-CNN
on three benchmark skeleton-based action datasets: NTU RGB+D, NTU RGB+D 120,
and UTKinect-Action3D, and extensive experimental results demonstrate the
outstanding performance of AFE-CNN.
|
[
{
"version": "v1",
"created": "Sat, 6 Aug 2022 04:55:12 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Guan",
"Shannan",
""
],
[
"Lu",
"Haiyan",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Fang",
"Gengfa",
""
]
] |
new_dataset
| 0.965232 |
2208.03500
|
Dharanidhar Dang
|
Dharanidhar Dang, Amitash Nanda, Bill Lin and Debashis Sahoo
|
NeuCASL: From Logic Design to System Simulation of Neuromorphic Engines
|
2 pages, 2 figures, Presented at FMCAD 2021
| null | null | null |
cs.ET cs.AI cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
With Moore's law saturating and Dennard scaling hitting its wall, traditional
von Neumann systems cannot offer the GFlops/watt needed for compute-intensive
algorithms such as CNN. Recent trends in unconventional computing approaches
give us hope to design highly energy-efficient computing systems for such
algorithms. Neuromorphic computing is a promising such approach with its
brain-inspired circuitry, use of emerging technologies, and low-power nature.
Researchers use a variety of novel technologies such as memristors, silicon
photonics, FinFET, and carbon nanotubes to demonstrate a neuromorphic computer.
However, a flexible CAD tool to start from neuromorphic logic design and go up
to architectural simulation is yet to be demonstrated to support the rise of
this promising paradigm. In this project, we aim to build NeuCASL, an
open-source Python-based full-system CAD framework for neuromorphic logic
design, circuit simulation, and system performance and reliability estimation.
This is a first of its kind to the best of our knowledge.
|
[
{
"version": "v1",
"created": "Sat, 6 Aug 2022 11:33:05 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Dang",
"Dharanidhar",
""
],
[
"Nanda",
"Amitash",
""
],
[
"Lin",
"Bill",
""
],
[
"Sahoo",
"Debashis",
""
]
] |
new_dataset
| 0.969357 |
2208.03541
|
Pino Caballero-Gil
|
V Mora-Afonso, Pino Caballero-Gil, Jezabel Molina-Gil
|
Strong authentication on smart wireless devices
| null |
Second International Conference on Future Generation Communication
Technologies (FGCT 2013), pp. 137-142,
|
10.1109/FGCT.2013.6767206
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The rapid deployment of wireless technologies has given rise to the current
situation where mobile phones and other wireless devices have become essential
elements in all types of activities, including in the home. In particular,
smartphones and laptops are used for wirelessly sharing photos and documents,
playing games, browsing websites, and viewing multimedia, for example. This
work describes a proposal for both desktop and mobile applications that use
Identity-Based Cryptography (IBC) to protect communications between smart
wireless devices in the home. It combines the use of IBC for Wi-Fi and
Bluetooth communication, with the promising Near Field Communication (NFC)
technology for secure authentication. The proposed scheme involves NFC pairing
to establish, as the public key, a piece of information linked to the device, such as
a phone number or an IP address. In this way, such information can be then used
in an IBC scheme for peer-to-peer communication. This is a work in progress,
but preliminary implementations of prototypes on several mobile platforms have
already produced promising results.
|
[
{
"version": "v1",
"created": "Sat, 6 Aug 2022 16:42:39 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Mora-Afonso",
"V",
""
],
[
"Caballero-Gil",
"Pino",
""
],
[
"Molina-Gil",
"Jezabel",
""
]
] |
new_dataset
| 0.999159 |
2208.03552
|
Lingzhi Zhang
|
Lingzhi Zhang, Connelly Barnes, Kevin Wampler, Sohrab Amirghodsi, Eli
Shechtman, Zhe Lin, Jianbo Shi
|
Inpainting at Modern Camera Resolution by Guided PatchMatch with
Auto-Curation
|
34 pages, 15 figures, ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, deep models have established SOTA performance for low-resolution
image inpainting, but they lack fidelity at resolutions associated with modern
cameras such as 4K or more, and for large holes. We contribute an inpainting
benchmark dataset of photos at 4K and above representative of modern sensors.
We demonstrate a novel framework that combines deep learning and traditional
methods. We use an existing deep inpainting model LaMa to fill the hole
plausibly, establish three guide images consisting of structure, segmentation,
depth, and apply a multiply-guided PatchMatch to produce eight candidate
upsampled inpainted images. Next, we feed all candidate inpaintings through a
novel curation module that chooses a good inpainting by column summation on an
8x8 antisymmetric pairwise preference matrix. Our framework's results are
overwhelmingly preferred by users over 8 strong baselines, with improvements in
quantitative metrics of up to 7.4 over the best baseline LaMa; moreover, when
our technique is paired with 4 different SOTA inpainting backbones, it improves
each such that ours is overwhelmingly preferred by users over a strong
super-res baseline.
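The column-summation rule is simple enough to sketch directly; the sign convention of the preference matrix below is an assumption (P[i, j] > 0 taken to mean candidate j is preferred over candidate i), and the random matrix stands in for the module's learned pairwise predictions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
P = A - A.T                                # antisymmetric pairwise preferences
scores = P.sum(axis=0)                     # column summation scores each candidate
best = int(np.argmax(scores))
print(f"selected candidate: {best}")
```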
|
[
{
"version": "v1",
"created": "Sat, 6 Aug 2022 17:59:47 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Zhang",
"Lingzhi",
""
],
[
"Barnes",
"Connelly",
""
],
[
"Wampler",
"Kevin",
""
],
[
"Amirghodsi",
"Sohrab",
""
],
[
"Shechtman",
"Eli",
""
],
[
"Lin",
"Zhe",
""
],
[
"Shi",
"Jianbo",
""
]
] |
new_dataset
| 0.998659 |
2208.03560
|
Basma Hasanen
|
Basma B. Hasanen, Mohammad I. Awad, Mohamed N. Boushaki, Zhenwei Niu,
Mohammed A. Ramadan, Irfan Hussain
|
Novel Supernumerary Robotic Limb based on Variable Stiffness Actuators
for Hemiplegic Patients Assistance
|
8 pages, 11 figures, Proceedings of the 2022 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS 2022)
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Loss of upper extremity motor control and function is an unremitting symptom
in post-stroke patients. This imposes hardships on accomplishing their
daily life activities. Supernumerary robotic limbs (SRLs) were introduced as a
solution to regain the lost Degrees of Freedom (DoFs) by introducing an
independent new limb. The actuation systems in SRL can be categorized into
rigid and soft actuators. Soft actuators have proven advantageous over their
rigid counterparts through intrinsic safety, cost, and energy efficiency.
However, they suffer from low stiffness, which jeopardizes their accuracy.
Variable Stiffness Actuators (VSAs) are newly developed technologies that have
been proven to ensure accuracy and safety. In this paper, we introduce the
novel Supernumerary Robotic Limb based on Variable Stiffness Actuators. To the
best of our knowledge, the proposed proof-of-concept SRL is the first to utilize
Variable Stiffness Actuators. The developed SRL would assist post-stroke
patients in bi-manual tasks, e.g., eating with a fork and knife. The modeling,
design, and realization of the system are illustrated. The proposed SRL was
evaluated and verified for its accuracy via predefined trajectories. The safety
was verified by utilizing the momentum observer for collision detection, and
several post-collision reaction strategies were evaluated through the Soft
Tissue Injury Test. The assistance process is qualitatively verified through a
standard user-satisfaction questionnaire.
|
[
{
"version": "v1",
"created": "Sat, 6 Aug 2022 18:28:19 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Hasanen",
"Basma B.",
""
],
[
"Awad",
"Mohammad I.",
""
],
[
"Boushaki",
"Mohamed N.",
""
],
[
"Niu",
"Zhenwei",
""
],
[
"Ramadan",
"Mohammed A.",
""
],
[
"Hussain",
"Irfan",
""
]
] |
new_dataset
| 0.994266 |
2208.03582
|
Emre Arslan
|
Emre Arslan, Fatih Kilinc, Sultangali Arzykulov, Ali Tugberk Dogukan,
Abdulkadir Celik, Ertugrul Basar, Ahmad M. Eltawil
|
Reconfigurable Intelligent Surface Enabled Over-the-Air Uplink
Non-orthogonal Multiple Access
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Innovative reconfigurable intelligent surface (RIS) technologies are rising
and recognized as promising candidates to enhance 6G and beyond wireless
communication systems. RISs can manipulate electromagnetic signals, thus
offering a degree of control over the wireless channel and the potential for
many further benefits. Furthermore, active RIS designs have recently
been introduced to combat the critical double fading problem and other
impairments passive RIS designs may possess. In this paper, the potential and
flexibility of active RIS technology are exploited for uplink systems to
achieve virtual non-orthogonal multiple access (NOMA) through power disparity
over-the-air rather than controlling transmit powers at the user side.
Specifically, users with identical transmit power, path loss, and distance can
communicate with a base station sharing time and frequency resources in a NOMA
fashion with the aid of the proposed hybrid RIS system. Here, the RIS is
partitioned into active and passive parts and the distinctive partitions serve
different users aligning their phases accordingly while introducing a power
difference to the users' signals to enable NOMA. First, the end-to-end system
model is presented considering two users. Furthermore, outage probability
calculations and theoretical error probability analysis are discussed and
reinforced with computer simulation results.
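A small numerical sketch of why an over-the-air power disparity enables uplink NOMA with successive interference cancellation (SIC); the power values are illustrative, not from the paper.

```python
import numpy as np

sigma2 = 1.0
p1, p2 = 10.0, 2.0                  # effective received powers after the RIS
sinr1 = p1 / (p2 + sigma2)          # decode user 1 first, user 2 treated as noise
sinr2 = p2 / sigma2                 # decode user 2 after cancelling user 1
r1, r2 = np.log2(1 + sinr1), np.log2(1 + sinr2)
print(f"R1 = {r1:.2f} bps/Hz, R2 = {r2:.2f} bps/Hz")
```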
|
[
{
"version": "v1",
"created": "Sat, 6 Aug 2022 20:54:22 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Arslan",
"Emre",
""
],
[
"Kilinc",
"Fatih",
""
],
[
"Arzykulov",
"Sultangali",
""
],
[
"Dogukan",
"Ali Tugberk",
""
],
[
"Celik",
"Abdulkadir",
""
],
[
"Basar",
"Ertugrul",
""
],
[
"Eltawil",
"Ahmad M.",
""
]
] |
new_dataset
| 0.993818 |
2208.03617
|
Masaaki Harada
|
Masaaki Harada
|
Self-dual codes over $\mathbb{F}_5$ and $s$-extremal unimodular lattices
|
21 pages
| null | null | null |
cs.IT math.CO math.IT math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
New $s$-extremal extremal unimodular lattices in dimensions $38$, $40$, $42$
and $44$ are constructed from self-dual codes over $\mathbb{F}_5$ by
Construction A. In the process of constructing these codes, we obtain a
self-dual $[44,22,14]$ code over $\mathbb{F}_5$. In addition, the code implies
a $[43,22,13]$ code over $\mathbb{F}_5$. These codes have larger minimum
weights than the previously known $[44,22]$ codes and $[43,22]$ codes,
respectively.
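A tiny sketch of the two ingredients named above, on a toy self-dual [4, 2] code over F_5 (not one of the paper's codes): self-orthogonality of the generator matrix, and the Construction A membership test that x lies in the lattice iff x mod 5 is a codeword.

```python
import numpy as np
from itertools import product

G = np.array([[1, 0, 0, 2],
              [0, 1, 2, 0]])                 # toy self-dual [4, 2] code over F_5
assert np.all((G @ G.T) % 5 == 0)            # rows pairwise orthogonal over F_5

codewords = {tuple((np.array(m) @ G) % 5)
             for m in product(range(5), repeat=G.shape[0])}
x = np.array([6, 7, 9, 12])                  # an integer vector
print(tuple(x % 5) in codewords)             # Construction A membership -> True
```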
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 02:03:40 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Harada",
"Masaaki",
""
]
] |
new_dataset
| 0.994156 |
2208.03631
|
Xuanle Ren
|
Xuanle Ren and Xiaoxia Cui
|
An Enclave-based TEE for SE-in-SoC in RISC-V Industry
|
Invited paper of Embedded World 2020
| null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Secure Element (SE) in SoC is seeing increasing adoption in industry. Many
applications in IoT devices are bound to the SE because it provides strong
cryptographic functions and physical protection. Though SE-in-SoC provides
strong, proven isolation for software programs, it also brings more design
complexity and higher cost to PCB building. Moreover, SE-in-SoC may still
have security concerns, such as malware installation and user impersonation. In
this work, we employ TEE, a hardware-backed security technique, for protecting
SE-in-SoC on RISC-V. In particular, we construct various enclaves for isolating
applications and manipulating the SE, with the inherently-secure primitives
provided by RISC-V. Using hardware and software co-design, the solution ensures
trusted execution and secure communication among applications. The security of
SE is further protected by enforcing the SE to be controlled by a trusted
enclave and making the RISC-V core resilient to side-channel attacks.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 03:50:34 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Ren",
"Xuanle",
""
],
[
"Cui",
"Xiaoxia",
""
]
] |
new_dataset
| 0.999518 |
2208.03699
|
Elizabeth Polgreen
|
Elizabeth Polgreen, Kevin Cheang, Pranav Gaddamadugu, Adwait Godbole,
Kevin Laeufer, Shaokai Lin, Yatin A. Manerkar, Federico Mora and Sanjit A.
Seshia
|
UCLID5: Multi-Modal Formal Modeling, Verification, and Synthesis
|
12 pages plus appendix. Published at CAV 2022
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
UCLID5 is a tool for the multi-modal formal modeling, verification, and
synthesis of systems. It enables one to tackle verification problems for
heterogeneous systems such as combinations of hardware and software, or those
that have multiple, varied specifications, or systems that require hybrid modes
of modeling. A novel aspect of UCLID5 is an emphasis on the use of
syntax-guided and inductive synthesis to automate steps in modeling and
verification. This tool paper presents new developments in the UCLID5 tool
including new language features, integration with new techniques for
syntax-guided synthesis and satisfiability solving, support for hyperproperties
and combinations of axiomatic and operational modeling, demonstrations on new
problem classes, and a robust implementation.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 11:24:23 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Polgreen",
"Elizabeth",
""
],
[
"Cheang",
"Kevin",
""
],
[
"Gaddamadugu",
"Pranav",
""
],
[
"Godbole",
"Adwait",
""
],
[
"Laeufer",
"Kevin",
""
],
[
"Lin",
"Shaokai",
""
],
[
"Manerkar",
"Yatin A.",
""
],
[
"Mora",
"Federico",
""
],
[
"Seshia",
"Sanjit A.",
""
]
] |
new_dataset
| 0.999113 |
2208.03742
|
Yunpeng Bai
|
Yunpeng Bai, Chao Dong, Cairong Wang
|
PS-NeRV: Patch-wise Stylized Neural Representations for Videos
|
9 pages, 11 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study how to represent a video with implicit neural representations
(INRs). Classical INR methods generally utilize MLPs to map input coordinates
to output pixels, while some recent works have tried to directly reconstruct
the whole image with CNNs. However, we argue that both the above pixel-wise and
image-wise strategies are not favorable to video data. Instead, we propose a
patch-wise solution, PS-NeRV, which represents videos as a function of patches
and the corresponding patch coordinate. It naturally inherits the advantages of
image-wise methods, and achieves excellent reconstruction performance with fast
decoding speed. The whole method includes conventional modules, like positional
embedding, MLPs and CNNs, while also introducing AdaIN to enhance intermediate
features. These simple yet essential changes help the network easily fit
high-frequency details. Extensive experiments have demonstrated its
effectiveness in several video-related tasks, such as video compression and
video inpainting.
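A condensed sketch of the patch-wise formulation: an MLP maps a positional embedding of (frame time t, patch index p) to the pixels of that patch. Layer sizes and the embedding are assumptions; the actual PS-NeRV decoder also uses CNN blocks and AdaIN.

```python
import torch
import torch.nn as nn

def pos_embed(t, p, L=6):
    x = torch.tensor([t, p], dtype=torch.float32)
    feats = [f(2.0 ** i * torch.pi * x)
             for i in range(L) for f in (torch.sin, torch.cos)]
    return torch.cat(feats)                     # (2 * 2 * L,) = 24-dim embedding

patch = 16
mlp = nn.Sequential(nn.Linear(24, 256), nn.GELU(),
                    nn.Linear(256, 3 * patch * patch))
rgb = mlp(pos_embed(t=0.1, p=0.3)).view(3, patch, patch)   # one decoded patch
```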
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 14:45:30 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Bai",
"Yunpeng",
""
],
[
"Dong",
"Chao",
""
],
[
"Wang",
"Cairong",
""
]
] |
new_dataset
| 0.997937 |
2208.03804
|
Martin Nisser
|
Martin Nisser and Yashaswini Makaram and Lucian Covarrubias and Amadou
Bah and Faraz Faruqi and Ryo Suzuki and Stefanie Mueller
|
Mixels: Fabricating Interfaces using Programmable Magnetic Pixels
|
ACM UIST 2022: ACM Symposium on User Interface Software and
Technology
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present Mixels, programmable magnetic pixels that can be
rapidly fabricated using an electromagnetic printhead mounted on an
off-the-shelf 3-axis CNC machine. The ability to program magnetic material
pixel-wise with varying magnetic force enables Mixels to create new tangible,
tactile, and haptic interfaces. To facilitate the creation of interactive
objects with Mixels, we provide a user interface that lets users specify the
high-level magnetic behavior and that then computes the underlying magnetic
pixel assignments and fabrication instructions to program the magnetic surface.
Our custom hardware add-on, based on an electromagnetic printhead and Hall
effect sensor, clips onto a standard 3-axis CNC machine and can both write and
read magnetic pixel values from magnetic material. Our evaluation shows that
our system can reliably program and read magnetic pixels of various strengths,
that we can predict the behavior of two interacting magnetic surfaces before
programming them, that our electromagnet is strong enough to create pixels that
utilize the maximum magnetic strength of the material being programmed, and
that this material remains magnetized when removed from the magnetic plotter.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 20:32:12 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Nisser",
"Martin",
""
],
[
"Makaram",
"Yashaswini",
""
],
[
"Covarrubias",
"Lucian",
""
],
[
"Bah",
"Amadou",
""
],
[
"Faruqi",
"Faraz",
""
],
[
"Suzuki",
"Ryo",
""
],
[
"Mueller",
"Stefanie",
""
]
] |
new_dataset
| 0.957381 |
2208.03806
|
Fatemeh Ganji
|
Mohammad Hashemi, Steffi Roy, Domenic Forte, Fatemeh Ganji
|
HWGN2: Side-channel Protected Neural Networks through Secure and Private
Function Evaluation
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent work has highlighted the risks of intellectual property (IP) piracy of
deep learning (DL) models from the side-channel leakage of DL hardware
accelerators. In response, to provide side-channel leakage resiliency to DL
hardware accelerators, several approaches have been proposed, mainly borrowed
from the methodologies devised for cryptographic implementations. Therefore, as
expected, the same challenges posed by the complex design of such
countermeasures should be dealt with. This is despite the fact that fundamental
cryptographic approaches, specifically secure and private function evaluation,
could potentially improve the robustness against side-channel leakage. To
examine this and weigh the costs and benefits, we introduce hardware garbled NN
(HWGN2), a DL hardware accelerator implemented on FPGA. HWGN2 also provides NN
designers with the flexibility to protect their IP in real-time applications,
where hardware resources are heavily constrained, through a
hardware-communication cost trade-off. Concretely, we apply garbled circuits,
implemented using a MIPS architecture that achieves up to 62.5x lower logic
and 66x lower memory utilization than the state-of-the-art approaches at the
price of communication overhead. Further, the side-channel resiliency of HWGN2
is demonstrated by employing the test vector leakage assessment (TVLA) test
against both power and electromagnetic side-channels. This is in addition to
the inherent feature of HWGN2: it ensures the privacy of users' input,
including the architecture of NNs. We also demonstrate a natural extension to
the malicious security model just as a by-product of our implementation.
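For reference, the TVLA methodology mentioned above boils down to a Welch's t-test between "fixed" and "random" trace sets, flagging sample points whose |t| exceeds the customary 4.5 threshold; the sketch below uses synthetic traces.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fixed = rng.normal(size=(1000, 500))      # 1000 traces x 500 sample points
random = rng.normal(size=(1000, 500))

t, _ = stats.ttest_ind(fixed, random, axis=0, equal_var=False)  # Welch's t-test
leaky = np.flatnonzero(np.abs(t) > 4.5)
print(f"{leaky.size} potentially leaky sample points")
```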
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 20:33:34 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Hashemi",
"Mohammad",
""
],
[
"Roy",
"Steffi",
""
],
[
"Forte",
"Domenic",
""
],
[
"Ganji",
"Fatemeh",
""
]
] |
new_dataset
| 0.997254 |
2208.03822
|
Fatemeh Ganji
|
Mohammad Hashemi, Steffi Roy, Fatemeh Ganji, Domenic Forte
|
Garbled EDA: Privacy Preserving Electronic Design Automation
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The complexity of modern integrated circuits (ICs) necessitates collaboration
between multiple distrusting parties, including third-party intellectual
property (3PIP) vendors, design houses, CAD/EDA tool vendors, and foundries,
which jeopardizes confidentiality and integrity of each party's IP. IP
protection standards and the existing techniques proposed by researchers are ad
hoc and vulnerable to numerous structural, functional, and/or side-channel
attacks. Our framework, Garbled EDA, proposes an alternative direction through
formulating the problem in a secure multi-party computation setting, where the
privacy of IPs, CAD tools, and process design kits (PDKs) is maintained. As a
proof-of-concept, Garbled EDA is evaluated in the context of simulation, where
multiple IP description formats (Verilog, C, S) are supported. Our results
demonstrate a reasonable logical-resource cost and negligible memory overhead.
To further reduce the overhead, we present another efficient implementation
methodology, feasible when the resource utilization is a bottleneck, but the
communication between two parties is not restricted. Interestingly, this
implementation is private and secure even in the presence of malicious
adversaries attempting to, e.g., gain access to PDKs or in-house IPs of the CAD
tool providers.
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 21:19:45 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Hashemi",
"Mohammad",
""
],
[
"Roy",
"Steffi",
""
],
[
"Ganji",
"Fatemeh",
""
],
[
"Forte",
"Domenic",
""
]
] |
new_dataset
| 0.993386 |
2208.03826
|
Lingzhi Zhang
|
Lingzhi Zhang, Shenghao Zhou, Simon Stent, Jianbo Shi
|
Fine-Grained Egocentric Hand-Object Segmentation: Dataset, Model, and
Applications
|
25 pages, 17 figures, ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Egocentric videos offer fine-grained information for high-fidelity modeling
of human behaviors. Hands and interacting objects are one crucial aspect of
understanding a viewer's behaviors and intentions. We provide a labeled dataset
consisting of 11,243 egocentric images with per-pixel segmentation labels of
hands and objects being interacted with during a diverse array of daily
activities. Our dataset is the first to label detailed hand-object contact
boundaries. We introduce a context-aware compositional data augmentation
technique to adapt to out-of-distribution YouTube egocentric video. We show
that our robust hand-object segmentation model and dataset can serve as a
foundational tool to boost or enable several downstream vision applications,
including hand state classification, video activity recognition, 3D mesh
reconstruction of hand-object interactions, and video inpainting of hand-object
foregrounds in egocentric videos. Dataset and code are available at:
https://github.com/owenzlz/EgoHOS
|
[
{
"version": "v1",
"created": "Sun, 7 Aug 2022 21:43:40 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Zhang",
"Lingzhi",
""
],
[
"Zhou",
"Shenghao",
""
],
[
"Stent",
"Simon",
""
],
[
"Shi",
"Jianbo",
""
]
] |
new_dataset
| 0.999871 |
2208.03849
|
Kshitiz Bansal
|
Kshitiz Bansal, Keshav Rungta and Dinesh Bharadia
|
RadSegNet: A Reliable Approach to Radar Camera Fusion
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Perception systems for autonomous driving have seen significant advancements
in their performance over last few years. However, these systems struggle to
show robustness in extreme weather conditions because sensors like lidars and
cameras, which are the primary sensors in a sensor suite, see a decline in
performance under these conditions. In order to solve this problem,
camera-radar fusion systems provide a unique opportunity for all weather
reliable high quality perception. Cameras provides rich semantic information
while radars can work through occlusions and in all weather conditions. In this
work, we show that the state-of-the-art fusion methods perform poorly when
camera input is degraded, which essentially results in losing the all-weather
reliability they set out to achieve. Contrary to these approaches, we propose a
new method, RadSegNet, that uses a new design philosophy of independent
information extraction and truly achieves reliability in all conditions,
including occlusions and adverse weather. We develop and validate our proposed
system on the benchmark Astyx dataset and further verify these results on the
RADIATE dataset. When compared to state-of-the-art methods, RadSegNet achieves
a 27% improvement on Astyx and 41.46% increase on RADIATE, in average precision
score, and maintains significantly better performance in adverse weather
conditions.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 00:09:16 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Bansal",
"Kshitiz",
""
],
[
"Rungta",
"Keshav",
""
],
[
"Bharadia",
"Dinesh",
""
]
] |
new_dataset
| 0.987559 |
2208.03864
|
Jie Peng
|
Yanjun Li, Jie Peng, Haibin Kan, Lijing Zheng
|
Minimal Binary Linear Codes from Vectorial Boolean Functions
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recently, much progress has been made in constructing minimal linear codes due
to their usefulness in secret sharing schemes and secure two-party computation.
In this paper, we put forward a new method to construct minimal linear codes by
using vectorial Boolean functions. Firstly, we give a necessary and sufficient
condition for a generic class of linear codes from vectorial Boolean functions
to be minimal. Based on that, we derive some new three-weight minimal linear
codes and determine their weight distributions. Secondly, we obtain a necessary
and sufficient condition for another generic class of linear codes from
vectorial Boolean functions to be minimal and to violate the AB condition.
As a result, we get three infinite families of minimal linear codes violating
the AB condition. To the best of our knowledge, this is the first time that
minimal linear codes are constructed from vectorial Boolean functions. Compared
with other known ones, the minimal linear codes obtained in this paper
generally have higher dimensions.
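For binary codes, the minimality criterion itself is easy to state and brute-force on toy examples: a binary linear code is minimal iff no nonzero codeword's support strictly contains another nonzero codeword's support. The sketch below checks this directly (exponential in the dimension k, so only for tiny codes).

```python
import numpy as np
from itertools import product

def is_minimal_binary(G):
    k = G.shape[0]
    words = [np.array(m) @ G % 2 for m in product(range(2), repeat=k)]
    supports = [frozenset(np.flatnonzero(w)) for w in words if w.any()]
    # minimal iff no support is strictly contained in another
    return not any(s1 < s2 for s1 in supports for s2 in supports)

G = np.array([[1, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]])        # toy binary [5, 2] code
print(is_minimal_binary(G))            # True
```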
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 01:42:17 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Li",
"Yanjun",
""
],
[
"Peng",
"Jie",
""
],
[
"Kan",
"Haibin",
""
],
[
"Zheng",
"Lijing",
""
]
] |
new_dataset
| 0.99881 |
2208.03938
|
Zeyan Li
|
Zeyan Li, Nengwen Zhao, Shenglin Zhang, Yongqian Sun, Pengfei Chen,
Xidao Wen, Minghua Ma, Dan Pei
|
Constructing Large-Scale Real-World Benchmark Datasets for AIOps
| null | null | null | null |
cs.SE cs.PF
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recently, AIOps (Artificial Intelligence for IT Operations) has been well
studied in academia and industry to enable automated and effective software
service management. Plenty of efforts have been dedicated to AIOps, including
anomaly detection, root cause localization, incident management, etc. However,
most existing works are evaluated on private datasets, so their generality and
real performance cannot be guaranteed. The lack of public large-scale
real-world datasets has prevented researchers and engineers from enhancing the
development of AIOps. To tackle this dilemma, in this work, we introduce three
public real-world, large-scale datasets about AIOps, mainly aiming at KPI
anomaly detection, root cause localization on multi-dimensional data, and
failure discovery and diagnosis. More importantly, we held three competitions
in 2018/2019/2020 based on these datasets, attracting thousands of teams to
participate. In the future, we will continue to publish more datasets and hold
competitions to promote the development of AIOps further.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 07:06:54 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Li",
"Zeyan",
""
],
[
"Zhao",
"Nengwen",
""
],
[
"Zhang",
"Shenglin",
""
],
[
"Sun",
"Yongqian",
""
],
[
"Chen",
"Pengfei",
""
],
[
"Wen",
"Xidao",
""
],
[
"Ma",
"Minghua",
""
],
[
"Pei",
"Dan",
""
]
] |
new_dataset
| 0.994093 |
2208.03945
|
Shuai Zhang
|
Shuai Zhang, Liang Zhao, Shoudong Huang, Hua Wang, Qi Luo, Qi Hao
|
SLAM-TKA: Real-time Intra-operative Measurement of Tibial Resection
Plane in Conventional Total Knee Arthroplasty
|
10 pages, 4 figures, The 25th International Conference on Medical
Image Computing and Computer Assisted Intervention, MICCAI 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Total knee arthroplasty (TKA) is a common orthopaedic surgery to replace a
damaged knee joint with artificial implants. Inaccuracy in achieving the
planned implant position can result in the risk of implant component aseptic
loosening, wear, and even joint revision, and those failures most often
occur on the tibial side in the conventional jig-based TKA (CON-TKA). This
study aims to precisely evaluate the accuracy of the proximal tibial resection
plane intra-operatively in real time, such that the evaluation process changes
the CON-TKA operative procedure very little. Two X-ray radiographs
captured during the proximal tibial resection phase together with a
pre-operative patient-specific tibia 3D mesh model segmented from computed
tomography (CT) scans and a trocar pin 3D mesh model are used in the proposed
simultaneous localisation and mapping (SLAM) system to estimate the proximal
tibial resection plane. Validations using both simulation and in-vivo datasets
are performed to demonstrate the robustness and the potential clinical value of
the proposed algorithm.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 07:22:24 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Zhang",
"Shuai",
""
],
[
"Zhao",
"Liang",
""
],
[
"Huang",
"Shoudong",
""
],
[
"Wang",
"Hua",
""
],
[
"Luo",
"Qi",
""
],
[
"Hao",
"Qi",
""
]
] |
new_dataset
| 0.998062 |
2208.03963
|
Maximilian Gilles
|
Maximilian Gilles, Yuhao Chen, Tim Robin Winter, E. Zhixuan Zeng,
Alexander Wong
|
MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware
Ambidextrous Bin Picking via Physics-based Metaverse Synthesis
|
Accepted for 2022 IEEE 18th International Conference on Automation
Science and Engineering (CASE)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous bin picking poses significant challenges to vision-driven robotic
systems given the complexity of the problem, ranging from various sensor
modalities, to highly entangled object layouts, to diverse item properties and
gripper types. Existing methods often address the problem from one perspective.
Diverse items and complex bin scenes require diverse picking strategies
together with advanced reasoning. As such, to build robust and effective
machine-learning algorithms for solving this complex task requires significant
amounts of comprehensive and high-quality data. Collecting such data in the
real world would be too expensive and time-prohibitive, and therefore
intractable from a scalability perspective. To tackle this big, diverse data problem, we
take inspiration from the recent rise in the concept of metaverses, and
introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset
constructed via physics-based metaverse synthesis. The proposed dataset
contains 217k RGBD images across 82 different article types, with full
annotations for object detection, amodal perception, keypoint detection,
manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum
gripper. We also provide a real dataset consisting of over 2.3k fully annotated
high-quality RGBD images, divided into 5 levels of difficulties and an unseen
object set to evaluate different object and layout properties. Finally, we
conduct extensive experiments showing that our proposed vacuum seal model and
synthetic dataset achieve state-of-the-art performance and generalize to
real-world use cases.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 08:15:34 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Gilles",
"Maximilian",
""
],
[
"Chen",
"Yuhao",
""
],
[
"Winter",
"Tim Robin",
""
],
[
"Zeng",
"E. Zhixuan",
""
],
[
"Wong",
"Alexander",
""
]
] |
new_dataset
| 0.99985 |
2208.03974
|
Yue Hu
|
Yue Hu, Shaoheng Fang, Weidi Xie and Siheng Chen
|
Aerial Monocular 3D Object Detection
|
8 pages, 8 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drones equipped with cameras can significantly enhance human ability to
perceive the world because of their remarkable maneuverability in 3D space.
Ironically, object detection for drones has always been conducted in the 2D
image space, which fundamentally limits their ability to understand 3D scenes.
Furthermore, existing 3D object detection methods developed for autonomous
driving cannot be directly applied to drones due to the lack of deformation
modeling, which is essential for the distant aerial perspective with sensitive
distortion and small objects. To fill the gap, this work proposes a dual-view
detection system named DVDET to achieve aerial monocular object detection in
both the 2D image space and the 3D physical space. To address the severe view
deformation issue, we propose a novel trainable geo-deformable transformation
module that can properly warp information from the drone's perspective to the
BEV. Compared to the monocular methods for cars, our transformation includes a
learnable deformable network for explicitly revising the severe deviation. To
address the dataset challenge, we propose a new large-scale simulation dataset
named AM3D-Sim, generated by the co-simulation of AirSIM and CARLA, and a new
real-world aerial dataset named AM3D-Real, collected by a DJI Matrice 300 RTK.
In both datasets, high-quality annotations for 3D object detection are provided.
Extensive experiments show that i) aerial monocular 3D object detection is
feasible; ii) the model pre-trained on the simulation dataset benefits
real-world performance, and iii) DVDET also benefits monocular 3D object
detection for cars. To encourage more researchers to investigate this area, we
will release the dataset and related code at
https://sjtu-magic.github.io/dataset/AM3D/.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 08:32:56 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Hu",
"Yue",
""
],
[
"Fang",
"Shaoheng",
""
],
[
"Xie",
"Weidi",
""
],
[
"Chen",
"Siheng",
""
]
] |
new_dataset
| 0.996005 |
2208.04024
|
Joon Sung Park
|
Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel
Morris, Percy Liang, Michael S. Bernstein
|
Social Simulacra: Creating Populated Prototypes for Social Computing
Systems
|
This work will appear in the 35th Annual ACM Symposium on User
Interface Software and Technology (UIST '22)
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Social computing prototypes probe the social behaviors that may arise in an
envisioned system design. This prototyping practice is currently limited to
recruiting small groups of people. Unfortunately, many challenges do not arise
until a system is populated at a larger scale. Can a designer understand how a
social system might behave when populated, and make adjustments to the design
before the system falls prey to such challenges? We introduce social simulacra,
a prototyping technique that generates a breadth of realistic social
interactions that may emerge when a social computing system is populated.
Social simulacra take as input the designer's description of a community's
design -- goal, rules, and member personas -- and produce as output an instance
of that design with simulated behavior, including posts, replies, and
anti-social behaviors. We demonstrate that social simulacra shift the behaviors
that they generate appropriately in response to design changes, and that they
enable exploration of "what if?" scenarios where community members or
moderators intervene. To power social simulacra, we contribute techniques for
prompting a large language model to generate thousands of distinct community
members and their social interactions with each other; these techniques are
enabled by the observation that large language models' training data already
includes a wide variety of positive and negative behavior on social media
platforms. In evaluations, we show that participants are often unable to
distinguish social simulacra from actual community behavior and that social
computing designers successfully refine their social computing designs when
using social simulacra.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 10:13:50 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Park",
"Joon Sung",
""
],
[
"Popowski",
"Lindsay",
""
],
[
"Cai",
"Carrie J.",
""
],
[
"Morris",
"Meredith Ringel",
""
],
[
"Liang",
"Percy",
""
],
[
"Bernstein",
"Michael S.",
""
]
] |
new_dataset
| 0.99657 |
2208.04043
|
Gwangtak Bae
|
Gwangtak Bae, Byungjun Kim, Seongyong Ahn, Jihong Min, Inwook Shim
|
SLiDE: Self-supervised LiDAR De-snowing through Reconstruction
Difficulty
|
ECCV 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR is widely used to capture accurate 3D outdoor scene structures.
However, LiDAR produces many undesirable noise points in snowy weather, which
hamper the analysis of meaningful 3D scene structures. Semantic segmentation with
snow labels would be a straightforward solution for removing them, but it
requires laborious point-wise annotation. To address this problem, we propose a
novel self-supervised learning framework for snow points removal in LiDAR point
clouds. Our method exploits the structural characteristic of the noise points:
low spatial correlation with their neighbors. Our method consists of two deep
neural networks: Point Reconstruction Network (PR-Net) reconstructs each point
from its neighbors; Reconstruction Difficulty Network (RD-Net) predicts
point-wise difficulty of the reconstruction by PR-Net, which we call
reconstruction difficulty. With simple post-processing, our method effectively
detects snow points without any label. Our method achieves the state-of-the-art
performance among label-free approaches and is comparable to the
fully-supervised method. Moreover, we demonstrate that our method can be
exploited as a pretext task to improve the label-efficiency of supervised
training for de-snowing.
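A compact sketch of the training signal described above: PR-Net reconstructs each point from its k nearest neighbours, and the per-point reconstruction error ("difficulty") is what RD-Net learns to predict, with snow points expected to score high. PR-Net is replaced by a neighbour-averaging stand-in here.

```python
import torch

def reconstruction_difficulty(points, pr_net, k=8):
    """points: (N, 3). Returns per-point reconstruction error of pr_net."""
    d = torch.cdist(points, points)                     # (N, N) pairwise distances
    knn = d.topk(k + 1, largest=False).indices[:, 1:]   # drop the point itself
    pred = pr_net(points[knn])                          # reconstruct from neighbours
    return (pred - points).norm(dim=-1)                 # high error -> snow candidate

pr_net = lambda nbrs: nbrs.mean(dim=1)                  # stand-in for PR-Net
difficulty = reconstruction_difficulty(torch.randn(1024, 3), pr_net)
```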
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 10:43:47 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Bae",
"Gwangtak",
""
],
[
"Kim",
"Byungjun",
""
],
[
"Ahn",
"Seongyong",
""
],
[
"Min",
"Jihong",
""
],
[
"Shim",
"Inwook",
""
]
] |
new_dataset
| 0.951774 |
2208.04050
|
Davide Villa
|
Davide Villa, Chih-Kuang Lin, Adam Kuenzi, Michael Lang
|
Bluetooth Low Energy mesh network for power-limited, robust and reliable
IoT services
|
6 pages, 5 figures, 4 tables, 2 algorithms
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bluetooth Low Energy (BLE) is an emerging wireless technology created for
short-range control and monitoring applications that is becoming increasingly
widespread among Internet of Things (IoT) services because of its low cost
and low energy consumption. In this paper, we propose a novel neighbor
discovery scheme and failure recovery techniques for multi-path and single-path
low-power and reliable BLE networks. By exploiting energy-efficient access
control and fast and robust routing ideas with adaptive failure recovery, the
proposed methods outperform the well-known flooding approach used by the BLE
Mesh standard. We show varying improvements in packet latency and power
consumption in event-driven simulations as the network topology and traffic
change. The proposed failure recovery approaches are optimized and
demonstrated during the simulations, showing how the overall failure recovery
latency and node power consumption vary across different use cases.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 10:48:01 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Villa",
"Davide",
""
],
[
"Lin",
"Chih-Kuang",
""
],
[
"Kuenzi",
"Adam",
""
],
[
"Lang",
"Michael",
""
]
] |
new_dataset
| 0.993669 |
2208.04079
|
Yili Jin
|
Yili Jin, Junhua Liu, Fangxin Wang, Shuguang Cui
|
Where Are You Looking?: A Large-Scale Dataset of Head and Gaze Behavior
for 360-Degree Videos and a Pilot Study
|
Accepted by ACM MM 2022. Dataset is available at
https://cuhksz-inml.github.io/head_gaze_dataset
| null |
10.1145/3503161.3548200
| null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, 360{\deg} videos have experienced booming development.
Compared to traditional videos, 360{\deg} videos feature uncertain
user behaviors, bringing opportunities as well as challenges. Datasets are
necessary for researchers and developers to explore new ideas and conduct
reproducible analyses for fair comparisons among different solutions. However,
existing related datasets mostly focused on users' field of view (FoV),
ignoring the more important eye gaze information, not to mention the integrated
extraction and analysis of both FoV and eye gaze. Besides, users' behavior
patterns are highly related to videos, yet most existing datasets only
contained videos with subjective and qualitative classification from video
genres, which lack quantitative analysis and fail to characterize the intrinsic
properties of a video scene. To this end, we first propose a quantitative
taxonomy for 360{\deg} videos that contains three objective technical metrics.
Based on this taxonomy, we collect a dataset containing users' head and gaze
behaviors simultaneously, which outperforms existing datasets with rich
dimensions, large scale, strong diversity, and high frequency. Then we conduct
a pilot study on users' behaviors and obtain some interesting findings, such as
that a user's head direction tends to follow his/her gaze direction after the
most probable time interval. An application case of tile-based 360{\deg} video
streaming based on our dataset is later conducted, demonstrating a great performance
improvement of existing works by leveraging our provided gaze information. Our
dataset is available at https://cuhksz-inml.github.io/head_gaze_dataset/
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 12:00:27 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Jin",
"Yili",
""
],
[
"Liu",
"Junhua",
""
],
[
"Wang",
"Fangxin",
""
],
[
"Cui",
"Shuguang",
""
]
] |
new_dataset
| 0.994555 |
2208.04135
|
Rapha\"el Milli\`ere
|
Rapha\"el Milli\`ere
|
Adversarial Attacks on Image Generation With Made-Up Words
| null | null | null | null |
cs.CV cs.CL cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Text-guided image generation models can be prompted to generate images using
nonce words adversarially designed to robustly evoke specific visual concepts.
Two approaches for such generation are introduced: macaronic prompting, which
involves designing cryptic hybrid words by concatenating subword units from
different languages; and evocative prompting, which involves designing nonce
words whose broad morphological features are similar enough to those of existing
words to trigger robust visual associations. The two methods can also be
combined to generate images associated with more specific visual concepts. The
implications of these techniques for the circumvention of existing approaches
to content moderation, and particularly the generation of offensive or harmful
images, are discussed.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 15:10:23 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Millière",
"Raphaël",
""
]
] |
new_dataset
| 0.962552 |
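To make the "macaronic prompting" idea above concrete, the sketch below splices subword units from translations of the same concept in different languages into one hybrid nonce word. The word list and the splitting rule are illustrative assumptions, not the paper's procedure.

```python
# Hypothetical macaronic word construction: lead of one translation plus
# tail of another, yielding a cryptic hybrid that may still evoke the concept.
fragments = {
    "bird": {"en": "bird", "de": "vogel", "fr": "oiseau", "es": "pajaro"},
}

def macaronic(concept: str, langs=("de", "fr")) -> str:
    """Concatenate the leading half of one translation with the trailing
    half of another to form a hybrid nonce word."""
    first, second = (fragments[concept][lang] for lang in langs)
    return first[: len(first) // 2] + second[len(second) // 2 :]

print(macaronic("bird"))  # e.g. "voseau"
```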
2208.04144
|
Arash Shaban-Nejad
|
Whitney S Brakefield, Nariman Ammar, Arash Shaban-Nejad
|
An Urban Population Health Observatory for Disease Causal Pathway
Analysis and Decision Support: Underlying Explainable Artificial Intelligence
Model
|
15 Pages, 5 figures, and 3 tables
|
JMIR Form Res. 2022 Jul 20;6(7):e36055. PMID: 35857363
|
10.2196/36055
| null |
cs.AI cs.CY cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This study sought to (1) expand our existing Urban Population Health
Observatory (UPHO) system by incorporating a semantics layer; (2) cohesively
employ machine learning and semantic/logical inference to provide measurable
evidence and detect pathways leading to undesirable health outcomes; (3)
provide clinical use case scenarios and design case studies to identify
socioenvironmental determinants of health associated with the prevalence of
obesity; and (4) design a dashboard that demonstrates the use of UPHO in the
context of obesity surveillance using the provided scenarios. The system design
includes a knowledge graph generation component that provides contextual
knowledge from relevant domains of interest. This system leverages semantics
using concepts, properties, and axioms from existing ontologies. In addition,
we used the publicly available US Centers for Disease Control and Prevention
500 Cities data set to perform multivariate analysis. A cohesive approach that
employs machine learning and semantic/logical inference reveals pathways
leading to diseases. In this study, we present 2 clinical case scenarios and a
proof-of-concept prototype design of a dashboard that provides warnings,
recommendations, and explanations and demonstrates the use of UPHO in the
context of obesity surveillance, treatment, and prevention. While exploring the
case scenarios using a support vector regression machine learning model, we
found that poverty, lack of physical activity, education, and unemployment were
the most important predictive variables that contribute to obesity in Memphis,
TN. The application of UPHO could help reduce health disparities and improve
urban population health. The expanded UPHO feature incorporates an additional
level of interpretable knowledge to enhance physicians', researchers', and
health officials' informed decision-making at both the patient and community levels.
|
[
{
"version": "v1",
"created": "Tue, 26 Jul 2022 15:59:22 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Brakefield",
"Whitney S",
""
],
[
"Ammar",
"Nariman",
""
],
[
"Shaban-Nejad",
"Arash",
""
]
] |
new_dataset
| 0.988663 |
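The multivariate step described above, a support vector regression over socioenvironmental indicators, can be sketched as follows. The column names mirror the reported predictors, but the data here is synthetic rather than the CDC 500 Cities set.

```python
# Fit an SVR of obesity prevalence on socioenvironmental indicators, then rank
# predictors by permutation importance.
import numpy as np
import pandas as pd
from sklearn.svm import SVR
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
cols = ["poverty", "physical_inactivity", "low_education", "unemployment"]
X = pd.DataFrame(rng.uniform(0, 1, size=(300, 4)), columns=cols)
y = 0.4 * X["poverty"] + 0.3 * X["physical_inactivity"] + rng.normal(0, 0.05, 300)

model = SVR(kernel="rbf").fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(cols, imp.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")
```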
2208.04166
|
Ahmed El Gazzar
|
Ahmed El-Gazzar, Rajat Mani Thomas, Guido Van Wingen
|
fMRI-S4: learning short- and long-range dynamic fMRI dependencies using
1D Convolutions and State Space Models
|
11 pages, 3 Figures, Accepted at MLCN 2022
| null | null | null |
cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Single-subject mapping of resting-state brain functional activity to
non-imaging phenotypes is a major goal of neuroimaging. The large majority of
learning approaches applied today rely either on static representations or on
short-term temporal correlations. This is at odds with the nature of brain
activity, which is dynamic and exhibits both short- and long-range dependencies.
Further, new sophisticated deep learning approaches have been developed and
validated on single tasks/datasets. Applying these models to a different
target typically requires exhaustive hyperparameter search, model engineering,
and trial and error to obtain competitive results with simpler linear models.
This in turn limits their adoption and hinders fair benchmarking in a rapidly
developing area of research. To this end, we propose fMRI-S4, a versatile deep
learning model for the classification of phenotypes and psychiatric disorders
from the timecourses of resting-state functional magnetic resonance imaging
scans. fMRI-S4 captures short- and long-range temporal dependencies in the
signal using 1D convolutions and the recently introduced state-space model S4.
The proposed architecture is lightweight, sample-efficient, and robust across
tasks/datasets. We validate fMRI-S4 on the tasks of diagnosing major depressive
disorder (MDD), autism spectrum disorder (ASD), and sex classification on three
multi-site rs-fMRI datasets. We show that fMRI-S4 can outperform existing
methods on all three tasks and can be trained as a plug&play model without
special hyperparameter tuning for each setting.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 14:07:25 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"El-Gazzar",
"Ahmed",
""
],
[
"Thomas",
"Rajat Mani",
""
],
[
"Van Wingen",
"Guido",
""
]
] |
new_dataset
| 0.989889 |
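A very rough sketch of the architectural idea above: a 1D convolution for short-range structure followed by a simple diagonal linear recurrence standing in for the far more sophisticated S4 state-space layer. This is an assumption-laden illustration, not the authors' code.

```python
# Short-range structure via Conv1d, long-range structure via a toy diagonal
# linear state-space recurrence (a stand-in for S4).
import torch
import torch.nn as nn

class TinySSMBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=5, padding=2)
        self.log_a = nn.Parameter(torch.zeros(channels))  # per-channel decay
        self.b = nn.Parameter(torch.ones(channels))

    def forward(self, x):                      # x: (batch, channels, time)
        x = torch.relu(self.conv(x))           # short-range dependencies
        a = torch.sigmoid(self.log_a)          # decay constrained to (0, 1)
        h = torch.zeros_like(x[..., 0])
        outs = []
        for t in range(x.size(-1)):            # long-range recurrence
            h = a * h + self.b * x[..., t]
            outs.append(h)
        return torch.stack(outs, dim=-1)

x = torch.randn(2, 16, 120)                    # e.g. 16 ROIs, 120 timepoints
print(TinySSMBlock(16)(x).shape)               # torch.Size([2, 16, 120])
```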
2208.04173
|
Jiawei Li
|
Jiawei Li, Bolin Jiang, Yan Liu, Chengxiao Luo, Naiqi Li, Bin Chen
|
SsaA: A Self-supervised auto-Annotation System for Online Visual
Inspection and Manufacturing Automation
|
4 pages, 3 figures, conference
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent trends in cloud computing technology have effectively boosted the
application of visual inspection. However, most of the available systems work
in a human-in-the-loop manner and cannot provide long-term support for online
applications. To make a step forward, this paper outlines an automatic
annotation system called SsaA, which works in a self-supervised learning manner,
for continuously performing online visual inspection in manufacturing
automation scenarios. Benefiting from self-supervised learning, SsaA can
effectively establish a visual inspection application for the whole life cycle
of manufacturing. In the early stage, with only anomaly-free data, unsupervised
algorithms are adopted to process the pretext task and generate coarse labels
for the subsequent data. Then supervised algorithms are trained for the
downstream task. With user-friendly web-based interfaces, SsaA makes it very
convenient to integrate and deploy both unsupervised and supervised
algorithms. So far, the SsaA system has been adopted in several real-life
industrial applications.
|
[
{
"version": "v1",
"created": "Mon, 8 Aug 2022 14:26:35 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Li",
"Jiawei",
""
],
[
"Jiang",
"Bolin",
""
],
[
"Liu",
"Yan",
""
],
[
"Luo",
"Chengxiao",
""
],
[
"Li",
"Naiqi",
""
],
[
"Chen",
"Bin",
""
]
] |
new_dataset
| 0.987309 |
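The two-stage loop the SsaA abstract outlines (unsupervised pretext labeling, then supervised downstream training) can be sketched as below. The detector, classifier, and data are placeholder choices, not SsaA's actual components.

```python
# Stage 1: an unsupervised detector fit on anomaly-free data pseudo-labels the
# incoming stream. Stage 2: a supervised model trains on those pseudo-labels.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 8))        # anomaly-free bootstrap data
stream = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(4, 1, (20, 8))])

detector = IsolationForest(random_state=0).fit(normal)       # pretext task
pseudo_labels = (detector.predict(stream) == -1).astype(int)  # 1 = anomaly

clf = RandomForestClassifier(random_state=0).fit(stream, pseudo_labels)
print(f"pseudo-labelled anomalies: {pseudo_labels.sum()} / {len(stream)}")
```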
2208.04190
|
Mehdi Khoshboresh-Masouleh
|
Mehdi Khoshboresh-Masouleh and Reza Shah-Hosseini
|
SA-NET.v2: Real-time vehicle detection from oblique UAV images with use
of uncertainty estimation in deep meta-learning
| null |
The International Archives of the Photogrammetry, Remote Sensing
and Spatial Information Sciences, 2022
|
10.5194/isprs-archives-XLVI-M-2-2022-141-2022
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, unmanned aerial vehicle (UAV) imaging has become a suitable
solution for real-time monitoring of different vehicles at the urban scale.
Real-time vehicle detection with uncertainty estimation in deep meta-learning
for portable platforms (e.g., UAVs) can potentially improve video understanding
in real-world applications with a small training dataset, whereas many vehicle
monitoring approaches focus on single-time detection with a big training
dataset. The purpose of real-time vehicle detection from oblique UAV images is
to locate vehicles in time series UAV images using semantic segmentation.
Real-time vehicle detection is more difficult due to the variety of vehicle
depths and scales in oblique-view UAV images. Motivated by these facts, in
this manuscript we consider the problem of real-time vehicle detection for
oblique UAV images based on a small training dataset and deep meta-learning.
The proposed architecture, called SA-Net.v2, is a method developed from SA-CNN
for real-time vehicle detection by reformulating the squeeze-and-attention
mechanism. SA-Net.v2 is composed of two components: a squeeze-and-attention
function that extracts high-level features from a small training dataset, and
a gated CNN. For the real-time vehicle detection scenario, we test our model on
the UAVid dataset, a time series oblique UAV image dataset consisting of 30
video sequences. We examine the proposed method's applicability for stand-alone
real-time vehicle detection in urban environments using time series UAV images.
The experiments show that SA-Net.v2 achieves promising performance on time
series oblique UAV images.
|
[
{
"version": "v1",
"created": "Thu, 4 Aug 2022 09:08:47 GMT"
}
] | 2022-08-09T00:00:00 |
[
[
"Khoshboresh-Masouleh",
"Mehdi",
""
],
[
"Shah-Hosseini",
"Reza",
""
]
] |
new_dataset
| 0.998796 |
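A loose sketch of a squeeze-and-attention style block of the kind the abstract above reformulates: a main convolution modulated by an attention branch computed from a pooled (squeezed) copy of the input. The layer sizes and exact gating are illustrative assumptions, not the published SA-Net.v2 architecture.

```python
# Squeeze-and-attention style block: the attention branch operates on a
# downsampled copy of the input and is upsampled to gate the main features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SqueezeAttention(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.main = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.attn = nn.Sequential(
            nn.AvgPool2d(2),                       # squeeze spatial resolution
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.main(x)
        a = F.interpolate(self.attn(x), size=y.shape[-2:], mode="nearest")
        return y * a + a                           # attended feature map

x = torch.randn(1, 3, 64, 64)                      # one oblique UAV image tile
print(SqueezeAttention(3, 8)(x).shape)             # torch.Size([1, 8, 64, 64])
```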