Column schema: all columns are strings unless noted; `versions` and `authors_parsed` are lists, `update_date` is a timestamp, `probability` is a float in [0.95, 1], and `license` (9 values) and `prediction` (1 value) are class-valued. Columns marked ⌀ in the source (submitter, comments, journal-ref, doi, report-no) may be null.

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2101.06744
|
Ohr Kadrawi
|
Ron Yosef, Matan Mizrachi and Ohr Kadrawi
|
On Unimodality of Independence Polynomials of Trees
|
20 pages, 12 figures
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An independent set in a graph is a set of pairwise non-adjacent vertices. The
independence number $\alpha(G)$ is the size of a maximum independent set in
the graph $G$. The independence polynomial of a graph is the generating
function for the sequence of numbers of independent sets of each size. In other
words, the $k$-th coefficient of the independence polynomial equals the number
of independent sets comprised of $k$ vertices. In particular, the degree of the
independence polynomial of the graph $G$ is equal to $\alpha(G)$. In 1987,
Alavi, Malde, Schwenk, and Erdős conjectured that the independence
polynomial of a tree is unimodal. In what follows, we provide support for this
assertion by considering trees with up to $20$ vertices. Moreover, we show that
the corresponding independence polynomials are log-concave and, consequently,
unimodal. The algorithm computing the independence polynomial of a given tree
makes use of a database of non-isomorphic unlabeled trees to prevent repeated
computations.
|
[
{
"version": "v1",
"created": "Sun, 17 Jan 2021 18:34:17 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Jan 2021 20:13:08 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Feb 2021 11:37:20 GMT"
},
{
"version": "v4",
"created": "Wed, 19 May 2021 09:44:07 GMT"
},
{
"version": "v5",
"created": "Mon, 7 Mar 2022 06:49:17 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Yosef",
"Ron",
""
],
[
"Mizrachi",
"Matan",
""
],
[
"Kadrawi",
"Ohr",
""
]
] |
new_dataset
| 0.999444 |
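The record above (arXiv:2101.06744) concerns computing independence polynomials of trees and checking log-concavity. Below is a minimal sketch of that computation, assuming a plain adjacency-dict tree representation; it illustrates the textbook dynamic program, not the authors' database-backed algorithm:

```python
# Independence polynomial of a tree by dynamic programming over a rooted tree,
# plus a log-concavity check (a_k^2 >= a_{k-1} * a_{k+1}), which implies
# unimodality for positive coefficient sequences.

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(max(len(p), len(q)))]

def independence_polynomial(tree, root=0):
    """tree: adjacency dict {vertex: [neighbors]}; returns coefficient list."""
    def dfs(v, parent):
        inc, exc = [0, 1], [1]  # v included contributes a factor x; excluded, 1
        for c in tree[v]:
            if c != parent:
                c_inc, c_exc = dfs(c, v)
                inc = poly_mul(inc, c_exc)  # children of an included vertex stay out
                exc = poly_mul(exc, poly_add(c_inc, c_exc))
        return inc, exc
    return poly_add(*dfs(root, None))

def is_log_concave(a):
    return all(a[k] ** 2 >= a[k - 1] * a[k + 1] for k in range(1, len(a) - 1))

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(independence_polynomial(path4))                  # [1, 4, 3]: 1 + 4x + 3x^2
print(is_log_concave(independence_polynomial(path4)))  # True
```

The degree of the returned polynomial equals the independence number $\alpha(G)$, as the abstract notes, and log-concavity of a positive sequence implies unimodality.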
2104.04683
|
Md Imran Hossen
|
Md Imran Hossen and Xiali Hei
|
A Low-Cost Attack against the hCaptcha System
|
To appear in the 15th IEEE Workshop on Offensive Technologies (WOOT
2021)
| null |
10.1109/SPW53761.2021.00061
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
CAPTCHAs are a defense mechanism to prevent malicious bot programs from
abusing websites on the Internet. hCaptcha is a relatively new but emerging
image CAPTCHA service. This paper presents an automated system that can break
hCaptcha challenges with a high success rate. We evaluate our system against
270 hCaptcha challenges from live websites and demonstrate that it can solve
them with 95.93% accuracy while taking only 18.76 seconds on average to crack a
challenge. We run our attack from a Docker instance with only 2 GB of memory (RAM),
3 CPUs, and no GPU devices, demonstrating that it requires minimal resources to
launch a successful large-scale attack against the hCaptcha system.
|
[
{
"version": "v1",
"created": "Sat, 10 Apr 2021 05:15:15 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Hossen",
"Md Imran",
""
],
[
"Hei",
"Xiali",
""
]
] |
new_dataset
| 0.985832 |
2111.03260
|
Syed Muhammad Arsalan Bashir Mr.
|
Yi Wang, Syed Muhammad Arsalan Bashir, Mahrukh Khan, Qudrat Ullah, Rui
Wang, Yilin Song, Zhe Guo, Yilong Niu
|
Remote Sensing Image Super-resolution and Object Detection: Benchmark
and State of the Art
|
39 pages, 15 figures, 5 tables. Submitted to Elsevier journal for
review
|
Expert Systems with Applications, 2022
|
10.1016/j.eswa.2022.116793
| null |
cs.CV cs.AI cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
For the past two decades, there have been significant efforts to develop
methods for object detection in Remote Sensing (RS) images. In most cases, the
datasets for small object detection in remote sensing images are inadequate.
Many researchers have used scene classification datasets for object detection, which
has its limitations; for example, large-sized objects outnumber small
objects in the object categories. Thus, these datasets lack diversity, which further affects
the detection performance of small object detectors in RS images. This paper
reviews current datasets and object detection methods (deep learning-based) for
remote sensing images. We also propose a large-scale, publicly available
benchmark Remote Sensing Super-resolution Object Detection (RSSOD) dataset. The
RSSOD dataset consists of 1,759 hand-annotated images with 22,091 instances of
very high resolution (VHR) images with a spatial resolution of ~0.05 m. There
are five classes with varying frequencies of labels per class. The image
patches are extracted from satellite images, including real image distortions
such as tangential scale distortion and skew distortion. We also propose a
novel Multi-class Cyclic super-resolution Generative adversarial network with
Residual feature aggregation (MCGR) and auxiliary YOLOv5 detector to benchmark
image super-resolution-based object detection and compare with the existing
state-of-the-art methods based on image super-resolution (SR). The proposed
MCGR achieved state-of-the-art performance for image SR with an improvement of
1.2 dB PSNR compared to the current state-of-the-art NLSN method. MCGR achieved the
best object detection mAPs of 0.758, 0.881, 0.841, and 0.983 for the
five-class, four-class, two-class, and single-class settings, respectively, surpassing
the performance of the state-of-the-art object detectors YOLOv5, EfficientDet,
Faster RCNN, SSD, and RetinaNet.
|
[
{
"version": "v1",
"created": "Fri, 5 Nov 2021 04:56:34 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Wang",
"Yi",
""
],
[
"Bashir",
"Syed Muhammad Arsalan",
""
],
[
"Khan",
"Mahrukh",
""
],
[
"Ullah",
"Qudrat",
""
],
[
"Wang",
"Rui",
""
],
[
"Song",
"Yilin",
""
],
[
"Guo",
"Zhe",
""
],
[
"Niu",
"Yilong",
""
]
] |
new_dataset
| 0.999853 |
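For context on the 1.2 dB figure quoted above, PSNR compares reconstruction quality on a logarithmic scale; a minimal sketch, assuming images stored as float arrays in [0, 1]:

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = np.mean((np.asarray(reference) - np.asarray(reconstruction)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A 1.2 dB PSNR gain corresponds to roughly a 24% reduction in mean squared error.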
2112.11641
|
Min Jin Chong
|
Min Jin Chong, David Forsyth
|
JoJoGAN: One Shot Face Stylization
|
code at https://github.com/mchong6/JoJoGAN
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A style mapper applies some fixed style to its input images (so, for example,
taking faces to cartoons). This paper describes a simple procedure -- JoJoGAN
-- to learn a style mapper from a single example of the style. JoJoGAN uses a
GAN inversion procedure and StyleGAN's style-mixing property to produce a
substantial paired dataset from a single example style. The paired dataset is
then used to fine-tune a StyleGAN. An image can then be style mapped by
GAN-inversion followed by the fine-tuned StyleGAN. JoJoGAN needs just one
reference and as little as 30 seconds of training time. JoJoGAN can use extreme
style references (say, animal faces) successfully. Furthermore, one can control
what aspects of the style are used and how much of the style is applied.
Qualitative and quantitative evaluations show that JoJoGAN produces high-quality,
high-resolution images that vastly outperform the current state-of-the-art.
|
[
{
"version": "v1",
"created": "Wed, 22 Dec 2021 03:13:16 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Feb 2022 20:13:05 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Feb 2022 19:13:35 GMT"
},
{
"version": "v4",
"created": "Sun, 6 Mar 2022 21:25:50 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Chong",
"Min Jin",
""
],
[
"Forsyth",
"David",
""
]
] |
new_dataset
| 0.999563 |
2201.11984
|
Chen Li
|
Chen Li, Kevin Lewis
|
The need for and feasibility of alternative ground robots to traverse
sandy and rocky extraterrestrial terrain
| null |
Advanced Intelligent Systems (2022)
|
10.1002/aisy.202100195
| null |
cs.RO cs.SY eess.SY physics.bio-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Robotic spacecraft have helped expand our reach for many planetary
exploration missions. Most ground mobile planetary exploration robots use
wheeled or modified wheeled platforms. Although extraordinarily successful at
completing intended mission goals, because of the limitations of wheeled
locomotion, they have been largely limited to benign, solid terrain and avoided
extreme terrain with loose soil/sand and large rocks. Unfortunately, such
challenging terrain is often scientifically interesting for planetary geology.
Although many animals traverse such terrain with ease, robots have not matched
their performance and robustness. This is in major part due to a lack of
fundamental understanding of how effective locomotion can be generated from
controlled interaction with complex terrain, at the level achieved by flight
aerodynamics and underwater vehicle hydrodynamics. Early fundamental
understanding of legged and limbless locomotor-ground interaction has already
enabled stable and efficient bio-inspired robot locomotion on relatively flat
ground with small obstacles. Recent progress in the new field of terradynamics
of locomotor-terrain interaction begins to reveal the principles of
bio-inspired locomotion on loose soil/sand and over large obstacles.
Multi-legged and limbless platforms using terradynamics insights hold the
promise for serving as robust alternative platforms for traversing extreme
extraterrestrial terrain and expanding our reach in planetary exploration.
|
[
{
"version": "v1",
"created": "Fri, 28 Jan 2022 08:33:02 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Mar 2022 06:28:37 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Li",
"Chen",
""
],
[
"Lewis",
"Kevin",
""
]
] |
new_dataset
| 0.997365 |
2202.11868
|
Ruiqi Ma
|
Ruiqi Ma, Chi Chen, Bisheng Yang, Deren Li, Haiping Wang, Yangzi Cong,
Zongtian Hu
|
CG-SSD: Corner Guided Single Stage 3D Object Detection from LiDAR Point
Cloud
|
27 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
At present, anchor-based and anchor-free models that use LiDAR point
clouds for 3D object detection rely on a center assigner strategy to infer the 3D
bounding boxes. However, in a real-world scene, the LiDAR can only acquire a
limited portion of an object's surface points, and the center point of the object
may not be captured at all. Obtaining the object by aggregating the incomplete surface point
clouds incurs a loss of accuracy in direction and dimension estimation. To
address this problem, we propose a corner-guided anchor-free single-stage 3D
object detection model (CG-SSD). First, a 3D sparse convolution backbone
network composed of residual layers and sub-manifold sparse convolutional
layers is used to construct bird's eye view (BEV) features, which are mined
further by a lite U-shaped network. Second, a novel corner-guided
auxiliary module (CGAM) is proposed to incorporate corner supervision signals
into the neural network. CGAM is explicitly designed and trained to detect
partially visible and invisible corners to obtain a more accurate object
feature representation, especially for small or partially occluded objects.
Finally, the deep features from both the backbone network and the CGAM module are
concatenated and fed into the head module to predict the classification and 3D
bounding boxes of the objects in the scene. The experiments demonstrate that CG-SSD
achieves state-of-the-art performance on the ONCE benchmark for supervised 3D
object detection using single-frame point cloud data, with 62.77% mAP.
Additionally, experiments on ONCE and the Waymo Open Dataset show that CGAM can
be extended as a plug-in to most anchor-based models that use BEV features to detect
objects, bringing a +1.17% to +14.27% AP improvement.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 02:30:15 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Mar 2022 02:40:38 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Ma",
"Ruiqi",
""
],
[
"Chen",
"Chi",
""
],
[
"Yang",
"Bisheng",
""
],
[
"Li",
"Deren",
""
],
[
"Wang",
"Haiping",
""
],
[
"Cong",
"Yangzi",
""
],
[
"Hu",
"Zongtian",
""
]
] |
new_dataset
| 0.997824 |
2203.02072
|
Siddharth Reddy
|
Jensen Gao, Siddharth Reddy, Glen Berseth, Nicholas Hardy, Nikhilesh
Natraj, Karunesh Ganguly, Anca D. Dragan, Sergey Levine
|
X2T: Training an X-to-Text Typing Interface with Online Learning from
User Feedback
|
Accepted to International Conference on Learning Representations
(ICLR) 2021
| null | null | null |
cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We aim to help users communicate their intent to machines using flexible,
adaptive interfaces that translate arbitrary user input into desired actions.
In this work, we focus on assistive typing applications in which a user cannot
operate a keyboard, but can instead supply other inputs, such as webcam images
that capture eye gaze or neural activity measured by a brain implant. Standard
methods train a model on a fixed dataset of user inputs, then deploy a static
interface that does not learn from its mistakes, in part because extracting an
error signal from user behavior can be challenging. We investigate a simple
idea that would enable such interfaces to improve over time, with minimal
additional effort from the user: online learning from user feedback on the
accuracy of the interface's actions. In the typing domain, we leverage
backspaces as feedback that the interface did not perform the desired action.
We propose an algorithm called x-to-text (X2T) that trains a predictive model
of this feedback signal, and uses this model to fine-tune any existing, default
interface for translating user input into actions that select words or
characters. We evaluate X2T through a small-scale online user study with 12
participants who type sentences by gazing at their desired words, a large-scale
observational study on handwriting samples from 60 users, and a pilot study
with one participant using an electrocorticography-based brain-computer
interface. The results show that X2T learns to outperform a non-adaptive
default interface, stimulates user co-adaptation to the interface, personalizes
the interface to individual users, and can leverage offline data collected from
the default interface to improve its initial performance and accelerate online
learning.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 00:07:20 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Mar 2022 01:39:28 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Gao",
"Jensen",
""
],
[
"Reddy",
"Siddharth",
""
],
[
"Berseth",
"Glen",
""
],
[
"Hardy",
"Nicholas",
""
],
[
"Natraj",
"Nikhilesh",
""
],
[
"Ganguly",
"Karunesh",
""
],
[
"Dragan",
"Anca D.",
""
],
[
"Levine",
"Sergey",
""
]
] |
new_dataset
| 0.999448 |
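The X2T record above treats backspaces as free negative labels for online learning. A rough sketch of that idea follows, with placeholder feature extraction, learning rate, and re-ranking rule (assumptions of this illustration, not the authors' implementation):

```python
import numpy as np

class BackspaceFeedbackModel:
    """Online logistic model of 'this selection will not be backspaced'."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def p_accept(self, features):
        return 1.0 / (1.0 + np.exp(-features @ self.w))

    def rerank(self, candidates):
        """candidates: list of (word, default_score, feature_vector);
        combine the default interface's score with the feedback model."""
        return max(candidates, key=lambda c: c[1] * self.p_accept(c[2]))

    def update(self, features, backspaced):
        """One SGD step on the logistic loss; a backspace is a negative label."""
        y = 0.0 if backspaced else 1.0
        self.w += self.lr * (y - self.p_accept(features)) * features
```

Every selection then yields a training example at no extra cost to the user: the chosen candidate's features, labeled by whether a backspace followed.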
2203.02587
|
Philipp Haindl
|
Philipp Haindl, Reinhold Plösch
|
A DSL for Defining Feature-Level Quality Constraints and the Aggregation
of Evaluation Results in DevOps
|
15 pages, 2 figures, 8 code listings
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Quality requirements typically differ among software features, e.g., due to
different usage contexts of the features, different impacts of related quality
deficiencies onto overall user satisfaction, or long-term plans of the
developing organization. For instance, maintainability requirements might be
particularly high for software features which are frequently used or bear
strategic value for the developing organization. Also, software features where
even the smallest delays are perceived as negative by the user will be
subject to especially tight performance requirements.
We defined an operational DSL to define software quality requirements as
individual feature-level constraints based on quantitative measures. The DSL
provides language elements to define the operationalization of measures from
external systems, time series operations, time filters, and the automatic
evaluation of these feature-level constraints in DevOps based on comparison
operators and threshold values. In addition, quality ratings summarize
evaluation results of features on an ordinal grading scheme. Likewise, quality
gates use these quality ratings to reflect the fitness of software features or
the overall software product using different states. Finally, we show an
example based on a widely-adopted secure mobile messaging app that illustrates
the interplay of the different DSL elements.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 22:11:57 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Haindl",
"Philipp",
""
],
[
"Plösch",
"Reinhold",
""
]
] |
new_dataset
| 0.980167 |
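To make the evaluation semantics described in the abstract above concrete: a feature-level constraint applies a time filter and a time-series aggregation to an externally collected measure, compares the result to a threshold, and the outcomes roll up into an ordinal rating. The constraint shape, grading thresholds, and data below are invented for illustration; the paper defines its own DSL syntax:

```python
import operator, time
from statistics import quantiles

def p95(values):
    return quantiles(values, n=20)[-1]  # 95th percentile

def evaluate(samples, window_s, aggregate, op, threshold, now):
    """samples: (timestamp, value) pairs from an external monitoring system."""
    windowed = [v for t, v in samples if now - t <= window_s]  # time filter
    return op(aggregate(windowed), threshold)                  # comparison

def rating(results):
    """Summarize a feature's constraint outcomes on an ordinal grading scheme."""
    share = sum(results) / len(results)
    return "A" if share == 1 else "B" if share >= 0.75 else "C"

now = time.time()
latency_ms = [(now - i * 60, 120.0 + (i % 7) * 10) for i in range(120)]  # fake data
# e.g. checkout feature: p95 latency over the last hour must stay below 200 ms
ok = evaluate(latency_ms, 3600, p95, operator.lt, 200.0, now)
print(ok, rating([ok, True, True, False]))  # True B
```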
2203.02643
|
Étienne Villemure
|
Étienne Villemure (1), Philippe Arsenault (1), Gabriel Lessard (1),
Thierry Constantin (1), Hubert Dubé (1), Louis-Daniel Gaulin (1), Xavier
Groleau (1), Samuel Laperrière (1), Charles Quesnel (1), François
Ferland (1) ((1) Université de Sherbrooke)
|
SwarmUS: An open hardware and software on-board platform for swarm
robotics development
|
8 pages, 9 figures, submitted to IROS 2022
| null | null | null |
cs.RO cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real life implementations of distributed swarm robotics are rare. The
standardization of a general purpose swarm robotics platform could greatly
accelerate swarm robotics towards real life implementations. The SwarmUS
platform is an open-source hardware and software on-board embedded system
designed to be added onto existing robots while providing them with swarm
features, thus proposing a new take on the platform standardization problem.
These features include a distributed relative localization system based on
Ultra-Wideband, a local communication system based on Wi-Fi and a distributed
coordination system based on the Buzz programming language between robots
connected within a SwarmUS platform. Additionally, a human-swarm interaction
mobile application and an emulation of the platform in the Robot Operating
System (ROS) are presented. Finally, an implementation of the system was
realized and tested on two types of robots: a TurtleBot3 Burger and two
Pioneer 2DX.
|
[
{
"version": "v1",
"created": "Sat, 5 Mar 2022 02:20:18 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Villemure",
"Étienne",
"",
"Université de Sherbrooke"
],
[
"Arsenault",
"Philippe",
"",
"Université de Sherbrooke"
],
[
"Lessard",
"Gabriel",
"",
"Université de Sherbrooke"
],
[
"Constantin",
"Thierry",
"",
"Université de Sherbrooke"
],
[
"Dubé",
"Hubert",
"",
"Université de Sherbrooke"
],
[
"Gaulin",
"Louis-Daniel",
"",
"Université de Sherbrooke"
],
[
"Groleau",
"Xavier",
"",
"Université de Sherbrooke"
],
[
"Laperrière",
"Samuel",
"",
"Université de Sherbrooke"
],
[
"Quesnel",
"Charles",
"",
"Université de Sherbrooke"
],
[
"Ferland",
"François",
"",
"Université de Sherbrooke"
]
] |
new_dataset
| 0.998179 |
2203.02660
|
Sicong Cao
|
Sicong Cao, Xiaobing Sun, Lili Bo, Rongxin Wu, Bin Li, and Chuanqi Tao
|
MVD: Memory-Related Vulnerability Detection Based on Flow-Sensitive
Graph Neural Networks
|
To appear in the Technical Track of ICSE 2022
| null |
10.1145/3510003.3510219
| null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Memory-related vulnerabilities constitute severe threats to the security of
modern software. Despite the success of deep learning-based approaches to
generic vulnerability detection, they are still limited by the underutilization
of flow information when applied for detecting memory-related vulnerabilities,
leading to high false positives.
In this paper, we propose MVD, a statement-level Memory-related Vulnerability
Detection approach based on flow-sensitive graph neural networks (FS-GNN).
FS-GNN is employed to jointly embed both unstructured information (i.e., source
code) and structured information (i.e., control- and data-flow) to capture
implicit memory-related vulnerability patterns. We evaluate MVD on a dataset
which contains 4,353 real-world memory-related vulnerabilities, and compare our
approach with three state-of-the-art deep learning-based approaches as well as
five popular static analysis-based memory detectors. The experimental results show
that MVD achieves better detection accuracy, outperforming both state-of-the-art
DL-based and static analysis-based approaches. Furthermore, MVD strikes a good
trade-off between accuracy and efficiency.
|
[
{
"version": "v1",
"created": "Sat, 5 Mar 2022 05:06:10 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Cao",
"Sicong",
""
],
[
"Sun",
"Xiaobing",
""
],
[
"Bo",
"Lili",
""
],
[
"Wu",
"Rongxin",
""
],
[
"Li",
"Bin",
""
],
[
"Tao",
"Chuanqi",
""
]
] |
new_dataset
| 0.993573 |
2203.02683
|
Louis Mahon
|
Louis Mahon and Carl Vogel
|
The Proof is in the Pudding: Using Automated Theorem Proving to Generate
Cooking Recipes
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents FASTFOOD, a rule-based Natural Language Generation
Program for cooking recipes. Recipes are generated by using an Automated
Theorem Proving procedure to select the ingredients and instructions, with
ingredients corresponding to axioms and instructions to implications. FASTFOOD
also contains a temporal optimization module which can rearrange the recipe to
make it more time-efficient for the user, e.g. the recipe specifies to chop the
vegetables while the rice is boiling. The system is described in detail, using
a framework which divides Natural Language Generation into 4 phases: content
production, content selection, content organisation and content realisation. A
comparison is then made with similar existing systems and techniques.
|
[
{
"version": "v1",
"created": "Sat, 5 Mar 2022 08:50:34 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Mahon",
"Louis",
""
],
[
"Vogel",
"Carl",
""
]
] |
new_dataset
| 0.992261 |
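The FASTFOOD record above maps ingredients to axioms and instructions to implications, so recipe generation reduces to forward-chaining inference. A toy sketch with an invented three-rule knowledge base (the actual system's rules and realization phases are not part of this record):

```python
RULES = [  # (premises, conclusion, instruction text)
    ({"rice", "water"}, "boiled_rice", "Boil the rice in water."),
    ({"vegetables"}, "chopped_vegetables", "Chop the vegetables."),
    ({"boiled_rice", "chopped_vegetables", "soy_sauce"}, "fried_rice",
     "Stir-fry the rice with the vegetables and soy sauce."),
]

def derive(goal, axioms):
    """Forward-chain over RULES until `goal` is proved; return the instructions."""
    facts, steps = set(axioms), []
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion, text in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # a proven conclusion becomes a new fact
                steps.append(text)
                changed = True
    return steps if goal in facts else None

print(derive("fried_rice", {"rice", "water", "vegetables", "soy_sauce"}))
```

The temporal optimization module described above would then reorder derived steps that have no dependency on each other, e.g. chopping vegetables while the rice boils.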
2203.02735
|
Md Imran Hossen
|
Md Imran Hossen and Xiali Hei
|
aaeCAPTCHA: The Design and Implementation of Audio Adversarial CAPTCHA
|
Accepted at 7th IEEE European Symposium on Security and Privacy
(EuroS&P 2022)
| null | null | null |
cs.CR cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
CAPTCHAs are designed to prevent malicious bot programs from abusing
websites. Most online service providers deploy audio CAPTCHAs as an alternative
to text and image CAPTCHAs for visually impaired users. However, prior research
investigating the security of audio CAPTCHAs found them highly vulnerable to
automated attacks using Automatic Speech Recognition (ASR) systems. To improve
the robustness of audio CAPTCHAs against automated abuses, we present the
design and implementation of an audio adversarial CAPTCHA (aaeCAPTCHA) system
in this paper. The aaeCAPTCHA system exploits audio adversarial examples as
CAPTCHAs to prevent the ASR systems from automatically solving them.
Furthermore, we conducted a rigorous security evaluation of our new audio
CAPTCHA design against five state-of-the-art DNN-based ASR systems and three
commercial Speech-to-Text (STT) services. Our experimental evaluations
demonstrate that aaeCAPTCHA is highly secure against these speech recognition
technologies, even when the attacker has complete knowledge of the current
attacks against audio adversarial examples. We also conducted a usability
evaluation of the proof-of-concept implementation of the aaeCAPTCHA scheme. Our
results show that it achieves high robustness at a moderate usability cost
compared to normal audio CAPTCHAs. Finally, our extensive analysis highlights
that aaeCAPTCHA can significantly enhance the security and robustness of
traditional audio CAPTCHA systems while maintaining similar usability.
|
[
{
"version": "v1",
"created": "Sat, 5 Mar 2022 13:32:19 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Hossen",
"Md Imran",
""
],
[
"Hei",
"Xiali",
""
]
] |
new_dataset
| 0.994588 |
2203.02810
|
Phaedra Curlin
|
Phaedra S. Curlin, Madaline A. Muniz, Mason M. Bell, Alexis A. Muniz
and Jack O. Burns
|
Virtual Reality Digital Twin and Environment for Troubleshooting
Lunar-based Infrastructure Assembly Failures
|
5 pages, 9 figures, submitted to: International Workshop on Virtual,
Augmented, and Mixed-Reality for Human-Robot Interactions 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Humans and robots will need to collaborate in order to create a sustainable
human lunar presence by the end of the 2020s. This includes cases in which a
human will be required to teleoperate an autonomous rover that has encountered
an instrument assembly failure. To aid teleoperators in the troubleshooting
process, we propose a virtual reality digital twin placed in a simulated
environment. Here, the operator can virtually interact with a digital version
of the rover and mechanical arm that uses the same controls and kinematic
model. The user can also adopt the egocentric (a first person view through
using stereoscopic passthrough) and exocentric (a third person view where the
operator can virtually walk around the environment and rover as if they were on
site) view. We also discuss our metrics for evaluating the differences between
our digital and physical robot, as well as the experimental concept based on
real and applicable missions, and future work that would compare our platform
to traditional troubleshooting methods.
|
[
{
"version": "v1",
"created": "Sat, 5 Mar 2022 19:36:16 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Curlin",
"Phaedra S.",
""
],
[
"Muniz",
"Madaline A.",
""
],
[
"Bell",
"Mason M.",
""
],
[
"Muniz",
"Alexis A.",
""
],
[
"Burns",
"Jack O.",
""
]
] |
new_dataset
| 0.994836 |
2203.02815
|
Dragoljub Duric
|
Dragoljub Đurić
|
Double Choco is NP-complete
| null | null | null | null |
cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
In the Nikoli pencil-and-paper game Double Choco, a puzzle consists of an $m
\times n$ grid of cells of white or gray color, separated by dotted lines
where each cell possibly contains an integer. The goal is to partition the grid
into blocks by drawing solid lines over the dotted lines, where every block
must contain a pair of areas of white and gray cells having the same form (size
and shape). An integer indicates the number of cells of that color in the
block. A block can contain any number of cells with the integer. We prove this
puzzle NP-complete, establishing a Nikoli gap of 2 years.
|
[
{
"version": "v1",
"created": "Sat, 5 Mar 2022 20:22:28 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Đurić",
"Dragoljub",
""
]
] |
new_dataset
| 0.999873 |
2203.02955
|
Ehsan Ul Haq
|
Ehsan-Ul Haq, Gareth Tyson, Lik-Hang Lee, Tristan Braud, Pan Hui
|
Twitter Dataset for 2022 Russo-Ukrainian Crisis
| null | null | null | null |
cs.SI cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Online Social Networks (OSNs) play a significant role in information sharing
during a crisis. The data collected during such a crisis can reflect
large-scale public opinion and sentiment. In addition, OSN data can also be used to
study different campaigns that are employed by various entities to engineer
public opinions. Such information sharing campaigns can range from spreading
factual information to propaganda and misinformation. We provide a Twitter
dataset of the 2022 Russo-Ukrainian conflict. In the first release, we share
over 1.6 million tweets shared during the 1st week of the crisis.
|
[
{
"version": "v1",
"created": "Sun, 6 Mar 2022 12:49:40 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Haq",
"Ehsan-Ul",
""
],
[
"Tyson",
"Gareth",
""
],
[
"Lee",
"Lik-Hang",
""
],
[
"Braud",
"Tristan",
""
],
[
"Hui",
"Pan",
""
]
] |
new_dataset
| 0.999806 |
2203.02967
|
Qingyu Xing
|
Qingyu Xing and Xiaohan Ma
|
Variational Auto-Encoder based Mandarin Speech Cloning
|
Submitted to Insterspeech 2022
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Speech cloning technology is becoming more sophisticated thanks to the
advances in machine learning. Researchers have successfully implemented
natural-sounding English speech synthesis and good English speech cloning by
some effective models. However, because of prosodic phrasing and large
character set of Mandarin, Chinese utilization of these models is not yet
complete. By creating a new dataset and replacing Tacotron synthesizer with
VAENAR-TTS, we improved the existing speech cloning technique CV2TTS to almost
real-time speech cloning while guaranteeing synthesis quality. In the process,
we customized the subjective tests of synthesis quality assessment by attaching
various scenarios, so that subjects focus on the differences between voice and
our improvements maybe were more advantageous to practical applications. The
results of the A/B test, real-time factor (RTF) and 2.74 mean opinion score
(MOS) in terms of naturalness and similarity, reflect the real-time
high-quality Mandarin speech cloning we achieved.
|
[
{
"version": "v1",
"created": "Sun, 6 Mar 2022 14:01:39 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Xing",
"Qingyu",
""
],
[
"Ma",
"Xiaohan",
""
]
] |
new_dataset
| 0.993449 |
2203.03058
|
Heidi Howard
|
Heidi Howard, Richard Mortier
|
Relaxed Paxos: Quorum Intersection Revisited (Again)
|
to be published in the 9th Workshop on Principles and Practice of
Consistency for Distributed Data (PaPoC'22)
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed consensus, the ability to reach agreement in the face of
failures, is a fundamental primitive for constructing reliable distributed
systems. The Paxos algorithm is synonymous with consensus and widely utilized
in production. Paxos uses two phases: phase one and phase two, each requiring a
quorum of acceptors, to reach consensus during a round of the protocol.
Traditionally, Paxos requires that all quorums, regardless of phase or round,
intersect, and majorities are often used for this purpose. Flexible Paxos proved
that it is only necessary for the phase one quorum of a given round to intersect
with the phase two quorums of all previous rounds.
In this paper, we re-examine how Paxos approaches the problem of consensus.
We look again at quorum intersection in Flexible Paxos and observe that quorum
intersection can be safely weakened further. Most notably, we observe that if a
proposer learns that a value was proposed in some previous round, then its phase
one quorum no longer needs to intersect with the phase two quorums from that round or
from any previous rounds. Furthermore, in order to provide an intuitive
explanation of our results, we propose a novel abstraction for reasoning about
Paxos which utilizes write-once registers.
|
[
{
"version": "v1",
"created": "Sun, 6 Mar 2022 21:30:15 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Howard",
"Heidi",
""
],
[
"Mortier",
"Richard",
""
]
] |
new_dataset
| 0.996191 |
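The quorum conditions in the abstract above can be stated compactly. The sketch below illustrates only the intersection requirements (the helper names and encoding are assumptions, not the paper's formalism): Flexible Paxos requires a round's phase-one quorum to intersect the phase-two quorums of all earlier rounds, while the relaxation lets a proposer that has learned the value proposed in some round ignore that round and everything before it:

```python
def flexible_paxos_ok(phase1, phase2):
    """phase1/phase2: dict round -> list of quorums (sets of acceptor ids)."""
    return all(q1 & q2
               for r, q1s in phase1.items() for q1 in q1s
               for r2, q2s in phase2.items() if r2 < r
               for q2 in q2s)

def relaxed_ok(phase1, phase2, learned_round=-1):
    """learned_round: highest round whose proposed value is already known."""
    return all(q1 & q2
               for r, q1s in phase1.items() for q1 in q1s
               for r2, q2s in phase2.items() if learned_round < r2 < r
               for q2 in q2s)

# Round 1's phase-one quorum misses round 0's phase-two quorum:
p1, p2 = {1: [{3, 4}]}, {0: [{1, 2}]}
print(flexible_paxos_ok(p1, p2))            # False: required intersection is empty
print(relaxed_ok(p1, p2, learned_round=0))  # True: round 0's value already learned
```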
2203.03119
|
Ryosuke Abe
|
Ryosuke Abe, Shigeya Suzuki, Kenji Saito, Hiroya Tanaka, Osamu
Nakamura, Jun Murai
|
Fabchain: Managing Audit-able 3D Print Job over Blockchain
| null | null | null | null |
cs.DC cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Improvements in fabrication devices such as 3D printers are making it possible
for personal fabrication to freely produce almost any product. To clarify who is
liable for a product, the fabricator should keep the fabrication history in
an immutable and sustainably accessible manner. In this paper, we propose a new
scheme, "Fabchain," that can record the fabrication history in such a manner.
By utilizing a scheme that employs a blockchain as an audit-able communication
channel, Fabchain manages print jobs for the fabricator's 3D printer over the
blockchain, while maintaining a history of a print job. We implemented Fabchain
on Ethereum and evaluated the performance for recording a print job. Our
results demonstrate that Fabchain can complete communication of a print job
sequence in less than 1 minute on the Ethereum test network. We conclude that
Fabchain can manage a print job in a reasonable duration for 3D printing, while
satisfying the requirements for immutability and sustainability.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 03:41:17 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Abe",
"Ryosuke",
""
],
[
"Suzuki",
"Shigeya",
""
],
[
"Saito",
"Kenji",
""
],
[
"Tanaka",
"Hiroya",
""
],
[
"Nakamura",
"Osamu",
""
],
[
"Murai",
"Jun",
""
]
] |
new_dataset
| 0.999463 |
2203.03149
|
Kunyi Zhang
|
Kunyi Zhang, Chenxing Jiang, Jinghang Li, Sheng Yang, Teng Ma, Chao
Xu, Fei Gao
|
DIDO: Deep Inertial Quadrotor Dynamical Odometry
|
8 pages, 6 figures, submitted to IROS 2022 with RA-L
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we propose an interoceptive-only state estimation system for a
quadrotor with deep neural network processing, where the quadrotor dynamics is
considered as a perceptive supplement of the inertial kinematics. To improve
the precision of multi-sensor fusion, we train cascaded networks on real-world
quadrotor flight data to learn IMU kinematic properties, quadrotor dynamic
characteristics, and motion states of the quadrotor along with their
uncertainty information, respectively. This encoded information empowers us to
address the issues of IMU bias stability, dynamic constraints, and multi-sensor
calibration during sensor fusion. The above multi-source information is fused
into a two-stage Extended Kalman Filter (EKF) framework for better estimation.
Experiments have demonstrated the advantages of our proposed work over several
conventional and learning-based methods.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 05:51:29 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Zhang",
"Kunyi",
""
],
[
"Jiang",
"Chenxing",
""
],
[
"Li",
"Jinghang",
""
],
[
"Yang",
"Sheng",
""
],
[
"Ma",
"Teng",
""
],
[
"Xu",
"Chao",
""
],
[
"Gao",
"Fei",
""
]
] |
new_dataset
| 0.970828 |
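For reference, the two-stage Extended Kalman Filter mentioned above fuses the learned IMU, dynamics, and motion-state outputs through the standard predict/update cycle. A generic single-step sketch (not the DIDO implementation; in the paper's setting the learned networks would supply the models and the covariances):

```python
import numpy as np

def ekf_step(x, P, f, F, Q, z, h, H, R):
    """One EKF iteration. f/h: process and measurement models; F/H: their
    Jacobians at the current estimate; Q/R: their uncertainty covariances."""
    x_pred = f(x)                              # predict with the process model
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y                     # update with the measurement
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```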
2203.03172
|
Mike Allenspach
|
Mike Allenspach, Yash Vyas, Matthias Rubio, Roland Siegwart, Marco
Tognon
|
Human-State-Aware Controller for a Tethered Aerial Robot Guiding a Human
by Physical Interaction
| null | null |
10.1109/LRA.2022.3143574
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
With the rapid development of Aerial Physical Interaction, the possibility to
have aerial robots physically interacting with humans is attracting a growing
interest. In one of our previous works, we considered one of the first systems
in which a human is physically connected to an aerial vehicle by a cable.
There, we developed a compliant controller that allows the robot to pull the
human toward a desired position using forces only as an indirect
communication channel. However, this controller is based on the robot state
only, which makes the system unable to adapt to the human's behavior, and in
particular to their walking speed. This reduces the effectiveness and comfort
of the guidance when the human is still far from the desired point. In this
paper, we formally analyze the problem and propose a human-state-aware
controller that includes the human's velocity feedback. We theoretically prove
and experimentally show that this method provides a more consistent guiding
force which enhances the guiding experience.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 06:55:14 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Allenspach",
"Mike",
""
],
[
"Vyas",
"Yash",
""
],
[
"Rubio",
"Matthias",
""
],
[
"Siegwart",
"Roland",
""
],
[
"Tognon",
"Marco",
""
]
] |
new_dataset
| 0.997046 |
2203.03176
|
George Alexandropoulos
|
Mengnan Jian and George C. Alexandropoulos and Ertugrul Basar and
Chongwen Huang and Ruiqi Liu and Yuanwei Liu and Chau Yuen
|
Reconfigurable Intelligent Surfaces for Wireless Communications:
Overview of Hardware Designs, Channel Models, and Estimation Techniques
|
19 pages, 7 figures, to appear in an ITU journal
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The demanding objectives for the future sixth generation (6G) of wireless
communication networks have spurred recent research efforts on novel materials
and radio-frequency front-end architectures for wireless connectivity, as well
as revolutionary communication and computing paradigms. Among the pioneering
candidate technologies for 6G are the reconfigurable intelligent surfaces
(RISs), which are artificial planar structures with integrated electronic
circuits that can be programmed to manipulate the incoming electromagnetic
field in a wide variety of functionalities. Incorporating RISs in wireless
networks has been recently advocated as a revolutionary means to transform any
wireless signal propagation environment to a dynamically programmable one,
intended for various networking objectives, such as coverage extension and
capacity boosting, spatiotemporal focusing with benefits in energy efficiency
and secrecy, and low electromagnetic field exposure. Motivated by the recent
increasing interests in the field of RISs and the consequent pioneering concept
of the RIS-enabled smart wireless environments, in this paper, we overview and
taxonomize the latest advances in RIS hardware architectures as well as the
most recent developments in the modeling of RIS unit elements and RIS-empowered
wireless signal propagation. We also present a thorough overview of the channel
estimation approaches for RIS-empowered communications systems, which
constitute a prerequisite step for the optimized incorporation of RISs in
future wireless networks. Finally, we discuss the relevance of the RIS
technology in the latest wireless communication standards, and highlight the
current and future standardization activities for the RIS technology and the
consequent RIS-empowered wireless networking approaches.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 07:07:38 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Jian",
"Mengnan",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Basar",
"Ertugrul",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Liu",
"Ruiqi",
""
],
[
"Liu",
"Yuanwei",
""
],
[
"Yuen",
"Chau",
""
]
] |
new_dataset
| 0.998923 |
2203.03201
|
Salman Bari
|
Salman Bari, Volker Gabler and Dirk Wollherr
|
MS2MP: A Min-Sum Message Passing Algorithm for Motion Planning
| null | null |
10.1109/ICRA48506.2021.9561533
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Gaussian Process (GP) formulation of continuous-time trajectories offers a fast
solution to the motion planning problem via probabilistic inference on a factor
graph. However, the solution often converges to infeasible local minima, and
the planned trajectory is then not collision-free. We propose a message passing
algorithm that is more sensitive to obstacles, with fast convergence time. We
leverage the min-sum message passing algorithm, which performs local
computations at each node, to solve the inference problem on the factor graph. We
first introduce the notion of compound factor node to transform the factor
graph to a linearly structured graph. We next develop an algorithm denoted as
Min-sum Message Passing algorithm for Motion Planning (MS2MP) that combines
numerical optimization with message passing to find collision-free
trajectories. MS2MP performs numerical optimization to solve non-linear least
square minimization problem at each compound factor node and then exploits the
linear structure of the factor graph to compute the maximum a posteriori (MAP)
estimate of the complete graph by passing messages among graph nodes. The
decentralized optimization approach of each compound node increases sensitivity
towards avoiding obstacles for harder planning problems. We evaluate our
algorithm by performing extensive experiments for exemplary motion planning
tasks for a robot manipulator. Our evaluation reveals that MS2MP improves
existing work in convergence time and success rate.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 08:24:20 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Bari",
"Salman",
""
],
[
"Gabler",
"Volker",
""
],
[
"Wollherr",
"Dirk",
""
]
] |
new_dataset
| 0.990397 |
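Min-sum message passing computes the exact MAP assignment on linearly structured graphs, which is the property MS2MP exploits after collapsing factors into compound nodes. A toy discrete-state illustration of the message/backtrack structure (the actual planner works on continuous trajectories and runs numerical optimization at each compound factor node):

```python
import numpy as np

def min_sum_chain(unary, pairwise):
    """unary: (T, K) per-node state costs; pairwise: (K, K) link costs.
    Returns the minimum-cost state sequence (MAP) on the chain."""
    T, K = unary.shape
    msg = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    msg[0] = unary[0]
    for t in range(1, T):
        total = msg[t - 1][:, None] + pairwise  # incoming message + link cost
        back[t] = total.argmin(axis=0)          # best predecessor per state
        msg[t] = total.min(axis=0) + unary[t]
    path = [int(msg[-1].argmin())]
    for t in range(T - 1, 0, -1):               # backtrack the MAP path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# 4 nodes, 3 states; unary cost penalizes "obstacle" state 1, pairwise penalizes jumps
unary = np.array([[0, 9, 4], [2, 9, 0], [0, 9, 1], [1, 9, 0]], dtype=float)
pairwise = np.array([[0, 3, 2], [3, 0, 3], [2, 3, 0]], dtype=float)
print(min_sum_chain(unary, pairwise))  # a minimum-cost sequence avoiding state 1
```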
2203.03324
|
Matteo Grimaldi
|
Matteo Grimaldi, Luca Mocerino, Antonio Cipolletta, Andrea Calimera
|
Dynamic ConvNets on Tiny Devices via Nested Sparsity
|
Submitted to the IEEE
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work introduces a new training and compression pipeline to build Nested
Sparse ConvNets, a class of dynamic Convolutional Neural Networks (ConvNets)
suited for inference tasks deployed on resource-constrained devices at the edge
of the Internet-of-Things. A Nested Sparse ConvNet consists of a single ConvNet
architecture containing N sparse sub-networks with nested weights subsets, like
a Matryoshka doll, and can trade accuracy for latency at run time, using the
model sparsity as a dynamic knob. To attain high accuracy at training time, we
propose a gradient masking technique that optimally routes the learning signals
across the nested weights subsets. To minimize the storage footprint and
efficiently process the obtained models at inference time, we introduce a new
sparse matrix compression format with dedicated compute kernels that fruitfully
exploit the characteristic of the nested weights subsets. Tested on image
classification and object detection tasks on an off-the-shelf ARM-M7 Micro
Controller Unit (MCU), Nested Sparse ConvNets outperform variable-latency
solutions naively built assembling single sparse models trained as stand-alone
instances, achieving (i) comparable accuracy, (ii) remarkable storage savings,
and (iii) high performance. Moreover, when compared to state-of-the-art dynamic
strategies, like dynamic pruning and layer width scaling, Nested Sparse
ConvNets turn out to be Pareto optimal in the accuracy vs. latency space.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 12:07:02 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Grimaldi",
"Matteo",
""
],
[
"Mocerino",
"Luca",
""
],
[
"Cipolletta",
"Antonio",
""
],
[
"Calimera",
"Andrea",
""
]
] |
new_dataset
| 0.988005 |
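The nesting property described above, where each sparser sub-network's weights are a subset of the denser ones, can be illustrated with plain magnitude pruning. A simplified sketch (the paper's gradient-masking training and compressed storage format are not reproduced):

```python
import numpy as np

def nested_masks(weights, sparsities):
    """Magnitude-based masks where each sparser mask is nested in the denser ones."""
    order = np.argsort(np.abs(weights), axis=None)  # smallest magnitudes first
    masks = []
    for s in sorted(sparsities):
        mask = np.ones(weights.size, dtype=bool)
        mask[order[: int(s * weights.size)]] = False  # prune the smallest weights
        masks.append(mask.reshape(weights.shape))
    return masks

w = np.random.randn(8, 8)
m50, m75, m90 = nested_masks(w, [0.5, 0.75, 0.9])
assert np.all(m90 <= m75) and np.all(m75 <= m50)  # nesting: sparser subset of denser
y = w * m75  # pick the 75%-sparse sub-network as the run-time knob
```

Selecting a mask at run time is then the "dynamic knob": one stored tensor serves several accuracy/latency operating points.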
2203.03396
|
Minshan Xie
|
Minshan Xie, Menghan Xia, Xueting Liu, Tien-Tsin Wong
|
Screentone-Preserved Manga Retargeting
|
10 pages, 13 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a popular comic style, manga offers a unique impression by utilizing a
rich set of bitonal patterns, or screentones, for illustration. However,
screentones can easily be contaminated with visually unpleasant aliasing and/or
blurriness after resampling, which harms their visualization on displays of
diverse resolutions. To address this problem, we propose the first manga
retargeting method that synthesizes a rescaled manga image while retaining the
screentone in each screened region. This is a non-trivial task as accurate
region-wise segmentation remains challenging. Fortunately, the rescaled manga
shares the same region-wise screentone correspondences with the original manga,
which enables us to simplify the screentone synthesis problem as an
anchor-based proposals selection and rearrangement problem. Specifically, we
design a novel manga sampling strategy to generate aliasing-free screentone
proposals, based on hierarchical grid-based anchors that connect the
correspondences between the original and the target rescaled manga.
Furthermore, a Recurrent Proposal Selection Module (RPSM) is proposed to
adaptively integrate these proposals for target screentone synthesis. Besides,
to deal with the translation insensitivity nature of screentones, we propose a
translation-invariant screentone loss to facilitate the training convergence.
Extensive qualitative and quantitative experiments are conducted to verify the
effectiveness of our method, and notably compelling results are achieved
compared to existing alternative techniques.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 13:48:15 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Xie",
"Minshan",
""
],
[
"Xia",
"Menghan",
""
],
[
"Liu",
"Xueting",
""
],
[
"Wong",
"Tien-Tsin",
""
]
] |
new_dataset
| 0.971679 |
2203.03454
|
Qingqing Li
|
Qingqing Li, Xianjia Yu, Jorge Peña Queralta, Tomi Westerlund
|
Multi-Modal Lidar Dataset for Benchmarking General-Purpose Localization
and Mapping Algorithms
|
8 pages
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lidar technology has evolved significantly over the last decade, with higher
resolution, better accuracy, and lower cost devices available today. In
addition, new scanning modalities and novel sensor technologies have emerged in
recent years. Public datasets have enabled benchmarking of algorithms and have
set standards for the cutting edge technology. However, existing datasets are
not representative of the technological landscape, with only a reduced number
of lidars available. This inherently limits the development and comparison of
general-purpose algorithms in the evolving landscape. This paper presents a
novel multi-modal lidar dataset with sensors showcasing different scanning
modalities (spinning and solid-state), sensing technologies, and lidar cameras.
The focus of the dataset is on low-drift odometry, with ground truth data
available in both indoor and outdoor environments with sub-millimeter accuracy
from a motion capture (MOCAP) system. For comparison over longer distances, we
also include data recorded in larger spaces indoors and outdoors. The dataset
contains point cloud data from spinning lidars and solid-state lidars. Also, it
provides range images from high resolution spinning lidars, RGB and depth
images from a lidar camera, and inertial data from built-in IMUs. This is, to
the best of our knowledge, the lidar dataset with the most variety of sensors
and environments where ground truth data is available. This dataset can be
widely used in multiple research areas, such as 3D LiDAR simultaneous
localization and mapping (SLAM), performance comparison between multi-modal
lidars, appearance recognition and loop closure detection. The datasets are
available at: https://github.com/TIERS/tiers-lidars-dataset.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 15:14:08 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Li",
"Qingqing",
""
],
[
"Yu",
"Xianjia",
""
],
[
"Queralta",
"Jorge Peña",
""
],
[
"Westerlund",
"Tomi",
""
]
] |
new_dataset
| 0.999795 |
2203.03516
|
Joao Ramos
|
Yeongtae Jung and Joao Ramos
|
A Large Force Haptic Interface with Modular Linear Actuators
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a haptic interface with modular linear actuators which
can address limitations of conventional devices based on rotatory joints. The
proposed haptic interface is composed of parallel linear actuators that provide
high backdrivability and small inertia. The performance of the haptic interface
is compared with the conventional mechanisms in terms of force capability,
reflected inertia, and structural stiffness. High stiffness and a large range of
motion with high force capability are achieved with the proposed mechanism,
which are in trade-off relationships in traditional haptic interfaces. The
device can apply up to 83 N continuously, which is three times larger than most
haptic devices. The theoretical minimum haptic force density and the stiffness
of the proposed mechanism were 1.3 to 1.9 times and 37 times of conventional
mechanisms in a similar condition, respectively. The system is also scalable
because its structural stiffness only depends on the timing belt stiffness,
while that of conventional haptic interfaces is inversely proportional to the
cube of the structural lengths. The modular actuator design enables changing the
degrees of freedom (DOFs) for different applications. The proposed haptic
interface was tested in an interaction experiment with a virtual environment
with rigid walls.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 17:00:09 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Jung",
"Yeongtae",
""
],
[
"Ramos",
"Joao",
""
]
] |
new_dataset
| 0.969489 |
2203.03546
|
Ngoc Lai
|
Ngoc Minh Lai
|
LMN at SemEval-2022 Task 11: A Transformer-based System for English
Named Entity Recognition
|
SemEval 2022 (co-located with NAACL)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Processing complex and ambiguous named entities is a challenging research
problem, but it has not received sufficient attention from the natural language
processing community. In this short paper, we present our participation in the
English track of SemEval-2022 Task 11: Multilingual Complex Named Entity
Recognition. Inspired by the recent advances in pretrained Transformer language
models, we propose a simple yet effective Transformer-based baseline for the
task. Despite its simplicity, our proposed approach shows competitive results
on the leaderboard, ranking 12th out of 30 teams. Our system achieved a macro
F1 score of 72.50% on the held-out test set. We have also explored a data
augmentation approach using entity linking. While this approach does not improve
the final performance, we discuss it in this paper as well.
|
[
{
"version": "v1",
"created": "Sun, 13 Feb 2022 05:46:14 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Lai",
"Ngoc Minh",
""
]
] |
new_dataset
| 0.997257 |
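In the spirit of the Transformer-based baseline described above, a minimal token-classification sketch with Hugging Face transformers; the checkpoint name is an assumed public model, not the author's fine-tuned system:

```python
from transformers import pipeline

# Token classification treats NER as per-token tagging; "simple" aggregation
# merges word pieces back into entity spans.
ner = pipeline("token-classification",
               model="dslim/bert-base-NER",   # assumed public checkpoint
               aggregation_strategy="simple")

print(ner("SemEval-2022 Task 11 covers complex named entities like Uncharted 4."))
```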
2203.03558
|
Joao Ramos
|
Amartya Purushottam, Yeongtae Jung, Kevin Murphy, Donghoon Baek and
Joao Ramos
|
Hands-free Telelocomotion of a Wheeled Humanoid toward Dynamic Mobile
Manipulation via Teleoperation
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Robotic systems that can dynamically combine manipulation and locomotion
could facilitate dangerous or physically demanding labor. For instance,
firefighter humanoid robots could leverage their body by leaning against
collapsed building rubble to push it aside. Here we introduce a teleoperation
system that targets the realization of these tasks using human whole-body motor
skills. We describe a new wheeled humanoid platform, SATYRR, and a novel
hands-free teleoperation architecture using a whole-body Human Machine
Interface (HMI). This system enables telelocomotion of the humanoid robot using
the operator body motion, freeing their arms for manipulation tasks. In this
study we evaluate the efficacy of the proposed system on hardware, and explore
the control of SATYRR using two teleoperation mappings that map the operators
body pitch and twist to the robot velocity or acceleration. Through experiments
and user feedback we showcase our preliminary findings of the pilot-system
response. Results suggest that the HMI is capable of effectively telelocomoting
SATYRR, that pilot preferences should dictate the appropriate motion mapping
and gains, and finally that the pilot can better learn to control the system
over time. This study represents a fundamental step towards the realization of
combined manipulation and locomotion via teleoperation.
|
[
{
"version": "v1",
"created": "Mon, 7 Mar 2022 17:59:25 GMT"
}
] | 2022-03-08T00:00:00 |
[
[
"Purushottam",
"Amartya",
""
],
[
"Jung",
"Yeongtae",
""
],
[
"Murphy",
"Kevin",
""
],
[
"Baek",
"Donghoon",
""
],
[
"Ramos",
"Joao",
""
]
] |
new_dataset
| 0.999473 |
1708.01425
|
Ivan Habernal
|
Ivan Habernal and Henning Wachsmuth and Iryna Gurevych and Benno Stein
|
The Argument Reasoning Comprehension Task: Identification and
Reconstruction of Implicit Warrants
|
Accepted as NAACL 2018 Long Paper; see details on the front page
| null |
10.18653/v1/N18-1175
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reasoning is a crucial part of natural language argumentation. To comprehend
an argument, one must analyze its warrant, which explains why its claim follows
from its premises. As arguments are highly contextualized, warrants are usually
presupposed and left implicit. Thus, comprehension not only requires
language understanding and logic skills, but also depends on common sense. In
this paper we develop a methodology for reconstructing warrants systematically.
We operationalize it in a scalable crowdsourcing process, resulting in a freely
licensed dataset with warrants for 2k authentic arguments from news comments.
On this basis, we present a new challenging task, the argument reasoning
comprehension task. Given an argument with a claim and a premise, the goal is
to choose the correct implicit warrant from two options. Both warrants are
plausible and lexically close, but lead to contradicting claims. A solution to
this task will define a substantial step towards automatic warrant
reconstruction. However, experiments with several neural attention and language
models reveal that current approaches do not suffice.
|
[
{
"version": "v1",
"created": "Fri, 4 Aug 2017 08:46:03 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2017 13:34:24 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Feb 2018 12:34:20 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Feb 2018 12:53:48 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Habernal",
"Ivan",
""
],
[
"Wachsmuth",
"Henning",
""
],
[
"Gurevych",
"Iryna",
""
],
[
"Stein",
"Benno",
""
]
] |
new_dataset
| 0.987802 |
2005.09025
|
Alexander Badri-Spröwitz
|
Felix Ruppert and Alexander Badri-Spröwitz
|
FootTile: a Rugged Foot Sensor for Force and Center of Pressure Sensing
in Soft Terrain
| null | null |
10.1109/ICRA40945.2020.9197466
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present FootTile, a foot sensor for reaction force and
center of pressure sensing in challenging terrain. We compare our sensor design
to standard biomechanical devices, force plates and pressure plates. We show
that FootTile can accurately estimate force and pressure distribution during
legged locomotion. FootTile weighs 0.9 g, has a sampling rate of 330 Hz and a
footprint of 10 by 10 mm, and can easily be adapted in sensor range to the
required load case. In three experiments we validate: first the performance of
the individual sensor, second an array of FootTiles for center of pressure
sensing and third the ground reaction force estimation during locomotion in
granular substrate. We then go on to show the accurate sensing capabilities of
the waterproof sensor in liquid mud, as a showcase for real-world rough terrain
use.
|
[
{
"version": "v1",
"created": "Mon, 18 May 2020 18:45:39 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Ruppert",
"Felix",
""
],
[
"Badri-Spröwitz",
"Alexander",
""
]
] |
new_dataset
| 0.999675 |
2103.17228
|
Antonio Norelli
|
Antonio Norelli and Alessandro Panconesi
|
OLIVAW: Mastering Othello without Human Knowledge, nor a Fortune
|
Accepted for publication in IEEE Transactions on Games. Presented at
AAAI-21 Reinforcement Learning in Games Workshop, 8 pages
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce OLIVAW, an AI Othello player adopting the design principles of
the famous AlphaGo programs. The main motivation behind OLIVAW was to attain
exceptional competence in a non-trivial board game at a tiny fraction of the
cost of its illustrious predecessors. In this paper, we show how the AlphaGo
Zero paradigm can be successfully applied to the popular game of Othello
using only commodity hardware and free cloud services. While being simpler than
Chess or Go, Othello maintains a considerable search space and difficulty in
evaluating board positions. To achieve this result, OLIVAW implements some
improvements inspired by recent works to accelerate the standard AlphaGo Zero
learning process. The main modification involves doubling the positions
collected per game during the training phase by also including positions not
played but largely explored by the agent. We tested the strength of OLIVAW in
three different ways: by pitting it against Edax, the strongest open-source
Othello engine, by playing anonymous games on the web platform OthelloQuest,
and finally in two in-person matches against top-notch human players: a
national champion and a former world champion.
|
[
{
"version": "v1",
"created": "Wed, 31 Mar 2021 17:21:52 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Jun 2021 14:39:03 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Jun 2021 09:08:03 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Mar 2022 09:21:19 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Norelli",
"Antonio",
""
],
[
"Panconesi",
"Alessandro",
""
]
] |
new_dataset
| 0.998949 |
2109.05120
|
Kasun Weerakoon Kulathun Mudiyanselage
|
Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Utsav Patel, and Dinesh
Manocha
|
TERP: Reliable Planning in Uneven Outdoor Environments using Deep
Reinforcement Learning
|
8 pages, 5 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel method for reliable robot navigation in uneven outdoor
terrains. Our approach employs a novel fully-trained Deep Reinforcement
Learning (DRL) network that uses elevation maps of the environment, robot pose,
and goal as inputs to compute an attention mask of the environment. The
attention mask is used to identify reduced stability regions in the elevation
map and is computed using channel and spatial attention modules and a novel
reward function. We continuously compute and update a navigation cost-map that
encodes the elevation information or the level-of-flatness of the terrain using
the attention mask. We then generate locally least-cost waypoints on the
cost-map and compute the final dynamically feasible trajectory using another
DRL-based method. Our approach guarantees safe, locally least-cost paths and
dynamically feasible robot velocities in uneven terrains. We observe an
increase of 35.18% in terms of success rate and a decrease of 26.14% in the
cumulative elevation gradient of the robot's trajectory compared to prior
navigation methods in high-elevation regions. We evaluate our method on a Husky
robot in real-world uneven terrains (~ 4m of elevation gain) and demonstrate
its benefits.
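  As a rough illustration of how an attention mask and elevation data can be
combined into a navigation cost-map, the sketch below weights the local slope
by the attention values; the paper's learned formulation is more
sophisticated, and the weighting here is an assumption.

```python
import numpy as np

def navigation_costmap(elevation, attention):
    # Slope magnitude as a level-of-flatness proxy, emphasized where the
    # attention mask flags reduced-stability regions; normalized to [0, 1].
    gy, gx = np.gradient(elevation.astype(float))
    cost = np.hypot(gx, gy) * attention
    return cost / (cost.max() + 1e-9)

elevation = np.random.rand(64, 64)   # toy elevation map
attention = np.ones((64, 64))        # stand-in for the DRL attention mask
print(navigation_costmap(elevation, attention).shape)
```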
|
[
{
"version": "v1",
"created": "Fri, 10 Sep 2021 22:06:14 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Sep 2021 20:40:34 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Mar 2022 05:49:01 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Weerakoon",
"Kasun",
""
],
[
"Sathyamoorthy",
"Adarsh Jagan",
""
],
[
"Patel",
"Utsav",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.981214 |
2110.00891
|
Ayush Agrawal
|
Ayush Agrawal, Shuxiao Chen, Akshara Rai, Koushil Sreenath
|
Vision-aided Dynamic Quadrupedal Locomotion on Discrete Terrain using
Motion Libraries
|
Accepted to ICRA 2022
| null | null | null |
cs.RO cs.SY eess.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a framework rooted in control and planning that
enables quadrupedal robots to traverse challenging terrains with discrete
footholds using visual feedback. Navigating discrete terrain is challenging for
quadrupeds because the motion of the robot can be aperiodic, highly dynamic,
and blind for the hind legs of the robot. Additionally, the robot needs to
reason over both the feasible footholds as well as robot velocity by speeding
up and slowing down at different parts of the terrain. We build an offline
library of periodic gaits which span two trotting steps on the robot, and
switch between different motion primitives to achieve aperiodic motions of
different step lengths on an A1 robot. The motion library is used to provide
targets to a geometric model predictive controller which controls stance. To
incorporate visual feedback, we use terrain mapping tools to build a local
height map of the terrain around the robot using RGB and depth cameras, and
extract feasible foothold locations around both the front and hind legs of the
robot. Our experiments show a Unitree A1 robot navigating multiple unknown,
challenging and discrete terrains in the real world.
|
[
{
"version": "v1",
"created": "Sat, 2 Oct 2021 23:19:36 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 09:52:02 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Agrawal",
"Ayush",
""
],
[
"Chen",
"Shuxiao",
""
],
[
"Rai",
"Akshara",
""
],
[
"Sreenath",
"Koushil",
""
]
] |
new_dataset
| 0.970269 |
2110.01399
|
Viet Quoc Pham
|
Pham Q. Viet and Daniel Romero
|
Aerial Base Station Placement: A Tutorial Introduction
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The deployment of Aerial Base Stations (ABSs) mounted on board Unmanned
Aerial Vehicles (UAVs) is emerging as a promising technology to provide
connectivity in areas where terrestrial infrastructure is insufficient or
absent. This may occur for example in remote areas, large events, emergency
situations, or areas affected by a natural disaster such as a wildfire or a
tsunami. To successfully materialize this goal, it is required that ABSs are
placed at locations in 3D space that ensure a high quality of service (QoS) to
the ground terminals. This paper provides a tutorial introduction to this ABS
placement problem where the fundamental challenges and trade-offs are first
investigated by means of a toy application example. Next, the different
approaches in the literature to address the aforementioned challenges in both
2D and 3D space will be introduced, and a discussion on adaptive placement will
be provided. The paper is concluded by discussing future research directions.
|
[
{
"version": "v1",
"created": "Thu, 30 Sep 2021 11:29:33 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 05:35:46 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Viet",
"Pham Q.",
""
],
[
"Romero",
"Daniel",
""
]
] |
new_dataset
| 0.998924 |
2110.02178
|
Sachin Mehta
|
Sachin Mehta and Mohammad Rastegari
|
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision
Transformer
|
Accepted at ICLR'22
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Light-weight convolutional neural networks (CNNs) are the de-facto for mobile
vision tasks. Their spatial inductive biases allow them to learn
representations with fewer parameters across different vision tasks. However,
these networks are spatially local. To learn global representations,
self-attention-based vision transformers (ViTs) have been adopted. Unlike
CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is
it possible to combine the strengths of CNNs and ViTs to build a light-weight
and low latency network for mobile vision tasks? Towards this end, we introduce
MobileViT, a light-weight and general-purpose vision transformer for mobile
devices. MobileViT presents a different perspective for the global processing
of information with transformers, i.e., transformers as convolutions. Our
results show that MobileViT significantly outperforms CNN- and ViT-based
networks across different tasks and datasets. On the ImageNet-1k dataset,
MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters,
which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT
(ViT-based) for a similar number of parameters. On the MS-COCO object detection
task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of
parameters.
Our source code is open-source and available at:
https://github.com/apple/ml-cvnets
|
[
{
"version": "v1",
"created": "Tue, 5 Oct 2021 17:07:53 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 17:17:31 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Mehta",
"Sachin",
""
],
[
"Rastegari",
"Mohammad",
""
]
] |
new_dataset
| 0.992861 |
2112.06558
|
Wenqiao Zhang
|
Wenqiao Zhang, Haochen Shi, Jiannan Guo, Shengyu Zhang, Qingpeng Cai,
Juncheng Li, Sihui Luo, Yueting Zhuang
|
MAGIC: Multimodal relAtional Graph adversarIal inferenCe for Diverse and
Unpaired Text-based Image Captioning
| null |
AAAI 2022
| null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-based image captioning (TextCap) requires simultaneous comprehension of
visual content and reading the text of images to generate a natural language
description. Although the task can teach machines to further understand the
complex human environment, given that text is omnipresent in our daily
surroundings, it poses additional challenges beyond normal captioning. A
text-based image intuitively contains abundant and complex multimodal
relational content; that is, image details can be described diversely from
multiple views rather than by a single caption. While we could introduce
additional paired training data to capture the diversity of image
descriptions, this process is labor-intensive and time-consuming for TextCap
pair annotations with extra texts. Based on the
insight mentioned above, we investigate how to generate diverse captions that
focus on different image parts using an unpaired training paradigm. We propose
the Multimodal relAtional Graph adversarIal inferenCe (MAGIC) framework for
diverse and unpaired TextCap. This framework can adaptively construct multiple
multimodal relational graphs of images and model complex relationships among
graphs to represent descriptive diversity. Moreover, a cascaded generative
adversarial network is developed from modeled graphs to infer the unpaired
caption generation in image-sentence feature alignment and linguistic coherence
levels. We validate the effectiveness of MAGIC in generating diverse captions
from different relational information items of an image. Experimental results
show that MAGIC can generate very promising outcomes without using any
image-caption training pairs.
|
[
{
"version": "v1",
"created": "Mon, 13 Dec 2021 11:00:49 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 11:36:10 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Zhang",
"Wenqiao",
""
],
[
"Shi",
"Haochen",
""
],
[
"Guo",
"Jiannan",
""
],
[
"Zhang",
"Shengyu",
""
],
[
"Cai",
"Qingpeng",
""
],
[
"Li",
"Juncheng",
""
],
[
"Luo",
"Sihui",
""
],
[
"Zhuang",
"Yueting",
""
]
] |
new_dataset
| 0.95865 |
2112.07322
|
Maxime Bombar
|
Maxime Bombar and Alain Couvreur
|
Right-hand side decoding of Gabidulin code and applications
|
10 pages, Accepted at the conference WCC 2022
| null | null | null |
cs.IT cs.CR math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We discuss the decoding of Gabidulin and interleaved Gabidulin codes. We give
the full presentation of a decoding algorithm for Gabidulin codes which, like
Loidreau's seminal algorithm, consists in localizing errors in the spirit of
the Berlekamp-Welch algorithm for Reed-Solomon codes. On the other hand, this
algorithm consists in acting on codewords on the right while Loidreau's
algorithm considers an action on the left. This right-hand side decoder was
already introduced by the authors in a previous work for cryptanalytic
applications. We give here a generalised version which applies to the case of
non-full length Gabidulin codes. Finally, we show that this algorithm turns out
to provide a very clear and natural approach for the decoding of interleaved
Gabidulin codes.
|
[
{
"version": "v1",
"created": "Tue, 14 Dec 2021 12:14:45 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 13:30:35 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Bombar",
"Maxime",
""
],
[
"Couvreur",
"Alain",
""
]
] |
new_dataset
| 0.99168 |
2202.00181
|
Wei-Cheng Tseng
|
Wei-Cheng Tseng, Hung-Ju Liao, Lin Yen-Chen, Min Sun
|
CLA-NeRF: Category-Level Articulated Neural Radiance Field
|
accepted by ICRA 2022
| null | null | null |
cs.CV cs.CG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose CLA-NeRF -- a Category-Level Articulated Neural Radiance Field
that can perform view synthesis, part segmentation, and articulated pose
estimation. CLA-NeRF is trained at the object category level using no CAD
models and no depth, but a set of RGB images with ground truth camera poses and
part segments. During inference, it only takes a few RGB views (i.e., few-shot)
of an unseen 3D object instance within the known category to infer the object
part segmentation and the neural radiance field. Given an articulated pose as
input, CLA-NeRF can perform articulation-aware volume rendering to generate the
corresponding RGB image at any camera pose. Moreover, the articulated pose of
an object can be estimated via inverse rendering. In our experiments, we
evaluate the framework across five categories on both synthetic and real-world
data. In all cases, our method shows realistic deformation results and accurate
articulated pose estimation. We believe that both few-shot articulated object
rendering and articulated pose estimation open doors for robots to perceive and
interact with unseen articulated objects.
|
[
{
"version": "v1",
"created": "Tue, 1 Feb 2022 02:04:24 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Feb 2022 09:44:04 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Mar 2022 00:34:34 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Tseng",
"Wei-Cheng",
""
],
[
"Liao",
"Hung-Ju",
""
],
[
"Yen-Chen",
"Lin",
""
],
[
"Sun",
"Min",
""
]
] |
new_dataset
| 0.999697 |
2202.12391
|
Paulo Rezeck
|
Paulo Rezeck, Hector Azpurua, Mauricio FS Correa, Luiz Chaimowicz
|
HeRo 2.0: A Low-Cost Robot for Swarm Robotics Research
|
Submitted to Autonomous Robots - S.I. 208: Robot Swarms in the Real
World: from Design to Deployment
| null | null | null |
cs.RO cs.AI cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
The current state of electronic component miniaturization coupled with the
increasing efficiency in hardware and software allows the development of
smaller and more compact robotic systems. The convenience of using these small,
simple, yet capable robots has drawn the research community's attention towards
practical applications of swarm robotics. This paper presents the design of a
novel platform for swarm robotics applications that is low cost, easy to
assemble using off-the-shelf components, and deeply integrated with the most
used robotic framework available today: ROS (Robot Operating System). The
robotic platform is entirely open, composed of a 3D printed body and
open-source software. We describe its architecture, present its main features,
and evaluate its functionalities executing experiments using a couple of
robots. Results demonstrate that the proposed mobile robot is very effective
given its small size and reduced cost, being suitable for swarm robotics
research and education.
|
[
{
"version": "v1",
"created": "Thu, 24 Feb 2022 22:23:14 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 12:32:04 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Rezeck",
"Paulo",
""
],
[
"Azpurua",
"Hector",
""
],
[
"Correa",
"Mauricio FS",
""
],
[
"Chaimowicz",
"Luiz",
""
]
] |
new_dataset
| 0.973759 |
2203.01595
|
Alexander Badri-Spr\"owitz
|
Marco Bolignari and An Mo and Marco Fontana and Alexander
Badri-Spr\"owitz
|
Diaphragm Ankle Actuation for Efficient Series Elastic Legged Robot
Hopping
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots need lightweight legs for agile locomotion, and intrinsic series
elastic compliance has proven to be a major ingredient for energy-efficient
locomotion and robust locomotion control. Animals' anatomy and locomotion
capabilities emphasize the importance of lightweight legs and of integrated,
compact, series elastic actuation for distal leg joints. But unlike robots,
animals achieve series elastic actuation through their muscle-tendon units. So
far no designs are available that feature all characteristics of a perfect
distal legged locomotion actuator: a low-weight and low-inertia design with
high mechanical efficiency, no stiction or sliding friction, low mechanical
complexity, and high power output, while being easy to mount. Ideally, such an
actuator can be controlled directly and without mechanical cross-coupling, for
example remotely. With this goal in mind, we propose a low-friction,
lightweight Series ELastic Diaphragm distal Actuator (SELDA) which meets many,
although not all, of the above requirements. We develop, implement, and
characterize a bioinspired robot leg that features a SELDA-actuated foot
segment. We compare two leg configurations controlled by a central pattern
generator that both feature agile forward hopping. By tuning SELDA's activation
timing, we effectively adjust the robot's hopping height by 11% and its forward
velocity by 14%, even with comparatively low power injection to the distal
joint.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 09:48:24 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2022 06:08:17 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Bolignari",
"Marco",
""
],
[
"Mo",
"An",
""
],
[
"Fontana",
"Marco",
""
],
[
"Badri-Spröwitz",
"Alexander",
""
]
] |
new_dataset
| 0.998296 |
2203.02078
|
Chaowen Deng
|
Chaowen Deng, Lu Yang, Hao Wu, Dmitry Zaporozhets, Miaomiao Dong and
Bo Bai
|
CGN: A Capacity-Guaranteed Network Architecture for Future Ultra-Dense
Wireless Systems
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The sixth generation (6G) era is envisioned to be a fully intelligent and
autonomous era, with physical and digital lifestyles merged together. Future
wireless network architectures should provide a solid support for such new
lifestyles. A key problem thus arises that what kind of network architectures
are suitable for 6G. In this paper, we propose a capacity-guaranteed network
(CGN) architecture, which provides high capacity for wireless devices densely
distributed everywhere, and ensures a superior scalability with low signaling
overhead and computation complexity simultaneously. Our theorem proves that the
essence of a CGN architecture is to decompose the whole network into
non-overlapping clusters with equal cluster sum capacity. Simulation results
reveal that in terms of the minimum cluster sum capacity, the proposed CGN can
achieve at least 30% performance gain compared with existing base station
clustering (BS-clustering) architectures. In addition, our theorem is
sufficiently general and can be applied for networks with different
distributions of BSs and users.
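  The decomposition principle -- non-overlapping clusters with (near-)equal
cluster sum capacity -- can be illustrated with a simple greedy heuristic
(longest-processing-time assignment); the paper's actual construction is not
reproduced here, and this sketch only demonstrates the balancing objective.

```python
def equal_capacity_clusters(cell_capacities, n_clusters):
    # Greedy LPT-style balancing: place the highest-capacity cell into the
    # cluster whose sum capacity is currently smallest.
    clusters = [[] for _ in range(n_clusters)]
    sums = [0.0] * n_clusters
    for cell, cap in sorted(cell_capacities.items(), key=lambda kv: -kv[1]):
        i = sums.index(min(sums))
        clusters[i].append(cell)
        sums[i] += cap
    return clusters, sums

caps = {f"bs{i}": c for i, c in enumerate([9.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0])}
print(equal_capacity_clusters(caps, 3))
```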
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 00:49:44 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Deng",
"Chaowen",
""
],
[
"Yang",
"Lu",
""
],
[
"Wu",
"Hao",
""
],
[
"Zaporozhets",
"Dmitry",
""
],
[
"Dong",
"Miaomiao",
""
],
[
"Bai",
"Bo",
""
]
] |
new_dataset
| 0.999525 |
2203.02112
|
Yi-Nan Chen
|
Yi-Nan Chen and Hang Dai and Yong Ding
|
Pseudo-Stereo for Monocular 3D Object Detection in Autonomous Driving
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Pseudo-LiDAR 3D detectors have made remarkable progress in monocular 3D
detection by enhancing the capability of perceiving depth with depth estimation
networks, and using LiDAR-based 3D detection architectures. The advanced stereo
3D detectors can also accurately localize 3D objects. The gap in image-to-image
generation for stereo views is much smaller than that in image-to-LiDAR
generation. Motivated by this, we propose a Pseudo-Stereo 3D detection
framework with three novel virtual view generation methods, including
image-level generation, feature-level generation, and feature-clone, for
detecting 3D objects from a single image. Our analysis of depth-aware learning
shows that the depth loss is effective in only feature-level virtual view
generation and the estimated depth map is effective in both image-level and
feature-level in our framework. We propose a disparity-wise dynamic convolution
with dynamic kernels sampled from the disparity feature map to filter the
features adaptively from a single image for generating virtual image features,
which eases the feature degradation caused by the depth estimation errors. As
of submission (November 18, 2021), our Pseudo-Stereo 3D detection framework ranks
1st on car, pedestrian, and cyclist among the monocular 3D detectors with
publications on the KITTI-3D benchmark. The code is released at
https://github.com/revisitq/Pseudo-Stereo-3D.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 03:00:34 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Chen",
"Yi-Nan",
""
],
[
"Dai",
"Hang",
""
],
[
"Ding",
"Yong",
""
]
] |
new_dataset
| 0.987149 |
2203.02116
|
Michal Ptaszynski Prof.
|
Michal Ptaszynski, Pawel Dybala, Tatsuaki Matsuba, Fumito Masui, Rafal
Rzepka, Kenji Araki, Yoshio Momouchi
|
In the Service of Online Order: Tackling Cyber-Bullying with Machine
Learning and Affect Analysis
|
12 pages, 11 tables, 6 figures
|
International Journal of Computational Linguistics Research, Vol.
1, Issue 3, pp. 135-154, 2010
| null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
One of the burning problems lately in Japan has been cyber-bullying, or
slandering and bullying people online. The problem has been especially noticed
on unofficial Web sites of Japanese schools. Volunteers consisting of school
personnel and PTA (Parent-Teacher Association) members have started Online
Patrol to spot malicious contents within Web forums and blogs. In practice,
Online Patrol involves reading through the entire Web content, a task that is
difficult to perform manually. With this paper we introduce research intended
to help PTA members perform Online Patrol more efficiently. We aim to develop a
set of tools that can automatically detect malicious entries and report them to
PTA members. First, we collected cyber-bullying data from unofficial school Web
sites. Then we performed analysis of this data in two ways. Firstly, we
analysed the entries with a multifaceted affect analysis system in order to
find distinctive features for cyber-bullying and apply them to a machine
learning classifier. Secondly, we applied a SVM based machine learning method
to train a classifier for the detection of cyber-bullying. The system was able
to classify cyber-bullying entries with a balanced F-score of 88.2%.
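  A minimal sketch of the SVM-based detection step, using scikit-learn with
TF-IDF features as a stand-in; the paper's actual features come from its
affect analysis, and the toy data below is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in corpus; the study's data came from unofficial school sites.
texts = ["you are great", "everyone hates you", "nice game today", "go away loser"]
labels = [0, 1, 0, 1]  # 1 = cyber-bullying entry

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["we all hate you"]))
```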
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 03:13:45 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Ptaszynski",
"Michal",
""
],
[
"Dybala",
"Pawel",
""
],
[
"Matsuba",
"Tatsuaki",
""
],
[
"Masui",
"Fumito",
""
],
[
"Rzepka",
"Rafal",
""
],
[
"Araki",
"Kenji",
""
],
[
"Momouchi",
"Yoshio",
""
]
] |
new_dataset
| 0.983262 |
2203.02133
|
Yixuan Xu
|
Hamidreza Fazlali, Yixuan Xu, Yuan Ren, Bingbing Liu
|
A Versatile Multi-View Framework for LiDAR-based 3D Object Detection
with Guidance from Panoptic Segmentation
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
3D object detection using LiDAR data is an indispensable component for
autonomous driving systems. Yet, only a few LiDAR-based 3D object detection
methods leverage segmentation information to further guide the detection
process. In this paper, we propose a novel multi-task framework that jointly
performs 3D object detection and panoptic segmentation. In our method, the 3D
object detection backbone in Bird's-Eye-View (BEV) plane is augmented by the
injection of Range-View (RV) feature maps from the 3D panoptic segmentation
backbone. This enables the detection backbone to leverage multi-view
information to address the shortcomings of each projection view. Furthermore,
foreground semantic information is incorporated to ease the detection task by
highlighting the locations of each object class in the feature maps. Finally, a
new center density heatmap generated based on the instance-level information
further guides the detection backbone by suggesting possible box center
locations for objects. Our method works with any BEV-based 3D object detection
method, and as shown by extensive experiments on the nuScenes dataset, it
provides significant performance gains. Notably, the proposed method based on a
single-stage CenterPoint 3D object detection network achieved state-of-the-art
performance on nuScenes 3D Detection Benchmark with 67.3 NDS.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 04:57:05 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Fazlali",
"Hamidreza",
""
],
[
"Xu",
"Yixuan",
""
],
[
"Ren",
"Yuan",
""
],
[
"Liu",
"Bingbing",
""
]
] |
new_dataset
| 0.998712 |
2203.02142
|
Pravein Govindan Kannan
|
Pravein Govindan Kannan (1), Brent Salisbury (2), Palanivel Kodeswaran
(1), Sayandeep Sen (1) ((1) IBM Research - India, (2) Red Hat)
|
Benchmarking tunnel and encryption methodologies in cloud environments
| null | null | null | null |
cs.NI cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The recent past has seen the adoption of multi-cloud deployments by
enterprises due to availability, features, and regulatory requirements. A
typical deployment involves parts of an application/workloads running inside a
private cloud with the other parts spread across multiple on-prem/public
clouds. Typical cluster-to-cluster networking in such deployments involves the
establishment of site-to-site encrypted tunnels to connect the workloads.
In this report, we benchmark the performance of various tunneling and
encryption technologies to provide directions on their use in multi-cloud
deployments. Based on the various experiments conducted on three different
testbeds, we present quantifiable data which can be leveraged by operators and
cloud providers tasked with design and development decisions of multi-cloud
network connectivity and orchestration.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 06:13:20 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Kannan",
"Pravein Govindan",
"",
"IBM Research - India"
],
[
"Salisbury",
"Brent",
"",
"Red Hat"
],
[
"Kodeswaran",
"Palanivel",
"",
"IBM Research - India"
],
[
"Sen",
"Sayandeep",
"",
"IBM Research - India"
]
] |
new_dataset
| 0.962917 |
2203.02156
|
Haonan Dong
|
Haonan Dong, Jian Yao
|
PatchMVSNet: Patch-wise Unsupervised Multi-View Stereo for
Weakly-Textured Surface Reconstruction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Learning-based multi-view stereo (MVS) has gained fine reconstructions on
popular datasets. However, supervised learning methods require ground truth for
training, which is hard to collect, especially for large-scale datasets. Though
unsupervised learning methods have now been proposed and have achieved
gratifying results, those methods still fail to reconstruct intact
results in challenging scenes, such as weakly-textured surfaces, as those
methods primarily depend on pixel-wise photometric consistency which is
subjected to various illuminations. To alleviate matching ambiguity in those
challenging scenes, this paper proposes robust loss functions leveraging
constraints beneath multi-view images: 1) Patch-wise photometric consistency
loss, which expands the receptive field of the features in multi-view
similarity measuring, 2) robust two-view geometric consistency, which includes
a cross-view depth consistency check with minimum occlusion. Our
unsupervised strategy can be implemented with arbitrary depth estimation
frameworks and can be trained with arbitrary large-scale MVS datasets.
Experiments show that our method can decrease the matching ambiguity and
particularly improve the completeness of weakly-textured reconstruction.
Moreover, our method reaches the performance of the state-of-the-art methods on
popular benchmarks, such as DTU, Tanks and Temples, and ETH3D. The code will be
released soon.
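  To make the patch-wise photometric consistency idea concrete, here is a
simplified PyTorch sketch that compares local patch means between a reference
image and a warped source image; the paper's exact loss (and any robust
weighting) may differ.

```python
import torch
import torch.nn.functional as F

def patchwise_photometric_loss(ref, warped, patch=5):
    # Average each image over local patches before comparing, which widens
    # the receptive field of the similarity measure beyond single pixels.
    c = ref.shape[1]
    kernel = torch.ones(c, 1, patch, patch, device=ref.device) / patch**2
    ref_mean = F.conv2d(ref, kernel, padding=patch // 2, groups=c)
    wrp_mean = F.conv2d(warped, kernel, padding=patch // 2, groups=c)
    return (ref_mean - wrp_mean).abs().mean()

ref, warped = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(patchwise_photometric_loss(ref, warped).item())
```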
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 07:05:23 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Dong",
"Haonan",
""
],
[
"Yao",
"Jian",
""
]
] |
new_dataset
| 0.999326 |
2203.02244
|
Tanuj Singh Shekhawat
|
Tanuj Singh Shekhawat, Manoj Kumar, Udaybhan Rathore, Aditya Joshi,
Jasabanta Patro
|
IISERB Brains at SemEval 2022 Task 6: A Deep-learning Framework to
Identify Intended Sarcasm in English
|
7 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes the system architectures and the models submitted by our
team "IISERB Brains" to the SemEval 2022 Task 6 competition. We contested all
three sub-tasks floated for the English dataset. On the leader-board, we got
the 19th rank out of 43 teams for sub-task A, the 8th rank out of 22 teams for
sub-task B, and the 13th rank out of 16 teams for sub-task C. Apart from the
submitted results and models, we also report the other models and results that
we obtained through our experiments after the organizers published the gold
labels of their evaluation data.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 11:23:54 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Shekhawat",
"Tanuj Singh",
""
],
[
"Kumar",
"Manoj",
""
],
[
"Rathore",
"Udaybhan",
""
],
[
"Joshi",
"Aditya",
""
],
[
"Patro",
"Jasabanta",
""
]
] |
new_dataset
| 0.999605 |
2203.02355
|
Rui Fan
|
Rui Fan, Sicen Guo, Li Wang, Mohammud Junaid Bocus
|
Computer-Aided Road Inspection: Systems and Algorithms
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Road damage is an inconvenience and a safety hazard, severely affecting
vehicle condition, driving comfort, and traffic safety. The traditional manual
visual road inspection process is pricey, dangerous, exhausting, and
cumbersome. Also, manual road inspection results are qualitative and
subjective, as they depend entirely on the inspector's personal experience.
Therefore, there is an ever-increasing need for automated road inspection
systems. This chapter first compares the five most common road damage types.
Then, 2-D/3-D road imaging systems are discussed. Finally, state-of-the-art
machine vision and intelligence-based road damage detection algorithms are
introduced.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 14:43:07 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Fan",
"Rui",
""
],
[
"Guo",
"Sicen",
""
],
[
"Wang",
"Li",
""
],
[
"Bocus",
"Mohammud Junaid",
""
]
] |
new_dataset
| 0.99901 |
2203.02358
|
BIn Chen
|
Bin Chen, Ran Wang, Di Ming and Xin Feng
|
ViT-P: Rethinking Data-efficient Vision Transformers from Locality
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in Transformers have brought new trust to computer vision
tasks. However, on small datasets, Transformers are hard to train and perform
worse than convolutional neural networks. We make vision transformers as
data-efficient as convolutional neural networks by introducing multi-focal
attention bias. Inspired by the attention distance in a well-trained ViT, we
constrain the self-attention of ViT to have multi-scale localized receptive
field. The size of the receptive field is adaptable during training so that the
optimal configuration can be learned. We provide empirical evidence that
properly constraining the receptive field can reduce the amount of training
data for vision
transformers. On Cifar100, our ViT-P Base model achieves the state-of-the-art
accuracy (83.16%) trained from scratch. We also perform analysis on ImageNet to
show our method does not lose accuracy on large data sets.
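  A toy sketch of a localized attention constraint on a patch grid; ViT-P uses
multi-scale, learnable receptive fields, whereas this sketch fixes a single
radius, so it only illustrates the masking mechanism.

```python
import torch

def local_attention_mask(grid_size, radius):
    # Boolean (N, N) mask over N = grid_size**2 patch tokens: token i may
    # attend to token j only if their Chebyshev grid distance is <= radius.
    ys, xs = torch.meshgrid(torch.arange(grid_size), torch.arange(grid_size),
                            indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
    return torch.cdist(coords, coords, p=float("inf")) <= radius

mask = local_attention_mask(grid_size=7, radius=2)   # 49 tokens
print(mask.shape, int(mask.sum()))
```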
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 14:49:48 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Chen",
"Bin",
""
],
[
"Wang",
"Ran",
""
],
[
"Ming",
"Di",
""
],
[
"Feng",
"Xin",
""
]
] |
new_dataset
| 0.995249 |
2203.02385
|
Dou Hu
|
Dou Hu, Xiaolong Hou, Lingwei Wei, Lianxin Jiang, Yang Mo
|
MM-DFN: Multimodal Dynamic Fusion Network for Emotion Recognition in
Conversations
|
Accepted by ICASSP 2022
| null | null | null |
cs.CL cs.AI cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emotion Recognition in Conversations (ERC) has considerable prospects for
developing empathetic machines. For multimodal ERC, it is vital to understand
context and fuse modality information in conversations. Recent graph-based
fusion methods generally aggregate multimodal information by exploring unimodal
and cross-modal interactions in a graph. However, they accumulate redundant
information at each layer, limiting the context understanding between
modalities. In this paper, we propose a novel Multimodal Dynamic Fusion Network
(MM-DFN) to recognize emotions by fully understanding multimodal conversational
context. Specifically, we design a new graph-based dynamic fusion module to
fuse multimodal contextual features in a conversation. The module reduces
redundancy and enhances complementarity between modalities by capturing the
dynamics of contextual information in different semantic spaces. Extensive
experiments on two public benchmark datasets demonstrate the effectiveness and
superiority of MM-DFN.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 15:42:53 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Hu",
"Dou",
""
],
[
"Hou",
"Xiaolong",
""
],
[
"Wei",
"Lingwei",
""
],
[
"Jiang",
"Lianxin",
""
],
[
"Mo",
"Yang",
""
]
] |
new_dataset
| 0.96819 |
2203.02392
|
Varvara Logacheva
|
Nikolay Babakov, Varvara Logacheva, Alexander Panchenko
|
Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable
Topics for the Russian Language
|
arXiv admin note: text overlap with arXiv:2103.05345
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Toxicity on the Internet, such as hate speech, offenses towards particular
users or groups of people, or the use of obscene words, is an acknowledged
problem. However, there also exist other types of inappropriate messages which
are usually not viewed as toxic, e.g. as they do not contain explicit offences.
Such messages can contain covert toxicity or generalizations, incite harmful
actions (crime, suicide, drug use), or provoke "heated" discussions. Such
messages are often related to particular sensitive topics, e.g. politics,
sexual minorities, or social injustice, which, more often than other topics
such as cars or computing, yield toxic emotional reactions. At the same time,
clearly not all messages within such flammable topics are inappropriate.
  Towards this end, in this work, we present two text collections labelled
according to a binary notion of inappropriateness and a multinomial notion of
sensitive topic. Assuming that the notion of inappropriateness is common among
people of the same culture, we base our approach on the human intuitive
understanding of what is not acceptable and harmful. To objectivise the notion
of inappropriateness, we define it in a data-driven way through crowdsourcing.
Namely, we run a large-scale annotation study asking workers if a given chatbot
textual statement could harm the reputation of the company that created it.
Acceptably high values of inter-annotator agreement suggest that the notion of
inappropriateness exists and can be uniformly understood by different people.
To define the notion of sensitive topics in an objective way, we rely on
guidelines commonly suggested as potentially harmful by specialists of the
legal and PR departments of a large public company.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 15:59:06 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Babakov",
"Nikolay",
""
],
[
"Logacheva",
"Varvara",
""
],
[
"Panchenko",
"Alexander",
""
]
] |
new_dataset
| 0.994953 |
2203.02395
|
Takuhiro Kaneko
|
Takuhiro Kaneko, Kou Tanaka, Hirokazu Kameoka, Shogo Seki
|
iSTFTNet: Fast and Lightweight Mel-Spectrogram Vocoder Incorporating
Inverse Short-Time Fourier Transform
|
Accepted to ICASSP 2022. Project page:
https://www.kecl.ntt.co.jp/people/kaneko.takuhiro/projects/istftnet/
| null | null | null |
cs.SD cs.LG eess.AS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent text-to-speech synthesis and voice conversion systems, a
mel-spectrogram is commonly applied as an intermediate representation, and the
necessity for a mel-spectrogram vocoder is increasing. A mel-spectrogram
vocoder must solve three inverse problems: recovery of the original-scale
magnitude spectrogram, phase reconstruction, and frequency-to-time conversion.
A typical convolutional mel-spectrogram vocoder solves these problems jointly
and implicitly using a convolutional neural network, including temporal
upsampling layers, when directly calculating a raw waveform. Such an approach
allows skipping redundant processes during waveform synthesis (e.g., the direct
reconstruction of high-dimensional original-scale spectrograms). On the other
hand, such an approach solves all problems in a black box and cannot
effectively employ
the time-frequency structures existing in a mel-spectrogram. We thus propose
iSTFTNet, which replaces some output-side layers of the mel-spectrogram vocoder
with the inverse short-time Fourier transform (iSTFT) after sufficiently
reducing the frequency dimension using upsampling layers, reducing the
computational cost from black-box modeling and avoiding redundant estimations
of high-dimensional spectrograms. During our experiments, we applied our ideas
to three HiFi-GAN variants and made the models faster and more lightweight with
a reasonable speech quality. Audio samples are available at
https://www.kecl.ntt.co.jp/people/kaneko.takuhiro/projects/istftnet/.
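  A minimal sketch of the fixed output stage: a network-predicted (here
random) low-dimensional complex spectrogram is turned into a waveform by
torch.istft; the n_fft/hop values and shapes are illustrative, not those of
the paper.

```python
import torch

def istft_head(real, imag, n_fft=16, hop=4):
    # Fixed, parameter-free inverse STFT replacing the last upsampling
    # layers; input is a (n_fft//2+1, frames) one-sided spectrogram.
    spec = torch.complex(real, imag)
    window = torch.hann_window(n_fft)
    return torch.istft(spec, n_fft=n_fft, hop_length=hop, window=window)

bins, frames = 16 // 2 + 1, 20
wave = istft_head(torch.randn(bins, frames), torch.randn(bins, frames))
print(wave.shape)
```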
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 16:05:48 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Kaneko",
"Takuhiro",
""
],
[
"Tanaka",
"Kou",
""
],
[
"Kameoka",
"Hirokazu",
""
],
[
"Seki",
"Shogo",
""
]
] |
new_dataset
| 0.98876 |
2203.02445
|
Yu-Min Zhang
|
Yu-Ming Zhang, Jun-Wei Hsieh, Chun-Chieh Lee, Kuo-Chin Fan
|
SFPN: Synthetic FPN for Object Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
FPN (Feature Pyramid Network) has become a basic component of most SoTA one-
stage object detectors. Many previous studies have repeatedly proved that FPN
can capture better multi-scale feature maps to more precisely describe objects
of different sizes. However, for most backbones such as VGG, ResNet, or
DenseNet, the feature maps at each layer are downsized to a quarter of their
size due to the pooling operation or convolutions with stride 2. The gap of
down-scaling-by-2 is large and prevents the FPN from fusing the features
smoothly. This paper proposes a new SFPN (Synthetic Fusion Pyramid Network)
architecture which creates various synthetic layers between the layers of the
original FPN to enhance the accuracy of light-weight CNN backbones, which can
then extract objects' visual features more accurately. Finally, experiments
prove that the SFPN architecture outperforms both large backbones (VGG16,
ResNet50) and light-weight backbones such as MobileNetV2 in terms of AP score.
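  A simplified sketch of inserting a synthetic level between two adjacent FPN
levels: both neighbours are resized to an intermediate resolution and fused
(here by simple averaging); SFPN's actual fusion blocks are learned, so this
only illustrates where the synthetic layers sit.

```python
import torch
import torch.nn.functional as F

def synthetic_level(fine, coarse):
    # One synthetic map at the geometric-mean resolution between a finer
    # and a coarser pyramid level, fused by averaging the resized neighbours.
    h = round((fine.shape[2] * coarse.shape[2]) ** 0.5)
    w = round((fine.shape[3] * coarse.shape[3]) ** 0.5)
    up = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)
    down = F.interpolate(fine, size=(h, w), mode="bilinear", align_corners=False)
    return (up + down) / 2

c3 = torch.rand(1, 256, 64, 64)   # finer FPN level
c4 = torch.rand(1, 256, 32, 32)   # coarser FPN level
print(synthetic_level(c3, c4).shape)   # ~ (1, 256, 45, 45)
```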
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 17:19:50 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Zhang",
"Yu-Ming",
""
],
[
"Hsieh",
"Jun-Wei",
""
],
[
"Lee",
"Chun-Chieh",
""
],
[
"Fan",
"Kuo-Chin",
""
]
] |
new_dataset
| 0.996309 |
2203.02475
|
Jingkai Chen
|
Jingkai Chen, Jiaoyang Li, Yijiang Huang, Caelan Garrett, Dawei Sun,
Chuchu Fan, Andreas Hofmann, Caitlin Mueller, Sven Koenig, Brian C. Williams
|
Cooperative Task and Motion Planning for Multi-Arm Assembly Systems
|
8 pages, 6 figures, 1 table
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-robot assembly systems are becoming increasingly appealing in
manufacturing due to their ability to automatically, flexibly, and quickly
construct desired structural designs. However, effectively planning for these
systems in a manner that ensures each robot is simultaneously productive, and
not idle, is challenging due to (1) the close proximity that the robots must
operate in to manipulate the structure and (2) the inherent structural partial
orderings on when each part can be installed. In this paper, we present a task
and motion planning framework that jointly plans safe, low-makespan plans for a
team of robots to assemble complex spatial structures. Our framework takes a
hierarchical approach that, at the high level, uses Mixed-integer Linear
Programs to compute an abstract plan comprised of an allocation of robots to
tasks subject to precedence constraints and, at the low level, builds on a
state-of-the-art algorithm for Multi-Agent Path Finding to plan collision-free
robot motions that realize this abstract plan. Critical to our approach is the
inclusion of certain collision constraints and movement durations during
high-level planning, which better informs the search for abstract plans that
are likely to be both feasible and low-makespan while keeping the search
tractable. We demonstrate our planning system on several challenging assembly
domains with several (sometimes heterogeneous) robots with grippers or suction
plates for assembling structures with up to 23 objects involving Lego bricks,
bars, plates, or irregularly shaped blocks.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2022 18:12:49 GMT"
}
] | 2022-03-07T00:00:00 |
[
[
"Chen",
"Jingkai",
""
],
[
"Li",
"Jiaoyang",
""
],
[
"Huang",
"Yijiang",
""
],
[
"Garrett",
"Caelan",
""
],
[
"Sun",
"Dawei",
""
],
[
"Fan",
"Chuchu",
""
],
[
"Hofmann",
"Andreas",
""
],
[
"Mueller",
"Caitlin",
""
],
[
"Koenig",
"Sven",
""
],
[
"Williams",
"Brian C.",
""
]
] |
new_dataset
| 0.961854 |
2008.08937
|
Christopher Frantz
|
Christopher K. Frantz and Saba N. Siddiki
|
Institutional Grammar 2.0 Codebook
|
121 pages, 16 figures, 14 tables
| null |
10.1111/padm.12719 10.1007/978-3-030-86372-2
|
IG-001
|
cs.MA cs.AI cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Grammar of Institutions, or Institutional Grammar, is an established
approach to encode policy information in terms of institutional statements
based on a set of pre-defined syntactic components. This codebook provides
coding guidelines for a revised version of the Institutional Grammar, the
Institutional Grammar 2.0 (IG 2.0). IG 2.0 is a specification that aims at
facilitating the encoding of policy to meet varying analytical objectives. To
this end, it revises the grammar with respect to comprehensiveness,
flexibility, and specificity by offering multiple levels of expressiveness (IG
Core, IG Extended, IG Logico). In addition to the encoding of regulative
statements, it further introduces the encoding of constitutive institutional
statements, as well as statements that exhibit both constitutive and regulative
characteristics. Introducing those aspects, the codebook initially covers
fundamental concepts of IG 2.0, before providing an overview of pre-coding
steps relevant for document preparation. Detailed coding guidelines are
provided for both regulative and constitutive statements across all levels of
expressiveness, along with the encoding guidelines for statements of mixed form
-- hybrid and polymorphic institutional statements. The document further
provides an overview of taxonomies used in the encoding process and referred to
throughout the codebook. The codebook concludes with a summary and discussion
of relevant considerations to facilitate the coding process. An initial
Reader's Guide helps the reader tailor the content to her interest.
Note that this codebook specifically focuses on operational aspects of IG 2.0
in the context of policy coding. Links to additional resources such as the
underlying scientific literature (that offers a comprehensive treatment of the
underlying theoretical concepts) are referred to in the DOI and the concluding
section of the codebook.
|
[
{
"version": "v1",
"created": "Thu, 20 Aug 2020 12:38:55 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Dec 2020 21:15:52 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Jun 2021 13:12:00 GMT"
},
{
"version": "v4",
"created": "Tue, 1 Mar 2022 23:18:56 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Frantz",
"Christopher K.",
""
],
[
"Siddiki",
"Saba N.",
""
]
] |
new_dataset
| 0.996656 |
2109.08615
|
Aso Mahmudi
|
Morteza Naserzade, Aso Mahmudi, Hadi Veisi, Hawre Hosseini, Mohammad
MohammadAmini
|
CKMorph: A Comprehensive Morphological Analyzer for Central Kurdish
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A morphological analyzer, which is a significant component of many natural
language processing applications especially for morphologically rich languages,
divides an input word into all its composing morphemes and identifies their
morphological roles. In this paper, we introduce a comprehensive morphological
analyzer for Central Kurdish (CK), a low-resourced language with a rich
morphology. Building upon the limited existing literature, we first assembled
and systematically categorized a comprehensive collection of the morphological
and morphophonological rules of the language. Additionally, we collected and
manually labeled a generative lexicon containing nearly 10,000 verb, noun and
adjective stems, named entities, and other types of word stems. We used these
rule sets and resources to implement CKMorph Analyzer based on finite-state
transducers. In order to provide a benchmark for future research, we collected,
manually labeled, and publicly shared test sets for evaluating accuracy and
coverage of the analyzer. CKMorph was able to correctly analyze 95.9% of the
accuracy test set, containing 1,000 CK words morphologically analyzed according
to the context. Moreover, CKMorph gave at least one analysis for 95.5% of 4.22M
CK tokens of the coverage test set. The demonstration of the application and
the resources, including the CK verb database and test sets, are openly
accessible at
https://github.com/CKMorph.
|
[
{
"version": "v1",
"created": "Fri, 17 Sep 2021 15:45:27 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 20:26:44 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Naserzade",
"Morteza",
""
],
[
"Mahmudi",
"Aso",
""
],
[
"Veisi",
"Hadi",
""
],
[
"Hosseini",
"Hawre",
""
],
[
"MohammadAmini",
"Mohammad",
""
]
] |
new_dataset
| 0.999781 |
2111.03539
|
Bryan Habas
|
Bryan Habas, Bader AlAttar, Brian Davis, Jack W. Langelaan, Bo Cheng
|
Optimal Inverted Landing in a Small Aerial Robot with Varied Approach
Velocities and Landing Gear Designs
|
7 pages, 9 figures, Submitted to ICRA 2022 conference
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inverted landing is a challenging feat to perform in aerial robots,
especially without external positioning. However, it is routinely performed by
biological fliers such as bees, flies, and bats. Our previous observations of
landing behaviors in flies suggest an open-loop causal relationship between
their putative visual cues and the kinematics of the aerial maneuvers executed.
For example, the degree of rotational maneuver (the amount of body inversion
prior to touchdown) and the amount of leg-assisted body swing both depend on
the flies' initial body states while approaching the ceiling. In this work,
inspired by the inverted landing behavior of flies, we used a physics-based
simulation with experimental validation to systematically investigate how
optimized inverted landing maneuvers depend on the initial approach velocities
with varied magnitude and direction. This was done by analyzing the putative
visual cues (that can be derived from onboard measurements) during optimal
maneuvering trajectories. We identified a three-dimensional policy region, from
which a mapping to a global inverted landing policy can be developed without
the use of external positioning data. Through simulation, we also investigated
the effects of an array of landing gear designs on the optimized landing
performance and identified their advantages and disadvantages. The above
results have been partially validated using limited experimental testing and
will continue to inform and guide our future experiments, for example by
applying the calculated global policy.
|
[
{
"version": "v1",
"created": "Fri, 5 Nov 2021 15:01:12 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 13:38:50 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Habas",
"Bryan",
""
],
[
"AlAttar",
"Bader",
""
],
[
"Davis",
"Brian",
""
],
[
"Langelaan",
"Jack W.",
""
],
[
"Cheng",
"Bo",
""
]
] |
new_dataset
| 0.993446 |
2112.00348
|
Eimantas Ledinauskas
|
Eimantas Ledinauskas, Julius Ruseckas, Julius Marozas, Kasparas
Karlauskas, Justas Terentjevas, Augustas Ma\v{c}ijauskas, Alfonsas
Jur\v{s}\.enas
|
Automatic travel pattern extraction from visa page stamps using CNN
models
|
15 pages, 13 figures, 4 tables, submitted for peer review
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Manual travel pattern inference from visa page stamps is a time-consuming
activity and constitutes an important bottleneck in the efficiency of traveler
inspection at border crossings. Despite efforts to digitize and record the
border crossing information into databases, travel pattern inference from
stamps will remain a problem until every country in the world is incorporated
into such a unified system. This could take decades. We propose an automated
document analysis system that processes scanned visa pages and automatically
extracts the travel pattern from detected stamps. The system processes the page
via the following pipeline: stamp detection in the visa page; general stamp
country and entry/exit recognition; Schengen area stamp country and entry/exit
recognition; Schengen area stamp date extraction. For each stage of the
proposed pipeline we construct neural network models and train them on a
mixture of real and synthetic data. We integrated Schengen area stamp detection
and date, country, entry/exit recognition models together with a graphical user
interface into a prototype of an automatic travel pattern extraction tool. We
find that by combining simple neural network models into our proposed pipeline
a useful tool can be created which can speed up the travel pattern extraction
significantly.
|
[
{
"version": "v1",
"created": "Wed, 1 Dec 2021 08:54:29 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 09:27:17 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Ledinauskas",
"Eimantas",
""
],
[
"Ruseckas",
"Julius",
""
],
[
"Marozas",
"Julius",
""
],
[
"Karlauskas",
"Kasparas",
""
],
[
"Terentjevas",
"Justas",
""
],
[
"Mačijauskas",
"Augustas",
""
],
[
"Juršėnas",
"Alfonsas",
""
]
] |
new_dataset
| 0.979286 |
2201.11462
|
Ting Yang
|
Ting Yang, Kai Wan, Minquan Cheng and Giuseppe Caire
|
Multiple-antenna Placement Delivery Array for Cache-aided MISO Systems
|
33 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the cache-aided multiple-input single-output (MISO) broadcast
channel, which consists of a server with $L$ antennas and $K$ single-antenna
users, where the server contains $N$ files of equal length and each user is
equipped with a local cache of size $M$ files. Each user requests an arbitrary
file from library. The objective is to design a coded caching scheme based on
uncoded placement and one-shot linear delivery, to achieve the maximum sum
Degree-of-Freedom (sum-DoF) with low subpacketization. It was shown in the
literature that under the constraint of uncoded placement and one-shot linear
delivery, the optimal sum-DoF is $L+\frac{KM}{N}$. However, previously proposed
schemes for this setting either incurred an exponential subpacketization order
in $K$ or required specific conditions on the system parameters $L$, $K$, $M$
and $N$. In this paper, we propose a new combinatorial structure called
multiple-antenna placement delivery array (MAPDA). Based on MAPDA and Latin
square, the first proposed scheme achieves the optimal sum-DoF $L+\frac{KM}{N}$
with the subpacketization of $K$ when $\frac{KM}{N}+L=K$. Subsequently, for the
general case we propose a transformation approach to construct an MAPDA from
any $g$-regular PDA (a class of PDA where each integer in the array occurs $g$
times) for the original shared-link coded caching problem. When the original
PDA corresponds to the Maddah-Ali and Niesen coded caching scheme, the
resulting scheme under the combinatorial structure of MAPDA can achieve the
optimal sum-DoF $L+\frac{KM}{N}$ with reduced subpacketization with respect to
the existing schemes. The work can be extended to multiple independent
single-antenna transmitters (servers), corresponding to the cache-aided
interference channel proposed by Naderializadeh et al., and to the scenario of
transmitters equipped with multiple antennas.
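  For concreteness, a worked instance of the sum-DoF expression from the
abstract; the parameter values below are illustrative only.

```latex
% With K = 8 single-antenna users, cache ratio M/N = 1/4, and L = 2
% transmit antennas, the optimal sum-DoF under uncoded placement and
% one-shot linear delivery is
\[
  \text{sum-DoF} \;=\; L + \frac{KM}{N} \;=\; 2 + 8 \cdot \frac{1}{4} \;=\; 4 ,
\]
% i.e., four users can be served interference-free in each shot. The
% low-subpacketization scheme of the paper applies when KM/N + L = K,
% e.g., K = 4, M/N = 1/2, L = 2, giving subpacketization K = 4.
```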
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 11:58:10 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 04:27:04 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Yang",
"Ting",
""
],
[
"Wan",
"Kai",
""
],
[
"Cheng",
"Minquan",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
new_dataset
| 0.957813 |
2202.05385
|
Hyeongyu Lee
|
Hyeongyu Lee, Jaegeun Park, Changjin Koo, Jong-Chan Kim, and Yongsoon
Eun
|
Cyclops: Open Platform for Scale Truck Platooning
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyclops, introduced in this paper, is an open research platform for everyone
who wants to validate novel ideas and approaches in the area of self-driving
heavy-duty vehicle platooning. The platform consists of multiple 1/14 scale
semi-trailer trucks, a scale proving ground, and associated computing,
communication and control modules that enable self-driving on the proving
ground. A perception system for each vehicle is composed of a lidar-based
object tracking system and a lane detection/control system. The former is to
maintain the gap to the leading vehicle and the latter is to maintain the
vehicle within the lane by steering control. The lane detection system is
optimized for truck platooning where the field of view of the front-facing
camera is severely limited due to a small gap to the leading vehicle. This
platform is particularly amenable to validate mitigation strategies for
safety-critical situations. Indeed, a simplex structure is adopted in the
embedded module for testing various fail-safe operations. We illustrate a
scenario where the camera sensor in the perception system fails but the vehicle
operates at a reduced capacity and comes to a graceful stop. Details of the Cyclops
including 3D CAD designs and algorithm source codes are released for those who
want to build similar testbeds.
|
[
{
"version": "v1",
"created": "Fri, 11 Feb 2022 01:01:31 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 03:16:34 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Lee",
"Hyeongyu",
""
],
[
"Park",
"Jaegeun",
""
],
[
"Koo",
"Changjin",
""
],
[
"Kim",
"Jong-Chan",
""
],
[
"Eun",
"Yongsoon",
""
]
] |
new_dataset
| 0.999633 |
2202.12582
|
May Alhajri
|
May Alhajri, Carsten Rudolph and Ahmad Salehi Shahraki
|
A Blockchain-Based Consent Mechanism for Access to Fitness Data in the
Healthcare Context
|
This article has been accepted for publication in a future issue of
IEEE Access journal
| null |
10.1109/ACCESS.2022.3154106
| null |
cs.CR cs.CY cs.DC cs.LO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Wearable fitness devices are widely used to track an individual's health and
physical activities to improve the quality of health services. These devices
sense a considerable amount of sensitive data processed by a centralized third
party. While many researchers have thoroughly evaluated privacy issues
surrounding wearable fitness trackers, no study has addressed privacy issues in
trackers by giving control of the data to the user. Blockchain is an emerging
technology with outstanding advantages in resolving consent management privacy
concerns. As there are no fully transparent, legally compliant solutions for
sharing personal fitness data, this study introduces an architecture for a
human-centric, legally compliant, decentralized and dynamic consent system
based on blockchain and smart contracts. Algorithms and sequence diagrams of
the proposed system's activities show consent-related data flow among various
agents, which are used later to prove the system's trustworthiness by
formalizing the security requirements. The security properties of the proposed
system were evaluated using the formal security modeling framework SeMF, which
demonstrates the feasibility of the solution at an abstract level based on
formal language theory. As a result, we have empirically proven that blockchain
technology is suitable for mitigating the privacy issues of fitness providers
by recording individuals' consent using blockchain and smart contracts.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 09:51:02 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 20:48:09 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Alhajri",
"May",
""
],
[
"Rudolph",
"Carsten",
""
],
[
"Shahraki",
"Ahmad Salehi",
""
]
] |
new_dataset
| 0.982601 |
2203.01438
|
Qifan Wang
|
Qifan Wang, Shujie Cui, Lei Zhou, Ocean Wu, Yonghua Zhu and Giovanni
Russello
|
EnclaveTree: Privacy-preserving Data Stream Training and Inference Using
TEE
|
15 pages, 12 figures
| null |
10.1145/3488932.3517391
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The classification service over a stream of data is becoming an important
offering for cloud providers, but users may encounter obstacles in providing
sensitive data due to privacy concerns. While Trusted Execution Environments
(TEEs) are promising solutions for protecting private data, they remain
vulnerable to side-channel attacks induced by data-dependent access patterns.
We propose a Privacy-preserving Data Stream Training and Inference scheme,
called EnclaveTree, that provides confidentiality for user's data and the
target models against a compromised cloud service provider. We design a
matrix-based training and inference procedure to train the Hoeffding Tree (HT)
model and perform inference with the trained model inside the trusted area of
TEEs, which provably prevent the exploitation of access-pattern-based attacks.
The performance evaluation shows that EnclaveTree is practical for processing
data streams with a small or medium number of features. When there are fewer
than 63 binary features, EnclaveTree is up to ${\thicksim}10{\times}$ and
${\thicksim}9{\times}$ faster than a na\"ive oblivious solution on training and
inference, respectively.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 22:23:49 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Wang",
"Qifan",
""
],
[
"Cui",
"Shujie",
""
],
[
"Zhou",
"Lei",
""
],
[
"Wu",
"Ocean",
""
],
[
"Zhu",
"Yonghua",
""
],
[
"Russello",
"Giovanni",
""
]
] |
new_dataset
| 0.997383 |
2203.01495
|
Yong-Jin Kim
|
Yong-Jin Kim, Yong-Ho Yon, Son-Gyong Kim
|
Disperse rotation operator DRT and use in some stream ciphers
|
12 pages, 1 figures, 20 tables
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rotation operator is frequently used in several stream ciphers, including
HC-128, Rabbit, and Salsa20, the final candidates for eSTREAM. This is because
the rotation operator (ROT) is simple but has very good dispersibility. In this
paper, we propose a disperse rotation operator (DRT), which has the same
structure as ROT but has better dispersibility. In addition, using DRT instead
of ROT significantly improved the quality of the output stream of all three
stream ciphers. Moreover, using DRT instead of ROT in the HC-128 stream cipher
prevents the expansion of LSB-based differential attacks.
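For reference, the rotation operator discussed above is the standard circular shift of a fixed-width word; a 32-bit Python sketch follows. DRT's exact dispersion rule is defined in the paper and is not reproduced here.

```python
# The classical 32-bit left rotation (ROT) used in ciphers such as Salsa20,
# HC-128 and Rabbit. DRT keeps this structure but redefines how bits are
# dispersed; its precise definition is given in the paper.

MASK32 = 0xFFFFFFFF

def rotl32(x: int, n: int) -> int:
    """Circularly shift the 32-bit word x left by n positions."""
    n %= 32
    return ((x << n) | (x >> (32 - n))) & MASK32

assert rotl32(0x80000001, 1) == 0x00000003
```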
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 03:13:27 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Kim",
"Yong-Jin",
""
],
[
"Yon",
"Yong-Ho",
""
],
[
"Kim",
"Son-Gyong",
""
]
] |
new_dataset
| 0.997453 |
2203.01611
|
Mohsen Annabestani
|
Mohsen Annabestani, Majid Shabani, Samuel Videira Magalhaes, Alessio
Mondini, and Barbara Mazzolai
|
A Plant-Inspired Multifunctional, Two Way, and Fiberless Soft Gripper
with Sensorized Kinaesthesia
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents a new fiberless soft pneumatic actuator that works
multifunctionally and bidirectionally, and whose embedded sensors give it
self-proprioception. The actuator is based on the idea of employing two helical
pressure channels. Applying controlled input pressures to these channels
produces a variety of deformations and actuations. In particular, single
pressure, imbalanced pressures, and balanced pressures applied in the channels
cause bidirectional coilings, opposite bendings, and elongation, respectively,
in a single unit actuator. In addition, two U-shaped microchannels are created
and injected with a gel-based conductive material, equipping the actuator with
resistive sensors that respond to a vast dynamic range, from small oscillations
to large elongations. The actuator has many promising features as a
multifunctional soft gripper, and its embedded soft sensors give it better
controllability in real problems. Its multifunctionality has been validated in
several experimental tests, and we have shown that it has excellent potential
for gripping a variety of objects. Finally, the embedded sensors can
discriminate the actuator's main functions and can also serve as independent
stretch, pressure, or bending sensors.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 10:10:24 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Annabestani",
"Mohsen",
""
],
[
"Shabani",
"Majid",
""
],
[
"Magalhaes",
"Samuel Videira",
""
],
[
"Mondini",
"Alessio",
""
],
[
"Mazzolai",
"Barbara",
""
]
] |
new_dataset
| 0.996703 |
2203.01661
|
Sarah Meiklejohn
|
Sarah Meiklejohn, Joe DeBlasio, Devon O'Brien, Chris Thompson, Kevin
Yeo, Emily Stark
|
SoK: SCT Auditing in Certificate Transparency
|
PETS 2022, issue 3
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The Web public key infrastructure is essential to providing secure
communication on the Internet today, and certificate authorities play a crucial
role in this ecosystem by issuing certificates. These authorities may misissue
certificates or suffer misuse attacks, however, which has given rise to the
Certificate Transparency (CT) project. The goal of CT is to store all issued
certificates in public logs, which can then be checked for the presence of
potentially misissued certificates. Thus, the requirement that a given
certificate is indeed in one (or several) of these logs lies at the core of CT.
In its current deployment, however, most individual clients do not check that
the certificates they see are in logs, as requesting a proof of inclusion
directly reveals the certificate and thus creates the clear potential for a
violation of that client's privacy. In this paper, we explore the techniques
that have been proposed for privacy-preserving auditing of certificate
inclusion, focusing on their effectiveness, efficiency, and suitability in a
near-term deployment. In doing so, we also explore the parallels with related
problems involving browser clients. Guided by a set of constraints that we
develop, we ultimately observe several key limitations in many proposals,
ranging from their privacy provisions to the fact that they focus on the
interaction between a client and a log but leave open the question of how a
client could privately report any certificates that are missing.
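As background for the inclusion checks discussed above, the sketch below verifies a Merkle audit path using RFC 6962-style hashing, for the simple case of a power-of-two leaf count; ragged tree sizes need extra index bookkeeping that is omitted here.

```python
import hashlib

# Sketch: verifying a Merkle inclusion (audit) path with RFC 6962-style
# domain separation, assuming a power-of-two leaf count for simplicity.

def _leaf(data: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + data).digest()          # 0x00 = leaf prefix

def _node(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()  # 0x01 = interior

def verify_inclusion(leaf, index, path, root):
    h = _leaf(leaf)
    for sibling in path:
        h = _node(sibling, h) if index % 2 else _node(h, sibling)
        index //= 2
    return h == root

# Tiny 4-leaf tree; prove that leaf 2 is included.
hs = [_leaf(bytes([i])) for i in range(4)]
root = _node(_node(hs[0], hs[1]), _node(hs[2], hs[3]))
assert verify_inclusion(bytes([2]), 2, [hs[3], _node(hs[0], hs[1])], root)
```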
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 11:32:31 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Meiklejohn",
"Sarah",
""
],
[
"DeBlasio",
"Joe",
""
],
[
"O'Brien",
"Devon",
""
],
[
"Thompson",
"Chris",
""
],
[
"Yeo",
"Kevin",
""
],
[
"Stark",
"Emily",
""
]
] |
new_dataset
| 0.98846 |
2203.01675
|
Yongguo Ling
|
Yongguo Ling, Zhun Zhong, Donglin Cao, Zhiming Luo, Yaojin Lin, Shaozi
Li, Nicu Sebe
|
Cross-Modality Earth Mover's Distance for Visible Thermal Person
Re-Identification
|
10 pages, 5 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Visible thermal person re-identification (VT-ReID) suffers from the
inter-modality discrepancy and intra-identity variations. Distribution
alignment is a popular solution for VT-ReID, which, however, usually remains
susceptible to the influence of intra-identity variations. In this paper, we
propose the Cross-Modality Earth Mover's Distance (CM-EMD) that can alleviate
the impact of the intra-identity variations during modality alignment. CM-EMD
selects an optimal transport strategy and assigns high weights to pairs that
have a smaller intra-identity variation. In this manner, the model will focus
on reducing the inter-modality discrepancy while paying less attention to
intra-identity variations, leading to a more effective modality alignment.
Moreover, we introduce two techniques to improve the advantage of CM-EMD.
First, the Cross-Modality Discrimination Learning (CM-DL) is designed to
overcome the discrimination degradation problem caused by modality alignment.
By reducing the ratio between intra-identity and inter-identity variances,
CM-DL leads the model to learn more discriminative representations. Second, we
construct the Multi-Granularity Structure (MGS), enabling us to align
modalities from both coarse- and fine-grained levels with the proposed CM-EMD.
Extensive experiments show the benefits of the proposed CM-EMD and its
auxiliary techniques (CM-DL and MGS). Our method achieves state-of-the-art
performance on two VT-ReID benchmarks.
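For readers unfamiliar with Earth Mover's Distance, the snippet below computes a generic entropy-regularized optimal transport plan (Sinkhorn iteration) between two toy histograms; the numbers are illustrative, and CM-EMD's cross-modality cost and pair weighting are the paper's contribution and are not reproduced.

```python
import numpy as np

# Generic entropic optimal transport (Sinkhorn) between two histograms.
# This only illustrates EMD-style matching; CM-EMD's cross-modality cost
# and pair weighting are defined in the paper.

def sinkhorn(a, b, C, eps=0.05, iters=200):
    K = np.exp(-C / eps)                  # Gibbs kernel of the cost matrix
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]    # transport plan

a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
C = (np.arange(3.0)[:, None] - np.arange(3.0)[None, :]) ** 2
P = sinkhorn(a, b, C)
print("approximate EMD:", (P * C).sum())
```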
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 12:26:59 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Ling",
"Yongguo",
""
],
[
"Zhong",
"Zhun",
""
],
[
"Cao",
"Donglin",
""
],
[
"Luo",
"Zhiming",
""
],
[
"Lin",
"Yaojin",
""
],
[
"Li",
"Shaozi",
""
],
[
"Sebe",
"Nicu",
""
]
] |
new_dataset
| 0.993773 |
2203.01701
|
Mahmood Ahmadi
|
Mazdak Fatahi, Masou Soursouri, Pooya Pourmohammad, Mahmood Ahmadi
|
Open Source Routers: A Survey
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
The variety, size, and complexity of data types, services, and applications
on the Internet are continuously growing. This increasing complexity demands
more powerful and sophisticated equipment. One group of devices that plays an
essential role here is routers. Some vendors produce elaborate and complex
products, but these commercial solutions are closed and inflexible. The term
"Open Source Routers" covers many implementations of free-software routers,
which offer a way to overcome the closed platforms of commercial solutions. In
this article, we survey the existing implementations and a wide array of past
and state-of-the-art projects on open software routers, followed by a
discussion of the major challenges in this area.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 13:13:23 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Fatahi",
"Mazdak",
""
],
[
"Soursouri",
"Masou",
""
],
[
"Pourmohammad",
"Pooya",
""
],
[
"Ahmadi",
"Mahmood",
""
]
] |
new_dataset
| 0.993245 |
2203.01730
|
Chaoda Zheng
|
Chaoda Zheng, Xu Yan, Haiming Zhang, Baoyuan Wang, Shenghui Cheng,
Shuguang Cui, Zhen Li
|
Beyond 3D Siamese Tracking: A Motion-Centric Paradigm for 3D Single
Object Tracking in Point Clouds
|
To appear in CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D single object tracking (3D SOT) in LiDAR point clouds plays a crucial role
in autonomous driving. Current approaches all follow the Siamese paradigm based
on appearance matching. However, LiDAR point clouds are usually textureless and
incomplete, which hinders effective appearance matching. Besides, previous
methods greatly overlook the critical motion clues among targets. In this work,
beyond 3D Siamese tracking, we introduce a motion-centric paradigm to handle 3D
SOT from a new perspective. Following this paradigm, we propose a matching-free
two-stage tracker M^2-Track. At the 1^st-stage, M^2-Track localizes the target
within successive frames via motion transformation. Then it refines the target
box through motion-assisted shape completion at the 2^nd-stage. Extensive
experiments confirm that M^2-Track significantly outperforms the previous
state of the art on three large-scale datasets while running at 57 FPS, with
~8%, ~17%, and ~22% precision gains on KITTI, NuScenes, and Waymo Open Dataset,
respectively. Further analysis verifies each component's effectiveness and
shows the motion-centric paradigm's promising potential when combined with
appearance matching.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 14:20:10 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Zheng",
"Chaoda",
""
],
[
"Yan",
"Xu",
""
],
[
"Zhang",
"Haiming",
""
],
[
"Wang",
"Baoyuan",
""
],
[
"Cheng",
"Shenghui",
""
],
[
"Cui",
"Shuguang",
""
],
[
"Li",
"Zhen",
""
]
] |
new_dataset
| 0.996517 |
2203.01929
|
Muhammad Zubair Irshad
|
Muhammad Zubair Irshad, Thomas Kollar, Michael Laskey, Kevin Stone,
Zsolt Kira
|
CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and
Categorical 6D Pose and Size Estimation
|
Accepted to ICRA 2022, Project page with videos:
https://zubair-irshad.github.io/projects/CenterSnap.html
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper studies the complex task of simultaneous multi-object 3D
reconstruction, 6D pose and size estimation from a single-view RGB-D
observation. In contrast to instance-level pose estimation, we focus on a more
challenging problem where CAD models are not available at inference time.
Existing approaches mainly follow a complex multi-stage pipeline which first
localizes and detects each object instance in the image and then regresses to
either their 3D meshes or 6D poses. These approaches suffer from
high-computational cost and low performance in complex multi-object scenarios,
where occlusions can be present. Hence, we present a simple one-stage approach
to jointly predict the 3D shape and estimate the 6D pose and size in a
bounding-box free manner. In particular, our method treats object instances as
spatial centers where each center denotes the complete shape of an object along
with its 6D pose and size. Through this per-pixel representation, our approach
can reconstruct in real-time (40 FPS) multiple novel object instances and
predict their 6D pose and sizes in a single-forward pass. Through extensive
experiments, we demonstrate that our approach significantly outperforms all
shape completion and categorical 6D pose and size estimation baselines on
multi-object ShapeNet and NOCS datasets respectively with a 12.6% absolute
improvement in mAP for 6D pose for novel real-world object instances.
|
[
{
"version": "v1",
"created": "Thu, 3 Mar 2022 18:59:04 GMT"
}
] | 2022-03-04T00:00:00 |
[
[
"Irshad",
"Muhammad Zubair",
""
],
[
"Kollar",
"Thomas",
""
],
[
"Laskey",
"Michael",
""
],
[
"Stone",
"Kevin",
""
],
[
"Kira",
"Zsolt",
""
]
] |
new_dataset
| 0.99937 |
1308.3181
|
Guyslain Naves
|
J\'er\'emie Chalopin, Victor Chepoi, Guyslain Naves
|
Isometric embedding of Busemann surfaces into $L_1$
| null | null |
10.1007/s00454-014-9643-0
| null |
cs.CG cs.DM math.MG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we prove that any non-positively curved 2-dimensional surface
(alias, Busemann surface) is isometrically embeddable into $L_1$. As a
corollary, we obtain that all planar graphs which are 1-skeletons of planar
non-positively curved complexes with regular Euclidean polygons as cells are
$L_1$-embeddable with distortion at most $2+\pi/2<4$. Our results significantly
improve and simplify the results of the recent paper {\it A. Sidiropoulos,
Non-positive curvature, and the planar embedding conjecture, FOCS 2013.}
|
[
{
"version": "v1",
"created": "Wed, 14 Aug 2013 17:25:53 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Chalopin",
"Jérémie",
""
],
[
"Chepoi",
"Victor",
""
],
[
"Naves",
"Guyslain",
""
]
] |
new_dataset
| 0.952073 |
2006.14788
|
Jisui Huang
|
Na Lei, Jisui Huang, Yuxue Ren, Emil Saucan, Zhenchang Wang
|
Ricci Curvature Based Volumetric Segmentation of the Auditory Ossicles
|
There is a fundamental problem with the layout of our paper, and we
should design a general segmentation framework rather than just focusing on
the ossicles
| null | null | null |
cs.CV math.DG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The auditory ossicles that are located in the middle ear are the smallest
bones in the human body. Their damage will result in hearing loss. It is
therefore important to be able to automatically diagnose ossicles' diseases
based on Computed Tomography (CT) 3D imaging. However, CT images usually
include the whole head area, which is much larger than the bones of interest;
thus the localization of the ossicles, followed by segmentation, both play a
significant role in automatic diagnosis. The commonly employed local
segmentation methods require manually selected initial points, which is a
highly time-consuming process.
ossicles which requires neither templates, nor manual labels. It relies solely
on the connective properties of the auditory ossicles themselves, and their
relationship with the surrounding tissue fluid. For the segmentation task, we
define a novel energy function and obtain the shape of the ossicles from the 3D
CT image by minimizing this new energy. Compared to the state-of-the-art
methods which usually use the gradient operator and some normalization terms,
we propose to add a Ricci curvature term to the commonly employed energy
function. We compare our proposed method with the state-of-the-art methods and
show that the performance of discrete Forman-Ricci curvature is superior to the
others.
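For orientation, on an unweighted graph the discrete Forman-Ricci curvature of an edge $e_{uv}$ reduces to the well-known combinatorial expression

$$\mathrm{Ric}_F(e_{uv}) = 4 - \deg(u) - \deg(v),$$

so edges between high-degree vertices are strongly negatively curved; the weighted variant used on image grids additionally carries vertex and edge weights, and the exact curvature-augmented energy term is defined in the paper.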
|
[
{
"version": "v1",
"created": "Fri, 26 Jun 2020 04:09:15 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Aug 2020 08:56:31 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Mar 2022 10:09:36 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Lei",
"Na",
""
],
[
"Huang",
"Jisui",
""
],
[
"Ren",
"Yuxue",
""
],
[
"Saucan",
"Emil",
""
],
[
"Wang",
"Zhenchang",
""
]
] |
new_dataset
| 0.953461 |
2011.13880
|
Emilio Cartoni
|
Emilio Cartoni (1), Davide Montella (1), Jochen Triesch (2), Gianluca
Baldassarre (1) ((1) Institute of Cognitive Sciences and Technologies, (2)
Frankfurt Institute for Advanced Studies)
|
REAL-X -- Robot open-Ended Autonomous Learning Architectures: Achieving
Truly End-to-End Sensorimotor Autonomous Learning Systems
|
14 pages, 13 figures. Improved version of the REAL baseline including
better exploration
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-ended learning is a core research field of developmental robotics and AI
aiming to build learning machines and robots that can autonomously acquire
knowledge and skills incrementally, as infants and children do. The first
contribution of this work is to study the challenges posed by the previously
proposed benchmark `REAL competition' aiming to foster the development of truly
open-ended learning robot architectures. The competition involves a simulated
camera-arm robot that: (a) in a first `intrinsic phase' acquires sensorimotor
competence by autonomously interacting with objects; (b) in a second `extrinsic
phase' is tested with tasks unknown in the intrinsic phase to measure the
quality of knowledge previously acquired. This benchmark requires the solution
of multiple challenges usually tackled in isolation, in particular exploration,
sparse-rewards, object learning, generalisation, task/goal self-generation, and
autonomous skill learning. As a second contribution, we present a set of
`REAL-X' robot architectures that are able to solve different versions of the
benchmark, where we progressively release initial simplifications. The
architectures are based on a planning approach that dynamically increases
abstraction, and intrinsic motivations to foster exploration. REAL-X achieves a
good performance level in very demanding conditions. We argue that the REAL
benchmark represents a valuable tool for studying open-ended learning in its
hardest form.
|
[
{
"version": "v1",
"created": "Fri, 27 Nov 2020 18:12:06 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 11:37:18 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Cartoni",
"Emilio",
""
],
[
"Montella",
"Davide",
""
],
[
"Triesch",
"Jochen",
""
],
[
"Baldassarre",
"Gianluca",
""
]
] |
new_dataset
| 0.990444 |
2103.11152
|
Kunyi Zhang
|
Kunyi Zhang, Tiankai Yang, Ziming Ding, Sheng Yang, Teng Ma, Mingyang
Li, Chao Xu and Fei Gao
|
The Visual-Inertial-Dynamical Multirotor Dataset
|
7 pages,11 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the community has witnessed numerous datasets built for developing
and testing state estimators. However, for some applications such as aerial
transportation or search-and-rescue, the contact force or other disturbance
must be perceived for robust planning and control, which is beyond the capacity
of these datasets. This paper introduces a Visual-Inertial-Dynamical (VID)
dataset, not only focusing on traditional six degrees of freedom (6-DOF) pose
estimation but also providing dynamical characteristics of the flight platform
for external force perception or dynamics-aided estimation. The VID dataset
contains hardware synchronized imagery and inertial measurements, with accurate
ground truth trajectories for evaluating common visual-inertial estimators.
Moreover, the proposed dataset highlights rotor speed and motor current
measurements, control inputs, and ground truth 6-axis force data to evaluate
external force estimation. To the best of our knowledge, the proposed VID
dataset is the first public dataset containing visual-inertial and complete
dynamical information in the real world for pose and external force evaluation.
The dataset: https://github.com/ZJU-FAST-Lab/VID-Dataset and related files:
https://github.com/ZJU-FAST-Lab/VID-Flight-Platform are open-sourced.
|
[
{
"version": "v1",
"created": "Sat, 20 Mar 2021 10:27:29 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Sep 2021 04:05:12 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Mar 2022 15:01:38 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Zhang",
"Kunyi",
""
],
[
"Yang",
"Tiankai",
""
],
[
"Ding",
"Ziming",
""
],
[
"Yang",
"Sheng",
""
],
[
"Ma",
"Teng",
""
],
[
"Li",
"Mingyang",
""
],
[
"Xu",
"Chao",
""
],
[
"Gao",
"Fei",
""
]
] |
new_dataset
| 0.999787 |
2105.14685
|
Peng Xu
|
Peng Xu and Xiatian Zhu
|
DeepChange: A Large Long-Term Person Re-Identification Benchmark with
Clothes Change
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing person re-identification (re-id) works mostly consider short-term
application scenarios without clothes change. In the real world, however, people
often dress differently across space and time. To address this mismatch, a few
recent attempts have been made at long-term re-id with clothes change. Currently, one
of the most significant limitations in this field is the lack of a large
realistic benchmark. In this work, we contribute a large, realistic long-term
person re-identification benchmark, named as DeepChange. It has several unique
characteristics: (1) Realistic and rich personal appearance (e.g., clothes and
hair style) and variations: Highly diverse clothes change and styles, with
varying reappearing gaps in time from minutes to seasons, different weather
conditions (e.g., sunny, cloudy, windy, rainy, snowy, extremely cold) and
events (e.g., working, leisure, daily activities). (2) Rich camera setups: Raw
videos were recorded by 17 outdoor varying resolution cameras operating in a
real-world surveillance system. (3) The currently largest number of cameras
(17), identities (1,121), and bounding boxes (178,407), over the longest
time span (12 months). Further, we investigate multimodal fusion strategies for
tackling the clothes change challenge. Extensive experiments show that our
fusion models outperform a wide variety of state-of-the-art models on
DeepChange. Our dataset and documents are available at
https://github.com/PengBoXiangShang/deepchange.
|
[
{
"version": "v1",
"created": "Mon, 31 May 2021 03:35:00 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Aug 2021 13:40:29 GMT"
},
{
"version": "v3",
"created": "Wed, 15 Sep 2021 06:47:09 GMT"
},
{
"version": "v4",
"created": "Wed, 2 Mar 2022 18:53:03 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Xu",
"Peng",
""
],
[
"Zhu",
"Xiatian",
""
]
] |
new_dataset
| 0.995496 |
2107.07486
|
Matilde Marcolli
|
Noemie Combe, Yuri I. Manin, Matilde Marcolli
|
Moufang Patterns and Geometry of Information
|
amstex, 42 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Technology of data collection and information transmission is based on
various mathematical models of encoding. The words "Geometry of information"
refer to such models, whereas the words "Moufang patterns" refer to various
sophisticated symmetries appearing naturally in such models. In this paper we
show that the symmetries of spaces of probability distributions, endowed with
their canonical Riemannian metric of information geometry, have the structure
of a commutative Moufang loop. We also show that the F-manifold structure on
the space of probability distribution can be described in terms of differential
3-webs and Malcev algebras. We then present a new construction of
(noncommutative) Moufang loops associated to almost-symplectic structures over
finite fields, and use them to construct a new class of code loops with
associated quantum error-correcting codes and networks of perfect tensors.
|
[
{
"version": "v1",
"created": "Thu, 15 Jul 2021 17:39:38 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 17:39:05 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Combe",
"Noemie",
""
],
[
"Manin",
"Yuri I.",
""
],
[
"Marcolli",
"Matilde",
""
]
] |
new_dataset
| 0.978537 |
2109.00087
|
Tuhin Chakrabarty Mr
|
Tuhin Chakrabarty, Yejin Choi, Vered Shwartz
|
It's not Rocket Science : Interpreting Figurative Language in Narratives
|
Accepted to TACL ( To be presented at ACL 2022, Dublin)
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Figurative language is ubiquitous in English. Yet, the vast majority of NLP
research focuses on literal language. Existing text representations by design
rely on compositionality, while figurative language is often non-compositional.
In this paper, we study the interpretation of two types of non-compositional
figurative language (idioms and similes). We collected datasets of fictional narratives
containing a figurative expression along with crowd-sourced plausible and
implausible continuations relying on the correct interpretation of the
expression. We then trained models to choose or generate the plausible
continuation. Our experiments show that models based solely on pre-trained
language models perform substantially worse than humans on these tasks. We
additionally propose knowledge-enhanced models, adopting human strategies for
interpreting figurative language types: inferring meaning from the context and
relying on the constituent words' literal meanings. The knowledge-enhanced
models improve the performance on both the discriminative and generative tasks,
further bridging the gap from human performance.
|
[
{
"version": "v1",
"created": "Tue, 31 Aug 2021 21:46:35 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Feb 2022 17:37:22 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Mar 2022 21:52:17 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Chakrabarty",
"Tuhin",
""
],
[
"Choi",
"Yejin",
""
],
[
"Shwartz",
"Vered",
""
]
] |
new_dataset
| 0.999159 |
2109.06768
|
Cong Wang
|
Cong Wang, Yu-Ping Wang, Dinesh Manocha
|
MotionHint: Self-Supervised Monocular Visual Odometry with Motion
Constraints
|
Accepted by ICRA 2022
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel self-supervised algorithm named MotionHint for monocular
visual odometry (VO) that takes motion constraints into account. A key aspect
of our approach is to use an appropriate motion model that can help existing
self-supervised monocular VO (SSM-VO) algorithms to overcome issues related to
the local minima within their self-supervised loss functions. The motion model
is expressed with a neural network named PPnet. It is trained to coarsely
predict the next pose of the camera and the uncertainty of this prediction. Our
self-supervised approach combines the original loss and the motion loss, which
is the weighted difference between the prediction and the generated ego-motion.
Taking two existing SSM-VO systems as our baseline, we evaluate our MotionHint
algorithm on the standard KITTI benchmark. Experimental results show that our
MotionHint algorithm can be easily applied to existing open-sourced
state-of-the-art SSM-VO systems to greatly improve the performance by reducing
the resulting ATE by up to 28.73%.
|
[
{
"version": "v1",
"created": "Tue, 14 Sep 2021 15:35:08 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Sep 2021 07:58:20 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Mar 2022 08:58:18 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Wang",
"Cong",
""
],
[
"Wang",
"Yu-Ping",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.973302 |
2110.08658
|
Zhuoyuan Song
|
Sachin Shriwastav, Gregory Snyder and Zhuoyuan Song
|
Dynamic Compressed Sensing of Unsteady Flows with a Mobile Robot
|
8 pages, 7 figures
| null | null | null |
cs.RO eess.SP math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Large-scale environmental sensing with a finite number of mobile sensors is a
challenging task that requires a lot of resources and time. This is especially
true when features in the environment are spatiotemporally changing with
unknown or partially known dynamics. Fortunately, these dynamic features often
evolve in a low-dimensional space, making it possible to capture their dynamics
sufficiently well with only one or several properly planned mobile sensors.
This paper investigates the problem of dynamic compressed sensing of an
unsteady flow field, which takes advantage of the inherently low dimensionality
of the underlying flow dynamics to reduce number of waypoints for a mobile
sensing robot. The optimal sensing waypoints are identified by an iterative
compressed sensing algorithm that optimizes the flow reconstruction based on
the proper orthogonal decomposition modes. An optimal sampling trajectory is
then found to traverse these waypoints while minimizing the energy consumption,
time, and flow reconstruction error. Simulation results in an unsteady
double-gyre flow field are presented to demonstrate the efficacy of the proposed
algorithms. Experimental results with an indoor quadcopter are presented to
show the feasibility of the resulting trajectory.
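The core reconstruction step described above, projecting sparse point measurements onto POD modes, can be sketched in a few lines of numpy; the synthetic rank-2 "flow" data and sensor locations below are hypothetical, and the paper's contribution of optimally choosing the sensing waypoints is not reproduced.

```python
import numpy as np

# Sketch: POD modes from snapshots, then least-squares field reconstruction
# from a few point sensors. Data and sensor indices are hypothetical.

x = np.linspace(0.0, 1.0, 500)                    # spatial grid
t = np.linspace(0.0, 2.0 * np.pi, 100)            # snapshot times
X = (np.outer(np.sin(2 * np.pi * x), np.sin(t))   # rank-2 toy "flow"
     + 0.5 * np.outer(np.sin(4 * np.pi * x), np.cos(3 * t)))

U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :2]                                    # dominant POD modes

sensors = [40, 170, 333, 481]                     # sparse measurement points
y = X[sensors, 0]                                 # measurements of snapshot 0
coeffs, *_ = np.linalg.lstsq(Phi[sensors, :], y, rcond=None)
x_hat = Phi @ coeffs                              # reconstructed full field
print("reconstruction error:", np.linalg.norm(x_hat - X[:, 0]))
```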
|
[
{
"version": "v1",
"created": "Sat, 16 Oct 2021 21:05:57 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 07:09:27 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Shriwastav",
"Sachin",
""
],
[
"Snyder",
"Gregory",
""
],
[
"Song",
"Zhuoyuan",
""
]
] |
new_dataset
| 0.969862 |
2112.05139
|
Dongdong Chen
|
Can Wang and Menglei Chai and Mingming He and Dongdong Chen and Jing
Liao
|
CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields
|
To Appear at CVPR 2022
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present CLIP-NeRF, a multi-modal 3D object manipulation method for neural
radiance fields (NeRF). By leveraging the joint language-image embedding space
of the recent Contrastive Language-Image Pre-Training (CLIP) model, we propose
a unified framework that allows manipulating NeRF in a user-friendly way, using
either a short text prompt or an exemplar image. Specifically, to combine the
novel view synthesis capability of NeRF and the controllable manipulation
ability of latent representations from generative models, we introduce a
disentangled conditional NeRF architecture that allows individual control over
both shape and appearance. This is achieved by performing the shape
conditioning via applying a learned deformation field to the positional
encoding and deferring color conditioning to the volumetric rendering stage. To
bridge this disentangled latent representation to the CLIP embedding, we design
two code mappers that take a CLIP embedding as input and update the latent
codes to reflect the targeted editing. The mappers are trained with a
CLIP-based matching loss to ensure the manipulation accuracy. Furthermore, we
propose an inverse optimization method that accurately projects an input image
to the latent codes for manipulation to enable editing on real images. We
evaluate our approach by extensive experiments on a variety of text prompts and
exemplar images and also provide an intuitive interface for interactive
editing. Our implementation is available at
https://cassiepython.github.io/clipnerf/
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 18:59:55 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Feb 2022 15:53:24 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Mar 2022 18:22:49 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Wang",
"Can",
""
],
[
"Chai",
"Menglei",
""
],
[
"He",
"Mingming",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Liao",
"Jing",
""
]
] |
new_dataset
| 0.991059 |
2112.05142
|
Dongdong Chen
|
Tianyi Wei and Dongdong Chen and Wenbo Zhou and Jing Liao and Zhentao
Tan and Lu Yuan and Weiming Zhang and Nenghai Yu
|
HairCLIP: Design Your Hair by Text and Reference Image
|
To Appear at CVPR 2022
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hair editing is an interesting and challenging problem in computer vision and
graphics. Many existing methods require well-drawn sketches or masks as
conditional inputs for editing, however these interactions are neither
straightforward nor efficient. In order to free users from the tedious
interaction process, this paper proposes a new hair editing interaction mode,
which enables manipulating hair attributes individually or jointly based on the
texts or reference images provided by users. For this purpose, we encode the
image and text conditions in a shared embedding space and propose a unified
hair editing framework by leveraging the powerful image text representation
capability of the Contrastive Language-Image Pre-Training (CLIP) model. With
the carefully designed network structures and loss functions, our framework can
perform high-quality hair editing in a disentangled manner. Extensive
experiments demonstrate the superiority of our approach in terms of
manipulation accuracy, visual realism of editing results, and irrelevant
attribute preservation. Project repo is https://github.com/wty-ustc/HairCLIP.
|
[
{
"version": "v1",
"created": "Thu, 9 Dec 2021 18:59:58 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 18:22:30 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Wei",
"Tianyi",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Zhou",
"Wenbo",
""
],
[
"Liao",
"Jing",
""
],
[
"Tan",
"Zhentao",
""
],
[
"Yuan",
"Lu",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.999001 |
2201.03545
|
Saining Xie
|
Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor
Darrell and Saining Xie
|
A ConvNet for the 2020s
|
CVPR 2022; Code: https://github.com/facebookresearch/ConvNeXt
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The "Roaring 20s" of visual recognition began with the introduction of Vision
Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art
image classification model. A vanilla ViT, on the other hand, faces
difficulties when applied to general computer vision tasks such as object
detection and semantic segmentation. It is the hierarchical Transformers (e.g.,
Swin Transformers) that reintroduced several ConvNet priors, making
Transformers practically viable as a generic vision backbone and demonstrating
remarkable performance on a wide variety of vision tasks. However, the
effectiveness of such hybrid approaches is still largely credited to the
intrinsic superiority of Transformers, rather than the inherent inductive
biases of convolutions. In this work, we reexamine the design spaces and test
the limits of what a pure ConvNet can achieve. We gradually "modernize" a
standard ResNet toward the design of a vision Transformer, and discover several
key components that contribute to the performance difference along the way. The
outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt.
Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably
with Transformers in terms of accuracy and scalability, achieving 87.8%
ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection
and ADE20K segmentation, while maintaining the simplicity and efficiency of
standard ConvNets.
|
[
{
"version": "v1",
"created": "Mon, 10 Jan 2022 18:59:10 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 15:08:16 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Liu",
"Zhuang",
""
],
[
"Mao",
"Hanzi",
""
],
[
"Wu",
"Chao-Yuan",
""
],
[
"Feichtenhofer",
"Christoph",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Xie",
"Saining",
""
]
] |
new_dataset
| 0.998982 |
2201.13410
|
Chaim Baskin
|
Or Feldman, Amit Boyarski, Shai Feldman, Dani Kogan, Avi Mendelson,
Chaim Baskin
|
Weisfeiler and Leman Go Infinite: Spectral and Combinatorial
Pre-Colorings
| null | null | null | null |
cs.LG cs.DS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Graph isomorphism testing is usually approached via the comparison of graph
invariants. Two popular alternatives that offer a good trade-off between
expressive power and computational efficiency are combinatorial (i.e., obtained
via the Weisfeiler-Leman (WL) test) and spectral invariants. While the exact
power of the latter is still an open question, the former is regularly
criticized for its limited power, when a standard configuration of uniform
pre-coloring is used. This drawback hinders the applicability of Message
Passing Graph Neural Networks (MPGNNs), whose expressive power is upper bounded
by the WL test. Relaxing the assumption of uniform pre-coloring, we show that
one can increase the expressive power of the WL test ad infinitum. Following
that, we propose an efficient pre-coloring based on spectral features that
provably increases the expressive power of the vanilla WL test. The above claims
are accompanied by extensive synthetic and real data experiments. The code to
reproduce our experiments is available at
https://github.com/TPFI22/Spectral-and-Combinatorial
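To make the pre-coloring discussion concrete, here is a minimal 1-WL color refinement in Python. A uniform pre-coloring corresponds to seeding every node with the same color; the paper's spectral proposal would seed the `colors` argument with spectral features instead. This is a generic sketch, not the paper's code.

```python
# Minimal 1-WL color refinement. The initial `colors` dict *is* the
# pre-coloring: uniform colors give the standard WL test, while a spectral
# pre-coloring would seed it with spectral features.

def wl_refine(adj, colors, rounds=3):
    for _ in range(rounds):
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}   # compress to integers
    return colors

# On a 6-cycle with uniform pre-coloring, every node keeps the same color,
# which is why uniform 1-WL cannot separate C6 from two disjoint triangles.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(wl_refine(c6, {v: 0 for v in c6}))   # all nodes share one color
```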
|
[
{
"version": "v1",
"created": "Mon, 31 Jan 2022 18:17:40 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 15:53:46 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Feldman",
"Or",
""
],
[
"Boyarski",
"Amit",
""
],
[
"Feldman",
"Shai",
""
],
[
"Kogan",
"Dani",
""
],
[
"Mendelson",
"Avi",
""
],
[
"Baskin",
"Chaim",
""
]
] |
new_dataset
| 0.971263 |
2202.03762
|
Lioba Heimbach
|
Lioba Heimbach and Roger Wattenhofer
|
Eliminating Sandwich Attacks with the Help of Game Theory
| null | null |
10.1145/3488932.3517390
| null |
cs.GT cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predatory trading bots lurking in Ethereum's mempool present invisible
taxation of traders on automated market makers (AMMs). AMM traders specify a
slippage tolerance to indicate the maximum price movement they are willing to
accept. This way, traders avoid automatic transaction failure in case of small
price movements before their trade request executes. However, while a too-small
slippage tolerance may lead to trade failures, a too-large slippage tolerance
allows predatory trading bots to profit from sandwich attacks. These bots can
extract the difference between the slippage tolerance and the actual price
movement as profit.
In this work, we introduce the sandwich game to analyze sandwich attacks
analytically from both the attacker and victim perspectives. Moreover, we
provide a simple and highly effective algorithm that traders can use to set the
slippage tolerance. We unveil that most broadcasted transactions can avoid
sandwich attacks while simultaneously only experiencing a low risk of
transaction failure. Thereby, we demonstrate that a constant auto-slippage
cannot adjust to varying trade sizes and pool characteristics. Our algorithm
outperforms the constant auto-slippage suggested by the biggest AMM, Uniswap,
in all performed tests. Specifically, our algorithm repeatedly demonstrates a
cost reduction exceeding a factor of 100.
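A toy constant-product AMM example makes the slippage/extraction trade-off concrete. All numbers here are hypothetical, and the paper's game-theoretic algorithm for choosing the tolerance itself is not reproduced.

```python
# Toy constant-product AMM (x * y = k) example, with hypothetical numbers,
# showing why the slippage tolerance caps what a sandwich attacker can take.

def swap_out(x_res, y_res, dx, fee=0.003):
    """Output amount when selling dx of token X into the pool."""
    dx_eff = dx * (1 - fee)
    return y_res * dx_eff / (x_res + dx_eff)

x, y = 1_000_000.0, 1_000_000.0     # pool reserves
trade = 10_000.0                    # victim's trade size

quoted = swap_out(x, y, trade)      # output quoted when broadcasting
min_out = quoted * (1 - 0.005)      # with a 0.5% slippage tolerance

# A front-run can push the victim's execution down to min_out at worst, so
# the gap between quote and guarantee bounds the attacker's extraction.
print(f"quoted output:      {quoted:,.2f}")
print(f"guaranteed minimum: {min_out:,.2f}")
print(f"max value ceded:    {quoted - min_out:,.2f}")
```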
|
[
{
"version": "v1",
"created": "Tue, 8 Feb 2022 10:11:42 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Mar 2022 14:45:09 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Heimbach",
"Lioba",
""
],
[
"Wattenhofer",
"Roger",
""
]
] |
new_dataset
| 0.992938 |
2202.13352
|
Dongyang Li
|
Dongyang Li, Taolin Zhang, Nan Hu, Chengyu Wang, Xiaofeng He
|
HiCLRE: A Hierarchical Contrastive Learning Framework for Distantly
Supervised Relation Extraction
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Distant supervision assumes that any sentence containing the same entity
pairs reflects identical relationships. Previous work on the distantly
supervised relation extraction (DSRE) task generally treats sentence-level or
bag-level de-noising techniques independently, neglecting explicit interaction
across levels. In this paper, we propose a hierarchical contrastive learning
Framework for Distantly Supervised relation extraction (HiCLRE) to reduce noisy
sentences, which integrates global structural information and local
fine-grained interaction.
learning framework to interact with cross levels, generating the de-noising
context-aware representations via adapting the existing multi-head
self-attention, named Multi-Granularity Recontextualization. Meanwhile, pseudo
positive samples are also provided in the specific level for contrastive
learning via a dynamic gradient-based data augmentation strategy, named Dynamic
Gradient Adversarial Perturbation. Experiments demonstrate that HiCLRE
significantly outperforms strong baselines in various mainstream DSRE datasets.
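For context, the single-level contrastive objective that hierarchical schemes like this build on is the standard InfoNCE loss; the numpy sketch below shows only that generic form. HiCLRE's cross-level variant and its gradient-based augmentation are defined in the paper.

```python
import numpy as np

# Generic single-level InfoNCE contrastive loss: pull (anchor, positive)
# together and push the anchor away from negatives. Only the standard form
# is shown; HiCLRE applies such losses across its three levels.

def info_nce(anchor, positive, negatives, tau=0.07):
    unit = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, n = unit(anchor), unit(positive), unit(negatives)
    logits = np.concatenate(([a @ p], n @ a)) / tau
    logits -= logits.max()                      # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
loss = info_nce(rng.normal(size=128), rng.normal(size=128),
                rng.normal(size=(16, 128)))
print(f"loss: {loss:.3f}")
```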
|
[
{
"version": "v1",
"created": "Sun, 27 Feb 2022 12:48:26 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Li",
"Dongyang",
""
],
[
"Zhang",
"Taolin",
""
],
[
"Hu",
"Nan",
""
],
[
"Wang",
"Chengyu",
""
],
[
"He",
"Xiaofeng",
""
]
] |
new_dataset
| 0.950815 |
2203.00789
|
Zenjie Li
|
Zenjie Li and Barry Norton
|
Unified Physical Threat Monitoring System Aided by Virtual Building
Simulation
| null |
2021 5th International Conference on Vision, Image and Signal
Processing (ICVISP), 2021, pp. 206-211
|
10.1109/ICVISP54630.2021.00045
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With increasing physical threats in recent years targeted at critical
infrastructures, it is crucial to establish a reliable threat monitoring system
integrating video surveillance and digital sensors based on cutting-edge
technologies. A physical threat monitoring solution unifying the floorplan,
cameras, and sensors for smart buildings has been set up in our study. Computer
vision and deep learning models are used for video streams analysis. When a
threat is detected by a rule engine based on the real-time analysis results
combining with feedback from related digital sensors, an alert is sent to the
Video Management System so that human operators can take further action. A
physical threat monitoring system typically needs to address complex and even
destructive incidents, such as fire, which is unrealistic to simulate in real
life. Restrictions imposed during the Covid-19 pandemic and privacy concerns
have added to the challenges. Our study utilises the Unreal Engine to simulate
some typical suspicious and intrusion scenes with photorealistic qualities in
the context of a virtual building. Add-on programs are implemented to transfer
the video stream from virtual PTZ cameras to the Milestone Video Management
System and enable users to control those cameras from the graphic client
application. Virtual sensors such as fire alarms, temperature sensors and door
access controls are implemented similarly, fulfilling the same programmatic VMS
interface as real-life sensors. Thanks to this simulation system's
extensibility and repeatability, we have consolidated this unified physical
threat monitoring system and verified its effectiveness and user-friendliness.
Both the simulated Unreal scenes and the software add-ons developed during this
study are highly modular and thereby ready for reuse in future projects
in this area.
|
[
{
"version": "v1",
"created": "Tue, 1 Mar 2022 23:28:46 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Li",
"Zenjie",
""
],
[
"Norton",
"Barry",
""
]
] |
new_dataset
| 0.983838 |
2203.00810
|
Feng Hu
|
Feng Hu
|
Robust Seatbelt Detection and Usage Recognition for Driver Monitoring
Systems
|
AAAI 2022 Workshop on Trustworthy Autonomous Systems Engineering 2022
(https://jinghany.github.io/trase2022/program/)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Wearing a seatbelt appropriately while driving can reduce serious
crash-related injuries or deaths by about half. However, current seatbelt
reminder systems have multiple shortcomings: they can be easily fooled by a
"Seatbelt Warning Stopper" and cannot recognize incorrect usage, for example
sitting in front of a buckled seatbelt or wearing the seatbelt under the arm.
General seatbelt usage recognition faces many challenges, to name a few: a lack
of color information in Infrared (IR) cameras, strong distortion caused by wide
Field of View (FoV) fisheye lenses, low contrast between the belt and its
background, occlusions caused by hands or hair, and image blur. In this paper, we
introduce a novel general seatbelt detection and usage recognition framework to
resolve the above challenges. Our method consists of three components: a local
predictor, a global assembler, and a shape modeling process. Our approach can
be applied to the driver in the Driver Monitoring System (DMS) or general
passengers in the Occupant Monitoring System (OMS) for various camera
modalities. Experiment results on both DMS and OMS are provided to demonstrate
the accuracy and robustness of the proposed approach.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 01:04:03 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Hu",
"Feng",
""
]
] |
new_dataset
| 0.999605 |
2203.00828
|
Dening Lu
|
Dening Lu, Qian Xie, Linlin Xu, Jonathan Li
|
3DCTN: 3D Convolution-Transformer Network for Point Cloud Classification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Although accurate and fast point cloud classification is a fundamental task
in 3D applications, it is hard to achieve because the irregularity and disorder
of point clouds make effective and efficient global discriminative feature
learning challenging. Lately, 3D
Transformers have been adopted to improve point cloud processing. Nevertheless,
massive Transformer layers tend to incur huge computational and memory costs.
This paper presents a novel hierarchical framework that incorporates
convolution with Transformer for point cloud classification, named 3D
Convolution-Transformer Network (3DCTN), to combine the strong and efficient
local feature learning ability of convolution with the remarkable global
context modeling capability of Transformer. Our method has two main modules
operating on the downsampling point sets, and each module consists of a
multi-scale local feature aggregating (LFA) block and a global feature learning
(GFL) block, which are implemented by using Graph Convolution and Transformer
respectively. We also conduct a detailed investigation on a series of
Transformer variants to explore better performance for our network. Various
experiments on ModelNet40 demonstrate that our method achieves state-of-the-art
classification performance, in terms of both accuracy and efficiency.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 02:42:14 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Lu",
"Dening",
""
],
[
"Xie",
"Qian",
""
],
[
"Xu",
"Linlin",
""
],
[
"Li",
"Jonathan",
""
]
] |
new_dataset
| 0.977615 |
2203.00865
|
Zhilong Chen
|
Zhilong Chen, Hancheng Cao, Xiaochong Lan, Zhicong Lu, Yong Li
|
Beyond Virtual Bazaar: How Social Commerce Promotes Inclusivity for the
Traditionally Underserved Community in Chinese Developing Regions
|
Zhilong Chen and Hancheng Cao contribute equally to this work;
Accepted to CHI 2022
| null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The disadvantaged population is often underserved and marginalized in
technology engagement: prior works show they are generally more reluctant and
experience more barriers in adopting and engaging with mainstream technology.
Here, we contribute to the HCI4D and ICTD literature through a novel "counter"
case study on Chinese social commerce (e.g., Pinduoduo), which 1) first
prospers among the traditionally underserved community from developing regions
ahead of the more technologically advantaged communities, and 2) has been
heavily engaged by this community. Through 12 in-depth interviews with social
commerce users from the traditionally underserved community in Chinese
developing regions, we demonstrate how social commerce, acting as a "counter",
brings online the traditional offline socioeconomic lives the community has
lived for ages, fits into the community's social, cultural, and economic
context, and thus effectively promotes technology inclusivity. Our work
provides novel insights and implications for building inclusive technology for
the "next billion" population.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 04:25:52 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Chen",
"Zhilong",
""
],
[
"Cao",
"Hancheng",
""
],
[
"Lan",
"Xiaochong",
""
],
[
"Lu",
"Zhicong",
""
],
[
"Li",
"Yong",
""
]
] |
new_dataset
| 0.996744 |
2203.00893
|
Chunran Zheng
|
Chunran Zheng, Qingyan Zhu, Wei Xu, Xiyuan Liu, Qizhi Guo and Fu Zhang
|
FAST-LIVO: Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual
Odometry
|
7 pages, 7 figures, submitted to IROS2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To achieve accurate and robust pose estimation in Simultaneous Localization
and Mapping (SLAM) task, multi-sensor fusion is proven to be an effective
solution and thus provides great potential in robotic applications. This paper
proposes FAST-LIVO, a fast LiDAR-Inertial-Visual Odometry system, which builds
on two tightly-coupled and direct odometry subsystems: a VIO subsystem and a
LIO subsystem. The LIO subsystem registers raw points (instead of feature
points on e.g., edges or planes) of a new scan to an incrementally-built point
cloud map. The map points are additionally attached with image patches, which
are then used in the VIO subsystem to align a new image by minimizing the
direct photometric errors without extracting any visual features (e.g., ORB or
FAST corner features). To further improve the VIO robustness and accuracy, a
novel outlier rejection method is proposed to reject unstable map points that
lie on edges or are occluded in the image view. Experiments on both open data
sequences and our customized device data are conducted. The results show our
proposed system outperforms other counterparts and can handle challenging
environments at reduced computation cost. The system supports both multi-line
spinning LiDARs and emerging solid-state LiDARs with completely different
scanning patterns, and can run in real-time on both Intel and ARM processors.
We open-source our code and dataset of this work on GitHub to benefit the
robotics community.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 06:44:13 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Zheng",
"Chunran",
""
],
[
"Zhu",
"Qingyan",
""
],
[
"Xu",
"Wei",
""
],
[
"Liu",
"Xiyuan",
""
],
[
"Guo",
"Qizhi",
""
],
[
"Zhang",
"Fu",
""
]
] |
new_dataset
| 0.997336 |
2203.00900
|
Jiakang Zheng
|
Jiakang Zheng, Jiayi Zhang, Emil Bj\"ornson, Zhetao Li, Bo Ai
|
Cell-Free Massive MIMO-OFDM for High-Speed Train Communications
|
33 pages, 12 figures, Accepted in IEEE Journal on Selected Areas in
Communications
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cell-free (CF) massive multiple-input multiple-output (MIMO) systems show
great potential in low-mobility scenarios, due to cell boundary disappearance
and strong macro diversity. However, the large Doppler frequency offset (DFO)
leads to serious inter-carrier interference in orthogonal frequency division
multiplexing (OFDM) technology, which makes it difficult to provide
high-quality transmissions for both high-speed train (HST) operation control
systems and passengers. In this paper, we focus on the performance of CF
massive MIMO-OFDM systems with both fully centralized and local minimum mean
square error (MMSE) combining in HST communications. Considering the local
maximum ratio (MR) combining, the large-scale fading decoding (LSFD)
cooperation and the practical effect of DFO on system performance, exact
closed-form expressions for uplink spectral efficiency (SE) expressions are
derived. We observe that cooperative MMSE combining achieves better SE
performance than uncooperative MR combining. In addition, HST communications
with small cell and cellular massive MIMO-OFDM systems are compared in terms of
SE. Numerical results reveal that the CF massive MIMO-OFDM system achieves a
larger and more uniform SE than the other systems. Finally, the train antenna
centric (TA-centric) CF massive MIMO-OFDM system is designed for practical
implementation in HST communications, and three power control schemes are
adopted to optimize the propagation of TAs for reducing the impact of the DFO.
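For scale, the DFO in question is the classical Doppler shift

$$ f_D = \frac{v f_c}{c}\cos\theta, $$

where $v$ is the train speed, $f_c$ the carrier frequency, $c$ the speed of light, and $\theta$ the angle between the travel direction and the arriving path; at $v = 500$ km/h and $f_c = 2.6$ GHz this is already on the order of 1.2 kHz, which is what degrades subcarrier orthogonality in OFDM.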
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 07:13:52 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Zheng",
"Jiakang",
""
],
[
"Zhang",
"Jiayi",
""
],
[
"Björnson",
"Emil",
""
],
[
"Li",
"Zhetao",
""
],
[
"Ai",
"Bo",
""
]
] |
new_dataset
| 0.998028 |
2203.00959
|
Yi Gu
|
Yi Gu, Hongzhi Cheng, Kafeng Wang, Dejing Dou, Chengzhong Xu and Hui
Kong
|
Learning Moving-Object Tracking with FMCW LiDAR
|
Submitted to IROS 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a learning-based moving-object tracking method
utilizing our newly developed LiDAR sensor, Frequency Modulated Continuous Wave
(FMCW) LiDAR. Compared with most existing commercial LiDAR sensors, our FMCW
LiDAR can provide additional Doppler velocity information to each 3D point of
the point clouds. Benefiting from this, we can generate instance labels as
ground truth in a semi-automatic manner. Given the labels, we propose a
contrastive learning framework, which pulls together the features from the same
instance in embedding space and pushes apart the features from different
instances, to improve the tracking quality. Extensive experiments are conducted
on our recorded driving data, and the results show that our method outperforms
the baseline methods by a large margin.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 09:11:36 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Gu",
"Yi",
""
],
[
"Cheng",
"Hongzhi",
""
],
[
"Wang",
"Kafeng",
""
],
[
"Dou",
"Dejing",
""
],
[
"Xu",
"Chengzhong",
""
],
[
"Kong",
"Hui",
""
]
] |
new_dataset
| 0.966101 |
2203.00964
|
Wen Zhang
|
Wen Zhang, Chi-Man Wong, Ganqinag Ye, Bo Wen, Hongting Zhou, Wei
Zhang, Huajun Chen
|
PKGM: A Pre-trained Knowledge Graph Model for E-commerce Application
|
This is an extension of work "Billion-scale Pre-trained E-commerce
Product Knowledge Graph Model" published at ICDE2021. We test PKGM on two
additional tasks, scene detection and sequential recommendation, and add
serving with item embeddings as one of the baselines. The extensive
experiments show the effectiveness of PKGM, the pre-trained knowledge graph
model. arXiv admin note: text overlap with arXiv:2105.00388
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, knowledge graphs have been widely applied as a uniform way
to organize data and have enhanced many tasks requiring knowledge. In the
online shopping platform Taobao, we built a billion-scale e-commerce product
knowledge graph. It organizes data uniformly and provides item knowledge
services for various tasks such as item recommendation. Usually, such knowledge
services are provided through triple data, but this implementation involves (1)
tedious data selection work on the product knowledge graph and (2) task model
design work to infuse that triple knowledge. More importantly, the product
knowledge graph is far from complete, resulting in error propagation to
knowledge-enhanced tasks. To avoid these problems, we propose a Pre-trained
Knowledge Graph Model (PKGM) for the billion-scale product knowledge graph. On
the one hand, it can provide item knowledge services in a uniform way, via
service vectors, for embedding-based and item-knowledge-related task models
without accessing triple data. On the other hand, its service is provided based
on the implicitly completed product knowledge graph, overcoming the common
incompleteness issue. We also propose two general ways to integrate the service
vectors from PKGM into downstream task models. We test PKGM on five
knowledge-related tasks: item classification, item resolution, item
recommendation, scene detection, and sequential recommendation. Experimental
results show that PKGM introduces significant performance gains on these tasks,
illustrating the usefulness of its service vectors.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 09:17:20 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Zhang",
"Wen",
""
],
[
"Wong",
"Chi-Man",
""
],
[
"Ye",
"Ganqinag",
""
],
[
"Wen",
"Bo",
""
],
[
"Zhou",
"Hongting",
""
],
[
"Zhang",
"Wei",
""
],
[
"Chen",
"Huajun",
""
]
] |
new_dataset
| 0.998471 |
2203.00993
|
Roland Van Rijswijk-Deij
|
Koen van Hove, Jeroen van der Ham, Roland van Rijswijk-Deij
|
Rpkiller: Threat Analysis from an RPKI Relying Party Perspective
|
17 pages
| null | null | null |
cs.CR cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Resource Public Key Infrastructure (RPKI) aims to secure internet routing
by creating an infrastructure where resource holders can make attestations
about their resources. RPKI Certificate Authorities issue these attestations
and publish them at Publication Points. Relying Party software retrieves and
processes the RPKI-related data from all publication points, validates the data
and makes it available to routers so they can make secure routing decisions. In
this work, we create a threat model for Relying Party software, where an
attacker controls a Certificate Authority and Publication Point. We implement a
prototype testbed to analyse how current Relying Party software implementations
react to scenarios originating from that threat model. Our results show that
all current Relying Party software was susceptible to at least one of the
identified threats. In addition to this, we also identified threats stemming
from choices made in the protocol itself. Taken together, these threats
potentially allow an attacker to fully disrupt all RPKI Relying Party software
on a global scale. We performed a Coordinated Vulnerability Disclosure to the
implementers and have made our testbed software available for future studies.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 09:59:34 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"van Hove",
"Koen",
""
],
[
"van der Ham",
"Jeroen",
""
],
[
"van Rijswijk-Deij",
"Roland",
""
]
] |
new_dataset
| 0.997875 |
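One class of threats from a malicious Certificate Authority or Publication
Point is resource exhaustion on the Relying Party side (for example, endless
or oversized responses). The sketch below illustrates a simplified defensive
fetch with hard caps on download size and wall-clock time; it is a generic
mitigation idea, not code from any of the audited Relying Party
implementations.

```python
import time
import requests

MAX_BYTES = 10 * 1024 * 1024   # cap per fetched RPKI object/snapshot
MAX_SECONDS = 30               # wall-clock budget per publication point

def fetch_with_limits(url):
    """Fetch one object from a publication point, refusing to let a
    malicious or broken server consume unbounded memory or time."""
    deadline = time.monotonic() + MAX_SECONDS
    buf = bytearray()
    with requests.get(url, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size=8192):
            buf.extend(chunk)
            if len(buf) > MAX_BYTES:
                raise ValueError(f"{url}: exceeds {MAX_BYTES} bytes")
            if time.monotonic() > deadline:
                raise TimeoutError(f"{url}: exceeds {MAX_SECONDS}s budget")
    return bytes(buf)
```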
2203.01025
|
David Cerdeira Mr.
|
David Cerdeira, Jos\'e Martins, Nuno Santos, Sandro Pinto
|
ReZone: Disarming TrustZone with TEE Privilege Reduction
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In TrustZone-assisted TEEs, the trusted OS has unrestricted access to both
secure and normal world memory. Unfortunately, this architectural limitation
has opened an avenue of exploration for attackers, who have demonstrated how
to leverage a chain of exploits to hijack the trusted OS and gain full control
of the system, targeting (i) the rich execution environment (REE), (ii) all
trusted applications (TAs), and (iii) the secure monitor. In this paper, we
propose ReZone. The main novelty of the ReZone design lies in leveraging
TrustZone-agnostic hardware primitives available on commercial off-the-shelf
(COTS) platforms to restrict the privileges of the trusted OS. With ReZone, a
monolithic TEE is restructured and partitioned into multiple sandboxed domains
named zones, which have access only to their private resources. We have fully
implemented ReZone for the i.MX 8MQuad EVK and integrated it with Android OS
and OP-TEE. We extensively evaluated ReZone using microbenchmarks and
real-world applications. ReZone can sustain popular applications like
DRM-protected video encoding with acceptable performance overheads. We have
surveyed 80 CVE vulnerability reports and estimate that ReZone could mitigate
86.84% of them.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 10:57:10 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Cerdeira",
"David",
""
],
[
"Martins",
"José",
""
],
[
"Santos",
"Nuno",
""
],
[
"Pinto",
"Sandro",
""
]
] |
new_dataset
| 0.98513 |
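ReZone's core idea — confining each TEE domain to its own resources — can be
caricatured in a few lines. The toy model below only illustrates the
access-control policy at a conceptual level; in the real design the denial is
enforced by TrustZone-agnostic hardware primitives, not by software checks,
and the names here are invented for illustration.

```python
class Zone:
    """Toy model of a ReZone-style sandboxed domain: a zone may touch
    only the memory regions it privately owns."""
    def __init__(self, name, owned_regions):
        self.name = name
        self.owned = set(owned_regions)

    def access(self, region):
        if region not in self.owned:
            # In ReZone this denial comes from hardware primitives,
            # not from code running inside the trusted OS itself.
            raise PermissionError(f"{self.name} may not access {region}")
        return f"{self.name} accessed {region}"

trusted_os = Zone("trusted-os-zone", {"tos_heap"})
print(trusted_os.access("tos_heap"))        # allowed: private resource
try:
    trusted_os.access("ree_memory")         # cross-zone access: denied
except PermissionError as e:
    print("blocked:", e)
```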
2203.01051
|
Marcell Wolnitza
|
Marcell Wolnitza, Osman Kaya, Tomas Kulvicius, Florentin W\"org\"otter
and Babette Dellen
|
3D object reconstruction and 6D-pose estimation from 2D shape for
robotic grasping of objects
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for 3D object reconstruction and 6D-pose estimation from
2D images that uses knowledge about object shape as the primary key. In the
proposed pipeline, recognition and labeling of objects in 2D images deliver 2D
segment silhouettes that are compared with the 2D silhouettes of projections
obtained from various views of a 3D model representing the recognized object
class. By computing transformation parameters directly from the 2D images, the
number of free parameters required during the registration process is reduced,
making the approach feasible. Furthermore, 3D transformations and projective
geometry are employed to arrive at a full 3D reconstruction of the object in
camera space using a calibrated setup. The inclusion of a second camera allows
resolving remaining ambiguities. The method is quantitatively evaluated using
synthetic data and tested with real data, and additional results for the
well-known Linemod data set are shown. In robot experiments, successful
grasping of objects demonstrates its usability in real-world environments, and,
where possible, a comparison with other methods is provided. The method is
applicable to scenarios where 3D object models, e.g., CAD-models or point
clouds, are available and precise pixel-wise segmentation maps of 2D images can
be obtained. Different from other methods, the method does not use 3D depth for
training, widening the domain of application.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 11:58:35 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Wolnitza",
"Marcell",
""
],
[
"Kaya",
"Osman",
""
],
[
"Kulvicius",
"Tomas",
""
],
[
"Wörgötter",
"Florentin",
""
],
[
"Dellen",
"Babette",
""
]
] |
new_dataset
| 0.994051 |
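The silhouette-comparison step at the heart of the pipeline can be sketched as
view scoring: compare the observed 2D segment silhouette against silhouettes
rendered from candidate views of the recognized object class and keep the best
match. This is a much-reduced illustration assuming binary masks and a
hypothetical `render_silhouette` function; the paper additionally computes
transformation parameters directly from the 2D images.

```python
import numpy as np

def render_silhouette(model, pose, shape=(128, 128)):
    # Hypothetical renderer: project the 3D model under `pose`
    # and return a binary 2D silhouette mask of the given shape.
    raise NotImplementedError

def iou(a, b):
    """Intersection-over-union between two binary silhouette masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def best_view(segment_mask, model, candidate_poses):
    """Score the observed 2D segment against silhouettes rendered from
    candidate views; return the best (score, pose) pair."""
    scored = ((iou(segment_mask, render_silhouette(model, p)), p)
              for p in candidate_poses)
    return max(scored, key=lambda sp: sp[0])
```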
2203.01098
|
Mohamed Faten Zhani
|
Tarik Moufakir, Mohamed Faten Zhani, Abdelouahed Gherbi, Moayad
Aloqaily, Nadir Ghrada
|
SFCaaS: Service Function Chains as a Service in NFV Environments
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the emergence of the network softwarization trend, traditional networking
services offered by Internet providers are expected to evolve by fully
leveraging recent technologies such as network function virtualization and
software-defined networking. In this paper, we investigate offering Service
the potential business model to offer such a service. We then conduct a
detailed study of the costs of virtual machine instances offered by Amazon EC2
with respect to the location, instance size, and performance in order to guide
service chain provisioning and resource allocation. Afterwards, we address the
resource allocation problem for service chain functions from the SFC provider's
perspective while leveraging the performed cost study. We hence formulate the
problem as an Integer Linear Program (ILP) aiming at reducing the SFC
provider's operational costs of virtual machine instances and links as well as
the synchronization costs among the instances. We also propose a new heuristic
algorithm to solve the mapping problem with the same aforementioned goals
taking into account the conducted study of the costs of Amazon EC2 instances.
We show through extensive simulations that the proposed heuristic significantly
reduces operational costs compared to a Baseline algorithm inspired by the
existing literature.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 13:32:40 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Moufakir",
"Tarik",
""
],
[
"Zhani",
"Mohamed Faten",
""
],
[
"Gherbi",
"Abdelouahed",
""
],
[
"Aloqaily",
"Moayad",
""
],
[
"Ghrada",
"Nadir",
""
]
] |
new_dataset
| 0.964398 |
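The flavor of the ILP formulation can be conveyed with a small placement
model. The sketch below uses the open-source PuLP solver and minimizes only
per-instance hosting costs under single-assignment and capacity constraints;
the variable names, the cost table, and the constraints are illustrative
assumptions, not the paper's exact model (which also covers link and
synchronization costs).

```python
import pulp

vnfs = ["fw", "nat", "ids"]                 # functions of one chain
nodes = ["ec2-a", "ec2-b"]                  # candidate VM instances
cost = {("fw", "ec2-a"): 3, ("fw", "ec2-b"): 5,
        ("nat", "ec2-a"): 2, ("nat", "ec2-b"): 2,
        ("ids", "ec2-a"): 6, ("ids", "ec2-b"): 4}
cap = {"ec2-a": 2, "ec2-b": 2}              # max functions per instance

prob = pulp.LpProblem("sfc_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("place", (vnfs, nodes), cat="Binary")

# Objective: total operational cost of hosting each function.
prob += pulp.lpSum(cost[f, n] * x[f][n] for f in vnfs for n in nodes)

for f in vnfs:                              # each VNF placed exactly once
    prob += pulp.lpSum(x[f][n] for n in nodes) == 1
for n in nodes:                             # respect instance capacity
    prob += pulp.lpSum(x[f][n] for f in vnfs) <= cap[n]

prob.solve()
print({f: next(n for n in nodes if x[f][n].value() == 1) for f in vnfs})
```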
2203.01153
|
Oren Spector
|
Oren Spector, Vladimir Tchuiev and Dotan Di Castro
|
InsertionNet 2.0: Minimal Contact Multi-Step Insertion Using Multimodal
Multiview Sensory Input
|
Accepted to ICRA 2022, InsertionNet 1.0 :
https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9420246
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of devising the means for a robot to rapidly and
safely learn insertion skills with just a few human interventions and without
hand-crafted rewards or demonstrations. Our InsertionNet version 2.0 provides
an improved technique to robustly cope with a wide range of use-cases featuring
different shapes, colors, initial poses, etc. In particular, we present a
regression-based method based on multimodal input from stereo perception and
force, augmented with contrastive learning for the efficient learning of
valuable features. In addition, we introduce a one-shot learning technique for
insertion, which relies on a relation network scheme to better exploit the
collected data and to support multi-step insertion tasks. Our method improves
on the results obtained with the original InsertionNet, achieving an almost
perfect score (above 97.5$\%$ on 200 trials) in 16 real-life insertion tasks
while minimizing the execution time and contact during insertion. We further
demonstrate our method's ability to tackle a real-life 3-step insertion task
and perfectly solve an unseen insertion task without learning.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 14:50:54 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Spector",
"Oren",
""
],
[
"Tchuiev",
"Vladimir",
""
],
[
"Di Castro",
"Dotan",
""
]
] |
new_dataset
| 0.98827 |
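A hedged sketch of the regression backbone described above: stereo image
features and force/torque readings fused into one embedding that regresses a
corrective action, with a contrastive term as an auxiliary representation
objective. All layer sizes, the stand-in linear encoders, and the pairing of
augmented views are illustrative assumptions rather than the paper's
architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalRegressor(nn.Module):
    def __init__(self, img_feat=512, ft_dim=6, emb=128, action_dim=6):
        super().__init__()
        self.img_net = nn.Linear(img_feat, emb)   # stands in for a CNN
        self.ft_net = nn.Linear(ft_dim, emb)      # force/torque encoder
        self.head = nn.Linear(2 * emb, action_dim)

    def embed(self, img, ft):
        z = torch.cat([self.img_net(img), self.ft_net(ft)], dim=-1)
        return F.normalize(z, dim=-1)

    def forward(self, img, ft):
        return self.head(self.embed(img, ft))

def contrastive_loss(z1, z2, tau=0.1):
    """InfoNCE between embeddings of two augmented views of the same
    samples; matching rows are positives, all others negatives."""
    logits = z1 @ z2.t() / tau
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)
```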
2203.01176
|
Tiago Ribeiro
|
Tiago Ribeiro, Ana Paiva
|
Avant-Satie! Using ERIK to encode task-relevant expressivity into the
animation of autonomous social robots
| null | null | null | null |
cs.RO cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
ERIK is an expressive inverse kinematics technique that has been previously
presented and evaluated both algorithmically and in a limited user-interaction
scenario. It allows autonomous social robots to convey posture-based expressive
information while gaze-tracking users. We have developed a new scenario aimed
at further validating some of the unsupported claims from the previous
scenario. Our experiment features a fully autonomous Adelino robot, and
concludes that ERIK can be used to direct a user's choice of actions during
execution of a given task, fully through its non-verbal expressive cues.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 15:24:52 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Ribeiro",
"Tiago",
""
],
[
"Paiva",
"Ana",
""
]
] |
new_dataset
| 0.996071 |
2203.01188
|
Piyush Kumar Garg
|
Piyush Kumar Garg and Roshni Chakraborty and Sourav Kumar Dandapat
|
EnDSUM: Entropy and Diversity based Disaster Tweet Summarization
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The huge amount of information shared on Twitter during disaster events is
utilized by government agencies and humanitarian organizations to ensure quick
crisis response and provide situational updates. However, the huge number of
tweets posted makes manual identification of the relevant tweets impossible.
To address the information overload, there is a need to automatically generate
a summary of all the tweets that highlights the important aspects of the
disaster. In this paper, we propose an entropy- and diversity-based
summarizer, termed EnDSUM, specifically for disaster tweet summarization. Our
comprehensive analysis on 6 datasets indicates the effectiveness of EnDSUM
and, additionally, highlights its scope for improvement.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 15:38:18 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Garg",
"Piyush Kumar",
""
],
[
"Chakraborty",
"Roshni",
""
],
[
"Dandapat",
"Sourav Kumar",
""
]
] |
new_dataset
| 0.9788 |
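An entropy-and-diversity summarizer in the spirit of EnDSUM can be sketched
greedily: score tweets by the entropy of their word distribution and penalize
overlap with already-selected tweets. This is a generic illustration of the
idea, assuming non-empty tweets; the scoring function and the weight `lam`
are not the paper's exact formulation.

```python
import math
from collections import Counter

def entropy(tweet):
    """Shannon entropy of the tweet's word distribution."""
    words = tweet.lower().split()
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def diversity(tweet, selected):
    """1 minus the maximum Jaccard overlap with chosen tweets."""
    w = set(tweet.lower().split())
    overlaps = [len(w & set(s.lower().split())) /
                len(w | set(s.lower().split())) for s in selected]
    return 1.0 - max(overlaps, default=0.0)

def summarize(tweets, k=3, lam=0.5):
    """Greedily pick k tweets balancing informativeness and novelty."""
    summary = []
    while len(summary) < k and len(summary) < len(tweets):
        pick = max((t for t in tweets if t not in summary),
                   key=lambda t: lam * entropy(t)
                                 + (1 - lam) * diversity(t, summary))
        summary.append(pick)
    return summary
```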
2203.01190
|
Joe Eappen
|
Zikang Xiong, Joe Eappen, Ahmed H. Qureshi, and Suresh Jagannathan
|
Model-free Neural Lyapunov Control for Safe Robot Navigation
|
8 pages, 6 figures
| null | null | null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Model-free Deep Reinforcement Learning (DRL) controllers have demonstrated
promising results on various challenging non-linear control tasks. While a
model-free DRL algorithm can handle unknown dynamics and high-dimensional
problems, it lacks safety assurance. Although safety constraints can be encoded
as part of a reward function, there still exists a large gap between an RL
controller trained with this modified reward and a safe controller. In
contrast, instead of implicitly encoding safety constraints with rewards, we
explicitly co-learn a Twin Neural Lyapunov Function (TNLF) with the control
policy in the DRL training loop and use the learned TNLF to build a runtime
monitor. Combined with the path generated from a planner, the monitor chooses
appropriate waypoints that guide the learned controller to provide
collision-free control trajectories. Our approach inherits the scalability
advantages from DRL while enhancing safety guarantees. Our experimental
evaluation demonstrates the effectiveness of our approach compared to DRL with
augmented rewards and constrained DRL methods over a range of high-dimensional
safety-sensitive navigation tasks.
|
[
{
"version": "v1",
"created": "Wed, 2 Mar 2022 15:43:29 GMT"
}
] | 2022-03-03T00:00:00 |
[
[
"Xiong",
"Zikang",
""
],
[
"Eappen",
"Joe",
""
],
[
"Qureshi",
"Ahmed H.",
""
],
[
"Jagannathan",
"Suresh",
""
]
] |
new_dataset
| 0.978633 |
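The runtime monitor built on the learned Twin Neural Lyapunov Function can be
pictured as waypoint screening: among the planner's upcoming waypoints, keep
the first one for which the Lyapunov value decreases under a simulated step.
A minimal sketch, assuming hypothetical `tnlf` and `simulate_step` callables;
the actual monitor and waypoint-selection rule may differ.

```python
def choose_waypoint(state, path, tnlf, simulate_step):
    """Pick the first waypoint whose one-step rollout decreases the
    learned Lyapunov value, i.e., keeps the controller in a region
    the TNLF certifies as converging toward that waypoint."""
    for waypoint in path:
        v_now = tnlf(state, waypoint)
        v_next = tnlf(simulate_step(state, waypoint), waypoint)
        if v_next < v_now:          # Lyapunov decrease condition holds
            return waypoint
    return path[-1]                 # fall back to the final goal
```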