id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2206.05775
|
Linh K\"astner
|
Zhengcheng Shen, Linh K\"astner, Magdalena Yordanova, and Jens
Lambrecht
|
Imagination-augmented Navigation Based on 2D Laser Sensor Observations
|
7 pages, 9 figures
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous navigation of mobile robots is an essential task for various
industries. Sensor data is crucial to ensure safe and reliable navigation.
However, sensor observations are often limited by different factors.
Imagination can help enhance the view and aid navigation in dangerous or
unknown situations where only limited sensor observations are available. In this
paper, we propose an imagination-enhanced navigation approach based on 2D
semantic laser scan data. The system contains an imagination module, which can
predict the entire occupied area of an object. The imagination module is trained
in a supervised manner using a training dataset collected from a 2D simulator.
Four different imagination models are trained, and the imagination results are
evaluated. Subsequently, the imagination results are integrated into the local
and global cost maps to benefit the navigation procedure. The approach is
validated on three different test maps, with seven different paths for each
map. The qualitative and numerical results showed that the agent with the
imagination module could generate more reliable paths without passing beneath
the object, at the cost of a longer path and slower velocity.
|
[
{
"version": "v1",
"created": "Sun, 12 Jun 2022 15:43:18 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Shen",
"Zhengcheng",
""
],
[
"Kästner",
"Linh",
""
],
[
"Yordanova",
"Magdalena",
""
],
[
"Lambrecht",
"Jens",
""
]
] |
new_dataset
| 0.999583 |
2206.05805
|
Yuanxiao Xi
|
Yuanxiao Xi, Xiangliang Kong, and Gennian Ge
|
Optimal Quaternary Locally Repairable Codes Attaining the Singleton-like
Bound
|
23 pages, the Chinese version of this paper will appear in SCIENTIA
SINICA Mathematica (DOI: 10.1360/SSM-2022-0041)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, several new types of codes have been introduced to provide
fault-tolerance and guarantee system reliability in distributed storage
systems, among which locally repairable codes (LRCs for short) have played an
important role.
A linear code is said to have locality $r$ if each of its code symbols can be
repaired by accessing at most $r$ other code symbols. For an LRC with length
$n$, dimension $k$ and locality $r$, its minimum distance $d$ was proved to
satisfy the Singleton-like bound $d\leq n-k-\lceil k/r\rceil+2$. Since then,
much work has been done on constructing LRCs that meet the Singleton-like
bound over small fields.
In this paper, we study quaternary LRCs meeting the Singleton-like bound
through a parity-check matrix approach. Using tools from finite geometry, we
provide some new necessary conditions for LRCs to be optimal. From these, we
prove that there are $27$ different classes of parameters for optimal
quaternary LRCs. Moreover, for each class, explicit constructions of
corresponding optimal quaternary LRCs are presented.
|
[
{
"version": "v1",
"created": "Sun, 12 Jun 2022 17:50:53 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Xi",
"Yuanxiao",
""
],
[
"Kong",
"Xiangliang",
""
],
[
"Ge",
"Gennian",
""
]
] |
new_dataset
| 0.997537 |
2206.05821
|
Benjamin Reidys
|
Benjamin Reidys, Peng Liu, Jian Huang
|
RSSD: Defend against Ransomware with Hardware-Isolated Network-Storage
Codesign and Post-Attack Analysis
|
This extended abstract is 2 pages containing 2 Figures. This abstract
was presented at the 2022 Non-Volatile Memories Workshop (NVMW'22) and the
full paper was published at ASPLOS 2022
| null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Encryption ransomware has become a notorious form of malware. It encrypts user
data on storage devices like solid-state drives (SSDs) and demands a ransom to
restore the data for users. To bypass existing defenses, ransomware keeps
evolving and mounting new attack models. For instance, we identify and
validate three new attacks: (1) a garbage-collection (GC) attack that
exploits storage capacity and keeps writing data to trigger GC and force SSDs
to release the retained data; (2) a timing attack that intentionally slows down
the pace of encrypting data and hides its I/O patterns to escape existing
defenses; and (3) a trimming attack that utilizes the trim command available in
SSDs to physically erase data.
To enhance the robustness of SSDs against these attacks, we propose RSSD, a
ransomware-aware SSD. It redesigns the flash management of SSDs to enable
hardware-assisted logging, which can conservatively retain older versions
of user data and received storage operations in time order with low overhead.
It also employs hardware-isolated NVMe over Ethernet to expand local storage
capacity by transparently offloading the logs to remote cloud/servers in a
secure manner. RSSD enables post-attack analysis by building a trusted evidence
chain of storage operations to assist the investigation of ransomware attacks.
We develop RSSD with a real-world SSD FPGA board. Our evaluation shows that
RSSD can defend against new and future ransomware attacks, while introducing
negligible performance overhead.
|
[
{
"version": "v1",
"created": "Sun, 12 Jun 2022 19:14:51 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Reidys",
"Benjamin",
""
],
[
"Liu",
"Peng",
""
],
[
"Huang",
"Jian",
""
]
] |
new_dataset
| 0.999701 |
2206.05866
|
Lei Wang
|
Lei Wang, Linlin Ge, Shan Luo, Zihan Yan, Zhaopeng Cui and Jieqing
Feng
|
TC-SfM: Robust Track-Community-Based Structure-from-Motion
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Structure-from-Motion (SfM) aims to recover 3D scene structures and camera
poses based on the correspondences between input images, and thus the ambiguity
caused by duplicate structures (i.e., different structures with strong visual
resemblance) always results in incorrect camera poses and 3D structures. To
deal with the ambiguity, most existing studies resort to additional constraint
information or implicit inference by analyzing two-view geometries or feature
points. In this paper, we propose to exploit high-level information in the
scene, i.e., the spatial contextual information of local regions, to guide the
reconstruction. Specifically, a novel structure is proposed, namely,
{\textit{track-community}}, in which each community consists of a group of
tracks and represents a local segment in the scene. A community detection
algorithm is used to partition the scene into several segments. Then, the
potential ambiguous segments are detected by analyzing the neighborhood of
tracks and corrected by checking the pose consistency. Finally, we perform
partial reconstruction on each segment and align them with a novel
bidirectional consistency cost function which considers both 3D-3D
correspondences and pairwise relative camera poses. Experimental results
demonstrate that our approach can robustly alleviate reconstruction failure
resulting from visually indistinguishable structures and accurately merge the
partial reconstructions.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 01:09:12 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Wang",
"Lei",
""
],
[
"Ge",
"Linlin",
""
],
[
"Luo",
"Shan",
""
],
[
"Yan",
"Zihan",
""
],
[
"Cui",
"Zhaopeng",
""
],
[
"Feng",
"Jieqing",
""
]
] |
new_dataset
| 0.999138 |
2206.05927
|
Yunge Cui
|
Yunge Cui, Yinlong Zhang, Jiahua Dong, Haibo Sun and Feng Zhu
|
LinK3D: Linear Keypoints Representation for 3D LiDAR Point Cloud
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature extraction and matching are basic parts of many computer vision
tasks, such as 2D or 3D object detection, recognition, and registration. As is
well known, 2D feature extraction and matching have already achieved great
success. Unfortunately, in the field of 3D, current methods fail to support
the extensive application of 3D LiDAR sensors in vision tasks, due to their poor
descriptiveness and inefficiency. To address this limitation, we propose a
novel 3D feature representation method: Linear Keypoints representation for 3D
LiDAR point clouds, called LinK3D. The novelty of LinK3D lies in that it fully
considers the characteristics (such as the sparsity and complexity of scenarios)
of LiDAR point clouds, and represents the current keypoint with its robust
neighbor keypoints, which provide a strong constraint on its description. The
proposed LinK3D has been evaluated on two public datasets (i.e., KITTI and
Steven VLP16), and the experimental results show that our method greatly
outperforms the state of the art in matching performance. More importantly,
LinK3D shows excellent real-time performance (based on the 10 Hz frequency of
LiDAR). LinK3D takes only an average of 32 milliseconds to extract features
from a point cloud collected by a 64-beam LiDAR, and merely about 8
milliseconds to match two LiDAR scans when executed on a notebook with an Intel
Core i7 @ 2.2 GHz processor. Moreover, our method can be widely extended to a
variety of 3D vision applications. In this paper, we have applied LinK3D to
3D registration, LiDAR odometry and place recognition tasks, and achieved
competitive results compared with state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 06:50:56 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Cui",
"Yunge",
""
],
[
"Zhang",
"Yinlong",
""
],
[
"Dong",
"Jiahua",
""
],
[
"Sun",
"Haibo",
""
],
[
"Zhu",
"Feng",
""
]
] |
new_dataset
| 0.96265 |
2206.05967
|
Evgenii Zheltonozhskii
|
Tom Avrech, Evgenii Zheltonozhskii, Chaim Baskin, Ehud Rivlin
|
GoToNet: Fast Monocular Scene Exposure and Exploration
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Autonomous scene exposure and exploration, especially in localization- or
communication-denied areas, is useful for finding targets in unknown scenes and
remains a challenging problem in computer navigation. In this work, we present
a novel method for real-time environment exploration, whose only requirements
are a visually similar dataset for pre-training, enough lighting in the scene,
and an on-board forward-looking RGB camera for environmental sensing. As
opposed to existing methods, our method requires only one look (image) to make
a good tactical decision, and therefore works in non-growing, constant time.
Two direction predictions, characterized by pixels dubbed the Goto and Lookat
pixels, comprise the core of our method. These pixels encode the recommended
flight instructions in the following way: the Goto pixel defines the direction
in which the agent should move by one distance unit, and the Lookat pixel
defines the direction in which the camera should be pointing in the next
step. These flying-instruction pixels are optimized to expose the largest
amount of currently unexplored area.
Our method presents a novel deep-learning-based navigation approach that is
able to solve this problem and demonstrates its ability in an even more
complicated setup, i.e., when computational power is limited. In addition, we
propose a way to generate a navigation-oriented dataset, enabling efficient
training of our method using RGB and depth images. Tests conducted in a
simulator, evaluating both the sparse-pixel coordinate inference process and 2D
and 3D test flights aimed at unveiling areas and decreasing distances to
targets, achieve promising results. Comparison against a state-of-the-art
algorithm shows that our method outperforms it on the metrics of new voxels per
camera pose, minimum distance to target, percentage of surface voxels seen, and
compute time.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 08:28:31 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Avrech",
"Tom",
""
],
[
"Zheltonozhskii",
"Evgenii",
""
],
[
"Baskin",
"Chaim",
""
],
[
"Rivlin",
"Ehud",
""
]
] |
new_dataset
| 0.998737 |
2206.05973
|
Francesco Belardinelli
|
Rui Li, Francesco Belardinelli
|
A Sahlqvist-style Correspondence Theorem for Linear-time Temporal Logic
|
15 pages + 1 page of references
| null | null | null |
cs.LO cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The language of modal logic is capable of expressing first-order conditions
on Kripke frames. The classic result by Henrik Sahlqvist identifies a
significant class of modal formulas for which first-order conditions -- or
Sahlqvist correspondents -- can be found in an effective, algorithmic way.
Recent works have successfully extended this classic result to more complex
modal languages. In this paper, we pursue a similar line and develop a
Sahlqvist-style correspondence theorem for Linear-time Temporal Logic (LTL),
which is one of the most widely used formal languages for temporal
specification. LTL extends the syntax of basic modal logic with the dedicated
temporal operators Next (X) and Until (U). As a result, the complexity of the
class of formulas that have first-order correspondents also increases
accordingly. In this paper, we identify a significant class of LTL Sahlqvist
formulas built using the modal operators F, G, X, and U. The main result of
this paper is to prove the correspondence of LTL Sahlqvist formulas to frame
conditions that are definable in first-order logic.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 08:36:13 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Li",
"Rui",
""
],
[
"Belardinelli",
"Francesco",
""
]
] |
new_dataset
| 0.998863 |
2206.06083
|
Dietmar Pfahl
|
Kristiina Rahkema and Dietmar Pfahl
|
Dataset: Dependency Networks of Open Source Libraries Available Through
CocoaPods, Carthage and Swift PM
|
5 pages
|
19th International Conference on Mining Software Repositories (MSR
2022)
|
10.1145/3524842.3528016
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Third party libraries are used to integrate existing solutions for common
problems and help speed up development. The use of third party libraries,
however, can carry risks, for example through vulnerabilities in these
libraries. Studying the dependency networks of package managers lets us better
understand and mitigate these risks. So far, the dependency networks of the
three most important package managers of the Apple ecosystem, CocoaPods,
Carthage and Swift PM, have not been studied. We analysed the dependencies for
all publicly available open source libraries up to December 2021 and compiled a
dataset containing the dependency networks of all three package managers. The
dependency networks can be used to analyse how vulnerabilities are propagated
through transitive dependencies. In order to ease the tracing of vulnerable
libraries, we also queried the NVD database and included publicly reported
vulnerabilities for these libraries in the dataset.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 12:13:28 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Rahkema",
"Kristiina",
""
],
[
"Pfahl",
"Dietmar",
""
]
] |
new_dataset
| 0.999559 |
2206.06141
|
Soumitra Ghosh
|
Soumitra Ghosh, Asif Ekbal and Pushpak Bhattacharyya
|
Am I No Good? Towards Detecting Perceived Burdensomeness and Thwarted
Belongingness from Suicide Notes
|
Accepted for publication at IJCAI-ECAI 2022 (AI for Good Track)
| null | null | null |
cs.CL cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The World Health Organization (WHO) has emphasized the importance of
significantly accelerating suicide prevention efforts to fulfill the United
Nations' Sustainable Development Goal (SDG) objective of 2030. In this paper,
we present an end-to-end multitask system to address a novel task of detection
of two interpersonal risk factors of suicide, Perceived Burdensomeness (PB) and
Thwarted Belongingness (TB) from suicide notes. We also introduce a manually
translated code-mixed suicide notes corpus, CoMCEASE-v2.0, based on the
benchmark CEASE-v2.0 dataset, annotated with temporal orientation, PB and TB
labels. We exploit the temporal orientation and emotion information in the
suicide notes to boost overall performance. For a comprehensive evaluation of
our proposed method, we compare it to several state-of-the-art approaches on the
existing CEASE-v2.0 dataset and the newly announced CoMCEASE-v2.0 dataset.
Empirical evaluation suggests that temporal and emotional information can
substantially improve the detection of PB and TB.
|
[
{
"version": "v1",
"created": "Fri, 20 May 2022 06:31:08 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Ghosh",
"Soumitra",
""
],
[
"Ekbal",
"Asif",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
new_dataset
| 0.994456 |
2206.06260
|
Rahul Pandita
|
Dylan Lee and Austin Henley and Bill Hinshaw and Rahul Pandita
|
OpenCBS: An Open-Source COBOL Defects Benchmark Suite
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
As the current COBOL workforce retires, entry-level developers are left to
keep complex legacy systems maintained and operational. This creates a massive
gap in knowledge and ability as companies are having their veteran developers
replaced with a new, inexperienced workforce. Additionally, the lack of COBOL
and mainframe technology in the current academic curriculum further increases
the learning curve for this new generation of developers. These issues are
becoming even more pressing due to the business-critical nature of these
systems, which makes migrating or replacing the mainframe and COBOL anytime
soon very unlikely. As a result, there is now a huge need for tools and
resources to increase new developers' code comprehension and ability to perform
routine tasks such as debugging and defect location. Extensive work has been
done in the software engineering field on the creation of such resources.
However, the proprietary nature of COBOL and mainframe systems has restricted
the amount of work and the number of open-source tools available for this
domain. To address this issue, our work leverages the publicly available
technical forum data to build an open-source collection of COBOL programs
embodying issues/defects faced by COBOL developers. These programs were
reconstructed and organized in a benchmark suite to facilitate the testing of
developer tools. Our goal is to provide an open-source COBOL benchmark and
testing suite that encourages community contribution and serves as a resource for
researchers and tool-smiths in this domain.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 15:42:31 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Lee",
"Dylan",
""
],
[
"Henley",
"Austin",
""
],
[
"Hinshaw",
"Bill",
""
],
[
"Pandita",
"Rahul",
""
]
] |
new_dataset
| 0.996986 |
2206.06315
|
Kun Zhou
|
Wayne Xin Zhao, Kun Zhou, Zheng Gong, Beichen Zhang, Yuanhang Zhou,
Jing Sha, Zhigang Chen, Shijin Wang, Cong Liu, Ji-Rong Wen
|
JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem
Understanding
|
11 pages, Accepted by KDD 2022
| null |
10.1145/3534678.3539131
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims to advance the mathematical intelligence of machines by
presenting the first Chinese mathematical pre-trained language model~(PLM) for
effectively understanding and representing mathematical problems. Unlike other
standard NLP tasks, mathematical texts are difficult to understand, since they
involve mathematical terminology, symbols and formulas in the problem
statement. Typically, it requires complex mathematical logic and background
knowledge for solving mathematical problems.
Considering the complex nature of mathematical texts, we design a novel
curriculum pre-training approach for improving the learning of mathematical
PLMs, consisting of both basic and advanced courses. Specifically, we first
perform token-level pre-training based on a position-biased masking strategy,
and then design logic-based pre-training tasks that aim to recover the shuffled
sentences and formulas, respectively. Finally, we introduce a more difficult
pre-training task that enforces the PLM to detect and correct the errors in its
generated solutions. We conduct extensive experiments on offline evaluation
(including nine math-related tasks) and online $A/B$ test. Experimental results
demonstrate the effectiveness of our approach compared with a number of
competitive baselines. Our code is available at:
\textcolor{blue}{\url{https://github.com/RUCAIBox/JiuZhang}}.
|
[
{
"version": "v1",
"created": "Mon, 13 Jun 2022 17:03:52 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Zhao",
"Wayne Xin",
""
],
[
"Zhou",
"Kun",
""
],
[
"Gong",
"Zheng",
""
],
[
"Zhang",
"Beichen",
""
],
[
"Zhou",
"Yuanhang",
""
],
[
"Sha",
"Jing",
""
],
[
"Chen",
"Zhigang",
""
],
[
"Wang",
"Shijin",
""
],
[
"Liu",
"Cong",
""
],
[
"Wen",
"Ji-Rong",
""
]
] |
new_dataset
| 0.998343 |
2206.06320
|
Shivam Agarwal Mr
|
Ramit Sawhney, Shivam Agarwal, Vivek Mittal, Paolo Rosso, Vikram
Nanda, Sudheer Chava
|
Cryptocurrency Bubble Detection: A New Stock Market Dataset, Financial
Task & Hyperbolic Models
|
Proceedings of the 2022 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language Technologies
| null | null | null |
cs.CL cs.AI cs.LG cs.SI q-fin.ST
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid spread of information over social media influences quantitative
trading and investments. The growing popularity of speculative trading of
highly volatile assets such as cryptocurrencies and meme stocks presents a
fresh challenge in the financial realm. Investigating such "bubbles" - periods
of sudden anomalous market behavior - is critical to better understanding
investor behavior and market dynamics. However, high volatility coupled with
massive volumes of chaotic social media texts, especially for underexplored
assets like cryptocoins, poses a challenge to existing methods. Taking the first
step towards NLP for cryptocoins, we present and publicly release
CryptoBubbles, a novel multi-span identification task for bubble detection, and
a dataset of more than 400 cryptocoins from 9 exchanges over five years
spanning over two million tweets. Further, we develop a set of
sequence-to-sequence hyperbolic models suited to this multi-span identification
task based on the power-law dynamics of cryptocurrencies and user behavior on
social media. We further test the effectiveness of our models under zero-shot
settings on a test set of Reddit posts pertaining to 29 "meme stocks", which
see an increase in trade volume due to social media hype. Through quantitative,
qualitative, and zero-shot analyses on Reddit and Twitter spanning cryptocoins
and meme-stocks, we show the practical applicability of CryptoBubbles and
hyperbolic models.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2022 08:10:02 GMT"
}
] | 2022-06-14T00:00:00 |
[
[
"Sawhney",
"Ramit",
""
],
[
"Agarwal",
"Shivam",
""
],
[
"Mittal",
"Vivek",
""
],
[
"Rosso",
"Paolo",
""
],
[
"Nanda",
"Vikram",
""
],
[
"Chava",
"Sudheer",
""
]
] |
new_dataset
| 0.999844 |
2006.03535
|
Alvin Chan
|
Alvin Chan, Yew-Soon Ong, Bill Pung, Aston Zhang, Jie Fu
|
CoCon: A Self-Supervised Approach for Controlled Text Generation
|
ICLR 2021 Camera-Ready
| null | null | null |
cs.CL cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pretrained Transformer-based language models (LMs) display remarkable natural
language generation capabilities. With their immense potential, controlling
text generation of such LMs is getting attention. While there are studies that
seek to control high-level attributes (such as sentiment and topic) of
generated text, there is still a lack of more precise control over its content
at the word- and phrase-level. Here, we propose Content-Conditioner (CoCon) to
control an LM's output text with a content input, at a fine-grained level. In
our self-supervised approach, the CoCon block learns to help the LM complete a
partially-observed text sequence by conditioning with content inputs that are
withheld from the LM. Through experiments, we show that CoCon can naturally
incorporate target content into generated texts and control high-level text
attributes in a zero-shot manner.
|
[
{
"version": "v1",
"created": "Fri, 5 Jun 2020 16:15:46 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Mar 2021 14:23:42 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Jun 2022 03:58:27 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Chan",
"Alvin",
""
],
[
"Ong",
"Yew-Soon",
""
],
[
"Pung",
"Bill",
""
],
[
"Zhang",
"Aston",
""
],
[
"Fu",
"Jie",
""
]
] |
new_dataset
| 0.967844 |
2010.14982
|
Rui Dai
|
Rui Dai, Srijan Das, Saurav Sharma, Luca Minciullo, Lorenzo Garattoni,
Francois Bremond, Gianpiero Francesca
|
Toyota Smarthome Untrimmed: Real-World Untrimmed Videos for Activity
Detection
|
Toyota Smarthome Untrimmed dataset, project page:
https://project.inria.fr/toyotasmarthome
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Designing activity detection systems that can be successfully deployed in
daily-living environments requires datasets that pose the challenges typical of
real-world scenarios. In this paper, we introduce a new untrimmed daily-living
dataset that features several real-world challenges: Toyota Smarthome Untrimmed
(TSU). TSU contains a wide variety of activities performed in a spontaneous
manner. The dataset contains dense annotations including elementary, composite
activities and activities involving interactions with objects. We provide an
analysis of the real-world challenges featured by our dataset, highlighting the
open issues for detection algorithms. We show that current state-of-the-art
methods fail to achieve satisfactory performance on the TSU dataset. Therefore,
we propose a new baseline method for activity detection to tackle the novel
challenges provided by our dataset. This method leverages one modality (i.e.,
optical flow) to generate attention weights that guide another modality (i.e.,
RGB) to better detect the activity boundaries. This is particularly beneficial
to detect activities characterized by high temporal variance. We show that the
method we propose outperforms state-of-the-art methods on TSU and on another
popular challenging dataset, Charades.
|
[
{
"version": "v1",
"created": "Wed, 28 Oct 2020 13:47:16 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 10:50:48 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Dai",
"Rui",
""
],
[
"Das",
"Srijan",
""
],
[
"Sharma",
"Saurav",
""
],
[
"Minciullo",
"Luca",
""
],
[
"Garattoni",
"Lorenzo",
""
],
[
"Bremond",
"Francois",
""
],
[
"Francesca",
"Gianpiero",
""
]
] |
new_dataset
| 0.999808 |
2103.10107
|
Lukáš Picek
|
Lukáš Picek, Milan Šulc, Jiří Matas, Jacob
Heilmann-Clausen, Thomas S. Jeppesen, Thomas Læssøe, Tobias Frøslev
|
Danish Fungi 2020 -- Not Just Another Image Recognition Dataset
| null | null |
10.1109/WACV51458.2022.00334
| null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a novel fine-grained dataset and benchmark, the Danish Fungi
2020 (DF20). The dataset, constructed from observations submitted to the Atlas
of Danish Fungi, is unique in its taxonomy-accurate class labels, small number
of errors, highly unbalanced long-tailed class distribution, rich observation
metadata, and well-defined class hierarchy. DF20 has zero overlap with
ImageNet, allowing unbiased comparison of models fine-tuned from publicly
available ImageNet checkpoints. The proposed evaluation protocol enables
testing the ability to improve classification using metadata -- e.g., precise
geographic location, habitat, and substrate -- facilitates classifier calibration
testing, and finally allows studying the impact of device settings on
classification performance. Experiments using Convolutional Neural Networks
(CNN) and the recent Vision Transformers (ViT) show that DF20 presents a
challenging task. Interestingly, ViT achieves results superior to CNN baselines
with 80.45% accuracy and 0.743 macro F1 score, reducing the CNN error by 9% and
12% respectively. A simple procedure for including metadata into the decision
process improves the classification accuracy by more than 2.95 percentage
points, reducing the error rate by 15%. The source code for all methods and
experiments is available at https://sites.google.com/view/danish-fungi-dataset.
|
[
{
"version": "v1",
"created": "Thu, 18 Mar 2021 09:33:11 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Mar 2021 12:15:47 GMT"
},
{
"version": "v3",
"created": "Mon, 22 Mar 2021 08:43:04 GMT"
},
{
"version": "v4",
"created": "Fri, 20 Aug 2021 14:35:44 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Picek",
"Lukáš",
""
],
[
"Šulc",
"Milan",
""
],
[
"Matas",
"Jiří",
""
],
[
"Heilmann-Clausen",
"Jacob",
""
],
[
"Jeppesen",
"Thomas S.",
""
],
[
"Læssøe",
"Thomas",
""
],
[
"Frøslev",
"Tobias",
""
]
] |
new_dataset
| 0.999638 |
2108.07140
|
Yiran Chen
|
Yiran Chen, Zhenqiao Song, Xianze Wu, Danqing Wang, Jingjing Xu, Jiaze
Chen, Hao Zhou, Lei Li
|
MTG: A Benchmark Suite for Multilingual Text Generation
|
NAACL2022 findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce MTG, a new benchmark suite for training and evaluating
multilingual text generation. It is the first multilingual multiway text
generation dataset, with the largest amount of human-annotated data (400k). It
includes four generation tasks (story generation, question generation, title
generation and text summarization) across five languages (English, German,
French, Spanish and Chinese). The multiway setup enables testing knowledge
transfer capabilities for a model across languages and tasks. Using MTG, we
train and analyze several popular multilingual generation models from different
aspects. Our benchmark suite fosters model performance enhancement with more
human-annotated parallel data. It provides comprehensive evaluations with
diverse generation scenarios. Code and data are available at
\url{https://github.com/zide05/MTG}.
|
[
{
"version": "v1",
"created": "Fri, 13 Aug 2021 13:25:08 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 00:41:28 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Chen",
"Yiran",
""
],
[
"Song",
"Zhenqiao",
""
],
[
"Wu",
"Xianze",
""
],
[
"Wang",
"Danqing",
""
],
[
"Xu",
"Jingjing",
""
],
[
"Chen",
"Jiaze",
""
],
[
"Zhou",
"Hao",
""
],
[
"Li",
"Lei",
""
]
] |
new_dataset
| 0.999791 |
2109.10957
|
Stefan Bauer
|
Stefan Bauer and Felix Widmaier and Manuel W\"uthrich and Annika
Buchholz and Sebastian Stark and Anirudh Goyal and Thomas Steinbrenner and
Joel Akpo and Shruti Joshi and Vincent Berenz and Vaibhav Agrawal and Niklas
Funk and Julen Urain De Jesus and Jan Peters and Joe Watson and Claire Chen
and Krishnan Srinivasan and Junwu Zhang and Jeffrey Zhang and Matthew R.
Walter and Rishabh Madan and Charles Schaff and Takahiro Maeda and Takuma
Yoneda and Denis Yarats and Arthur Allshire and Ethan K. Gordon and
Tapomayukh Bhattacharjee and Siddhartha S. Srinivasa and Animesh Garg and
Harshit Sikchi and Jilong Wang and Qingfeng Yao and Shuyu Yang and Robert
McCarthy and Francisco Roldan Sanchez and Qiang Wang and David Cordova Bulens
and Kevin McGuinness and Noel O'Connor and Stephen J. Redmond and Bernhard
Sch\"olkopf
|
Real Robot Challenge: A Robotics Competition in the Cloud
| null | null | null | null |
cs.RO stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dexterous manipulation remains an open problem in robotics. To coordinate
efforts of the research community towards tackling this problem, we propose a
shared benchmark. We designed and built robotic platforms that are hosted at
MPI for Intelligent Systems and can be accessed remotely. Each platform
consists of three robotic fingers that are capable of dexterous object
manipulation. Users are able to control the platforms remotely by submitting
code that is executed automatically, akin to a computational cluster. Using
this setup, i) we host robotics competitions, where teams from anywhere in the
world access our platforms to tackle challenging tasks; ii) we publish the
datasets collected during these competitions (consisting of hundreds of robot
hours), and iii) we give researchers access to these platforms for their own
projects.
|
[
{
"version": "v1",
"created": "Wed, 22 Sep 2021 18:22:35 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 09:35:31 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Bauer",
"Stefan",
""
],
[
"Widmaier",
"Felix",
""
],
[
"Wüthrich",
"Manuel",
""
],
[
"Buchholz",
"Annika",
""
],
[
"Stark",
"Sebastian",
""
],
[
"Goyal",
"Anirudh",
""
],
[
"Steinbrenner",
"Thomas",
""
],
[
"Akpo",
"Joel",
""
],
[
"Joshi",
"Shruti",
""
],
[
"Berenz",
"Vincent",
""
],
[
"Agrawal",
"Vaibhav",
""
],
[
"Funk",
"Niklas",
""
],
[
"De Jesus",
"Julen Urain",
""
],
[
"Peters",
"Jan",
""
],
[
"Watson",
"Joe",
""
],
[
"Chen",
"Claire",
""
],
[
"Srinivasan",
"Krishnan",
""
],
[
"Zhang",
"Junwu",
""
],
[
"Zhang",
"Jeffrey",
""
],
[
"Walter",
"Matthew R.",
""
],
[
"Madan",
"Rishabh",
""
],
[
"Schaff",
"Charles",
""
],
[
"Maeda",
"Takahiro",
""
],
[
"Yoneda",
"Takuma",
""
],
[
"Yarats",
"Denis",
""
],
[
"Allshire",
"Arthur",
""
],
[
"Gordon",
"Ethan K.",
""
],
[
"Bhattacharjee",
"Tapomayukh",
""
],
[
"Srinivasa",
"Siddhartha S.",
""
],
[
"Garg",
"Animesh",
""
],
[
"Sikchi",
"Harshit",
""
],
[
"Wang",
"Jilong",
""
],
[
"Yao",
"Qingfeng",
""
],
[
"Yang",
"Shuyu",
""
],
[
"McCarthy",
"Robert",
""
],
[
"Sanchez",
"Francisco Roldan",
""
],
[
"Wang",
"Qiang",
""
],
[
"Bulens",
"David Cordova",
""
],
[
"McGuinness",
"Kevin",
""
],
[
"O'Connor",
"Noel",
""
],
[
"Redmond",
"Stephen J.",
""
],
[
"Schölkopf",
"Bernhard",
""
]
] |
new_dataset
| 0.998398 |
2110.06915
|
Roei Herzig
|
Roei Herzig, Elad Ben-Avraham, Karttikeya Mangalam, Amir Bar, Gal
Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson
|
Object-Region Video Transformers
|
CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, video transformers have shown great success in video understanding,
exceeding CNN performance; yet existing video transformer models do not
explicitly model objects, although objects can be essential for recognizing
actions. In this work, we present Object-Region Video Transformers (ORViT), an
\emph{object-centric} approach that extends video transformer layers with a
block that directly incorporates object representations. The key idea is to
fuse object-centric representations starting from early layers and propagate
them into the transformer-layers, thus affecting the spatio-temporal
representations throughout the network. Our ORViT block consists of two
object-level streams: appearance and dynamics. In the appearance stream, an
"Object-Region Attention" module applies self-attention over the patches and
\emph{object regions}. In this way, visual object regions interact with uniform
patch tokens and enrich them with contextualized object information. We further
model object dynamics via a separate "Object-Dynamics Module", which captures
trajectory interactions, and show how to integrate the two streams. We evaluate
our model on four tasks and five datasets: compositional and few-shot action
recognition on SomethingElse, spatio-temporal action detection on AVA, and
standard action recognition on Something-Something V2, Diving48 and
Epic-Kitchen100. We show strong performance improvement across all tasks and
datasets considered, demonstrating the value of a model that incorporates
object representations into a transformer architecture. For code and pretrained
models, visit the project page at \url{https://roeiherz.github.io/ORViT/}
|
[
{
"version": "v1",
"created": "Wed, 13 Oct 2021 17:51:46 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Nov 2021 15:49:19 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2022 20:48:45 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Herzig",
"Roei",
""
],
[
"Ben-Avraham",
"Elad",
""
],
[
"Mangalam",
"Karttikeya",
""
],
[
"Bar",
"Amir",
""
],
[
"Chechik",
"Gal",
""
],
[
"Rohrbach",
"Anna",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Globerson",
"Amir",
""
]
] |
new_dataset
| 0.999506 |
2201.11500
|
V. Javier Traver
|
Javier Marina-Miranda, V. Javier Traver
|
Head and eye egocentric gesture recognition for human-robot interaction
using eyewear cameras
|
Copyright 2022 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
|
IEEE Robotics and Automation Letters, 2022
|
10.1109/LRA.2022.3180442
| null |
cs.CV cs.HC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-verbal communication plays a particularly important role in a wide range
of scenarios in Human-Robot Interaction (HRI). Accordingly, this work addresses
the problem of human gesture recognition. In particular, we focus on head and
eye gestures, and adopt an egocentric (first-person) perspective using eyewear
cameras. We argue that this egocentric view may offer a number of conceptual
and technical benefits over scene- or robot-centric perspectives. A
motion-based recognition approach is proposed, which operates at two temporal
granularities. Locally, frame-to-frame homographies are estimated with a
convolutional neural network (CNN). The output of this CNN is input to a long
short-term memory (LSTM) to capture longer-term temporal visual relationships,
which are relevant to characterize gestures. Regarding the configuration of the
network architecture, one particularly interesting finding is that using the
output of an internal layer of the homography CNN increases the recognition
rate with respect to using the homography matrix itself. While this work
focuses on action recognition, and no robot or user study has been conducted
yet, the system has been designed to meet real-time constraints. The
encouraging results suggest that the proposed egocentric perspective is viable,
and this proof-of-concept work provides novel and useful contributions to the
exciting area of HRI.
|
[
{
"version": "v1",
"created": "Thu, 27 Jan 2022 13:26:05 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 17:29:26 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Marina-Miranda",
"Javier",
""
],
[
"Traver",
"V. Javier",
""
]
] |
new_dataset
| 0.975156 |
2203.09127
|
Jizhou Huang
|
Jizhou Huang, Haifeng Wang, Yibo Sun, Yunsheng Shi, Zhengjie Huang, An
Zhuo, Shikun Feng
|
ERNIE-GeoL: A Geography-and-Language Pre-trained Model and its
Applications in Baidu Maps
|
Accepted by KDD 2022, camera-ready version
| null |
10.1145/3534678.3539021
| null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained models (PTMs) have become a fundamental backbone for downstream
tasks in natural language processing and computer vision. Despite initial gains
that were obtained by applying generic PTMs to geo-related tasks at Baidu Maps,
a clear performance plateau over time was observed. One of the main reasons for
this plateau is the lack of readily available geographic knowledge in generic
PTMs. To address this problem, in this paper, we present ERNIE-GeoL, which is a
geography-and-language pre-trained model designed and developed for improving
the geo-related tasks at Baidu Maps. ERNIE-GeoL is elaborately designed to
learn a universal representation of geography-language by pre-training on
large-scale data generated from a heterogeneous graph that contains abundant
geographic knowledge. Extensive quantitative and qualitative experiments
conducted on large-scale real-world datasets demonstrate the superiority and
effectiveness of ERNIE-GeoL. ERNIE-GeoL has already been deployed in production
at Baidu Maps since April 2021, which significantly benefits the performance of
various downstream tasks. This demonstrates that ERNIE-GeoL can serve as a
fundamental backbone for a wide range of geo-related tasks.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2022 07:07:33 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2022 01:29:32 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Jun 2022 08:31:18 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Huang",
"Jizhou",
""
],
[
"Wang",
"Haifeng",
""
],
[
"Sun",
"Yibo",
""
],
[
"Shi",
"Yunsheng",
""
],
[
"Huang",
"Zhengjie",
""
],
[
"Zhuo",
"An",
""
],
[
"Feng",
"Shikun",
""
]
] |
new_dataset
| 0.999556 |
2204.10380
|
Zheng Tang
|
Milind Naphade, Shuo Wang, David C. Anastasiu, Zheng Tang, Ming-Ching
Chang, Yue Yao, Liang Zheng, Mohammed Shaiqur Rahman, Archana
Venkatachalapathy, Anuj Sharma, Qi Feng, Vitaly Ablavsky, Stan Sclaroff,
Pranamesh Chakraborty, Alice Li, Shangru Li and Rama Chellappa
|
The 6th AI City Challenge
|
Summary of the 6th AI City Challenge Workshop in conjunction with
CVPR 2022. arXiv admin note: text overlap with arXiv:2104.12233
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The 6th edition of the AI City Challenge specifically focuses on problems in
two domains where there is tremendous unlocked potential at the intersection of
computer vision and artificial intelligence: Intelligent Traffic Systems (ITS),
and brick-and-mortar retail businesses. The four challenge tracks of the 2022
AI City Challenge received participation requests from 254 teams across 27
countries. Track 1 addressed city-scale multi-target multi-camera (MTMC)
vehicle tracking. Track 2 addressed natural-language-based vehicle track
retrieval. Track 3 was a brand new track for naturalistic driving analysis,
where the data were captured by several cameras mounted inside the vehicle
focusing on driver safety, and the task was to classify driver actions. Track 4
was another new track aiming to achieve retail store automated checkout using
only a single view camera. We released two leader boards for submissions based
on different methods, including a public leader board for the contest, where no
use of external data is allowed, and a general leader board for all submitted
results. The top performance of participating teams established strong
baselines and even outperformed the state-of-the-art in the proposed challenge
tracks.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 19:24:17 GMT"
},
{
"version": "v2",
"created": "Tue, 10 May 2022 19:14:50 GMT"
},
{
"version": "v3",
"created": "Tue, 17 May 2022 05:10:50 GMT"
},
{
"version": "v4",
"created": "Thu, 9 Jun 2022 22:52:22 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Naphade",
"Milind",
""
],
[
"Wang",
"Shuo",
""
],
[
"Anastasiu",
"David C.",
""
],
[
"Tang",
"Zheng",
""
],
[
"Chang",
"Ming-Ching",
""
],
[
"Yao",
"Yue",
""
],
[
"Zheng",
"Liang",
""
],
[
"Rahman",
"Mohammed Shaiqur",
""
],
[
"Venkatachalapathy",
"Archana",
""
],
[
"Sharma",
"Anuj",
""
],
[
"Feng",
"Qi",
""
],
[
"Ablavsky",
"Vitaly",
""
],
[
"Sclaroff",
"Stan",
""
],
[
"Chakraborty",
"Pranamesh",
""
],
[
"Li",
"Alice",
""
],
[
"Li",
"Shangru",
""
],
[
"Chellappa",
"Rama",
""
]
] |
new_dataset
| 0.998742 |
2205.02510
|
Bihui Zou
|
Bihui Zou, Chao Song, Zipeng He, Jaehyung Ju
|
Encoding of direct 4D printing of isotropic single-material system for
double-curvature and multimodal morphing
| null |
Extreme Mech. Lett. 54 (2022) 101779
|
10.1016/j.eml.2022.101779
| null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The ability to morph flat sheets into complex 3D shapes is extremely useful
for fast manufacturing and saving materials while also allowing volumetrically
efficient storage and shipment and a functional use. Direct 4D printing is a
compelling method to morph complex 3D shapes out of as-printed 2D plates.
However, most direct 4D printing methods require multi-material systems
involving costly machines. Moreover, most works have used an open-cell design
for shape shifting by encoding a collection of 1D rib deformations, which
cannot remain structurally stable. Here, we demonstrate the direct 4D printing
of an isotropic single-material system to morph 2D continuous bilayer plates
into doubly curved and multimodal 3D complex shapes whose geometry can also be
locked after deployment. We develop an inverse-design algorithm that integrates
extrusion-based 3D printing of a single-material system to directly morph a raw
printed sheet into complex 3D geometries such as a doubly curved surface with
shape locking. Furthermore, our inverse-design tool encodes the localized
shape-memory anisotropy during the process, providing the processing conditions
for a target 3D morphed geometry. Our approach could be used for conventional
extrusion-based 3D printing for various applications including biomedical
devices, deployable structures, smart textiles, and pop-up Kirigami structures.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 08:38:47 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Zou",
"Bihui",
""
],
[
"Song",
"Chao",
""
],
[
"He",
"Zipeng",
""
],
[
"Ju",
"Jaehyung",
""
]
] |
new_dataset
| 0.970967 |
2205.02880
|
Farima Fatahi Bayat
|
Farima Fatahi Bayat, Nikita Bhutani, H.V. Jagadish
|
CompactIE: Compact Facts in Open Information Extraction
|
NAACL 2022 main conference (Long paper)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major drawback of modern neural OpenIE systems and benchmarks is that they
prioritize high coverage of information in extractions over compactness of
their constituents. This severely limits the usefulness of OpenIE extractions
in many downstream tasks. The utility of extractions can be improved if
extractions are compact and share constituents. To this end, we study the
problem of identifying compact extractions with neural-based methods. We
propose CompactIE, an OpenIE system that uses a novel pipelined approach to
produce compact extractions with overlapping constituents. It first detects
constituents of the extractions and then links them to build extractions. We
train our system on compact extractions obtained by processing existing
benchmarks. Our experiments on CaRB and Wire57 datasets indicate that CompactIE
finds 1.5x-2x more compact extractions than previous systems, with high
precision, establishing a new state-of-the-art performance in OpenIE.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2022 18:27:41 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 22:44:23 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Bayat",
"Farima Fatahi",
""
],
[
"Bhutani",
"Nikita",
""
],
[
"Jagadish",
"H. V.",
""
]
] |
new_dataset
| 0.997851 |
2206.02144
|
Joshua Hunte
|
Joshua Hunte, Martin Neil, Norman Fenton
|
Product safety idioms: a method for building causal Bayesian networks
for product safety and risk assessment
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Idioms are small, reusable Bayesian network (BN) fragments that represent
generic types of uncertain reasoning. This paper shows how idioms can be used
to build causal BNs for product safety and risk assessment that use a
combination of data and knowledge. We show that the specific product safety
idioms that we introduce are sufficient to build full BN models to evaluate
safety and risk for a wide range of products. The resulting models can be used
by safety regulators and product manufacturers even when there are limited (or
no) product testing data.
|
[
{
"version": "v1",
"created": "Sun, 5 Jun 2022 10:16:03 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 18:04:35 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Hunte",
"Joshua",
""
],
[
"Neil",
"Martin",
""
],
[
"Fenton",
"Norman",
""
]
] |
new_dataset
| 0.972332 |
2206.03644
|
Yunzhe Qi
|
Yunzhe Qi, Yikun Ban, Jingrui He
|
Neural Bandit with Arm Group Graph
|
Accepted to SIGKDD 2022
| null |
10.1145/3534678.3539312
| null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Contextual bandits aim to identify among a set of arms the optimal one with
the highest reward based on their contextual information. Motivated by the fact
that arms usually exhibit group behaviors and that mutual impacts exist
among groups, we introduce a new model, Arm Group Graph (AGG), where the nodes
represent the groups of arms and the weighted edges formulate the correlations
among groups. To leverage the rich information in AGG, we propose a bandit
algorithm, AGG-UCB, where the neural networks are designed to estimate rewards,
and we propose to utilize graph neural networks (GNN) to learn the
representations of arm groups with correlations. To solve the
exploitation-exploration dilemma in bandits, we derive a new upper confidence
bound (UCB) built on neural networks (exploitation) for exploration.
Furthermore, we prove that AGG-UCB can achieve a near-optimal regret bound with
over-parameterized neural networks, and provide the convergence analysis of GNN
with fully-connected layers which may be of independent interest. In the end,
we conduct extensive experiments against state-of-the-art baselines on multiple
public data sets, showing the effectiveness of the proposed algorithm.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 02:16:11 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2022 03:34:35 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Qi",
"Yunzhe",
""
],
[
"Ban",
"Yikun",
""
],
[
"He",
"Jingrui",
""
]
] |
new_dataset
| 0.981362 |
2206.04688
|
Jijoong Moon
|
Ji Joong Moon, Parichay Kapoor, Ji Hoon Lee, Myung Joo Ham, Hyun Suk
Lee
|
NNTrainer: Light-Weight On-Device Training Framework
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Modern consumer electronic devices have adopted deep learning-based
intelligence services for their key features. Vendors have recently started to
execute intelligence services on devices to preserve personal data on devices
and to reduce network and cloud costs. We see this trend as an opportunity to
personalize intelligence services by updating neural networks with user data
without exposing the data outside of devices: on-device training. For example,
we may add a new class, my dog Alpha, for robotic vacuums, adapt speech
recognition to the user's accent, or let text-to-speech speak as the user
does. However, the resource limitations of target devices incur significant
difficulties. We propose NNTrainer, a light-weight on-device training
framework. We describe optimization techniques for neural networks implemented
by NNTrainer, which are evaluated along with conventional techniques. The
evaluations show that NNTrainer can reduce memory consumption down to 1/28
without deteriorating accuracy or training time, and can effectively
personalize applications on devices. NNTrainer is cross-platform, practical
open-source software that is being deployed to millions of devices within the
authors' affiliation.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 08:27:59 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Moon",
"Ji Joong",
""
],
[
"Kapoor",
"Parichay",
""
],
[
"Lee",
"Ji Hoon",
""
],
[
"Ham",
"Myung Joo",
""
],
[
"Lee",
"Hyun Suk",
""
]
] |
new_dataset
| 0.988494 |
2206.04785
|
Jinman Park
|
Jinman Park, Kimathi Kaai, Saad Hossain, Norikatsu Sumi, Sirisha
Rambhatla, Paul Fieguth
|
Building Spatio-temporal Transformers for Egocentric 3D Pose Estimation
|
4 pages, Extended abstract, Joint International Workshop on
Egocentric Perception, Interaction and Computing (EPIC) and Ego4D, IEEE/CVF
Computer Vision and Pattern Recognition Conference (CVPR), 2022
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Egocentric 3D human pose estimation (HPE) from images is challenging due to
severe self-occlusions and strong distortion introduced by the fish-eye view
from the head mounted camera. Although existing works use intermediate
heatmap-based representations to counter distortion with some success,
addressing self-occlusion remains an open problem. In this work, we leverage
information from past frames to guide our self-attention-based 3D HPE
estimation procedure -- Ego-STAN. Specifically, we build a spatio-temporal
Transformer model that attends to semantically rich convolutional neural
network-based feature maps. We also propose feature map tokens: a new set of
learnable parameters to attend to these feature maps. Finally, we demonstrate
Ego-STAN's superior performance on the xR-EgoPose dataset where it achieves a
30.6% improvement on the overall mean per-joint position error, while leading
to a 22% drop in parameters compared to the state-of-the-art.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 22:33:27 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Park",
"Jinman",
""
],
[
"Kaai",
"Kimathi",
""
],
[
"Hossain",
"Saad",
""
],
[
"Sumi",
"Norikatsu",
""
],
[
"Rambhatla",
"Sirisha",
""
],
[
"Fieguth",
"Paul",
""
]
] |
new_dataset
| 0.95371 |
2206.04853
|
Yuliang Li
|
Jin Wang, Yuliang Li, Wataru Hirota, Eser Kandogan
|
Machop: an End-to-End Generalized Entity Matching Framework
|
aiDM 2022
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Real-world applications frequently seek to solve a general form of the Entity
Matching (EM) problem to find associated entities. Such scenarios include
matching jobs to candidates in job targeting, matching students with courses in
online education, matching products with user reviews on e-commercial websites,
and beyond. These tasks impose new requirements such as matching data entries
with diverse formats or having a flexible and semantics-rich matching
definition, which are beyond the current EM task formulation or approaches. In
this paper, we introduce the problem of Generalized Entity Matching (GEM),
which satisfies these practical requirements, and present an end-to-end
pipeline, Machop, as the solution. Machop allows end-users to define new
matching tasks from scratch and apply them to new domains in a step-by-step
manner. Machop casts the GEM problem as sequence pair classification so as to
utilize the language understanding capability of Transformer-based language
models (LMs)
such as BERT. Moreover, it features a novel external knowledge injection
approach with structure-aware pooling methods that allow domain experts to
guide the LM to focus on the key matching information thus further contributing
to the overall performance. Our experiments and case studies on real-world
datasets from a popular recruiting platform show a significant 17.1% gain in F1
score against state-of-the-art methods along with meaningful matching results
that are human-understandable.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 02:59:58 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Wang",
"Jin",
""
],
[
"Li",
"Yuliang",
""
],
[
"Hirota",
"Wataru",
""
],
[
"Kandogan",
"Eser",
""
]
] |
new_dataset
| 0.99336 |
2206.04874
|
Armstrong Aboah
|
Ashkan Behzadian, Tanner Wambui Muturi, Tianjie Zhang, Hongmin Kim,
Amanda Mullins, Yang Lu, Neema Jasika Owor, Yaw Adu-Gyamfi, William Buttlar,
Majidifard Hamed, Armstrong Aboah, David Mensching, Spragg Robert, Matthew
Corrigan, Jack Youtchef, Dave Eshan
|
The 1st Data Science for Pavements Challenge
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Data Science for Pavement Challenge (DSPC) seeks to accelerate the
research and development of automated vision systems for pavement condition
monitoring and evaluation by providing a platform with benchmarked datasets and
codes for teams to innovate and develop machine learning algorithms that are
practice-ready for use by industry. The first edition of the competition
attracted 22 teams from 8 countries. Participants were required to
automatically detect and classify different types of pavement distresses
present in images captured from multiple sources, and under different
conditions. The competition was data-centric: teams were tasked to increase the
accuracy of a predefined model architecture by utilizing various data
modification methods such as cleaning, labeling and augmentation. A real-time,
online evaluation system was developed to rank teams based on the F1 score.
Leaderboard results showed the promise and challenges of machine learning for
advancing automation in pavement monitoring and evaluation. This paper
summarizes the
solutions from the top 5 teams. These teams proposed innovations in the areas
of data cleaning, annotation, augmentation, and detection parameter tuning. The
F1 score for the top-ranked team was approximately 0.9. The paper concludes
with a review of different experiments that worked well for the current
challenge and those that did not yield any significant improvement in model
accuracy.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 05:02:31 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Behzadian",
"Ashkan",
""
],
[
"Muturi",
"Tanner Wambui",
""
],
[
"Zhang",
"Tianjie",
""
],
[
"Kim",
"Hongmin",
""
],
[
"Mullins",
"Amanda",
""
],
[
"Lu",
"Yang",
""
],
[
"Owor",
"Neema Jasika",
""
],
[
"Adu-Gyamfi",
"Yaw",
""
],
[
"Buttlar",
"William",
""
],
[
"Hamed",
"Majidifard",
""
],
[
"Aboah",
"Armstrong",
""
],
[
"Mensching",
"David",
""
],
[
"Robert",
"Spragg",
""
],
[
"Corrigan",
"Matthew",
""
],
[
"Youtchef",
"Jack",
""
],
[
"Eshan",
"Dave",
""
]
] |
new_dataset
| 0.998137 |
2206.04888
|
Yang Zhao
|
Yang Zhao, Xuan Lin, Wenqiang Xu, Maozong Zheng, Zhengyong Liu, Zhou
Zhao
|
AntPivot: Livestream Highlight Detection via Hierarchical Attention
Mechanism
| null | null | null | null |
cs.MM cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, streaming technology has greatly promoted the development of
the livestream field. Due to the excessive length of livestream records, it is
essential to extract highlight segments for effective reproduction and
redistribution. Although many approaches have proven effective for highlight
detection in other modalities, the challenges of livestream processing, such as
extreme durations, large topic shifts, and abundant irrelevant information,
heavily hamper the adaptation
and compatibility of these methods. In this paper, we formulate a new task
Livestream Highlight Detection, discuss and analyze the difficulties listed
above and propose a novel architecture AntPivot to solve this problem.
Concretely, we first encode the original data into multiple views and model
their temporal relations to capture clues in a hierarchical attention
mechanism. Afterwards, we try to convert the detection of highlight clips into
the search for optimal decision sequences and use the fully integrated
representations to predict the final results in a dynamic-programming
mechanism. Furthermore, we construct a fully-annotated dataset AntHighlight to
instantiate this task and evaluate the performance of our model. The extensive
experiments indicate the effectiveness and validity of our proposed method.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 05:58:11 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Zhao",
"Yang",
""
],
[
"Lin",
"Xuan",
""
],
[
"Xu",
"Wenqiang",
""
],
[
"Zheng",
"Maozong",
""
],
[
"Liu",
"Zhengyong",
""
],
[
"Zhao",
"Zhou",
""
]
] |
new_dataset
| 0.957337 |
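AntPivot's decoder above converts highlight detection into a search for optimal decision sequences solved by dynamic programming. As a toy stand-in under that framing, the sketch below recovers the best-scoring contiguous segment from per-chunk highlight scores via Kadane's maximum-subarray recurrence; the paper's actual decoder is richer than this.

```python
# Toy stand-in for decoding a highlight clip from per-chunk scores by
# dynamic programming (Kadane's maximum-subarray). Only illustrates the
# "scores in, optimal segment out" idea, not AntPivot's decoder.
def best_highlight_segment(scores):
    best_sum, best_span = float("-inf"), (0, 0)
    cur_sum, cur_start = 0.0, 0
    for i, s in enumerate(scores):
        if cur_sum <= 0.0:        # restart: the prefix only hurts the segment
            cur_sum, cur_start = 0.0, i
        cur_sum += s
        if cur_sum > best_sum:
            best_sum, best_span = cur_sum, (cur_start, i)
    return best_span, best_sum

# Scores centered at 0: positive = likely highlight, negative = filler.
scores = [-0.4, 0.1, 0.9, 0.7, -0.2, 0.8, -1.0, 0.2]
span, total = best_highlight_segment(scores)
print(span, total)  # (1, 5): chunks 1..5 form the best segment
```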
2206.04901
|
I-Chao Shen
|
Hao-Kang Liu, I-Chao Shen, Bing-Yu Chen
|
NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors
|
Hao-Kang Liu and I-Chao Shen contributed equally to the paper.
Project page: https://jdily.github.io/proj_site/nerfin_proj.html
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though Neural Radiance Field (NeRF) demonstrates compelling novel view
synthesis results, it is still unintuitive to edit a pre-trained NeRF because
the neural network's parameters and the scene geometry/appearance are often not
explicitly associated. In this paper, we introduce the first framework that
enables users to remove unwanted objects or retouch undesired regions in a 3D
scene represented by a pre-trained NeRF without any category-specific data and
training. The user first draws a free-form mask to specify a region containing
unwanted objects over a rendered view from the pre-trained NeRF. Our framework
first transfers the user-provided mask to other rendered views and estimates
guiding color and depth images within these transferred masked regions. Next,
we formulate an optimization problem that jointly inpaints the image content in
all masked regions across multiple views by updating the NeRF model's
parameters. We demonstrate our framework on diverse scenes and show that it
obtains visually plausible and structurally consistent results across multiple
views in less time and with less manual user effort.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 06:54:22 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Liu",
"Hao-Kang",
""
],
[
"Shen",
"I-Chao",
""
],
[
"Chen",
"Bing-Yu",
""
]
] |
new_dataset
| 0.999235 |
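NeRF-In's optimization above jointly pulls rendered colors and depths toward guiding images inside the transferred masks. A minimal sketch of such a masked guidance loss follows; the tensor shapes, averaging scheme, and depth weight are assumptions rather than the paper's exact formulation.

```python
# Sketch of a masked guidance loss for updating a pre-trained NeRF:
# rendered colors/depths are pulled toward guiding images inside the
# transferred masks. Shapes and the depth weight are assumptions.
import torch

def inpainting_loss(rgb_pred, depth_pred, rgb_guide, depth_guide, mask,
                    depth_weight=0.1):
    """mask is 1 inside the user-specified region, 0 elsewhere."""
    m = mask.float()
    color = ((rgb_pred - rgb_guide) ** 2 * m.unsqueeze(-1)).sum() / m.sum()
    depth = ((depth_pred - depth_guide) ** 2 * m).sum() / m.sum()
    return color + depth_weight * depth  # backprop updates NeRF weights

H, W = 8, 8
loss = inpainting_loss(torch.rand(H, W, 3), torch.rand(H, W),
                       torch.rand(H, W, 3), torch.rand(H, W),
                       torch.ones(H, W))
print(float(loss))  # a scalar; the paper sums this over multiple views
```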
2206.04909
|
Jiafei Duan
|
Jieyi Ye, Jiafei Duan, Samson Yu, Bihan Wen, Cheston Tan
|
ABCDE: An Agent-Based Cognitive Development Environment
|
Accepted to CVPRW 2022,Embodied AI Workshop (Extended Abstract)
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Children's cognitive abilities are sometimes cited as AI benchmarks. How can
the most common 1,000 concepts (89\% of everyday use) be learnt in a
naturalistic children's setting? Cognitive development in children is about
quality, and new concepts can be conveyed via simple examples. Our approach of
knowledge scaffolding uses simple objects and actions to convey concepts, like
how children are taught. We introduce ABCDE, an interactive 3D environment
modeled after a typical playroom for children. It comes with 300+ unique 3D
object assets (mostly toys), and a large action space for child and parent
agents to interact with objects and each other. ABCDE is the first environment
aimed at mimicking a naturalistic setting for cognitive development in
children; no other environment focuses on high-level concept learning through
learner-teacher interactions. The simulator can be found at
https://pypi.org/project/ABCDESim/1.0.0/
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 07:23:26 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Ye",
"Jieyi",
""
],
[
"Duan",
"Jiafei",
""
],
[
"Yu",
"Samson",
""
],
[
"Wen",
"Bihan",
""
],
[
"Tan",
"Cheston",
""
]
] |
new_dataset
| 0.999294 |
2206.04911
|
Shaopeng Cheng
|
Qiuyun Lyu, Shaopeng Cheng, Hao Li, Junliang Liu, Yanzhao Shen, Zhen
Wang
|
NSSIA: A New Self-Sovereign Identity Scheme with Accountability
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-Sovereign Identity (SSI) is a new distributed method for identity
management, commonly used to address the problem that users lack control over
their identities. However, the excessive pursuit of self-sovereignty in most
existing SSI schemes hinders sanctions against attackers. To deal with
malicious behavior, a few SSI schemes introduce accountability mechanisms, but
they sacrifice users' privacy. Moreover, because the digital identities (static
strings or updatable chains) in existing SSI schemes serve as inputs to a
third-party executable program (mobile app, smart contract, etc.) for identity
reading, storing and proving, users' self-sovereignty is weakened. To solve
these problems, we present a new self-sovereign identity scheme that strikes a
balance between privacy and accountability and removes the dependence on
third-party programs. In our scheme, a single individual-specific executable
code is generated as a digital avatar-i for each human to interact with others
in cyberspace without a third-party program, in which the embedding of
biometrics enhances uniqueness and user control over their identity. In
addition, a joint accountability mechanism, based on the Shamir (t, n)
threshold algorithm and a consortium blockchain, is designed to restrict the
power of each regulatory authority and protect users' privacy. Finally, we
analyze the security and SSI properties and conduct detailed experiments in
terms of computation cost, storage and blockchain gas. The analysis results
indicate that our scheme resists the known attacks and fulfills all six SSI
properties. Compared with state-of-the-art schemes, the extensive experimental
results show that the cost is higher in server storage, blockchain storage and
blockchain gas, but is still low enough for practical situations.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 07:25:28 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Lyu",
"Qiuyun",
""
],
[
"Cheng",
"Shaopeng",
""
],
[
"Li",
"Hao",
""
],
[
"Liu",
"Junliang",
""
],
[
"Shen",
"Yanzhao",
""
],
[
"Wang",
"Zhen",
""
]
] |
new_dataset
| 0.999823 |
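The joint accountability mechanism above builds on Shamir (t, n) threshold sharing. A minimal sketch of that primitive over a prime field is below; the small prime and absence of side-channel hardening make it a demo, not the paper's implementation.

```python
# Minimal Shamir (t, n) threshold sharing over a prime field.
# Demo only: production code needs a larger field and hardened arithmetic.
import random

P = 2**61 - 1  # a Mersenne prime, large enough for a demo

def make_shares(secret: int, t: int, n: int):
    # Random polynomial of degree t-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P  # Horner evaluation mod P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 reconstructs the secret.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```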
2206.04925
|
Vladimir Dobrovolskii
|
Vladimir Dobrovolskii, Mariia Michurina, Alexandra Ivoylova
|
RuCoCo: a new Russian corpus with coreference annotation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new corpus with coreference annotation, Russian Coreference
Corpus (RuCoCo). The goal of RuCoCo is to obtain a large number of annotated
texts while maintaining high inter-annotator agreement. RuCoCo contains news
texts in Russian, part of which were annotated from scratch, and for the rest
the machine-generated annotations were refined by human annotators. The size of
our corpus is one million words and around 150,000 mentions. We make the corpus
publicly available.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 07:50:09 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Dobrovolskii",
"Vladimir",
""
],
[
"Michurina",
"Mariia",
""
],
[
"Ivoylova",
"Alexandra",
""
]
] |
new_dataset
| 0.999649 |
2206.04927
|
Fanqing Lin
|
Fanqing Lin, Tony Martinez
|
Ego2HandsPose: A Dataset for Egocentric Two-hand 3D Global Pose
Estimation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Color-based two-hand 3D pose estimation in the global coordinate system is
essential in many applications. However, there are very few datasets dedicated
to this task and no existing dataset supports estimation in a non-laboratory
environment. This is largely attributed to the sophisticated data collection
process required for 3D hand pose annotations, which also leads to difficulty
in obtaining instances with the level of visual diversity needed for estimation
in the wild. Progressing towards this goal, a large-scale dataset Ego2Hands was
recently proposed to address the task of two-hand segmentation and detection in
the wild. The proposed composition-based data generation technique can create
two-hand instances with quality, quantity and diversity that generalize well to
unseen domains. In this work, we present Ego2HandsPose, an extension of
Ego2Hands that contains 3D hand pose annotation and is the first dataset that
enables color-based two-hand 3D tracking in unseen domains. To this end, we
develop a set of parametric fitting algorithms to enable 1) 3D hand pose
annotation using a single image, 2) automatic conversion from 2D to 3D hand
poses and 3) accurate two-hand tracking with temporal consistency. We provide
incremental quantitative analysis on the multi-stage pipeline and show that
training on our dataset achieves state-of-the-art results that significantly
outperforms other datasets for the task of egocentric two-hand global 3D pose
estimation.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 07:50:45 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Lin",
"Fanqing",
""
],
[
"Martinez",
"Tony",
""
]
] |
new_dataset
| 0.99986 |
2206.04973
|
Elena \'Alvarez-Mellado
|
Elena Alvarez Mellado and Constantine Lignos
|
Borrowing or Codeswitching? Annotating for Finer-Grained Distinctions in
Language Mixing
|
LREC 2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new corpus of Twitter data annotated for codeswitching and
borrowing between Spanish and English. The corpus contains 9,500 tweets
annotated at the token level with codeswitches, borrowings, and named entities.
This corpus differs from prior corpora of codeswitching in that we attempt to
clearly define and annotate the boundary between codeswitching and borrowing
and do not treat common "internet-speak" ('lol', etc.) as codeswitching when
used in an otherwise monolingual context. The result is a corpus that enables
the study and modeling of Spanish-English borrowing and codeswitching on
Twitter in one dataset. We present baseline scores for modeling the labels of
this corpus using Transformer-based language models. The annotation itself is
released with a CC BY 4.0 license, while the text it applies to is distributed
in compliance with the Twitter terms of service.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 10:06:57 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Mellado",
"Elena Alvarez",
""
],
[
"Lignos",
"Constantine",
""
]
] |
new_dataset
| 0.998764 |
2206.05016
|
Devlin Gualtieri Ph.D.
|
Devlin Gualtieri
|
Frictional Authors
|
16 page PDF file with 6 figures and two tables. Source code for
analysis is included
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
I present a method for text analysis based on an analogy with the dynamic
friction of sliding surfaces. One surface is an array of points with a
'friction coefficient' derived from the distribution frequency of a text's
alphabetic characters. The other surface is a test patch having points with
this friction coefficient equal to a median value. Examples are presented from
an analysis of a broad range of public domain texts, and comparison is made to
the Flesch Reading Ease. Source code for the analysis program is provided.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 00:37:23 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Gualtieri",
"Devlin",
""
]
] |
new_dataset
| 0.956461 |
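The record above derives per-character "friction coefficients" from letter frequencies and compares a text against a test patch set to the median coefficient. Since the abstract does not fix the exact mapping, the sketch below uses one plausible convention (rarer characters get higher friction) purely for illustration.

```python
# Illustration only: a per-character "friction" profile from letter
# frequencies. The paper's exact mapping is not specified here, so the
# rarer-is-higher convention below is an assumption.
from collections import Counter
import statistics

def friction_profile(text: str):
    letters = [c for c in text.lower() if c.isalpha()]
    freq = Counter(letters)
    total = len(letters)
    coeff = {c: 1.0 - n / total for c, n in freq.items()}
    return [coeff[c] for c in letters]

profile = friction_profile("To be, or not to be, that is the question.")
median = statistics.median(profile)  # the test patch's coefficient
print(f"median friction: {median:.3f}, span: {len(profile)} points")
```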
2206.05034
|
Greg Baker
|
Greg Baker, Diego Molla-Aliod
|
The Construction and Evaluation of the LEAFTOP Dataset of Automatically
Extracted Nouns in 1480 Languages
|
LREC2022
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The LEAFTOP (language extracted automatically from thousands of passages)
dataset consists of nouns that appear in multiple places in the four gospels of
the New Testament. We use a naive approach -- probabilistic inference -- to
identify likely translations in 1480 other languages. We evaluate this process
and find that it provides lexiconaries with accuracy from 42% (Korafe) to 99%
(Runyankole), averaging 72% correct across evaluated languages. The process
translates up to 161 distinct lemmas from Koine Greek (average 159). We
identify nouns which appear to be easy and hard to translate, language families
where this technique works, and future possible improvements and extensions.
The claims to novelty are: the use of a Koine Greek New Testament as the source
language; using a fully-annotated, manually-created grammatical parse of the
source text; a custom scraper for texts in the target languages; a new metric
for language similarity; a novel strategy for evaluation on low-resource
languages.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2022 01:09:41 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Baker",
"Greg",
""
],
[
"Molla-Aliod",
"Diego",
""
]
] |
new_dataset
| 0.999237 |
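LEAFTOP's naive probabilistic inference above scores target-language words by how strongly they co-occur with passages containing a given Greek lemma. The sketch below mirrors that spirit with a toy smoothed relative-frequency score; the tokens and scoring function are illustrative assumptions, not the paper's method.

```python
# Sketch of naive lexicon induction over verse-aligned passages: score
# target words by co-occurrence with passages containing a source lemma.
from collections import Counter

def likely_translation(passages, source_hits):
    """passages: target-language token lists, verse-aligned;
    source_hits: indices of passages where the Greek lemma occurs."""
    inside, outside = Counter(), Counter()
    hit_set = set(source_hits)
    for i, tokens in enumerate(passages):
        (inside if i in hit_set else outside).update(set(tokens))
    def score(w):
        # relative frequency in hit passages vs. elsewhere (smoothed)
        return (inside[w] / len(hit_set)) / (1 + outside[w])
    return max(inside, key=score)

# Toy data: invented target-language tokens.
passages = [["yesu", "alitembea"], ["mji", "mkubwa"], ["yesu", "alisema"]]
print(likely_translation(passages, source_hits=[0, 2]))  # -> "yesu"
```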
2206.05042
|
Maryam Edalati
|
Pardeep Kaur, Maryam Edalati
|
Sentiment analysis on electricity twitter posts
|
Keywords: Sentiment Analysis, Machine Learning, Electricity, opinion
mining, polarity assessment
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In today's world, everyone is expressive in some way; the focus of this
project is people's opinions about rising electricity prices in the United
Kingdom and India, using data from Twitter, a micro-blogging platform on which
people post messages known as tweets. Because many people's incomes are modest
and they must pay numerous taxes and bills, maintaining a home has become a
contentious issue. Although governments offered subsidy schemes to compensate
for electricity bills, these schemes were not well received. The aim of this
project is to perform sentiment analysis on the opinions people express on
Twitter. To grasp opinion on electricity prices, it is necessary to carry out
sentiment analysis for both the government and consumers in the energy market.
Furthermore, text on these media is unstructured in nature, so we first need
to pre-process the data. There are many feature extraction techniques, such as
Bag of Words, TF-IDF (Term Frequency-Inverse Document Frequency), word
embeddings, and NLP-based features like word count. In this project, we
analysed the impact of word-level TF-IDF features on an electricity-bill
dataset for sentiment analysis. We found that with word-level TF-IDF, the
performance of sentiment analysis is 3-4 points higher than with N-gram
features. The analysis is done using four classification algorithms, Naive
Bayes, Decision Tree, Random Forest, and Logistic Regression, considering
F-score, Accuracy, Precision, and Recall as performance measures.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 12:31:56 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Kaur",
"Pardeep",
""
],
[
"Edalati",
"Maryam",
""
]
] |
new_dataset
| 0.974678 |
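The pipeline above, word-level TF-IDF features fed to classical classifiers, maps directly onto scikit-learn. A minimal sketch with placeholder tweets and labels:

```python
# Sketch of a word-level TF-IDF sentiment pipeline with scikit-learn.
# The tweets and labels below are placeholders, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["electricity bill doubled this month, outrageous",
          "grateful for the new subsidy on power bills",
          "prices keep rising, cannot afford heating",
          "the rebate actually helped our household"]
labels = ["negative", "positive", "negative", "positive"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="word"),  # word-level TF-IDF features
    LogisticRegression())              # one of the four classifiers used
clf.fit(tweets, labels)
print(clf.predict(["shocked by this electricity bill"]))
```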
2206.05051
|
Yuan Yang
|
Yuan Yang, Siheng Xiong, James C Kerce and Faramarz Fekri
|
Temporal Inductive Logic Reasoning
| null | null | null | null |
cs.LG cs.AI cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Inductive logic reasoning is one of the fundamental tasks on graphs, which
seeks to generalize patterns from the data. This task has been studied
extensively for traditional graph datasets such as knowledge graphs (KGs), with
representative techniques such as inductive logic programming (ILP). Existing
ILP methods typically assume learning from KGs with static facts and binary
relations. Beyond KGs, graph structures are widely present in other
applications such as video instructions, scene graphs and program executions.
While inductive logic reasoning is also beneficial for these applications,
applying ILP to the corresponding graphs is nontrivial: they are more complex
than KGs, as they usually involve timestamps and n-ary relations, effectively
forming a type of hypergraph with temporal events.
In this work, we study two such applications and propose to represent them
as hypergraphs with time intervals. To reason on this graph, we propose the
multi-start random B-walk that traverses this hypergraph. Combining it with a
path-consistency algorithm, we propose an efficient backward-chaining ILP
method that learns logic rules by generalizing from both the temporal and the
relational data.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 02:33:26 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Yang",
"Yuan",
""
],
[
"Xiong",
"Siheng",
""
],
[
"Kerce",
"James C",
""
],
[
"Fekri",
"Faramarz",
""
]
] |
new_dataset
| 0.999557 |
2206.05202
|
Federico Benzi
|
Davide Ferrari, Federico Benzi and Cristian Secchi
|
Bidirectional Communication Control for Human-Robot Collaboration
|
7 pages, 4 figures, 1 table
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
A fruitful collaboration is based on the collaborators' mutual knowledge of
each other's skills and on the ability to communicate their own limits and
propose alternatives, adapting the execution of a task to the collaborators'
capabilities. This paper aims at reproducing such a scenario in a human-robot
collaboration setting by proposing a novel communication control architecture.
Exploiting control barrier functions, the robot is made aware of its (dynamic)
skills and limits and, thanks to a local predictor, it is able to assess
whether a requested task can be executed and, if not, to propose alternatives
by relaxing some constraints. The controller is interfaced with a
communication infrastructure that enables the human and the robot to set up
bidirectional communication about the task to execute, allowing the human to
take an informed decision on the robot's behavior. A comparative experimental
validation is presented.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 16:00:21 GMT"
}
] | 2022-06-13T00:00:00 |
[
[
"Ferrari",
"Davide",
""
],
[
"Benzi",
"Federico",
""
],
[
"Secchi",
"Cristian",
""
]
] |
new_dataset
| 0.966325 |
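The controller above uses control barrier functions to let the robot assess a requested command against its limits and relax it when needed. A minimal sketch for a 1D single integrator is below; the real system solves a richer multi-constraint CBF problem, so this only illustrates the relaxation idea.

```python
# Minimal control-barrier-function safety filter for a 1D single
# integrator x' = u with safe set h(x) = x_max - x >= 0.
def cbf_filter(x, u_desired, x_max=1.0, alpha=2.0):
    h = x_max - x                # barrier value (>= 0 means safe)
    # Safety condition: h_dot >= -alpha*h, and h_dot = -u, so u <= alpha*h.
    u_max_safe = alpha * h
    u = min(u_desired, u_max_safe)
    return u, u != u_desired     # flag when the request was relaxed

u, relaxed = cbf_filter(x=0.9, u_desired=1.5)
print(u, relaxed)  # 0.2 True: the requested command exceeded the limit
```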
1401.7480
|
Bruce Litow
|
Bruce Litow
|
NP is contained in DTIME(n^O(log^{gamma}))
|
paper has a fatal error
| null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use existential Diophantine predicates carefully reinterpreted over the
reals and the time complexity of Tarski algebra to show that 3-CNF SAT is in
n^O(log^{gamma} n) time for an absolute positive constant gamma.
|
[
{
"version": "v1",
"created": "Wed, 29 Jan 2014 12:07:54 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Feb 2014 20:41:46 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Feb 2014 17:51:20 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Mar 2018 19:57:17 GMT"
},
{
"version": "v5",
"created": "Fri, 9 Mar 2018 21:24:25 GMT"
},
{
"version": "v6",
"created": "Mon, 29 Jun 2020 20:14:38 GMT"
},
{
"version": "v7",
"created": "Thu, 9 Jun 2022 13:58:13 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Litow",
"Bruce",
""
]
] |
new_dataset
| 0.997228 |
1910.02551
|
Ilia Sucholutsky
|
Ilia Sucholutsky, Matthias Schonlau
|
Soft-Label Dataset Distillation and Text Dataset Distillation
| null | null |
10.1109/IJCNN52387.2021.9533769
| null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dataset distillation is a method for reducing dataset sizes by learning a
small number of synthetic samples containing all the information of a large
dataset. This has several benefits like speeding up model training, reducing
energy consumption, and reducing required storage space. Currently, each
synthetic sample is assigned a single `hard' label, and also, dataset
distillation can currently only be used with image data.
We propose to simultaneously distill both images and their labels, thus
assigning each synthetic sample a `soft' label (a distribution of labels). Our
algorithm increases accuracy by 2-4% over the original algorithm for several
image classification tasks. Using `soft' labels also enables distilled datasets
to consist of fewer samples than there are classes as each sample can encode
information for multiple classes. For example, training a LeNet model with 10
distilled images (one per class) results in over 96% accuracy on MNIST, and
almost 92% accuracy when trained on just 5 distilled images.
We also extend the dataset distillation algorithm to distill sequential
datasets including texts. We demonstrate that text distillation outperforms
other methods across multiple datasets. For example, models attain almost their
original accuracy on the IMDB sentiment analysis task using just 20 distilled
sentences.
Our code can be found at
$\href{https://github.com/ilia10000/dataset-distillation}{\text{https://github.com/ilia10000/dataset-distillation}}$.
|
[
{
"version": "v1",
"created": "Sun, 6 Oct 2019 23:57:22 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Nov 2019 21:01:12 GMT"
},
{
"version": "v3",
"created": "Tue, 5 May 2020 04:09:03 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Sucholutsky",
"Ilia",
""
],
[
"Schonlau",
"Matthias",
""
]
] |
new_dataset
| 0.978478 |
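The core change in soft-label distillation above is replacing each synthetic sample's hard class with a learnable label distribution. A minimal sketch of the corresponding training step follows, with illustrative shapes and random stand-ins for the distilled data.

```python
# Sketch: a training step with 'soft' (distributional) labels, the core
# change soft-label dataset distillation makes. Shapes are illustrative.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)                    # stand-in for LeNet
x = torch.randn(5, 784)                             # 5 distilled samples
soft_y = torch.softmax(torch.randn(5, 10), dim=-1)  # learnable in the paper

logits = model(x)
# Cross-entropy against a distribution instead of a single hard class:
loss = -(soft_y * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
loss.backward()  # gradients flow to the model (and, in distillation,
                 # also to the distilled images and soft labels)
print(float(loss))
```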
2110.02871
|
Victor Schmidt
|
Victor Schmidt, Alexandra Sasha Luccioni, M\'elisande Teng, Tianyu
Zhang, Alexia Reynaud, Sunand Raghupathi, Gautier Cosne, Adrien Juraver, Vahe
Vardanyan, Alex Hernandez-Garcia, Yoshua Bengio
|
ClimateGAN: Raising Climate Change Awareness by Generating Images of
Floods
| null |
ICLR 2022
| null | null |
cs.CV cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Climate change is a major threat to humanity, and the actions required to
prevent its catastrophic consequences include changes in both policy-making and
individual behaviour. However, taking action requires understanding the effects
of climate change, even though they may seem abstract and distant. Projecting
the potential consequences of extreme climate events such as flooding in
familiar places can help make the abstract impacts of climate change more
concrete and encourage action. As part of a larger initiative to build a
website that projects extreme climate events onto user-chosen photos, we
present our solution to simulate photo-realistic floods on authentic images. To
address this complex task in the absence of suitable training data, we propose
ClimateGAN, a model that leverages both simulated and real data for
unsupervised domain adaptation and conditional image generation. In this paper,
we describe the details of our framework, thoroughly evaluate components of our
architecture and demonstrate that our model is capable of robustly generating
photo-realistic flooding.
|
[
{
"version": "v1",
"created": "Wed, 6 Oct 2021 15:54:57 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Schmidt",
"Victor",
""
],
[
"Luccioni",
"Alexandra Sasha",
""
],
[
"Teng",
"Mélisande",
""
],
[
"Zhang",
"Tianyu",
""
],
[
"Reynaud",
"Alexia",
""
],
[
"Raghupathi",
"Sunand",
""
],
[
"Cosne",
"Gautier",
""
],
[
"Juraver",
"Adrien",
""
],
[
"Vardanyan",
"Vahe",
""
],
[
"Hernandez-Garcia",
"Alex",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
new_dataset
| 0.998767 |
2201.05461
|
Hamed Malek
|
Maryam Sajde, Hamed Malek, Mehran Mohsenzadeh
|
RecoMed: A Knowledge-Aware Recommender System for Hypertension
Medications
| null |
Informatics in Medicine Unlocked, vol. 30, p. 100950, Jan. 2022
|
10.1016/j.imu.2022.100950
| null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Background and Objective: High medicine diversity has always been a
significant challenge for prescription, causing confusion or doubt in
physicians' decision-making process. This paper aims to develop a medicine
recommender system called RecoMed to aid the physician in the prescription
process of hypertension by providing information about what medications have
been prescribed by other doctors and figuring out what other medicines can be
recommended in addition to the one in question. Methods: There are two steps to
the developed method: First, association rule mining algorithms are employed to
find medicine association rules. The second step entails graph mining and
clustering to present an enriched recommendation via ATC code, which itself
comprises several steps. First, the initial graph is constructed from
historical prescription data. Then, data pruning is performed in the second
step, after which the medicines with a high repetition rate are removed at the
discretion of a general medical practitioner. Next, the medicines are matched
to a well-known medicine classification system called the ATC code to provide
an enriched recommendation. And finally, the DBSCAN and Louvain algorithms
cluster medicines in the final step. Results: A list of recommended medicines is
provided as the system's output, and physicians can choose one or more of the
medicines based on the patient's clinical symptoms. Only the medicines of class
2, related to high blood pressure medications, are used to assess the system's
performance. The results obtained from this system have been reviewed and
confirmed by an expert in this field.
|
[
{
"version": "v1",
"created": "Sun, 9 Jan 2022 08:01:41 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 14:40:57 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Sajde",
"Maryam",
""
],
[
"Malek",
"Hamed",
""
],
[
"Mohsenzadeh",
"Mehran",
""
]
] |
new_dataset
| 0.983159 |
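RecoMed's first step above is association rule mining over historical prescriptions. A minimal sketch using mlxtend's Apriori implementation, with placeholder medicine baskets and thresholds:

```python
# Sketch of association rule mining over prescriptions with mlxtend.
# Medicine baskets and thresholds below are placeholders.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

prescriptions = [["amlodipine", "losartan"],
                 ["amlodipine", "losartan", "hydrochlorothiazide"],
                 ["losartan", "hydrochlorothiazide"],
                 ["amlodipine", "losartan"]]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(prescriptions).transform(prescriptions),
                      columns=te.columns_)
frequent = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```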
2201.05609
|
Chester Palen-Michel
|
Chester Palen-Michel, June Kim, Constantine Lignos
|
Multilingual Open Text Release 1: Public Domain News in 44 Languages
|
Submitted to LREC 2022
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present Multilingual Open Text (MOT), a new multilingual corpus containing
text in 44 languages, many of which have limited existing text resources for
natural language processing. The first release of the corpus contains over 2.8
million news articles and an additional 1 million short snippets (photo
captions, video descriptions, etc.) published between 2001--2022 and collected
from Voice of America's news websites. We describe our process for collecting,
filtering, and processing the data. The source material is in the public
domain, our collection is licensed using a creative commons license (CC BY
4.0), and all software used to create the corpus is released under the MIT
License. The corpus will be regularly updated as additional documents are
published.
|
[
{
"version": "v1",
"created": "Fri, 14 Jan 2022 18:58:17 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 17:21:31 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Palen-Michel",
"Chester",
""
],
[
"Kim",
"June",
""
],
[
"Lignos",
"Constantine",
""
]
] |
new_dataset
| 0.997104 |
2201.06289
|
Zhiqiu Lin
|
Zhiqiu Lin, Jia Shi, Deepak Pathak, Deva Ramanan
|
The CLEAR Benchmark: Continual LEArning on Real-World Imagery
|
Project site: https://clear-benchmark.github.io
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual learning (CL) is widely regarded as a crucial challenge for lifelong
AI. However, existing CL benchmarks, e.g. Permuted-MNIST and Split-CIFAR, make
use of artificial temporal variation and do not align with or generalize to the
real world. In this paper, we introduce CLEAR, the first continual image
classification benchmark dataset with a natural temporal evolution of visual
concepts in the real world that spans a decade (2004-2014). We build CLEAR from
existing large-scale image collections (YFCC100M) through a novel and scalable
low-cost approach to visio-linguistic dataset curation. Our pipeline makes use
of pretrained vision-language models (e.g. CLIP) to interactively build labeled
datasets, which are further validated with crowd-sourcing to remove errors and
even inappropriate images (hidden in original YFCC100M). The major strength of
CLEAR over prior CL benchmarks is the smooth temporal evolution of visual
concepts with real-world imagery, including both high-quality labeled data
along with abundant unlabeled samples per time period for continual
semi-supervised learning. We find that a simple unsupervised pre-training step
can already boost state-of-the-art CL algorithms that only utilize
fully-supervised data. Our analysis also reveals that mainstream CL evaluation
protocols that train and test on iid data artificially inflate the performance
of CL systems. To address this, we propose novel "streaming" protocols for CL
that
always test on the (near) future. Interestingly, streaming protocols (a) can
simplify dataset curation since today's testset can be repurposed for
tomorrow's trainset and (b) can produce more generalizable models with more
accurate estimates of performance since all labeled data from each time-period
is used for both training and testing (unlike classic iid train-test splits).
|
[
{
"version": "v1",
"created": "Mon, 17 Jan 2022 09:09:09 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2022 04:55:51 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2022 04:41:54 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Lin",
"Zhiqiu",
""
],
[
"Shi",
"Jia",
""
],
[
"Pathak",
"Deepak",
""
],
[
"Ramanan",
"Deva",
""
]
] |
new_dataset
| 0.999782 |
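CLEAR's "streaming" protocol above always trains on past periods and tests on the (near) future one. A minimal sketch of that split, with placeholder time buckets:

```python
# Sketch of the "streaming" evaluation protocol: train on data up to
# period t, test on period t+1. Bucket contents are placeholders.
def streaming_splits(buckets):
    """buckets: list of datasets ordered by time (e.g., one per year)."""
    for t in range(1, len(buckets)):
        train = [x for b in buckets[:t] for x in b]  # all past periods
        test = buckets[t]                            # the next period
        yield t, train, test

buckets = [[("img", 2004)], [("img", 2005)], [("img", 2006)]]
for t, train, test in streaming_splits(buckets):
    print(f"train on {len(train)} past samples, test on period {t}")
```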
2201.11578
|
Xingda Wei
|
Xingda Wei and Fangming Lu and Rong Chen and Haibo Chen
|
KRCORE: a microsecond-scale RDMA control plane for elastic computing
|
To appear in USENIX ATC'2022
(https://www.usenix.org/conference/atc22/presentation/wei)
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
We present KRCORE, an RDMA library with a microsecond-scale control plane on
commodity RDMA hardware for elastic computing. KRCORE can establish a
full-fledged RDMA connection within 10{\mu}s (hundreds or thousands of times
faster than verbs), while only maintaining a (small) fixed-sized connection
metadata at each node, regardless of the cluster scale. The key ideas include
virtualizing pre-initialized kernel-space RDMA connections instead of creating
one from scratch, and retrofitting advanced RDMA dynamic connected transport
with static transport for both low connection overhead and high networking
speed. Under load spikes, KRCORE can shorten the worker bootstrap time of an
existing disaggregated key-value store (namely RACE Hashing) by 83%. In
serverless computing (namely Fn), KRCORE can also reduce the latency for
transferring data through RDMA by 99%.
|
[
{
"version": "v1",
"created": "Wed, 29 Dec 2021 02:46:09 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jan 2022 02:43:11 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2022 08:40:06 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Wei",
"Xingda",
""
],
[
"Lu",
"Fangming",
""
],
[
"Chen",
"Rong",
""
],
[
"Chen",
"Haibo",
""
]
] |
new_dataset
| 0.999306 |
2203.15041
|
Haresh Karnan
|
Haresh Karnan, Anirudh Nair, Xuesu Xiao, Garrett Warnell, Soeren Pirk,
Alexander Toshev, Justin Hart, Joydeep Biswas, Peter Stone
|
Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of
Demonstrations for Social Navigation
| null |
Robotics and Automation Letters (RA-L) 2022
| null | null |
cs.RO cs.CV cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Social navigation is the capability of an autonomous agent, such as a robot,
to navigate in a 'socially compliant' manner in the presence of other
intelligent agents such as humans. With the emergence of autonomously
navigating mobile robots in human populated environments (e.g., domestic
service robots in homes and restaurants and food delivery robots on public
sidewalks), incorporating socially compliant navigation behaviors on these
robots becomes critical to ensuring safe and comfortable human robot
coexistence. To address this challenge, imitation learning is a promising
framework, since it is easier for humans to demonstrate the task of social
navigation rather than to formulate reward functions that accurately capture
the complex multi objective setting of social navigation. The use of imitation
learning and inverse reinforcement learning to social navigation for mobile
robots, however, is currently hindered by a lack of large scale datasets that
capture socially compliant robot navigation demonstrations in the wild. To fill
this gap, we introduce the Socially CompliAnt Navigation Dataset (SCAND), a
large-scale, first-person-view dataset of socially compliant navigation
demonstrations. Our dataset contains 8.7 hours, 138 trajectories, and 25 miles
of socially compliant, human-teleoperated driving demonstrations that comprise
multi-modal data streams including 3D lidar, joystick commands, odometry, and
visual and inertial information, collected on two morphologically different
mobile robots, a Boston Dynamics Spot and a Clearpath Jackal, by four different
human demonstrators in both indoor and outdoor environments. We additionally
perform preliminary analysis and validation through real-world robot
experiments and show that navigation policies learned by imitation learning on
SCAND generate socially compliant behaviors.
|
[
{
"version": "v1",
"created": "Mon, 28 Mar 2022 19:09:11 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 20:24:44 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Karnan",
"Haresh",
""
],
[
"Nair",
"Anirudh",
""
],
[
"Xiao",
"Xuesu",
""
],
[
"Warnell",
"Garrett",
""
],
[
"Pirk",
"Soeren",
""
],
[
"Toshev",
"Alexander",
""
],
[
"Hart",
"Justin",
""
],
[
"Biswas",
"Joydeep",
""
],
[
"Stone",
"Peter",
""
]
] |
new_dataset
| 0.999659 |
2203.16434
|
Antoine Yang
|
Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, Cordelia Schmid
|
TubeDETR: Spatio-Temporal Video Grounding with Transformers
|
Updated vIoU results compared to the CVPR'22 camera-ready version; 17
pages; 8 figures
| null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of localizing a spatio-temporal tube in a video
corresponding to a given text query. This is a challenging task that requires
the joint and efficient modeling of temporal, spatial and multi-modal
interactions. To address this task, we propose TubeDETR, a transformer-based
architecture inspired by the recent success of such models for text-conditioned
object detection. Our model notably includes: (i) an efficient video and text
encoder that models spatial multi-modal interactions over sparsely sampled
frames and (ii) a space-time decoder that jointly performs spatio-temporal
localization. We demonstrate the advantage of our proposed components through
an extensive ablation study. We also evaluate our full approach on the
spatio-temporal video grounding task and demonstrate improvements over the
state of the art on the challenging VidSTG and HC-STVG benchmarks. Code and
trained models are publicly available at
https://antoyang.github.io/tubedetr.html.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2022 16:31:49 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 13:22:50 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Yang",
"Antoine",
""
],
[
"Miech",
"Antoine",
""
],
[
"Sivic",
"Josef",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
new_dataset
| 0.98391 |
2204.09918
|
Md Hasibul Amin
|
Md Hasibul Amin, Mohammed Elbtity, Mohammadreza Mohammadi, Ramtin Zand
|
MRAM-based Analog Sigmoid Function for In-memory Computing
|
6 pages, 6 figures
|
Proceedings of the Great Lakes Symposium on VLSI 2022 (GLSVLSI
'22), Association for Computing Machinery, New York, NY, USA, 319-323
|
10.1145/3526241.3530376
| null |
cs.ET cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an analog implementation of the transcendental activation function
leveraging two spin-orbit torque magnetoresistive random-access memory
(SOT-MRAM) devices and a CMOS inverter. The proposed analog neuron circuit
consumes 1.8-27x less power, and occupies 2.5-4931x smaller area, compared to
the state-of-the-art analog and digital implementations. Moreover, the
developed neuron can be readily integrated with memristive crossbars without
requiring any intermediate signal conversion units. The architecture-level
analyses show that a fully-analog in-memory computing (IMC) circuit that use
our SOT-MRAM neuron along with an SOT-MRAM based crossbar can achieve more than
1.1x, 12x, and 13.3x reduction in power, latency, and energy, respectively,
compared to a mixed-signal implementation with analog memristive crossbars and
digital neurons. Finally, through cross-layer analyses, we provide a guide on
how varying the device-level parameters in our neuron can affect the accuracy
of multilayer perceptron (MLP) for MNIST classification.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2022 07:13:54 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Amin",
"Md Hasibul",
""
],
[
"Elbtity",
"Mohammed",
""
],
[
"Mohammadi",
"Mohammadreza",
""
],
[
"Zand",
"Ramtin",
""
]
] |
new_dataset
| 0.998798 |
2204.12710
|
Yu-Siou Tang
|
Yu-Siou Tang and Chung-Hsien Wu
|
CREER: A Large-Scale Corpus for Relation Extraction and Entity
Recognition
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe the design and use of the CREER dataset, a large corpus annotated
with rich English grammar and semantic attributes. The CREER dataset uses the
Stanford CoreNLP Annotator to capture rich language structures from Wikipedia
plain text. This dataset follows widely used linguistic and semantic
annotations so that it can be used not only for most natural language
processing tasks but also for scaling the dataset. This large supervised dataset
can serve as the basis for improving the performance of NLP tasks in the
future. We publicize the dataset through the link:
https://140.116.82.111/share.cgi?ssid=000dOJ4
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2022 05:43:21 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 06:34:15 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Jun 2022 08:04:43 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Tang",
"Yu-Siou",
""
],
[
"Wu",
"Chung-Hsien",
""
]
] |
new_dataset
| 0.999563 |
2206.03429
|
Tim Brooks
|
Tim Brooks, Janne Hellsten, Miika Aittala, Ting-Chun Wang, Timo Aila,
Jaakko Lehtinen, Ming-Yu Liu, Alexei A. Efros, Tero Karras
|
Generating Long Videos of Dynamic Scenes
| null | null | null | null |
cs.CV cs.AI cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
We present a video generation model that accurately reproduces object motion,
changes in camera viewpoint, and new content that arises over time. Existing
video generation methods often fail to produce new content as a function of
time while maintaining consistencies expected in real environments, such as
plausible dynamics and object persistence. A common failure case is for content
to never change due to over-reliance on inductive biases to provide temporal
consistency, such as a single latent code that dictates content for the entire
video. On the other extreme, without long-term consistency, generated videos
may morph unrealistically between different scenes. To address these
limitations, we prioritize the time axis by redesigning the temporal latent
representation and learning long-term consistency from data by training on
longer videos. To this end, we leverage a two-phase training strategy, where we
separately train using longer videos at a low resolution and shorter videos at
a high resolution. To evaluate the capabilities of our model, we introduce two
new benchmark datasets with explicit focus on long-term temporal dynamics.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 16:29:51 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2022 06:24:12 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Brooks",
"Tim",
""
],
[
"Hellsten",
"Janne",
""
],
[
"Aittala",
"Miika",
""
],
[
"Wang",
"Ting-Chun",
""
],
[
"Aila",
"Timo",
""
],
[
"Lehtinen",
"Jaakko",
""
],
[
"Liu",
"Ming-Yu",
""
],
[
"Efros",
"Alexei A.",
""
],
[
"Karras",
"Tero",
""
]
] |
new_dataset
| 0.997292 |
2206.04049
|
Lum Ramabaja
|
Lum Ramabaja
|
Hypersyn: A Peer-to-Peer System for Mutual Credit
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The Hypersyn protocol is a new type of permissionless and peer-to-peer
payment network that is based on the concept of mutual credit and mutual
arbitrage. Unlike blockchain-based systems, Hypersyn does not rely on any
consensus algorithm. It does not require a distributed ledger to store the
history of events nor a set of validators. Hypersyn does not have a
system-imposed hard-cap on the number of transactions per second that it can
perform, and can therefore easily scale up or down depending on network usage.
Unlike in other payment systems, money in Hypersyn does not get transferred
from person $A$ to person $B$ in the conventional sense. Instead of
transferring a token between each other, peers in Hypersyn change their
exchange value of their credit (i.e. their purchasing power) within the
network. Just as in centrally-issued fiat systems, money in Hypersyn is treated
as freely tradable debt, which inherently requires trust. But unlike
centrally-issued fiat systems, money issuance in Hypersyn is not controlled by
an authority, but is instead created on the spot as mutual credit. In
blockchain-based systems and even in centrally-issued fiat systems, money is
treated as a scarce commodity. In the Hypersyn protocol on the other hand,
money supply within the system is elastic in nature. Because of these
fundamental differences in assumptions, the Hypersyn protocol does not aim to
compete with, or substitute blockchain-based systems. Instead, Hypersyn should
be viewed as a tool that aims to offer a qualitative change in the way we
exchange. It has the potential to increase the autonomy and self-organization
that people can have, by enabling people to become both the creditors and
debtors of their own "money" through mutual credit.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 08:12:37 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Ramabaja",
"Lum",
""
]
] |
new_dataset
| 0.994476 |
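Hypersyn above treats money as mutual credit created on the spot rather than as a scarce token. The toy ledger below shows only that core idea (paired credit/debit entries summing to zero); the protocol's exchange-value arbitrage is not modeled.

```python
# Toy mutual-credit ledger: money is created on the spot as paired
# credit/debit entries, so balances always sum to zero. Only the
# mutual-credit core is shown here.
from collections import defaultdict

class MutualCreditLedger:
    def __init__(self):
        self.balances = defaultdict(int)

    def pay(self, payer: str, payee: str, amount: int):
        # No pre-existing tokens required: the payer goes into debit,
        # the payee into credit, in equal measure.
        self.balances[payer] -= amount
        self.balances[payee] += amount

ledger = MutualCreditLedger()
ledger.pay("alice", "bob", 50)
ledger.pay("bob", "carol", 20)
print(dict(ledger.balances))  # {'alice': -50, 'bob': 30, 'carol': 20}
assert sum(ledger.balances.values()) == 0  # credit and debt always balance
```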
2206.04129
|
Benedikt Mersch
|
Benedikt Mersch, Xieyuanli Chen, Ignacio Vizzo, Lucas Nunes, Jens
Behley, Cyrill Stachniss
|
Receding Moving Object Segmentation in 3D LiDAR Data Using Sparse 4D
Convolutions
|
Accepted for RA-L
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
A key challenge for autonomous vehicles is to navigate in unseen dynamic
environments. Separating moving objects from static ones is essential for
navigation, pose estimation, and understanding how other traffic participants
are likely to move in the near future. In this work, we tackle the problem of
distinguishing 3D LiDAR points that belong to currently moving objects, like
walking pedestrians or driving cars, from points that are obtained from
non-moving objects, like walls but also parked cars. Our approach takes a
sequence of observed LiDAR scans and turns them into a voxelized sparse 4D
point cloud. We apply computationally efficient sparse 4D convolutions to
jointly extract spatial and temporal features and predict moving object
confidence scores for all points in the sequence. We develop a receding horizon
strategy that allows us to predict moving objects online and to refine
predictions on the go based on new observations. We use a binary Bayes filter
to recursively integrate new predictions of a scan resulting in more robust
estimation. We evaluate our approach on the SemanticKITTI moving object
segmentation challenge and show more accurate predictions than existing
methods. Since our approach only operates on the geometric information of point
clouds over time, it generalizes well to new, unseen environments, which we
evaluate on the Apollo dataset.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 18:51:14 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Mersch",
"Benedikt",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Vizzo",
"Ignacio",
""
],
[
"Nunes",
"Lucas",
""
],
[
"Behley",
"Jens",
""
],
[
"Stachniss",
"Cyrill",
""
]
] |
new_dataset
| 0.986091 |
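The receding-horizon pipeline above fuses per-scan moving-object confidences with a binary Bayes filter. A minimal log-odds sketch follows; the clamping bounds and prior are assumptions.

```python
# Sketch of a binary Bayes filter fusing per-scan moving-object
# predictions: each point keeps a log-odds belief updated per scan.
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def bayes_update(log_odds, p_new, prior=0.5, lo=-5.0, hi=5.0):
    """Recursive update: add the new evidence, subtract the prior."""
    log_odds = log_odds + logit(p_new) - logit(prior)
    return np.clip(log_odds, lo, hi)

belief = np.zeros(3)                                # 3 points, 50/50 prior
for scores in ([0.8, 0.4, 0.6], [0.9, 0.3, 0.5]):   # two consecutive scans
    belief = bayes_update(belief, np.asarray(scores))
prob_moving = 1.0 / (1.0 + np.exp(-belief))
print(prob_moving.round(3))  # point 0 converges toward "moving"
```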
2206.04197
|
Daniel McDuff
|
Daniel McDuff, Miah Wander, Xin Liu, Brian L. Hill, Javier Hernandez,
Jonathan Lester, Tadas Baltrusaitis
|
SCAMPS: Synthetics for Camera Measurement of Physiological Signals
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of cameras and computational algorithms for noninvasive, low-cost and
scalable measurement of physiological (e.g., cardiac and pulmonary) vital signs
is very attractive. However, diverse data representing a range of environments,
body motions, illumination conditions and physiological states is laborious,
time consuming and expensive to obtain. Synthetic data have proven a valuable
tool in several areas of machine learning, yet are not widely available for
camera measurement of physiological states. Synthetic data offer "perfect"
labels (e.g., without noise and with precise synchronization), labels that may
not be possible to obtain otherwise (e.g., precise pixel level segmentation
maps) and provide a high degree of control over variation and diversity in the
dataset. We present SCAMPS, a dataset of synthetics containing 2,800 videos
(1.68M frames) with aligned cardiac and respiratory signals and facial action
intensities. The RGB frames are provided alongside segmentation maps. We
provide precise descriptive statistics about the underlying waveforms,
including inter-beat interval, heart rate variability, and pulse arrival time.
Finally, we present baseline results training on these synthetic data and
testing on real-world datasets to illustrate generalizability.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 23:48:41 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"McDuff",
"Daniel",
""
],
[
"Wander",
"Miah",
""
],
[
"Liu",
"Xin",
""
],
[
"Hill",
"Brian L.",
""
],
[
"Hernandez",
"Javier",
""
],
[
"Lester",
"Jonathan",
""
],
[
"Baltrusaitis",
"Tadas",
""
]
] |
new_dataset
| 0.954649 |
2206.04246
|
Mohammad Hossein Rohban
|
Sina Taslimi, Soroush Taslimi, Nima Fathi, Mohammadreza Salehi,
Mohammad Hossein Rohban
|
SwinCheX: Multi-label classification on chest X-ray images with
transformers
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Given the considerable growth in the availability of chest X-ray images for
diagnosing various diseases, as well as the gathering of extensive datasets,
automated diagnosis using deep neural networks has occupied the minds of
experts. Most of the available methods in computer vision use a CNN backbone to
acquire high accuracy on classification problems. Nevertheless, recent research
shows that transformers, established as the de facto method in NLP, can also
outperform many CNN-based models in vision. This paper proposes a multi-label
classification deep model based on the Swin Transformer as the backbone to
achieve state-of-the-art diagnosis classification. It leverages a Multi-Layer
Perceptron (MLP) for the head architecture. We evaluate our model on one of the
most widely used and largest X-ray datasets, "Chest X-ray14," which comprises
more than 100,000 frontal/back-view images from over 30,000 patients with 14
common chest diseases. Our model has been tested with several numbers of MLP
layers for the head setting, each achieving a competitive AUC score on all
classes. Comprehensive experiments on Chest X-ray14 show that a 3-layer head
attains state-of-the-art performance with an average AUC score of 0.810,
compared to the former SOTA average AUC of 0.799. We propose an experimental
setup for the fair benchmarking of existing methods, which could be used as a
basis for future studies. Finally, we follow up our results by confirming that
the proposed method attends to the pathologically relevant areas of the chest.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 03:17:57 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Taslimi",
"Sina",
""
],
[
"Taslimi",
"Soroush",
""
],
[
"Fathi",
"Nima",
""
],
[
"Salehi",
"Mohammadreza",
""
],
[
"Rohban",
"Mohammad Hossein",
""
]
] |
new_dataset
| 0.999097 |
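SwinCheX above pairs a Swin Transformer backbone with an MLP head for 14-way multi-label diagnosis. A minimal sketch of that architecture family using timm is below; the checkpoint name, hidden width, and loss setup are assumptions, not the paper's exact configuration.

```python
# Sketch: Swin Transformer backbone + MLP head for multi-label chest
# X-ray classification. Checkpoint name and hidden width are assumptions.
import timm
import torch
import torch.nn as nn

backbone = timm.create_model("swin_base_patch4_window7_224",
                             pretrained=False, num_classes=0)  # features only

head = nn.Sequential(                      # a 3-layer MLP head
    nn.Linear(backbone.num_features, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 14))                    # 14 chest pathologies

x = torch.randn(2, 3, 224, 224)
logits = head(backbone(x))
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(2, 14))  # multi-label
print(logits.shape, float(loss))
```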
2206.04253
|
Xiaojun Liu
|
Xiaojun Liu, Shunan Zang, Chuang Zhang, Xiaojun Chen, Yangyang Ding
|
CLTS+: A New Chinese Long Text Summarization Dataset with Abstractive
Summaries
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The lack of creative ability in abstractive methods is a particular problem in
automatic text summarization: the summaries generated by models are mostly
extracted from the source articles. One of the main causes of this problem is
the lack of datasets with abstractiveness, especially for Chinese. In order to
solve this problem, we paraphrase the reference summaries in CLTS, the Chinese
Long Text Summarization dataset, correct errors of factual inconsistencies, and
propose the first Chinese Long Text Summarization dataset with a high level of
abstractiveness, CLTS+, which contains more than 180K article-summary pairs and
is available online. Additionally, we introduce an intrinsic metric based on
co-occurrence words to evaluate the dataset we constructed. We analyze the
extraction strategies used in CLTS+ summaries against other datasets to
quantify the abstractiveness and difficulty of our new data and train several
baselines on CLTS+ to verify the utility of it for improving the creative
ability of models.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 03:53:52 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Liu",
"Xiaojun",
""
],
[
"Zang",
"Shunan",
""
],
[
"Zhang",
"Chuang",
""
],
[
"Chen",
"Xiaojun",
""
],
[
"Ding",
"Yangyang",
""
]
] |
new_dataset
| 0.999859 |
2206.04271
|
James Brown
|
Andrew Perrett, Charlie Barnes, Mark Schofield, Lan Qie, Petra Bosilj,
James M. Brown
|
DeepVerge: Classification of Roadside Verge Biodiversity and
Conservation Potential
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Open space grassland is being increasingly farmed or built upon, leading to a
ramping up of conservation efforts targeting roadside verges. Approximately
half of all UK grassland species can be found along the country's 500,000 km of
roads, with some 91 species either threatened or near threatened. Careful
management of these "wildlife corridors" is therefore essential to preventing
species extinction and maintaining biodiversity in grassland habitats. Wildlife
trusts have often enlisted the support of volunteers to survey roadside verges
and identify new "Local Wildlife Sites" as areas of high conservation
potential. Using volunteer survey data from 3,900 km of roadside verges
alongside publicly available street-view imagery, we present DeepVerge; a deep
learning-based method that can automatically survey sections of roadside verges
by detecting the presence of positive indicator species. Using images and
ground truth survey data from the rural county of Lincolnshire, DeepVerge
achieved a mean accuracy of 88%. Such a method may be used by local authorities
to identify new local wildlife sites, and aid management and environmental
planning in line with legal and government policy obligations, saving thousands
of hours of manual labour.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 04:42:04 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Perrett",
"Andrew",
""
],
[
"Barnes",
"Charlie",
""
],
[
"Schofield",
"Mark",
""
],
[
"Qie",
"Lan",
""
],
[
"Bosilj",
"Petra",
""
],
[
"Brown",
"James M.",
""
]
] |
new_dataset
| 0.998235 |
2206.04381
|
Zheng Chang
|
Zheng Chang, Xinfeng Zhang, Shanshe Wang, Siwei Ma, and Wen Gao
|
STIP: A SpatioTemporal Information-Preserving and Perception-Augmented
Model for High-Resolution Video Prediction
|
This journal paper is extended from our previous work accepted in
CVPR2022 and has been submitted to IEEE Transactions on Multimedia
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Although significant progress has been achieved by recurrent neural
network (RNN) based video prediction methods, their performance in datasets
with high resolutions is still far from satisfactory because of the information
loss problem and the perception-insensitive mean square error (MSE) based loss
functions. In this paper, we propose a Spatiotemporal Information-Preserving
and Perception-Augmented Model (STIP) to solve the above two problems. To solve
the information loss problem, the proposed model aims to preserve the
spatiotemporal information for videos during the feature extraction and the
state transitions, respectively. Firstly, a Multi-Grained Spatiotemporal
Auto-Encoder (MGST-AE) is designed based on the X-Net structure. The proposed
MGST-AE can help the decoders recall multi-grained information from the
encoders in both the temporal and spatial domains. In this way, more
spatiotemporal information can be preserved during the feature extraction for
high-resolution videos. Secondly, a Spatiotemporal Gated Recurrent Unit (STGRU)
is designed based on the standard Gated Recurrent Unit (GRU) structure, which
can efficiently preserve spatiotemporal information during the state
transitions. The proposed STGRU can achieve more satisfactory performance with
a much lower computation load compared with the popular Long Short-Term (LSTM)
based predictive memories. Furthermore, to improve the traditional MSE loss
functions, a Learned Perceptual Loss (LP-loss) is further designed based on the
Generative Adversarial Networks (GANs), which can help obtain a satisfactory
trade-off between the objective quality and the perceptual quality.
Experimental results show that the proposed STIP can predict videos with more
satisfactory visual quality compared with a variety of state-of-the-art
methods. Source code has been available at
\url{https://github.com/ZhengChang467/STIPHR}.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 09:49:04 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Chang",
"Zheng",
""
],
[
"Zhang",
"Xinfeng",
""
],
[
"Wang",
"Shanshe",
""
],
[
"Ma",
"Siwei",
""
],
[
"Gao",
"Wen",
""
]
] |
new_dataset
| 0.98302 |
2206.04399
|
Constantino \'Alvarez Casado
|
Constantino \'Alvarez Casado, Manuel Lage Ca\~nellas and Miguel
Bordallo L\'opez
|
Depression Recognition using Remote Photoplethysmography from Facial
Videos
|
10 pages, 5 figures, 8 tables
| null | null | null |
cs.CV cs.ET cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depression is a mental illness that may be harmful to an individual's health.
The detection of mental health disorders in the early stages and a precise
diagnosis are critical to avoid social, physiological, or psychological side
effects. This work analyzes physiological signals to observe if different
depressive states have a noticeable impact on the blood volume pulse (BVP) and
the heart rate variability (HRV) response. Although typically, HRV features are
calculated from biosignals obtained with contact-based sensors such as
wearables, we propose instead a novel scheme that directly extracts them from
facial videos, just based on visual information, removing the need for any
contact-based device. Our solution is based on a pipeline that is able to
extract complete remote photoplethysmography signals (rPPG) in a fully
unsupervised manner. We use these rPPG signals to calculate over 60
statistical, geometrical, and physiological features that are further used to
train several machine learning regressors to recognize different levels of
depression. Experiments on two benchmark datasets indicate that this approach
offers comparable results to other audiovisual modalities based on voice or
facial expression, potentially complementing them. In addition, the results
achieved for the proposed method show promising and solid performance that
outperforms hand-engineered methods and is comparable to deep learning-based
approaches.
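The abstract mentions over 60 statistical, geometrical, and physiological features; as a hedged sketch of the kind of time-domain HRV statistics commonly derived from the inter-beat intervals of an rPPG signal (standard definitions, not the authors' exact feature set):

```python
import numpy as np

def hrv_features(ibi_ms):
    # Standard time-domain HRV statistics from inter-beat intervals (ms),
    # e.g. obtained from peak-to-peak times of an rPPG waveform.
    ibi = np.asarray(ibi_ms, dtype=float)
    diff = np.diff(ibi)
    return {
        "mean_hr_bpm": 60000.0 / ibi.mean(),
        "sdnn": ibi.std(ddof=1),                     # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),        # short-term variability
        "pnn50": 100.0 * np.mean(np.abs(diff) > 50), # % successive diffs > 50 ms
    }

print(hrv_features([812, 845, 790, 860, 830, 805]))
```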
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 10:23:49 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Casado",
"Constantino Álvarez",
""
],
[
"Cañellas",
"Manuel Lage",
""
],
[
"López",
"Miguel Bordallo",
""
]
] |
new_dataset
| 0.998645 |
2206.04421
|
Stefano Moriconi
|
Stefano Moriconi, Parashkev Nachev, Sebastien Ourselin, M. Jorge
Cardoso
|
Solid NURBS Conforming Scaffolding for Isogeometric Analysis
| null | null | null | null |
cs.CG cs.CE cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
This work introduces a scaffolding framework to compactly parametrise solid
structures with conforming NURBS elements for isogeometric analysis. A novel
formulation introduces a topological, geometrical and parametric subdivision of
the space in a minimal plurality of conforming vectorial elements. These
determine a multi-compartmental scaffolding for arbitrary branching patterns. A
solid smoothing paradigm is devised for the conforming scaffolding, achieving
geometrical and parametric continuity beyond positional continuity. Results are shown
for synthetic shapes of varying complexity, for modular CAD geometries, for
branching structures from tessellated meshes and for organic biological
structures from imaging data. Representative simulations demonstrate the
validity of the introduced scaffolding framework with scalable performance and
groundbreaking applications for isogeometric analysis.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 11:25:01 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Moriconi",
"Stefano",
""
],
[
"Nachev",
"Parashkev",
""
],
[
"Ourselin",
"Sebastien",
""
],
[
"Cardoso",
"M. Jorge",
""
]
] |
new_dataset
| 0.99449 |
2206.04428
|
Trinh Van Chien
|
Tan N. Nguyen and Dinh-Hieu Tran and Trinh Van Chien and Van-Duc Phan
and Miroslav Voznak and Phu Tran Tin and Symeon Chatzinotas and Derrick Wing
Kwan Ng and H. Vincent Poor
|
Security-Reliability Trade-Off Analysis for SWIPT- and AF-Based IoT
Networks with Friendly Jammers
|
15 pages, 12 figures, 1 table. Accepted by IEEE Internet of Things
Journal
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radio-frequency (RF) energy harvesting (EH) in wireless relaying networks has
attracted considerable recent interest, especially for supplying energy to
relay nodes in Internet-of-Things (IoT) systems to assist the information
exchange between a source and a destination. Moreover, limited hardware,
computational resources, and energy availability of IoT devices have raised
various security challenges. To this end, physical layer security (PLS) has
been proposed as an effective alternative to cryptographic methods for
providing information security. In this study, we propose a PLS approach for
simultaneous wireless information and power transfer (SWIPT)-based half-duplex
(HD) amplify-and-forward (AF) relaying systems in the presence of an
eavesdropper. Furthermore, we take into account both static power splitting
relaying (SPSR) and dynamic power splitting relaying (DPSR) to thoroughly
investigate the benefits of each one. To further enhance secure communication,
we consider multiple friendly jammers to help prevent wiretapping attacks from
the eavesdropper. More specifically, we provide a reliability and security
analysis by deriving closed-form expressions of outage probability (OP) and
intercept probability (IP), respectively, for both the SPSR and DPSR schemes.
Then, simulations are also performed to validate our analysis and the
effectiveness of the proposed schemes. Specifically, numerical results
illustrate the non-trivial trade-off between reliability and security of the
proposed system. In addition, we conclude from the simulation results that the
proposed DPSR scheme outperforms the SPSR-based scheme in terms of OP and IP
under the influences of different parameters on system performance.
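The paper derives closed-form OP/IP expressions for the full SWIPT/AF chain; as a loosely related toy illustration of how these two probabilities quantify the reliability-security trade-off, the following Monte-Carlo sketch estimates OP and IP for a single Rayleigh-faded hop (the SNRs and threshold are assumptions, not the paper's system model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma_th = 200_000, 1.0             # sample count and SNR threshold
snr_main = 10 ** (10.0 / 10)           # assumed 10 dB legitimate link
snr_eve = snr_main / 4                 # assumed weaker eavesdropper link

# Rayleigh fading => exponentially distributed instantaneous SNR.
g_main = rng.exponential(scale=snr_main, size=n)
g_eve = rng.exponential(scale=snr_eve, size=n)

op = np.mean(g_main < gamma_th)    # outage: legitimate SNR below threshold
ip = np.mean(g_eve >= gamma_th)    # intercept: eavesdropper above threshold
print(f"OP ~ {op:.4f}, IP ~ {ip:.4f}")
```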
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 11:34:30 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Nguyen",
"Tan N.",
""
],
[
"Tran",
"Dinh-Hieu",
""
],
[
"Van Chien",
"Trinh",
""
],
[
"Phan",
"Van-Duc",
""
],
[
"Voznak",
"Miroslav",
""
],
[
"Tin",
"Phu Tran",
""
],
[
"Chatzinotas",
"Symeon",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
new_dataset
| 0.987812 |
2206.04449
|
Eric Arazo
|
Eric Arazo, Robin Aly, Kevin McGuinness
|
Segmentation Enhanced Lameness Detection in Dairy Cows from RGB and
Depth Video
|
Accepted at the CV4Animals workshop in CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Cow lameness is a severe condition that affects the life cycle and life
quality of dairy cows and results in considerable economic losses. Early
lameness detection helps farmers address illnesses early and avoid negative
effects caused by the degeneration of cows' condition. We collected a dataset
of short clips of cows passing through a hallway exiting a milking station and
annotated the degree of lameness of the cows. This paper explores the resulting
dataset and provides a detailed description of the data collection process.
Additionally, we proposed a lameness detection method that leverages
pre-trained neural networks to extract discriminative features from videos and
assign a binary score to each cow indicating its condition: "healthy" or
"lame." We improve this approach by forcing the model to focus on the structure
of the cow, which we achieve by substituting the RGB videos with binary
segmentation masks predicted with a trained segmentation model. This work aims
to encourage research and provide insights into the applicability of computer
vision models for cow lameness detection on farms.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 12:16:31 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Arazo",
"Eric",
""
],
[
"Aly",
"Robin",
""
],
[
"McGuinness",
"Kevin",
""
]
] |
new_dataset
| 0.982216 |
2206.04503
|
Mohammad Manthouri
|
Faezeh Gholamrezaie, Mohammad Manthouri
|
cycle text2face: cycle text-to-face gan via transformers
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-face is a subset of text-to-image that requires a more complex
architecture due to the greater detail of its outputs. In this paper, we present
an encoder-decoder model called Cycle Text2Face. Cycle Text2Face takes a new
approach in the encoder part: it uses a sentence transformer and a GAN to
generate the image described by the text. The cycle is completed by reproducing
the text describing the face in the decoder part of the model. Evaluating the
model on the CelebA dataset leads to better results than previous GAN-based
models. In measuring the quality of the generated faces, in addition to
satisfying the human audience, we obtain an FID score of 3.458. This model,
with high-speed processing, provides quality face images in a short time.
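Since the abstract reports an FID score, a short sketch of the standard FID formula may help readers: it compares Gaussian fits of real and generated feature vectors (Inception features in practice; random Gaussians stand in for them here):

```python
import numpy as np
from scipy import linalg

def fid(feat_real, feat_gen):
    # FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2})
    mu1, mu2 = feat_real.mean(0), feat_gen.mean(0)
    c1 = np.cov(feat_real, rowvar=False)
    c2 = np.cov(feat_gen, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real           # drop tiny imaginary noise
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2 * covmean))

rng = np.random.default_rng(1)
print(fid(rng.normal(0, 1, (500, 16)), rng.normal(0.1, 1, (500, 16))))
```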
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 13:41:52 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Gholamrezaie",
"Faezeh",
""
],
[
"Manthouri",
"Mohammad",
""
]
] |
new_dataset
| 0.997179 |
2206.04513
|
Marc Brittain
|
Marc Brittain, Luis E. Alvarez, Kara Breeden, Ian Jessen
|
AAM-Gym: Artificial Intelligence Testbed for Advanced Air Mobility
|
10 pages, accepted for publication in 2022 IEEE/AIAA Digital Avionics
Systems Conference
| null | null | null |
cs.AI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce AAM-Gym, a research and development testbed for Advanced Air
Mobility (AAM). AAM has the potential to revolutionize travel by reducing
ground traffic and emissions by leveraging new types of aircraft such as
electric vertical take-off and landing (eVTOL) aircraft and new advanced
artificial intelligence (AI) algorithms. Validation of AI algorithms requires
representative AAM scenarios, as well as a fast-time simulation testbed to
evaluate their performance. Until now, there has been no such testbed available
for AAM to enable a common research platform for individuals in government,
industry, or academia. MIT Lincoln Laboratory has developed AAM-Gym to address
this gap by providing an ecosystem to develop, train, and validate new and
established AI algorithms across a wide variety of AAM use-cases. In this
paper, we use AAM-Gym to study the performance of two reinforcement learning
algorithms on an AAM use-case, separation assurance in AAM corridors. The
performance of the two algorithms is demonstrated based on a series of metrics
provided by AAM-Gym, showing the testbed's utility to AAM research.
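The abstract does not show AAM-Gym's API; assuming it follows the common Gym-style interface suggested by its name, evaluating a policy would look roughly like the loop below (`CartPole-v1` is only a stand-in environment id, and the classic 4-tuple `step` API of older `gym` releases is assumed):

```python
import gym

env = gym.make("CartPole-v1")   # stand-in for an AAM separation-assurance task
obs = env.reset()               # older gym API: reset() returns obs only
done, ret = False, 0.0
while not done:
    action = env.action_space.sample()   # replace with a trained RL policy
    obs, reward, done, info = env.step(action)
    ret += reward
print("episode return:", ret)
```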
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 13:57:10 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Brittain",
"Marc",
""
],
[
"Alvarez",
"Luis E.",
""
],
[
"Breeden",
"Kara",
""
],
[
"Jessen",
"Ian",
""
]
] |
new_dataset
| 0.999415 |
2206.04523
|
Dogucan Yaman
|
Alexander Waibel and Moritz Behr and Fevziye Irem Eyiokur and Dogucan
Yaman and Tuan-Nam Nguyen and Carlos Mullov and Mehmet Arif Demirtas and
Alperen Kantarc{\i} and Stefan Constantin and Haz{\i}m Kemal Ekenel
|
Face-Dubbing++: Lip-Synchronous, Voice Preserving Translation of Videos
| null | null | null | null |
cs.CL cs.CV cs.SD eess.AS eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we propose a neural end-to-end system for voice preserving,
lip-synchronous translation of videos. The system is designed to combine
multiple component models and produces a video of the original speaker speaking
in the target language that is lip-synchronous with the target speech, yet
maintains the emphases in speech, the voice characteristics, and the face video of the original
speaker. The pipeline starts with automatic speech recognition including
emphasis detection, followed by a translation model. The translated text is
then synthesized by a Text-to-Speech model that recreates the original emphases
mapped from the original sentence. The resulting synthetic voice is then mapped
back to the original speakers' voice using a voice conversion model. Finally,
to synchronize the lips of the speaker with the translated audio, a conditional
generative adversarial network-based model generates frames of adapted lip
movements with respect to the input face image as well as the output of the
voice conversion model. In the end, the system combines the generated video
with the converted audio to produce the final output. The result is a video of
a speaker speaking in another language without actually knowing it. To evaluate
our design, we present a user study of the complete system as well as separate
evaluations of the single components. Since there is no available dataset to
evaluate our whole system, we collect a test set and evaluate our system on
this test set. The results indicate that our system is able to generate
convincing videos of the original speaker speaking the target language while
preserving the original speaker's characteristics. The collected dataset will
be shared.
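To make the stage ordering concrete, here is a data-flow sketch in which every stage is a hypothetical stub standing in for the corresponding real model (ASR, translation, TTS, voice conversion, lip sync); none of these function names come from the paper:

```python
# Hypothetical stubs -- each stands in for a full neural model.
def asr(audio): return "hello world", ["hello"]          # text + emphases
def translate(text): return "hallo welt"
def tts(text, emphasis): return b"synthetic-wav"
def voice_convert(wav, speaker_ref): return b"converted-wav"
def lip_sync(face_frames, wav): return ["frame0", "frame1"]

def dub(audio, face_frames, speaker_ref):
    text, emphasis = asr(audio)              # recognize speech and emphases
    wav = tts(translate(text), emphasis)     # translate, then resynthesize
    wav = voice_convert(wav, speaker_ref)    # restore the original voice
    return lip_sync(face_frames, wav), wav   # adapt lips, keep the audio

frames, wav = dub(b"wav", ["f0", "f1"], b"ref")
print(frames, wav)
```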
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 14:15:37 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Waibel",
"Alexander",
""
],
[
"Behr",
"Moritz",
""
],
[
"Eyiokur",
"Fevziye Irem",
""
],
[
"Yaman",
"Dogucan",
""
],
[
"Nguyen",
"Tuan-Nam",
""
],
[
"Mullov",
"Carlos",
""
],
[
"Demirtas",
"Mehmet Arif",
""
],
[
"Kantarcı",
"Alperen",
""
],
[
"Constantin",
"Stefan",
""
],
[
"Ekenel",
"Hazım Kemal",
""
]
] |
new_dataset
| 0.983341 |
2206.04533
|
Nipun Dhananjaya Weerakkodi Mudalige
|
Nipun Dhananjaya Weerakkodi Mudalige, Elena Nazarova, Ildar Babataev,
Pavel Kopanev, Aleksey Fedoseev, Miguel Altamirano Cabrera and Dzmitry
Tsetserukou
|
DogTouch: CNN-based Recognition of Surface Textures by Quadruped Robot
with High Density Tactile Sensors
|
Accepted paper at IEEE Vehicular Technology Conference 2022 (IEEE VTC
2022), IEEE copyright
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The ability to perform locomotion in various terrains is critical for legged
robots. However, the robot has to have a better understanding of the surface it
is walking on to perform robust locomotion on different terrains. Animals and
humans are able to recognize the surface with the help of the tactile sensation
on their feet. However, foot tactile sensation for legged robots has not been
much explored. This paper presents research on a novel quadruped robot
DogTouch with tactile sensing feet (TSF). TSF allows the recognition of
different surface textures utilizing a tactile sensor and a convolutional
neural network (CNN). The experimental results show a sufficient validation
accuracy of 74.37\% for our trained CNN-based model, with the highest
recognition for line patterns of 90\%. In the future, we plan to improve the
prediction model by presenting surface samples with various depths of patterns
and applying advanced deep learning and shallow learning models for
surface recognition.
Additionally, we propose a novel approach to the navigation of quadruped and
legged robots. We can arrange tactile paving textured surfaces (similar to
those used for blind or visually impaired people). Thus, DogTouch will be
capable of locomotion in unknown environments by simply recognizing the
specific tactile patterns that indicate a straight path, a left or right turn,
a pedestrian crossing, a road, etc. This will allow robust navigation
regardless of lighting conditions. Future quadruped robots equipped with
visual and tactile perception systems will be able to safely and intelligently
navigate and interact in unstructured indoor and outdoor environments.
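As a hedged sketch of the texture-classification setup (a generic small CNN, not the authors' network; the 12x12 tactile-frame resolution and four texture classes are assumptions):

```python
import torch
import torch.nn as nn

# Small CNN mapping a single-channel tactile pressure map to texture logits.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 3 * 3, 4),   # 12x12 input, 4 texture classes
)
x = torch.randn(8, 1, 12, 12)                 # a batch of tactile frames
print(model(x).shape)                         # torch.Size([8, 4])
```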
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 14:32:00 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Mudalige",
"Nipun Dhananjaya Weerakkodi",
""
],
[
"Nazarova",
"Elena",
""
],
[
"Babataev",
"Ildar",
""
],
[
"Kopanev",
"Pavel",
""
],
[
"Fedoseev",
"Aleksey",
""
],
[
"Cabrera",
"Miguel Altamirano",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
new_dataset
| 0.998684 |
2206.04575
|
Mohammad Daniyal Shaiq
|
Mohammad Daniyal Shaiq, Musa Dildar Ahmed Cheema, Ali Kamal
|
Transformer based Urdu Handwritten Text Optical Character Reader
| null | null | null | null |
cs.CV cs.AI cs.IR cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Extracting handwritten text is one of the most important components of
digitizing information and making it available at large scale. Handwriting
Optical Character Reader (OCR) is a research problem in computer vision and
natural language processing, and a lot of work has been done for English, but
unfortunately very little work has been done for low-resource languages such
as Urdu. The Urdu script is very difficult because of its cursive nature and
the change of shape of characters based on their relative position; therefore,
a need arises for a model which can understand complex features and generalize
across every kind of handwriting style. In this work, we propose a
transformer-based Urdu handwritten text extraction model. As transformers have
been very successful in natural language understanding tasks, we explore them
further to understand complex Urdu handwriting.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 15:43:35 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Shaiq",
"Mohammad Daniyal",
""
],
[
"Cheema",
"Musa Dildar Ahmed",
""
],
[
"Kamal",
"Ali",
""
]
] |
new_dataset
| 0.996539 |
2206.04590
|
Fares Abawi
|
Fares Abawi, Tom Weber and Stefan Wermter
|
GASP: Gated Attention For Saliency Prediction
|
International Joint Conference on Artificial Intelligence (IJCAI-21)
|
Proceedings of the Thirtieth International Joint Conference on
Artificial Intelligence (2021) 584-591
|
10.24963/ijcai.2021/81
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Saliency prediction refers to the computational task of modeling overt
attention. Social cues greatly influence our attention, consequently altering
our eye movements and behavior. To emphasize the efficacy of such features, we
present a neural model for integrating social cues and weighting their
influences. Our model consists of two stages. During the first stage, we detect
two social cues by following gaze, estimating gaze direction, and recognizing
affect. These features are then transformed into spatiotemporal maps through
image processing operations. The transformed representations are propagated to
the second stage (GASP) where we explore various techniques of late fusion for
integrating social cues and introduce two sub-networks for directing attention
to relevant stimuli. Our experiments indicate that fusion approaches achieve
better results for static integration methods, whereas non-fusion approaches
for which the influence of each modality is unknown, result in better outcomes
when coupled with recurrent models for dynamic saliency prediction. We show
that gaze direction and affective representations contribute a prediction to
ground-truth correspondence improvement of at least 5% compared to dynamic
saliency models without social cues. Furthermore, affective representations
improve GASP, supporting the necessity of considering affect-biased attention
in predicting saliency.
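As a minimal sketch of the late-fusion idea of learning per-modality gates over spatiotemporal maps (a generic gate, not GASP's exact mechanism; channel counts are assumptions):

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Learns one spatial gate per modality and fuses the maps as a
    # softmax-weighted sum, so each modality's influence is explicit.
    def __init__(self, n_mod, ch):
        super().__init__()
        self.gate = nn.Conv2d(n_mod * ch, n_mod, kernel_size=1)

    def forward(self, maps):                    # list of (B, ch, H, W) maps
        w = torch.softmax(self.gate(torch.cat(maps, 1)), dim=1)
        return sum(w[:, i:i + 1] * m for i, m in enumerate(maps))

fuse = GatedFusion(3, 8)
maps = [torch.randn(2, 8, 16, 16) for _ in range(3)]
print(fuse(maps).shape)                         # torch.Size([2, 8, 16, 16])
```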
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 16:14:09 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Abawi",
"Fares",
""
],
[
"Weber",
"Tom",
""
],
[
"Wermter",
"Stefan",
""
]
] |
new_dataset
| 0.992532 |
2206.04659
|
Safa Zaid Malik
|
Safa Zaid, Aswah Malik, Kisa Fatima
|
Jewelry Shop Conversational Chatbot
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Since the advent of chatbots in the commercial sector, they have been widely
employed in the customer service department. Typically, these commercial
chatbots are retrieval-based, so they are unable to respond to queries absent
in the provided dataset. On the contrary, generative chatbots try to create the
most appropriate response, but are mostly unable to create a smooth flow in the
customer-bot dialog. Since the client has few options left for continuing after
receiving a response, the dialog becomes short. Through our work, we try to
maximize the intelligence of a simple conversational agent so it can answer
unseen queries, and generate follow-up questions or remarks. We have built a
chatbot for a jewelry shop that finds the underlying objective of the
customer's query by finding similarity of the input to patterns in the corpus.
Our system features an audio input interface for clients, so they may speak to
it in natural language. After converting the audio to text, we trained the
model to extract the intent of the query, to find an appropriate response and
to speak to the client in a natural human voice. To gauge the system's
performance, we used performance metrics such as Recall, Precision and F1
score.
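A toy version of the pattern-similarity step (speech recognition and speech synthesis omitted; the patterns and responses are invented examples) can be sketched with TF-IDF retrieval:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patterns = ["what gold rings do you have", "what are your opening hours"]
responses = ["We stock 18k and 22k gold rings.", "We are open 9am to 6pm."]

vec = TfidfVectorizer().fit(patterns)

def reply(query):
    # Match the query to the most similar stored pattern (the "intent")
    # and return that intent's canned response.
    sims = cosine_similarity(vec.transform([query]), vec.transform(patterns))
    return responses[sims.argmax()]

print(reply("do you sell gold rings"))
```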
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 17:56:51 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Zaid",
"Safa",
""
],
[
"Malik",
"Aswah",
""
],
[
"Fatima",
"Kisa",
""
]
] |
new_dataset
| 0.999451 |
2206.04668
|
Gaurav Mittal
|
Junwen Chen, Gaurav Mittal, Ye Yu, Yu Kong, Mei Chen
|
GateHUB: Gated History Unit with Background Suppression for Online
Action Detection
|
CVPR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Online action detection is the task of predicting the action as soon as it
happens in a streaming video. A major challenge is that the model does not have
access to the future and has to solely rely on the history, i.e., the frames
observed so far, to make predictions. It is therefore important to accentuate
parts of the history that are more informative to the prediction of the current
frame. We present GateHUB, Gated History Unit with Background Suppression, that
comprises a novel position-guided gated cross-attention mechanism to enhance or
suppress parts of the history as per how informative they are for current frame
prediction. GateHUB further proposes Future-augmented History (FaH) to make
history features more informative by using subsequently observed frames when
available. In a single unified framework, GateHUB integrates the transformer's
ability of long-range temporal modeling and the recurrent model's capacity to
selectively encode relevant information. GateHUB also introduces a background
suppression objective to further mitigate false positive background frames that
closely resemble the action frames. Extensive validation on three benchmark
datasets, THUMOS, TVSeries, and HDD, demonstrates that GateHUB significantly
outperforms all existing methods and is also more efficient than the existing
best work. Furthermore, a flow-free version of GateHUB is able to achieve
higher or close accuracy at 2.8x higher frame rate compared to all existing
methods that require both RGB and optical flow information for prediction.
|
[
{
"version": "v1",
"created": "Thu, 9 Jun 2022 17:59:44 GMT"
}
] | 2022-06-10T00:00:00 |
[
[
"Chen",
"Junwen",
""
],
[
"Mittal",
"Gaurav",
""
],
[
"Yu",
"Ye",
""
],
[
"Kong",
"Yu",
""
],
[
"Chen",
"Mei",
""
]
] |
new_dataset
| 0.994021 |
1807.01369
|
Michael Fiske S
|
Michael Stephen Fiske
|
Quantum Random Self-Modifiable Computation
|
50 pages, 3 figures. Computational Intelligence Series, Springer,
2021
| null |
10.1007/978-3-030-70873-3_27
| null |
cs.CC cs.LO quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Among the fundamental questions in computer science, at least two have a deep
impact on mathematics. What can computation compute? How many steps does a
computation require to solve an instance of the 3-SAT problem? Our work
addresses the first question, by introducing a new model called the ex-machine.
The ex-machine executes Turing machine instructions and two special types of
instructions. Quantum random instructions are physically realizable with a
quantum random number generator. Meta instructions can add new states and add
new instructions to the ex-machine. A countable set of ex-machines is
constructed, each with a finite number of states and instructions; each
ex-machine can compute a Turing incomputable language, whenever the quantum
randomness measurements behave like unbiased Bernoulli trials. In 1936, Alan
Turing posed the halting problem for Turing machines and proved that this
problem is unsolvable for Turing machines. Consider an enumeration E_a(i) =
(M_i, T_i) of all Turing machines M_i and initial tapes T_i. Does there exist
an ex-machine X that has at least one evolutionary path X --> X_1 --> X_2 --> .
. . --> X_m, so at the mth stage ex-machine X_m can correctly determine for 0
<= i <= m whether M_i's execution on tape T_i eventually halts? We demonstrate
an ex-machine Q(x) that has one such evolutionary path. The existence of this
evolutionary path suggests that David Hilbert was not misguided to propose in
1900 that mathematicians search for finite processes to help construct
mathematical proofs. Our refinement is that we cannot use a fixed computer
program that behaves according to a fixed set of mechanical rules. We must
pursue methods that exploit randomness and self-modification so that the
complexity of the program can increase as it computes.
|
[
{
"version": "v1",
"created": "Tue, 26 Jun 2018 22:45:10 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Jul 2018 02:34:17 GMT"
},
{
"version": "v3",
"created": "Fri, 6 Jul 2018 21:16:50 GMT"
},
{
"version": "v4",
"created": "Sat, 22 Sep 2018 00:04:38 GMT"
},
{
"version": "v5",
"created": "Wed, 5 Dec 2018 18:46:36 GMT"
},
{
"version": "v6",
"created": "Thu, 6 Dec 2018 18:49:34 GMT"
},
{
"version": "v7",
"created": "Mon, 31 Dec 2018 15:37:44 GMT"
},
{
"version": "v8",
"created": "Fri, 3 May 2019 19:55:40 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Fiske",
"Michael Stephen",
""
]
] |
new_dataset
| 0.953721 |
1901.06614
|
Qingkai Kong
|
Qingkai Kong, Qin Lv, Richard M. Allen
|
Earthquake Early Warning and Beyond: Systems Challenges in
Smartphone-based Seismic Network
|
6 pages, conference paper, already accepted at hotmobile 2019
|
HotMobile 2019: 57-62
|
10.1145/3301293.3302377
| null |
cs.SY cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Earthquake Early Warning (EEW) systems can effectively reduce fatalities,
injuries, and damages caused by earthquakes. Current EEW systems are mostly
based on traditional seismic and geodetic networks, and exist only in a few
countries due to the high cost of installing and maintaining such systems. The
MyShake system takes a different approach and turns people's smartphones into
portable seismic sensors to detect earthquake-like motions. However, to issue
EEW messages with high accuracy and low latency in the real world, we need to
address a number of challenges related to mobile computing. In this paper, we
first summarize our experience building and deploying the MyShake system, then
focus on two key challenges for smartphone-based EEW (sensing heterogeneity and
user/system dynamics) and present some preliminary explorations. We also
discuss other challenges and new research directions associated with
smartphone-based seismic networks.
|
[
{
"version": "v1",
"created": "Sun, 20 Jan 2019 02:32:52 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Kong",
"Qingkai",
""
],
[
"Lv",
"Qin",
""
],
[
"Allen",
"Richard M.",
""
]
] |
new_dataset
| 0.98044 |
1907.10101
|
Marco Scarsini
|
Roberto Cominetti, Valerio Dose, Marco Scarsini
|
The Price of Anarchy in Routing Games as a Function of the Demand
|
22 pages, 7 figures
| null |
10.1007/s10107-021-01701-7
| null |
cs.GT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The price of anarchy has become a standard measure of the efficiency of
equilibria in games. Most of the literature in this area has focused on
establishing worst-case bounds for specific classes of games, such as routing
games or more general congestion games. Recently, the price of anarchy in
routing games has been studied as a function of the traffic demand, providing
asymptotic results in light and heavy traffic. The aim of this paper is to
study the price of anarchy in nonatomic routing games in the intermediate
region of the demand. To achieve this goal, we begin by establishing some
smoothness properties of Wardrop equilibria and social optima for general
smooth costs. In the case of affine costs we show that the equilibrium is
piecewise linear, with break points at the demand levels at which the set of
active paths changes. We prove that the number of such break points is finite,
although it can be exponential in the size of the network. Exploiting a scaling
law between the equilibrium and the social optimum, we derive a similar
behavior for the optimal flows. We then prove that in any interval between
break points the price of anarchy is smooth and it is either monotone
(decreasing or increasing) over the full interval, or it decreases up to a
certain minimum point in the interior of the interval and increases afterwards.
We deduce that for affine costs the maximum of the price of anarchy can only
occur at the break points. For general costs we provide counterexamples showing
that the set of break points is not always finite.
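A classic two-link example with affine costs (not taken from the paper) makes the demand dependence concrete: with one link of constant cost 1 and one link whose cost equals its flow, the price of anarchy is 1 for demand d <= 1/2 and grows to 4/3 at d = 1:

```python
# Pigou-style network: link A costs 1 per unit, link B costs x per unit
# when carrying flow x. For demand d <= 1 the Wardrop equilibrium routes
# everything on B (its cost d never exceeds 1).
def eq_cost(d):
    return d * d                  # each of the d units pays cost d on B

def opt_cost(d):
    x = min(d, 0.5)               # minimizing (d - x) * 1 + x * x gives x = 1/2
    return (d - x) + x * x

for d in (0.25, 0.5, 0.75, 1.0):
    print(d, eq_cost(d) / opt_cost(d))   # PoA(d): 1.0, 1.0, 1.125, 1.333...
```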
|
[
{
"version": "v1",
"created": "Tue, 23 Jul 2019 18:59:31 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jul 2020 15:48:48 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Apr 2021 11:39:46 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Cominetti",
"Roberto",
""
],
[
"Dose",
"Valerio",
""
],
[
"Scarsini",
"Marco",
""
]
] |
new_dataset
| 0.994048 |
2107.03605
|
Zhaorui Wang
|
Zhaorui Wang, Ling Liu, Shengli Zhang, Pengpeng Dong, Qing Yang, and
Taotao Wang
|
PNC Enabled IIoT: A General Framework for Channel-Coded Asymmetric
Physical-Layer Network Coding
|
To appear in IEEE TWC
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper investigates the application of physical-layer network coding
(PNC) to Industrial Internet-of-Things (IIoT) where a controller and a robot
are out of each other's transmission range, and they exchange messages with the
assistance of a relay. We particularly focus on a scenario where the controller
has more transmitted information, and the channel of the controller is stronger
than that of the robot. To reduce the communication latency, we propose an
asymmetric transmission scheme where the controller and robot transmit
different amounts of information in the uplink of PNC simultaneously. To achieve
this, the controller chooses a higher-order modulation. In addition, both
users apply channel codes to guarantee reliability. A problem is that a
superimposed symbol at the relay contains different amounts of source
information from the two end users. It is thus hard for the relay to deduce
meaningful network-coded messages by applying the current PNC decoding
techniques which require the end users to transmit the same amount of
information. To solve this problem, we propose a lattice-based scheme where the
two users encode-and-modulate their information in lattices with different
lattice construction levels. Our design is versatile in that the two end users
can freely choose their modulation orders based on their channel power, and the
design is applicable to arbitrary channel codes.
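The paper's asymmetric, channel-coded design is substantially more involved, but the core PNC step -- the relay maps a superimposed signal directly to a network-coded symbol without decoding the individual messages -- can be sketched in the symmetric, noiseless BPSK case:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.integers(0, 2, 10), rng.integers(0, 2, 10)   # two users' bits
superimposed = (2 * a - 1) + (2 * b - 1)   # BPSK adder channel: {-2, 0, +2}
xor_hat = (superimposed == 0).astype(int)  # 0 means the bits differ => XOR = 1
assert np.array_equal(xor_hat, a ^ b)      # relay recovers the XOR directly
print(xor_hat)
```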
|
[
{
"version": "v1",
"created": "Thu, 8 Jul 2021 04:55:05 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Mar 2022 06:58:14 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Jun 2022 07:29:23 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Wang",
"Zhaorui",
""
],
[
"Liu",
"Ling",
""
],
[
"Zhang",
"Shengli",
""
],
[
"Dong",
"Pengpeng",
""
],
[
"Yang",
"Qing",
""
],
[
"Wang",
"Taotao",
""
]
] |
new_dataset
| 0.999737 |
2111.06006
|
Aaron Hertzmann
|
Chenxi Liu, Pierre B\'enard, Aaron Hertzmann, Shayan Hoshyari
|
ConTesse: Accurate Occluding Contours for Subdivision Surfaces
|
Accepted to ACM Transactions on Graphics (TOG)
| null | null | null |
cs.GR cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a method for computing the visible occluding contours of
subdivision surfaces. The paper first introduces new theory for contour
visibility of smooth surfaces. Necessary and sufficient conditions are
introduced for when a sampled occluding contour is valid, that is, when it may
be assigned consistent visibility. Previous methods do not guarantee these
conditions, which helps explain why smooth contour visibility has been such a
challenging problem in the past. The paper then proposes an algorithm that,
given a subdivision surface, finds sampled contours satisfying these
conditions, and then generates a new triangle mesh matching the given occluding
contours. The contours of the output triangle mesh may then be rendered with
standard non-photorealistic rendering algorithms, using the mesh for visibility
computation. The method can be applied to any triangle mesh, by treating it as
the base mesh of a subdivision surface.
|
[
{
"version": "v1",
"created": "Thu, 11 Nov 2021 01:12:51 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2022 04:20:04 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Jun 2022 12:57:21 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Liu",
"Chenxi",
""
],
[
"Bénard",
"Pierre",
""
],
[
"Hertzmann",
"Aaron",
""
],
[
"Hoshyari",
"Shayan",
""
]
] |
new_dataset
| 0.99057 |
2112.09078
|
Chen Li
|
Divya Ramesh, Qiyuan Fu and Chen Li
|
SenSnake: A snake robot with contact force sensing for studying
locomotion in complex 3-D terrain
| null |
IEEE International Conference on Robotics and Automation (2022)
| null | null |
cs.RO physics.bio-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Despite advances in a diversity of environments, snake robots are still far
behind snakes in traversing complex 3-D terrain with large obstacles. This is
due to a lack of understanding of how to control 3-D body bending to push
against terrain features to generate and control propulsion. Biological studies
suggested that generalist snakes use contact force sensing to adjust body
bending in real time to do so. However, studying this sensory-modulated force
control in snakes is challenging, due to a lack of basic knowledge of how their
force sensing organs work. Here, we take a robophysics approach to make
progress, starting by developing a snake robot capable of 3-D body bending with
contact force sensing to enable systematic locomotion experiments and force
measurements. Through two development and testing iterations, we created a
12-segment robot with 36 piezo-resistive sheet sensors distributed on all
segments with compliant shells with a sampling frequency of 30 Hz. The robot
measured contact forces while traversing a large obstacle using vertical
bending with high repeatability, achieving the goal of providing a platform for
systematic experiments. Finally, we explored model-based calibration
considering the viscoelastic behavior of the piezo-resistive sensor, which will
be useful for future studies.
|
[
{
"version": "v1",
"created": "Wed, 15 Dec 2021 18:36:53 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Mar 2022 06:35:13 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Jun 2022 16:40:58 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Ramesh",
"Divya",
""
],
[
"Fu",
"Qiyuan",
""
],
[
"Li",
"Chen",
""
]
] |
new_dataset
| 0.999781 |
2202.12534
|
Jan Zeman
|
M. J\'ilek, K. Str\'ansk\'a, M. Somr, M. Kulich, J. Zeman, and L.
P\v{r}eu\v{c}il
|
Self-Stabilizing Self-Assembly
|
7 pages, 14 figures, 1 table; incorporates referees' and editor's
comments
| null | null | null |
cs.RO cond-mat.soft cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emerging field of passive macro-scale tile-based self-assembly (TBSA)
shows promise in enabling effective manufacturing processes by harnessing
TBSA's intrinsic parallelism. However, current TBSA methodologies still do not
fulfill their potential, largely because such assemblies are often prone to
errors, and the size of an individual assembly is limited due to insufficient
mechanical stability. Moreover, the instability issue worsens as assemblies
grow in size. Using a novel type of magnetically-bonded tiles carried by
bristle-bot drives, we propose here a framework that reverses this tendency;
i.e., as an assembly grows, it becomes more stable. Stability is achieved by
introducing two sets of tiles that move in opposite directions, thus zeroing
the assembly net force. Using physics-based computational experiments, we
compare the performance of the proposed approach with the common orbital
shaking method, proving that the proposed system of tiles indeed possesses
self-stabilizing characteristics. Our approach enables assemblies containing
hundreds of tiles to be built, while the shaking approach is inherently limited
to a few tens of tiles. Our results indicate that one of the primary
limitations of mechanical, agitation-based TBSA approaches, instability, might
be overcome by employing a swarm of free-running, sensorless mobile robots,
herein represented by passive tiles at the macroscopic scale.
|
[
{
"version": "v1",
"created": "Fri, 25 Feb 2022 07:49:07 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 20:12:57 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Jílek",
"M.",
""
],
[
"Stránská",
"K.",
""
],
[
"Somr",
"M.",
""
],
[
"Kulich",
"M.",
""
],
[
"Zeman",
"J.",
""
],
[
"Přeučil",
"L.",
""
]
] |
new_dataset
| 0.969302 |
2203.13658
|
Daniel Wiegreffe
|
Michelle Kampfrath, Ren\'e Staritzbichler, Guillermo P\'erez
Hern\'andez, Alexander S. Rose, Johanna K.S. Tiemann, Gerik Scheuermann,
Daniel Wiegreffe, Peter W. Hildebrand
|
MDsrv -- visual sharing and analysis of molecular dynamics simulations
|
9 pages, 3 figures
| null |
10.1093/nar/gkac398
| null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Molecular dynamics simulation is a proven technique for computing and
visualizing the time-resolved motion of macromolecules at atomic resolution.
The MDsrv is a tool that streams MD trajectories and displays them
interactively in web browsers without requiring advanced skills, facilitating
interactive exploration and collaborative visual analysis. We have now enhanced
the MDsrv to further simplify the upload and sharing of MD trajectories and
improve their online viewing and analysis. With the new instance, the MDsrv
simplifies the creation of sessions, which allows the exchange of MD
trajectories with preset representations and perspectives. An important
innovation is that the MDsrv can now access and visualize trajectories from
remote datasets, which greatly expands its applicability and use, as the data
no longer needs to be accessible on a local server. In addition, initial
analyses such as sequence or structure alignments, distance measurements, or
RMSD calculations have been implemented, which optionally support visual
analysis. Finally, the MDsrv now offers a faster and more efficient
visualization of even large trajectories.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 14:08:24 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2022 14:09:53 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Kampfrath",
"Michelle",
""
],
[
"Staritzbichler",
"René",
""
],
[
"Hernández",
"Guillermo Pérez",
""
],
[
"Rose",
"Alexander S.",
""
],
[
"Tiemann",
"Johanna K. S.",
""
],
[
"Scheuermann",
"Gerik",
""
],
[
"Wiegreffe",
"Daniel",
""
],
[
"Hildebrand",
"Peter W.",
""
]
] |
new_dataset
| 0.978475 |
2203.14412
|
Feixiang He
|
Feixiang He, Yanlong Huang, He Wang
|
iPLAN: Interactive and Procedural Layout Planning
|
Accepted in CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Layout design is ubiquitous in many applications, e.g., architecture and urban
planning, and typically involves a lengthy iterative design process. Recently,
deep learning has been leveraged to automatically generate layouts via image
generation, showing a huge potential to free designers from laborious routines.
While automatic generation can greatly boost productivity, designer input is
undoubtedly crucial. An ideal AI-aided design tool should automate repetitive
routines, and meanwhile accept human guidance and provide smart/proactive
suggestions. However, the capability of involving humans into the loop has been
largely ignored in existing methods which are mostly end-to-end approaches. To
this end, we propose a new human-in-the-loop generative model, iPLAN, which is
capable of automatically generating layouts, but also interacting with
designers throughout the whole procedure, enabling humans and AI to co-evolve a
sketchy idea gradually into the final design. iPLAN is evaluated on diverse
datasets and compared with existing methods. The results show that iPLAN has
high fidelity in producing similar layouts to those from human designers, great
flexibility in accepting designer inputs and providing design suggestions
accordingly, and strong generalizability when facing unseen design tasks and
limited training data.
|
[
{
"version": "v1",
"created": "Sun, 27 Mar 2022 23:21:15 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 16:28:03 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"He",
"Feixiang",
""
],
[
"Huang",
"Yanlong",
""
],
[
"Wang",
"He",
""
]
] |
new_dataset
| 0.994573 |
2204.09795
|
Jalal Mostafa
|
Jalal Mostafa, Sara Wehbi, Suren Chilingaryan, Andreas Kopmann
|
SciTS: A Benchmark for Time-Series Databases in Scientific Experiments
and Industrial Internet of Things
| null | null |
10.1145/3538712.3538723
| null |
cs.DB astro-ph.IM cs.PF hep-ex
|
http://creativecommons.org/licenses/by/4.0/
|
Time-series data is seeing rapidly growing usage in Industrial Internet of
Things (IIoT) and large-scale scientific experiments. Managing time-series data
needs a storage engine that can keep up with their constantly growing volumes
while providing an acceptable query latency. While traditional ACID databases
favor consistency over performance, many time-series databases with novel
storage engines have been developed to provide better ingestion performance and
lower query latency. To understand how the unique design of a time-series
database affects its performance, we design SciTS, a highly extensible and
parameterizable benchmark for time-series data. The benchmark studies the data
ingestion capabilities of time-series databases especially as they grow larger
in size. It also studies the latencies of 5 practical queries from the
scientific experiments use case. We use SciTS to evaluate the performance of 4
databases of 4 distinct storage engines: ClickHouse, InfluxDB, TimescaleDB, and
PostgreSQL.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2022 21:53:33 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 13:11:42 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Mostafa",
"Jalal",
""
],
[
"Wehbi",
"Sara",
""
],
[
"Chilingaryan",
"Suren",
""
],
[
"Kopmann",
"Andreas",
""
]
] |
new_dataset
| 0.999194 |
2205.01952
|
Ayrat Khalimov
|
L\'eo Exibard, Emmanuel Filiot, Ayrat Khalimov
|
A Generic Solution to Register-bounded Synthesis with an Application to
Discrete Orders
|
submitted by accident; was intended as an update of arXiv:2105.09978
| null |
10.4230/LIPIcs.ICALP.2022.116
| null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
We study synthesis of reactive systems interacting with environments using an
infinite data domain. A popular formalism for specifying and modelling such
systems is register automata and transducers. They extend finite-state automata
by adding registers to store data values and to compare the incoming data
values against stored ones. Synthesis from nondeterministic or universal
register automata is undecidable in general. However, its register-bounded
variant, where additionally a bound on the number of registers in a sought
transducer is given, is known to be decidable for universal register automata
which can compare data for equality, i.e., for data domain $(N,=)$. This paper
extends the decidability border to the domain $(N,<)$ of natural numbers with
linear order. Our solution is generic: we define a sufficient condition on data
domains (regular approximability) for decidability of register-bounded
synthesis. The condition is satisfied by natural data domains like $(N,<)$. It
allows one to use simple language-theoretic arguments and avoid technical
game-theoretic reasoning. Further, by defining a generic notion of reducibility
between data domains, we show the decidability of synthesis in the domain
$(N^d,<^d)$ of tuples of numbers equipped with the component-wise partial order
and in the domain $(\Sigma^*,\prec)$ of finite strings with the prefix
relation.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2022 08:45:07 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2022 12:17:42 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Jun 2022 09:43:16 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Exibard",
"Léo",
""
],
[
"Filiot",
"Emmanuel",
""
],
[
"Khalimov",
"Ayrat",
""
]
] |
new_dataset
| 0.99528 |
2205.14328
|
Chaohui Yu
|
Qiang Zhou, Chaohui Yu, Zhibin Wang, Hao Li
|
Point RCNN: An Angle-Free Framework for Rotated Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rotated object detection in aerial images is still challenging due to
arbitrary orientations, large scale and aspect ratio variations, and extreme
density of objects. Existing state-of-the-art rotated object detection methods
mainly rely on angle-based detectors. However, angle regression can easily
suffer from the long-standing boundary problem. To tackle this problem, we
propose a purely angle-free framework for rotated object detection, called
Point RCNN, which mainly consists of PointRPN and PointReg. In particular,
PointRPN generates accurate rotated RoIs (RRoIs) by converting the learned
representative points with a coarse-to-fine manner, which is motivated by
RepPoints. Based on the learned RRoIs, PointReg performs corner points
refinement for more accurate detection. In addition, aerial images are often
severely unbalanced in categories, and existing methods almost ignore this
issue. In this paper, we also experimentally verify that re-sampling the images
of the rare categories will stabilize training and further improve the
detection performance. Experiments demonstrate that our Point RCNN achieves
new state-of-the-art detection performance on commonly used aerial datasets,
including DOTA-v1.0, DOTA-v1.5, and HRSC2016.
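As an illustration of representing a rotated box without regressing an angle directly (a simplification, not the paper's exact conversion from learned representative points), a point set can be reduced to a rotated box via OpenCV's minimum-area rectangle; the points below are invented:

```python
import numpy as np
import cv2

# Hypothetical "representative points" of one object.
points = np.array([[10, 10], [40, 18], [36, 30], [8, 24], [22, 12]],
                  dtype=np.float32)
(cx, cy), (w, h), angle = cv2.minAreaRect(points)   # rotated box parameters
corners = cv2.boxPoints(((cx, cy), (w, h), angle))  # its 4 corner points
print((cx, cy), (w, h), angle)
print(corners)
```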
|
[
{
"version": "v1",
"created": "Sat, 28 May 2022 04:07:37 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 02:39:08 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Zhou",
"Qiang",
""
],
[
"Yu",
"Chaohui",
""
],
[
"Wang",
"Zhibin",
""
],
[
"Li",
"Hao",
""
]
] |
new_dataset
| 0.970173 |
2206.02443
|
Thaer Sahmoud
|
Thaer Sahmoud, Dr. Mohammad Mikki
|
Spam Detection Using BERT
|
6 pages, 8 figures and 2 tabels
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Emails and SMSs are the most popular tools in today's communications, and as
the number of email and SMS users increases, the number of spam messages also
increases. Spam is any kind of unwanted, unsolicited digital communication that
gets sent out in bulk; spam emails and SMSs cause major resource wastage by
unnecessarily flooding the network links. Although most spam mail originates
with advertisers looking to push their products, some is much more malicious in
intent, like phishing emails that aim to trick victims into giving up sensitive
information such as website logins or credit card details; this type of
cybercrime is known as phishing. To counter spam, much research and effort has
gone into building spam detectors that are able to filter messages and emails
as spam or ham. In this research we build a spam detector using the pre-trained
BERT model that classifies emails and messages by understanding their context,
and we trained our spam detector model using multiple corpora such as the SMS
collection corpus, the Enron corpus, the SpamAssassin corpus, the Ling-Spam
corpus and the SMS spam collection corpus; our spam detector's performance was
98.62%, 97.83%, 99.13% and 99.28%, respectively. Keywords: Spam Detector, BERT,
Machine learning, NLP, Transformer, Enron Corpus, SpamAssassin Corpus, SMS Spam
Detection Corpus, Ling-Spam Corpus.
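A minimal sketch of the classification setup (not the authors' training script; the checkpoint name and the example messages are assumptions) using the Hugging Face transformers API:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)          # 2-way head: ham vs. spam

batch = tok(["WIN a FREE prize now!!!", "See you at the meeting at 3pm."],
            padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([1, 0])                   # 1 = spam, 0 = ham
loss = model(**batch, labels=labels).loss       # fine-tune by minimizing this
print(float(loss))
```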
|
[
{
"version": "v1",
"created": "Mon, 6 Jun 2022 09:09:40 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2022 21:11:29 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Sahmoud",
"Thaer",
""
],
[
"Mikki",
"Dr. Mohammad",
""
]
] |
new_dataset
| 0.998063 |
2206.03179
|
Ignacio Aguilera-Martos
|
Ignacio Aguilera-Martos, \'Angel M. Garc\'ia-Vico, Juli\'an Luengo,
Sergio Damas, Francisco J. Melero, Jos\'e Javier Valle-Alonso, Francisco
Herrera
|
TSFEDL: A Python Library for Time Series Spatio-Temporal Feature
Extraction and Prediction using Deep Learning (with Appendices on Detailed
Network Architectures and Experimental Cases of Study)
|
26 pages, 33 figures
| null | null | null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The combination of convolutional and recurrent neural networks is a promising
framework that allows the extraction of high-quality spatio-temporal features
together with its temporal dependencies, which is key for time series
prediction problems such as forecasting, classification or anomaly detection,
amongst others. In this paper, the TSFEDL library is introduced. It compiles 20
state-of-the-art methods for both time series feature extraction and
prediction, employing convolutional and recurrent deep neural networks for its
use in several data mining tasks. The library is built upon a set of
Tensorflow+Keras and PyTorch modules under the AGPLv3 license. The performance
validation of the architectures included in this proposal confirms the
usefulness of this Python package.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 10:58:33 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2022 09:49:38 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Aguilera-Martos",
"Ignacio",
""
],
[
"García-Vico",
"Ángel M.",
""
],
[
"Luengo",
"Julián",
""
],
[
"Damas",
"Sergio",
""
],
[
"Melero",
"Francisco J.",
""
],
[
"Valle-Alonso",
"José Javier",
""
],
[
"Herrera",
"Francisco",
""
]
] |
new_dataset
| 0.998907 |
2206.03532
|
Kartik Singhal
|
Kartik Singhal, Kesha Hietala, Sarah Marshall, Robert Rand
|
Q# as a Quantum Algorithmic Language
|
To appear at Quantum Physics and Logic (QPL) 2022
| null | null | null |
cs.PL cs.ET cs.LO quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Q# is a standalone domain-specific programming language from Microsoft for
writing and running quantum programs. Like most industrial languages, it was
designed without a formal specification, which can naturally lead to ambiguity
in its interpretation. We aim to provide a formal language definition for Q#,
placing the language on a solid mathematical foundation and enabling further
evolution of its design and type system. This paper presents $\lambda_{Q\#}$,
an idealized version of Q# that illustrates how we may view Q# as a quantum
Algol (algorithmic language). We show the safety properties enforced by
$\lambda_{Q\#}$'s type system and present its equational semantics based on a
fully complete algebraic theory by Staton.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 18:42:50 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Singhal",
"Kartik",
""
],
[
"Hietala",
"Kesha",
""
],
[
"Marshall",
"Sarah",
""
],
[
"Rand",
"Robert",
""
]
] |
new_dataset
| 0.998039 |
2206.03545
|
Yang Shi
|
Yang Shi, Min Chi, Tiffany Barnes, Thomas Price
|
Code-DKT: A Code-based Knowledge Tracing Model for Programming Tasks
|
12 pages, 8 figures, Accepted in EDM 2022
| null | null | null |
cs.SE cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge tracing (KT) models are a popular approach for predicting students'
future performance at practice problems using their prior attempts. Though many
innovations have been made in KT, most models including the state-of-the-art
Deep KT (DKT) mainly leverage each student's response either as correct or
incorrect, ignoring its content. In this work, we propose Code-based Deep
Knowledge Tracing (Code-DKT), a model that uses an attention mechanism to
automatically extract and select domain-specific code features to extend DKT.
We compared the effectiveness of Code-DKT against Bayesian and Deep Knowledge
Tracing (BKT and DKT) on a dataset from a class of 50 students attempting to
solve 5 introductory programming assignments. Our results show that Code-DKT
consistently outperforms DKT by 3.07-4.00% AUC across the 5 assignments, a
comparable improvement to other state-of-the-art domain-general KT models over
DKT. Finally, we analyze problem-specific performance through a set of case
studies for one assignment to demonstrate when and how code features improve
Code-DKT's predictions.
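For readers unfamiliar with the baseline, a plain (code-feature-free) DKT sketch clarifies what the paper extends: an LSTM consumes one-hot (problem, correctness) pairs and predicts, per problem, the probability of a correct next attempt (all sizes are assumptions):

```python
import torch
import torch.nn as nn

n_problems, hidden = 5, 32
lstm = nn.LSTM(input_size=2 * n_problems, hidden_size=hidden, batch_first=True)
head = nn.Linear(hidden, n_problems)

# Encode 3 past attempts of one student: (problem id, correct?) one-hots.
x = torch.zeros(1, 3, 2 * n_problems)
for t, (p, correct) in enumerate([(0, 1), (2, 0), (2, 1)]):
    x[0, t, p + correct * n_problems] = 1.0

out, _ = lstm(x)
p_next = torch.sigmoid(head(out[:, -1]))   # P(correct next) for each problem
print(p_next)
```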
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 19:29:44 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Shi",
"Yang",
""
],
[
"Chi",
"Min",
""
],
[
"Barnes",
"Tiffany",
""
],
[
"Price",
"Thomas",
""
]
] |
new_dataset
| 0.979794 |
2206.03560
|
Md Taimur Ahad
|
Md. Taimur Ahad (Department of Computer Science Faculty of Engineering
and Technology Eastern University, Bangladesh)
|
Mobile phone enabled Supply chain management in the RMG sector: A
conceptual framework
|
8 pages
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Relatively little is known about mobile phone use in a Supply Chain
Management (SCM) context, especially in the Bangladeshi Ready-Made Garment
(RMG) industry. RMG is a very important industry for the Bangladeshi economy
but is criticized for long product supply times due to poor SCM. RMG requires
real-time information and enhanced dynamic control, achieved through
information sharing and connecting stakeholders in garment manufacturing.
However, a lack of IT support in the Bangladeshi RMG sector, the high price of
computers and the low level of adoption of the computer-based internet are
obstacles to providing sophisticated computer-aided SCM. Alternatively, the
explosive adoption of mobile phones and continuous improvement of this
technology is an opportunity to provide mobile-based SCM for the RMG sector.
This research presents a mobile phone-based SCM framework for the Bangladeshi
RMG sector. The proposed framework shows that mobile phone-based SCM can
positively impact communication, information exchange, information retrieval
and flow, coordination and management, which represent the main processes of
effective SCM. However, to capitalize on these benefits, it is also important
to discover the critical success factors and barriers to mobile SCM systems.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2022 20:25:33 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Ahad",
"Md. Taimur",
"",
"Department of Computer Science Faculty of Engineering\n and Technology Eastern University, Bangladesh"
]
] |
new_dataset
| 0.999585 |
2206.03678
|
Zhuoran Zheng
|
Zhuoran Zheng and Xiuyi Jia
|
UHD Image Deblurring via Multi-scale Cubic-Mixer
|
8 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Currently, transformer-based algorithms are making a splash in the domain of
image deblurring. Their achievement depends on the self-attention mechanism
with CNN stem to model long range dependencies between tokens. Unfortunately,
this ear-pleasing pipeline introduces high computational complexity and makes
it difficult to run an ultra-high-definition image on a single GPU in real
time. To trade-off accuracy and efficiency, the input degraded image is
computed cyclically over three dimensional ($C$, $W$, and $H$) signals without
a self-attention mechanism. We term this deep network as Multi-scale
Cubic-Mixer, which is acted on both the real and imaginary components after
fast Fourier transform to estimate the Fourier coefficients and thus obtain a
deblurred image. Furthermore, we combine the multi-scale cubic-mixer with a
slicing strategy to generate high-quality results at a much lower computational
cost. Experimental results demonstrate that the proposed algorithm performs
favorably against the state-of-the-art deblurring approaches on the several
benchmarks and a new ultra-high-definition dataset in terms of accuracy and
speed.
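A single-scale sketch of the Fourier-domain idea (a simplification with element-wise learnable weights, not the paper's multi-scale cubic mixing across $C$, $W$, and $H$):

```python
import torch

x = torch.randn(1, 3, 64, 64)                  # degraded input image
spec = torch.fft.rfft2(x)                      # complex spectrum
w_re = torch.nn.Parameter(torch.ones_like(spec.real))   # learnable maps over
w_im = torch.nn.Parameter(torch.ones_like(spec.imag))   # real/imag parts
mixed = torch.complex(spec.real * w_re, spec.imag * w_im)
y = torch.fft.irfft2(mixed, s=x.shape[-2:])    # estimated sharp image
print(y.shape)                                 # torch.Size([1, 3, 64, 64])
```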
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 05:04:43 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Zheng",
"Zhuoran",
""
],
[
"Jia",
"Xiuyi",
""
]
] |
new_dataset
| 0.98739 |
2206.03697
|
Kaihao Zhang
|
Puyang Zhang, Kaihao Zhang, Wenhan Luo, Changsheng Li, Guoren Wang
|
Blind Face Restoration: Benchmark Datasets and a Baseline Model
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blind Face Restoration (BFR) aims to construct a high-quality (HQ) face image
from its corresponding low-quality (LQ) input. Recently, many BFR methods have
been proposed and they have achieved remarkable success. However, these methods
are trained or evaluated on privately synthesized datasets, which makes it
infeasible for the subsequent approaches to fairly compare with them. To
address this problem, we first synthesize two blind face restoration benchmark
datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
State-of-the-art methods are benchmarked on them under five settings including
blur, noise, low resolution, JPEG compression artifacts, and the combination of
them (full degradation). To make the comparison more comprehensive, five
widely-used quantitative metrics and two task-driven metrics including Average
Face Landmark Distance (AFLD) and Average Face ID Cosine Similarity (AFICS) are
applied. Furthermore, we develop an effective baseline model called Swin
Transformer U-Net (STUNet). The STUNet with U-net architecture applies an
attention mechanism and a shifted windowing scheme to capture long-range pixel
interactions and focus more on significant features while still being trained
efficiently. Experimental results show that the proposed baseline method
performs favourably against the SOTA methods on various BFR tasks.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 06:34:24 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Zhang",
"Puyang",
""
],
[
"Zhang",
"Kaihao",
""
],
[
"Luo",
"Wenhan",
""
],
[
"Li",
"Changsheng",
""
],
[
"Wang",
"Guoren",
""
]
] |
new_dataset
| 0.998064 |
2206.03702
|
Zhiyong Wang
|
Zhiyong Wang, Ge Zhang, Nineli Lashkarashvili
|
1Cademy at Semeval-2022 Task 1: Investigating the Effectiveness of
Multilingual, Multitask, and Language-Agnostic Tricks for the Reverse
Dictionary Task
|
9 pages, 1 figure, SemEval 2022 Task 1
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper describes our system for the SemEval2022 task of matching
dictionary glosses to word embeddings. We focus on the Reverse Dictionary Track
of the competition, which maps multilingual glosses to reconstructed vector
representations. More specifically, our models convert input sentences to
three types of embeddings: SGNS, Char, and Electra. We propose several
experiments applying neural network cells, general multilingual and
multitask structures, and language-agnostic tricks to the task. We also provide
comparisons over different types of word embeddings and ablation studies to
suggest helpful strategies. Our initial transformer-based model achieves
relatively low performance. However, trials of different retokenization
methodologies indicate improved performance. Our proposed ELMo-based monolingual
model achieves the highest outcome, and its multitask and multilingual
varieties show competitive results as well.
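
A minimal sketch of the gloss-to-embedding regression setup the Reverse
Dictionary Track implies is shown below. The LSTM cell, dimensions, mean
pooling, and MSE objective are illustrative assumptions, not the team's
actual architecture:

```python
import torch
import torch.nn as nn

class GlossEncoder(nn.Module):
    """Encode a dictionary gloss and regress it onto a pretrained target
    word embedding (e.g. a 300-d SGNS vector)."""
    def __init__(self, vocab_size=30000, hidden=256, target_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, target_dim)

    def forward(self, token_ids):
        states, _ = self.rnn(self.embed(token_ids))
        return self.proj(states.mean(dim=1))  # pool over gloss tokens

model = GlossEncoder()
glosses = torch.randint(0, 30000, (8, 20))  # batch of tokenized glosses
targets = torch.randn(8, 300)               # gold SGNS embeddings
loss = nn.functional.mse_loss(model(glosses), targets)
loss.backward()
```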
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 06:39:04 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Wang",
"Zhiyong",
""
],
[
"Zhang",
"Ge",
""
],
[
"Lashkarashvili",
"Nineli",
""
]
] |
new_dataset
| 0.980102 |
2206.03735
|
Patrick Sch\"afer
|
Patrick Sch\"afer, Ulf Leser
|
Motiflets -- Fast and Accurate Detection of Motifs in Time Series
| null | null | null | null |
cs.LG cs.AI cs.DB cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A motif is, intuitively, a short time series that repeats itself approximately
unchanged within a larger time series. Such motifs often represent concealed
structures, such as heartbeats in an ECG recording or sleep spindles in EEG
sleep data. Motif discovery (MD) is the task of finding such motifs in a given
input series. As there are varying definitions of what exactly a motif is, a
number of algorithms exist. As central parameters they all take the length l of
the motif and the maximal distance r between the motif's occurrences. In
practice, however, suitable values for r are very hard to determine upfront,
and the found motifs show a high variability. Setting the wrong input value
will result in a motif that is not distinguishable from noise. Accordingly,
finding an interesting motif with these methods requires extensive trial and
error. We present a different approach to the MD problem. We define
k-Motiflets as the set of exactly k occurrences of a motif of length l whose
maximum pairwise distance is minimal. This turns the MD problem upside down:
our central parameter is not the distance threshold r, but the desired size k
of a motif set, which we show is considerably more intuitive and easier to set.
Based on this definition, we present exact and approximate algorithms for
finding k-Motiflets and analyze their complexity. To further ease the use of
our method, we describe extensions to automatically determine suitable
values for its input parameters. Thus, for the first time, extracting
meaningful motif sets without any a-priori knowledge becomes feasible. By
evaluating real-world use cases and comparing to four state-of-the-art MD
algorithms, we show that our proposed algorithm (a) is quantitatively
superior, finding larger motif sets at higher similarity, (b) is qualitatively
better, leading to clearer and easier-to-interpret motifs, and (c) has the
lowest runtime.
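
To make the k-Motiflet definition concrete, here is a naive brute-force
sketch: it scores each candidate subsequence together with its k-1 nearest
neighbors by the set's maximum pairwise distance and keeps the best set.
This is an illustration of the definition only, not the paper's exact or
approximate algorithms (overlap handling and pruning are omitted):

```python
import numpy as np
from scipy.spatial.distance import cdist

def k_motiflet_naive(ts, l, k):
    """Return the k-occurrence set of length-l subsequences with the
    smallest maximum pairwise (z-normalized Euclidean) distance found
    by a greedy nearest-neighbor heuristic."""
    n = len(ts) - l + 1
    subs = np.array([ts[i:i + l] for i in range(n)])
    # z-normalize each subsequence
    subs = (subs - subs.mean(axis=1, keepdims=True)) / \
           (subs.std(axis=1, keepdims=True) + 1e-8)
    dists = cdist(subs, subs)                      # all pairwise distances
    best_extent, best_set = np.inf, None
    for i in range(n):
        cand = np.argsort(dists[i])[:k]            # i plus k-1 neighbors
        extent = dists[np.ix_(cand, cand)].max()   # max pairwise distance
        if extent < best_extent:
            best_extent, best_set = extent, cand
    return best_set, best_extent

ts = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
occurrences, extent = k_motiflet_naive(ts, l=50, k=4)
```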
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 08:22:28 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Schäfer",
"Patrick",
""
],
[
"Leser",
"Ulf",
""
]
] |
new_dataset
| 0.978971 |
2206.03746
|
Quan Quan
|
Quan Quan
|
Reliable Flight Control: Gravity-Compensation-First Principle
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safety is always the priority in aviation. However, current state-of-the-art
passive fault-tolerant control is too conservative to use, while current
state-of-the-art active fault-tolerant control requires time for fault
detection, diagnosis, and control switching, which may be too late to recover
an impaired aircraft. Most designs depend on failures determined a priori and
cannot deal with faults that make the original system's state uncontrollable.
Experienced human pilots, however, try to save a severely impaired aircraft
as far as they can. Motivated by this, this paper develops a principle that
tries to explain the human pilot behavior behind such recoveries, coined the
gravity-compensation-first principle. This further supports reliable flight
control for aircraft such as quadcopters and tail-sitter unmanned aerial
vehicles.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 08:43:05 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Quan",
"Quan",
""
]
] |
new_dataset
| 0.998952 |
2206.03811
|
Egor Zuev
|
Egor Zuev
|
Authenticated Byzantine Gossip Protocol
| null | null | null | null |
cs.DC
|
http://creativecommons.org/publicdomain/zero/1.0/
|
ABGP refers to the Authenticated Byzantine Gossip Protocol. ABGP is a
partially synchronous, weakly consistent, BFT-based consensus algorithm. The
algorithm implements the gossip protocol, but with BFT features inside (such
as multi-signature record approval). The algorithm has been developed as an
alternative to classic private-ledger solutions such as Hyperledger.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 11:15:42 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Zuev",
"Egor",
""
]
] |
new_dataset
| 0.965352 |
2206.03870
|
Andrew A. Krizhanovsky
|
Tatyana Boyko, Nina Zaitseva, Natalia Krizhanovskaya, Andrew
Krizhanovsky, Irina Novak, Nataliya Pellinen and Aleksandra Rodionova
|
The Open corpus of the Veps and Karelian languages: overview and
applications
|
9 pages, 9 figures, published in the journal
|
KnE Social Sciences. 7 (3). 2022. P. 29-40
|
10.18502/kss.v7i3.10419
| null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A growing priority in the study of Baltic-Finnic languages of the Republic of
Karelia has been the methods and tools of corpus linguistics. Since 2016,
linguists, mathematicians, and programmers at the Karelian Research Centre have
been working with the Open Corpus of the Veps and Karelian Languages (VepKar),
which is an extension of the Veps Corpus created in 2009. The VepKar corpus
comprises texts in Karelian and Veps, multifunctional dictionaries linked to
them, and software with an advanced search system using various criteria of
the texts (language, genre, etc.) and numerous linguistic categories (lexical
and grammatical search in texts was implemented thanks to the generator of word
forms that we created earlier). A corpus of 3000 texts was compiled, texts were
uploaded and marked up, the system for classifying texts into languages,
dialects, types and genres was introduced, and the word-form generator was
created. Future plans include developing a speech module for working with audio
recordings and a syntactic tagging module using morphological analysis outputs.
Owing to continuous functional advancements in the corpus manager and ongoing
VepKar enrichment with new material and text markup, users can handle a wide
range of scientific and applied tasks. In creating the universal national
VepKar corpus, its developers and managers strive to preserve and exhibit as
fully as possible the state of the Veps and Karelian languages in the 19th-21st
centuries.
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 13:05:50 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Boyko",
"Tatyana",
""
],
[
"Zaitseva",
"Nina",
""
],
[
"Krizhanovskaya",
"Natalia",
""
],
[
"Krizhanovsky",
"Andrew",
""
],
[
"Novak",
"Irina",
""
],
[
"Pellinen",
"Nataliya",
""
],
[
"Rodionova",
"Aleksandra",
""
]
] |
new_dataset
| 0.991487 |
2206.03880
|
Tim Barfoot
|
Gabriele M T D'Eleuterio and Timothy D Barfoot
|
On the Eigenstructure of Rotations and Poses: Commonalities and
Peculiarities
|
18 pages, 2 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rotations and poses are ubiquitous throughout many fields of science and
engineering such as robotics, aerospace, computer vision and graphics. In this
paper, we provide a complete characterization of rotations and poses in terms
of the eigenstructure of their matrix Lie group representations, SO(3), SE(3)
and Ad(SE(3)). An eigendecomposition of the pose representations reveals that
they can be cast into a form very similar to that of rotations although the
structure of the former can vary depending on the relative nature of the
translation and rotation involved. Understanding the eigenstructure of these
important quantities has merit in and of itself, but it is also essential to
appreciating such practical results as the minimal polynomial for rotations and
poses and the calculation of Jacobians; moreover, we can speak of a
principal-axis pose in much the same manner that we can of a principal-axis
rotation.
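
For the SO(3) case, the eigenstructure referred to above is classical and can
be stated directly (the SE(3) and Ad(SE(3)) analogues are the paper's subject;
the following is a standard fact, not a result specific to it):

```latex
% A rotation R in SO(3) by angle \theta about a unit axis \mathbf{a}
% fixes the axis and has the remaining eigenvalues on the unit circle:
R\,\mathbf{a} = \mathbf{a}, \qquad
\operatorname{eig}(R) = \left\{\, 1,\; e^{i\theta},\; e^{-i\theta} \,\right\}.
```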
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 13:25:21 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"D'Eleuterio",
"Gabriele M T",
""
],
[
"Barfoot",
"Timothy D",
""
]
] |
new_dataset
| 0.95976 |
2206.03943
|
Mohsen Vadidar
|
Mohsen Vadidar, Ali Kariminezhad, Christian Mayr, Laurent Kloeker and
Lutz Eckstein
|
Robust Environment Perception for Automated Driving: A Unified Learning
Pipeline for Visual-Infrared Object Detection
| null | null | null | null |
cs.CV cs.IT math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The RGB complementary metal-oxide-semiconductor (CMOS) sensor works within the
visible light spectrum and is therefore very sensitive to environmental light
conditions. On the contrary, a long-wave infrared (LWIR) sensor operating in
the 8-14 micrometer spectral band functions independently of visible light.
In this paper, we exploit both visual and thermal perception units for robust
object detection purposes. After delicate synchronization and (cross-) labeling
of the FLIR [1] dataset, this multi-modal perception data passes through a
convolutional neural network (CNN) to detect three critical objects on the
road, namely pedestrians, bicycles, and cars. After evaluation of RGB and
infrared (thermal and infrared are often used interchangeably) sensors
separately, various network structures are compared to fuse the data at the
feature level effectively. Our RGB-thermal (RGBT) fusion network, which takes
advantage of a novel entropy-block attention module (EBAM), outperforms the
state-of-the-art network [2] by 10% with 82.9% mAP.
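
The abstract does not detail the EBAM internals, so the sketch below uses
plain channel attention over the concatenated RGB and thermal features as a
stand-in assumption; only the general feature-level fusion pattern is taken
from the text:

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Feature-level RGB-thermal fusion with generic channel attention
    standing in for the paper's entropy-block attention module (EBAM)."""
    def __init__(self, channels=256):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat, thermal_feat):
        x = torch.cat([rgb_feat, thermal_feat], dim=1)  # stack channels
        return self.reduce(x * self.attn(x))            # re-weight, then mix

fusion = FeatureFusion(256)
fused = fusion(torch.randn(1, 256, 40, 40), torch.randn(1, 256, 40, 40))
```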
|
[
{
"version": "v1",
"created": "Wed, 8 Jun 2022 15:02:58 GMT"
}
] | 2022-06-09T00:00:00 |
[
[
"Vadidar",
"Mohsen",
""
],
[
"Kariminezhad",
"Ali",
""
],
[
"Mayr",
"Christian",
""
],
[
"Kloeker",
"Laurent",
""
],
[
"Eckstein",
"Lutz",
""
]
] |
new_dataset
| 0.998862 |
0808.1417
|
Shamgar Gurevich
|
Shamgar Gurevich, Ronny Hadani, Nir Sochen
|
The finite harmonic oscillator and its associated sequences
|
Published in the Proceedings of the National Academy of Sciences of
the United States of America (Communicated by Joseph Bernstein, Tel Aviv
University, Tel Aviv, Israel)
|
PNAS, July 22, 2008 vol. 105 no. 29 9869-9873
http://www.pnas.org/content/105/29/9869.abstract
|
10.1073/pnas.0801656105
| null |
cs.IT cs.CR cs.DM math-ph math.GR math.IT math.MP math.NT math.PR math.QA math.RT math.SG quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A system of functions (signals) on the finite line, called the oscillator
system, is described and studied. Applications of this system for discrete
radar and digital communication theory are explained.
Keywords: Weil representation, commutative subgroups, eigenfunctions, random
behavior, deterministic construction
|
[
{
"version": "v1",
"created": "Sun, 10 Aug 2008 17:49:20 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Dec 2008 08:10:33 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Gurevich",
"Shamgar",
""
],
[
"Hadani",
"Ronny",
""
],
[
"Sochen",
"Nir",
""
]
] |
new_dataset
| 0.990999 |
2005.07341
|
Jianxiong Guo
|
Jianxiong Guo, Xingjian Ding, Weili Wu
|
An Architecture for Distributed Energies Trading in Byzantine-Based
Blockchain
| null |
IEEE Transactions on Green Communications and Networking, 2022
|
10.1109/TGCN.2022.3142438
| null |
cs.NI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of smart cities, not only are all corners of a city
connected to each other, but cities are also connected to each other. Together
they form a large distributed network, which can facilitate the integration of
distributed energy stations (DESs) and corresponding smart aggregators.
Nevertheless, because of the security and privacy concerns arising from
trustless energy trading, making such trading go smoothly is a tricky
challenge. In this paper, we propose a blockchain-based multiple energies
trading (B-MET) system for secure and efficient energy trading by executing a
smart contract we design. Because energy trading requires the blockchain in
the B-MET system to have high throughput and low latency, we design a new
byzantine-based consensus mechanism (BCM) based on nodes' credit to improve
efficiency for the consortium blockchain underlying the B-MET system. Then, we
take combined heat and power (CHP) systems as a typical example of distributed
energy provision. We quantify their utilities and model the interactions
between aggregators and DESs in a smart city by a novel multi-leader
multi-follower Stackelberg game. The game is analyzed and solved by finding
the Nash equilibrium between aggregators, which reflects the competition
between aggregators to purchase energy from DESs. Finally, we conduct
extensive numerical simulations to evaluate and verify the proposed model and
algorithms, which demonstrate their correctness and efficiency.
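
The multi-leader multi-follower game above is only described at a high level.
As a heavily simplified toy (single leader, single follower, a made-up
quadratic cost, and none of the paper's utility terms), the Stackelberg
backward-induction logic looks like this:

```python
import numpy as np

def follower_best_response(price, b=1.0):
    """Toy DES: choose supply q maximizing q*price - b*q**2/2 => q* = price/b."""
    return max(price, 0.0) / b

def leader_profit(price, resale=8.0):
    """Aggregator buys at `price` and resells at a fixed rate (toy numbers)."""
    q = follower_best_response(price)
    return (resale - price) * q

# Leader optimizes over a price grid; the closed-form optimum is resale/2 = 4.
prices = np.linspace(0.0, 8.0, 801)
best_price = prices[np.argmax([leader_profit(p) for p in prices])]
print(f"Stackelberg leader price ~ {best_price:.2f}")
```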
|
[
{
"version": "v1",
"created": "Fri, 15 May 2020 03:42:29 GMT"
}
] | 2022-06-08T00:00:00 |
[
[
"Guo",
"Jianxiong",
""
],
[
"Ding",
"Xingjian",
""
],
[
"Wu",
"Weili",
""
]
] |
new_dataset
| 0.994486 |