id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2212.00442
|
Junhyung Lee
|
Junho Koh, Junhyung Lee, Youngwoo Lee, Jaekyum Kim, Jun Won Choi
|
MGTANet: Encoding Sequential LiDAR Points Using Long Short-Term
Motion-Guided Temporal Attention for 3D Object Detection
|
Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI'23)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most scanning LiDAR sensors generate a sequence of point clouds in real-time.
While conventional 3D object detectors use a set of unordered LiDAR points
acquired over a fixed time interval, recent studies have revealed that
substantial performance improvement can be achieved by exploiting the
spatio-temporal context present in a sequence of LiDAR point sets. In this
paper, we propose a novel 3D object detection architecture, which can encode
LiDAR point cloud sequences acquired by multiple successive scans. The encoding
process of the point cloud sequence is performed on two different time scales.
We first design a short-term motion-aware voxel encoding that captures the
short-term temporal changes of point clouds driven by the motion of objects in
each voxel. We also propose long-term motion-guided bird's eye view (BEV)
feature enhancement that adaptively aligns and aggregates the BEV feature maps
obtained by the short-term voxel encoding by utilizing the dynamic motion
context inferred from the sequence of the feature maps. The experiments
conducted on the public nuScenes benchmark demonstrate that the proposed 3D
object detector offers significant improvements in performance compared to the
baseline methods and that it sets a state-of-the-art performance for certain 3D
object detection categories. Code is available at
https://github.com/HYjhkoh/MGTANet.git
|
[
{
"version": "v1",
"created": "Thu, 1 Dec 2022 11:24:47 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 07:22:46 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Koh",
"Junho",
""
],
[
"Lee",
"Junhyung",
""
],
[
"Lee",
"Youngwoo",
""
],
[
"Kim",
"Jaekyum",
""
],
[
"Choi",
"Jun Won",
""
]
] |
new_dataset
| 0.999807 |
2212.06822
|
Vinay Sanjay Jogani Mr
|
Vinay Jogani, Joy Purohit, Ishaan Shivhare, Samina Attari and Shraddha
Surtkar
|
Adversarial Attacks and Defences for Skin Cancer Classification
|
6 pages, 7 figures, 2 tables, 2nd International Conference for
Advancement in Technology (ICONAT 2023), Goa, India
| null | null |
Paper ID / Submission ID : 185
|
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In recent years, there has been concurrent and significant improvement both in
the medical images used to facilitate diagnosis and in the performance of
machine learning techniques for tasks such as classification, detection, and
segmentation. As a result, a rapid increase in the usage of such systems can be
observed in the healthcare industry, for instance in the form of medical image
classification systems, where these models have achieved diagnostic parity with
human physicians. One such application where this can be observed is in
computer vision tasks such as the classification of skin lesions in
dermatoscopic images. However, as stakeholders in the healthcare industry, such
as insurance companies, continue to invest extensively in machine learning
infrastructure, it becomes increasingly important to understand the
vulnerabilities in such systems. Due to the highly critical nature of the tasks
being carried out by these machine learning models, it is necessary to analyze
techniques that could be used to take advantage of these vulnerabilities and
methods to defend against them. This paper explores common adversarial attack
techniques. The Fast Gradient Sign Method and Projected Gradient Descent are
used against a Convolutional Neural Network trained to classify dermatoscopic
images of skin lesions. Following that, it also discusses one of the most
popular adversarial defense techniques, adversarial training. The performance
of the model that has been trained on adversarial examples is then tested
against the previously mentioned attacks, and recommendations to improve neural
network robustness are thus provided based on the results of the experiment.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 18:58:21 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Jogani",
"Vinay",
""
],
[
"Purohit",
"Joy",
""
],
[
"Shivhare",
"Ishaan",
""
],
[
"Attari",
"Samina",
""
],
[
"Surtkar",
"Shraddha",
""
]
] |
new_dataset
| 0.999526 |
2212.07072
|
Hee Suk Yoon
|
Hee Suk Yoon, Eunseop Yoon, John Harvill, Sunjae Yoon, Mark
Hasegawa-Johnson, Chang D. Yoo
|
SMSMix: Sense-Maintained Sentence Mixup for Word Sense Disambiguation
|
EMNLP2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Word Sense Disambiguation (WSD) is an NLP task aimed at determining the
correct sense of a word in a sentence from discrete sense choices. Although
current systems have attained unprecedented performances for such tasks, the
nonuniform distribution of word senses during training generally results in
systems performing poorly on rare senses. To this end, we consider data
augmentation to increase the frequency of these least frequent senses (LFS) to
reduce the distributional bias of senses during training. We propose
Sense-Maintained Sentence Mixup (SMSMix), a novel word-level mixup method that
maintains the sense of a target word. SMSMix smoothly blends two sentences
using mask prediction while preserving the relevant span determined by saliency
scores to maintain a specific word's sense. To the best of our knowledge, this
is the first attempt to apply mixup in NLP while preserving the meaning of a
specific word. With extensive experiments, we validate that our augmentation
method can effectively give more information about rare senses during training
while maintaining the target sense label.
|
[
{
"version": "v1",
"created": "Wed, 14 Dec 2022 07:48:42 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 07:36:49 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Yoon",
"Hee Suk",
""
],
[
"Yoon",
"Eunseop",
""
],
[
"Harvill",
"John",
""
],
[
"Yoon",
"Sunjae",
""
],
[
"Hasegawa-Johnson",
"Mark",
""
],
[
"Yoo",
"Chang D.",
""
]
] |
new_dataset
| 0.990253 |
2212.09134
|
Konstantin Taranov
|
Konstantin Taranov, Fabian Fischer, Torsten Hoefler
|
Efficient RDMA Communication Protocols
| null | null | null | null |
cs.NI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developers of networked systems often work with low-level RDMA libraries to
tailor network modules to take full advantage of offload capabilities offered
by RDMA-capable network controllers. Because of the huge design space of
networked data access protocols and variability in capabilities of RDMA
infrastructure, developers tend to reinvent and reimplement common data
exchange protocols, wasting months of development yet missing various
performance and system capabilities. In this work, we summarise and categorize
RDMA data exchange protocols and elaborate on what features they can offer to
networked systems and what implications they have on their memory and network
management.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 17:10:57 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2022 19:56:12 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Taranov",
"Konstantin",
""
],
[
"Fischer",
"Fabian",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
new_dataset
| 0.991357 |
2212.10432
|
Zhen Du
|
Zhen Du, Jiajia Li, Yinshan Wang, Xueqi Li, Guangming Tan, Ninghui Sun
|
AlphaSparse: Generating High Performance SpMV Codes Directly from Sparse
Matrices
| null | null | null | null |
cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sparse Matrix-Vector multiplication (SpMV) is an essential computational
kernel in many application scenarios. Tens of sparse matrix formats and
implementations have been proposed to compress the memory storage and speed up
SpMV performance. We develop AlphaSparse, a superset of all existing works that
goes beyond the scope of human-designed format(s) and implementation(s).
AlphaSparse automatically \emph{creates novel machine-designed formats and SpMV
kernel implementations} entirely from the knowledge of input sparsity patterns
and hardware architectures. Based on our proposed Operator Graph that expresses
the path of SpMV format and kernel design, AlphaSparse consists of three main
components: Designer, Format \& Kernel Generator, and Search Engine. It takes
an arbitrary sparse matrix as input and outputs a performant
machine-designed format and SpMV implementation. By extensively evaluating 843
matrices from SuiteSparse Matrix Collection, AlphaSparse achieves significant
performance improvement by 3.2$\times$ on average compared to five
state-of-the-art artificial formats and 1.5$\times$ on average (up to
2.7$\times$) over the up-to-date implementation of traditional auto-tuning
philosophy.
|
[
{
"version": "v1",
"created": "Mon, 7 Nov 2022 14:30:24 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2022 06:58:23 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Du",
"Zhen",
""
],
[
"Li",
"Jiajia",
""
],
[
"Wang",
"Yinshan",
""
],
[
"Li",
"Xueqi",
""
],
[
"Tan",
"Guangming",
""
],
[
"Sun",
"Ninghui",
""
]
] |
new_dataset
| 0.999006 |
2212.10647
|
Felipe Gomez-Cuba
|
Felipe Gomez-Cuba
|
The SIMO Block Rayleigh Fading Channel Capacity Scaling with Number of
Antennas, Bandwidth and Coherence Length
|
6 figures. This is the author's self-archived pre-print version of a
publication accepted in IEEE Journal on Selected Areas in Information Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the capacity scaling of non-coherent Single-Input
Multiple-Output (SIMO) independent and identically distributed (i.i.d.)
Rayleigh block fading channels versus bandwidth ($B$), number of receive
antennas ($N$) and coherence block length ($L$). In non-coherent channels
(without Channel State Information --CSI) capacity scales as
$\Theta\left(\min(B,\sqrt{NL},N)\right)$. This is achievable using
Pilot-Assisted signaling. Energy Modulation signaling rate scales as
$\Theta\left(\min(B,\sqrt{N})\right)$. If $L$ is fixed while $B$ and $N$ grow,
the two expressions grow equally and Energy Modulation achieves the capacity
scaling. However, Energy Modulation rate does not scale as the capacity with
the variable $L$. The coherent channel capacity with a priori CSI, in turn,
scales as $\Theta\left(\min(B,N)\right)$. The coherent channel capacity scaling
can be fully achieved in non-coherent channels when $L\geq\Theta(N)$. In
summary, the channel coherence block length plays a pivotal role in modulation
selection and the capacity gap between coherent and non-coherent channels.
Pilot-Assisted signaling outperforms Energy Modulation's rate scaling versus
coherence block length. Only in high-mobility scenarios, where $L$ is much
smaller than the number of antennas ($L\ll\Theta(\sqrt{N})$), is Energy
Modulation effective in non-coherent channels.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 20:50:52 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Gomez-Cuba",
"Felipe",
""
]
] |
new_dataset
| 0.998139 |
2212.10711
|
Alex Tamkin
|
Alex Tamkin, Kunal Handa, Avash Shrestha, Noah Goodman
|
Task Ambiguity in Humans and Language Models
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Language models have recently achieved strong performance across a wide range
of NLP benchmarks. However, unlike benchmarks, real world tasks are often
poorly specified, and agents must deduce the user's intended behavior from a
combination of context, instructions, and examples. We investigate how both
humans and models behave in the face of such task ambiguity by proposing
AmbiBench, a new benchmark of six ambiguously-specified classification tasks.
We evaluate humans and models on AmbiBench by seeing how well they identify the
intended task using 1) instructions with varying degrees of ambiguity, and 2)
different numbers of labeled examples. We find that the combination of model
scaling (to 175B parameters) and training with human feedback data enables
models to approach or exceed the accuracy of human participants across tasks,
but that either one alone is not sufficient. In addition, we show how to
dramatically improve the accuracy of language models trained without
large-scale human feedback training by finetuning on a small number of
ambiguous in-context examples, providing a promising direction for teaching
models to generalize well in the face of ambiguity.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 18:35:33 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Tamkin",
"Alex",
""
],
[
"Handa",
"Kunal",
""
],
[
"Shrestha",
"Avash",
""
],
[
"Goodman",
"Noah",
""
]
] |
new_dataset
| 0.989269 |
2212.10719
|
Jens Egholm Pedersen
|
Jens Egholm Pedersen and J\"org Conradt
|
AEStream: Accelerated event-based processing with coroutines
|
7 pages, 6 figures. Submitted to Neuro Inspired Computational Element
(NICE) 2023
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Neuromorphic sensors imitate the sparse and event-based communication seen in
biological sensory organs and brains. Today's sensors can emit many millions of
asynchronous events per second, which is challenging to process on conventional
computers. To avoid bottleneck effects, there is a need to apply and improve
concurrent and parallel processing of events.
We present AEStream: a library to efficiently stream asynchronous events from
inputs to outputs on conventional computers. AEStream leverages cooperative
multitasking primitives known as coroutines to concurrently process individual
events, which dramatically simplifies the integration with event-based
peripherals, such as event-based cameras and (neuromorphic) asynchronous
hardware. We explore the effects of coroutines in concurrent settings by
benchmarking them against conventional threading mechanisms, and find that
AEStream provides at least twice the throughput. We then apply AEStream in a
real-time edge detection task on a GPU and demonstrate 1.3 times faster
processing with 5 times fewer memory operations.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 02:15:34 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Pedersen",
"Jens Egholm",
""
],
[
"Conradt",
"Jörg",
""
]
] |
new_dataset
| 0.978465 |
2212.10721
|
Tuan Thanh Nguyen
|
Tuan Thanh Nguyen, Kui Cai, and Paul H. Siegel
|
Every Bit Counts: A New Version of Non-binary VT Codes with More
Efficient Encoder
| null | null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present a new version of non-binary VT codes that are
capable of correcting a single deletion or single insertion. Moreover, we
provide the first known linear-time algorithms that encode user messages into
these codes of length $n$ over the $q$-ary alphabet for $q\ge 2$ with at most
$\lceil\log_q n\rceil + 1$ redundant symbols, while the optimal redundancy required
is at least $\log_q n + \log_q (q - 1)$ symbols. Our designed encoder reduces
the redundancy of the best-known encoder of Tenengolts (1984) by at least
$2+\log_q(3)$ redundant symbols, or equivalently $2\log_2 q+3$ redundant bits.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 02:23:29 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Nguyen",
"Tuan Thanh",
""
],
[
"Cai",
"Kui",
""
],
[
"Siegel",
"Paul H.",
""
]
] |
new_dataset
| 0.998707 |
2212.10740
|
Hongxiao Li
|
Hongxiao Li, Wanling Gao, Lei Wang, and Jianfeng Zhan
|
ToL: A Tensor of List-Based Unified Computation Model
| null | null | null | null |
cs.PL cs.CC cs.CL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Previous computation models either have equivalent abilities in representing
all computations but fail to provide primitive operators for programming
complex algorithms, or lack the generalized expression ability to represent
newly-added computations. This article presents a unified computation model
with generalized expression ability and a concise set of primitive operators
for programming high-level algorithms. We propose a unified data abstraction --
Tensor of List, and offer a unified computation model based on Tensor of List,
which we call the ToL model (in short, ToL). ToL introduces five atomic
computations that can represent any elementary computation by finite
composition, ensured with strict formal proof. Based on ToL, we design a
pure-functional language -- ToLang. ToLang provides a concise set of primitive
operators that can be used to program complex big data and AI algorithms. Our
evaluations show ToL has generalized expression ability and a built-in
performance indicator, born with a strictly defined computation metric --
elementary operation count (EOPs), consistent with FLOPs within a small error
range.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 03:22:24 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Li",
"Hongxiao",
""
],
[
"Gao",
"Wanling",
""
],
[
"Wang",
"Lei",
""
],
[
"Zhan",
"Jianfeng",
""
]
] |
new_dataset
| 0.999199 |
2212.10762
|
Hang Li
|
Bevan Koopman and Ahmed Mourad and Hang Li and Anton van der Vegt and
Shengyao Zhuang and Simon Gibson and Yash Dang and David Lawrence and Guido
Zuccon
|
AgAsk: An Agent to Help Answer Farmer's Questions From Scientific
Documents
|
17 pages, submitted to IJDL
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decisions in agriculture are increasingly data-driven; however, valuable
agricultural knowledge is often locked away in free-text reports, manuals and
journal articles. Specialised search systems are needed that can mine
agricultural information to provide relevant answers to users' questions. This
paper presents AgAsk -- an agent able to answer natural language agriculture
questions by mining scientific documents.
We carefully survey and analyse farmers' information needs. On the basis of
these needs we release an information retrieval test collection comprising real
questions, a large collection of scientific documents split into passages, and
ground truth relevance assessments indicating which passages are relevant to
each question.
We implement and evaluate a number of information retrieval models to answer
farmers' questions, including two state-of-the-art neural ranking models. We
show that neural rankers are highly effective at matching passages to questions
in this context.
Finally, we propose a deployment architecture for AgAsk that includes a
client based on the Telegram messaging platform and retrieval model deployed on
commodity hardware.
The test collection we provide is intended to stimulate more research in
methods to match natural language to answers in scientific documents. While the
retrieval models were evaluated in the agriculture domain, they are
generalisable and of interest to others working on similar problems.
The test collection is available at:
\url{https://github.com/ielab/agvaluate}.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 04:49:21 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Koopman",
"Bevan",
""
],
[
"Mourad",
"Ahmed",
""
],
[
"Li",
"Hang",
""
],
[
"van der Vegt",
"Anton",
""
],
[
"Zhuang",
"Shengyao",
""
],
[
"Gibson",
"Simon",
""
],
[
"Dang",
"Yash",
""
],
[
"Lawrence",
"David",
""
],
[
"Zuccon",
"Guido",
""
]
] |
new_dataset
| 0.999501 |
2212.10770
|
Luke Vilnis
|
Luke Vilnis, Zach Fisher, Bhargav Kanagal, Patrick Murray, Sumit
Sanghai
|
ImPaKT: A Dataset for Open-Schema Knowledge Base Construction
|
14 pages. Preprint
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models have ushered in a golden age of semantic parsing. The
seq2seq paradigm allows for open-schema and abstractive attribute and relation
extraction given only small amounts of finetuning data. Language model
pretraining has simultaneously enabled great strides in natural language
inference, reasoning about entailment and implication in free text. These
advances motivate us to construct ImPaKT, a dataset for open-schema information
extraction, consisting of around 2500 text snippets from the C4 corpus, in the
shopping domain (product buying guides), professionally annotated with
extracted attributes, types, attribute summaries (attribute schema discovery
from idiosyncratic text), many-to-one relations between compound and atomic
attributes, and implication relations. We release this data in the hope that it
will be useful for fine-tuning semantic parsers for information extraction and
knowledge base construction across a variety of domains. We evaluate the power
of this approach by fine-tuning the open source UL2 language model on a subset
of the dataset, extracting a set of implication relations from a corpus of
product buying guides, and conducting human evaluations of the resulting
predictions.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 05:02:49 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Vilnis",
"Luke",
""
],
[
"Fisher",
"Zach",
""
],
[
"Kanagal",
"Bhargav",
""
],
[
"Murray",
"Patrick",
""
],
[
"Sanghai",
"Sumit",
""
]
] |
new_dataset
| 0.99967 |
2212.10789
|
Shengchao Liu
|
Shengchao Liu, Weili Nie, Chengpeng Wang, Jiarui Lu, Zhuoran Qiao,
Ling Liu, Jian Tang, Chaowei Xiao, Anima Anandkumar
|
Multi-modal Molecule Structure-text Model for Text-based Retrieval and
Editing
| null | null | null | null |
cs.LG cs.CL q-bio.QM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is increasing adoption of artificial intelligence in drug discovery.
However, existing works use machine learning to mainly utilize the chemical
structures of molecules yet ignore the vast textual knowledge available in
chemistry. Incorporating textual knowledge enables us to realize new drug
design objectives, adapt to text-based instructions, and predict complex
biological activities. We present a multi-modal molecule structure-text model,
MoleculeSTM, by jointly learning molecules' chemical structures and textual
descriptions via a contrastive learning strategy. To train MoleculeSTM, we
construct the largest multi-modal dataset to date, namely PubChemSTM, with over
280K chemical structure-text pairs. To demonstrate the effectiveness and
utility of MoleculeSTM, we design two challenging zero-shot tasks based on text
instructions, including structure-text retrieval and molecule editing.
MoleculeSTM possesses two main properties: open vocabulary and compositionality
via natural language. In experiments, MoleculeSTM obtains the state-of-the-art
generalization ability to novel biochemical concepts across various benchmarks.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 06:18:31 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Liu",
"Shengchao",
""
],
[
"Nie",
"Weili",
""
],
[
"Wang",
"Chengpeng",
""
],
[
"Lu",
"Jiarui",
""
],
[
"Qiao",
"Zhuoran",
""
],
[
"Liu",
"Ling",
""
],
[
"Tang",
"Jian",
""
],
[
"Xiao",
"Chaowei",
""
],
[
"Anandkumar",
"Anima",
""
]
] |
new_dataset
| 0.980408 |
2212.10791
|
Joshua Maynez
|
Annie Louis and Joshua Maynez
|
OpineSum: Entailment-based self-training for abstractive opinion
summarization
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A typical product or place often has hundreds of reviews, and summarization
of these texts is an important and challenging problem. Recent progress on
abstractive summarization in domains such as news has been driven by supervised
systems trained on hundreds of thousands of news articles paired with
human-written summaries. However for opinion texts, such large scale datasets
are rarely available. Unsupervised methods, self-training, and few-shot
learning approaches bridge that gap. In this work, we present a novel
self-training approach, OpineSum, for abstractive opinion summarization. The
summaries in this approach are built using a novel application of textual
entailment and capture the consensus of opinions across the various reviews for
an item. This method can be used to obtain silver-standard summaries on a large
scale and train both unsupervised and few-shot abstractive summarization
systems. OpineSum achieves state-of-the-art performance in both settings.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 06:20:28 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Louis",
"Annie",
""
],
[
"Maynez",
"Joshua",
""
]
] |
new_dataset
| 0.995882 |
2212.10854
|
Haerin Kim
|
Yongsik Kim, Jae Woong Choi, Hyo Sun Lee, Jeong Do Yoo, Haerin Kim,
Junho Jang, Kibeom Park, Huy Kang Kim
|
Defining C-ITS Environment and Attack Scenarios
|
in Korean language
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As technology advances, it is possible to process a lot of data, and as
various elements in the city become diverse and complex, cities are becoming
smart cities. One of the core systems of smart cities is
Cooperative Intelligent Transport Systems (C-ITS). C-ITS is a system that
provides drivers with real-time accident risk information such as surrounding
traffic conditions, sudden stops, and falling objects while a vehicle is
driving, and consists of road infrastructure, C-ITS center, and vehicle
terminals. Meanwhile, smart cities can have cybersecurity problems because many
elements of the city are networked and electronically controlled. If
cybersecurity problems occur in C-ITS, there is a high risk of safety problems.
The purpose of this technical document is to describe C-ITS environment
modeling and C-ITS attack scenarios for C-ITS security. After describing the
concept of C-ITS and MITRE ATT&CK, we describe the C-ITS environment model and
the attack scenario model that we define.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 08:58:53 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Kim",
"Yongsik",
""
],
[
"Choi",
"Jae Woong",
""
],
[
"Lee",
"Hyo Sun",
""
],
[
"Yoo",
"Jeong Do",
""
],
[
"Kim",
"Haerin",
""
],
[
"Jang",
"Junho",
""
],
[
"Park",
"Kibeom",
""
],
[
"Kim",
"Huy Kang",
""
]
] |
new_dataset
| 0.989019 |
2212.10865
|
Thomas Guyet
|
Thomas Guyet (BEAGLE), Laurent Spillemaecker (ENSAI), Simon Malinowski
(LinkMedia, UR1), Anne-Isabelle Graux (PEGASE)
|
Temporal Disaggregation of the Cumulative Grass Growth
| null |
International Conference on Pattern Recognition and Artificial
Intelligence (ICPRAI), Jun 2022, Paris, France. pp.383-394,
|
10.1007/978-3-031-09282-4_32
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information on the grass growth over a year is essential for some models
simulating the use of this resource to feed animals on pasture or at barn with
hay or grass silage. Unfortunately, this information is rarely available. The
challenge is to reconstruct grass growth from two sources of information: usual
daily climate data (rainfall, radiation, etc.) and cumulative growth over the
year. We have to be able to capture the effect of seasonal climatic events
which are known to distort the growth curve within the year. In this paper, we
formulate this challenge as a problem of disaggregating the cumulative growth
into a time series. To address this problem, our method applies time series
forecasting using climate information and grass growth from previous time
steps. Several alternatives of the method are proposed and compared
experimentally using a database generated from a grassland process-based model.
The results show that our method can accurately reconstruct the time series,
independently of the use of the cumulative growth information.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 09:15:34 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Guyet",
"Thomas",
"",
"BEAGLE"
],
[
"Spillemaecker",
"Laurent",
"",
"ENSAI"
],
[
"Malinowski",
"Simon",
"",
"LinkMedia, UR1"
],
[
"Graux",
"Anne-Isabelle",
"",
"PEGASE"
]
] |
new_dataset
| 0.956605 |
2212.10869
|
Ufuk Uyan
|
Ufuk Uyan, M. Tugberk Isyapar, Mahiye Uluyagmur Ozturk
|
5G Long-Term and Large-Scale Mobile Traffic Forecasting
| null | null | null | null |
cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is crucial for the service provider to comprehend and forecast mobile
traffic in large-scale cellular networks in order to govern and manage
mechanisms for base station placement, load balancing, and network planning.
The purpose of this article is to extract and simulate traffic patterns from
more than 14,000 cells that have been installed in different metropolitan
areas. To do this, we create, implement, and assess a method in which cells are
first categorized by their point of interest and then clustered based on the
temporal distribution of cells in each region. The proposed model has been
tested using real-world 5G mobile traffic datasets collected over 31 weeks in
various cities. We found that our proposed model performed well in predicting
mobile traffic patterns up to 2 weeks in advance. Our model outperformed the
base model in most areas of interest and generally achieved up to 15\% less
prediction error compared to the na\"ive approach. This indicates that our
approach is effective in predicting mobile traffic patterns in large-scale
cellular networks.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 09:26:33 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Uyan",
"Ufuk",
""
],
[
"Isyapar",
"M. Tugberk",
""
],
[
"Ozturk",
"Mahiye Uluyagmur",
""
]
] |
new_dataset
| 0.998157 |
2212.10870
|
Yuan Liu
|
Yuan Liu, Jiacheng Chen, Hao Wu
|
MoQuad: Motion-focused Quadruple Construction for Video Contrastive
Learning
|
ECCV2022 WorkShop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Learning effective motion features is an essential pursuit of video
representation learning. This paper presents a simple yet effective sample
construction strategy to boost the learning of motion features in video
contrastive learning. The proposed method, dubbed Motion-focused Quadruple
Construction (MoQuad), augments the instance discrimination by meticulously
disturbing the appearance and motion of both the positive and negative samples
to create a quadruple for each video instance, such that the model is
encouraged to exploit motion information. Unlike recent approaches that create
extra auxiliary tasks for learning motion features or apply explicit temporal
modelling, our method keeps the simple and clean contrastive learning paradigm
(i.e., SimCLR) without multi-task learning or extra modelling. In addition, we
design two extra training strategies by analyzing initial MoQuad experiments.
By simply applying MoQuad to SimCLR, extensive experiments show that we achieve
superior performance on downstream tasks compared to the state of the arts.
Notably, on the UCF-101 action recognition task, we achieve 93.7% accuracy
after pre-training the model on Kinetics-400 for only 200 epochs, surpassing
various previous methods.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 09:26:40 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Liu",
"Yuan",
""
],
[
"Chen",
"Jiacheng",
""
],
[
"Wu",
"Hao",
""
]
] |
new_dataset
| 0.993579 |
2212.10875
|
Luca Geatti
|
Luca Geatti, Marco Montali, Andrey Rivkin
|
Reactive Synthesis for DECLARE via symbolic automata
| null | null | null | null |
cs.FL cs.LO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Given a specification of linear-time temporal logic interpreted over finite
traces (LTLf), the reactive synthesis problem asks to find a
finitely-representable, terminating controller that reacts to the
uncontrollable actions of an environment in order to enforce a desired system
specification. In this paper we study, for the first time, the reactive
synthesis problem for DECLARE - a fragment of LTLf extensively used both in
theory and practice for specifying declarative, constraint-based business
processes. We provide a threefold contribution. First, we give a naive, doubly
exponential time synthesis algorithm for this problem. Second, we show how an
arbitrary DECLARE specification can be compactly encoded into an equivalent
pure past one in LTLf, and we exploit this to define an optimized, singly
exponential time algorithm for DECLARE synthesis. Third, we derive a symbolic
version of this algorithm, by introducing a novel translation of pure-past
temporal formulas into symbolic deterministic finite automata.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 09:38:06 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Geatti",
"Luca",
""
],
[
"Montali",
"Marco",
""
],
[
"Rivkin",
"Andrey",
""
]
] |
new_dataset
| 0.99803 |
2212.10923
|
Zonglin Yang
|
Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong
Liu, Jianfeng Gao, Furu Wei
|
Language Models as Inductive Reasoners
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Inductive reasoning is a core component of human intelligence. In past
research on inductive reasoning within computer science, logic language has
been used to represent knowledge (facts and rules, more specifically). However,
logic language can cause systematic problems for inductive reasoning, such as
the inability to handle raw input such as natural language, sensitivity to
mislabeled data, and incapacity to handle ambiguous input. To this end, we
propose a new task, which is to induce natural language rules from natural
language facts, and create a dataset termed DEER containing 1.2k rule-fact
pairs for the task, where rules and facts are written in natural language. New
automatic metrics are also proposed and analysed for the evaluation of this
task. With DEER, we investigate a modern approach for inductive reasoning where
we use natural language as representation for knowledge instead of logic
language and use pretrained language models as ''reasoners''. Moreover, we
provide the first and comprehensive analysis of how well pretrained language
models can induce natural language rules from natural language facts. We also
propose a new framework for this task, drawing insights from the philosophy
literature, which, as we show in the experiment section, surpasses baselines
in both automatic and human evaluations.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 11:12:14 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Yang",
"Zonglin",
""
],
[
"Dong",
"Li",
""
],
[
"Du",
"Xinya",
""
],
[
"Cheng",
"Hao",
""
],
[
"Cambria",
"Erik",
""
],
[
"Liu",
"Xiaodong",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Wei",
"Furu",
""
]
] |
new_dataset
| 0.993906 |
2212.10926
|
Changmin Lee
|
Changmin Lee, Bon-Hong Koo, Chan-Byoung Chae, and Robert Schober
|
The Internet of Bio-Nano Things in Blood Vessels: System Design and
Prototypes
| null | null | null | null |
cs.ET eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the Internet of Bio-Nano Things (IoBNT) which
relates to networks formed by molecular communications. By providing a means of
communication through the ubiquitously connected blood vessels (arteries,
veins, and capillaries), molecular communication-based IoBNT enables a host of
new eHealth applications. For example, an organ monitoring sensor can transfer
internal body signals through the IoBNT for health monitoring applications. We
empirically show that blood vessel channels introduce a new set of challenges
for the design of molecular communication systems in comparison to free-space
channels. We then propose cylindrical duct channel models and discuss the
corresponding system designs conforming to the channel characteristics.
Furthermore, based on prototype implementations, we confirm that molecular
communication techniques can be utilized for composing the IoBNT. We believe
that the promising results presented in this work, together with the rich
research challenges that lie ahead, are strong indicators that IoBNT with
molecular communications can drive novel applications for emerging eHealth
systems.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 11:15:02 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Lee",
"Changmin",
""
],
[
"Koo",
"Bon-Hong",
""
],
[
"Chae",
"Chan-Byoung",
""
],
[
"Schober",
"Robert",
""
]
] |
new_dataset
| 0.992265 |
2212.10929
|
M Saiful Bari
|
M Saiful Bari, Aston Zhang, Shuai Zheng, Xingjian Shi, Yi Zhu, Shafiq
Joty, Mu Li
|
SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pre-trained large language models can efficiently interpolate human-written
prompts in a natural way. Multitask prompted learning can help generalization
through a diverse set of tasks at once, thus enhancing the potential for more
effective downstream fine-tuning. To perform efficient multitask inference in
the same batch, parameter-efficient fine-tuning methods such as prompt tuning
have been proposed. However, the existing prompt tuning methods may lack
generalization. We propose SPT, a semi-parametric prompt tuning method for
multitask prompted learning. The novel component of SPT is a memory bank from
where memory prompts are retrieved based on discrete prompts. Extensive
experiments, such as (i) fine-tuning a full language model with SPT on 31
different tasks from 8 different domains and evaluating zero-shot
generalization on 9 heldout datasets under 5 NLP task categories and (ii)
pretraining SPT on the GLUE datasets and evaluating fine-tuning on the
SuperGLUE datasets, demonstrate effectiveness of SPT.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 11:18:09 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Bari",
"M Saiful",
""
],
[
"Zhang",
"Aston",
""
],
[
"Zheng",
"Shuai",
""
],
[
"Shi",
"Xingjian",
""
],
[
"Zhu",
"Yi",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Li",
"Mu",
""
]
] |
new_dataset
| 0.990602 |
2212.10935
|
Zihan Wang
|
Zihan Wang and Naoki Yoshinaga
|
Esports Data-to-commentary Generation on Large-scale Data-to-text
Dataset
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Esports, a sports competition using video games, has become one of the most
important sporting events in recent years. Although the amount of esports data
is increasing more than ever, only a small fraction of those data is accompanied
by text commentaries that help the audience retrieve and understand the plays. Therefore,
in this study, we introduce a task of generating game commentaries from
structured data records to address the problem. We first build a large-scale
esports data-to-text dataset using structured data and commentaries from a
popular esports game, League of Legends. On this dataset, we devise several
data preprocessing methods including linearization and data splitting to
augment its quality. We then introduce several baseline encoder-decoder models
and propose a hierarchical model to generate game commentaries. Considering the
characteristics of esports commentaries, we design evaluation metrics including
three aspects of the output: correctness, fluency, and strategic depth.
Experimental results on our large-scale esports dataset confirmed the advantage
of the hierarchical model, and the results revealed several challenges of this
novel task.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 11:23:31 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Wang",
"Zihan",
""
],
[
"Yoshinaga",
"Naoki",
""
]
] |
new_dataset
| 0.974345 |
2212.10992
|
Tanmay Sen
|
Abhishek Sarkar, Tanmay Sen, Srimanta Kundu, Arijit Sarkar, Abdul
Wazed
|
LogAnMeta: Log Anomaly Detection Using Meta Learning
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Modern telecom systems are monitored with performance and system logs from
multiple application layers and components. Detecting anomalous events from
these logs is key to identify security breaches, resource over-utilization,
critical/fatal errors, etc. Current supervised log anomaly detection frameworks
tend to perform poorly on new types or signatures of anomalies with few or
unseen samples in the training data. In this work, we propose a
meta-learning-based log anomaly detection framework (LogAnMeta) for detecting
anomalies from sequences of log events with few samples. LogAnMeta trains a
hybrid few-shot classifier in an episodic manner. The experimental results
demonstrate the efficacy of our proposed method.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 13:00:02 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Sarkar",
"Abhishek",
""
],
[
"Sen",
"Tanmay",
""
],
[
"Kundu",
"Srimanta",
""
],
[
"Sarkar",
"Arijit",
""
],
[
"Wazed",
"Abdul",
""
]
] |
new_dataset
| 0.997296 |
2212.11071
|
Guilherme Christmann
|
Guilherme Christmann, Lin Yu-Ren, Rodrigo da Silva Guerra, and Jacky
Baltes
|
Can a Robot Shoot an Olympic Recurve Bow? A preliminary study
|
Short paper presented at FIRA Summit 2020, 9 pages, 5 figures, 2
tables
| null | null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
The field of robotics, and more specifically humanoid robotics, has several
established competitions with research-oriented goals in mind. Challenging the
robots in a handful of tasks, these competitions provide a way to gauge the
state of the art in robotic design, as well as an indicator for how far we are
from reaching human performance. The most notable competitions are RoboCup,
which has the long-term goal of competing against a real human team in 2050,
and the FIRA HuroCup league, in which humanoid robots have to perform tasks
based on actual Olympic events. Having robots compete against humans under the
same rules is a challenging goal, and we believe that it is in the sport of
archery that humanoid robots have the most potential to achieve it in the near
future. In this work, we perform a first step in this direction. We present a
humanoid robot that is capable of gripping, drawing and shooting a recurve bow
at a target 10 meters away with considerable accuracy. Additionally, we show
that it is also capable of shooting distances of over 50 meters.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 15:18:04 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Christmann",
"Guilherme",
""
],
[
"Yu-Ren",
"Lin",
""
],
[
"Guerra",
"Rodrigo da Silva",
""
],
[
"Baltes",
"Jacky",
""
]
] |
new_dataset
| 0.996795 |
2212.11078
|
Dipika Singhania
|
Dipika Singhania, Rahul Rahaman, Angela Yao
|
C2F-TCN: A Framework for Semi and Fully Supervised Temporal Action
Segmentation
|
arXiv admin note: text overlap with arXiv:2112.01402
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Temporal action segmentation tags action labels for every frame in an input
untrimmed video containing multiple actions in a sequence. For the task of
temporal action segmentation, we propose an encoder-decoder-style architecture
named C2F-TCN featuring a "coarse-to-fine" ensemble of decoder outputs. The
C2F-TCN framework is enhanced with a novel model agnostic temporal feature
augmentation strategy formed by the computationally inexpensive strategy of the
stochastic max-pooling of segments. It produces more accurate and
well-calibrated supervised results on three benchmark action segmentation
datasets. We show that the architecture is flexible for both supervised and
representation learning. In line with this, we present a novel unsupervised way
to learn frame-wise representation from C2F-TCN. Our unsupervised learning
approach hinges on the clustering capabilities of the input features and the
formation of multi-resolution features from the decoder's implicit structure.
Further, we provide the first semi-supervised temporal action segmentation
results by merging representation learning with conventional supervised
learning. Our semi-supervised learning scheme, called
``Iterative-Contrastive-Classify (ICC)'', progressively improves in performance
with more labeled data. The ICC semi-supervised learning in C2F-TCN, with 40%
labeled videos, performs similar to fully supervised counterparts.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 14:53:46 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Singhania",
"Dipika",
""
],
[
"Rahaman",
"Rahul",
""
],
[
"Yao",
"Angela",
""
]
] |
new_dataset
| 0.968726 |
2212.11101
|
Mehdi Delrobaei
|
Paniz Sedighi, Mohammad Hesam Norouzi, Mehdi Delrobaei
|
An RFID-Based Assistive Glove to Help the Visually Impaired
| null |
IEEE Transactions on Instrumentation and Measurement 70 (2021):
1-9
|
10.1109/TIM.2021.3069834
| null |
cs.HC cs.SY eess.SY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent studies have focused on facilitating perception and outdoor navigation
for people with blindness or some form of vision loss. However, a significant
portion of these studies is centered around treatment and vision
rehabilitation, leaving some immediate needs unmet, such as interacting with
surrounding objects or recognizing colors and fine patterns without tactile
feedback. This study targets such needs and delivers a straightforward method
for communicating with the environment using a wearable, unobtrusive device.
We initially discuss the advantages and limitations of related works to draw
out the best-fitting design concepts. Then, we introduce the potential for
emerging technologies such as radio-frequency identification. We present the
design details and the experimental results of an assistive glove to allow
people with vision disabilities to interact with the environment more
efficiently. Based on the collected data from 17 blind-folded healthy
participants, the implemented system's success rate in identifying objects was
about 96.32%. Overall, 70% of the users found the device very satisfactory.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 15:44:34 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Sedighi",
"Paniz",
""
],
[
"Norouzi",
"Mohammad Hesam",
""
],
[
"Delrobaei",
"Mehdi",
""
]
] |
new_dataset
| 0.983715 |
2212.11121
|
Olanrewaju Tahir Aduragba
|
Olanrewaju Tahir Aduragba, Alexandra I. Cristea, Pete Phillips, Jonas
Kurlberg, Jialin Yu
|
Religion and Spirituality on Social Media in the Aftermath of the Global
Pandemic
|
Code used for this paper is available at:
https://github.com/tahirlanre/covid19-online-religion
| null | null | null |
cs.CY cs.AI cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
During the COVID-19 pandemic, the Church closed its physical doors for the
first time in about 800 years, which is, arguably, a cataclysmic event. Other
religions have found themselves in a similar situation, and they were
practically forced to move online, which is an unprecedented occasion. In this
paper, we analyse this sudden change in religious activities twofold: we create
and deliver a questionnaire, as well as analyse Twitter data, to understand
people's perceptions and activities related to religious activities online.
Importantly, we also analyse the temporal variations in this process by
analysing a period of 3 months: July-September 2020. Additionally to the
separate analysis of the two data sources, we also discuss the implications
from triangulating the results.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 18:41:02 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Aduragba",
"Olanrewaju Tahir",
""
],
[
"Cristea",
"Alexandra I.",
""
],
[
"Phillips",
"Pete",
""
],
[
"Kurlberg",
"Jonas",
""
],
[
"Yu",
"Jialin",
""
]
] |
new_dataset
| 0.983647 |
2212.11122
|
Parviz Ali
|
Parviz Ali
|
Diamond Abrasive Electroplated Surface Anomaly Detection using
Convolutional Neural Networks for Industrial Quality Inspection
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Electroplated diamond abrasive tools require nickel coating on a metal
surface for abrasive bonding and part functionality. The electroplated
nickel-coated abrasive tool is expected to have a high-quality part performance
by having a nickel coating thickness of between 50% to 60% of the abrasive
median diameter, uniformity of the nickel layer, abrasive distribution over the
electroplated surface, and bright gloss. Electroplating parameters are set
accordingly for this purpose. Industrial quality inspection for defects of
these abrasive electroplated parts with optical inspection instruments is
extremely challenging due to the diamond's light refraction, dispersion nature,
and reflective bright nickel surface. The difficulty posed by this challenge
requires parts to be quality inspected manually with an eye loupe that is
subjective and costly. In this study, we use a Convolutional Neural Network
(CNN) model in the production line to detect abrasive electroplated part
anomalies allowing us to fix or eliminate those parts or elements that are in
bad condition from the production chain and ultimately reduce manual quality
inspection cost. We used 744 samples to train our model. Our model successfully
identified over 99% of the parts with an anomaly. Keywords: Artificial
Intelligence, Anomaly Detection, Industrial Quality Inspection, Electroplating,
Diamond Abrasive Tool
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 20:14:18 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Ali",
"Parviz",
""
]
] |
new_dataset
| 0.99909 |
2212.11123
|
Xu Cao
|
Kun Tang, Xu Cao, Zhipeng Cao, Tong Zhou, Erlong Li, Ao Liu, Shengtao
Zou, Chang Liu, Shuqi Mei, Elena Sizikova, Chao Zheng
|
THMA: Tencent HD Map AI System for Creating HD Map Annotations
|
IAAI 2023
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, autonomous vehicle technology is becoming more and more mature.
Critical to progress and safety, high-definition (HD) maps, a type of
centimeter-level map collected using a laser sensor, provide accurate
descriptions of the surrounding environment. The key challenge of HD map
production is efficient, high-quality collection and annotation of large-volume
datasets. Due to the demand for high quality, HD map production requires
significant manual human effort to create annotations, a very time-consuming
and costly process for the map industry. In order to reduce manual annotation
burdens, many artificial intelligence (AI) algorithms have been developed to
pre-label the HD maps. However, there still exists a large gap between AI
algorithms and the traditional manual HD map production pipelines in accuracy
and robustness. Furthermore, it is also very resource-costly to build
large-scale annotated datasets and advanced machine learning algorithms for
AI-based HD map automatic labeling systems. In this paper, we introduce the
Tencent HD Map AI (THMA) system, an innovative end-to-end, AI-based, active
learning HD map labeling system capable of producing and labeling HD maps with
a scale of hundreds of thousands of kilometers. In THMA, we train AI models
directly from massive HD map datasets via supervised, self-supervised, and
weakly supervised learning to achieve high accuracy and efficiency required by
downstream users. THMA has been deployed by the Tencent Map team to provide
services to downstream companies and users, serving over 1,000 labeling workers
and producing up to 30,000 kilometers of HD map data per day. More
than 90 percent of the HD map data in Tencent Map is labeled automatically by
THMA, accelerating the traditional HD map labeling process by more than ten
times.
|
[
{
"version": "v1",
"created": "Wed, 14 Dec 2022 08:36:31 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Tang",
"Kun",
""
],
[
"Cao",
"Xu",
""
],
[
"Cao",
"Zhipeng",
""
],
[
"Zhou",
"Tong",
""
],
[
"Li",
"Erlong",
""
],
[
"Liu",
"Ao",
""
],
[
"Zou",
"Shengtao",
""
],
[
"Liu",
"Chang",
""
],
[
"Mei",
"Shuqi",
""
],
[
"Sizikova",
"Elena",
""
],
[
"Zheng",
"Chao",
""
]
] |
new_dataset
| 0.99927 |
2212.11124
|
Prasath Murugesan
|
Prasath Murugesan, Shamshu Dharwez Saganvali
|
An AI-Powered VVPAT Counter for Elections in India
|
4 pages, 4 figures
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Election Commission of India has introduced Voter Verified Paper Audit
Trail since 2019. This mechanism has increased voter confidence at the time of
casting the votes. However, physical verification of the VVPATs against the
party level counts from the EVMs is done only in 5 (randomly selected) machines
per constituency. The time required to conduct physical verification becomes a
bottleneck in scaling this activity for 100% of machines in all constituencies.
We propose an automated counter powered by image processing and machine
learning algorithms to speed up the process and address this issue.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 14:59:40 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Murugesan",
"Prasath",
""
],
[
"Saganvali",
"Shamshu Dharwez",
""
]
] |
new_dataset
| 0.997258 |
2212.11128
|
Lam Duc Nguyen
|
Lam Duc Nguyen, James Hoang, Qin Wang, Qinghua Lu, Sherry Xu, and
Shiping Chen
|
BDSP: A Fair Blockchain-enabled Framework for Privacy-Enhanced
Enterprise Data Sharing
|
9 pages, 7 figures, submitted for review
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Across industries, there is an ever-increasing rate of data sharing for
collaboration and innovation between organizations and their customers,
partners, suppliers, and internal teams. However, many enterprises are
restricted from freely sharing data due to regulatory restrictions across
different regions, performance issues in moving large volume data, or
requirements to maintain autonomy. In such situations, the enterprise can
benefit from the concept of federated learning, in which machine learning
models are constructed at various geographic sites. In this paper, we introduce
a general framework, namely BDSP, to share data among enterprises based on
Blockchain and federated learning techniques. Specifically, we propose a
transparency contribution accounting mechanism to estimate the valuation of
data and implement a proof-of-concept for further evaluation. The extensive
experimental results show that the proposed BDSP achieves competitive
performance, with training accuracy higher by over 5% and communication
overhead reduced by a factor of 3, compared to baseline approaches.
|
[
{
"version": "v1",
"created": "Fri, 16 Dec 2022 06:57:44 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Nguyen",
"Lam Duc",
""
],
[
"Hoang",
"James",
""
],
[
"Wang",
"Qin",
""
],
[
"Lu",
"Qinghua",
""
],
[
"Xu",
"Sherry",
""
],
[
"Chen",
"Shiping",
""
]
] |
new_dataset
| 0.997995 |
2212.11140
|
Hammond Pearce
|
Shailja Thakur, Baleegh Ahmad, Zhenxing Fan, Hammond Pearce, Benjamin
Tan, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg
|
Benchmarking Large Language Models for Automated Verilog RTL Code
Generation
|
Accepted in DATE 2023. 7 pages, 4 tables, 7 figures
| null | null | null |
cs.PL cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automating hardware design could remove a significant amount of human effort
from the engineering process and lead to fewer errors. Verilog is a popular
hardware description language to model and design digital systems, thus
generating Verilog code is a critical first step. Emerging large language
models (LLMs) are able to write high-quality code in other programming
languages. In this paper, we characterize the ability of LLMs to generate
useful Verilog. For this, we fine-tune pre-trained LLMs on Verilog datasets
collected from GitHub and Verilog textbooks. We construct an evaluation
framework comprising test-benches for functional analysis and a flow to test
the syntax of Verilog code generated in response to problems of varying
difficulty. Our findings show that across our problem scenarios, the
fine-tuning results in LLMs more capable of producing syntactically correct
code (25.9% overall). Further, when analyzing functional correctness, a
fine-tuned open-source CodeGen LLM can outperform the state-of-the-art
commercial Codex LLM (6.5% overall). Training/evaluation scripts and LLM
checkpoints are available: https://github.com/shailja-thakur/VGen.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 16:34:39 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Thakur",
"Shailja",
""
],
[
"Ahmad",
"Baleegh",
""
],
[
"Fan",
"Zhenxing",
""
],
[
"Pearce",
"Hammond",
""
],
[
"Tan",
"Benjamin",
""
],
[
"Karri",
"Ramesh",
""
],
[
"Dolan-Gavitt",
"Brendan",
""
],
[
"Garg",
"Siddharth",
""
]
] |
new_dataset
| 0.97193 |
2212.11152
|
Naoya Yoshimura
|
Naoya Yoshimura, Jaime Morales, Takuya Maekawa, Takahiro Hara
|
OpenPack: A Large-scale Dataset for Recognizing Packaging Works in
IoT-enabled Logistic Environments
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unlike human daily activities, existing publicly available sensor datasets
for work activity recognition in industrial domains are limited by difficulties
in collecting realistic data as close collaboration with industrial sites is
required. This also limits research on and development of AI methods for
industrial applications. To address these challenges and contribute to research
on machine recognition of work activities in industrial domains, in this study,
we introduce a new large-scale dataset for packaging work recognition called
OpenPack. OpenPack contains 53.8 hours of multimodal sensor data, including
keypoints, depth images, acceleration data, and readings from IoT-enabled
devices (e.g., handheld barcode scanners used in work procedures), collected
from 16 distinct subjects with different levels of packaging work experience.
On the basis of this dataset, we propose a neural network model designed to
recognize work activities, which efficiently fuses sensor data and readings
from IoT-enabled devices by processing them within different streams in a
ladder-shaped architecture, and the experiment showed the effectiveness of the
architecture. We believe that OpenPack will contribute to the community of
action/activity recognition with sensors. OpenPack dataset is available at
https://open-pack.github.io/.
|
[
{
"version": "v1",
"created": "Sat, 10 Dec 2022 13:01:18 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Yoshimura",
"Naoya",
""
],
[
"Morales",
"Jaime",
""
],
[
"Maekawa",
"Takuya",
""
],
[
"Hara",
"Takahiro",
""
]
] |
new_dataset
| 0.999901 |
2212.11154
|
Yongding Tian
|
Yongding Tian, Zaid Al-Ars, Peter Hofstee
|
Tydi-lang: a language for typed streaming hardware -- A manual for
future Tydi-lang compiler developers
|
60 pages with 2 pages of reference, Master's thesis in TUDelft
| null | null | null |
cs.PL cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Transferring composite data structures with variable-length fields often
requires designing non-trivial protocols that are not compatible between
hardware designs. When each project designs its own data format and protocols,
the ability to collaborate between hardware developers is diminished, which is
an issue especially in the open-source community. Because the high-level
meaning of a protocol is often lost in translation to low-level languages when
a custom protocol needs to be designed, extra documentation is required, the
interpretation of which introduces new opportunities for errors. The Tydi
specification (Tydi-spec) was proposed to address the above issues by codifying
the composite and variable-length data structures in a type and providing a
standard protocol to transfer typed data among hardware components. The Tydi
intermediate representation (Tydi-IR) extends the Tydi-spec by defining typed
interfaces, typed components, and connections among typed components.
In this thesis, we propose Tydi-lang, a high-level hardware description
language (HDL) for streaming designs. The language incorporates Tydi-spec to
describe typed streams and provides templates to describe abstract reusable
components. We also implement an open-source compiler from Tydi-lang to
Tydi-IR. We leverage a Tydi-IR to VHDL compiler, and also present a simulator
blueprint to identify streaming bottlenecks. We show several Tydi-lang examples
to translate high-level SQL to VHDL to demonstrate that Tydi-lang can
efficiently raise the level of abstraction and reduce design effort.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 23:46:46 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Tian",
"Yongding",
""
],
[
"Al-Ars",
"Zaid",
""
],
[
"Hofstee",
"Peter",
""
]
] |
new_dataset
| 0.999723 |
2212.11158
|
Valentina Castiglioni
|
Valentina Castiglioni and Michele Loreti and Simone Tini
|
RobTL: A Temporal Logic for the Robustness of Cyber-Physical Systems
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose the Robustness Temporal Logic (RobTL), a novel temporal logic for
the specification and analysis of distances between the behaviours of
Cyber-Physical Systems (CPSs) over a finite time horizon. Differently from
classical temporal logics, which express properties of the behaviour of a system, we
can use RobTL specifications to measure the differences in the behaviours of
systems with respect to various objectives and temporal constraints, and to
study how those differences evolve in time. Since the behaviour of CPSs is
inevitably subject to uncertainties and approximations, we show how the unique
features of RobTL allow us to specify properties of robustness of systems against
perturbations, i.e., their capability to function correctly even under the
effect of perturbations. Given the probabilistic nature of CPSs, our model
checking algorithm for RobTL specifications is based on statistical inference.
As an example of an application of our framework, we consider a supervised,
self-coordinating engine system that is subject to attacks aimed at inflicting
overstress of equipment.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 16:09:01 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Castiglioni",
"Valentina",
""
],
[
"Loreti",
"Michele",
""
],
[
"Tini",
"Simone",
""
]
] |
new_dataset
| 0.998349 |
2212.11173
|
Boris Shminke
|
Boris Shminke
|
Python client for Isabelle server
|
5 pages, 1 figure, submitted to CICM 2022
(https://cicm-conference.org/2022/)
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We contribute a Python client for the Isabelle server, which gives
researchers and students using Python as their primary programming language an
opportunity to communicate with the Isabelle server through TCP directly from a
Python script. Such an approach helps avoid the complexities of integrating an
existing Python script with languages used for Isabelle development (ML and
Scala). We also describe new features that appeared since the announcement of
the first version of the client a year ago. Finally, we give examples of the
client's applications in research and education and discuss known limitations
and possible directions for future development.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 12:05:28 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Shminke",
"Boris",
""
]
] |
new_dataset
| 0.998557 |
2212.11215
|
Matthias Mayr
|
Matthias Mayr, Julian M. Salt-Ducaju
|
A C++ Implementation of a Cartesian Impedance Controller for Robotic
Manipulators
|
7 pages, 1 figure. Under submission at JOSS
(https://joss.theoj.org/). Implementation at:
https://github.com/matthias-mayr/Cartesian-Impedance-Controller
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Cartesian impedance control is a type of motion control strategy for robots
that improves safety in partially unknown environments by achieving a compliant
behavior of the robot with respect to its external forces. This compliant robot
behavior has the added benefit of allowing physical human guidance of the
robot. In this paper, we propose a C++ implementation of compliance control
valid for any torque-commanded robotic manipulator. The proposed controller
implements Cartesian impedance control to track a desired end-effector pose.
Additionally, joint impedance is projected in the nullspace of the Cartesian
robot motion to track a desired robot joint configuration without perturbing
the Cartesian motion of the robot. The proposed implementation also allows the
robot to apply desired forces and torques to its environment. Several safety
features such as filtering, rate limiting, and saturation are included in the
proposed implementation. The core functionalities are in a re-usable base
library and a Robot Operating System (ROS) ros_control integration is provided
on top of that. The implementation was tested with the KUKA LBR iiwa robot and
the Franka Emika Robot (Panda) both in simulation and with the physical robots.
|
[
{
"version": "v1",
"created": "Wed, 21 Dec 2022 17:42:33 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Mayr",
"Matthias",
""
],
[
"Salt-Ducaju",
"Julian M.",
""
]
] |
new_dataset
| 0.989094 |
2212.11245
|
Iosif Iulian Petrila
|
Iosif Iulian Petrila
|
@C -- augmented version of C programming language
| null | null | null | null |
cs.PL cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
The augmented version of the C programming language is presented. The language
was extended with a series of low-level and high-level facilities to broaden
its usage spectrum across various computing systems, operations, and users.
Ambiguities and inconsistencies have been resolved by managing problematic and
undefined language elements through an interpretation and management similar to
that used in other C-syntax-based languages. The proposed augmentative
completeness elements of the @C approach preserve the spirit of the C language
and its basic characteristics through compatibility with the standard version,
while also allowing rejuvenation and bringing C up to the current state of the
art of programming languages.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 07:53:14 GMT"
}
] | 2022-12-22T00:00:00 |
[
[
"Petrila",
"Iosif Iulian",
""
]
] |
new_dataset
| 0.999057 |
2205.08857
|
Andreas Toftegaard Kristensen
|
Yuqing Ren, Andreas Toftegaard Kristensen, Yifei Shen, Alexios
Balatsoukas-Stimming, Chuan Zhang, Andreas Burg
|
A Sequence Repetition Node-Based Successive Cancellation List Decoder
for 5G Polar Codes: Algorithm and Implementation
| null | null |
10.1109/TSP.2022.3216921
| null |
cs.IT cs.AR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the low-latency and high-reliability requirements of 5G,
low-complexity node-based successive cancellation list (SCL) decoding has
received considerable attention for use in 5G communications systems. By
identifying special constituent codes in the decoding tree and immediately
decoding these, node-based SCL decoding provides a significant reduction in
decoding latency compared to conventional SCL decoding. However, while there
exist many types of nodes, the current node-based SCL decoders are limited by
the lack of a more generalized node that can efficiently decode a larger number
of different constituent codes to further reduce the decoding time. In this
paper, we extend a recent generalized node, the sequence repetition (SR) node,
to SCL decoding, and we describe the first implementation of an SR-List decoder.
By merging certain SR-List decoding operations and applying various
optimizations for 5G New Radio (NR) polar codes, our optimized SR-List decoding
algorithm increases the throughput by almost ${2\times}$ compared to a similar
state-of-the-art node-based SCL decoder. We also present our hardware
implementation of the optimized SR-List decoding algorithm which supports all
5G NR polar codes. Synthesis results show that our SR-List decoder can achieve
a $2.94 \, \mathrm{Gbps}$ throughput and $6.70\, \mathrm{Gbps} / \mathrm{mm}^2$
area efficiency for ${L=8}$.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2022 10:44:05 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Aug 2022 21:33:32 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Ren",
"Yuqing",
""
],
[
"Kristensen",
"Andreas Toftegaard",
""
],
[
"Shen",
"Yifei",
""
],
[
"Balatsoukas-Stimming",
"Alexios",
""
],
[
"Zhang",
"Chuan",
""
],
[
"Burg",
"Andreas",
""
]
] |
new_dataset
| 0.986828 |
2206.00515
|
Omid Ghorbanzadeh
|
Omid Ghorbanzadeh, Yonghao Xu, Pedram Ghamisi, Michael Kopp, David
Kreil
|
Landslide4Sense: Reference Benchmark Data and Deep Learning Models for
Landslide Detection
| null |
IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp.
1-17, 2022
|
10.1109/TGRS.2022.3215209
| null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study introduces \textit{Landslide4Sense}, a reference benchmark for
landslide detection from remote sensing. The repository features 3,799 image
patches fusing optical layers from Sentinel-2 sensors with the digital
elevation model and slope layer derived from ALOS PALSAR. The added
topographical information facilitates the accurate detection of landslide
borders, which recent research has shown to be challenging using optical
data alone. The extensive data set supports deep learning (DL) studies in
landslide detection and the development and validation of methods for the
systematic update of landslide inventories. The benchmark data set has been
collected at four different times and geographical locations: Iburi (September
2018), Kodagu (August 2018), Gorkha (April 2015), and Taiwan (August 2009).
Each image pixel is labelled as belonging to a landslide or not, incorporating
various sources and thorough manual annotation. We then evaluate the landslide
detection performance of 11 state-of-the-art DL segmentation models: U-Net,
ResU-Net, PSPNet, ContextNet, DeepLab-v2, DeepLab-v3+, FCN-8s, LinkNet, FRRN-A,
FRRN-B, and SQNet. All models were trained from scratch on patches from one
quarter of each study area and tested on independent patches from the other
three quarters. Our experiments demonstrate that ResU-Net outperformed the
other models for the landslide detection task. We make the multi-source
landslide benchmark data (Landslide4Sense) and the tested DL models publicly
available at \url{https://www.iarai.ac.at/landslide4sense}, establishing an
important resource for remote sensing, computer vision, and machine learning
communities in studies of image classification in general and applications to
landslide detection in particular.
|
[
{
"version": "v1",
"created": "Wed, 1 Jun 2022 14:18:23 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jun 2022 12:35:21 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Dec 2022 11:10:48 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Ghorbanzadeh",
"Omid",
""
],
[
"Xu",
"Yonghao",
""
],
[
"Ghamisi",
"Pedram",
""
],
[
"Kopp",
"Michael",
""
],
[
"Kreil",
"David",
""
]
] |
new_dataset
| 0.995713 |
2206.05183
|
Paul Atzberger
|
Ryan Lopez and Paul J. Atzberger
|
GD-VAEs: Geometric Dynamic Variational Autoencoders for Learning
Nonlinear Dynamics and Dimension Reductions
|
15 figures. arXiv admin note: text overlap with arXiv:2012.03448
| null | null | null |
cs.LG cs.NA math.DS math.NA physics.data-an stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop data-driven methods incorporating geometric and topological
information to learn parsimonious representations of nonlinear dynamics from
observations. We develop approaches for learning nonlinear state space models
of the dynamics for general manifold latent spaces using training strategies
related to Variational Autoencoders (VAEs). Our methods are referred to as
Geometric Dynamic (GD) Variational Autoencoders (GD-VAEs). We learn encoders
and decoders for the system states and evolution based on deep neural network
architectures that include general Multilayer Perceptrons (MLPs), Convolutional
Neural Networks (CNNs), and Transpose CNNs (T-CNNs). Motivated by problems
arising in parameterized PDEs and physics, we investigate the performance of
our methods on tasks for learning low dimensional representations of the
nonlinear Burgers equations, constrained mechanical systems, and spatial fields
of reaction-diffusion systems. GD-VAEs provide methods for obtaining
representations for use in diverse learning tasks involving dynamics.
|
[
{
"version": "v1",
"created": "Fri, 10 Jun 2022 15:23:23 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2022 00:17:33 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Lopez",
"Ryan",
""
],
[
"Atzberger",
"Paul J.",
""
]
] |
new_dataset
| 0.978234 |
2207.07694
|
Martin Zimmermann
|
Shibashis Guha, Isma\"el Jecker, Karoliina Lehtinen, Martin Zimmermann
|
Parikh Automata over Infinite Words
| null | null | null | null |
cs.FL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parikh automata extend finite automata by counters that can be tested for
membership in a semilinear set, but only at the end of a run, thereby
preserving many of the desirable algorithmic properties of finite automata.
Here, we study the extension of the classical framework to infinite inputs:
We introduce reachability, safety, B\"uchi, and co-B\"uchi Parikh automata on
infinite words and study expressiveness, closure properties, and the complexity
of verification problems.
We show that almost all classes of automata have pairwise incomparable
expressiveness, both in the deterministic and the nondeterministic case; a
result that sharply contrasts with the well-known hierarchy in the
$\omega$-regular setting. Furthermore, emptiness is shown decidable for Parikh
automata with reachability or B\"uchi acceptance, but undecidable for safety
and co-B\"uchi acceptance. Most importantly, we show decidability of model
checking with specifications given by deterministic Parikh automata with safety
or co-B\"uchi acceptance, but also undecidability for all other types of
automata. Finally, solving games is undecidable for all types.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2022 18:34:06 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2022 08:42:33 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Dec 2022 07:26:13 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Guha",
"Shibashis",
""
],
[
"Jecker",
"Ismaël",
""
],
[
"Lehtinen",
"Karoliina",
""
],
[
"Zimmermann",
"Martin",
""
]
] |
new_dataset
| 0.989113 |
2211.13762
|
Matteo Poggi
|
Luca De Luigi, Damiano Bolognini, Federico Domeniconi, Daniele De
Gregorio, Matteo Poggi, Luigi Di Stefano
|
ScanNeRF: a Scalable Benchmark for Neural Radiance Fields
|
WACV 2023. The first three authors contributed equally. Project page:
https://eyecan-ai.github.io/scannerf/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose the first-ever real benchmark designed for
evaluating Neural Radiance Fields (NeRFs) and, in general, Neural Rendering
(NR) frameworks. We design and implement an effective pipeline for scanning
real objects in quantity and with minimal effort. Our scan station is built
with less than a $500 hardware budget and can collect roughly 4000 images of a scanned
object in just 5 minutes. Such a platform is used to build ScanNeRF, a dataset
characterized by several train/val/test splits aimed at benchmarking the
performance of modern NeRF methods under different conditions. Accordingly, we
evaluate three cutting-edge NeRF variants on it to highlight their strengths
and weaknesses. The dataset is available on our project page, together with an
online benchmark to foster the development of better and better NeRFs.
|
[
{
"version": "v1",
"created": "Thu, 24 Nov 2022 19:00:02 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2022 11:24:55 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"De Luigi",
"Luca",
""
],
[
"Bolognini",
"Damiano",
""
],
[
"Domeniconi",
"Federico",
""
],
[
"De Gregorio",
"Daniele",
""
],
[
"Poggi",
"Matteo",
""
],
[
"Di Stefano",
"Luigi",
""
]
] |
new_dataset
| 0.999599 |
2212.09808
|
Thales Costa Silva
|
Thales C. Silva, Li Shen, Xi Yu, and M. Ani Hsieh
|
Receding Horizon Control on the Broadcast of Information in Stochastic
Networks
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This paper focuses on the broadcast of information on robot networks with
stochastic network interconnection topologies. Problematic communication
networks are almost unavoidable in areas where we wish to deploy multi-robotic
systems, usually due to a lack of environmental consistency, accessibility, and
structure. We tackle this problem by modeling the broadcast of information in a
multi-robot communication network as a stochastic process with random arrival
times, which can be produced by irregular robot movements, wireless
attenuation, and other environmental factors. Using this model, we provide and
analyze a receding horizon control strategy to control the statistics of the
information broadcast. The resulting strategy compels the robots to re-direct
their communication resources to different neighbors according to the current
propagation process to fulfill global broadcast requirements. Based on this
method, we provide an approach to compute the expected time to broadcast the
message to all nodes. Numerical examples are provided to illustrate the
results.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 19:26:58 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Silva",
"Thales C.",
""
],
[
"Shen",
"Li",
""
],
[
"Yu",
"Xi",
""
],
[
"Hsieh",
"M. Ani",
""
]
] |
new_dataset
| 0.988418 |
2212.09825
|
Abhilasha Sancheti
|
Abhilasha Sancheti, Aparna Garimella, Balaji Vasan Srinivasan, Rachel
Rudinger
|
What to Read in a Contract? Party-Specific Summarization of Important
Obligations, Entitlements, and Prohibitions in Legal Documents
|
15 pages, 5 figures, 10 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Legal contracts, such as employment or lease agreements, are important
documents as they govern the obligations and entitlements of the various
contracting parties. However, these documents are typically long and written in
legalese, resulting in many manual hours spent understanding them. In this
paper, we address the task of summarizing legal contracts for each of the
contracting parties, to enable faster reviewing and improved understanding of
them. Specifically, we collect a dataset consisting of pairwise importance
comparison annotations by legal experts for ~293K sentence pairs from lease
agreements. We propose a novel extractive summarization system to automatically
produce a summary consisting of the most important obligations, entitlements,
and prohibitions in a contract. It consists of two modules: (1) a content
categorizer to identify sentences containing each of the categories (i.e.,
obligation, entitlement, and prohibition) for a party, and (2) an importance
ranker to compare the importance among sentences of each category for a party
to obtain a ranked list. The final summary is produced by selecting the most
important sentences of a category for each of the parties. We demonstrate the
effectiveness of our proposed system by comparing it against several text
ranking baselines via automatic and human evaluation.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 19:53:14 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Sancheti",
"Abhilasha",
""
],
[
"Garimella",
"Aparna",
""
],
[
"Srinivasan",
"Balaji Vasan",
""
],
[
"Rudinger",
"Rachel",
""
]
] |
new_dataset
| 0.998636 |
2212.09859
|
Martin Nisser
|
Xinyi Yang, Martin Nisser and Stefanie Mueller
|
CompuMat: A Computational Composite Material for Tangible Interaction
|
Xinyi Yang, Martin Nisser, and Stefanie Mueller. 2023. CompuMat: A
Computational Composite Material for Tangible Interaction. In ACM TEI '23:
Proceedings of the Seventeenth International Conference on Tangible,
Embedded, and Embodied Interaction (ACM TEI '23), February 26-March 1, 2023,
Warsaw, Poland. ACM, New York, NY, USA, 8 pages
| null |
10.1145/3569009.3573120
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
This paper introduces a computational composite material comprising layers
for actuation, computation, and energy storage. Key to its design are
inexpensive materials assembled using readily available fabrication machines to
support the rapid exploration of applications from computational composites.
The actuation layer is a soft magnetic sheet that is programmed to either bond,
repel, or remain agnostic to other areas of the sheet. The computation layer is
a flexible PCB made from copper-clad kapton engraved by a fiber laser, powered
by a third energy-storage layer comprised of 0.4mm-thin lithium polymer
batteries. We present the material layup and an accompanying digital
fabrication process enabling users to rapidly prototype their own untethered,
interactive and tangible prototypes. The material is low-profile, inexpensive,
and fully untethered, capable of being used for a variety of applications in
HCI and robotics including structural origami and proprioception.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 21:13:32 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Yang",
"Xinyi",
""
],
[
"Nisser",
"Martin",
""
],
[
"Mueller",
"Stefanie",
""
]
] |
new_dataset
| 0.999653 |
2212.09879
|
Petr Plechac
|
Lenka Jungmannov\'a and Petr Plech\'a\v{c}
|
Unsigned Play by Milan Kundera? An Authorship Attribution Study
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In addition to being a widely recognised novelist, Milan Kundera has also
authored three pieces for theatre: The Owners of the Keys (Majitel\'e
kl\'i\v{c}\r{u}, 1961), The Blunder (Pt\'akovina, 1967), and Jacques and his
Master (Jakub a jeho p\'an, 1971). In recent years, however, the hypothesis has
been raised that Kundera is the true author of a fourth play: Juro
J\'ano\v{s}\'ik, first performed in a 1974 production under the name of Karel
Steigerwald, who was Kundera's student at the time. In this study, we make use
of supervised machine learning to settle the question of authorship attribution
in the case of Juro J\'ano\v{s}\'ik, with results strongly supporting the
hypothesis of Kundera's authorship.
|
[
{
"version": "v1",
"created": "Mon, 19 Dec 2022 21:59:22 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Jungmannová",
"Lenka",
""
],
[
"Plecháč",
"Petr",
""
]
] |
new_dataset
| 0.98972 |
2212.09979
|
Xitong Gao
|
Tianrui Qin, Xianghuan He, Xitong Gao, Yiren Zhao, Kejiang Ye,
Cheng-Zhong Xu
|
Flareon: Stealthy any2any Backdoor Injection via Poisoned Augmentation
| null | null | null | null |
cs.CR cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Open software supply chain attacks, once successful, can exact heavy costs in
mission-critical applications. As open-source ecosystems for deep learning
flourish and become increasingly universal, they present attackers with
unexplored avenues to code-inject malicious backdoors in deep neural network
models. This paper proposes Flareon, a small, stealthy, seemingly harmless code
modification that specifically targets the data augmentation pipeline with
motion-based triggers. Flareon neither alters ground-truth labels, nor modifies
the training loss objective, nor does it assume prior knowledge of the victim
model architecture, training data, and training hyperparameters. Yet, it has a
surprisingly large effect on training -- models trained under Flareon
learn powerful target-conditional (or "any2any") backdoors. The resulting
models can exhibit high attack success rates for any target choices and better
clean accuracies than backdoor attacks that not only seize greater control, but
also assume more restrictive attack capabilities. We also demonstrate the
effectiveness of Flareon against recent defenses. Flareon is fully open-source
and available online to the deep learning community:
https://github.com/lafeat/flareon.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 03:43:54 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Qin",
"Tianrui",
""
],
[
"He",
"Xianghuan",
""
],
[
"Gao",
"Xitong",
""
],
[
"Zhao",
"Yiren",
""
],
[
"Ye",
"Kejiang",
""
],
[
"Xu",
"Cheng-Zhong",
""
]
] |
new_dataset
| 0.950349 |
2212.09981
|
Jose Huaman
|
Jose Huaman, Felix O. Sumari, Luigy Machaca, Esteban Clua and Joris
Guerin
|
Benchmarking person re-identification datasets and approaches for
practical real-world implementations
|
This paper is the extended version of our short paper accepted in
VISAPP - 2023
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Person Re-Identification (Re-ID) has received a lot of attention.
Large datasets containing labeled images of various individuals have been
released, allowing researchers to develop and test many successful approaches.
However, when such Re-ID models are deployed in new cities or environments, the
task of searching for people within a network of security cameras is likely to
face an important domain shift, thus resulting in decreased performance.
Indeed, while most public datasets were collected in a limited geographic area,
images from a new city present different features (e.g., people's ethnicity and
clothing style, weather, architecture, etc.). In addition, the whole frames of
the video streams must be converted into cropped images of people using
pedestrian detection models, which behave differently from the human annotators
who created the dataset used for training. To better understand the extent of
this issue, this paper introduces a complete methodology to evaluate Re-ID
approaches and training datasets with respect to their suitability for
unsupervised deployment for live operations. This method is used to benchmark
four Re-ID approaches on three datasets, providing insight and guidelines that
can help to design better Re-ID pipelines in the future.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 03:45:38 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Huaman",
"Jose",
""
],
[
"Sumari",
"Felix O.",
""
],
[
"Machaca",
"Luigy",
""
],
[
"Clua",
"Esteban",
""
],
[
"Guerin",
"Joris",
""
]
] |
new_dataset
| 0.999413 |
2212.09988
|
Ke Zhao
|
Ke Zhao, Haining Tan, Tsz Fung Yau
|
Multi-Reference Image Super-Resolution: A Posterior Fusion Approach
| null | null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reference-based Super-resolution (RefSR) approaches have recently been
proposed to overcome the ill-posed problem of image super-resolution by
providing additional information from a high-resolution image. Multi-reference
super-resolution extends this approach by allowing more information to be
incorporated. This paper proposes a 2-step-weighting posterior fusion approach
to combine the outputs of RefSR models with multiple references. Extensive
experiments on the CUFED5 dataset demonstrate that the proposed methods can be
applied to various state-of-the-art RefSR models to get a consistent
improvement in image quality.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 04:15:03 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Zhao",
"Ke",
""
],
[
"Tan",
"Haining",
""
],
[
"Yau",
"Tsz Fung",
""
]
] |
new_dataset
| 0.999667 |
2212.10030
|
Feng Qiu
|
Feng Qiu, Wanzeng Kong, Yu Ding
|
InterMulti:Multi-view Multimodal Interactions with Text-dominated
Hierarchical High-order Fusion for Emotion Analysis
|
9 pages, 3 figures. arXiv admin note: text overlap with
arXiv:2212.08661
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Humans are sophisticated at reading interlocutors' emotions from multimodal
signals, such as speech contents, voice tones and facial expressions. However,
machines might struggle to understand various emotions due to the difficulty of
effectively decoding emotions from the complex interactions between multimodal
signals. In this paper, we propose a multimodal emotion analysis framework,
InterMulti, to capture complex multimodal interactions from different views and
identify emotions from multimodal signals. Our proposed framework decomposes
signals of different modalities into three kinds of multimodal interaction
representations, including a modality-full interaction representation, a
modality-shared interaction representation, and three modality-specific
interaction representations. Additionally, to balance the contribution of
different modalities and learn a more informative latent interaction
representation, we developed a novel Text-dominated Hierarchical High-order
Fusion (THHF) module. The THHF module integrates the above three kinds of
representations into a comprehensive multimodal interaction representation.
Extensive experimental results on widely used datasets, i.e., MOSEI, MOSI and
IEMOCAP, demonstrate that our method outperforms the state-of-the-art.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 07:02:32 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Qiu",
"Feng",
""
],
[
"Kong",
"Wanzeng",
""
],
[
"Ding",
"Yu",
""
]
] |
new_dataset
| 0.995676 |
2212.10049
|
Chenxi Huang
|
Chenxi Huang, Tong He, Haidong Ren, Wenxiao Wang, Binbin Lin, Deng Cai
|
OBMO: One Bounding Box Multiple Objects for Monocular 3D Object
Detection
|
9 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compared to typical multi-sensor systems, monocular 3D object detection has
attracted much attention due to its simple configuration. However, there is
still a significant gap between LiDAR-based and monocular-based methods. In
this paper, we find that the ill-posed nature of monocular imagery can lead to
depth ambiguity. Specifically, objects with different depths can appear with
the same bounding boxes and similar visual features in the 2D image.
Unfortunately, the network cannot accurately distinguish different depths from
such non-discriminative visual features, resulting in unstable depth training.
To facilitate depth learning, we propose a simple yet effective plug-and-play
module, One Bounding Box Multiple Objects (OBMO). Concretely, we add a set of
suitable pseudo labels by shifting the 3D bounding box along the viewing
frustum. To constrain the pseudo-3D labels to be reasonable, we carefully
design two label scoring strategies to represent their quality. In contrast to
the original hard depth labels, such soft pseudo labels with quality scores
allow the network to learn a reasonable depth range, boosting training
stability and thus improving final performance. Extensive experiments on KITTI
and Waymo benchmarks show that our method improves state-of-the-art monocular
3D detectors by a significant margin (the
improvements under the moderate setting on KITTI validation set are
$\mathbf{1.82\sim 10.91\%}$ mAP in BEV and $\mathbf{1.18\sim 9.36\%}$ mAP in
3D). Code has been released at https://github.com/mrsempress/OBMO.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 07:46:49 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Huang",
"Chenxi",
""
],
[
"He",
"Tong",
""
],
[
"Ren",
"Haidong",
""
],
[
"Wang",
"Wenxiao",
""
],
[
"Lin",
"Binbin",
""
],
[
"Cai",
"Deng",
""
]
] |
new_dataset
| 0.966231 |
2212.10064
|
Arnab Bhattacharya
|
Aowabin Rahman, Arnab Bhattacharya, Thiagarajan Ramachandran, Sayak
Mukherjee, Himanshu Sharma, Ted Fujimoto, Samrat Chatterjee
|
AdverSAR: Adversarial Search and Rescue via Multi-Agent Reinforcement
Learning
| null | null | null | null |
cs.RO cs.LG cs.MA cs.SY eess.SY math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Search and Rescue (SAR) missions in remote environments often employ
autonomous multi-robot systems that learn, plan, and execute a combination of
local single-robot control actions, group primitives, and global
mission-oriented coordination and collaboration. Often, SAR coordination
strategies are manually designed by human experts who can remotely control the
multi-robot system and enable semi-autonomous operations. However, in remote
environments where connectivity is limited and human intervention is often not
possible, decentralized collaboration strategies are needed for
fully-autonomous operations. Nevertheless, decentralized coordination may be
ineffective in adversarial environments due to sensor noise, actuation faults,
or manipulation of inter-agent communication data. In this paper, we propose an
algorithmic approach based on adversarial multi-agent reinforcement learning
(MARL) that allows robots to efficiently coordinate their strategies in the
presence of adversarial inter-agent communications. In our setup, the objective
of the multi-robot team is to discover targets strategically in an
obstacle-strewn geographical area by minimizing the average time needed to find
the targets. It is assumed that the robots have no prior knowledge of the
target locations, and they can interact with only a subset of neighboring
robots at any time. Based on the centralized training with decentralized
execution (CTDE) paradigm in MARL, we utilize a hierarchical meta-learning
framework to learn dynamic team-coordination modalities and discover emergent
team behavior under complex cooperative-competitive scenarios. The
effectiveness of our approach is demonstrated on a collection of prototype
grid-world environments with different specifications of benign and adversarial
agents, target locations, and agent rewards.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 08:13:29 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Rahman",
"Aowabin",
""
],
[
"Bhattacharya",
"Arnab",
""
],
[
"Ramachandran",
"Thiagarajan",
""
],
[
"Mukherjee",
"Sayak",
""
],
[
"Sharma",
"Himanshu",
""
],
[
"Fujimoto",
"Ted",
""
],
[
"Chatterjee",
"Samrat",
""
]
] |
new_dataset
| 0.999659 |
2212.10131
|
Rodrigo Bruno
|
Rodrigo Bruno, Serhii Ivanenko, Sutao Wang, Jovan Stevanovic, Vojin
Jovanovic
|
Graalvisor: Virtualized Polyglot Runtime for Serverless Applications
| null | null | null | null |
cs.DC cs.PL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Serverless is a new attractive computing model that offers great scalability
and elasticity, taking the infrastructure management burden away from users,
and enabling a pay-as-you-use billing model. As a result, Serverless is
becoming increasingly popular, and new use cases have recently been proposed.
Examples include video and image processing, Machine Learning inference and
training, and data analytics. However, Serverless is currently supported by
bloated virtualization stacks composed of a combination of virtual machines
and/or containers, and language runtimes. None of these were designed to host
lightweight and fast-executing Serverless workloads.
To reduce the virtualization stack bloat, we propose Graalvisor, a
virtualized polyglot language runtime capable of running multiple concurrent
functions with minimal overhead. Graalvisor is designed to efficiently run
lightweight and short-running Serverless functions, each running in a tiny
execution environment that launches in under 500 us. A single Graalvisor instance
can run thousands of functions written in many different languages. By
virtualizing a single runtime across many function invocations, Graalvisor
reduces virtualization stack redundancy, resulting in lower memory consumption
and fewer cold starts. On a set of established Serverless functions, Graalvisor
improves the throughput per memory (ops/sec/GB) on average by 170$\times$ for
Java functions, 26.6$\times$ for JavaScript functions, and 2.07$\times$ for
Python functions. When reproducing a public Serverless trace, Graalvisor
reduces the overall memory footprint by 83% and reduces the tail latency (99th
percentile) by 68%.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 09:58:39 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Bruno",
"Rodrigo",
""
],
[
"Ivanenko",
"Serhii",
""
],
[
"Wang",
"Sutao",
""
],
[
"Stevanovic",
"Jovan",
""
],
[
"Jovanovic",
"Vojin",
""
]
] |
new_dataset
| 0.99875 |
2212.10190
|
Xun Wang Dr
|
Xun Wang, Tao Ge, Allen Mao, Yuki Li, Furu Wei, Si-Qing Chen
|
Pay Attention to Your Tone: Introducing a New Dataset for Polite
Language Rewrite
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce \textsc{PoliteRewrite} -- a dataset for polite language rewrite
which is a novel sentence rewrite task. Compared with previous text style
transfer tasks that can be mostly addressed by slight token- or phrase-level
edits, polite language rewrite requires deep understanding and extensive
sentence-level edits over an offensive and impolite sentence to deliver the
same message euphemistically and politely, which is more challenging -- not
only for NLP models but also for human annotators to rewrite with effort. To
alleviate the human effort for efficient annotation, we first propose a novel
annotation paradigm by a collaboration of human annotators and GPT-3.5 to
annotate \textsc{PoliteRewrite}. The released dataset has 10K polite sentence
rewrites annotated collaboratively by GPT-3.5 and humans, which can be used as a
gold standard for training, validation, and testing; and 100K high-quality polite
sentence rewrites by GPT-3.5 without human review. We hope this work (the
dataset (10K+100K) will be released soon) can contribute to research on more
challenging sentence rewriting, and provoke further thought on resource
annotation paradigms assisted by large-scale pretrained models.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 12:02:34 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Wang",
"Xun",
""
],
[
"Ge",
"Tao",
""
],
[
"Mao",
"Allen",
""
],
[
"Li",
"Yuki",
""
],
[
"Wei",
"Furu",
""
],
[
"Chen",
"Si-Qing",
""
]
] |
new_dataset
| 0.999805 |
2212.10265
|
Martin Schwartz
|
Martin Schwartz, Philippe Ciais, Catherine Ottl\'e, Aurelien De
Truchis, Cedric Vega, Ibrahim Fayad, Martin Brandt, Rasmus Fensholt, Nicolas
Baghdadi, Fran\c{c}ois Morneau, David Morin, Dominique Guyon, Sylvia Dayau,
Jean-Pierre Wigneron
|
High-resolution canopy height map in the Landes forest (France) based on
GEDI, Sentinel-1, and Sentinel-2 data with a deep learning approach
|
39 pages, 16 figures + supplementary contents
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In intensively managed forests in Europe, where forests are divided into
stands of small size and may show heterogeneity within stands, a high spatial
resolution (10 - 20 meters) is arguably needed to capture the differences in
canopy height. In this work, we developed a deep learning model based on
multi-stream remote sensing measurements to create a high-resolution canopy
height map over the "Landes de Gascogne" forest in France, a large maritime
pine plantation of 13,000 km$^2$ with flat terrain and intensive management.
This area is characterized by even-aged and mono-specific stands, of a typical
length of a few hundred meters, harvested every 35 to 50 years. Our deep
learning U-Net model uses multi-band images from Sentinel-1 and Sentinel-2 with
composite time averages as input to predict tree height derived from GEDI
waveforms. The evaluation is performed with external validation data from
forest inventory plots and a stereo 3D reconstruction model based on Skysat
imagery available at specific locations. We trained seven different U-Net
models based on combinations of Sentinel-1 and Sentinel-2 bands to evaluate
the importance of each instrument in the dominant height retrieval. The model
outputs allow us to generate a 10 m resolution canopy height map of the whole
"Landes de Gascogne" forest area for 2020 with a mean absolute error of 2.02 m
on the test dataset. The best predictions were obtained using all available
satellite layers from Sentinel-1 and Sentinel-2 but using only one satellite
source also provided good predictions. For all validation datasets in
coniferous forests, our model showed better metrics than previous canopy height
models available in the same region.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 14:14:37 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Schwartz",
"Martin",
""
],
[
"Ciais",
"Philippe",
""
],
[
"Ottlé",
"Catherine",
""
],
[
"De Truchis",
"Aurelien",
""
],
[
"Vega",
"Cedric",
""
],
[
"Fayad",
"Ibrahim",
""
],
[
"Brandt",
"Martin",
""
],
[
"Fensholt",
"Rasmus",
""
],
[
"Baghdadi",
"Nicolas",
""
],
[
"Morneau",
"François",
""
],
[
"Morin",
"David",
""
],
[
"Guyon",
"Dominique",
""
],
[
"Dayau",
"Sylvia",
""
],
[
"Wigneron",
"Jean-Pierre",
""
]
] |
new_dataset
| 0.997946 |
2212.10305
|
Haofeng Li
|
Wei Lou, Haofeng Li, Guanbin Li, Xiaoguang Han, Xiang Wan
|
Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework
|
IEEE TMI 2022, Released code: https://github.com/lhaof/NuSeg
| null |
10.1109/TMI.2022.3221666
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently deep neural networks, which require a large amount of annotated
samples, have been widely applied in nuclei instance segmentation of H\&E
stained pathology images. However, it is inefficient and unnecessary to label
all pixels for a dataset of nuclei images which usually contain similar and
redundant patterns. Although unsupervised and semi-supervised learning methods
have been studied for nuclei segmentation, very few works have delved into the
selective labeling of samples to reduce the workload of annotation. Thus, in
this paper, we propose a novel full nuclei segmentation framework that chooses
only a few image patches to be annotated, augments the training set from the
selected samples, and achieves nuclei segmentation in a semi-supervised manner.
In the proposed framework, we first develop a novel consistency-based patch
selection method to determine which image patches are the most beneficial to
the training. Then we introduce a conditional single-image GAN with a
component-wise discriminator, to synthesize more training samples. Lastly, our
proposed framework trains an existing segmentation model with the above
augmented samples. The experimental results show that our proposed method could
obtain the same-level performance as a fully-supervised baseline by annotating
less than 5% pixels on some benchmarks.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 14:53:26 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Lou",
"Wei",
""
],
[
"Li",
"Haofeng",
""
],
[
"Li",
"Guanbin",
""
],
[
"Han",
"Xiaoguang",
""
],
[
"Wan",
"Xiang",
""
]
] |
new_dataset
| 0.986755 |
2212.10388
|
Peng Gao
|
Peng Gao, Xiaoyuan Liu, Edward Choi, Sibo Ma, Xinyu Yang, Zhengjie Ji,
Zilin Zhang, Dawn Song
|
ThreatKG: A Threat Knowledge Graph for Automated Open-Source Cyber
Threat Intelligence Gathering and Management
| null | null | null | null |
cs.CR cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the increased adoption of open-source cyber threat intelligence
(OSCTI) for acquiring knowledge about cyber threats, little effort has been
made to harvest knowledge from a large number of unstructured OSCTI reports
available in the wild (e.g., security articles, threat reports). These reports
provide comprehensive threat knowledge in a variety of entities (e.g., IOCs,
threat actors, TTPs) and relations, which, however, are hard to gather due to
diverse report formats, large report quantities, and complex structures and
nuances in the natural language report text.
To bridge the gap, we propose ThreatKG, a system for automated open-source
cyber threat knowledge gathering and management. ThreatKG automatically
collects a large number of OSCTI reports from various sources, extracts
high-fidelity threat knowledge, constructs a threat knowledge graph, and
updates the knowledge graph by continuously ingesting new knowledge. To address
multiple challenges, ThreatKG provides: (1) a hierarchical ontology for
modeling a variety of threat knowledge entities and relations; (2) an accurate
deep learning-based pipeline for threat knowledge extraction; (3) a scalable
and extensible system architecture for threat knowledge graph construction,
persistence, updating, and exploration. Evaluations on a large number of
reports demonstrate the effectiveness of ThreatKG in threat knowledge gathering
and management.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 16:13:59 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Gao",
"Peng",
""
],
[
"Liu",
"Xiaoyuan",
""
],
[
"Choi",
"Edward",
""
],
[
"Ma",
"Sibo",
""
],
[
"Yang",
"Xinyu",
""
],
[
"Ji",
"Zhengjie",
""
],
[
"Zhang",
"Zilin",
""
],
[
"Song",
"Dawn",
""
]
] |
new_dataset
| 0.999569 |
2212.10411
|
Daniel Felipe Silva Santos
|
Daniel F. S. Santos, Rafael G. Pires, Leandro A. Passos, and Jo\~ao P.
Papa
|
DDIPNet and DDIPNet+: Discriminant Deep Image Prior Networks for Remote
Sensing Image Classification
|
Published in: 2021 IEEE International Geoscience and Remote Sensing
Symposium IGARSS
| null |
10.1109/IGARSS47720.2021.9554277
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on remote sensing image classification significantly impacts
essential human routine tasks such as urban planning and agriculture. Nowadays,
the rapid advance in technology and the availability of many high-quality
remote sensing images create a demand for reliable automation methods. The
current paper proposes two novel deep learning-based architectures for image
classification purposes, i.e., the Discriminant Deep Image Prior Network and
the Discriminant Deep Image Prior Network+, which combine Deep Image Prior and
Triplet Networks learning strategies. Experiments conducted over three
well-known public remote sensing image datasets achieved state-of-the-art
results, evidencing the effectiveness of using deep image priors for remote
sensing image classification.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 16:39:04 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Santos",
"Daniel F. S.",
""
],
[
"Pires",
"Rafael G.",
""
],
[
"Passos",
"Leandro A.",
""
],
[
"Papa",
"João P.",
""
]
] |
new_dataset
| 0.992922 |
2212.10423
|
Yucheng Zhou
|
Yucheng Zhou, Tao Shen, Xiubo Geng, Chongyang Tao, Guodong Long, Can
Xu, Daxin Jiang
|
Fine-Grained Distillation for Long Document Retrieval
|
13 pages, 5 figures, 5 tables
| null | null | null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Long document retrieval aims to fetch query-relevant documents from a
large-scale collection, where knowledge distillation has become the de facto
approach to improving a retriever by mimicking a heterogeneous yet powerful
cross-encoder.
However, in contrast to passages or sentences, retrieval on long documents
suffers from the scope hypothesis that a long document may cover multiple
topics. This maximizes their structural heterogeneity and poses a
granular-mismatch issue, leading to an inferior distillation efficacy. In this
work, we propose a new learning framework, fine-grained distillation (FGD), for
long-document retrievers. While preserving the conventional dense retrieval
paradigm, it first produces global-consistent representations crossing
different fine granularity and then applies multi-granular aligned distillation
merely during training. In experiments, we evaluate our framework on two
long-document retrieval benchmarks, which show state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 17:00:36 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Zhou",
"Yucheng",
""
],
[
"Shen",
"Tao",
""
],
[
"Geng",
"Xiubo",
""
],
[
"Tao",
"Chongyang",
""
],
[
"Long",
"Guodong",
""
],
[
"Xu",
"Can",
""
],
[
"Jiang",
"Daxin",
""
]
] |
new_dataset
| 0.989222 |
2212.10522
|
Yanran Chen
|
Yanran Chen and Steffen Eger
|
Transformers Go for the LOLs: Generating (Humourous) Titles from
Scientific Abstracts End-to-End
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We consider the end-to-end abstract-to-title generation problem, exploring
seven recent transformer based models (including ChatGPT) fine-tuned on more
than 30k abstract-title pairs from NLP and machine learning venues. As an
extension, we also consider the harder problem of generating humorous paper
titles. For the latter, we compile the first large-scale humor annotated
dataset for scientific papers in the NLP/ML domains, comprising almost 2.5k
titles. We evaluate all models using human and automatic metrics. Our human
evaluation suggests that our best end-to-end system performs similarly to human
authors (but arguably slightly worse). Generating funny titles is more
difficult, however, and our automatic systems clearly underperform relative to
humans and often learn dataset artefacts of humor. Finally, ChatGPT, without
any fine-tuning, performs on the level of our best fine-tuned system.
|
[
{
"version": "v1",
"created": "Tue, 20 Dec 2022 18:37:11 GMT"
}
] | 2022-12-21T00:00:00 |
[
[
"Chen",
"Yanran",
""
],
[
"Eger",
"Steffen",
""
]
] |
new_dataset
| 0.999715 |
2008.13670
|
Matthew Petroff
|
Matthew A. Petroff
|
A Square Equal-area Map Projection with Low Angular Distortion, Minimal
Cusps, and Closed-form Solutions
|
17 pages, 6 figures, 1 table; corrections to Appendix A
| null |
10.1145/3460521
| null |
cs.GR physics.geo-ph
|
http://creativecommons.org/licenses/by/4.0/
|
A novel square equal-area map projection is proposed. The projection combines
closed-form forward and inverse solutions with relatively low angular
distortion and minimal cusps, a combination of properties not manifested by any
previously published square equal-area projection. Thus, the new projection has
lower angular distortion than any previously published square equal-area
projection with a closed-form solution. Utilizing a quincuncial arrangement,
the new projection places the north pole at the center of the square and
divides the south pole between its four corners; the projection can be
seamlessly tiled. The existence of closed-form solutions makes the projection
suitable for real-time visualization applications, both in cartography and in
other areas, such as for the display of panoramic images.
|
[
{
"version": "v1",
"created": "Mon, 31 Aug 2020 15:20:55 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Aug 2021 00:53:17 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Dec 2022 22:14:37 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Petroff",
"Matthew A.",
""
]
] |
new_dataset
| 0.997353 |
2011.04609
|
Jacob Peplinski
|
Jacob Peplinski, Joel Shor, Sachin Joglekar, Jake Garrison, Shwetak
Patel
|
FRILL: A Non-Semantic Speech Embedding for Mobile Devices
|
Accepted to Interspeech 2021
|
Proc. Interspeech 2021
|
10.21437/Interspeech.2021-2070
| null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learned speech representations can drastically improve performance on tasks
with limited labeled data. However, due to their size and complexity, learned
representations have limited utility in mobile settings where run-time
performance can be a significant bottleneck. In this work, we propose a class
of lightweight non-semantic speech embedding models that run efficiently on
mobile devices based on the recently proposed TRILL speech embedding. We
combine novel architectural modifications with existing speed-up techniques to
create embedding models that are fast enough to run in real-time on a mobile
device and exhibit minimal performance degradation on a benchmark of
non-semantic speech tasks. One such model (FRILL) is 32x faster on a Pixel 1
smartphone and 40% the size of TRILL, with an average decrease in accuracy of
only 2%. To our knowledge, FRILL is the highest-quality non-semantic embedding
designed for use on mobile devices. Furthermore, we demonstrate that these
representations are useful for mobile health tasks such as non-speech human
sounds detection and face-masked speech detection. Our models and code are
publicly available.
|
[
{
"version": "v1",
"created": "Mon, 9 Nov 2020 18:07:06 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Apr 2021 06:01:14 GMT"
},
{
"version": "v3",
"created": "Sat, 1 May 2021 04:57:34 GMT"
},
{
"version": "v4",
"created": "Wed, 19 May 2021 23:30:10 GMT"
},
{
"version": "v5",
"created": "Thu, 10 Jun 2021 16:18:35 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Peplinski",
"Jacob",
""
],
[
"Shor",
"Joel",
""
],
[
"Joglekar",
"Sachin",
""
],
[
"Garrison",
"Jake",
""
],
[
"Patel",
"Shwetak",
""
]
] |
new_dataset
| 0.997927 |
2103.13109
|
Peter Mortimer
|
Kai A. Metzger, Peter Mortimer, Hans-Joachim Wuensche
|
A Fine-Grained Dataset and its Efficient Semantic Segmentation for
Unstructured Driving Scenarios
|
Accepted at International Conference on Pattern Recognition 2020
(ICPR). For the associated project page, see
https://www.mucar3.de/icpr2020-tas500/index.html
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research in autonomous driving for unstructured environments suffers from a
lack of semantically labeled datasets compared to its urban counterpart. Urban
and unstructured outdoor environments are challenging due to the varying
lighting and weather conditions during a day and across seasons. In this paper,
we introduce TAS500, a novel semantic segmentation dataset for autonomous
driving in unstructured environments. TAS500 offers fine-grained vegetation and
terrain classes to learn drivable surfaces and natural obstacles in outdoor
scenes effectively. We evaluate the performance of modern semantic segmentation
models with an additional focus on their efficiency. Our experiments
demonstrate the advantages of fine-grained semantic classes to improve the
overall prediction accuracy, especially along the class boundaries. The dataset
and pretrained model are available at mucar3.de/icpr2020-tas500.
|
[
{
"version": "v1",
"created": "Wed, 24 Mar 2021 11:30:43 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Metzger",
"Kai A.",
""
],
[
"Mortimer",
"Peter",
""
],
[
"Wuensche",
"Hans-Joachim",
""
]
] |
new_dataset
| 0.999857 |
2104.06728
|
Ying Guo
|
Xingxing Wei, Ying Guo, Jie Yu
|
Adversarial Sticker: A Stealthy Attack Method in the Physical World
|
accepted by TPAMI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To assess the vulnerability of deep learning in the physical world, recent
works introduce adversarial patches and apply them on different tasks. In this
paper, we propose another kind of adversarial patch: the Meaningful Adversarial
Sticker, a physically feasible and stealthy attack method by using real
stickers existing in our life. Unlike the previous adversarial patches by
designing perturbations, our method manipulates the sticker's pasting position
and rotation angle on the objects to perform physical attacks. Because the
position and rotation angle are less affected by the printing loss and color
distortion, adversarial stickers can keep good attacking performance in the
physical world. Besides, to make adversarial stickers more practical in real
scenes, we conduct attacks in the black-box setting with the limited
information rather than the white-box setting with all the details of threat
models. To effectively solve for the sticker's parameters, we design the Region
based Heuristic Differential Evolution Algorithm, which utilizes the new-found
regional aggregation of effective solutions and the adaptive adjustment
strategy of the evaluation criteria. Our method is comprehensively verified in
the face recognition and then extended to the image retrieval and traffic sign
recognition. Extensive experiments show the proposed method is effective and
efficient in complex physical conditions and has a good generalization for
different tasks.
|
[
{
"version": "v1",
"created": "Wed, 14 Apr 2021 09:32:01 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 15:16:43 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Wei",
"Xingxing",
""
],
[
"Guo",
"Ying",
""
],
[
"Yu",
"Jie",
""
]
] |
new_dataset
| 0.995773 |
2104.08793
|
Aaron Chan
|
Aaron Chan, Jiashu Xu, Boyuan Long, Soumya Sanyal, Tanishq Gupta,
Xiang Ren
|
SalKG: Learning From Knowledge Graph Explanations for Commonsense
Reasoning
|
NeurIPS 2021
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Augmenting pre-trained language models with knowledge graphs (KGs) has
achieved success on various commonsense reasoning tasks. However, for a given
task instance, the KG, or certain parts of the KG, may not be useful. Although
KG-augmented models often use attention to focus on specific KG components, the
KG is still always used, and the attention mechanism is never explicitly taught
which KG components should be used. Meanwhile, saliency methods can measure how
much a KG feature (e.g., graph, node, path) influences the model to make the
correct prediction, thus explaining which KG features are useful. This paper
explores how saliency explanations can be used to improve KG-augmented models'
performance. First, we propose to create coarse (Is the KG useful?) and fine
(Which nodes/paths in the KG are useful?) saliency explanations. Second, to
motivate saliency-based supervision, we analyze oracle KG-augmented models
which directly use saliency explanations as extra inputs for guiding their
attention. Third, we propose SalKG, a framework for KG-augmented models to
learn from coarse and/or fine saliency explanations. Given saliency
explanations created from a task's training set, SalKG jointly trains the model
to predict the explanations, then solve the task by attending to KG features
highlighted by the predicted explanations. On three commonsense QA benchmarks
(CSQA, OBQA, CODAH) and a range of KG-augmented models, we show that SalKG can
yield considerable performance gains -- up to 2.76% absolute improvement on
CSQA.
|
[
{
"version": "v1",
"created": "Sun, 18 Apr 2021 09:59:46 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jul 2021 18:53:44 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Dec 2021 20:00:29 GMT"
},
{
"version": "v4",
"created": "Sat, 15 Jan 2022 06:04:57 GMT"
},
{
"version": "v5",
"created": "Sun, 20 Mar 2022 04:02:52 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Chan",
"Aaron",
""
],
[
"Xu",
"Jiashu",
""
],
[
"Long",
"Boyuan",
""
],
[
"Sanyal",
"Soumya",
""
],
[
"Gupta",
"Tanishq",
""
],
[
"Ren",
"Xiang",
""
]
] |
new_dataset
| 0.999194 |
2105.08383
|
Chuhui Xue
|
Chuhui Xue, Jiaxing Huang, Wenqing Zhang, Shijian Lu, Changhu Wang,
Song Bai
|
I2C2W: Image-to-Character-to-Word Transformers for Accurate Scene Text
Recognition
|
Accepted by special issue Transformer Models in Vision of the
Transactions on Pattern Analysis and Machine Intelligence
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Leveraging the advances of natural language processing, most recent scene
text recognizers adopt an encoder-decoder architecture where text images are
first converted to representative features and then a sequence of characters
via `sequential decoding'. However, scene text images suffer from rich noises
of different sources such as complex background and geometric distortions which
often confuse the decoder and lead to incorrect alignment of visual features at
noisy decoding time steps. This paper presents I2C2W, a novel scene text
recognition technique that is tolerant to geometric and photometric degradation
by decomposing scene text recognition into two inter-connected tasks. The first
task focuses on image-to-character (I2C) mapping which detects a set of
character candidates from images based on different alignments of visual
features in a non-sequential way. The second task tackles character-to-word
(C2W) mapping which recognizes scene text by decoding words from the detected
character candidates. The direct learning from character semantics (instead of
noisy image features) corrects falsely detected character candidates
effectively which improves the final text recognition accuracy greatly.
Extensive experiments over nine public datasets show that the proposed I2C2W
outperforms the state-of-the-art by large margins for challenging scene text
datasets with various curvature and perspective distortions. It also achieves
very competitive recognition performance over multiple normal scene text
datasets.
|
[
{
"version": "v1",
"created": "Tue, 18 May 2021 09:20:58 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Mar 2022 11:04:38 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Dec 2022 02:13:43 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Xue",
"Chuhui",
""
],
[
"Huang",
"Jiaxing",
""
],
[
"Zhang",
"Wenqing",
""
],
[
"Lu",
"Shijian",
""
],
[
"Wang",
"Changhu",
""
],
[
"Bai",
"Song",
""
]
] |
new_dataset
| 0.994482 |
2107.09245
|
Evgeny Manzhosov
|
Evgeny Manzhosov, Adam Hastings, Meghna Pancholi, Ryan Piersma,
Mohamed Tarek Ibn Ziad, Simha Sethumadhavan
|
Revisiting Residue Codes for Modern Memories
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Residue codes have been traditionally used for compute error correction
rather than storage error correction. In this paper, we use these codes for
storage error correction with surprising results. We find that adapting residue
codes to modern memory systems offers a level of error correction comparable to
traditional schemes such as Reed-Solomon with fewer bits of storage. For
instance, our adaptation of residue codes -- MUSE ECC -- can offer ChipKill
protection using approximately 30% fewer bits. We show that the storage gains
can be used to hold metadata needed for emerging security functionality such as
memory tagging or to provide better detection capabilities against Rowhammer
attacks. Our evaluation shows that memory tagging in a MUSE-enabled system
shows a 12% reduction in memory bandwidth utilization while providing the same
level of error correction as a traditional ECC baseline without a noticeable
loss of performance. Thus, our work demonstrates a new, flexible primitive for
co-designing reliability with security and performance.
|
[
{
"version": "v1",
"created": "Tue, 20 Jul 2021 03:35:45 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 14:27:45 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Manzhosov",
"Evgeny",
""
],
[
"Hastings",
"Adam",
""
],
[
"Pancholi",
"Meghna",
""
],
[
"Piersma",
"Ryan",
""
],
[
"Ziad",
"Mohamed Tarek Ibn",
""
],
[
"Sethumadhavan",
"Simha",
""
]
] |
new_dataset
| 0.987234 |
2108.09569
|
Harshil Bhatt
|
Harshil Bhatt, Pranesh G, Samarth Shankar, Shriyash Haralikar
|
Wireless Sensor Networks for Optimisation of Search and Rescue
Management in Floods
| null | null |
10.1109/CONECCT52877.2021.9622534
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel search-and-rescue management method that relies on the
aerial deployment of a Wireless Sensor Network (WSN) for locating victims after
floods. The sensor nodes will collect vital information such as heat signatures
for detecting human presence and location, and the flow of floodwater. The sensor
modules are packed in a portable floating buoy with a user interface to convey
emergency messages to the base station. Sensor nodes are designed for disaster
conditions and cost-effectiveness, and are deployed in the affected region by a
centrifugal dispersion system from a helicopter.
A mobile ad-hoc network is set up by modifying the Low Energy Adaptive
Cluster Hierarchy (LEACH) protocol for greater efficiency and adoption of
multi-hop of Cluster Heads for long-distance communication to Base Station. The
model metrics have been defined considering previous rural floods in India. The
efficiency and power characteristics of the network are compared to other
protocols via simulations. The sensor data from the network makes resource
management, rescue planning and emergency priority more efficient, thus saving
more lives from floods.
|
[
{
"version": "v1",
"created": "Sat, 21 Aug 2021 19:37:01 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Bhatt",
"Harshil",
""
],
[
"G",
"Pranesh",
""
],
[
"Shankar",
"Samarth",
""
],
[
"Haralikar",
"Shriyash",
""
]
] |
new_dataset
| 0.962641 |
2109.07902
|
Jiahua Xu
|
Simon Cousaert, Nikhil Vadgama, Jiahua Xu
|
Token-based Insurance Solutions on Blockchain
| null |
Blockchains and the Token Economy, 2022, pp. 237-260
|
10.1007/978-3-030-95108-5_9
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rising demand for protection against new risks such as loss of
digital assets, novel insurance services and products emerge. In particular,
token-based insurance solutions on blockchain transform the insurance business
by providing cover for new risks and streamlined, (semi-)automated underwriting
and claim processes. In the chapter, we present a general framework of
token-based insurance solutions, delineating their fundamental building blocks
that include core roles, main tokens and assets, as well as key processes and
operations. We describe three major token-based insurance solutions in the
market and compare them in terms of native token functionality, tokenized cover
types, claim assessment process and capital model. Based on the discussion on
the general framework and concrete examples of token-based insurance solutions,
we summarize their advantages and point out their drawbacks. We conclude that
despite being at a nascent stage, the token-based insurance space bears the
promise to unseat the incumbent players with increasingly more action taking
place and more use cases being explored.
|
[
{
"version": "v1",
"created": "Mon, 6 Sep 2021 07:31:38 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Dec 2022 15:03:20 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Cousaert",
"Simon",
""
],
[
"Vadgama",
"Nikhil",
""
],
[
"Xu",
"Jiahua",
""
]
] |
new_dataset
| 0.997599 |
2110.07150
|
Luca Soldaini
|
Benjamin Muller, Luca Soldaini, Rik Koncel-Kedziorski, Eric Lind,
Alessandro Moschitti
|
Cross-Lingual Open-Domain Question Answering with Answer Sentence
Generation
|
AACL 2022 Long Paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open-Domain Generative Question Answering has achieved impressive performance
in English by combining document-level retrieval with answer generation. These
approaches, which we refer to as GenQA, can generate complete sentences,
effectively answering both factoid and non-factoid questions. In this paper, we
extend GenQA to the multilingual and cross-lingual settings. For this purpose,
we first introduce GenTyDiQA, an extension of the TyDiQA dataset with
well-formed and complete answers for Arabic, Bengali, English, Japanese, and
Russian. Based on GenTyDiQA, we design a cross-lingual generative model that
produces full-sentence answers by exploiting passages written in multiple
languages, including languages different from the question. Our cross-lingual
generative system outperforms answer sentence selection baselines for all five
languages and monolingual generative pipelines for three out of five languages
studied.
|
[
{
"version": "v1",
"created": "Thu, 14 Oct 2021 04:36:29 GMT"
},
{
"version": "v2",
"created": "Sun, 22 May 2022 22:10:07 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Dec 2022 05:53:11 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Muller",
"Benjamin",
""
],
[
"Soldaini",
"Luca",
""
],
[
"Koncel-Kedziorski",
"Rik",
""
],
[
"Lind",
"Eric",
""
],
[
"Moschitti",
"Alessandro",
""
]
] |
new_dataset
| 0.999615 |
2112.00933
|
Ju He
|
Ju He, Shuo Yang, Shaokang Yang, Adam Kortylewski, Xiaoding Yuan,
Jie-Neng Chen, Shuai Liu, Cheng Yang, Qihang Yu, Alan Yuille
|
PartImageNet: A Large, High-Quality Dataset of Parts
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is natural to represent objects in terms of their parts. This has the
potential to improve the performance of algorithms for object recognition and
segmentation but can also help for downstream tasks like activity recognition.
Research on part-based models, however, is hindered by the lack of datasets
with per-pixel part annotations. This is partly due to the difficulty and high
cost of annotating object parts so it has rarely been done except for humans
(where there exists a big literature on part-based models). To help address
this problem, we propose PartImageNet, a large, high-quality dataset with part
segmentation annotations. It consists of $158$ classes from ImageNet with
approximately $24,000$ images. PartImageNet is unique because it offers
part-level annotations on a general set of classes including non-rigid,
articulated objects, while having an order of magnitude larger size compared to
existing part datasets (excluding datasets of humans). It can be utilized for
many vision tasks including Object Segmentation, Semantic Part Segmentation,
Few-shot Learning and Part Discovery. We conduct comprehensive experiments
which study these tasks and set up a set of baselines. The dataset and scripts
are released at https://github.com/TACJu/PartImageNet.
|
[
{
"version": "v1",
"created": "Thu, 2 Dec 2021 02:12:03 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2022 06:13:10 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Dec 2022 19:18:33 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"He",
"Ju",
""
],
[
"Yang",
"Shuo",
""
],
[
"Yang",
"Shaokang",
""
],
[
"Kortylewski",
"Adam",
""
],
[
"Yuan",
"Xiaoding",
""
],
[
"Chen",
"Jie-Neng",
""
],
[
"Liu",
"Shuai",
""
],
[
"Yang",
"Cheng",
""
],
[
"Yu",
"Qihang",
""
],
[
"Yuille",
"Alan",
""
]
] |
new_dataset
| 0.998291 |
2202.06247
|
Wenyi Zhang
|
Mengxiao Liu, Yuejun Wei, Zhenyuan Chen, Wenyi Zhang
|
ORBGRAND Is Almost Capacity-Achieving
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decoding via sequentially guessing the error pattern in a received noisy
sequence has received attention recently, and ORBGRAND has been proposed as one
such decoding algorithm that is capable of utilizing the soft information
embedded in the received noisy sequence. An information theoretic study is
conducted for ORBGRAND, and it is shown that the achievable rate of ORBGRAND
using independent and identically distributed random codebooks almost coincides
with the channel capacity, for an additive white Gaussian noise channel under
antipodal input. For finite-length codes, improved guessing schemes motivated
by the information theoretic study are proposed that attain lower error rates
than ORBGRAND, especially in the high signal-to-noise ratio regime.
|
[
{
"version": "v1",
"created": "Sun, 13 Feb 2022 07:55:54 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2022 02:27:12 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Dec 2022 09:11:06 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Liu",
"Mengxiao",
""
],
[
"Wei",
"Yuejun",
""
],
[
"Chen",
"Zhenyuan",
""
],
[
"Zhang",
"Wenyi",
""
]
] |
new_dataset
| 0.958419 |
2203.03540
|
Xi Yang
|
Xi Yang, Aokun Chen, Nima PourNejatian, Hoo Chang Shin, Kaleb E Smith,
Christopher Parisien, Colin Compas, Cheryl Martin, Mona G Flores, Ying Zhang,
Tanja Magoc, Christopher A Harle, Gloria Lipori, Duane A Mitchell, William R
Hogan, Elizabeth A Shenkman, Jiang Bian, Yonghui Wu
|
GatorTron: A Large Clinical Language Model to Unlock Patient Information
from Unstructured Electronic Health Records
|
24 pages, 2 figures, 3 tables
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
There is an increasing interest in developing artificial intelligence (AI)
systems to process and interpret electronic health records (EHRs). Natural
language processing (NLP) powered by pretrained language models is the key
technology for medical AI systems utilizing clinical narratives. However, there
are few clinical language models; the largest model trained in the clinical
domain is comparatively small at 110 million parameters (compared with billions
of parameters in the general domain). It is not clear how large clinical
language models with billions of parameters can help medical AI systems utilize
unstructured EHRs. In this study, we develop from scratch a large clinical
language model - GatorTron - using >90 billion words of text (including >82
billion words of de-identified clinical text) and systematically evaluate it on
5 clinical NLP tasks including clinical concept extraction, medical relation
extraction, semantic textual similarity, natural language inference (NLI), and
medical question answering (MQA). We examine how (1) scaling up the number of
parameters and (2) scaling up the size of the training data could benefit these
NLP tasks. GatorTron models scale up the clinical language model from 110
million to 8.9 billion parameters and improve 5 clinical NLP tasks (e.g., 9.6%
and 9.5% improvement in accuracy for NLI and MQA), which can be applied to
medical AI systems to improve healthcare delivery. The GatorTron models are
publicly available at:
https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.
|
[
{
"version": "v1",
"created": "Wed, 2 Feb 2022 14:28:51 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2022 00:18:40 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Dec 2022 22:20:33 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Yang",
"Xi",
""
],
[
"Chen",
"Aokun",
""
],
[
"PourNejatian",
"Nima",
""
],
[
"Shin",
"Hoo Chang",
""
],
[
"Smith",
"Kaleb E",
""
],
[
"Parisien",
"Christopher",
""
],
[
"Compas",
"Colin",
""
],
[
"Martin",
"Cheryl",
""
],
[
"Flores",
"Mona G",
""
],
[
"Zhang",
"Ying",
""
],
[
"Magoc",
"Tanja",
""
],
[
"Harle",
"Christopher A",
""
],
[
"Lipori",
"Gloria",
""
],
[
"Mitchell",
"Duane A",
""
],
[
"Hogan",
"William R",
""
],
[
"Shenkman",
"Elizabeth A",
""
],
[
"Bian",
"Jiang",
""
],
[
"Wu",
"Yonghui",
""
]
] |
new_dataset
| 0.999208 |
2203.08496
|
Kojiro Tanaka
|
Kojiro Tanaka, Yuichi Kato, Akito Mizuno, Masahiko Mikawa, Makoto
Fujisawa
|
Dynamic Grass Color Scale Display Technique Based on Grass Length for
Green Landscape-Friendly Animation Display
|
17 pages
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, public displays such as liquid crystal displays (LCDs) are often
used in urban green spaces; however, such display devices can spoil the green
landscape of urban green spaces because they look like artificial materials. We
previously proposed a green landscape-friendly grass animation display method
that dynamically controls grass color pixel by pixel. The grass color can be
changed by varying the length of green grass within yellow grass, and the grass
animation display can play simple animations using grayscale images. In the
previous research, the color scale was mapped to the green grass length
subjectively; however, that method could not display grass colors corresponding
to the color scale based on objective evaluations. Here, we introduce a dynamic
grass color scale display technique based on grass length. In this paper, we
developed a grass color scale setting procedure that maps the grass length to a
five-level color scale through image processing. An outdoor experiment with
this procedure showed that the color scale can be made to correspond to the
green grass length for a given viewpoint. After the experiments, we
demonstrated a grass animation display that shows animations with the color
scale using the experimental results.
|
[
{
"version": "v1",
"created": "Wed, 16 Mar 2022 09:40:45 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 04:40:30 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Tanaka",
"Kojiro",
""
],
[
"Kato",
"Yuichi",
""
],
[
"Mizuno",
"Akito",
""
],
[
"Mikawa",
"Masahiko",
""
],
[
"Fujisawa",
"Makoto",
""
]
] |
new_dataset
| 0.968381 |
2206.08010
|
Sigal Raab
|
Sigal Raab, Inbal Leibovitch, Peizhuo Li, Kfir Aberman, Olga
Sorkine-Hornung, Daniel Cohen-Or
|
MoDi: Unconditional Motion Synthesis from Diverse Data
|
Video: https://youtu.be/O1sVzwrsNUg, Project page:
https://sigal-raab.github.io/MoDi, Code: https://github.com/sigal-raab/MoDi
| null | null | null |
cs.GR cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of neural networks has revolutionized the field of motion
synthesis. Yet, learning to unconditionally synthesize motions from a given
distribution remains challenging, especially when the motions are highly
diverse. In this work, we present MoDi -- a generative model trained in an
unsupervised setting from an extremely diverse, unstructured and unlabeled
dataset. During inference, MoDi can synthesize high-quality, diverse motions.
Despite the lack of any structure in the dataset, our model yields a
well-behaved and highly structured latent space, which can be semantically
clustered, constituting a strong motion prior that facilitates various
applications including semantic editing and crowd simulation. In addition, we
present an encoder that inverts real motions into MoDi's natural motion
manifold, issuing solutions to various ill-posed challenges such as completion
from prefix and spatial editing. Our qualitative and quantitative experiments
show that MoDi outperforms recent state-of-the-art techniques. Code
and trained models are available at https://sigal-raab.github.io/MoDi.
|
[
{
"version": "v1",
"created": "Thu, 16 Jun 2022 09:06:25 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2022 17:00:02 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Dec 2022 08:27:37 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Raab",
"Sigal",
""
],
[
"Leibovitch",
"Inbal",
""
],
[
"Li",
"Peizhuo",
""
],
[
"Aberman",
"Kfir",
""
],
[
"Sorkine-Hornung",
"Olga",
""
],
[
"Cohen-Or",
"Daniel",
""
]
] |
new_dataset
| 0.999758 |
2207.03677
|
Haoran You
|
Haoran You, Baopu Li, Zhanyi Sun, Xu Ouyang, Yingyan Lin
|
SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via
Jointly Architecture Searching and Parameter Pruning
|
Accepted by ECCV 2022
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Neural architecture search (NAS) has demonstrated amazing success in
searching for efficient deep neural networks (DNNs) from a given supernet. In
parallel, the lottery ticket hypothesis has shown that DNNs contain small
subnetworks that can be trained from scratch to achieve a comparable or higher
accuracy than original DNNs. As such, it is currently a common practice to
develop efficient DNNs via a pipeline of first search and then prune.
Nevertheless, doing so often requires a search-train-prune-retrain process and
thus incurs prohibitive computational cost. In this paper, we discover for the first
time that both efficient DNNs and their lottery subnetworks (i.e., lottery
tickets) can be directly identified from a supernet, which we term as
SuperTickets, via a two-in-one training scheme with jointly architecture
searching and parameter pruning. Moreover, we develop a progressive and unified
SuperTickets identification strategy that allows the connectivity of
subnetworks to change during supernet training, achieving better accuracy and
efficiency trade-offs than conventional sparse training. Finally, we evaluate
whether such identified SuperTickets drawn from one task can transfer well to
other tasks, validating their potential of handling multiple tasks
simultaneously. Extensive experiments and ablation studies on three tasks and
four benchmark datasets validate that our proposed SuperTickets achieve better
accuracy-efficiency trade-offs than both typical NAS and pruning pipelines,
with or without retraining. Codes and pretrained models are
available at https://github.com/RICE-EIC/SuperTickets.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2022 03:44:34 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2022 07:07:34 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Sep 2022 04:34:42 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Dec 2022 03:06:16 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"You",
"Haoran",
""
],
[
"Li",
"Baopu",
""
],
[
"Sun",
"Zhanyi",
""
],
[
"Ouyang",
"Xu",
""
],
[
"Lin",
"Yingyan",
""
]
] |
new_dataset
| 0.990568 |
2207.10409
|
Fatih Cagatay Akyon
|
Fatih Cagatay Akyon, Erdem Akagunduz, Sinan Onur Altinuc, Alptekin
Temizel
|
Sequence Models for Drone vs Bird Classification
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drone detection has become an essential task in object detection as drone
costs have decreased and drone technology has improved. It is, however,
difficult to detect distant drones when there is weak contrast, long range, and
low visibility. In this work, we propose several sequence classification
architectures to reduce the detected false-positive ratio of drone tracks.
Moreover, we propose a new drone vs. bird sequence classification dataset to
train and evaluate the proposed architectures. 3D CNN, LSTM, and Transformer
based sequence classification architectures have been trained on the proposed
dataset to show the effectiveness of the proposed idea. As experiments show,
using sequence information, bird classification and overall F1 scores can be
increased by up to 73% and 35%, respectively. Among all sequence classification
models, the R(2+1)D-based fully convolutional model yields the best transfer
learning and fine-tuning results.
|
[
{
"version": "v1",
"created": "Thu, 21 Jul 2022 11:00:44 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 17:22:49 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Akyon",
"Fatih Cagatay",
""
],
[
"Akagunduz",
"Erdem",
""
],
[
"Altinuc",
"Sinan Onur",
""
],
[
"Temizel",
"Alptekin",
""
]
] |
new_dataset
| 0.994885 |
2207.12362
|
Leonardo Bonati
|
Leonardo Bonati, Michele Polese, Salvatore D'Oro, Stefano Basagni,
Tommaso Melodia
|
OpenRAN Gym: AI/ML Development, Data Collection, and Testing for O-RAN
on PAWR Platforms
|
12 pages, 8 figures, 4 tables
|
Computer Networks 2023
|
10.1016/j.comnet.2022.109502
| null |
cs.NI cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open Radio Access Network (RAN) architectures will enable interoperability,
openness and programmable data-driven control in next generation cellular
networks. However, developing and testing efficient solutions that generalize
across heterogeneous cellular deployments and scales, and that optimize network
performance in such diverse environments is a complex task that is still
largely unexplored. In this paper we present OpenRAN Gym, a unified, open, and
O-RAN-compliant experimental toolbox for data collection, design, prototyping
and testing of end-to-end data-driven control solutions for next generation
Open RAN systems. OpenRAN Gym extends and combines into a unique solution
several software frameworks for data collection of RAN statistics and RAN
control, and a lightweight O-RAN near-real-time RAN Intelligent Controller
(RIC) tailored to run on experimental wireless platforms. We first provide an
overview of the various architectural components of OpenRAN Gym and describe
how it is used to collect data and design, train and test artificial
intelligence and machine learning O-RAN-compliant applications (xApps) at
scale. We then describe in detail how to test the developed xApps on
softwarized RANs and provide an example of two xApps developed with OpenRAN Gym
that are used to control a network with 7 base stations and 42 users deployed
on the Colosseum testbed. Finally, we show how solutions developed with OpenRAN
Gym on Colosseum can be exported to real-world, heterogeneous wireless
platforms, such as the Arena testbed and the POWDER and COSMOS platforms of the
PAWR program. OpenRAN Gym and its software components are open-source and
publicly-available to the research community. By guiding the readers through
running experiments with OpenRAN Gym, we aim at providing a key reference for
researchers and practitioners working on experimental Open RAN systems.
|
[
{
"version": "v1",
"created": "Mon, 25 Jul 2022 17:22:25 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2022 15:24:34 GMT"
},
{
"version": "v3",
"created": "Sat, 17 Dec 2022 21:13:51 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Bonati",
"Leonardo",
""
],
[
"Polese",
"Michele",
""
],
[
"D'Oro",
"Salvatore",
""
],
[
"Basagni",
"Stefano",
""
],
[
"Melodia",
"Tommaso",
""
]
] |
new_dataset
| 0.998623 |
2208.08227
|
Arjun Guha
|
Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna
Phipps-Costin, Donald Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane
Anderson, Molly Q Feldman, Arjun Guha, Michael Greenberg, Abhinav Jangda
|
MultiPL-E: A Scalable and Extensible Approach to Benchmarking Neural
Code Generation
| null | null | null | null |
cs.LG cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models have demonstrated the ability to generate both natural
language and programming language text. Such models open up the possibility of
multi-language code generation: could code generation models generalize
knowledge from one language to another? Although contemporary code generation
models can generate semantically correct Python code, little is known about
their abilities with other languages. We propose MultiPL-E, a system for
translating unit test-driven code generation benchmarks to new languages. We
create the first massively multilingual code generation benchmark by using
MultiPL-E to translate two popular Python code generation benchmarks to 18
additional programming languages.
We use MultiPL-E to extend the HumanEval benchmark and MBPP benchmark to 18
languages that encompass a range of programming paradigms and popularity. Using
these new parallel benchmarks, we evaluate the multi-language performance of
three state-of-the-art code generation models: Codex, CodeGen, and InCoder. We
find that Codex matches or even exceeds its performance on Python for several
other languages. The range of programming languages represented in MultiPL-E
allows us to explore the impact of language frequency and language features on
model performance. Finally, the MultiPL-E approach of compiling code generation
benchmarks to new programming languages is both scalable and extensible, making
it straightforward to evaluate new models, benchmarks, and languages.
|
[
{
"version": "v1",
"created": "Wed, 17 Aug 2022 11:16:52 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Aug 2022 01:12:49 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Nov 2022 05:48:57 GMT"
},
{
"version": "v4",
"created": "Mon, 19 Dec 2022 10:30:12 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Cassano",
"Federico",
""
],
[
"Gouwar",
"John",
""
],
[
"Nguyen",
"Daniel",
""
],
[
"Nguyen",
"Sydney",
""
],
[
"Phipps-Costin",
"Luna",
""
],
[
"Pinckney",
"Donald",
""
],
[
"Yee",
"Ming-Ho",
""
],
[
"Zi",
"Yangtian",
""
],
[
"Anderson",
"Carolyn Jane",
""
],
[
"Feldman",
"Molly Q",
""
],
[
"Guha",
"Arjun",
""
],
[
"Greenberg",
"Michael",
""
],
[
"Jangda",
"Abhinav",
""
]
] |
new_dataset
| 0.976049 |
2211.02069
|
Avia Efrat
|
Avia Efrat, Or Honovich, Omer Levy
|
LMentry: A Language Model Benchmark of Elementary Language Tasks
|
minor results updates
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
As the performance of large language models rapidly improves, benchmarks are
getting larger and more complex as well. We present LMentry, a benchmark that
avoids this "arms race" by focusing on a compact set of tasks that are trivial
to humans, e.g. writing a sentence containing a specific word, identifying
which words in a list belong to a specific category, or choosing which of two
words is longer. LMentry is specifically designed to provide quick and
interpretable insights into the capabilities and robustness of large language
models. Our experiments reveal a wide variety of failure cases that, while
immediately obvious to humans, pose a considerable challenge for large language
models, including OpenAI's latest 175B-parameter instruction-tuned model,
TextDavinci002. LMentry complements contemporary evaluation approaches of large
language models, providing a quick, automatic, and easy-to-run "unit test",
without resorting to large benchmark suites of complex tasks.
|
[
{
"version": "v1",
"created": "Thu, 3 Nov 2022 18:01:12 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 10:53:46 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Efrat",
"Avia",
""
],
[
"Honovich",
"Or",
""
],
[
"Levy",
"Omer",
""
]
] |
new_dataset
| 0.999601 |
2211.04815
|
Yang Li
|
Yang Li, Shixin Zhu, Edgar Mart\'inez-Moro
|
The hull of two classical propagation rules and their applications
|
31 pages, 6 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we study and determine the dimensions of Euclidean and
Hermitian hulls of two classical propagation rules, namely, the direct sum
construction and the $(\mathbf{u},\mathbf{u+v})$-construction. Some new
criteria for the resulting codes derived from these two propagation rules being
self-dual, self-orthogonal, or linear complementary dual (LCD) codes are given.
As an application, we construct some linear codes with prescribed hull
dimensions, many new binary, ternary Euclidean formally self-dual (FSD) LCD
codes, and quaternary Hermitian FSD LCD codes. Some new even-like, odd-like,
Euclidean and Hermitian self-orthogonal codes are also obtained. Many of
these codes are also (almost) optimal according to the Database maintained by
Markus Grassl. Our methods contribute positively to improve the lower bounds on
the minimum distance of known LCD codes.
|
[
{
"version": "v1",
"created": "Wed, 9 Nov 2022 11:29:41 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 11:12:03 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Li",
"Yang",
""
],
[
"Zhu",
"Shixin",
""
],
[
"Martínez-Moro",
"Edgar",
""
]
] |
new_dataset
| 0.993485 |
2211.06869
|
Nuo Chen
|
Nuo Chen, Yan Wang, Haiyun Jiang, Deng Cai, Ziyang Chen, Longyue Wang
and Jia Li
|
What would Harry say? Building Dialogue Agents for Characters in a Story
|
14 pages
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have a Christmas gift for Harry Potter fans all over the world. In this
paper, we present Harry Potter Dialogue (HPD), a dataset that helps train Harry
Potter-like dialogue agents. Such a task is typically viewed as a variant of
personalized dialogue agents, but they differ significantly in three respects:
1) Harry lived in a virtual world of wizards, thus, real-world commonsense may
not apply to Harry's conversations; 2) Harry's behavior is strongly linked to
background information in conversations: the scene, its attributes and its
relationship to other speakers; and 3) Such backgrounds are dynamically altered
as the storyline goes on. The HPD dataset, as the first dataset to facilitate
the study of dialogue agent construction for characters within a story,
provides rich contextual information about each dialogue session such as
scenes, character attributes, and relations. More importantly, all the
background information will change over the course of the story. In addition,
HPD could support both dialogue generation and retrieval tasks. We evaluate
baselines such as Dialog-GPT and BOB to determine the extent to which they can
generate Harry Potter-like responses. The experimental results disappoint us in
that although the generated responses are fluent, they still seem out of
character for Harry. Besides, we validate the currently most robust dialogue
agent, ChatGPT, which also fails to generate plausible Harry Potter-like
responses in some cases. Our results suggest that there is much scope for future
research.
|
[
{
"version": "v1",
"created": "Sun, 13 Nov 2022 10:16:39 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2022 14:32:21 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Dec 2022 03:18:36 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Chen",
"Nuo",
""
],
[
"Wang",
"Yan",
""
],
[
"Jiang",
"Haiyun",
""
],
[
"Cai",
"Deng",
""
],
[
"Chen",
"Ziyang",
""
],
[
"Wang",
"Longyue",
""
],
[
"Li",
"Jia",
""
]
] |
new_dataset
| 0.999769 |
2212.02340
|
Xi Zhao
|
Xi Zhao, Wei Feng, Zheng Zhang, Jingjing Lv, Xin Zhu, Zhangang Lin,
Jinghe Hu, Jingping Shao
|
CBNet: A Plug-and-Play Network for Segmentation-based Scene Text
Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, segmentation-based methods are quite popular in scene text
detection, which mainly contain two steps: text kernel segmentation and
expansion. However, the segmentation process only considers each pixel
independently, and the expansion process is difficult to achieve a favorable
accuracy-speed trade-off. In this paper, we propose a Context-aware and
Boundary-guided Network (CBN) to tackle these problems. In CBN, a basic text
detector is firstly used to predict initial segmentation results. Then, we
propose a context-aware module to enhance text kernel feature representations,
which considers both global and local contexts. Finally, we introduce a
boundary-guided module to expand enhanced text kernels adaptively with only the
pixels on the contours, which not only obtains accurate text boundaries but
also keeps high speed, especially on high-resolution output maps. In
particular, with a lightweight backbone, the basic detector equipped with our
proposed CBN achieves state-of-the-art results on several popular benchmarks,
and our proposed CBN can be plugged into several segmentation-based methods.
Code will be available on https://github.com/XiiZhao/cbn.pytorch.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2022 15:15:27 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 06:03:42 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Zhao",
"Xi",
""
],
[
"Feng",
"Wei",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Lv",
"Jingjing",
""
],
[
"Zhu",
"Xin",
""
],
[
"Lin",
"Zhangang",
""
],
[
"Hu",
"Jinghe",
""
],
[
"Shao",
"Jingping",
""
]
] |
new_dataset
| 0.999682 |
2212.05743
|
Kangcheng Liu
|
Kangcheng Liu, Huosen Ou
|
A Light-Weight LiDAR-Inertial SLAM System with Loop Closing
|
ICARM 2022
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we propose a lightweight integrated LiDAR-Inertial SLAM system
with high efficiency and a great loop closure capacity. We found that the
current State-of-the-art LiDAR-Inertial SLAM system has poor performance in
loop closure. The LiDAR-Inertial SLAM system often fails due to large
drift and suffers from limited efficiency when faced with large-scale
circumstances. In this work, firstly, to improve the speed of the whole
LiDAR-Inertial SLAM system, we have proposed a new data structure of the sparse
voxel-hashing to enhance the efficiency of the LiDAR-Inertial SLAM system.
Secondly, to improve the point cloud-based localization performance, we have
integrated the loop closure algorithms to improve the localization performance.
Extensive experiments on the real-scene large-scale complicated circumstances
demonstrate the great effectiveness and robustness of the proposed
LiDAR-Inertial SLAM system.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 07:43:58 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 14:14:13 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Liu",
"Kangcheng",
""
],
[
"Ou",
"Huosen",
""
]
] |
new_dataset
| 0.998934 |
2212.06206
|
Khoa Vo Ho Viet
|
Khoa Vo, Kashu Yamazaki, Phong X. Nguyen, Phat Nguyen, Khoa Luu, Ngan
Le
|
Contextual Explainable Video Representation: Human Perception-based
Understanding
|
Accepted in Asilomar Conference 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video understanding is a growing field and a subject of intense research,
which includes many interesting tasks to understanding both spatial and
temporal information, e.g., action detection, action recognition, video
captioning, and video retrieval. One of the most challenging problems in video
understanding is feature extraction, i.e., extracting a contextual
visual representation from a given untrimmed video, due to the long and
complicated temporal structure of unconstrained videos. Different from existing
approaches, which apply a pre-trained backbone network as a black-box to
extract visual representation, our approach aims to extract the most contextual
information with an explainable mechanism. As we observed, humans typically
perceive a video through the interactions between three main factors, i.e., the
actors, the relevant objects, and the surrounding environment. Therefore, it is
crucial to design a contextual explainable video representation extraction
that can capture each of such factors and model the relationships between them.
In this paper, we discuss approaches, that incorporate the human perception
process into modeling actors, objects, and the environment. We choose video
paragraph captioning and temporal action detection to illustrate the
effectiveness of human perception based-contextual representation in video
understanding. Source code is publicly available at
https://github.com/UARK-AICV/Video_Representation.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 19:29:07 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Dec 2022 06:29:37 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Vo",
"Khoa",
""
],
[
"Yamazaki",
"Kashu",
""
],
[
"Nguyen",
"Phong X.",
""
],
[
"Nguyen",
"Phat",
""
],
[
"Luu",
"Khoa",
""
],
[
"Le",
"Ngan",
""
]
] |
new_dataset
| 0.994131 |
2212.06468
|
Fadi Zaraket
|
Mustafa Jarrar and Fadi A Zaraket and Tymaa Hammouda and Daanish
Masood Alavi and Martin Waahlisch
|
Lisan: Yemeni, Iraqi, Libyan, and Sudanese Arabic Dialect Corpora with
Morphological Annotations
| null | null | null | null |
cs.CL cs.DL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This article presents morphologically-annotated Yemeni, Sudanese, Iraqi, and
Libyan Arabic dialects Lisan corpora. Lisan features around 1.2 million tokens.
We collected the content of the corpora from several social media platforms.
The Yemeni corpus (~ 1.05M tokens) was collected automatically from Twitter.
The corpora of the other three dialects (~ 50K tokens each) were collected
manually from Facebook and YouTube posts and comments.
Thirty five (35) annotators who are native speakers of the target dialects
carried out the annotations. The annotators segmented all words in the four
corpora into prefixes, stems and suffixes and labeled each with different
morphological features such as part of speech, lemma, and a gloss in English.
An Arabic Dialect Annotation Toolkit (ADAT) was developed for the purpose of the
annotation. The annotators were trained on a set of guidelines and on how to use
ADAT. We developed ADAT to assist the annotators and to ensure compatibility
with SAMA and Curras tagsets. The tool is open source, and the four corpora are
also available online.
|
[
{
"version": "v1",
"created": "Tue, 13 Dec 2022 10:37:10 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Dec 2022 12:37:29 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Jarrar",
"Mustafa",
""
],
[
"Zaraket",
"Fadi A",
""
],
[
"Hammouda",
"Tymaa",
""
],
[
"Alavi",
"Daanish Masood",
""
],
[
"Waahlisch",
"Martin",
""
]
] |
new_dataset
| 0.999849 |
2212.08216
|
Gabrielle Gauthier-Melancon
|
Gabrielle Gauthier-Melan\c{c}on, Orlando Marquez Ayala, Lindsay Brin,
Chris Tyler, Fr\'ed\'eric Branchaud-Charron, Joseph Marinier, Karine Grande,
Di Le
|
Azimuth: Systematic Error Analysis for Text Classification
|
To be published in Proceedings of the 2022 Conference on Empirical
Methods in Natural Language Processing: System Demonstrations. 13 pages and
14 figures
| null | null | null |
cs.LG cs.AI cs.CL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Azimuth, an open-source and easy-to-use tool to perform error
analysis for text classification. Compared to other stages of the ML
development cycle, such as model training and hyper-parameter tuning, the
process and tooling for the error analysis stage are less mature. However, this
stage is critical for the development of reliable and trustworthy AI systems.
To make error analysis more systematic, we propose an approach comprising
dataset analysis and model quality assessment, which Azimuth facilitates. We
aim to help AI practitioners discover and address areas where the model does
not generalize by leveraging and integrating a range of ML techniques, such as
saliency maps, similarity, uncertainty, and behavioral analyses, all in one
tool. Our code and documentation are available at
github.com/servicenow/azimuth.
|
[
{
"version": "v1",
"created": "Fri, 16 Dec 2022 01:10:41 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 04:01:57 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Gauthier-Melançon",
"Gabrielle",
""
],
[
"Ayala",
"Orlando Marquez",
""
],
[
"Brin",
"Lindsay",
""
],
[
"Tyler",
"Chris",
""
],
[
"Branchaud-Charron",
"Frédéric",
""
],
[
"Marinier",
"Joseph",
""
],
[
"Grande",
"Karine",
""
],
[
"Le",
"Di",
""
]
] |
new_dataset
| 0.999822 |
2212.08327
|
Xuhang Chen
|
Zinuo Li, Xuhang Chen, Chi-Man Pun and Shuqiang Wang
|
WavEnhancer: Unifying Wavelet and Transformer for Image Enhancement
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image enhancement is a technique that frequently utilized in digital image
processing. In recent years, the popularity of learning-based techniques for
enhancing the aesthetic performance of photographs has increased. However, the
majority of current works do not optimize an image from different frequency
domains and typically focus on either pixel-level or global-level enhancements.
In this paper, we propose a transformer-based model in the wavelet domain to
refine different frequency bands of an image. Our method focuses both on local
details and high-level features for enhancement, which can generate superior
results. On the basis of comprehensive benchmark evaluations, our method
outperforms the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 16 Dec 2022 08:00:54 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2022 03:48:37 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Li",
"Zinuo",
""
],
[
"Chen",
"Xuhang",
""
],
[
"Pun",
"Chi-Man",
""
],
[
"Wang",
"Shuqiang",
""
]
] |
new_dataset
| 0.981549 |
2212.08751
|
Alex Nichol
|
Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, Mark Chen
|
Point-E: A System for Generating 3D Point Clouds from Complex Prompts
|
8 pages, 11 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While recent work on text-conditional 3D object generation has shown
promising results, the state-of-the-art methods typically require multiple
GPU-hours to produce a single sample. This is in stark contrast to
state-of-the-art generative image models, which produce samples in a number of
seconds or minutes. In this paper, we explore an alternative method for 3D
object generation which produces 3D models in only 1-2 minutes on a single GPU.
Our method first generates a single synthetic view using a text-to-image
diffusion model, and then produces a 3D point cloud using a second diffusion
model which conditions on the generated image. While our method still falls
short of the state-of-the-art in terms of sample quality, it is one to two
orders of magnitude faster to sample from, offering a practical trade-off for
some use cases. We release our pre-trained point cloud diffusion models, as
well as evaluation code and models, at https://github.com/openai/point-e.
|
[
{
"version": "v1",
"created": "Fri, 16 Dec 2022 23:22:59 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Nichol",
"Alex",
""
],
[
"Jun",
"Heewoo",
""
],
[
"Dhariwal",
"Prafulla",
""
],
[
"Mishkin",
"Pamela",
""
],
[
"Chen",
"Mark",
""
]
] |
new_dataset
| 0.950228 |
2212.08857
|
Jean-Paul Allouche
|
Jean-Paul Allouche and Michel Mend\`es France
|
Automata and automatic sequences
| null |
Beyond quasicrystals (Les Houches, 1994), 293--367, Springer,
Berlin, 1995
| null | null |
cs.FL cs.DM math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
In the following pages we discuss infinite sequences defined on a finite
alphabet, and more specially those which are generated by finite automata. We
have divided our paper into seven parts which are more or less self-contained.
Needless to say, we feel that the order we propose is the most natural one.
References appear at the end of each one of the parts which implies some
redundancy. Extra references are listed at the very end of our paper.
|
[
{
"version": "v1",
"created": "Sat, 17 Dec 2022 12:27:56 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Allouche",
"Jean-Paul",
""
],
[
"France",
"Michel Mendès",
""
]
] |
new_dataset
| 0.980761 |
2212.08890
|
Pengfei Xi
|
Pengfei Xi, Guifeng Wang, Zhipeng Hu, Yu Xiong, Mingming Gong, Wei
Huang, Runze Wu, Yu Ding, Tangjie Lv, Changjie Fan, Xiangnan Feng
|
TCFimt: Temporal Counterfactual Forecasting from Individual Multiple
Treatment Perspective
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Determining causal effects of temporal multi-intervention assists
decision-making. Restricted by time-varying bias, selection bias, and
interactions of multiple interventions, the disentanglement and estimation of
multiple treatment effects from individual temporal data is still rare. To
tackle these challenges, we propose a comprehensive framework of temporal
counterfactual forecasting from an individual multiple treatment perspective
(TCFimt). TCFimt constructs adversarial tasks in a seq2seq framework to
alleviate selection and time-varying bias and designs a contrastive
learning-based block to decouple a mixed treatment effect into separated main
treatment effects and causal interactions which further improves estimation
accuracy. Through implementing experiments on two real-world datasets from
distinct fields, the proposed method shows satisfactory performance in
predicting future outcomes with specific treatments and in choosing optimal
treatment type and timing than state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 17 Dec 2022 15:01:05 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Xi",
"Pengfei",
""
],
[
"Wang",
"Guifeng",
""
],
[
"Hu",
"Zhipeng",
""
],
[
"Xiong",
"Yu",
""
],
[
"Gong",
"Mingming",
""
],
[
"Huang",
"Wei",
""
],
[
"Wu",
"Runze",
""
],
[
"Ding",
"Yu",
""
],
[
"Lv",
"Tangjie",
""
],
[
"Fan",
"Changjie",
""
],
[
"Feng",
"Xiangnan",
""
]
] |
new_dataset
| 0.993785 |
2212.09027
|
Sanka Mohottala
|
Sanka Mohottala, Sandun Abeygunawardana, Pradeepa Samarasinghe,
Dharshana Kasthurirathna, Charith Abhayaratne
|
2D Pose Estimation based Child Action Recognition
|
Paper Accepted for the IEEE TENCON Conference (2022). 7 pages, 5
figures
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We present a graph convolutional network with 2D pose estimation for the
first time on child action recognition task achieving on par results with an
RGB modality based model on a novel benchmark dataset containing unconstrained
environment based videos.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 07:36:32 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Mohottala",
"Sanka",
""
],
[
"Abeygunawardana",
"Sandun",
""
],
[
"Samarasinghe",
"Pradeepa",
""
],
[
"Kasthurirathna",
"Dharshana",
""
],
[
"Abhayaratne",
"Charith",
""
]
] |
new_dataset
| 0.997585 |
2212.09028
|
Yu Wang
|
Yu Wang and Hongxia Jin
|
Neural Coreference Resolution based on Reinforcement Learning
|
6 pages, 2 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The target of a coreference resolution system is to cluster all mentions that
refer to the same entity in a given context. All coreference resolution systems
need to solve two subtasks; one task is to detect all of the potential
mentions, and the other is to learn the linking of an antecedent for each
possible mention. In this paper, we propose a reinforcement learning
actor-critic-based neural coreference resolution system, which can achieve both
mention detection and mention clustering by leveraging an actor-critic deep
reinforcement learning technique and a joint training algorithm. We experiment
on the BERT model to generate different input span representations. Our model
with the BERT span representation achieves the state-of-the-art performance
among the models on the CoNLL-2012 Shared Task English Test Set.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 07:36:35 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Jin",
"Hongxia",
""
]
] |
new_dataset
| 0.985465 |
2212.09039
|
Jianan Li
|
Jianan Li, Shenwang Jiang, Liqiang Song, Peiran Peng, Feng Mu, Hui Li,
Peng Jiang, Tingfa Xu
|
Automated Optical Inspection of FAST's Reflector Surface using Drones
and Computer Vision
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Five-hundred-meter Aperture Spherical radio Telescope (FAST) is the
world's largest single-dish radio telescope. Its large reflecting surface
achieves unprecedented sensitivity but is prone to damage, such as dents and
holes, caused by naturally-occurring falling objects. Hence, the timely and
accurate detection of surface defects is crucial for FAST's stable operation.
Conventional manual inspection involves human inspectors climbing up and
examining the large surface visually, a time-consuming and potentially
unreliable process. To accelerate the inspection process and increase its
accuracy, this work makes the first step towards automating the inspection of
FAST by integrating deep-learning techniques with drone technology. First, a
drone flies over the surface along a predetermined route. Since surface defects
significantly vary in scale and show high inter-class similarity, directly
applying existing deep detectors to detect defects on the drone imagery is
highly prone to missing and misidentifying defects. As a remedy, we introduce
cross-fusion, a dedicated plug-in operation for deep detectors that enables the
adaptive fusion of multi-level features in a point-wise selective fashion,
depending on local defect patterns. Consequently, strong semantics and
fine-grained details are dynamically fused at different positions to support
the accurate detection of defects of various scales and types. Our AI-powered
drone-based automated inspection is time-efficient, reliable, and has good
accessibility, which guarantees the long-term and stable operation of FAST.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 08:34:05 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Li",
"Jianan",
""
],
[
"Jiang",
"Shenwang",
""
],
[
"Song",
"Liqiang",
""
],
[
"Peng",
"Peiran",
""
],
[
"Mu",
"Feng",
""
],
[
"Li",
"Hui",
""
],
[
"Jiang",
"Peng",
""
],
[
"Xu",
"Tingfa",
""
]
] |
new_dataset
| 0.977748 |
2212.09042
|
Haidong Zhu
|
Haidong Zhu, Zhaoheng Zheng, Ram Nevatia
|
Gait Recognition Using 3-D Human Body Shape Inference
|
Accepted to WACV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gait recognition, which identifies individuals based on their walking
patterns, is an important biometric technique since it can be observed from a
distance and does not require the subject's cooperation. Recognizing a person's
gait is difficult because of the appearance variants in human silhouette
sequences produced by varying viewing angles, carrying objects, and clothing.
Recent research has produced a number of ways for coping with these variants.
In this paper, we present the usage of inferring 3-D body shapes distilled from
limited images, which are, in principle, invariant to the specified variants.
Inference of 3-D shape is a difficult task, especially when only silhouettes
are provided in a dataset. We provide a method for learning 3-D body inference
from silhouettes by transferring knowledge from 3-D shape prior from RGB
photos. We use our method on multiple existing state-of-the-art gait baselines
and obtain consistent improvements for gait identification on two public
datasets, CASIA-B and OUMVLP, on several variants and settings, including a new
setting of novel views not seen during training.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 09:27:00 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Zhu",
"Haidong",
""
],
[
"Zheng",
"Zhaoheng",
""
],
[
"Nevatia",
"Ram",
""
]
] |
new_dataset
| 0.957477 |
2212.09064
|
Samuel Karumba Mr
|
Samuel Karumba, Salil S. Kanhere, Raja Jurdak, Subbu Sethuvenkatraman
|
PlexiChain: A Secure Blockchain-based Flexibility Aggregator Framework
|
10 pages, 8 figure
| null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flexible resources in built environments are seen as a low-cost opportunity
for delivering grid management services. Consequently, the centralised
aggregator model, where the aggregator is used to bundle demand flexibility
from flexible resources and deliver it to flexibility customers such as
Distributed/Transmission System Operator (DSO/TSO) in flexibility markets, has
been adopted. However, the aggregator role introduces various security and
trust challenges. In this work, we propose a blockchain-based flexibility
trading framework dubbed PlexiChain to address the security and trust
challenges the aggregator poses in the centralised aggregator model. The
security evaluations performed using a real-world dataset show that PlexiChain
is robust against known security attacks, such as MadIoT and False Data
Injection attacks. Additionally, the performance evaluations show that
PlexiChain has lower computation and communication costs than other
blockchain-based applications in resource-constrained environments.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 11:09:24 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Karumba",
"Samuel",
""
],
[
"Kanhere",
"Salil S.",
""
],
[
"Jurdak",
"Raja",
""
],
[
"Sethuvenkatraman",
"Subbu",
""
]
] |
new_dataset
| 0.999409 |
2212.09132
|
Anjan Karmakar
|
Anjan Karmakar, Miltiadis Allamanis, Romain Robbes
|
JEMMA: An Extensible Java Dataset for ML4Code Applications
| null | null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Machine Learning for Source Code (ML4Code) is an active research field in
which extensive experimentation is needed to discover how to best use source
code's richly structured information. With this in mind, we introduce JEMMA, an
Extensible Java Dataset for ML4Code Applications, which is a large-scale,
diverse, and high-quality dataset targeted at ML4Code. Our goal with JEMMA is
to lower the barrier to entry in ML4Code by providing the building blocks to
experiment with source code models and tasks. JEMMA comes with a considerable
amount of pre-processed information such as metadata, representations (e.g.,
code tokens, ASTs, graphs), and several properties (e.g., metrics, static
analysis results) for 50,000 Java projects from the 50KC dataset, with over 1.2
million classes and over 8 million methods. JEMMA is also extensible allowing
users to add new properties and representations to the dataset, and evaluate
tasks on them. Thus, JEMMA becomes a workbench that researchers can use to
experiment with novel representations and tasks operating on source code. To
demonstrate the utility of the dataset, we also report results from two
empirical studies on our data, ultimately showing that significant work lies
ahead in the design of context-aware source code models that can reason over a
broader network of source code entities in a software project, the very task
that JEMMA is designed to help with.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 17:04:14 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Karmakar",
"Anjan",
""
],
[
"Allamanis",
"Miltiadis",
""
],
[
"Robbes",
"Romain",
""
]
] |
new_dataset
| 0.999838 |
2212.09149
|
Constantinos Patsakis
|
Angelos Michalas, Constantinos Patsakis, Dimitrios D. Vergados,
Dimitrios J Vergados
|
From NEA and NIA to NESAS and SCAS: Demystifying the 5G Security
Ecosystem
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the numerous pompous statements regarding 5G, it is indisputable that
5G creates a radical shift in telecommunications. The main reason is that 5G is
an enabler of numerous applications we have long envisioned and either
simulated or implemented in test environments, partially or on a smaller scale.
5G will soon unlock the potential of smart cities, industry 4.0, and IoT, to
name a few. However, a crucial question is how much we can trust this
technology. Since this technology will soon become the core infrastructure for
all of the above, it is critical to understand the fundamental security
mechanisms that comprise this technology and the guarantees they provide to
assess the potential risks we are exposed to. This work follows a non-technical
yet bottom-up approach to introduce the reader to the core security mechanisms
and establish a baseline for the security of 5G, to demystify the principal
notions and processes. Based on the above, we streamline future directions and
highlight possible threats.
|
[
{
"version": "v1",
"created": "Sun, 18 Dec 2022 19:59:02 GMT"
}
] | 2022-12-20T00:00:00 |
[
[
"Michalas",
"Angelos",
""
],
[
"Patsakis",
"Constantinos",
""
],
[
"Vergados",
"Dimitrios D.",
""
],
[
"Vergados",
"Dimitrios J",
""
]
] |
new_dataset
| 0.998256 |